Kakao Open-Sources AI Safety Verification Guardrail Model
Behind the dazzling development of generative AI, deep consideration for AI ethics and safety is essential. In response to these demands of our time, Kakao is taking the lead in fostering a responsible AI technology ecosystem by open-sourcing its 'Kanana Safeguard' AI guardrail model, which will significantly enhance the safety and reliability of AI services.
Born from Kakao's self-developed language model 'Kanana', this model boasts performance specifically optimized for the Korean language and culture. Thanks to a vast Korean-language dataset built in-house, it has proven its excellence by surpassing leading global models in F1-score evaluations, a metric that combines a model's precision and recall. This is undoubtedly welcome news for developers of Korean-language AI services.
The three core models released this time respond to different types of risk, providing a robust safety net. First, 'Kanana Safeguard' precisely detects harmful content such as hate, harassment, and sexual content in user utterances or AI responses. Second, 'Kanana Safeguard-Siren' detects legal risks such as personal information leakage or intellectual property infringement, filtering out requests that require caution. Third, 'Kanana Safeguard-Prompt' proactively blocks adversarial prompts aimed at misusing AI services, thereby preventing system abuse. All of these models are freely downloadable from Hugging Face, a platform familiar to developers.
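In practice, a guardrail model like these sits in front of (or behind) a chat model and classifies each user utterance or AI response before it is passed along. The sketch below shows a minimal moderation gate in that style; the exact repository IDs, prompt format, and output labels of the Kanana Safeguard models are not described in this article, so the `safe`/`unsafe` label convention and the stub generator here are illustrative assumptions — consult the model cards on Hugging Face for the real conventions.

```python
# Minimal sketch of a moderation gate around a guardrail model.
# ASSUMPTION: the guardrail replies with a label whose first token is
# "safe" or "unsafe" (possibly followed by a category code). The real
# Kanana Safeguard output format may differ; check the model card.

def parse_verdict(model_output: str) -> bool:
    """Return True if the guardrail flagged the text as unsafe."""
    tokens = model_output.strip().lower().split()
    # Default to "safe" on an empty reply; a production system might
    # instead fail closed (treat empty output as unsafe).
    return bool(tokens) and tokens[0].startswith("unsafe")

def moderate(text: str, generate) -> dict:
    """Run `generate` (any callable wrapping the guardrail model,
    e.g. a Hugging Face text-generation pipeline) on `text` and
    return a structured decision for the calling service."""
    raw = generate(text)
    return {"input": text, "flagged": parse_verdict(raw), "raw": raw}

if __name__ == "__main__":
    # Stub generator standing in for the real model, for illustration only.
    stub = lambda t: "unsafe S1" if "forbidden" in t else "safe"
    print(moderate("hello there", stub))
    print(moderate("forbidden request", stub))
```

In a real deployment, `generate` would wrap a call to the downloaded guardrail model, and a service would drop or rewrite any turn where `flagged` is true before it reaches the user.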
Kakao has applied an open Apache 2.0 license to 'Kanana Safeguard', freely permitting commercial use, modification, and redistribution. This demonstrates Kakao's firm commitment to helping more developers and companies easily adopt and advance safe AI technology. Kakao does not intend to stop here and plans to continuously update and enhance the models.
Kim Kyung-hoon, Kakao's AI Safety Leader, emphasized, "As the era of generative AI arrives, the importance of AI ethics and safety is steadily growing both in Korea and abroad." He added, "Through this open-source release, we will widely disseminate the social value of 'building responsible AI' and continue to respond proactively so that technological advancement can benefit humanity." Kakao's 'Kanana Safeguard' is expected to go beyond a mere technology release, serving as an ethical compass for the AI era.