ScatterLab Leads AI Safety Enhancement
ScatterLab, a company leading efforts to build an ethical usage environment for generative AI, is establishing itself as an industry best practice through its interactive AI platform, ‘Zeta.’ The ‘Generative AI Service User Guidelines,’ which the Korea Communications Commission (KCC) will implement starting March 28, cite the operational policies and content standards of ScatterLab's ‘Zeta’ and its earlier service ‘Nutty’ approximately 20 times, recognizing them as a reference model for minimizing the negative impacts of generative AI services.
‘Zeta’ is a space where users and AI create stories together in real time. The service targets content suitable for users aged 15 and older and strictly restricts registration by users under 14. ScatterLab blocks the creation of inappropriate content, such as adult material, through automated AI abuse filters and continuous monitoring, and applies phased disciplinary measures when violations occur. To ensure the transparency of AI-generated content, a watermark is displayed over all AI conversations whenever the service screen is captured, and the company clearly communicates its data management practices to users.
Drawing on more than 10 years of experience developing emotional AI chatbots, ScatterLab also places strong emphasis on the emotional protection of its users. Through ‘Iruda 2.0,’ launched in 2022, ScatterLab collaborated with leading domestic institutions such as UNIST, Korea University Anam Hospital, and KAIST on research into the positive effects of AI chatbots on alleviating loneliness and social anxiety and promoting mental health. These studies have been published in prominent international academic journals and recognized with best paper awards. More recently, the company strengthened users' psychological safety net by introducing a feature in ‘Zeta’ that immediately displays a counseling referral message when a user mentions topics related to suicide or self-harm.
Kim Jong-yoon, CEO of ScatterLab, said, “We will continue to adhere to ethical principles in providing AI services and establish a range of safeguards so that users can experience AI safely and positively,” reaffirming the company's commitment to being a responsible AI technology enterprise.