"Are We Ready for the AI Basic Law? 98% of Companies Say 'No'"


Jan 7, 2026 - 00:00
Ahead of the law's implementation on the 22nd, Startup Alliance held a roundtable to discuss issues such as the designation of high-impact AI and the mandatory labeling of generative AI outputs. On the 6th, Startup Alliance, together with Democratic Party lawmaker Hwang Jeong-a, held a "Roundtable on Transparency and Accountability in the AI Basic Law" at the National Assembly Hall. The Korea Startup Forum and the Codit Global Policy Experimental Research Institute co-hosted the event.

The discussion was organized in response to a survey of 101 AI companies in which 98% said they were "not prepared for the AI Basic Law." Concerns have been raised that, ahead of the implementation date on the 22nd of this month, companies find it difficult to understand what obligations they will actually bear in practice. Lawmaker Hwang stated, "While the government is making continuous efforts to include only minimal regulatory measures, it is also true that there are concerns in the field," adding, "As this is the world's first law of its kind to take effect, we will continue to reflect the voices from the field and make improvements even after enactment."

In the keynote presentation, Choi Seong-jin, director of the Startup Growth Research Institute, pointed out that "the current draft of the enforcement ordinance lacks specificity and predictability on how to implement the principles of transparency and accountability in industrial settings." He noted that the criteria and procedures for applying individual provisions, such as designating high-impact AI, mandating labels on generative AI output, and establishing a risk management system, remain unclear. On the designation of high-impact AI, he emphasized that "the judgment could vary with the context of use and the scope of impact rather than the type of technology, so predictable criteria must be established for businesses before legal obligations are imposed."

He added, "If the system is operated with ambiguous criteria and a focus on after-the-fact investigations and sanctions, startups may simply abandon related services to avoid the risk." On the requirement to label generative AI outputs, he said that "there is a lack of concrete criteria on when and how to indicate to users whether output is AI-generated," pointing out that for unstructured content such as voice, images, and video, labeling may have low technical feasibility or harm the user experience. "Labeling obligations should not be rigid and uniform, but flexibly designed according to risk and intended use," he suggested.

The subsequent panel discussion was chaired by Professor Lee Sang-yong of Korea University Law School. Lee Ho-young, CEO of ToonSquare, stated, "Given the blurred lines between users, service providers, and developers, flexible application of the AI Basic Law is necessary," adding, "When standards and criteria are unclear, companies tend to shy away or relocate overseas." Jeong Ji-eun, chair of the External Policy Committee of the Korea Startup Forum, pointed out that "the criteria for determining the AI contribution in the production process, which bear on the obligation to label AI-generated products, are ambiguous," suggesting, "Visible watermarks may degrade the user experience, so the invisible metadata-signature methods of global standards such as C2PA should be referenced."

Jung Joo-yeon, a senior expert at Startup Alliance, noted that "for startups building services on external APIs or open-source models, it is unreasonable to hold them accountable for computational load they can neither measure nor control." He emphasized, "Since AI systems are composed of various models and modules, estimating total computational load is often practically impossible; safety standards should therefore be set at the model level rather than the AI-system level to better align with technological reality."

Choi Woo-seok, head of the AI Safety Trust Support Division at the Ministry of Science and ICT, stated, "Since AI is an inherently uncertain industry, we will operate the system with a focus on deferral and guidance to minimize excessive burdens on early-stage companies, ensuring sufficient communication and support," adding, "We will provide specific and workable interpretation criteria through enforcement ordinances and guidelines." Lim Jeong-wook, representative of Startup Alliance, said, "With the AI industry facing fierce global competition and rapid technological change, new regulatory frameworks must be designed not only for speed but also for effectiveness, predictability, and international consistency." He added, "I hope the opinions discussed today will be reflected in the enforcement ordinance and guidelines, so that the AI Basic Law becomes a foundation for trust and innovation rather than an additional burden on companies."
