Motif Technologies Unveils In-House LLM 'Motif 12.7B'

Nov 5, 2025 - 00:00
Korean AI startup Motif Technologies has fully open-sourced its self-developed large language model (LLM) 'Motif 12.7B' on Hugging Face, signaling a new wave in the global AI market. The 12.7-billion-parameter model is the product of 'purely domestic' technology, with Motif Technologies directly overseeing the entire process from planning through training. Despite the challenging conditions facing the domestic AI industry, the model was developed in just seven weeks.

Notably, Motif 12.7B attributes its gains in model performance and training efficiency to two proprietary core technologies: 'Group-wise Differentiated Attention (GDA)' and a parallelization algorithm for the Muon optimizer.

The benchmark results are striking. On key metrics of reasoning ability in mathematics, science, and logic, Motif 12.7B surpassed Alibaba's Qwen 2.5 (72B), a model with 72 billion parameters, and recorded significantly higher reasoning scores than Google's Gemma models of similar scale.

Motif Technologies' GDA mechanism overcomes the inefficiencies of conventional Differentiated Attention (DA) by allocating attention heads asymmetrically, extracting greater performance and expressiveness from the same computational budget. This strengthens the model's advanced reasoning capabilities and significantly mitigates hallucination.

The Muon optimizer parallelization algorithm, in turn, addresses the chronic communication bottlenecks of multi-node distributed training. By intelligently overlapping GPU computation with communication, the proprietary method largely hides communication latency, keeps GPU utilization high, and substantially improves training efficiency.

Another strength of the model is that it reaches advanced reasoning ability without a reinforcement learning (RL) stage. Motif Technologies skipped the costly RL phase and instead adopted a 'reasoning-focused supervised fine-tuning (SFT)' approach. The design lets the model carry out logical reasoning and problem-solving on its own and adjust its computation to the characteristics of each user query.

These innovations improve cost efficiency across both development and operation. In development, the burden of expensive training is reduced; in operation, the model automatically avoids unnecessary inference computation, cutting GPU usage and response latency.

Lim Jung-hwan, CEO of Motif Technologies, said, "GDA is an innovative technology that re-engineers the brain of LLMs, and the Muon Optimizer re-designs energy efficiency." He added, "Motif 12.7B not only demonstrates the structural evolution of AI models but will also be the optimal solution for companies seeking both high performance and cost efficiency."

Building on its experience developing foundation models spanning LLMs and LMMs (large multimodal models), Motif Technologies plans to accelerate its 'LLM-LMM two-track innovation' by open-sourcing a 100B-scale LLM and a text-to-video (T2V) model by the end of the year.
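
The article describes GDA only at a high level, but the idea it cites, allocating attention heads asymmetrically so that query capacity is spent where extra expressiveness is wanted for the same compute budget, can be illustrated with a small sketch. The PyTorch module below is a hypothetical illustration under assumed group sizes, dimensions, and projection layout; it is not Motif's published GDA implementation.

```python
# Hypothetical sketch: attention with asymmetric head grouping.
# Group sizes, dimensions, and naming are illustrative assumptions,
# not Motif's actual GDA design.
import torch
import torch.nn.functional as F
from torch import nn


class AsymmetricGroupedAttention(nn.Module):
    """Query heads are split into groups of *unequal* size; each group
    shares one key/value head, so query capacity is concentrated in the
    groups that need more expressiveness for the same KV cost."""

    def __init__(self, d_model: int, head_dim: int, group_sizes: tuple):
        super().__init__()
        self.head_dim = head_dim
        self.group_sizes = group_sizes
        self.n_q_heads = sum(group_sizes)      # e.g. (6, 4, 2) -> 12 query heads
        self.n_kv_heads = len(group_sizes)     # one KV head per group
        self.q_proj = nn.Linear(d_model, self.n_q_heads * head_dim, bias=False)
        self.k_proj = nn.Linear(d_model, self.n_kv_heads * head_dim, bias=False)
        self.v_proj = nn.Linear(d_model, self.n_kv_heads * head_dim, bias=False)
        self.o_proj = nn.Linear(self.n_q_heads * head_dim, d_model, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, _ = x.shape
        q = self.q_proj(x).view(b, t, self.n_q_heads, self.head_dim).transpose(1, 2)
        k = self.k_proj(x).view(b, t, self.n_kv_heads, self.head_dim).transpose(1, 2)
        v = self.v_proj(x).view(b, t, self.n_kv_heads, self.head_dim).transpose(1, 2)

        # Repeat each group's single KV head to match that group's query count.
        repeats = torch.tensor(self.group_sizes, device=x.device)
        k = torch.repeat_interleave(k, repeats, dim=1)
        v = torch.repeat_interleave(v, repeats, dim=1)

        out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        return self.o_proj(out.transpose(1, 2).reshape(b, t, -1))


if __name__ == "__main__":
    attn = AsymmetricGroupedAttention(d_model=256, head_dim=32, group_sizes=(6, 4, 2))
    y = attn(torch.randn(1, 16, 256))
    print(y.shape)  # torch.Size([1, 16, 256])
```

The asymmetry here is entirely in how many query heads each shared key/value head serves; how Motif actually differentiates and balances its head groups is not detailed in the article.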
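Similarly, the overlap of GPU computation and communication attributed to the Muon optimizer parallelization follows a general distributed-training pattern. The snippet below sketches that generic pattern with PyTorch's asynchronous all-reduce; the single-process 'gloo' setup and per-parameter collectives are assumptions made only so the example runs standalone, and it does not represent Motif's proprietary algorithm.

```python
# Generic sketch of overlapping computation with gradient communication.
# Not Motif's proprietary Muon parallelization; the single-process "gloo"
# initialization exists only to make the example runnable on one machine.
import torch
import torch.distributed as dist
from torch import nn

dist.init_process_group(
    backend="gloo", init_method="tcp://127.0.0.1:29500", rank=0, world_size=1
)

model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))
x, target = torch.randn(32, 512), torch.randint(0, 10, (32,))

loss = nn.functional.cross_entropy(model(x), target)
loss.backward()

# Launch an asynchronous all-reduce per gradient; each call returns a handle
# immediately, so other work can proceed while the averaging is in flight.
world_size = dist.get_world_size()
handles = [
    (p, dist.all_reduce(p.grad, op=dist.ReduceOp.SUM, async_op=True))
    for p in model.parameters()
]

# ... other computation could run here, hidden behind the communication ...

# Wait only right before the averaged gradients are needed by the optimizer.
for p, handle in handles:
    handle.wait()
    p.grad.div_(world_size)

dist.destroy_process_group()
print("loss:", loss.item())
```

In practice such collectives are usually launched from backward hooks (as PyTorch DDP's gradient bucketing does), so communication for already-computed gradients overlaps with the remainder of the backward pass rather than starting after it.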
