Motif Unveils Compact AI Scoring 134% of Mistral 7B's Performance
Motif Technology has announced a challenge to the global AI market by releasing its self-developed foundation small language model (sLLM), Motif 2.6B, as open source on Hugging Face. Despite being a lightweight model with only 2.6 billion parameters that can run on a single AMD Instinct GPU, it has drawn industry attention by posting benchmark results that surpass those of several larger language models.
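Because the weights are published on Hugging Face, the model can presumably be loaded through the standard transformers API. The sketch below is illustrative only: the repository identifier is an assumption, and the actual model name should be checked on Motif Technology's Hugging Face page.

```python
# Illustrative sketch: loading an open-source checkpoint from Hugging Face
# with the standard transformers API. The repository id below is an
# assumption, not a confirmed identifier from the announcement.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Motif-Technologies/Motif-2.6B"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",      # use the checkpoint's native precision
    device_map="auto",       # place weights on the available GPU
    trust_remote_code=True,  # some releases ship custom model code
)

prompt = "Explain the advantages of small language models."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```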
According to the announcement, Motif 2.6B scored 134% of the performance of Mistral 7B (7 billion parameters), 191% of Google Gemma 1 (2B), 139% of Meta Llama 3.2 (1B), 112% of AMD Instella (3B), and 104% of Alibaba Qwen 2.5 (3B), outperforming models of similar or larger size. It showed particular strength in demanding areas such as advanced mathematics, science, and coding.
The model is also notable as the first AI foundation model, apart from AMD's own Instella, trained on AMD Instinct MI250 GPUs. It builds on the efficient GPU-resource utilization and cluster-software optimization technologies that its parent company, Moreh, has accumulated since its founding, achieving a lightweight design and high performance at the same time. The most significant technical feature of Motif 2.6B is its enhanced contextual understanding: the model is designed to minimize errors caused by incorrect contextual references and to focus on the core context, making careful use of the attention mechanism at the heart of the transformer architecture so that words are chosen more accurately and appropriately.
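The announcement does not disclose Motif's specific attention variant, but the standard scaled dot-product attention that transformer models build on can be sketched as follows; this is the textbook formulation, not Motif's proprietary design.

```python
# Minimal sketch of scaled dot-product attention, the transformer
# building block the article refers to. Textbook formulation only;
# Motif's actual attention variant is not described in the announcement.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Q, K: arrays of shape (seq_len, d_k); V: (seq_len, d_v)."""
    d_k = Q.shape[-1]
    # Similarity of each query to every key, scaled to keep softmax stable.
    scores = Q @ K.T / np.sqrt(d_k)
    # Softmax over keys: each row is a distribution over context positions,
    # i.e. how strongly each token attends to each part of the context.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output is a weighted mix of the value vectors.
    return weights @ V

# Toy usage: 4 tokens with 8-dimensional query/key/value vectors.
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (4, 8)
```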
Since its launch in February of this year with core personnel from Moreh's AI business division, Motif Technology has already demonstrated strong AI capabilities, including taking first place on the Hugging Face leaderboard with MoMo-70B and leading the development of a 102-billion-parameter Korean-specialized LLM that surpasses the Korean-language performance of OpenAI's GPT-4.
CEO Lim Jeong-hwan outlined the company's plans: "Gartner predicts that by 2027 enterprises will use small language models three times more than LLMs, and sLLMs have very high practical value in industrial settings thanks to their low power consumption and high efficiency," he said, adding, "We will develop Motif 2.6B into an on-device AI and agentic AI model." Motif Technology plans to combine Moreh's AI infrastructure with its own development capabilities to open-source multimodal models such as text-to-image (T2I) and text-to-video (T2V) within the year, contributing to the growth of the AI industry ecosystem and to customer acquisition.