Naver Cloud Unveils AI Full-Stack Infrastructure Tech
Naver Cloud has opened new horizons in GPU operations through its cutting-edge AI data center, 'Gak Sejong'. The content revealed at the tech meetup on the 27th proves CIO Lee Sang-joon's philosophy that the core of AI infrastructure competitiveness lies not merely in acquiring GPU resources, but in how stably and efficiently they are operated.
Building on its experience commercializing NVIDIA SuperPOD in 2019, Naver has directly designed and operated large-scale GPU clusters at 'Gak Sejong', fully internalizing core infrastructure technologies such as cooling, power, and networking. The result is a full-stack AI infrastructure capability with few parallels at home or abroad, giving Naver integrated control over the entire AI workload.
'Gak Sejong' is designed as a high-density GPU computing space optimized for AI training and inference. Its hybrid cooling system, which combines direct free cooling, indirect free cooling, and chilled-water cooling for heat management, switches automatically with the seasons to deliver high energy efficiency and stability. Naver is also validating immersion-cooling container infrastructure and filing related patents, giving concrete shape to its roadmap for next-generation cooling technology.
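The article does not disclose how the seasonal switchover is controlled, but the idea of choosing among the three cooling modes can be sketched as a simple policy driven by outdoor temperature. The thresholds and mode names below are illustrative placeholders, not Naver's actual control parameters:

```python
def select_cooling_mode(outdoor_temp_c: float) -> str:
    """Pick a cooling mode from the outdoor temperature.

    Thresholds are hypothetical examples; a real facility controller
    would also weigh humidity, load, and hysteresis between modes.
    """
    if outdoor_temp_c <= 12.0:
        return "direct_free_cooling"    # cold outside air used directly
    elif outdoor_temp_c <= 22.0:
        return "indirect_free_cooling"  # heat exchanger, air streams kept separate
    else:
        return "chilled_water"          # mechanical cooling for warm seasons
```

In practice such a controller would add hysteresis bands so the system does not oscillate between modes when the temperature hovers near a threshold.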
The redundant architecture for uninterrupted operation is also noteworthy. By physically separating power, cooling, and server operating systems while managing them in an integrated way, the facility blocks fault propagation at the source, and stability has been further enhanced by repositioning UPS and power-distribution equipment to suit the high power draw of GPU servers. Standardized infrastructure built on know-how from operating hundreds of thousands of servers, combined with real-time monitoring and automatic recovery systems, keeps services running through any fault situation.
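The monitoring-and-recovery loop described above can be illustrated with a minimal heartbeat monitor: a node that misses several consecutive heartbeats is marked unhealthy and an automatic recovery action is triggered. All names and the failure threshold here are assumptions for illustration, not Naver's internal tooling:

```python
from dataclasses import dataclass

FAIL_THRESHOLD = 3  # illustrative; the real threshold is operational policy
RECOVERED = []      # records which nodes were auto-recovered

@dataclass
class Node:
    name: str
    healthy: bool = True
    missed_heartbeats: int = 0

def record_heartbeat(node: Node, received: bool) -> None:
    """Update node health from one heartbeat interval."""
    if received:
        node.missed_heartbeats = 0
        node.healthy = True
        return
    node.missed_heartbeats += 1
    if node.missed_heartbeats >= FAIL_THRESHOLD:
        node.healthy = False
        recover(node)

def recover(node: Node) -> None:
    """Placeholder for draining workloads and restarting the node."""
    RECOVERED.append(node.name)
    node.missed_heartbeats = 0
    node.healthy = True
```

The key design point mirrored from the article is that detection and recovery are automatic: no operator intervention sits between a fault being observed and the recovery action starting.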
Naver's AI platform integrates and manages the entire AI process, from model development through training, inference, and serving. It serves as core infrastructure for HyperCLOVA operations and as an integrated operating layer that efficiently controls GPU resource allocation, model scheduling, and more.
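The article does not describe Naver's scheduling algorithm, but the core job of GPU resource allocation can be sketched as a first-fit placement over nodes with free GPUs. This is a deliberately simplified sketch; a production scheduler would also weigh topology, job priority, and fragmentation:

```python
from typing import Dict, Optional

def allocate(request_gpus: int, free_gpus: Dict[str, int]) -> Optional[str]:
    """Place a job on the first node with enough free GPUs (first-fit).

    Mutates `free_gpus` to reserve the GPUs; returns the chosen node
    name, or None if no node can satisfy the request.
    """
    for node, free in free_gpus.items():
        if free >= request_gpus:
            free_gpus[node] = free - request_gpus
            return node
    return None
```

Even this toy version shows why scheduling matters for utilization: a poor placement order can strand GPUs on nodes too small to host the next large job.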
Based on its accumulated technology and operational capabilities, Naver Cloud is building an ecosystem where domestic companies can easily leverage AI through its 'GPUaaS (GPU as a Service)' model. CIO Lee Sang-joon presented a vision that AI infrastructure will go beyond being an asset of a specific company to become a foundation that drives growth across the entire industry.