Supercomputing 2023 (SC23) took place in Denver, Colorado, from November 12 to 17, where SK hynix showcased its cutting-edge AI and high-performance computing (HPC) solutions.
This annual event, organized by the Association for Computing Machinery and IEEE Computer Society since 1988, serves as a platform to highlight the latest advancements in HPC, networking, storage, and data analysis.
The products tailored for AI and HPC stole the spotlight at SK hynix's booth, affirming the company's leadership in the AI memory field. Notably, HBM3E, the latest generation of High Bandwidth Memory (HBM), garnered attention for meeting industry demands in speed, capacity, heat dissipation, and power efficiency, making it well-suited for data-intensive AI server systems. SK hynix presented HBM3E alongside NVIDIA's H100, a high-performance GPU that uses HBM3 for its memory.
A demonstration of the Accelerator-in-Memory based Accelerator (AiMX) showcased SK hynix's generative AI accelerator card, designed for large language models (LLMs). The card employs GDDR6-AiM chips with Processing-In-Memory (PIM) technology, significantly reducing AI inference time in server systems compared to GPU-based systems while consuming less power.
Compute Express Link (CXL), a standardized interface built on Peripheral Component Interconnect Express (PCIe), was another highlight of SK hynix's exhibit. CXL enhances the efficiency of HPC systems by providing flexible memory expansion, holding promise for AI and big data applications. Also featured was the Niagara CXL disaggregated memory prototype platform, a pooled memory solution aimed at improving system performance in AI and big data distributed processing systems.
SK hynix also presented its collaboration with Los Alamos National Laboratory (LANL) to improve the performance and reduce the energy requirements of HPC physics applications. The CXL-based computational memory solution (CMS) accelerates indirect memory accesses, reducing data movement and offering improvements applicable to memory-intensive domains such as AI and graph analytics.
The exhibition also highlighted object-based computational storage (OCS), part of SK hynix's ongoing effort to build an analytics ecosystem with multiple partners. OCS minimizes data movement between analytics applications and storage, lightens the storage software stack, and accelerates data analysis. In a demonstration, SK hynix showed how its interface technology enhances data processing capabilities in OCS.
Among the data center solutions SK hynix showcased at the conference was its DDR5 Registered Dual In-line Memory Module (RDIMM). This fifth-generation module is built on the advanced 1bnm process node and operates at speeds of up to 6,400 megabits per second (Mbps). The display also highlighted the DDR5 Multiplexer Combined Ranks (MCR) DIMM, which achieves speeds of up to 8,800 Mbps.
These high-speed DDR5 solutions are particularly well-suited for AI computing in high-performance servers.