Samsung Develops High Bandwidth Memory with AI Processing Power


Samsung has announced that it is the first company to develop HBM-PIM (High Bandwidth Memory with Processing-In-Memory), a memory type with built-in artificial intelligence (AI) processing to speed up applications such as high-performance computing systems, data centers and other AI solutions. Most memory today is based on the von Neumann architecture, introduced in 1945, which processes data sequentially and can lead to bottlenecks and higher energy consumption. To avoid this, HBM-PIM places a DRAM-optimized AI engine inside the memory device, which can more than double performance and reduce power consumption by more than 70%. According to Samsung, the technology can be integrated into existing systems because it requires no hardware or software changes.

Original press release

Samsung Electronics, a world leader in advanced memory technologies, today announced the development of the industry's first High Bandwidth Memory (HBM) with integrated AI processing power – HBM-PIM. The new processing-in-memory (PIM) architecture brings AI computing capabilities inside high-performance memory to accelerate the processing of large data volumes in data centers, high-performance computing (HPC) systems and AI-enabled mobile applications.

Kwangil Park, Senior Vice President of Memory Product Planning at Samsung Electronics, said: "Our groundbreaking HBM-PIM is the industry's first programmable PIM solution designed for a variety of AI-driven workloads, such as HPC, training and inference. We plan to build on this breakthrough by partnering further with AI solution providers for even more advanced PIM applications."

Rick Stevens, Associate Laboratory Director for Computing, Environment and Life Sciences at Argonne National Laboratory, noted: "I'm delighted to see Samsung address the memory bandwidth and performance challenges of HPC and AI. The HBM-PIM design has demonstrated impressive performance and energy savings on important classes of AI applications. We look forward to working with Samsung to evaluate its performance on additional problems of interest to Argonne National Laboratory."

Most modern computer systems are based on the von Neumann architecture, which uses separate processor and memory units to carry out millions of complex data-processing tasks. This sequential processing approach requires data to move back and forth constantly, which slows down the system, especially when handling ever-growing volumes of data.

Instead, HBM-PIM brings processing power directly to where the data is stored, by placing a DRAM-optimized AI engine inside each memory bank – a storage sub-unit – enabling parallel data processing and minimizing data movement. Applied to Samsung's existing HBM2 Aquabolt solution, the new architecture can more than double system performance while reducing power consumption by more than 70%. In addition, HBM-PIM requires no hardware or software changes, allowing faster integration into existing systems.
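The intuition behind the paragraph above can be sketched as a toy cost model: in a von Neumann-style flow every data element is shuttled between memory and a central processor, while in a PIM-style flow each bank computes on its own data in parallel. All constants here (bank count, per-element move and compute costs) are illustrative assumptions for the sketch, not Samsung specifications.

```python
# Toy cost model: von Neumann-style data shuttling vs. PIM-style
# in-place, per-bank parallel compute. Numbers are assumed for
# illustration only.

BANKS = 16            # assumed number of memory banks with a PIM engine
MOVE_COST = 2.0       # assumed energy units to move one element one way
COMPUTE_COST = 1.0    # assumed energy units to process one element

def von_neumann_cost(n: int) -> float:
    """Each element travels to the processor and back, then is computed."""
    return n * (2 * MOVE_COST + COMPUTE_COST)

def pim_cost(n: int) -> float:
    """Elements are computed where they are stored; no bulk data movement."""
    return n * COMPUTE_COST

def pim_latency(n: int, banks: int) -> float:
    """Banks compute in parallel, so latency divides by the bank count."""
    return (n / banks) * COMPUTE_COST

n = 1_000_000
saved = 1 - pim_cost(n) / von_neumann_cost(n)
speedup = (n * COMPUTE_COST) / pim_latency(n, BANKS)
print(f"energy saved by avoiding data movement: {saved:.0%}")
print(f"parallel speedup over a single bank: {speedup:.0f}x")
```

With these assumed constants the model shows large energy savings from eliminating data movement and a speedup proportional to the number of banks working in parallel, which is the qualitative effect the press release attributes to HBM-PIM.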

Samsung’s HBM-PIM has been selected for presentation at the prestigious International Solid-State Circuits Conference (ISSCC), which runs through February 22. Samsung’s HBM-PIM is currently being tested inside AI accelerators by leading AI solution partners, with all validations expected to be completed in the first half of this year.
