The four key technologies powering the future of artificial intelligence
We all know how powerful artificial intelligence (AI) can be. In the not-too-distant future, it is likely to transform nearly every aspect of our lives, from the cars we drive to the homes we live in, by enabling complex tasks such as computer vision, image recognition, machine learning and natural language processing.
But for that future to become a reality, there are still a few technological challenges that the world needs to overcome. And Samsung is at the forefront of addressing those challenges. But how exactly?
The amount of data we generate, consume and analyse has grown exponentially over the years, and it will only keep growing. According to IDC, annual data creation will reach 180 zettabytes by 2025.
That sheer growth in data use makes high-bandwidth memory (HBM) even more important than it is today. HBM is a premium-performance memory interface that uses 3D-stacked SDRAM (synchronous dynamic random-access memory) to maximise data transfer rates in a small form factor, while consuming less power and offering a substantially wider bus than other DRAM solutions.
Within this space, Samsung has introduced several key products, including high-performance processors for deep learning and HBM2, the first next-generation memory product currently in mass production. And given our significant R&D efforts and global scale, we are able to provide HBM2 memory in volume while meeting the ever-increasing requirements for processing power and speed driven by AI and deep learning.
Most technologies go through iterations before they max out on performance. One could say this has been true for computing power for AI. So where do we go from here?
One solution to increasing processing power lies in parallel processing: multi-core architectures that split tasks into parts executed at the same time on separate processors in the same system. With Samsung's memory supporting parallel processing, deep learning has been able to grow at a much faster rate, enabling a whole host of applications including intelligent personal assistants, smart speakers, language translation and AI photo filters.
As we roll out bandwidth-intensive AI and deep learning technologies, another route to removing the data bottleneck in deep learning stacks is in-memory computing.
In-memory computing uses middleware software that stores data in RAM across a cluster of computers and processes it in parallel. The software distributes the data by dividing the entire dataset across the individual computers' memory, with each machine storing only a portion of the overall dataset. As the demand to process ever-growing datasets in real time becomes more prevalent, the importance of in-memory computing will only continue to grow. To this end, Samsung's high-capacity, high-performance DRAM products, including 3DS DRAM modules, GDDR6 and HBM2, along with server SSDs, bring greater innovation to AI and continue to transform businesses worldwide.
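A toy sketch of that partitioning idea, with a hypothetical `Node` class standing in for a machine that holds only its slice of the dataset in RAM; a real in-memory data grid would distribute partitions over a network, but the divide-and-query-in-parallel structure is the same.

```python
from concurrent.futures import ThreadPoolExecutor

class Node:
    """Hypothetical cluster node holding one partition of the dataset in memory."""
    def __init__(self, records):
        self.records = records  # this node's slice only, kept entirely in RAM

    def count_where(self, predicate):
        # Each node scans just its own partition.
        return sum(1 for r in self.records if predicate(r))

# Divide the entire dataset across four nodes, each storing a portion.
dataset = list(range(100))
nodes = [Node(dataset[i::4]) for i in range(4)]

# Run the query against every partition in parallel, then combine partials.
with ThreadPoolExecutor() as pool:
    partials = list(pool.map(lambda n: n.count_where(lambda r: r % 2 == 0), nodes))
result = sum(partials)
print(result)
```

Because every partition already sits in memory, the query never touches disk, which is what makes this model attractive for real-time processing of large datasets.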
One of the issues with AI as it exists today is how much it relies on the cloud to work. Smart assistants today need to connect to the internet to be able to make decisions. But with the sheer growth of data, the industry is seeing a huge bottleneck building up in the cloud — there’s only so much data the cloud can process at a time, not to mention there’s only so much data networks can transport to the cloud.
Samsung is committed to solving this problem through solutions such as our Universal Flash Storage (UFS) and Low Power DDR4X (LPDDR4X). With the industry's highest speeds, LPDDR4X can handle the intense requirements of AI and deep learning, supporting faster multitasking, higher capacities and lower power consumption to ultimately drive the best user experiences. Samsung's high-speed UFS advances the AI industry by providing the highest-density solution, up to 512GB, based on 64-layer V-NAND flash. With ultra-fast speeds, UFS can deliver the ground-breaking performance needed to search through multiple images simultaneously for AI photo filtering, store 4K and 8K multimedia content, and power a range of augmented reality and virtual reality devices.
Artificially intelligent, Samsung driven
The future of AI clearly depends on the technologies enabling it, and Samsung is one of the few companies in the world playing a key role across the whole technology ecosystem. Whether it's building cloud servers that make AI decisions or chips that help devices make decisions on their own, you'll find Samsung everywhere.