RISC-V has recently been one of the hottest topics in computing; we have even seen Apple start to hire high-performance RISC-V engineers. The technology, an open-source instruction set architecture (ISA), allows extensive customization and can be implemented without licensing fees.
There is already an existing project that has designed a general-purpose GPU based on the RISC-V ISA, and according to recent information, we may soon see a port of Nvidia's CUDA software stack to the Vortex RISC-V GPGPU platform.
Nvidia’s CUDA (Compute Unified Device Architecture) is a proprietary parallel computing platform and application programming interface (API) that runs on Nvidia’s lineup of graphics cards. When an application is written with CUDA support and the system detects a CUDA-capable GPU, the code can be massively accelerated on the GPU.
Researchers have now demonstrated a way to enable CUDA software toolkit support on Vortex, an open-source RISC-V GPGPU project. Vortex provides a full-system RISC-V GPU based on the RV32IMF ISA, with 32-bit cores that scale from 1-core up to 32-core GPU designs.
“…in this project, we propose and build a pipeline to support an end-to-end CUDA migration: the pipeline accepts CUDA source codes as input and executes them on an extended RISC-V GPU architecture. Our pipeline consists of several steps: translates CUDA source code into NVVM IR, convert NVVM IR into SPIR-V IR, forwards SPIR-V IR into POCL to get RISC-V binary file, and finally executes the binary file on an extended RISC-V GPU architecture.”
To put it simply, the CUDA source code is first lowered to an intermediate representation (IR) called NVVM IR, which is then converted to the Standard Portable Intermediate Representation (SPIR-V). The SPIR-V IR is then fed into POCL, a portable open-source implementation of the OpenCL standard, which produces a RISC-V binary that finally executes on the extended GPU architecture.
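The first stages of this pipeline can be sketched with off-the-shelf tools: clang's CUDA front end can emit LLVM IR (the basis of NVVM IR) for device code, and the Khronos llvm-spirv translator can convert LLVM IR to SPIR-V. The exact flags, file names, and the final POCL/Vortex step below are illustrative assumptions, not the researchers' actual commands:

```shell
# 1. A trivial CUDA kernel as pipeline input (illustrative example).
cat > saxpy.cu <<'EOF'
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}
EOF

# 2. CUDA source -> LLVM/NVVM IR for the device code only
#    (flags are illustrative; clang's CUDA mode normally needs a CUDA install).
clang++ -x cuda --cuda-device-only -emit-llvm -c saxpy.cu -o saxpy.bc

# 3. LLVM IR -> SPIR-V via the Khronos llvm-spirv translator.
llvm-spirv saxpy.bc -o saxpy.spv

# 4. SPIR-V -> RISC-V binary via POCL, then execution on the extended
#    Vortex GPU architecture. This step relies on the researchers'
#    extended toolchain and is not reproducible with stock tools.
```

The last step is the novel contribution: stock POCL can consume SPIR-V, but targeting the extended Vortex RISC-V GPU requires the modifications described in the project.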
Still, this is only a small first step toward an era in which RISC-V is used for accelerated computing applications, much as Nvidia's GPU lineup is today.
For more information, stay tuned.