Very cool project, and AMD graphics cards deserve this kind of work! Very well done. May I ask, is there any reason why one would focus on a single type of graphics card instead of relying on a library that works for other variants too? Is it because you get more fine-grained control that you would lose at a higher abstraction level?
Thanks!
> May I ask, is there any reason why one would focus on a single type of graphics card instead of relying on a library that works for other variants too?
AMDGPU.jl is actually one of the backends supported by Julia. We also support CUDA, Metal, Intel, and OpenCL to varying degrees: https://github.com/JuliaGPU
Each GPU backend implements a common array interface and a way to compile Julia code into low-level kernels, relying on the GPUCompiler infrastructure: https://github.com/JuliaGPU/GPUCompiler.jl
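To make that concrete, here's a minimal sketch of what the common array interface looks like in practice. It assumes an AMD GPU with AMDGPU.jl installed; the same lines work unchanged with CuArray from CUDA.jl, MtlArray from Metal.jl, etc.:

    using AMDGPU   # backend package; CUDA.jl / Metal.jl / oneAPI.jl work the same way

    x = ROCArray(rand(Float32, 1024))   # ROCArray for AMD; CuArray for Nvidia, ...
    y = ROCArray(rand(Float32, 1024))

    z = x .+ 2f0 .* sin.(y)   # broadcasting compiles to a fused GPU kernel
    s = sum(z)                # reductions and friends come with the array interface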
Once that is done, users can write code and low-level kernels (using KernelAbstractions.jl) in a backend-agnostic manner.
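And a small sketch of a backend-agnostic low-level kernel written with KernelAbstractions.jl; the axpy! wrapper and names are just for illustration:

    using KernelAbstractions

    # Backend-agnostic kernel: y[i] = a * x[i] + y[i]
    @kernel function axpy_kernel!(y, a, @Const(x))
        i = @index(Global)
        @inbounds y[i] = a * x[i] + y[i]
    end

    function axpy!(a, x, y)
        backend = get_backend(y)              # CPU(), ROCBackend(), CUDABackend(), ...
        kernel! = axpy_kernel!(backend)       # instantiate the kernel for that backend
        kernel!(y, a, x; ndrange = length(y)) # launch over the whole array
        KernelAbstractions.synchronize(backend)
        return y
    end

    x = rand(Float32, 1024); y = zeros(Float32, 1024)
    axpy!(2f0, x, y)   # runs on the CPU backend; pass ROCArray/CuArray inputs
                       # and the exact same kernel runs on the GPU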
Here are some examples of packages that target multiple GPU backends in this way:
- Real-time Gaussian splatting supporting AMD & Nvidia GPUs (probably others as well with minor work): https://github.com/JuliaNeuralGraphics/GaussianSplatting.jl
- AcceleratedKernels.jl, which is like a standard library of parallel algorithms: https://github.com/JuliaGPU/AcceleratedKernels.jl
- NNop.jl, which implements Flash Attention and other fused NN kernels: https://github.com/pxl-th/NNop.jl
- Flux.jl, a deep-learning library: https://github.com/FluxML/Flux.jl