Features • Quick Start • Build • Architecture
CLA is a simple toy library for basic vector/matrix operations in C. The project's main goal is to learn the foundations of CUDA and of Python bindings (using ctypes as a wrapper) through simple linear algebra operations (addition, subtraction, multiplication, broadcasting, transformations, etc.).
- C17, Python 3.13, and CUDA 12.8 support;
- Linux support;
- Vector-vector operations;
- Matrix-matrix operations;
- Vector and matrix norms;
- GPU device selection to run operations;
- Get CUDA information from the system (e.g., availability, number of devices);
- Memory management (CPU memory vs GPU memory), allowing copies between devices.
For the C-only API, obtain the latest binaries and headers from the releases tab on GitHub. For the Python API, use your favorite package manager (e.g., `pip`, `uv`) and install `pycla` from PyPI (e.g., `pip install pycla`).
The C API provides structs (see `cla/include/entities.h`) and functions (see `cla/include/vector_operations.h`, `cla/include/matrix_operations.h`) that operate over those structs. The two main entities are `Vector` and `Matrix`. A vector or matrix can reside in either CPU memory (host memory, in CUDA's terminology) or GPU memory (device memory). These structs always keep their metadata (e.g., shape, current device) on the CPU, which allows the CPU to coordinate most of the workflow. For an operation to run on the GPU, the entities must first be copied to the GPU's memory.
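For illustration, a minimal sketch of that copy-then-operate workflow is shown below. The function names are placeholders (the actual declarations live in `cla/include/entities.h` and `cla/include/vector_operations.h`), so treat this as pseudocode for the pattern rather than the real `cla` API:

```c
/* Sketch only: the function names below are placeholders, not the actual
 * cla API; see cla/include/entities.h and cla/include/vector_operations.h
 * for the real declarations. */
#include "entities.h"
#include "vector_operations.h"

int main(void) {
    /* Entities start in host (CPU) memory; metadata always stays on the CPU. */
    Vector *a = create_vector(3);   /* placeholder constructor */
    Vector *b = create_vector(3);

    /* Copy both operands to device (GPU) memory before a GPU operation. */
    vector_to_device(a);            /* placeholder host-to-device copy */
    vector_to_device(b);

    /* The operation runs on the GPU; copy the result back to the host. */
    Vector *sum = vector_add(a, b); /* placeholder operation */
    vector_to_host(sum);

    destroy_vector(a);
    destroy_vector(b);
    destroy_vector(sum);
    return 0;
}
```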
For a quick start, compile `samples/c_api.c` with: (i) `gcc <filename>.c -lcla`, if you installed the library system-wide (i.e., copied the headers to `/usr/include/` and the shared library to `/usr/lib/`); or (ii) `gcc -I <path-to-include> -L <path-to-root-with-libcla> <filename>.c -lcla` otherwise. Note that `-lcla` comes after the source file so the linker can resolve the program's references to the library.
To run it, make `libcla.so` findable by the executable (e.g., update the `LD_LIBRARY_PATH` environment variable or copy the library to `/usr/lib`) and run it in the shell of your preference (e.g., `./a.out`).
TODO
TODO
TODO
The library is organized as simply as possible. The goal is to keep a slight separation between the C and Python APIs while allowing the core CUDA code to remain flexible.
The C API provides a shared library named `cla` to be used by other programs/libraries at link time or runtime. This C library is statically linked to the CUDA kernels/functions during the build.
The Python API provides a wrapper to the `cla` library through a Python package named `pycla`, which dynamically loads the `cla` library at runtime. The CUDA runtime must be available on the system to use CUDA-related functionality.
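On Linux, `ctypes.CDLL` is essentially a thin wrapper around `dlopen`, so the loading step `pycla` performs at import time is roughly equivalent to the C sketch below (the `cuda_available` symbol is a placeholder, not the actual `cla` API):

```c
/* Illustrative only: the C-level equivalent of ctypes.CDLL("libcla.so").
 * The symbol name below is a placeholder, not the actual cla API. */
#include <dlfcn.h>
#include <stdio.h>

int main(void) {
    /* Load the shared library at runtime, as ctypes.CDLL does. */
    void *handle = dlopen("libcla.so", RTLD_NOW);
    if (!handle) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    /* Resolve an exported function by name (placeholder signature). */
    int (*cuda_available)(void) = (int (*)(void))dlsym(handle, "cuda_available");
    if (cuda_available)
        printf("CUDA available: %d\n", cuda_available());

    dlclose(handle);
    return 0;
}
```

On the Python side, the equivalent is `ctypes.CDLL("libcla.so")` followed by attribute access on the returned object, which performs the `dlsym` step.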
The aforementioned relationship is depicted in the diagram below:
```mermaid
flowchart LR
    cla("`cla`")
    pycla("`pycla`")
    cuda["CUDA code"]
    cla-.->|Statically links| cuda
    pycla==>|Dynamically loads| cla
```
The source code is organized as follows:
- `cla`: source code for the C API;
- `pycla`: source code for the Python API;
- `bin`: binary directory for the `cla` shared library.
The following diagram shows the module organization of `cla`.
```mermaid
flowchart TD
    vector("<strong>Vector Module</strong><br>Vector operations, norms, conversions.")
    matrix("<strong>Matrix Module</strong><br>Matrix operations, norms, conversions, vector-matrix operations.")
    cuda("<strong>CUDA Module</strong><br>Alternative operations for Matrix and Vector with CUDA kernels.")
    subgraph cla
    matrix -->|Uses for Matrix-Vector operations| vector
    matrix -->|Uses for parallel operations| cuda
    vector -->|Uses for parallel operations| cuda
    end
```
The following diagram shows the package organization of `pycla`.
```mermaid
flowchart TD
    core("<strong>Core Module</strong><br>Core entities.")
    cuda("<strong>CUDA Module</strong><br>Utilities for CUDA operations.")
    subgraph pycla
    core -->|Uses| cuda
    end
```