Verified commit 91ea4803 authored by Ota Mikušek

Add README.md

parent 8c730836

# llama.cpp CUDA Build for Epimetheus (T4 GPU)

This repository provides a Makefile to clone and build [`llama.cpp`](https://github.com/ggerganov/llama.cpp) with CUDA backend support specifically configured for NVIDIA T4 GPUs on Epimetheus servers.

## Prerequisites

Ensure the following are available:

- CUDA 12.9 installed at `/usr/local/cuda-12.9`
- `git`, `cmake`, `make`, and `nvcc` available in `PATH`
- Access to the `git@github.com:ggml-org/llama.cpp.git` repository via SSH
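The prerequisite checks above can be run as a quick sanity script before building (a minimal sketch; the tool list and the CUDA path are taken directly from the list above):

```shell
#!/bin/sh
# Verify the build prerequisites: required tools in PATH and the CUDA install.
checked=0
for tool in git cmake make nvcc; do
  checked=$((checked + 1))
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
  fi
done

# CUDA 12.9 is expected at this fixed path on Epimetheus servers.
cuda_dir=/usr/local/cuda-12.9
if [ -d "$cuda_dir" ]; then
  echo "CUDA 12.9: found at $cuda_dir"
else
  echo "CUDA 12.9: MISSING (expected at $cuda_dir)"
fi
```

The script only reports what is missing; it does not install anything.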

## Build Instructions

To build a specific `llama.cpp` release, set the `VERSION` variable to the desired tag, for example:

```bash
VERSION=b7650 make install
```
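Under the hood, a CUDA build targeting a T4 (compute capability 7.5) typically amounts to a `cmake` invocation along these lines. The exact flags live in this repository's Makefile, so treat the following as an illustrative sketch of standard `llama.cpp` CMake options, not the Makefile's actual commands:

```shell
# Illustrative only: a standard llama.cpp CUDA build for a T4 (sm_75).
git clone git@github.com:ggml-org/llama.cpp.git
cd llama.cpp
git checkout b7650   # the tag passed via VERSION

cmake -B build \
  -DGGML_CUDA=ON \
  -DCMAKE_CUDA_ARCHITECTURES=75 \
  -DCMAKE_CUDA_COMPILER=/usr/local/cuda-12.9/bin/nvcc
cmake --build build --config Release -j
```

Pinning `CMAKE_CUDA_ARCHITECTURES=75` keeps the build small and avoids compiling kernels for GPU generations that are not present on Epimetheus.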