Running CUDA Apps in Docker on Ubuntu (The Modern Way)
These days, it’s hard to find a serious Machine Learning project that doesn’t expect an NVIDIA GPU. Whether you’re training models, running inference, or doing heavy compute workloads, CUDA is usually part of the stack.
The good news: you can run GPU workloads cleanly inside Docker — without turning your host machine into dependency spaghetti.
In this guide, I’ll show you how to set up Docker + NVIDIA GPU support on Ubuntu and verify everything works by running a CUDA container.
What You’ll Need (Prerequisites)
Before you start, make sure you have:
- Ubuntu (x86_64) — Ubuntu 20.04+ recommended (22.04 and 24.04 work great)
- An NVIDIA GPU with CUDA support
- A working NVIDIA driver installed on the host
- Docker Engine installed (Docker CE recommended)
Note: Docker’s built-in `--gpus` support requires Docker 19.03 or newer — standard on any recent install.
Step 1 — Install Docker (Modern Ubuntu Method)
This is the current official approach using Docker’s repository and keyring (not apt-key, which is deprecated).
1) Remove old Docker installs (optional but recommended)
2) Install required packages
3) Add Docker’s official GPG key
4) Add the Docker repository
5) Install Docker Engine
6) Add your user to the Docker group (so you can run Docker without sudo)
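The six steps above, following Docker's current apt-repository instructions, look roughly like this (verify against the official Docker docs for your Ubuntu release):

```shell
# 1) Remove old/conflicting Docker packages (ignore errors if none are installed)
for pkg in docker.io docker-doc docker-compose podman-docker containerd runc; do
  sudo apt-get remove -y $pkg
done

# 2) Install required packages
sudo apt-get update
sudo apt-get install -y ca-certificates curl

# 3) Add Docker's official GPG key (keyring method — not the deprecated apt-key)
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc

# 4) Add the Docker repository
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] \
  https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

# 5) Install Docker Engine
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

# 6) Run Docker without sudo
sudo usermod -aG docker $USER
```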
⚠️ Important: Log out and log back in (or reboot) for group changes to take effect.
To test Docker:
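A quick smoke test — this pulls a tiny image and runs it:

```shell
# Prints "Hello from Docker!" if the daemon and your group membership are working
docker run --rm hello-world
```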
Step 2 — Install NVIDIA Drivers (Host Side)
Docker containers do not install your GPU driver — the host provides it.
To confirm your NVIDIA driver is working:
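On the host, run:

```shell
# Shows driver version, CUDA version, and a table of detected GPUs
nvidia-smi
```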
If everything is installed correctly, you’ll see your GPU listed along with driver and CUDA info.
If nvidia-smi is missing
Install drivers from Ubuntu’s repo:
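One common approach is to let Ubuntu pick the recommended driver (exact driver versions vary by GPU and release, so treat this as a sketch):

```shell
sudo apt update
# Detects your GPU and installs the recommended proprietary driver
sudo ubuntu-drivers autoinstall
sudo reboot
```

After the reboot, `nvidia-smi` should work.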
Step 3 — Install NVIDIA Container Toolkit (Required)
The NVIDIA Container Toolkit is the modern replacement for the nvidia-docker2 and nvidia-container-runtime packages you may find in older tutorials.
1) Add NVIDIA’s repository key + source list
2) Install the toolkit
3) Configure Docker to use the NVIDIA runtime
4) Restart Docker
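The four steps above, following NVIDIA's current install instructions (double-check the official Container Toolkit docs, as repository URLs occasionally change):

```shell
# 1) Add NVIDIA's repository key + source list
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | \
  sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
  sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
  sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list

# 2) Install the toolkit
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit

# 3) Configure Docker to use the NVIDIA runtime
sudo nvidia-ctk runtime configure --runtime=docker

# 4) Restart Docker
sudo systemctl restart docker
```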
Step 4 — Test CUDA Inside Docker
Now for the moment of truth: run a CUDA container and call nvidia-smi inside it.
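For example (the image tag here is illustrative — pick one from Docker Hub whose CUDA version your driver supports):

```shell
# Run nvidia-smi inside a CUDA container with all GPUs exposed
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```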
If everything is working, you’ll see the same GPU information you saw on the host.
Bonus: Selecting Specific GPUs
If your machine has multiple GPUs, Docker can target specific ones:
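For example, by device index (the image tag is illustrative):

```shell
# Expose only GPUs 0 and 2 to the container
docker run --rm --gpus '"device=0,2"' nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```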
Or limit by count:
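Passing a number to `--gpus` lets Docker pick that many devices:

```shell
# Expose any two GPUs
docker run --rm --gpus 2 nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```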
Picking the Right CUDA Base Image
NVIDIA publishes multiple CUDA image types depending on what you’re doing:
- base → smallest runtime environment
- runtime → runtime libraries included
- devel → includes compilers and headers (best for building inside the container)
Example:
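A quick way to see the difference — the devel image ships `nvcc`, which base and runtime don't (tag is illustrative):

```shell
# Print the CUDA compiler version from inside a devel container
docker run --rm --gpus all nvidia/cuda:12.4.1-devel-ubuntu22.04 nvcc --version
```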
You’re Ready 🚀
At this point, you’ve got a clean, scalable workflow:
- Host provides the NVIDIA driver
- Docker provides isolation and portability
- CUDA workloads run inside containers like any other service
Now you can start building your own Dockerfiles and ship GPU-powered apps without fighting dependency conflicts every time you update a library.