# How to Build a Reproducible NixOS ML Workstation with CUDA & Blender

If you've ever spent hours wrestling with conflicting Python versions or broken GPU drivers after an update, it's time to switch to a declarative OS. NixOS turns your entire machine into code, so every rebuild is identical: no more "works on my laptop" excuses. In this post, you'll get:

- A copy-and-pasteable `configuration.nix` for a full ML setup
- An explanation of every CUDA-related package
- Verification steps to confirm your GPU is firing on all cylinders
## Why Choose NixOS for Machine Learning?

### Declarative Infrastructure

Define your system in `configuration.nix`. Want Node.js, Rust, CUDA, and Blender? List them once and let Nix handle the rest.

### Reproducibility

Clone your repo on a new laptop or VM, run `nixos-rebuild switch`, and you'll end up with the exact same environment.

### Atomic Updates & Rollbacks

If an update breaks, `nixos-rebuild switch --rollback` restores the previous generation instantly.

### Per-Project Shells with Flakes

Pin toolchains per project, isolating Python, Rust, Go, or any library to avoid global conflicts.
## The Complete configuration.nix

Copy the snippet below into `/etc/nixos/configuration.nix`. It covers disk unlocking, GNOME, NVIDIA/CUDA, Blender, DevSecOps tools, ML libraries, and more.
```nix
{ config, pkgs, ... }:

{
  imports = [ ./hardware-configuration.nix ];

  # Bootloader & Disk Encryption
  boot.loader.systemd-boot.enable = true;
  boot.loader.efi.canTouchEfiVariables = true;
  # Replace the UUID with your own LUKS partition's (find it with `blkid`)
  boot.initrd.luks.devices."luks-root".device =
    "/dev/disk/by-uuid/a4020b86-0db5-447a-a68f-82c65bb032e0";

  # Networking & Locale
  networking.hostName = "nixos";
  networking.networkmanager.enable = true;
  time.timeZone = "America/New_York";
  i18n.defaultLocale = "en_US.UTF-8";
  i18n.extraLocaleSettings = {
    LC_TIME = "en_US.UTF-8";
    LC_NUMERIC = "en_US.UTF-8";
    LC_MONETARY = "en_US.UTF-8";
    LC_PAPER = "en_US.UTF-8";
    LC_NAME = "en_US.UTF-8";
    LC_ADDRESS = "en_US.UTF-8";
    LC_TELEPHONE = "en_US.UTF-8";
    LC_IDENTIFICATION = "en_US.UTF-8";
    LC_MEASUREMENT = "en_US.UTF-8";
  };

  # GNOME Desktop (options renamed out of the xserver namespace in 25.05)
  services.xserver.enable = true;
  services.displayManager.gdm.enable = true;
  services.desktopManager.gnome.enable = true;
  services.xserver.xkb = { layout = "us"; variant = ""; };

  # NVIDIA & CUDA Support
  hardware.graphics.enable = true;
  services.xserver.videoDrivers = [ "nvidia" ];
  hardware.nvidia = {
    modesetting.enable = true;
    powerManagement.enable = true;
    powerManagement.finegrained = true;
    open = false;
    nvidiaSettings = true;
    package = config.boot.kernelPackages.nvidiaPackages.stable;
    prime = {
      offload = { enable = true; enableOffloadCmd = true; };
      # Adjust these bus IDs to your machine (find them with `lspci`)
      intelBusId = "PCI:0:2:0";
      nvidiaBusId = "PCI:1:0:0";
    };
  };

  # Sound & Printing
  services.printing.enable = true;
  services.pulseaudio.enable = false;
  security.rtkit.enable = true;
  services.pipewire = {
    enable = true;
    alsa.enable = true;
    alsa.support32Bit = true;
    pulse.enable = true;
  };

  # User Account & Shell
  users.users.maxwell = {
    isNormalUser = true;
    description = "Maxwell";
    extraGroups = [ "networkmanager" "wheel" "docker" ];
    shell = pkgs.zsh;
  };
  programs.zsh.enable = true;

  # Containers & Virtualization
  virtualisation.docker.enable = true;
  virtualisation.podman.enable = true;

  # Allow Proprietary Drivers
  nixpkgs.config.allowUnfree = true;

  # System Packages
  environment.systemPackages = with pkgs; [
    # Essentials
    git wget curl htop unzip jq ripgrep fzf tmux neovim gnumake direnv pciutils mesa-demos
    # Infra & DevSecOps
    docker kubectl k9s terraform ansible age sops gnupg openssl
    # C & C++
    gcc gfortran clang clang-tools cmake gdb valgrind
    # Python & ML
    python3 python3Packages.pip python3Packages.virtualenv
    # Rust & Go (rustup is omitted: its rustc/cargo shims collide with the
    # standalone packages when installed in the same profile)
    rustc cargo rust-analyzer rustfmt clippy go
    # Pentesting Tools
    nmap wireshark john sqlmap metasploit hydra aircrack-ng burpsuite gobuster zmap lynis
    # Reproducibility
    nixpkgs-fmt nix-tree cachix lorri devenv
    # Desktop Apps
    firefox chromium gnome-terminal gnome-tweaks signal-desktop obsidian protonmail-bridge protonmail-desktop
    gnome-extension-manager gnomeExtensions.dash-to-dock gnomeExtensions.user-themes
    # 3D Modeling & Rendering
    blender
    # Web & Full-Stack
    nodejs yarn pnpm php nginx
    # Productivity
    libreoffice pdfarranger imagemagick inkscape gimp flameshot
    # Experimental
    helix wezterm lapce
    # VPN
    cloudflare-warp
    # AI / CUDA / ML Stack
    cudaPackages.cudatoolkit       # Core CUDA libraries & developer tools
    cudaPackages.cudnn             # Deep-learning primitives
    cudaPackages.cuda_nvcc         # CUDA compiler
    python3Packages.torchWithCuda  # PyTorch linked against CUDA/cuDNN
    jupyter                        # Interactive notebooks
    ollama                         # Local LLM inference engine
    code-server                    # VS Code in the browser
  ];

  # SSH & Firewall
  services.openssh.enable = true;
  networking.firewall.enable = true;

  # NixOS Version & Binary Cache
  system.stateVersion = "25.05";
  nix.settings.substituters = [ "https://cuda-maintainers.cachix.org" ];
  nix.settings.trusted-public-keys = [
    "cuda-maintainers.cachix.org-1:0dq3bujKpuEPMCX6U4WylrUDZ9JyUG0VpVncPKiVSI4="
  ];
}
```
## Deep Dive: What Each CUDA Package Does

### cudaPackages.cudatoolkit

The CUDA Toolkit is the foundation: headers, libraries, and developer tools for compiling and running GPU code (the kernel driver itself comes from the `hardware.nvidia` block above). It provides highly optimized routines (cuBLAS, cuFFT, etc.) that frameworks and custom code call for parallel math.
### cudaPackages.cudnn

cuDNN is NVIDIA's specialized library for deep neural networks. It supplies accelerated convolution, pooling, activation functions, and recurrent layers, powering the training and inference of CNNs, RNNs, transformers, and more.
### cudaPackages.cuda_nvcc

`nvcc` is the CUDA compiler driver. Use it to compile `.cu` kernel files into GPU binaries or PTX. It's essential if you plan to write custom CUDA kernels or build libraries that include GPU code.
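
To smoke-test the compiler, here's a minimal sketch: a vector-add kernel written to a scratch file and built with `nvcc`. The file name `vecadd.cu` is just an example, and on a PRIME laptop you may need to run the resulting binary through `nvidia-offload`.

```bash
cat > vecadd.cu <<'EOF'
#include <cstdio>

// Each thread adds one element: y[i] += x[i]
__global__ void add(int n, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] += x[i];
}

int main() {
    int n = 1 << 20;
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));  // unified memory, visible to CPU & GPU
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; i++) { x[i] = 1.0f; y[i] = 2.0f; }
    add<<<(n + 255) / 256, 256>>>(n, x, y);    // enough 256-thread blocks to cover n
    cudaDeviceSynchronize();                   // wait for the kernel to finish
    printf("y[0] = %.1f (expected 3.0)\n", y[0]);
    cudaFree(x); cudaFree(y);
    return 0;
}
EOF
nvcc -o vecadd vecadd.cu && ./vecadd
```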
### python3Packages.torchWithCuda

A PyTorch build linked against your CUDA Toolkit and cuDNN. It lets you move tensors and models to the GPU (`device="cuda"`), cutting training and inference times by an order of magnitude or more compared to CPU.
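
Here's a quick sanity-check sketch you can run once the build is installed. It assumes the NVIDIA driver is loaded; it falls back to CPU otherwise, and simply times a large matrix multiply:

```python
import time
import torch

# Fall back to CPU so the script still runs on machines without CUDA
device = "cuda" if torch.cuda.is_available() else "cpu"
print("Using device:", device)
if device == "cuda":
    print("GPU:", torch.cuda.get_device_name(0))

x = torch.randn(4096, 4096, device=device)
start = time.time()
y = x @ x                      # large matmul, the bread and butter of ML workloads
if device == "cuda":
    torch.cuda.synchronize()   # wait for the GPU kernel before stopping the clock
print(f"4096x4096 matmul took {time.time() - start:.3f}s on {device}")
```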
### jupyter

Jupyter Notebook provides an interactive environment for prototyping and visualizing data. Run your CUDA-accelerated PyTorch code in cells, display graphs inline, and iterate rapidly.
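
To start it, something like the following should work (the port is arbitrary):

```bash
jupyter notebook --no-browser --port 8888
# Open the printed http://localhost:8888/?token=... URL in your browser
```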
### ollama

A local LLM inference engine that runs quantized transformer models on your GPU. Deploy chatbots, embeddings, or search tools on your private hardware, with no external API calls.
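
A minimal first run, assuming the `llama2` model tag is still available in the Ollama registry: start the server, pull a model, then query it from the CLI or the local HTTP API on port 11434.

```bash
ollama serve &                 # or enable the NixOS services.ollama module instead
sleep 2                        # give the server a moment to come up
ollama pull llama2             # downloads the quantized weights
ollama run llama2 "Say hello in one sentence."

# The same model is reachable over the local REST API:
curl http://localhost:11434/api/generate \
  -d '{"model": "llama2", "prompt": "Hello", "stream": false}'
```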
### code-server

VS Code in the browser, wired into your system packages. Access your CUDA toolchain, Jupyter server, and full IDE in any modern browser, ideal for remote workstations or tablets.
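
By default code-server binds to localhost with password auth; a sketch of a first launch (the bind address is up to you):

```bash
code-server --bind-addr 127.0.0.1:8080
# The generated password lives in ~/.config/code-server/config.yaml;
# for remote access, tunnel with: ssh -L 8080:localhost:8080 maxwell@workstation
```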
## Verifying Your Setup

After saving the config, rebuild and switch:

```bash
sudo nixos-rebuild switch --upgrade
```

Then run these checks:

```bash
nvidia-smi                    # Driver version + GPU status
nvcc --version                # CUDA compiler version
python3 -c "import torch; print(torch.cuda.is_available())"
jupyter notebook --version    # Jupyter server version
blender --version             # Blender launch test
ollama run llama2 "Hello"     # Needs the Ollama server running (see above)
```

You should see your GPU listed, `torch.cuda.is_available()` returning `True`, and both Blender and Ollama responding without errors.
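
One PRIME-specific caveat: with offload mode enabled, applications render on the integrated GPU unless you launch them through the `nvidia-offload` wrapper that `enableOffloadCmd = true` generates. A quick way to confirm offloading works (`glxinfo` comes from the mesa-demos package in the config):

```bash
glxinfo | grep "OpenGL renderer"                   # should report the Intel iGPU
nvidia-offload glxinfo | grep "OpenGL renderer"    # should report the NVIDIA GPU
nvidia-offload blender                             # run Blender on the discrete GPU
```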
## Next Steps

- **Enable Flakes:** Add `nix.settings.experimental-features = [ "nix-command" "flakes" ];` at the top of your config, then create a `flake.nix` per project for isolated shells (see the sketch below).
- **Continuous Integration:** Integrate Nix builds into GitHub Actions or GitLab CI for hermetic, reproducible pipelines.
- **Automatic Rollbacks:** If an update ever misbehaves, `sudo nixos-rebuild switch --rollback` restores your last working state.
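
As a starting point, here's a minimal per-project `flake.nix` sketch; the pinned nixpkgs branch and the package choices are illustrative, so swap in whatever the project needs:

```nix
{
  description = "Isolated dev shell for one ML project";

  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-25.05";

  outputs = { self, nixpkgs }:
    let
      system = "x86_64-linux";
      pkgs = import nixpkgs {
        inherit system;
        config.allowUnfree = true;  # needed for the CUDA-enabled packages
      };
    in {
      devShells.${system}.default = pkgs.mkShell {
        packages = [
          pkgs.python3
          pkgs.python3Packages.torchWithCuda
        ];
      };
    };
}
```

Enter the shell with `nix develop`; with `direnv` (already in the system packages) plus `nix-direnv` and a `use flake` line in `.envrc`, the environment activates automatically when you `cd` into the project.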
By treating your workstation as code, you gain predictability, repeatability, and lightning-fast ML workflows.