Step 5

Install Ollama

Ollama is the engine that runs AI models locally on your own machine. On Linux and macOS, a single command installs everything.

Install Ollama

bash — Ubuntu / macOS

```bash
# One command installs Ollama and, on Linux, registers it as a systemd service
curl -fsSL https://ollama.com/install.sh | sh
```

PowerShell — Windows

```powershell
# On Windows, download and run the installer from https://ollama.com/download,
# or install via winget (package id: Ollama.Ollama)
winget install Ollama.Ollama
```
💡
Official Download Page

You can also download Ollama directly from https://ollama.com/download for Ubuntu, macOS, and Windows.

Start & Enable Ollama

On Linux, the install script registers Ollama as a systemd service and starts it automatically, so the commands below mostly confirm that state. (On macOS and Windows, Ollama runs as a background app instead, and `systemctl` does not apply.)

```bash
# Start the service (the Linux installer normally does this already)
sudo systemctl start ollama

# Enable auto-start on boot
sudo systemctl enable ollama

# Verify it's running
sudo systemctl status ollama
# Should show: Active: active (running)

# Test the API
curl http://localhost:11434
# Should respond: Ollama is running
```
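Beyond that root endpoint, the same port serves Ollama's REST API, which the CLI and most integrations use under the hood. A minimal sketch of one generate request; the model name `llama3.2` and the prompt are example values, so substitute any model you have already pulled:

```shell
# Build the request body. "llama3.2" and the prompt are example values.
payload='{"model": "llama3.2", "prompt": "Why is the sky blue?", "stream": false}'

# POST to the /api/generate endpoint; with "stream": false the whole answer
# comes back as one JSON object instead of a token-by-token stream.
curl -s http://localhost:11434/api/generate -d "$payload" \
  || echo "Request failed -- is the Ollama service running?"
```

Requesting a model that has not been pulled yet returns an error, so run `ollama pull` first.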

Essential Ollama Commands

| Command | What it does |
| --- | --- |
| `ollama list` | List all downloaded models and their sizes |
| `ollama ps` | Show which model is currently loaded in RAM |
| `ollama pull <model>` | Download a model from the Ollama library |
| `ollama run <model>` | Run a model interactively in the terminal |
| `ollama run <model> --verbose` | Run with performance stats (tokens/sec) |
| `ollama rm <model>` | Remove a model and free up disk space |
| `ollama stop <model>` | Unload a model from memory |
| `journalctl -u ollama -f` | Watch Ollama logs in real time |
| `sudo systemctl restart ollama` | Restart the Ollama service |
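Put together, a first session typically looks like the sketch below. `llama3.2` is just an example name; pick any model from the Ollama library.

```shell
# Sketch of a typical first session; "llama3.2" is an example model name.
ollama pull llama3.2              # download the model (several GB)
ollama run llama3.2 --verbose     # chat interactively; prints tokens/sec stats
                                  # type /bye to leave the chat
ollama ps                         # confirm which model is loaded in RAM
ollama stop llama3.2              # unload it from memory
ollama rm llama3.2                # delete it from disk when no longer needed
```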
ℹ️
Video Walkthrough Available

Watch a step-by-step video on YouTube: How to Install Ollama on Ubuntu Linux