How to Set Up Ollama AI in Open Web UI with Docker Desktop
Local AI Revolution: Deploying Ollama & Open Web UI with Docker
Running Large Language Models (LLMs) locally offers privacy, no network latency, and freedom from subscription fees. In this guide, we won't just install Ollama; we will build a full local stack using Docker, connecting a powerful backend to the sleek Open Web UI frontend.
⚙️ The Architecture: How It Works
Before pasting commands, it is crucial to understand the setup. We are creating two distinct Docker containers that need to communicate:
- Container A (Backend): Runs Ollama, serving the API on port 11434.
- Container B (Frontend): Runs Open Web UI, accessible via your browser on port 3000.
- The Bridge: We use the host.docker.internal hostname (mapped in with the --add-host flag) so Container B can "see" Container A safely.
📋 Prerequisites
Ensure your environment is ready to handle LLM inference:
- Docker Desktop: Installed and running (download it from https://www.docker.com/products/docker-desktop/).
- Hardware: Minimum 8GB RAM. (An NVIDIA GPU is recommended for speed, but a CPU works fine with quantized models.)
- Storage: At least 10GB free space for model weights.
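Before pulling multi-gigabyte images, a quick sanity check with the standard Docker CLI confirms the daemon is actually up:

```bash
# Confirm the Docker CLI and daemon are reachable
docker --version
docker info --format '{{.ServerVersion}}'
```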
🚀 Step-by-Step Deployment Guide
Step 1: Deploy the Ollama Backend
First, we pull the official image and run it in detached mode. This creates our API server.
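A typical invocation looks like this (the container and volume name `ollama` follow the image's documentation; rename them if you prefer):

```bash
# Pull and start the official Ollama image in detached mode:
# - API exposed on host port 11434
# - model weights persisted in a named volume
docker run -d \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  --name ollama \
  ollama/ollama
```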
Flag check: -v mounts a named volume that persists your downloaded models, so you don't lose them when the container is removed or recreated.
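Before moving on, you can verify the backend is alive; Ollama's root endpoint answers with a plain-text status message:

```bash
# Should print: Ollama is running
curl http://localhost:11434
```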
Step 2: Install a Model (Llama3 or Phi3)
The container ships with no models installed. Let's send a command into it to download a lightweight model like Phi-3.
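Assuming the container is named `ollama` as in Step 1:

```bash
# Download Phi-3 inside the running container
docker exec -it ollama ollama pull phi3

# Optional: chat with it in the terminal to confirm it works
docker exec -it ollama ollama run phi3
```

Swap `phi3` for `llama3` if you have the RAM to spare.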
Step 3: Deploy Open Web UI
Now for the interface. This command is longer because we must bridge the networking gap between the two containers.
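This mirrors the quick-start command from the Open Web UI project; the `--add-host` flag maps `host.docker.internal` to your machine's gateway IP so the frontend can reach Ollama on port 11434:

```bash
# Run Open Web UI:
# - host port 3000 -> container port 8080
# - host.docker.internal resolves to the Docker host
# - chat history persisted in a named volume
docker run -d \
  -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:main
```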
Success! Open your browser to http://localhost:3000. You will see the login screen (the first account created becomes the Admin).
🎥 Watch the Live Configuration
Prefer a visual walkthrough? Watch the complete setup process, including troubleshooting tips, in the video below.
🔧 Common Troubleshooting
1. "I can't connect to Ollama"
Ensure you used the --add-host flag in Step 3. If you are on Linux, you might need to use --network host instead.
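A sketch of the Linux host-network variant; note that `-p` is ignored with `--network host`, so the UI listens on its internal port 8080, and we point it at Ollama explicitly via the `OLLAMA_BASE_URL` environment variable:

```bash
# Linux alternative: share the host's network stack,
# then browse to http://localhost:8080 instead of :3000
docker run -d \
  --network host \
  -v open-webui:/app/backend/data \
  -e OLLAMA_BASE_URL=http://127.0.0.1:11434 \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:main
```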
2. Slow Performance?
Docker containers can't see your GPU by default. You may need to install the NVIDIA Container Toolkit and explicitly pass GPU resources to the container.
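Once the toolkit is installed, recreate the backend container with GPU access (the `--gpus=all` flag comes from Ollama's own Docker instructions; your downloaded models survive because they live in the named volume):

```bash
# Remove the CPU-only container and recreate it with all GPUs attached
docker rm -f ollama
docker run -d \
  --gpus=all \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  --name ollama \
  ollama/ollama
```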