How to Set Up Ollama AI in Open Web UI with Docker Desktop


Local AI Revolution: Deploying Ollama & Open Web UI with Docker

Running Large Language Models (LLMs) locally gives you privacy, no network latency, and freedom from subscription fees. In this guide we won't just install Ollama; we will build a full local stack with Docker, connecting a powerful backend to the sleek Open Web UI frontend.


⚙️ The Architecture: How It Works

Before pasting commands, it is crucial to understand the setup. We are creating two distinct Docker containers that need to communicate:

  • Container A (Backend): Runs Ollama, serving the API on port 11434.
  • Container B (Frontend): Runs Open Web UI, accessible via your browser on port 3000.
  • The Bridge: The special hostname host.docker.internal (enabled with the --add-host flag in Step 3) lets Container B "see" Container A safely.
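
Once both containers exist (after Steps 1 and 3 below), a quick sanity check of this layout is to list them together with their published ports:

docker ps --format "table {{.Names}}\t{{.Ports}}"

You should see ollama publishing 11434 and open-webui mapping host port 3000 to its internal 8080.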

📋 Prerequisites

Ensure your environment is ready to handle LLM inference:

  • Docker Desktop: Installed and running.
  • Hardware: Minimum 8GB RAM (an NVIDIA GPU is recommended for speed, but a CPU-only setup works with quantized models).
  • Storage: At least 10GB free space for model weights.
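
A quick way to confirm Docker itself is ready (output will differ on your machine):

docker --version
docker info --format "{{.MemTotal}}"

The second command prints the memory Docker can use, in bytes; if it comes out below the 8GB recommended above, raise the limit in Docker Desktop under Settings → Resources.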

🚀 Step-by-Step Deployment Guide

Step 1: Deploy the Ollama Backend

First, we pull the official image and run it in detached mode. This creates our API server.

docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

Flag check: -v stores your downloaded models in a named volume called ollama, so they survive container restarts and re-creations.
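
Before moving on, confirm the API server is reachable from your host:

curl http://localhost:11434

Ollama answers its root endpoint with a plain "Ollama is running" message, which proves the port mapping works.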

Step 2: Install a Model (Llama3 or Phi3)

The container is empty by default. Let's send a command into the container to download a lightweight model like Phi-3.

docker exec -it ollama ollama pull phi3
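
The download takes a few minutes. Two optional checks before switching to the UI (the prompt below is just an example):

docker exec -it ollama ollama list
docker exec -it ollama ollama run phi3 "Summarize what a Docker volume is."

ollama list shows the models stored in the ollama volume, and ollama run lets you chat with Phi-3 straight from the terminal.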

Step 3: Deploy Open Web UI

Now for the interface. This command is longer because we must bridge the networking gap between the two containers.

docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
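
To confirm the container came up cleanly before opening the browser (the first startup can take a little while), check its logs and the HTTP response:

docker logs open-webui --tail 20
curl -I http://localhost:3000

The curl call should return HTTP 200 once the UI has finished starting.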

Success! Open your browser to http://localhost:3000. You will see the login screen (the first account created becomes the Admin).


🎥 Watch the Live Configuration

Prefer a visual walkthrough? Watch the complete setup process, including troubleshooting tips, in the video below.


🔧 Common Troubleshooting

1. "I can't connect to Ollama"
Ensure you used the --add-host flag in Step 3. If you are on Linux, you might need --network=host instead; in that case Open Web UI listens on port 8080 rather than 3000.
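
If it still can't connect, you can point Open Web UI at the backend explicitly through its OLLAMA_BASE_URL environment variable. A minimal sketch, assuming the port mapping from Step 1:

docker rm -f open-webui
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -e OLLAMA_BASE_URL=http://host.docker.internal:11434 -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main

Because the open-webui volume from Step 3 is reused, your accounts and chat history survive the recreation.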

2. Slow Performance?
Docker doesn't pass your GPU to containers by default. You may need to install the NVIDIA Container Toolkit and then recreate the Ollama container with GPU access, as shown below.
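
With the toolkit installed, recreate the Ollama container with GPU access; --gpus=all is the flag Ollama's own Docker instructions use, and the model data in the ollama volume is preserved:

docker rm -f ollama
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama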
