Ollama Integration

Run AI models locally with Ollama for complete privacy and offline capability.

Why Ollama?

  • 100% Local: Your data never leaves your machine
  • No API Costs: Inference is free after the one-time model download
  • Offline: Works without an internet connection
  • Fast: Optimized for local hardware

Quick Setup

# Install Ollama
winget install Ollama.Ollama

# Download the recommended model
ollama pull phi3:3.8b-mini-4k-instruct-q4_K_M

CommandLane automatically detects Ollama when it is running on localhost:11434 (the default port).
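If detection does not pick up a running instance, you can confirm that the local server is reachable. This is a quick check, assuming the standard Ollama CLI and its HTTP API on the default port:

# Confirm the Ollama server is responding on the default port
curl http://localhost:11434/api/version

# List the models installed locally
ollama list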

Model                               Size    Speed    Quality
phi3:mini                           2.2 GB  Fastest  Good
phi3:3.8b-mini-4k-instruct-q4_K_M   2.4 GB  Fast     Better
phi3:3.8b-mini-4k-instruct-q6_K     3.1 GB  Medium   Best
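To trade download size and speed for answer quality, you can pull any of the other variants with the standard Ollama CLI; the tags below are the ones from the table:

# Pull the higher-quality q6_K variant
ollama pull phi3:3.8b-mini-4k-instruct-q6_K

# Chat with it once to sanity-check the download
ollama run phi3:3.8b-mini-4k-instruct-q6_K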

Full Setup Guide

For detailed installation and configuration, see the Ollama Setup Guide.