# Ollama Integration
Run AI models locally with Ollama for complete privacy and offline capability.
## Why Ollama?
- 100% Local: Your data never leaves your machine
- No API Costs: Free inference after a one-time model download
- Offline: Works without internet
- Fast: Optimized for local hardware
## Quick Setup

```bash
# Install Ollama
winget install Ollama.Ollama

# Download recommended model
ollama pull phi3:3.8b-mini-4k-instruct-q4_K_M
```
CommandLane automatically detects an Ollama server running on `localhost:11434` (Ollama's default port).
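If detection does not kick in, you can confirm the local server is reachable yourself. A minimal check using the standard Ollama CLI and its HTTP API (the `/api/tags` endpoint lists locally installed models):

```bash
# List models known to the local Ollama install
ollama list

# Query the local API directly; a JSON response confirms the server is up on the default port
curl http://localhost:11434/api/tags
```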
## Recommended Models
| Model | Size | Speed | Quality |
|---|---|---|---|
| phi3:mini | 2.2 GB | Fastest | Good |
| phi3:3.8b-mini-4k-instruct-q4_K_M | 2.4 GB | Fast | Better |
| phi3:3.8b-mini-4k-instruct-q6_K | 3.1 GB | Medium | Best |
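After downloading one of the models above, you can sanity-check it from the terminal before pointing CommandLane at it. A quick smoke test using the standard Ollama CLI (the prompt text here is just an illustrative example):

```bash
# Pull the smallest recommended model and run a one-off prompt against it
ollama pull phi3:mini
ollama run phi3:mini "Summarize what a .gitignore file does in one sentence."
```

If the model responds in a few seconds, it is ready for local inference; larger quantizations in the table trade speed for output quality.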
## Full Setup Guide
For detailed installation and configuration, see the Ollama Setup Guide.