# AI Configuration
Configure local AI processing with Ollama.
## Model Selection
### Available Models
| Model | Size | RAM Required | Best For |
|---|---|---|---|
| `tinyllama` | 637 MB | 2 GB | Quick tests |
| `phi3` | 2.2 GB | 4 GB | Light usage |
| `llama3.2:3b` | 2.0 GB | 4 GB | Balanced |
| `llama3.2:7b` | 4.7 GB | 8 GB | Quality |
| `mistral` | 4.1 GB | 8 GB | General purpose |
### Setting the Model
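The model is typically selected through the application's configuration. A minimal sketch, assuming the app reads a `OLLAMA_MODEL` environment variable (that variable name is an assumption, not a documented setting; check your application's configuration reference):

```shell
# Hypothetical: OLLAMA_MODEL is an assumed variable name, not a
# documented setting for this application.
export OLLAMA_MODEL=llama3.2:3b

# Restart the stack so the new setting takes effect
docker compose up -d
```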
### Download Models Manually
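Models can be pulled ahead of time with the standard `ollama pull` command. The container name `ollama` below is an assumption based on a typical Compose setup; substitute your own container name:

```shell
# Pull a model inside the running Ollama container
# ("ollama" is the assumed container name).
docker exec -it ollama ollama pull llama3.2:3b

# List the models already downloaded
docker exec -it ollama ollama list
```

Pulling in advance avoids a long first-request delay while the model downloads.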
## GPU Acceleration
### NVIDIA GPUs
Ensure the NVIDIA Container Toolkit is installed:
```shell
# Install toolkit
distribution=$(. /etc/os-release; echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt update && sudo apt install -y nvidia-container-toolkit
sudo systemctl restart docker
```
### Verify GPU Access
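A common check is to run `nvidia-smi` from inside a container; seeing the usual GPU table confirms that Docker can pass the GPU through. The CUDA image tag below is illustrative and may need adjusting for your driver version:

```shell
# Run nvidia-smi inside a CUDA base image; a GPU table in the output
# confirms container GPU access (image tag may need adjusting).
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi

# Check whether Ollama detected the GPU at startup
# (container name "ollama" is an assumption).
docker logs ollama 2>&1 | grep -i gpu
```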
## Disabling AI
For resource-constrained environments:
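One sketch of disabling AI, assuming a Compose setup: stop the Ollama container so no model stays resident in memory. The `AI_ENABLED` variable shown is an assumed setting name, not a documented one; consult your application's configuration reference for the actual flag:

```shell
# Stop the Ollama container to free its RAM
# ("ollama" is the assumed service name).
docker compose stop ollama

# Hypothetical: AI_ENABLED is an assumed setting name.
export AI_ENABLED=false
```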
## Custom Prompts
AI perspectives use configurable prompts, which can be customized in the application settings.