# Using Local LLMs (Ollama)
Ollama lets you run LLMs locally for privacy and offline translation.
## Install Ollama
- Download and install from https://ollama.ai
- Start Ollama
- Pull a model (example):
```bash
ollama pull llama3
```

## Use in Supervertaler
Once Ollama is installed and running, Supervertaler can use it as a provider.
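Before selecting Ollama as the provider, you can confirm the local server is actually reachable. A minimal sketch, assuming Ollama's default port (11434) and its `/api/tags` endpoint, which lists the models you have pulled:

```bash
# Check whether the local Ollama server answers on its default port (11434).
# /api/tags returns the locally available models as JSON.
curl -s http://localhost:11434/api/tags >/dev/null 2>&1 \
  && echo "Ollama is reachable" \
  || echo "Ollama is not running on localhost:11434"
```

If the check fails, start Ollama (for example via `ollama serve`) and try again.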
::: info
Local models vary a lot in quality. For best results, test a few models on your typical content.
:::
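One way to compare models is to run the same representative sentence through each candidate and inspect the output side by side. A minimal sketch, where the model names and the sample sentence are placeholders; substitute models you have actually pulled:

```bash
# Run one representative source sentence through several local models.
# Models that are not installed are skipped instead of causing an error.
sample="Translate into German: The invoice is attached."
for model in llama3 mistral; do
  if ollama list 2>/dev/null | grep -q "^$model"; then
    echo "== $model =="
    ollama run "$model" "$sample"
  else
    echo "== $model == (not installed, skipping)"
  fi
done
```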