Configure for Ollama
- Install Ollama. Make sure to review the system requirements before installation.
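  For reference, Ollama can typically be installed with the official script on Linux or via Homebrew on macOS (check ollama.com for the installer that matches your platform):
    curl -fsSL https://ollama.com/install.sh | sh   # Linux install script
    brew install ollama                             # macOS via Homebrew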
- Install a language model in Ollama via the terminal. For example, pick one of the following depending on your system's RAM:
  For standard computers (minimum 8GB RAM):
    ollama run qwen2.5-coder:7b
  For better performance (16GB+ RAM):
    ollama run qwen2.5-coder:14b
  For high-end systems (32GB+ RAM):
    ollama run qwen2.5-coder:32b
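  After pulling a model, you can verify that it is installed and that the Ollama server is reachable on its default port before moving on to Qt Creator:
    ollama list                            # lists locally installed models
    curl http://localhost:11434/api/tags   # returns the installed models as JSON if the server is running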
- Open Qt Creator settings (Edit > Preferences on Linux/Windows, Qt Creator > Preferences on macOS)
- Navigate to the "QodeAssist" tab
- On the "General" page, verify:
- Ollama is selected as your LLM provider
- The URL is set to http://localhost:11434
- Your installed model appears in the model selection
- The prompt template is Ollama Auto FIM or Ollama Auto Chat for chat assistance. You can specify template if it is not work correct
- Disable using tools if your model doesn't support tooling
- Click Apply if you made any changes
You're all set! QodeAssist is now ready to use in Qt Creator.
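If completions do not appear, a quick way to rule out the Ollama side is to query its generate endpoint directly (substitute the model you installed):
  curl http://localhost:11434/api/generate -d '{"model": "qwen2.5-coder:7b", "prompt": "// hello", "stream": false}'
A JSON response confirms Ollama is working, which points any remaining problem at the Qt Creator settings.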