QodeAssist/docs/ollama-configuration.md
2025-11-10 21:16:58 +01:00


Configure for Ollama

  1. Install Ollama. Make sure to review the system requirements before installation.
  2. Install a language model in Ollama via the terminal. For example, you can run:

For standard computers (minimum 8GB RAM):

ollama run qwen2.5-coder:7b

For better performance (16GB+ RAM):

ollama run qwen2.5-coder:14b

For high-end systems (32GB+ RAM):

ollama run qwen2.5-coder:32b
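Before moving on to Qt Creator, it can help to confirm that the Ollama server is running and the model was actually pulled. A minimal sketch in Python, querying Ollama's `/api/tags` endpoint on the default port (the function name is my own, not part of QodeAssist or Ollama):

```python
import json
import urllib.request
from urllib.error import URLError

OLLAMA_URL = "http://localhost:11434"  # Ollama's default endpoint

def list_ollama_models(base_url: str = OLLAMA_URL) -> list[str]:
    """Return the names of locally installed models via Ollama's
    /api/tags endpoint, or an empty list if the server is unreachable."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=5) as resp:
            data = json.load(resp)
        return [m["name"] for m in data.get("models", [])]
    except (URLError, OSError):
        return []

print(list_ollama_models())
```

If the list is empty while Ollama is running, the model pull likely did not finish; `ollama list` in a terminal should show the same information.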
  3. Open Qt Creator settings (Edit > Preferences on Linux/Windows, Qt Creator > Preferences on macOS)
  4. Navigate to the "QodeAssist" tab
  5. On the "General" page, verify:
    • Ollama is selected as your LLM provider
    • The URL is set to http://localhost:11434
    • Your installed model appears in the model selection
    • The prompt template is Ollama Auto FIM for code completion or Ollama Auto Chat for chat assistance. You can select a template manually if the automatic one does not work correctly
    • Tool use is disabled if your model does not support tools
  6. Click Apply if you made any changes

You're all set! QodeAssist is now ready to use in Qt Creator.

Example of Ollama settings (screenshot: Ollama Settings).