Configure for llama.cpp
- Open Qt Creator settings and navigate to the QodeAssist section
- Go to General tab and configure:
- Set "llama.cpp" as the provider for code completion or/and chat assistant
- Set the llama.cpp URL (e.g. http://localhost:8080)
- Fill in model name
- Choose template for model(e.g. llama.cpp FIM for any model with FIM support)
- Disable using tools if your model doesn't support tooling
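
If completions don't show up after configuring, it can help to confirm from outside Qt Creator that the server is reachable and that the loaded model actually supports FIM. The sketch below is a minimal check, assuming a llama.cpp `llama-server` instance on http://localhost:8080 exposing the standard `/health` and `/infill` endpoints; the URL, function names, and sample prompt are illustrative and should be adapted to your setup.

```python
# Minimal sketch: verify a local llama.cpp server before pointing QodeAssist at it.
# Assumption: the server runs at the same URL entered in the QodeAssist settings
# and exposes llama.cpp's /health and /infill endpoints.
import json
import urllib.request

BASE_URL = "http://localhost:8080"  # same URL as in the QodeAssist General tab


def check_health() -> bool:
    """Return True if the llama.cpp server answers its /health endpoint."""
    try:
        with urllib.request.urlopen(f"{BASE_URL}/health", timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False


def try_fim_completion() -> str:
    """Send a tiny fill-in-the-middle request to /infill.

    This only succeeds if the loaded model supports FIM, which is what the
    "llama.cpp FIM" template in QodeAssist relies on.
    """
    payload = {
        "input_prefix": "def add(a, b):\n    return ",
        "input_suffix": "\n",
        "n_predict": 16,
    }
    req = urllib.request.Request(
        f"{BASE_URL}/infill",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.loads(resp.read()).get("content", "")


if __name__ == "__main__":
    if check_health():
        print("Server is up; FIM sample:", try_fim_completion())
    else:
        print("Cannot reach llama.cpp at", BASE_URL)
```

If the health check fails, start (or restart) the server and re-check the URL and port before changing anything in the plugin settings; if `/infill` returns an error while `/health` succeeds, the model itself likely lacks FIM support, so pick a different template or model.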