From 085659483fde668660dd2e1e7a4e96988abf64b6 Mon Sep 17 00:00:00 2001
From: Petr Mironychev <9195189+Palm1r@users.noreply.github.com>
Date: Tue, 11 Mar 2025 08:34:14 +0100
Subject: [PATCH] doc: Add info about Linux compatibility to README.md

---
 README.md | 13 +++++--------
 1 file changed, 5 insertions(+), 8 deletions(-)

diff --git a/README.md b/README.md
index 9055f93..aabe443 100644
--- a/README.md
+++ b/README.md
@@ -268,16 +268,13 @@ If QodeAssist is having problems connecting to the LLM provider, please check th
    - For Ollama, the default is usually http://localhost:11434
    - For LM Studio, the default is usually http://localhost:1234
 
-2. Check the endpoint:
+2. Confirm that the selected model and template are compatible:
 
-Make sure the endpoint in the settings matches the one required by your provider
-   - For Ollama, it should be /api/generate
-   - For LM Studio and OpenAI compatible providers, it's usually /v1/chat/completions
+   Ensure you've chosen the correct model in the "Select Models" option
+   Verify that the selected prompt template matches the model you're using
 
-3. Confirm that the selected model and template are compatible:
-
-Ensure you've chosen the correct model in the "Select Models" option
-Verify that the selected prompt template matches the model you're using
+3. On Linux, the prebuilt binaries support only Ubuntu 22.04+ or a similar OS.
+If you need compatibility with another OS, you have to build it manually. Our experiments and the resolution are documented here: https://github.com/Palm1r/QodeAssist/issues/48
 
 If you're still experiencing issues with QodeAssist, you can try resetting the settings to their default values:
 1. Open Qt Creator settings
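
As an aside to the URL check in step 1, here is a minimal, illustrative Python sketch for verifying that a local provider is reachable at the default URLs mentioned in the README. It is not part of the patch; the ports are only the usual defaults, and the probed paths (the Ollama root endpoint and an OpenAI-style /v1/models listing for LM Studio) are assumptions that may differ in your configuration:

```python
# Illustrative connectivity check for the default local provider URLs from the
# troubleshooting section. The ports and paths below are assumed defaults and
# may need adjusting to match your own setup.
import urllib.error
import urllib.request

ENDPOINTS = {
    "Ollama": "http://localhost:11434",
    "LM Studio": "http://localhost:1234/v1/models",
}

def check(name: str, url: str) -> None:
    # A short timeout keeps the check quick when the provider is not running.
    try:
        with urllib.request.urlopen(url, timeout=3) as resp:
            print(f"{name}: reachable at {url} (HTTP {resp.status})")
    except (urllib.error.URLError, OSError) as exc:
        print(f"{name}: not reachable at {url} ({exc})")

if __name__ == "__main__":
    for provider, endpoint in ENDPOINTS.items():
        check(provider, endpoint)
```

If a provider reports as not reachable here, fix the server or the URL in the QodeAssist settings before moving on to the model and template checks in step 2.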