doc: Move out docs from main README

Petr Mironychev
2025-11-10 21:16:58 +01:00
parent a1ff17eef0
commit 3be70556ec
12 changed files with 424 additions and 294 deletions


@@ -0,0 +1,16 @@
# Configure for Anthropic Claude
1. Open Qt Creator settings and navigate to the QodeAssist section
2. Go to the Provider Settings tab and configure your Claude API key (a command-line check of the key is sketched below)
3. Return to the General tab and configure:
   - Set "Claude" as the provider for code completion and/or chat assistant
   - Set the Claude URL (https://api.anthropic.com)
   - Select your preferred model (e.g., claude-3-5-sonnet-20241022)
   - Choose the Claude template for code completion and/or chat
<details>
<summary>Example of Claude settings: (click to expand)</summary>
<img width="823" alt="Claude Settings" src="https://github.com/user-attachments/assets/828e09ea-e271-4a7a-8271-d3d5dd5c13fd" />
</details>
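
If completion or chat stays silent, it can help to confirm the key and URL work outside Qt Creator first. A minimal sketch using the public Anthropic Messages API (`ANTHROPIC_API_KEY` is just a placeholder for your key; the model name matches the example above):

```bash
# Send a tiny test message; a JSON reply means the key and URL are valid
curl https://api.anthropic.com/v1/messages \
  -H "x-api-key: $ANTHROPIC_API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -H "content-type: application/json" \
  -d '{
        "model": "claude-3-5-sonnet-20241022",
        "max_tokens": 32,
        "messages": [{"role": "user", "content": "Hello"}]
      }'
```

An authentication error here points at the key; a valid reply points the investigation back at the plugin settings.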

docs/file-context.md Normal file

@@ -0,0 +1,33 @@
# File Context Feature
QodeAssist provides two powerful ways to include source code files in your chat conversations: Attachments and Linked Files. Each serves a distinct purpose and helps provide better context for the AI assistant.
## Attached Files
Attachments are designed for one-time code analysis and specific queries:
- Files are included only in the current message
- Content is discarded after the message is processed
- Ideal for:
  - Getting specific feedback on code changes
  - Code review requests
  - Analyzing isolated code segments
  - Quick implementation questions
- Files can be attached using the paperclip icon in the chat interface
- Multiple files can be attached to a single message
## Linked Files
Linked files provide persistent context throughout the conversation:
- Files remain accessible for the entire chat session
- Content is included in every message exchange
- Files are automatically refreshed, so the latest content from disk is always used
- Perfect for:
  - Long-term refactoring discussions
  - Complex architectural changes
  - Multi-file implementations
  - Maintaining context across related questions
- Can be managed using the link icon in the chat interface
- Supports automatic syncing with open editor files (can be enabled in settings)
- Files can be added/removed at any time during the conversation


@@ -0,0 +1,16 @@
# Configure for Google AI
1. Open Qt Creator settings and navigate to the QodeAssist section
2. Go to the Provider Settings tab and configure your Google AI API key (a command-line check of the key is sketched below)
3. Return to the General tab and configure:
   - Set "Google AI" as the provider for code completion and/or chat assistant
   - Set the Google AI URL (https://generativelanguage.googleapis.com/v1beta)
   - Select your preferred model (e.g., gemini-2.0-flash)
   - Choose the Google AI template
<details>
<summary>Example of Google AI settings: (click to expand)</summary>
<img width="829" alt="Google AI Settings" src="https://github.com/user-attachments/assets/046ede65-a94d-496c-bc6c-41f3750be12a" />
</details>
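
To rule out key or URL problems before touching the plugin settings, you can list the models the API exposes. A minimal sketch against the public Gemini REST endpoint (`GOOGLE_API_KEY` is just a placeholder for your key):

```bash
# Lists available models; an error response usually means the key is wrong
curl "https://generativelanguage.googleapis.com/v1beta/models?key=$GOOGLE_API_KEY"
```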

docs/ignoring-files.md Normal file

@@ -0,0 +1,29 @@
# Ignoring Files
QodeAssist supports the ability to ignore files in context using a `.qodeassistignore` file. This allows you to exclude specific files from the context during code completion and in the chat assistant, which is especially useful for large projects.
## How to Use .qodeassistignore
- Create a `.qodeassistignore` file in the root directory of your project, next to your CMakeLists.txt or .pro file.
- Add patterns for files and directories that should be excluded from the context.
- QodeAssist will automatically detect this file and apply the exclusion rules.
## .qodeassistignore File Format
The file format is similar to `.gitignore`:
- Each pattern is written on a separate line
- Empty lines are ignored
- Lines starting with `#` are considered comments
- Standard wildcards work the same as in .gitignore
- To negate a pattern, use `!` at the beginning of the line (see the second example below)
## Example
```
# Ignore all files in the build directory
/build
*.tmp
# Ignore a specific file
src/generated/autogen.cpp
```
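
The negation syntax can be combined with directory patterns to keep a single file in context while excluding the rest; the paths below are purely illustrative:

```
# Skip generated sources, but keep one hand-maintained file
src/generated/
!src/generated/custom_bindings.cpp
```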


@@ -0,0 +1,16 @@
# Configure for llama.cpp
1. Open Qt Creator settings and navigate to the QodeAssist section
2. Go to the General tab and configure:
   - Set "llama.cpp" as the provider for code completion and/or chat assistant
   - Set the llama.cpp URL (e.g., http://localhost:8080; see the server sketch below)
   - Fill in the model name
   - Choose a template for the model (e.g., llama.cpp FIM for any model with FIM support)
   - Disable tool use if your model doesn't support tooling
<details>
<summary>Example of llama.cpp settings: (click to expand)</summary>
<img width="829" alt="llama.cpp Settings" src="https://github.com/user-attachments/assets/8c75602c-60f3-49ed-a7a9-d3c972061ea2" />
</details>
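
QodeAssist only talks to an already-running llama.cpp server, so make sure one is listening on the URL you configured. A rough sketch (the binary name, flags, and model file depend on your llama.cpp build; `qwen2.5-coder-7b-q4_k_m.gguf` is just a placeholder):

```bash
# Start the llama.cpp HTTP server on the port configured above
llama-server -m ./qwen2.5-coder-7b-q4_k_m.gguf --port 8080

# In another terminal, confirm it responds before pointing QodeAssist at it
curl http://localhost:8080/health
```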


@@ -0,0 +1,16 @@
# Configure for Mistral AI
1. Open Qt Creator settings and navigate to the QodeAssist section
2. Go to the Provider Settings tab and configure your Mistral AI API key (a command-line check of the key is sketched below)
3. Return to the General tab and configure:
   - Set "Mistral AI" as the provider for code completion and/or chat assistant
   - Set the Mistral AI URL (https://api.mistral.ai)
   - Select your preferred model (e.g., mistral-large-latest)
   - Choose the Mistral AI template for code completion and/or chat
<details>
<summary>Example of Mistral AI settings: (click to expand)</summary>
<img width="829" alt="Mistral AI Settings" src="https://github.com/user-attachments/assets/1c5ed13b-a29b-43f7-b33f-2e05fdea540c" />
</details>
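
As with the other cloud providers, a quick way to confirm the key and URL is to list models from the command line. A minimal sketch (`MISTRAL_API_KEY` is just a placeholder for your key):

```bash
# Lists the models your key can access; a 401 response means the key is wrong
curl https://api.mistral.ai/v1/models \
  -H "Authorization: Bearer $MISTRAL_API_KEY"
```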


@@ -0,0 +1,36 @@
# Configure for Ollama
1. Install [Ollama](https://ollama.com). Make sure to review the system requirements before installation.
2. Install a language model in Ollama via the terminal. For example, you can run one of the following:
For standard computers (minimum 8GB RAM):
```bash
ollama run qwen2.5-coder:7b
```
For better performance (16GB+ RAM):
```bash
ollama run qwen2.5-coder:14b
```
For high-end systems (32GB+ RAM):
```bash
ollama run qwen2.5-coder:32b
```
3. Open Qt Creator settings (Edit > Preferences on Linux/Windows, Qt Creator > Preferences on macOS)
4. Navigate to the "QodeAssist" tab
5. On the "General" page, verify:
   - Ollama is selected as your LLM provider
   - The URL is set to http://localhost:11434
   - Your installed model appears in the model selection (a quick check from the terminal is sketched below)
   - The prompt template is Ollama Auto FIM for code completion or Ollama Auto Chat for chat assistance; you can select a specific template if the automatic one does not work correctly
   - Disable tool use if your model doesn't support tooling
6. Click Apply if you made any changes
You're all set! QodeAssist is now ready to use in Qt Creator.
<details>
<summary>Example of Ollama settings: (click to expand)</summary>
<img width="824" alt="Ollama Settings" src="https://github.com/user-attachments/assets/ed64e03a-a923-467a-aa44-4f790e315b53" />
</details>
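
If the model does not show up in the model selection, you can check Ollama directly from the terminal before changing any plugin settings:

```bash
# Models installed locally; the one you pulled above should be listed
ollama list

# The HTTP endpoint QodeAssist talks to; should return a JSON list of models
curl http://localhost:11434/api/tags
```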


@@ -0,0 +1,16 @@
# Configure for OpenAI
1. Open Qt Creator settings and navigate to the QodeAssist section
2. Go to the Provider Settings tab and configure your OpenAI API key (a command-line check of the key is sketched below)
3. Return to the General tab and configure:
   - Set "OpenAI" as the provider for code completion and/or chat assistant
   - Set the OpenAI URL (https://api.openai.com)
   - Select your preferred model (e.g., gpt-4o)
   - Choose the OpenAI template for code completion and/or chat
<details>
<summary>Example of OpenAI settings: (click to expand)</summary>
<img width="829" alt="OpenAI Settings" src="https://github.com/user-attachments/assets/4716f790-6159-44d0-a8f4-565ccb6eb713" />
</details>
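
To verify the key and URL outside Qt Creator, you can list the models your key can access. A minimal sketch (`OPENAI_API_KEY` is just a placeholder for your key):

```bash
# A JSON model list confirms the key works; a 401 response means it does not
curl https://api.openai.com/v1/models \
  -H "Authorization: Bearer $OPENAI_API_KEY"
```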

docs/project-rules.md Normal file

@@ -0,0 +1,35 @@
# Project Rules Configuration
QodeAssist supports project-specific rules to customize AI behavior for your codebase. Create a `.qodeassist/rules/` directory in your project root.
## Quick Start
```bash
mkdir -p .qodeassist/rules/{common,completion,chat,quickrefactor}
```
## Directory Structure
```
.qodeassist/
└── rules/
├── common/ # Applied to all contexts
├── completion/ # Code completion only
├── chat/ # Chat assistant only
└── quickrefactor/ # Quick refactor only
```
All `.md` files in each directory are automatically loaded and added to the system prompt.
## Example
Create `.qodeassist/rules/common/general.md`:
```markdown
# Project Guidelines
- Use snake_case for private members
- Prefix interfaces with 'I'
- Always document public APIs
- Prefer Qt containers over STL
```
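
Rules in the other directories work the same way, only scoped to a single feature. A purely illustrative `.qodeassist/rules/completion/style.md` (the file name and rules are examples, not defaults):

```markdown
# Completion Rules
- Return only code, no explanations
- Follow the indentation and brace style of the surrounding code
```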

docs/quick-refactoring.md Normal file

@@ -0,0 +1,18 @@
# Quick Refactoring Feature
## Setup
Quick refactoring is essentially a small chat with redirected output, so the provider, model, and template are taken from the chat settings.
## Usage
The request to the model consists of the instructions, the selected code, and the cursor position.
The default instruction is "Refactor the code to improve its quality and maintainability."; it is sent when the text field is empty.
There are also buttons for quickly invoking instructions:
- Repeat the latest instruction (activated after the first request is sent in the Qt Creator session)
- Improve the currently selected code
- Suggest an alternative variant of the selected code
- Other instructions [TBD]

docs/troubleshooting.md Normal file

@@ -0,0 +1,70 @@
# Troubleshooting
## Connection Issues
### 1. Verify provider URL and port
Make sure you're using the correct default URLs (a quick reachability check is sketched after this list):
- **Ollama**: `http://localhost:11434`
- **LM Studio**: `http://localhost:1234`
- **llama.cpp**: `http://localhost:8080`
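
For the local providers, a quick curl against the default endpoints shows whether anything is listening at all. A sketch assuming each server runs with its default endpoints:

```bash
# Ollama: should return a JSON list of installed models
curl http://localhost:11434/api/tags

# LM Studio: OpenAI-compatible server, lists the loaded models
curl http://localhost:1234/v1/models

# llama.cpp (llama-server): simple health probe
curl http://localhost:8080/health
```

If these fail, the provider is not running or is bound to a different port; if they succeed, the problem is more likely in the plugin configuration.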
### 2. Check model and template compatibility
- Ensure the correct model is selected in settings
- Verify that the selected prompt template matches your model
- Some models may not support certain features (e.g., tool calling)
### 3. Linux compatibility
- Prebuilt binaries support Ubuntu 22.04+
- For other distributions, you may need to [build manually](../README.md#how-to-build)
- See [issue #48](https://github.com/Palm1r/QodeAssist/issues/48) for known Linux compatibility issues and solutions
## Reset Settings
If issues persist, you can reset settings to their default values:
1. Open Qt Creator → Settings → QodeAssist
2. Select the settings page you want to reset
3. Click "Reset Page to Defaults" button
**Note:**
- API keys are preserved during reset
- You will need to re-select your model after reset
## Common Issues
### Plugin doesn't appear after installation
1. Restart Qt Creator completely
2. Check that the plugin is enabled in the About Plugins dialog
3. Verify you downloaded the correct build for your Qt Creator version
### No suggestions appearing
1. Check that code completion is enabled in QodeAssist settings
2. Verify your provider/model is running and accessible
3. Check Qt Creator's Application Output pane for error messages
4. Try manual suggestion hotkey (⌥⌘Q on macOS, Ctrl+Alt+Q on Windows/Linux)
### Chat not responding
1. Verify your API key is configured correctly (for cloud providers)
2. Check internet connection (for cloud providers)
3. Ensure the provider service is running (for local providers)
4. Look for error messages in the chat panel
## Getting Help
If you continue to experience issues:
1. Check existing [GitHub Issues](https://github.com/Palm1r/QodeAssist/issues)
2. Join our [Discord Community](https://discord.gg/BGMkUsXUgf) for support
3. Open a new issue with:
- Qt Creator version
- QodeAssist version
- Operating system
- Provider and model being used
- Steps to reproduce the problem