BottomBar
    Send
    Stop
    Attach files
    Link files
    Sync open files
    Automatically synchronize currently opened files with the model context

ChatItem
    ResetTo
    CodeBlock
    Copy
    Copied

QodeAssist::Chat::ChatRootView
    Token Limit Exceeded
    The chat history has exceeded the token limit. Would you like to create a new chat?
    Save Chat History
    JSON files (*.json)
    Load Chat History
    Select Files to Attach

QodeAssist::Chat::NavigationPanel
    QodeAssist Chat

QodeAssist::EditorChatButton
    Open QodeAssist Chat

QodeAssist::UpdateDialog
    QodeAssist Update
    QodeAssist is an open-source project that helps developers write better code. If you find it useful, please <a href='https://ko-fi.com/qodeassist' style='color: #0066cc;'>Support on Ko-fi ☕</a>
    A new version of QodeAssist is available!
    Version %1 is now available - you have %2
    Release Notes:
    Open Release Page
    Open Plugin Folder
    Close
    QodeAssist is up to date
    You are using the latest version: %1
    No release notes available. Check the release page for more information.

QodeAssist::UpdateStatusWidget
    Update
    New version: v%1
    Check update information

QtC::QodeAssist
    Custom Prompt
    Prompt components:
        - model is set on the General page
        - {{QODE_INSTRUCTIONS}}: placeholder for specific instructions or context
        - {{QODE_PREFIX}}: will be replaced with the actual code before the cursor
        - {{QODE_SUFFIX}}: will be replaced with the actual code after the cursor
    Save Custom Template to JSON
    Load Custom Template from JSON
    Reset Page to Defaults
    Custom prompt for FIM model
    Reset Settings
    Are you sure you want to reset all settings to default values?
    Save JSON Template
    JSON Files (*.json)
    Save Successful
    JSON template has been saved successfully.
    Save Failed
    Failed to save JSON template.
    Load JSON Template
    Load Successful
    JSON template has been loaded successfully.
    Invalid JSON
    The selected file contains invalid JSON.
    Load Failed
    Failed to load JSON template.
    Enable QodeAssist
    General
    Check Update
    Select...
    Provider:
    Model:
    Template:
    URL:
    Status:
    Test
    Enable Logging
    Log messages are visible in the General Messages pane
    Check for updates when Qt Creator starts
    Enable Chat (if you experience performance issues, try disabling this; requires restarting Qt Creator)
    Endpoint Mode:
    Code Completion
    Chat Assistant
    Current template description:
    Connection Error
    Unable to retrieve the list of models from the server. Please verify the following:
        - the server is running and accessible
        - the URL is correct
        - the provider is properly configured
        - the API key is correctly set (if required)
    You can try selecting a different provider or changing the URL:
    Select Provider
    Select URL
    Close
    Model Selection
    Select from previously used models or enter a new model name. If entering a new model name:
        • for providers with automatic listing, ensure the model is installed
        • for providers without listing support, check the provider's documentation
        • make sure the model name matches exactly
    Model name:
    OK
    Cancel
    Enter Model Manually
    Configure API Key
    URL Selection
    Select from the list of default and previously used URLs, or enter a custom one. Please ensure the selected URL is accessible and the service is running.
    Use default provider URL or one from history
    Enter custom URL
    Enter Model Name Manually
    Auto Completion Settings
    Add new preset for language
    Enable Auto Complete
    Enable Multiline Completion
    Enable stream option
    Enable smart processing of text from the instruct model, with delay (ms)
    AI suggestion triggers after typing
    The number of characters that need to be typed within the typing interval before an AI suggestion request is sent.
    character(s) within (ms)
    The time window (in milliseconds) during which the character threshold must be met to trigger an AI suggestion request.
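As a rough illustration of how the two trigger settings above interact (the character threshold and the millisecond time window), the logic might look like the sketch below. The class and member names here are hypothetical and not taken from the QodeAssist sources; this is a minimal sketch, assuming a suggestion request should fire only when enough characters are typed inside one typing interval.

    // Hypothetical sketch: fire a suggestion request only after at least
    // `charThreshold` characters are typed within `timeWindowMs` milliseconds.
    #include <QElapsedTimer>

    class TypingTrigger
    {
    public:
        TypingTrigger(int charThreshold, int timeWindowMs)
            : m_charThreshold(charThreshold), m_timeWindowMs(timeWindowMs) {}

        // Call on every character typed; returns true when a request should be sent.
        bool onCharacterTyped()
        {
            if (!m_timer.isValid() || m_timer.elapsed() > m_timeWindowMs) {
                m_timer.restart();   // window expired: start a new typing interval
                m_count = 0;
            }
            ++m_count;
            if (m_count >= m_charThreshold) {
                m_count = 0;
                m_timer.invalidate();
                return true;         // threshold met within the window
            }
            return false;
        }

    private:
        int m_charThreshold;
        int m_timeWindowMs;
        int m_count = 0;
        QElapsedTimer m_timer;
    };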
    Temperature:
    Max Tokens:
    Top P:
    Top K:
    Presence Penalty:
    Frequency Penalty:
    Read Full File
    Read Strings Before Cursor:
    Read Strings After Cursor:
    Use System Prompt
    Use a special system prompt and user message for non-FIM models
    System prompt for non-FIM models:
    User message for non-FIM models:
    Additional programming languages for handling:
    Example: rust,//,rust rs,rs
    Specify additional programming languages in the format: name,comment_style,model_names,extensions
        Example: rust,//,rust rs,rs
        Fields: language name, comment prefix, names from the LLM (space-separated), file extensions (space-separated)
    Show progress indicator during code completion
    Include context from open files
    Max Changes Cache Size:
    Include context from open files in quick refactor
    Time to suspend Ollama after a completion request (in minutes); Ollama only, -1 to disable
    Context Window:
    Prompts for FIM models
    Prompts for non-FIM models
    General Parameters
    Advanced Parameters
    Context Settings
    Quick Refactor Settings
    Ollama Settings
    Chat history token limit:
    Maximum number of tokens in the chat history. When exceeded, the oldest messages will be removed.
    Sync open files with the assistant by default
    Enable autosave when a message is received
    Text Font:
    Text Font Size:
    Code Font:
    Code Font Size:
    Text Format:
    Chat Settings
    Chat History Path:
    QodeAssist Provider Settings
    OpenRouter API Key:
    Enter your API key here
    OpenAI Compatible API Key:
    Claude API Key:
    OpenAI API Key:
    Mistral AI API Key:
    Codestral API Key:
    Google AI API Key:
    Ollama BasicAuth API Key:
    OpenRouter Settings
    OpenAI Settings
    OpenAI Compatible Settings
    Claude Settings
    Mistral AI Settings
    Google AI Settings
    Generate a QodeAssist suggestion at the current cursor position.
    Request QodeAssist Suggestion
    Refactor code using QodeAssist
    Quick Refactor with QodeAssist
    QodeAssist Chat
    Select LLM Provider
    Providers:
    Select LLM Model
    Models:
    Select Template
    Templates:
    Quick Refactor
    Enter refactoring instructions:
    Type your refactoring instructions here...
    Repeat Last Instructions
    Improve Current Code
    Suggest Alternative Solution
    Improve the selected code by enhancing readability, efficiency, and maintainability. Follow best practices for C++/Qt and fix any potential issues.
    Suggest an alternative implementation approach for the selected code. Provide a different solution that might be cleaner, more efficient, or that uses different Qt/C++ patterns or idioms.

RootItem
    tokens:%1/%2
    Latest chat file name: %1
    Type your message here...

TopBar
    Save
    Load
    Clear
    Show in system
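The chat settings above describe a token limit on the chat history, with the oldest messages removed once it is exceeded. Below is a minimal sketch of what such trimming could look like, assuming a per-message token count is available; the type and function names are illustrative only, not the plugin's actual API.

    // Hypothetical sketch: drop the oldest messages once the running token count
    // exceeds the configured chat history limit.
    #include <QList>
    #include <QString>

    struct ChatMessage
    {
        QString text;
        int tokenCount = 0;
    };

    void trimHistory(QList<ChatMessage> &history, int tokenLimit)
    {
        int total = 0;
        for (const ChatMessage &msg : history)
            total += msg.tokenCount;

        // Remove from the front (oldest first) until the history fits the limit.
        while (total > tokenLimit && !history.isEmpty()) {
            total -= history.first().tokenCount;
            history.removeFirst();
        }
    }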