Compare commits

...

16 Commits

SHA1 Message Date
40a568ebd9 Version 0.3.3
- Add streaming response to chat
- Add stopping chat request and button
2024-10-17 00:10:47 +02:00
5b43eb4fd2 Increase default max tokens for chat before deleting first messages 2024-10-17 00:05:31 +02:00
9c2516cd4c Add stopping chat requests and button 2024-10-17 00:03:12 +02:00
2257e6e45f Fix misbehaving provider and template settings 2024-10-17 00:02:14 +02:00
80eda8c167 Add stream text to chat 2024-10-16 22:51:34 +02:00
3db2691114 Upgrade version 2024-10-16 22:44:51 +02:00
bf518b4a01 Version 0.3.2
Add StarCoder2 instruct support
2024-10-16 10:50:19 +02:00
46829720d8 Add StarCoder2 instruct support 2024-10-16 10:45:48 +02:00
9158a3ac0d Add Llama support to README.md 2024-10-14 21:52:36 +02:00
d6e02d9d2a Version 0.3.1
Improve chat text input
Add Llama chat support
Fix monospace font
2024-10-14 21:41:04 +02:00
9c8cac4e3a Upgrade to version 0.3.1 2024-10-14 21:35:49 +02:00
965af4a945 Add Llama chat support 2024-10-14 21:35:17 +02:00
95f29fefc7 Fix monospace font 2024-10-14 21:25:18 +02:00
1dd50b6c83 Replace textinput with textfield 2024-10-14 21:18:48 +02:00
146e772514 Update README.md 2024-10-14 01:38:50 +02:00
4b851f1662 Update README.md for 0.3.0 2024-10-14 01:36:48 +02:00
19 changed files with 262 additions and 130 deletions

View File

@@ -42,6 +42,7 @@ add_qtc_plugin(QodeAssist
templates/DeepSeekCoderChat.hpp
templates/CodeLlamaChat.hpp
templates/QwenChat.hpp
templates/StarCoderChat.hpp
providers/OllamaProvider.hpp providers/OllamaProvider.cpp
providers/LMStudioProvider.hpp providers/LMStudioProvider.cpp
providers/OpenAICompatProvider.hpp providers/OpenAICompatProvider.cpp

View File

@@ -1,6 +1,6 @@
{
"Name" : "QodeAssist",
"Version" : "0.3.0",
"Version" : "0.3.3",
"CompatVersion" : "${IDE_VERSION_COMPAT}",
"Vendor" : "Petr Mironychev",
"Copyright" : "(C) ${IDE_COPYRIGHT_YEAR} Petr Mironychev, (C) ${IDE_COPYRIGHT_YEAR} The Qt Company Ltd",

README.md (126 changed lines)
View File

@@ -1,11 +1,59 @@
# QodeAssist
# QodeAssist - AI-powered coding assistant plugin for Qt Creator
[![Build plugin](https://github.com/Palm1r/QodeAssist/actions/workflows/build_cmake.yml/badge.svg?branch=main)](https://github.com/Palm1r/QodeAssist/actions/workflows/build_cmake.yml)
QodeAssist is an AI-powered coding assistant plugin for Qt Creator. It provides intelligent code completion and suggestions for C++ and QML, leveraging large language models through local providers like Ollama. Enhance your coding productivity with context-aware AI assistance directly in your Qt development environment.
Code completion:
<img src="https://github.com/user-attachments/assets/255a52f1-5cc0-4ca3-b05c-c4cf9cdbe25a" width="600" alt="QodeAssistPreview">
Chat with LLM models in side panels:
<img src="https://github.com/user-attachments/assets/ead5a5d9-b40a-4f17-af05-77fa2bcb3a61" width="600" alt="QodeAssistChat">
Chat with LLM models in bottom panel:
<img width="326" alt="QodeAssistBottomPanel" src="https://github.com/user-attachments/assets/4cc64c23-a294-4df8-9153-39ad6fdab34b">
## Plugin installation using Ollama as an example
1. Install the latest Qt Creator
2. Install [Ollama](https://ollama.com). Make sure to review the system requirements before installation.
3. Install language models in Ollama via the terminal. For example, you can run:
For suggestions:
```
ollama run codellama:7b-code
```
For chat:
```
ollama run codellama:7b-instruct
```
4. Download the QodeAssist plugin for your version of Qt Creator.
5. Launch Qt Creator and install the plugin:
- On macOS: go to Qt Creator -> About Plugins...
  On Windows/Linux: go to Help -> About Plugins...
- Click on "Install Plugin..."
- Select the downloaded QodeAssist plugin archive file
## Configure Plugin
<img src="https://github.com/user-attachments/assets/00ad980f-b470-48eb-9aaa-077783d38798" width="600" alt="Configuere QodeAssist">
1. Open Qt Creator settings
2. Navigate to the "Qode Assist" tab
3. Select "General" page
4. Choose your LLM provider (e.g., Ollama)
5. Select the installed model using the "Select Model" button
- For LM Studio, the currently loaded model is shown
6. Choose the prompt template that corresponds to your model
7. Apply the settings
You're all set! QodeAssist is now ready to use in Qt Creator.
[![ko-fi](https://ko-fi.com/img/githubbutton_sm.svg)](https://ko-fi.com/P5P412V96G)
## Supported LLM Providers
QodeAssist currently supports the following LLM (Large Language Model) providers:
- [Ollama](https://ollama.com)
@@ -14,19 +62,35 @@ QodeAssist currently supports the following LLM (Large Language Model) providers
## QtCreator Version Compatibility
- Since version 0.2.3: Compatible with QtCreator 14.0.2
- QtCreator 14.0.1 and below are not supported anymore (latest compatible version: 0.2.2)
- QtCreator 14.0.2: plugin versions 0.2.3 - 0.3.x
- QtCreator 14.0.1 and below: plugin versions 0.2.2 and below
## Supported Models
QodeAssist has been thoroughly tested and optimized for use with the following language models, all of which are specifically trained for Fill-in-the-Middle (FIM) tasks:
QodeAssist has been thoroughly tested and optimized for use with the following language models:
- Llama
- CodeLlama
- StarCoder2
- DeepSeek-Coder-V2-Lite-Base
- DeepSeek-Coder-V2
- Qwen-2.5
These models have demonstrated excellent performance in code completion and assistance tasks within the QodeAssist environment.
### Tested Models
#### Ollama:
- [starcoder2](https://ollama.com/library/starcoder2)
- [codellama](https://ollama.com/library/codellama)
#### LM Studio:
- [second-state/StarCoder2-7B-GGUF](https://huggingface.co/second-state/StarCoder2-7B-GGUF)
- [TheBloke/CodeLlama-7B-GGUF](https://huggingface.co/TheBloke/CodeLlama-7B-GGUF)
Please note that while these models have been specifically tested and confirmed to work well with QodeAssist, other models compatible with the supported providers may also work. We encourage users to experiment with different models and report their experiences.
If you've successfully used a model that's not listed here, please let us know by opening an issue or submitting a pull request to update this list.
### Custom Prompts
For advanced users or those with specific requirements, QodeAssist offers the flexibility to create, save, and load custom prompts using JSON templates. This feature allows you to tailor the AI's behavior to your exact needs.
@@ -44,65 +108,15 @@ For inspiration and examples of effective custom prompts, please refer to the `r
<img width="600" alt="Custom template" src="https://github.com/user-attachments/assets/4a14c552-baba-4531-ab4f-cb1f9ac6620b">
<img width="600" alt="Select custom template" src="https://github.com/user-attachments/assets/3651dafd-83f9-4df9-943f-69c28cd3d8a3">
### Tested Models
#### Ollama:
- [starcoder2](https://ollama.com/library/starcoder2)
- [codellama](https://ollama.com/library/codellama)
#### LM Studio:
- [second-state/StarCoder2-7B-GGUF](https://huggingface.co/second-state/StarCoder2-7B-GGUF)
- [TheBloke/CodeLlama-7B-GGUF](https://huggingface.co/TheBloke/CodeLlama-7B-GGUF)
Please note that while these models have been specifically tested and confirmed to work well with QodeAssist, other models compatible with the supported providers may also work. We encourage users to experiment with different models and report their experiences.
If you've successfully used a model that's not listed here, please let us know by opening an issue or submitting a pull request to update this list.
## Development Progress
- [x] Basic plugin with code autocomplete functionality
- [x] Improve and automate settings
- [ ] Add chat functionality
- [x] Add chat functionality
- [x] Sharing diff with model
- [ ] Sharing project source with model
- [ ] Support for more providers and models
## Plugin installation using Ollama as an example
1. Install QtCreator 14.0
2. Install [Ollama](https://ollama.com). Make sure to review the system requirements before installation.
3. Install language models in Ollama. For example, you can run:
For suggestions:
```
ollama run codellama:7b-code
```
For chat:
```
ollama run codellama:7b-instruct
```
4. Download the QodeAssist plugin.
5. Launch Qt Creator and install the plugin:
- Go to MacOS: Qt Creator -> About Plugins...
Windows\Linux: Help -> About Plugins...
- Click on "Install Plugin..."
- Select the downloaded QodeAssist plugin archive file
## Configure Plugin
<img src="https://github.com/user-attachments/assets/0743d09e-1f02-44ed-9a1a-85e2a0a0c01a" width="800" alt="QodeAssist в действии">
1. Open Qt Creator settings
2. Navigate to the "Qode Assist" tab
3. Select "General" page
4. Choose your LLM provider (e.g., Ollama)
5. Select the installed model by the "Select Model" button
- For LM Studio, the currently loaded model is shown
6. Choose the prompt template that corresponds to your model
7. Apply the settings
You're all set! QodeAssist is now ready to use in Qt Creator.
## Hotkeys
- To manually trigger a suggestion request, use the hotkey assigned in the settings (you can also change it there)

View File

@@ -68,6 +68,28 @@ QHash<int, QByteArray> ChatModel::roleNames() const
return roles;
}
void ChatModel::addMessage(const QString &content, ChatRole role, const QString &id)
{
int tokenCount = estimateTokenCount(content);
if (!m_messages.isEmpty() && !id.isEmpty() && m_messages.last().id == id) {
Message &lastMessage = m_messages.last();
int oldTokenCount = lastMessage.tokenCount;
lastMessage.content = content;
lastMessage.tokenCount = tokenCount;
m_totalTokens += (tokenCount - oldTokenCount);
emit dataChanged(index(m_messages.size() - 1), index(m_messages.size() - 1));
} else {
beginInsertRows(QModelIndex(), m_messages.size(), m_messages.size());
m_messages.append({role, content, tokenCount, id});
m_totalTokens += tokenCount;
endInsertRows();
}
trim();
emit totalTokensChanged();
}
QVector<ChatModel::Message> ChatModel::getChatHistory() const
{
return m_messages;
@@ -92,17 +114,6 @@ int ChatModel::estimateTokenCount(const QString &text) const
return text.length() / 4;
}
void ChatModel::addMessage(const QString &content, ChatRole role)
{
int tokenCount = estimateTokenCount(content);
beginInsertRows(QModelIndex(), m_messages.size(), m_messages.size());
m_messages.append({role, content, tokenCount});
m_totalTokens += tokenCount;
endInsertRows();
trim();
emit totalTokensChanged();
}
void ChatModel::clear()
{
beginResetModel();
@@ -176,4 +187,9 @@ int ChatModel::tokensThreshold() const
return settings.chatTokensThreshold();
}
QString ChatModel::lastMessageId() const
{
return !m_messages.isEmpty() ? m_messages.last().id : "";
}
} // namespace QodeAssist::Chat
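
The new addMessage() overload effectively turns the model into an upsert-by-id store: while a response streams in, every chunk arrives under the same request id, so the last message is rewritten in place and the token total is adjusted by the delta instead of being double-counted. A minimal standalone sketch of that logic, using toy types rather than the plugin's ChatModel:

```
#include <QString>
#include <QVector>
#include <QDebug>

// Toy stand-ins for ChatModel's new upsert-by-id behavior: calls with the
// same non-empty id replace the last message (a streaming update), while a
// new id appends a fresh message.
struct Message { QString id; QString content; int tokenCount; };

// Same rough heuristic as ChatModel::estimateTokenCount above.
static int estimateTokens(const QString &text) { return text.length() / 4; }

struct ToyChatModel {
    QVector<Message> messages;
    int totalTokens = 0;

    void addMessage(const QString &content, const QString &id)
    {
        const int tokens = estimateTokens(content);
        if (!messages.isEmpty() && !id.isEmpty() && messages.last().id == id) {
            totalTokens += tokens - messages.last().tokenCount; // adjust by delta, don't double count
            messages.last() = {id, content, tokens};
        } else {
            messages.append({id, content, tokens});
            totalTokens += tokens;
        }
    }
};

int main()
{
    ToyChatModel model;
    model.addMessage("Hel", "req-1");           // first streamed chunk: appended
    model.addMessage("Hello, wo", "req-1");     // accumulated text replaces it in place
    model.addMessage("Hello, world!", "req-1"); // final text, still one message
    qDebug() << model.messages.size() << model.totalTokens; // 1 3
    return 0;
}
```

The real implementation additionally emits dataChanged() for the updated row, re-runs trim() against the token threshold, and emits totalTokensChanged(), as the hunk above shows.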

View File

@@ -46,6 +46,7 @@ public:
ChatRole role;
QString content;
int tokenCount;
QString id;
};
explicit ChatModel(QObject *parent = nullptr);
@@ -54,7 +55,7 @@ public:
QVariant data(const QModelIndex &index, int role = Qt::DisplayRole) const override;
QHash<int, QByteArray> roleNames() const override;
Q_INVOKABLE void addMessage(const QString &content, ChatRole role);
Q_INVOKABLE void addMessage(const QString &content, ChatRole role, const QString &id);
Q_INVOKABLE void clear();
Q_INVOKABLE QList<MessagePart> processMessageContent(const QString &content) const;
@@ -65,6 +66,7 @@ public:
int tokensThreshold() const;
QString currentModel() const;
QString lastMessageId() const;
signals:
void totalTokensChanged();

View File

@@ -60,6 +60,11 @@ void ChatRootView::copyToClipboard(const QString &text)
QGuiApplication::clipboard()->setText(text);
}
void ChatRootView::cancelRequest()
{
m_clientInterface->cancelRequest();
}
void ChatRootView::generateColors()
{
QColor baseColor = backgroundColor();

View File

@@ -52,6 +52,7 @@ public:
public slots:
void sendMessage(const QString &message) const;
void copyToClipboard(const QString &text);
void cancelRequest();
signals:
void chatModelChanged();

View File

@@ -7,7 +7,6 @@ namespace QodeAssist::Chat {
void ChatUtils::copyToClipboard(const QString &text)
{
qDebug() << "call clipboard" << text;
QGuiApplication::clipboard()->setText(text);
}

View File

@@ -38,8 +38,8 @@ ClientInterface::ClientInterface(ChatModel *chatModel, QObject *parent)
connect(m_requestHandler,
&LLMCore::RequestHandler::completionReceived,
this,
[this](const QString &completion, const QJsonObject &, bool isComplete) {
handleLLMResponse(completion, isComplete);
[this](const QString &completion, const QJsonObject &request, bool isComplete) {
handleLLMResponse(completion, request, isComplete);
});
connect(m_requestHandler,
@@ -56,6 +56,8 @@ ClientInterface::~ClientInterface() = default;
void ClientInterface::sendMessage(const QString &message)
{
cancelRequest();
LOG_MESSAGE("Sending message: " + message);
LOG_MESSAGE("chatProvider " + Settings::generalSettings().chatLlmProviders.stringValue());
LOG_MESSAGE("chatTemplate " + Settings::generalSettings().chatPrompts.stringValue());
@@ -74,6 +76,9 @@ void ClientInterface::sendMessage(const QString &message)
providerRequest["stream"] = true;
providerRequest["messages"] = m_chatModel->prepareMessagesForRequest(context);
if (!chatTemplate || !chatProvider) {
LOG_MESSAGE("Check settings, provider or template are not set");
}
chatTemplate->prepareRequest(providerRequest, context);
chatProvider->prepareRequest(providerRequest, LLMCore::RequestType::Chat);
@@ -89,28 +94,31 @@ void ClientInterface::sendMessage(const QString &message)
QJsonObject request;
request["id"] = QUuid::createUuid().toString();
m_accumulatedResponse.clear();
m_chatModel->addMessage(message, ChatModel::ChatRole::User);
m_chatModel->addMessage(message, ChatModel::ChatRole::User, "");
m_requestHandler->sendLLMRequest(config, request);
}
void ClientInterface::clearMessages()
{
m_chatModel->clear();
m_accumulatedResponse.clear();
LOG_MESSAGE("Chat history cleared");
}
void ClientInterface::handleLLMResponse(const QString &response, bool isComplete)
void ClientInterface::cancelRequest()
{
m_accumulatedResponse += response;
auto id = m_chatModel->lastMessageId();
m_requestHandler->cancelRequest(id);
}
void ClientInterface::handleLLMResponse(const QString &response,
const QJsonObject &request,
bool isComplete)
{
QString messageId = request["id"].toString();
m_chatModel->addMessage(response.trimmed(), ChatModel::ChatRole::Assistant, messageId);
if (isComplete) {
LOG_MESSAGE("Message completed. Final response: " + m_accumulatedResponse);
emit messageReceived(m_accumulatedResponse.trimmed());
m_chatModel->addMessage(m_accumulatedResponse.trimmed(), ChatModel::ChatRole::Assistant);
m_accumulatedResponse.clear();
LOG_MESSAGE("Message completed. Final response for message " + messageId + ": " + response);
}
}
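
One id now threads through the whole streaming round trip: sendMessage() stamps the request with a fresh QUuid, handleLLMResponse() reads the id back out of every chunk and upserts the assistant message under it, and cancelRequest() can therefore use the model's lastMessageId() to target the in-flight request. A standalone sketch of that round trip (illustrative only, not plugin code):

```
#include <QJsonObject>
#include <QString>
#include <QUuid>
#include <QDebug>

int main()
{
    // sendMessage(): every request gets a unique id before it is sent.
    QJsonObject request;
    request["id"] = QUuid::createUuid().toString();

    // handleLLMResponse(): each streamed chunk carries the request back,
    // so the same id keys the assistant message in the chat model.
    const QString messageId = request["id"].toString();

    // cancelRequest(): the last message id equals the request id of the
    // stream that is still running, so it identifies what to cancel.
    const QString idToCancel = messageId; // stand-in for m_chatModel->lastMessageId()
    qDebug() << "cancel target:" << idToCancel;
    return 0;
}
```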

View File

@@ -38,16 +38,15 @@ public:
void sendMessage(const QString &message);
void clearMessages();
void cancelRequest();
signals:
void messageReceived(const QString &message);
void errorOccurred(const QString &error);
private:
void handleLLMResponse(const QString &response, bool isComplete);
void handleLLMResponse(const QString &response, const QJsonObject &request, bool isComplete);
LLMCore::RequestHandler *m_requestHandler;
QString m_accumulatedResponse;
ChatModel *m_chatModel;
};

View File

@@ -81,17 +81,16 @@ ChatRootView {
}
}
RowLayout {
Layout.fillWidth: true
spacing: 5
ScrollView {
id: view
QQC.TextField {
Layout.fillWidth: true
Layout.minimumHeight: 30
Layout.maximumHeight: root.height / 2
QQC.TextArea {
id: messageInput
Layout.fillWidth: true
Layout.minimumWidth: 60
Layout.minimumHeight: 30
rightInset: -(parent.width - sendButton.width - clearButton.width)
placeholderText: qsTr("Type your message here...")
placeholderTextColor: "#888"
color: root.primaryColor.hslLightness > 0.5 ? "black" : "white"
@@ -102,30 +101,40 @@ ChatRootView {
: Qt.darker(root.primaryColor, 1.5)
border.width: 1
}
onAccepted: sendButton.clicked()
Keys.onPressed: function(event) {
if ((event.key === Qt.Key_Return || event.key === Qt.Key_Enter) && !(event.modifiers & Qt.ShiftModifier)) {
sendChatMessage()
event.accepted = true;
}
}
}
}
RowLayout {
Layout.fillWidth: true
spacing: 5
Button {
id: sendButton
Layout.alignment: Qt.AlignVCenter
Layout.minimumHeight: 30
Layout.alignment: Qt.AlignBottom
text: qsTr("Send")
onClicked: {
if (messageInput.text.trim() !== "") {
root.sendMessage(messageInput.text);
messageInput.text = ""
scrollToBottom()
}
}
onClicked: sendChatMessage()
}
Button {
id: stopButton
Layout.alignment: Qt.AlignBottom
text: qsTr("Stop")
onClicked: root.cancelRequest()
}
Button {
id: clearButton
Layout.alignment: Qt.AlignVCenter
Layout.minimumHeight: 30
text: qsTr("Clear")
Layout.alignment: Qt.AlignBottom
text: qsTr("Clear Chat")
onClicked: clearChat()
}
}
@@ -158,4 +167,10 @@ ChatRootView {
function scrollToBottom() {
Qt.callLater(chatListView.positionViewAtEnd)
}
function sendChatMessage() {
root.sendMessage(messageInput.text);
messageInput.text = ""
scrollToBottom()
}
}

View File

@@ -28,6 +28,19 @@ Rectangle {
property string language: ""
property color selectionColor
readonly property string monospaceFont: {
switch (Qt.platform.os) {
case "windows":
return "Consolas";
case "osx":
return "Menlo";
case "linux":
return "DejaVu Sans Mono";
default:
return "monospace";
}
}
border.color: root.color.hslLightness > 0.5 ? Qt.darker(root.color, 1.3)
: Qt.lighter(root.color, 1.3)
border.width: 2
@@ -48,7 +61,8 @@ Rectangle {
text: root.code
readOnly: true
selectByMouse: true
font.family: "monospace"
font.family: monospaceFont
font.pointSize: 12
color: parent.color.hslLightness > 0.5 ? "black" : "white"
wrapMode: Text.WordWrap
selectionColor: root.selectionColor

View File

@@ -43,8 +43,8 @@ void PromptTemplateManager::setCurrentFimTemplate(const QString &name)
PromptTemplate *PromptTemplateManager::getCurrentFimTemplate()
{
if (m_currentFimTemplate == nullptr) {
LOG_MESSAGE("Current fim provider is null");
return nullptr;
LOG_MESSAGE("Current fim provider is null, return first");
return m_fimTemplates.first();
}
return m_currentFimTemplate;
@@ -63,8 +63,10 @@ void PromptTemplateManager::setCurrentChatTemplate(const QString &name)
PromptTemplate *PromptTemplateManager::getCurrentChatTemplate()
{
if (m_currentChatTemplate == nullptr)
LOG_MESSAGE("Current chat provider is null");
if (m_currentChatTemplate == nullptr) {
LOG_MESSAGE("Current chat provider is null, return first");
return m_chatTemplates.first();
}
return m_currentChatTemplate;
}

View File

@@ -56,8 +56,8 @@ Provider *ProvidersManager::setCurrentChatProvider(const QString &name)
Provider *ProvidersManager::getCurrentFimProvider()
{
if (m_currentFimProvider == nullptr) {
LOG_MESSAGE("Current fim provider is null");
return nullptr;
LOG_MESSAGE("Current fim provider is null, return first");
return m_providers.first();
}
return m_currentFimProvider;
@@ -66,8 +66,8 @@ Provider *ProvidersManager::getCurrentFimProvider()
Provider *ProvidersManager::getCurrentChatProvider()
{
if (m_currentChatProvider == nullptr) {
LOG_MESSAGE("Current chat provider is null");
return nullptr;
LOG_MESSAGE("Current chat provider is null, return first");
return m_providers.first();
}
return m_currentChatProvider;
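
Taken together with the matching PromptTemplateManager change above, this fallback is the substance of the provider-and-template settings fix in the commit list: callers such as ClientInterface::sendMessage() log when no provider or template is set but still dereference both, so returning the first registered entry instead of nullptr keeps an unconfigured setup from crashing.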

View File

@@ -80,22 +80,18 @@ void RequestHandler::handleLLMResponse(QNetworkReply *reply,
&& processSingleLineCompletion(reply, request, accumulatedResponse, config)) {
return;
}
if (isComplete) {
auto cleanedCompletion = removeStopWords(accumulatedResponse,
config.promptTemplate->stopWords());
emit completionReceived(cleanedCompletion, request, true);
}
} else if (config.requestType == RequestType::Chat) {
emit completionReceived(accumulatedResponse, request, isComplete);
}
if (isComplete || reply->isFinished()) {
if (isComplete) {
if (config.requestType == RequestType::Fim) {
auto cleanedCompletion = removeStopWords(accumulatedResponse,
config.promptTemplate->stopWords());
emit completionReceived(cleanedCompletion, request, true);
} else {
emit completionReceived(accumulatedResponse, request, true);
}
} else {
emit completionReceived(accumulatedResponse, request, false);
}
if (isComplete)
m_accumulatedResponses.remove(reply);
}
}
bool RequestHandler::cancelRequest(const QString &id)

View File

@@ -55,6 +55,7 @@
#include "templates/DeepSeekCoderFim.hpp"
#include "templates/QwenChat.hpp"
#include "templates/StarCoder2Fim.hpp"
#include "templates/StarCoderChat.hpp"
using namespace Utils;
using namespace Core;
@@ -94,6 +95,8 @@ public:
templateManager.registerTemplate<Templates::DeepSeekCoderChat>();
templateManager.registerTemplate<Templates::CodeLlamaChat>();
templateManager.registerTemplate<Templates::QwenChat>();
templateManager.registerTemplate<Templates::LlamaChat>();
templateManager.registerTemplate<Templates::StarCoderChat>();
Utils::Icon QCODEASSIST_ICON(
{{":/resources/images/qoderassist-icon.png", Utils::Theme::IconsBaseColor}});

View File

@@ -134,7 +134,7 @@ GeneralSettings::GeneralSettings()
chatTokensThreshold.setToolTip(Tr::tr("Maximum number of tokens in chat history. When "
"exceeded, oldest messages will be removed."));
chatTokensThreshold.setRange(1000, 16000);
chatTokensThreshold.setDefaultValue(4000);
chatTokensThreshold.setDefaultValue(8000);
loadProviders();
loadPrompts();
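
For scale: with the four-characters-per-token estimate in ChatModel::estimateTokenCount() shown earlier in this diff, the new 8000-token default lets roughly 32,000 characters of chat history accumulate before the oldest messages start being trimmed.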

View File

@@ -46,4 +46,10 @@ public:
}
};
class LlamaChat : public CodeLlamaChat
{
public:
QString name() const override { return "Llama Chat"; }
};
} // namespace QodeAssist::Templates

View File

@@ -0,0 +1,51 @@
/*
* Copyright (C) 2024 Petr Mironychev
*
* This file is part of QodeAssist.
*
* QodeAssist is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* QodeAssist is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with QodeAssist. If not, see <https://www.gnu.org/licenses/>.
*/
#pragma once
#include <QJsonArray>
#include "llmcore/PromptTemplate.hpp"
namespace QodeAssist::Templates {
class StarCoderChat : public LLMCore::PromptTemplate
{
public:
QString name() const override { return "StarCoder Chat"; }
LLMCore::TemplateType type() const override { return LLMCore::TemplateType::Chat; }
QString promptTemplate() const override { return "### Instruction:\n%1\n### Response:\n"; }
QStringList stopWords() const override
{
return QStringList() << "###"
<< "<|endoftext|>" << "<file_sep>";
}
void prepareRequest(QJsonObject &request, const LLMCore::ContextData &context) const override
{
QString formattedPrompt = promptTemplate().arg(context.prefix);
QJsonArray messages = request["messages"].toArray();
QJsonObject newMessage;
newMessage["role"] = "user";
newMessage["content"] = formattedPrompt;
messages.append(newMessage);
request["messages"] = messages;
}
};
} // namespace QodeAssist::Templates
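
To make the template's effect concrete, here is a standalone sketch (plain Qt, not plugin code) that builds the same request shape prepareRequest() produces when given an empty request and a context whose prefix holds the user's message:

```
#include <QJsonArray>
#include <QJsonDocument>
#include <QJsonObject>
#include <QString>
#include <QDebug>

int main()
{
    // The instruction framing from StarCoderChat::promptTemplate().
    const QString formattedPrompt
        = QStringLiteral("### Instruction:\n%1\n### Response:\n").arg("Write a QML button");

    // prepareRequest() appends one user message to the request's array.
    QJsonObject request;
    QJsonArray messages = request["messages"].toArray(); // empty on a fresh request
    QJsonObject newMessage;
    newMessage["role"] = "user";
    newMessage["content"] = formattedPrompt;
    messages.append(newMessage);
    request["messages"] = messages;

    qDebug().noquote() << QJsonDocument(request).toJson(QJsonDocument::Indented);
    return 0;
}
```

The stop words above ("###", "<|endoftext|>", "<file_sep>") then cut generation off before the model can run on into a new "### Instruction:" block.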