GPT Knowledge

Introduction

Answer your customer's questions using Large Language Models (LLMs). The LLM considers the sources you provide via Knowledge. Optionally, you can integrate an external Retrieval Augmented Generation (RAG) pipeline with Microsoft Azure AI Search (e.g. to connect SharePoint). You can also connect sources via the Model Context Protocol (MCP), add a web search, and include the content of the web page the user is currently on. In contrast to the GPT Smalltalk module, the GPT Knowledge module is thus grounded in your knowledge base.

Prerequisites

Add sources or catalogs to Knowledge

The GPT Knowledge module can use the sources and catalogs that you have stored under Knowledge. The content of these sources and catalogs is used by the GPT Knowledge module to answer customer questions.

Add GPT Knowledge to your Agent

If your agent does not have AI activated yet, head to the Knowledge tab within your agent. First, activate the text input field, then click to activate GPT. Go back to the process and scroll down to find the GPT Knowledge module that has been automatically added there.

Settings of the module

1. Select the context the LLM should use to answer the user's questions

Knowledge

These are the sources and catalogs stored under Knowledge. GPT utilizes the content of these sources and catalogs to respond to customer inquiries. The user's query is transformed into a vector representation, which is then compared to the vector representations of the sources and catalogs. The source or catalog with the highest similarity score is used to generate the response. If you want to restrict the sources GPT will use to answer the question, you can do so here. If you do not want to show any sources in the chat, you can also hide them here.
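
For intuition, here is a minimal sketch of this retrieval step in Python. The vectors are made-up stand-ins for the embeddings the platform computes internally; this illustrates the idea and is not a LoyJoy API.

import math

def cosine_similarity(a, b):
    # Similarity of two vectors: close to 1.0 = very similar, close to 0.0 = unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

# Hypothetical, pre-computed vectors for a user query and two sources.
query = [0.9, 0.1, 0.3]
sources = {
    "faq.pdf": [0.8, 0.2, 0.4],
    "terms.pdf": [0.1, 0.9, 0.2],
}

# The source with the highest similarity score is used to generate the response.
best = max(sources, key=lambda name: cosine_similarity(query, sources[name]))
print(best)  # faq.pdf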

MCP (Model Context Protocol)

Add up to three sources via MCP. The Model Context Protocol (MCP) allows for the provision of context information for AI models. Here you can configure the MCP servers. The MCP servers must support Streamable HTTP.
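
Under the hood, MCP messages are JSON-RPC 2.0 calls, and the Streamable HTTP transport delivers them over plain HTTP POST. The following is a minimal sketch of a single call; the endpoint URL is a placeholder, and a real client would first perform the MCP initialize handshake, which is omitted here for brevity.

import json
import urllib.request

request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}
req = urllib.request.Request(
    "https://example.com/mcp",  # placeholder MCP server endpoint
    data=json.dumps(request).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        # Streamable HTTP servers may answer with plain JSON or an SSE stream.
        "Accept": "application/json, text/event-stream",
    },
)
with urllib.request.urlopen(req) as response:
    print(response.read().decode("utf-8"))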

Current Web Page

To include the current web page as context, select this option. GPT will use the content of the web page the user is currently viewing to answer questions. This option is useful for deeply integrating the LoyJoy agent into your website.

Web Search

To search the web for answers, select this option. GPT will use web content to respond to customer inquiries, providing the most current information available online. You can use either the Brave search engine or Tavily for web searches. If you want to restrict the web search to specific domains, you can do so by entering the domains in the provided fields. This corresponds to the "site:" operator in search engines. For example, if you enter "loyjoy.com", the web search will only return results from the LoyJoy website. If you leave the fields empty, the web search is not restricted to specific domains.
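
Conceptually, the domain restriction works like prepending the "site:" operator to the search query; a rough sketch (the query wording is illustrative):

domains = ["loyjoy.com"]
question = "What does LoyJoy cost?"
query = " ".join(f"site:{d}" for d in domains) + " " + question
print(query)  # site:loyjoy.com What does LoyJoy cost?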

SharePoint

To integrate your own RAG pipeline with Microsoft SharePoint, select this option. GPT will use the content from your SharePoint sources to answer customer questions. The query is sent to Microsoft Azure AI Search, where a similarity search is performed on Microsoft servers. This option is ideal if you prefer to manage your data independently and avoid using the LoyJoy knowledge base. It allows LoyJoy to work with up-to-date documents stored in SharePoint. Learn how to use Microsoft SharePoint with LoyJoy here.
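
For reference, a similarity query against Azure AI Search looks roughly like this sketch using the official azure-search-documents Python SDK; the endpoint, index name, and key are placeholders for your own Azure resources.

from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

client = SearchClient(
    endpoint="https://<your-service>.search.windows.net",
    index_name="<your-sharepoint-index>",
    credential=AzureKeyCredential("<your-query-key>"),
)

# Retrieve the most relevant documents for the user's question.
results = client.search(search_text="How do I reset my password?", top=5)
for doc in results:
    print(doc)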

2. Preprocessing of sources

Number of Chunks

Specify the number of distinct chunks the LLM should utilize to answer the user's question. Increasing the number of chunks provides more context for the answer, but it also extends the response time and increases token consumption per answer, which may lead to higher costs.
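
As a back-of-the-envelope illustration of the trade-off (the chunk size of 500 tokens is an assumption for illustration only):

tokens_per_chunk = 500  # assumed average chunk size
for number_of_chunks in (3, 6, 12):
    context_tokens = number_of_chunks * tokens_per_chunk
    print(number_of_chunks, "chunks ->", context_tokens, "context tokens per answer")
# 3 chunks -> 1500 context tokens per answer
# 6 chunks -> 3000 context tokens per answer
# 12 chunks -> 6000 context tokens per answer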

Intelligent follow-up questions

This option allows GPT to rephrase follow-up questions from the user. It considers the chat history between the user and the chatbot to generate a standalone question. The rephrased formulation is then used to perform the similarity search in the RAG pipeline. This feature is useful if you want to provide a more natural conversation flow, allowing for more context-aware answers.
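
The rewriting step can be pictured as follows; the prompt wording is illustrative, and LoyJoy's actual internal prompt may differ.

history = [
    ("user", "What is the capital of France?"),
    ("assistant", "Paris."),
]
follow_up = "And of Germany?"

# Build a condense-question prompt from the chat history.
prompt = (
    "Given the chat history, rewrite the follow-up as a standalone question.\n"
    + "\n".join(f"{role}: {text}" for role, text in history)
    + f"\nFollow-up: {follow_up}"
)
# An LLM call on this prompt yields e.g. "What is the capital of Germany?",
# which is then used for the similarity search instead of "And of Germany?".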

Translate foreign language questions to

This feature allows requests to be automatically translated into the language of your sources. Example: If a question is asked in German and your sources are predominantly in English, the question is first translated into English. This leads to better results. (You can select a maximum of three source languages.)

Question splitter

This option allows GPT to decompose the user's question into multiple subquestions, which are then used individually to perform the similarity search in the RAG pipeline. This enhances the context search and allows GPT to answer complex questions. For example, if the user asks "What is the capital of France and Germany?", GPT will decompose the question into "What is the capital of France?" and "What is the capital of Germany?".

Enable reranking

Reranking is an optional improvement to Retrieval Augmented Generation (RAG) that significantly improves the quality and relevance of your chatbot's answers. Instead of passing the first retrieval hits straight to the LLM, up to 32 candidate sources per query are analyzed and scored for relevance, and only the most relevant ones are used to generate the answer. This reduces information gaps and results in more accurate and concise responses, better coverage of complex queries, and increased customer satisfaction.
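
A minimal sketch of the rerank step; the word-overlap score is a dummy stand-in for the trained relevance model that a real reranker uses.

import re

def score(question, chunk):
    # Dummy relevance score via word overlap; a real reranker uses a trained model.
    q = set(re.findall(r"\w+", question.lower()))
    c = set(re.findall(r"\w+", chunk.lower()))
    return len(q & c)

def rerank(question, chunks, keep=6):
    # Over-retrieve (up to 32 candidates), score each against the question,
    # and keep only the most relevant chunks for answer generation.
    return sorted(chunks, key=lambda c: score(question, c), reverse=True)[:keep]

chunks = ["Our refund policy lasts 30 days.", "Office location: Münster, Germany."]
print(rerank("How long is the refund policy?", chunks, keep=1))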

Block list

In this field, you can enter specific words. If a user's message contains one of these words, GPT will not produce a response. This option can be used to block certain topics or brand names.
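
The check itself can be pictured as a simple word match before any answer is generated; the entries below are examples only.

blocked = {"competitorbrand", "politics"}  # example block-list entries

def is_blocked(message):
    # Block the response if any blocked word appears in the user's message.
    return not blocked.isdisjoint(message.lower().split())

print(is_blocked("Tell me about politics"))  # True -> GPT produces no response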

3. GPT instructions

You can modify the prompt (user message), the system message, as well as the temperature GPT should use to answer your customer's questions. The prompt and system message can be adapted to your needs using prompting techniques.

The system message is used to set guidelines and context for the interaction. It helps define the assistant's role and tone. The user message or prompt conveys the user's query or request. It represents the information or question that the user wants the assistant to address. The temperature determines how creative the answer should be. The higher the temperature, the more creative the answer. All three settings are crucial to the quality of the answers.

When editing the prompt, you can, for example, enter a list of rules or guidelines that the assistant should follow. This can be helpful to ensure that the assistant provides the correct information. The prompt can also be used to set the context for the interaction. For example, you can specify the topic or the type of information that the assistant should provide.

Another powerful feature is providing the prompt with variables or functions. E.g. you can write ${firstname} into the prompt to address the customer by their first name, write ${locationHref()} to provide GPT with the current URL of the customer, write ${localDate()} to provide GPT with the current date, or write ${subscription_status} to provide an arbitrary variable that you have defined in the agent, such as the current subscription status of the customer.
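
The following sketch shows how the three settings fit together. The message structure follows the common chat-completion format; string.Template merely imitates LoyJoy's ${...} substitution, and since function-style variables such as ${locationHref()} are resolved by LoyJoy itself, a plain ${location} placeholder stands in for it here.

from string import Template

# Fill the prompt variables as LoyJoy would at runtime (example values).
prompt = Template(
    "Customer ${firstname} is on ${location} and asks: ${question}"
).substitute(
    firstname="Ada",
    location="https://www.loyjoy.com/pricing",
    question="What does the Pro plan cost?",
)

messages = [
    # System message: defines the assistant's role, tone, and guidelines.
    {"role": "system", "content": "You are a helpful support assistant. Answer only from the provided sources."},
    # User message: the filled-in prompt with the customer's question.
    {"role": "user", "content": prompt},
]
temperature = 0.2  # low = consistent and factual, high = more creative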

4. Answer

Show AI label on AI answers

Here you can choose whether AI answers are marked with an AI label. If you do not want to show the AI label, you can hide it here.

Show sources

By default, the sources GPT has used to answer the question are shown in the chat. If you do not want to show any sources in the chat, you can hide them here.

5. Offer jumps to agents

If an article in a catalog is used as a source to answer the user's question, you can offer the user a jump to the agent linked to that article. This can be useful if the user wants to learn more about a specific topic.

6. Fallback in case of error

There may be cases in which GPT is not able to provide an answer, for example when the GPT service is unavailable or when the user asks a question that is not covered by your sources or catalogs. In this case, you can specify a fallback message that will be sent to the user. Optionally, you can also specify an agent to which the user will be redirected.

7. Customer feedback to the answer

You can specify whether the answers can be rated by the user. If you leave this option activated, the user will be asked to rate the answer so that its quality can be assessed. In the Knowledge menu under Messages, you can evaluate the user feedback to improve your answers.