
GPT Knowledge

Introduction

Answer your customers’ questions using GPT. GPT considers the sources and catalogs that you have stored under Knowledge. Optionally, you can integrate your own RAG pipeline with Microsoft SharePoint, use web search, or use the context of the web page the user is currently on. In contrast to the GPT Smalltalk module, the GPT Knowledge module is grounded in your knowledge base.

How to Use the Module

Add sources or catalogs to Knowledge

The GPT Knowledge module uses the sources and catalogs that you have stored under Knowledge. If you have not yet created any sources or catalogs, you can do so now. The content of your added sources and catalogs is then used by the GPT Knowledge module to answer your customers’ questions.

Add Message Start Event to your Experience

To handle incoming user messages, a message event handler is mandatory in your Experience. Simply add the message start event to your Experience via drag & drop. It is recommended to add the event directly at the end of your Experience.

(Screenshot: message start event)

Add GPT Knowledge to your Experience

Next, add your GPT Knowledge module to your Experience via drag & drop right after the message start event.

(Screenshot: flow with the GPT Knowledge module after the message start event)

Settings

1. Select the context the LLM should use to answer the user's questions

1a. Knowledge

These are the sources and catalogs stored under Knowledge. GPT utilizes the content of these sources and catalogs to respond to customer inquiries. The user's query is transformed into a vector representation, which is then compared to the vector representations of the sources and catalogs. The source or catalog with the highest similarity score is used to generate the response. If you want to restrict the sources GPT may use to answer questions, you can do so here. You can also hide the sources from the chat here.
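Conceptually, this retrieval step looks like the following sketch. The `embed` function stands in for whatever embedding model produces the vector representations; it is an illustrative placeholder, not LoyJoy's actual implementation.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity score between two vector representations."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve(query: str, chunks: list[str], embed, top_k: int = 4) -> list[str]:
    """Return the top_k chunks most similar to the query.

    `embed` is a placeholder for any embedding model that turns
    text into a vector representation.
    """
    query_vec = embed(query)
    scored = [(cosine_similarity(query_vec, embed(chunk)), chunk) for chunk in chunks]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [chunk for _, chunk in scored[:top_k]]
```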

1b. SharePoint

To integrate your own RAG pipeline with Microsoft SharePoint, select this option. GPT will use the content from your SharePoint sources to answer customer questions. The query is sent to Microsoft Azure AI Search, where a similarity search is performed on Microsoft servers. This option is ideal if you prefer to manage your data independently and avoid using the LoyJoy knowledge base. It allows LoyJoy to work with up-to-date documents stored in SharePoint. Learn how to use Microsoft SharePoint with LoyJoy here.
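As a rough illustration, a query against such a SharePoint-backed index could look like the sketch below, using Microsoft's azure-search-documents Python SDK. The endpoint, index name, key, and the "content" field are placeholders for your own setup, not values LoyJoy provides.

```python
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

# Placeholder endpoint, index, and key -- use your own SharePoint-backed index.
client = SearchClient(
    endpoint="https://<your-service>.search.windows.net",
    index_name="sharepoint-index",
    credential=AzureKeyCredential("<api-key>"),
)

# The similarity search itself runs on Microsoft servers; only the query is sent.
results = client.search(search_text="How do I reset my password?", top=4)
for doc in results:
    print(doc["content"])  # assumes the index exposes a "content" field
```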

1c. Web Search

To search the web for answers, select this option. GPT will use web content to respond to customer inquiries, providing the most current information available online. You can use either the Brave search engine or Tavily for web searches.
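For illustration, a Tavily query could look like the following sketch using the tavily-python client. The API key and query are placeholders, and the result fields reflect Tavily's documented response format; treat the details as an assumption rather than LoyJoy's internal integration.

```python
from tavily import TavilyClient  # pip install tavily-python

client = TavilyClient(api_key="<api-key>")  # placeholder key
response = client.search("latest product release notes")
for hit in response["results"]:
    print(hit["title"], hit["url"])  # each result also carries a content snippet
```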

1d. Current Web Page

To include the current web page as context, select this option. GPT will use the content of the web page the user is currently viewing to answer questions. This option is useful for deeply integrating the LoyJoy experience into the page it is embedded on.

2. Preprocessing of sources

Block list

In this field, you can enter specific words. If a user message contains one of these words, GPT will not produce a response. This option can be used to block certain topics or brand names.
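Conceptually, the check behaves like this minimal sketch; the words in the block list are examples, and this is not LoyJoy's internal logic:

```python
BLOCK_LIST = {"competitorbrand", "forbidden topic"}

def is_blocked(user_message: str) -> bool:
    """Return True if the message contains any blocked word or phrase."""
    text = user_message.lower()
    return any(blocked in text for blocked in BLOCK_LIST)

if is_blocked("Tell me about CompetitorBrand"):
    print("No GPT response is generated.")
```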

Number of Chunks

Specify the number of distinct chunks the LLM should utilize to answer the user's question. Increasing the number of chunks provides more context for the answer, but it also extends the response time and increases token consumption per answer, which may lead to higher costs.
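As a back-of-the-envelope illustration of this tradeoff, assume an average chunk length and a fixed prompt overhead; both figures below are assumptions, not LoyJoy defaults:

```python
# Assumed values for illustration only.
TOKENS_PER_CHUNK = 500   # average chunk length in tokens
PROMPT_OVERHEAD = 300    # system message, instructions, user question

for chunks in (2, 4, 8):
    total = PROMPT_OVERHEAD + chunks * TOKENS_PER_CHUNK
    print(f"{chunks} chunks -> ~{total} input tokens per answer")
```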

Enable reranking

Reranking is an optional improvement to Retrieval Augmented Generation (RAG) that significantly improves the quality and relevance of your chatbot's answers. It analyzes up to 32 sources per query and uses intelligent relevance scoring to prioritize the most important information, resulting in more accurate and concise responses. This broadens the retrieved context, reduces information gaps, and leads to better coverage of complex queries and increased customer satisfaction.
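Conceptually, reranking is a second scoring pass over the retrieved candidates. The sketch below uses a trivial word-overlap stand-in for the relevance model; the two-stage structure, not the scoring function, is the point.

```python
def rerank_score(query: str, chunk: str) -> float:
    """Stand-in relevance model; a real reranker would use e.g. a cross-encoder."""
    overlap = set(query.lower().split()) & set(chunk.lower().split())
    return float(len(overlap))

def rerank(query: str, candidates: list[str], top_k: int = 4) -> list[str]:
    """Score up to 32 retrieved candidates and keep the most relevant ones."""
    scored = [(rerank_score(query, c), c) for c in candidates[:32]]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [c for _, c in scored[:top_k]]
```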

Rephrase follow-up questions

This option allows GPT to rephrase a follow-up question from the user. It considers the chat history between the user and the chatbot to generate a standalone question, which is then used for the similarity search in the RAG pipeline. This feature is useful if you want to provide a more natural conversation flow with more context-aware answers.
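Conceptually, the rephrasing is an extra LLM call that turns the chat history plus the follow-up into one standalone question. The prompt wording and the `call_llm` helper below are illustrative assumptions, not LoyJoy's actual prompt:

```python
REPHRASE_PROMPT = """Given the chat history and a follow-up question,
rewrite the follow-up as a standalone question.

Chat history:
{history}

Follow-up question: {question}
Standalone question:"""

def make_standalone(history: list[str], question: str, call_llm) -> str:
    """`call_llm` is a placeholder for any chat-completion call."""
    prompt = REPHRASE_PROMPT.format(history="\n".join(history), question=question)
    return call_llm(prompt)

# E.g. after "User: What is the GPT Knowledge module?" the follow-up
# "How do I configure it?" becomes "How do I configure the GPT Knowledge
# module?" -- a question that works on its own in the similarity search.
```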

Decompose user question

This option allows GPT to decompose the user's question into multiple subquestions, which are then used individually for similarity search in the RAG pipeline. This enhances the context retrieval and allows GPT to answer complex questions.

For example, if the user asks "What is the capital of France and Germany?", GPT will decompose the question into "What is the capital of France?" and "What is the capital of Germany?".
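A sketch of how decomposition could feed the retrieval step, again using an illustrative `call_llm` helper:

```python
def decompose(question: str, call_llm) -> list[str]:
    """Ask the LLM to split a compound question into subquestions."""
    prompt = (
        "Split the following question into independent subquestions, "
        f"one per line:\n{question}"
    )
    return [line.strip() for line in call_llm(prompt).splitlines() if line.strip()]

# Each subquestion is searched separately, and the combined chunks give
# GPT the context to answer the compound question:
# "What is the capital of France and Germany?" ->
#   "What is the capital of France?" / "What is the capital of Germany?"
```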

3. GPT instructions

You can modify the prompt (user message), the system message, and the temperature GPT should use to answer your customers’ questions. The prompt and system message can be adapted to your needs with prompting techniques.

The system message is used to set guidelines and context for the interaction. It helps define the assistant's role and tone. The user message or prompt conveys the user's query or request. It represents the information or question that the user wants the assistant to address. The temperature determines how creative the answer should be. The higher the temperature, the more creative the answer. All three settings are crucial to the quality of the answers.
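The effect of temperature can be illustrated with the temperature-scaled softmax commonly used when sampling tokens; this is a general LLM mechanism, not a LoyJoy-specific detail:

```python
import math

def softmax_with_temperature(logits: list[float], temperature: float) -> list[float]:
    """Higher temperature flattens the distribution, making unlikely tokens
    more probable (more creative); lower temperature sharpens it."""
    scaled = [logit / temperature for logit in logits]
    peak = max(scaled)
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

print(softmax_with_temperature([2.0, 1.0, 0.1], temperature=0.2))  # near-deterministic
print(softmax_with_temperature([2.0, 1.0, 0.1], temperature=1.5))  # more varied
```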

When editing the prompt, you can, for example, enter a list of rules or guidelines that the assistant should follow. This can help ensure that the assistant provides the correct information. The prompt can also be used to set the context for the interaction, for example by specifying the topic or the type of information the assistant should provide.

Another powerful feature is to use variables or functions in the prompt. For example, you can write ${firstname} to address the customer by first name, ${locationHref()} to provide GPT with the customer's current URL, ${localDate()} to provide GPT with the current date, or ${subscription_status} to pass an arbitrary variable that you have defined in the experience, such as the customer's current subscription status.
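For example, a prompt could combine several of these placeholders (the wording is only an illustration):

```text
The customer's first name is ${firstname}.
The customer is currently on the page ${locationHref()}.
Today's date is ${localDate()}.
The customer's subscription status is ${subscription_status}.
Address the customer by first name and answer based on the knowledge provided.
```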

4. Answer

Show AI label on AI answers

Here you can choose whether AI answers are marked with an AI label, or whether the label is hidden.

Show sources

By default, the sources GPT has used to answer the question are shown in the chat. If you do not want to show sources in the chat, you can hide them here.

5. Offer jumps to experiences

If an article in a catalog is used as a source to answer the user's question, you can offer the user a jump to the experience linked to that article. This can be useful if the user wants to learn more about a specific topic.

6. Fallback in case of error

There may be cases when GPT is not able to provide an answer, for example when the GPT service is unavailable or when the user asks a question that is not covered by your sources or catalogs. In this case, you can specify a fallback message that will be sent to the user. Optionally, you can also specify an experience to which the user will be redirected.
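Conceptually, the fallback behaves like a try/except around the answer generation; the function names below are illustrative:

```python
FALLBACK_MESSAGE = "Sorry, I cannot answer that right now. Please try again later."

def answer_or_fallback(question: str, generate_answer, jump_to_experience=None) -> str:
    """Return a GPT answer, or the fallback message if generation fails."""
    try:
        answer = generate_answer(question)
        if not answer:  # e.g. the question is not covered by any source
            raise ValueError("no answer")
        return answer
    except Exception:
        if jump_to_experience is not None:
            jump_to_experience()  # optional redirect to another experience
        return FALLBACK_MESSAGE
```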

7. Customer feedback to the answer

You can specify whether answers can be rated by the user. If you leave this option activated, the user will be asked to rate the answer in order to assess its quality. In the Knowledge menu under Messages, you can evaluate the user feedback to improve your answers.