Enhancing ChatGPT Experience: Storing and Managing Chat History

Hey there! I’d like to share a few updates we’ve made to the OpenAI integration, specifically for ChatGPT. These improvements aim to enhance ChatGPT’s understanding of user queries, ultimately helping it generate better responses.

To store the user’s chat history with the chatbot, you can create a JSON field that holds both the user’s messages and the chatbot’s responses. This way, the entire conversation between the user and the chatbot is saved in a single JSON field, making it simple to retrieve and review later.
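
To picture what that field might contain, here is a minimal sketch in Python. The role/content layout is an assumption modeled on the OpenAI message format; UChat’s actual field structure may differ.

```python
import json

# A rough sketch of the kind of structure such a JSON field could hold: one pair
# of entries per turn, covering both what the user said and what the bot replied.
history = []

def record_turn(history, user_message, bot_reply):
    """Append one question/answer exchange to the stored history."""
    history.append({"role": "user", "content": user_message})
    history.append({"role": "assistant", "content": bot_reply})
    return history

record_turn(history, "Where is my order?", "Could you share your full name and address?")
print(json.dumps(history, indent=2))  # this string is what would be saved in the JSON field
```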

UChat has also made clearing a user’s chat history much simpler, including an option to clear the entire history if needed. To do this, use the new “clear remembered chat history” action. When you send a test request, you’ll get a response back, which you can map if you’d like; mapping isn’t required for the action to run, but if the response is anything other than a “status okay,” you can choose to follow up in a different way.
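
If you do map the response, the follow-up amounts to a simple status check. Here is a hypothetical sketch; the field name and values are assumptions based on the “status okay” wording above, not UChat’s exact payload.

```python
# Hypothetical follow-up logic after the "clear remembered chat history" action:
# only branch to a fallback when the mapped status is anything other than OK.
def handle_clear_history_response(response: dict) -> str:
    status = str(response.get("status", ""))
    if status.lower() in ("ok", "okay"):
        return "History cleared - continue the flow as normal."
    return f"Clearing failed ({status or 'no status returned'}) - route to a fallback step."

print(handle_clear_history_response({"status": "ok"}))     # normal path
print(handle_clear_history_response({"status": "error"}))  # fallback path
```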

Utilizing System Messages for Guided Responses and Improved Communication

You can now provide ChatGPT with a system message to guide its responses and define the role it should follow. This is done by setting the instructions directly within the action.

Moving on to the second action, creating a chat completion, you’ll notice there is now a system message field. You can place an entire system role, containing all the information about your business, directly inside it, so there’s no need to use OpenAI embeddings if you prefer not to. As you can see, this system message is quite comprehensive, with all the necessary details included.
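
Under the hood, that field maps onto the system role of a Chat Completions request: it is simply the first entry in the messages list, ahead of whatever the user asks. A minimal sketch (the role text is shortened here for illustration):

```python
# The system message sits first, with the "system" role; the user's question follows.
messages = [
    {
        "role": "system",
        "content": "As a salesperson for Chick's Son Alitas, I am well-versed in our product offerings ...",
    },
    {"role": "user", "content": "Where are you located?"},
]
```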

Here is the system message we are using: "As a salesperson for Chick’s Son Alitas, I am well-versed in our product offerings. We have a variety of delicious wing combos to choose from, each served with potatoes and a choice of sauces. For instance, we offer a 4-wing combo with 1 sauce for 16,500 COP, a 6-wing combo with 2 sauces for 21,500 COP, and an 8-wing combo with 2 sauces for 25,900 COP. For those with a bigger appetite, we also have a 12-wing combo with 3 sauces for 34,900 COP. I am here to provide you with any information you need about our products, always responding in the first person, concisely, and persuasively. Additionally, I can also share some business information if needed."

We also have some short FAQs to address common customer questions. For instance, we only sell the drinks that are on our list. If a customer asks for a drink that is not included in the list, we recommend suggesting another drink that is available. Additionally, when providing information on a variety of products, always present the details in a list format, such as: product 1, product 2, product 3. This helps to maintain consistency and clarity in communication.

Understanding and Implementing System Message Guidelines

In addition to the business information section, there are several conditions to consider when working with this system message; a quick sketch of how everything comes together follows the list.

  1. Do not suggest another service channel for receiving orders; only use the designated method.

  2. When requesting data for product shipment, always ask for the following information in a list format:

    • Full name

    • Address

    • Phone

    • Neighborhood

    • Reference point

  3. When displaying all the products, do not include the title category in the message.

  4. If the flavor of the soft drink hasn’t been provided, make sure to ask for it.
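
Roughly speaking, the business description and these conditions are simply concatenated into one long system prompt. A hedged sketch of how that could be assembled (the text is abbreviated; the real prompt is the full message quoted earlier):

```python
# Illustrative assembly of the full system message: business information first,
# then the numbered conditions.
business_info = (
    "As a salesperson for Chick's Son Alitas, I am well-versed in our product "
    "offerings and always respond in the first person, concisely and persuasively."
)

conditions = [
    "Do not suggest another service channel for receiving orders; only use the designated method.",
    "When requesting data for product shipment, ask for: full name, address, phone, neighborhood, reference point.",
    "When displaying all the products, do not include the title category in the message.",
    "If the flavor of the soft drink hasn't been provided, ask for it.",
]

system_message = business_info + "\n\nConditions:\n" + "\n".join(
    f"{i}. {condition}" for i, condition in enumerate(conditions, start=1)
)
print(system_message)
```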

This system message serves as a comprehensive guideline, covering not only the role that ChatGPT needs to follow but also information about the business, products, and services. In this example, we have a message variable with a test value asking, “Where are you located?”

To remember the history of this conversation, we can set the “remember the history” option. If we set it to “no,” the conversation continues without storing the history. If we set it to “yes,” the system automatically stores the history in a system JSON field called “OpenAI,” which is then reused on subsequent requests whenever the option is set to “yes.”
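
In effect, “remember the history = yes” means the stored conversation from that “OpenAI” field is sent along with each new question. A rough sketch of that behaviour (function and variable names are illustrative, not UChat internals):

```python
# Illustrative only: how remembered history changes what gets sent to the model.
def build_messages(system_message, remembered_history, new_question, remember=True):
    messages = [{"role": "system", "content": system_message}]
    if remember and remembered_history:
        messages.extend(remembered_history)  # earlier user/assistant turns from the "OpenAI" field
    messages.append({"role": "user", "content": new_question})
    return messages

previous = [
    {"role": "user", "content": "Where is my order?"},
    {"role": "assistant", "content": "Could you share your full name and address?"},
]
print(build_messages("You are a salesperson for Chick's Son Alitas.", previous, "Where are you located?"))
```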

The other parameters in the action’s configuration remain the same: model, max tokens, temperature, presence penalty, frequency penalty, stop sequences, and number of completions.
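
Those settings map directly onto the parameters of a Chat Completions request. A minimal sketch using the openai Python library’s legacy interface; the model name and values are illustrative placeholders, not the ones used in this flow.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

messages = [
    {"role": "system", "content": "...the full system message quoted earlier..."},
    {"role": "user", "content": "Where are you located?"},
]

# Each action setting corresponds to a request parameter (values are illustrative).
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",   # model
    messages=messages,
    max_tokens=256,          # max tokens
    temperature=0.7,         # temperature
    presence_penalty=0.0,    # presence penalty
    frequency_penalty=0.0,   # frequency penalty
    stop=None,               # stop sequences
    n=1,                     # number of completions
)
print(response["choices"][0]["message"]["content"])
```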

When testing the request with the question “Where are you located?”, the system responds with a message listing the two addresses.

Under the message, two locations are mentioned: the first in Laurelis on Cal, and the second in Sabaneta on Carrera. Scrolling up, we can confirm that the addresses correspond to Laurelis and Sabaneta as stated.

When working with system messages, it’s essential to be aware that the more information you include, the more tokens will be used. In this case, the prompt tokens amount to 1406, which includes the system message and any additional input. If you also include the entire ChatGPT history, you’ll consume even more tokens. That may not be an issue for some, but it’s important to keep in mind, especially for high-traffic chatbots, where more tokens are used at a higher frequency.

To give you a better understanding, let’s examine the completion step. In this scenario, the content message has two locations, and only 67 tokens are used for the answer. The total tokens used are 1473, out of which 1406 tokens are allocated to the prompt itself. So, always be mindful of token usage when designing your chatbot or any other AI application that relies on tokens for its functionality.
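
For reference, this breakdown mirrors the usage object returned with every chat completion. A tiny sketch with this example’s numbers:

```python
# The usage section of a chat-completion response breaks the token count down
# exactly as described above (figures copied from this example for illustration).
usage = {"prompt_tokens": 1406, "completion_tokens": 67, "total_tokens": 1473}

assert usage["total_tokens"] == usage["prompt_tokens"] + usage["completion_tokens"]
print(f"Prompt: {usage['prompt_tokens']}, answer: {usage['completion_tokens']}, total: {usage['total_tokens']}")
```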

Now it’s time to test this out and see what we get. Starting from the beginning, let’s clear the remembered chat history and open the bot preview in a web browser.

The bot greets with, “Welcome to the chicken store! How can I help you today?” Let’s say you ask, “Where is my order?” In just a few seconds, a response comes back: “I’m sorry, but I’m not able to check the status of your order without any information about it. If you could provide me with your full name, address, phone number, and neighborhood, I can definitely help you track your order.”

Enhancing ChatGPT’s Contextual Understanding and User Interaction

Let’s consider a situation where someone asks, “Where are you located?” In response, the system will go back inside the action to examine the entire system message for the information needed. Once it has the details, it will reply with a new message. As illustrated in the example, there are two locations. However, it’s important to mention that home deliveries are only provided in one of these locations.

Looking at my bot user profile, you can now see an OpenAI system field in the guest section. Inside, there’s a system message that contains a long prompt. This system message will always stay on top, followed by various questions and responses between the user and ChatGPT.

For example:

  • User: “Where is my order?”

  • ChatGPT responds

  • User: “Where are you located?”

  • ChatGPT provides the necessary information

It’s important to note that this JSON field has a limit of 20,000 characters. If you exceed this limit, the system automatically deletes the oldest messages so that ChatGPT maintains proper context and responds accurately.
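
A rough sketch of what that trimming amounts to is below; the 20,000 figure comes from UChat, but the exact serialization and trimming rules here are assumptions for illustration.

```python
import json

CHAR_LIMIT = 20_000  # character limit of the "OpenAI" JSON field

def trim_history(messages):
    """Drop the oldest user/assistant turns until the serialized history fits,
    always keeping the system message at the top."""
    system, turns = messages[:1], messages[1:]
    while turns and len(json.dumps(system + turns)) > CHAR_LIMIT:
        turns = turns[1:]  # the oldest message goes first
    return system + turns
```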

I hope you enjoy these two new updates! Give them a try and let me know what you think. If you have any questions, please feel free to ask. Have a great day, take care, and talk soon.
