Best practices for your Knowledge Base datasets

MessageBird's out-of-the-box machine-learning tools, including intent recognition and FAQ automation, make it easier for you to communicate with your customers by automating message routing and message responses.

Why should you improve your Knowledge Base?

By customizing your Knowledge Base datasets with detailed, personalized data, you can train our machine learning models to analyze your incoming communication as accurately as possible. This allows you to create more robust and reliable communication automation and more accurately detect the intent of your users' messages.

You can set up a personalized AI tool for your specific use case by providing your Knowledge Base with thorough examples that our machine learning models can learn from.

This guide will take you through the best practices of setting up, testing, and improving datasets in your Knowledge Base, which you can access from your MessageBird Dashboard.

Six best practices for improving your Knowledge Base

For this example, we're going to use intent recognition datasets. However, the same best practices apply to FAQ automation datasets.

Take a look at the example datasets below. For these datasets, there are 3 available intents:

  • Support: Messages related to support inquiries or issues with a product

  • Sales: Messages related to learning more about a product or signing up

  • Billing: Messages related to invoices, bills, and subscriptions

The Test Dataset on the left contains 9 basic phrases and their corresponding intent. The Improved Dataset on the right contains 15 improved phrases and their corresponding intent.
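To make this concrete, here's a minimal sketch of what an intent dataset could look like, written as a simple Python structure. The intent names match the example above, but the phrases are illustrative only; your real dataset lives in your Knowledge Base in the Dashboard.

```python
# Hypothetical illustration of an intent dataset: each intent maps to
# example phrases the model can learn from.
improved_dataset = {
    "Support": [
        "My device stopped working this morning",
        "I can't log in to my account",
        "The app crashes when I open a conversation",
        "How do I reset my password?",
        "Something is wrong with my order",
    ],
    "Sales": [
        "I would like to know more about your product",
        "Do you offer a free trial?",
        "How do I sign up for a business plan?",
        "Can someone walk me through the pricing options?",
        "What features are included in the premium tier?",
    ],
    "Billing": [
        "Where can I download last month's invoice?",
        "Why was I charged twice?",
        "I want to cancel my subscription",
        "Can you update the VAT number on my bill?",
        "My payment failed, what should I do?",
    ],
}
```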

Let's take a look at the six principles we used to improve the Test Dataset and create the Improved Dataset.

1. Create multiple intents that do not overlap

To avoid confusing our machine learning models, it's important to have more than 1 defined intent in your dataset.

This is because our model works by looking at the intents that you have defined, and deciding which intent is most likely to match the content of the message it has analyzed. If there aren't multiple intents to choose from, the model won't work as well.

You also need to make sure that the intents in your dataset don't overlap. For example, don't create a complaint intent and a separate payment complaint intent. Doing this could cause unexpected intent recognition results because a payment complaint is a type of complaint. In an example like this, it is best practice to combine the two intents.
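As a hypothetical sketch, here's what combining an overlapping pair of intents might look like, using the same kind of illustrative structure as above (the phrases are made up for the example):

```python
# Hypothetical example: "Payment complaint" overlaps with "Complaint",
# because a payment complaint is a type of complaint.
overlapping = {
    "Complaint": [
        "The service has been down all day",
        "I'm unhappy with the support I received",
    ],
    "Payment complaint": [
        "I was overcharged on my last bill",
        "My refund never arrived",
    ],
}

# Best practice: fold the narrower intent into the broader one.
combined = {
    "Complaint": [
        "The service has been down all day",
        "I'm unhappy with the support I received",
        "I was overcharged on my last bill",   # moved from "Payment complaint"
        "My refund never arrived",             # moved from "Payment complaint"
    ],
}
```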

2. Create diverse example phrases for each intent

Remember, machine learning models are designed to save you time!

You can safely assume that a machine learning model that has been trained on the phrase "I would like to know more about your product" will be able to recognize the phrase "I would like to hear more about your product", and assign it the same intent. After all, only one of the words is different!

So, instead of providing lots of phrases that are linguistically similar, try to provide several phrases that are linguistically distinct from each other, but have the same intent. By doing this, you help the model learn to generalize to a much wider variety of messages.

Take a close look at the Improved Dataset to see how we created several linguistically distinct phrases that have the same intent.
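To illustrate the difference, here's a hypothetical set of Sales phrases: the first list mostly repeats one sentence pattern, while the second covers the same intent with distinct wording.

```python
# Hypothetical Sales phrases. The "too_similar" list only varies one word
# per sentence; the "diverse" list expresses the same intent in several
# genuinely different ways, which helps the model generalize.
too_similar = [
    "I would like to know more about your product",
    "I would like to hear more about your product",
    "I would like to learn more about your product",
]

diverse = [
    "I would like to know more about your product",
    "Do you have a pricing page I can look at?",
    "Can I book a demo with your sales team?",
    "What's included in the enterprise plan?",
]
```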

3. Avoid using similar example phrases with different intents

You will also want to avoid confusing our machine learning models by providing linguistically similar phrases that have different intents.
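For example (hypothetical phrases), near-duplicate wording assigned to different intents blurs the boundary the model is trying to learn:

```python
# Hypothetical near-duplicate phrases assigned to different intents.
# Pairs like this make it hard for the model to separate the intents;
# rephrase them to be more distinct, or reconsider how the intents are split.
confusing_pairs = [
    ("I have a question about my invoice", "Billing"),
    ("I have a question about my account", "Support"),
]
```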

4. Avoid using non-verbal keywords

Non-verbal keywords are things like URLs, emojis, and unique identifiers. Because non-verbal keywords don't have meaning in the same way that ordinary language does, they are difficult for ML models to process. If you're interested in processing URLs in messages, you can check out our Recognize Entities functionality.
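If you're preparing training phrases in bulk, it can help to strip these keywords out before adding the phrases to your dataset. Here's a minimal sketch in Python; the patterns are simplified assumptions, not an exhaustive cleaner:

```python
import re

# Simplified patterns for URLs, common emoji ranges, and ticket-style
# identifiers (e.g. "INV-20394"). These are assumptions for illustration.
URL_RE = re.compile(r"https?://\S+")
EMOJI_RE = re.compile(r"[\U0001F300-\U0001FAFF\u2600-\u27BF]")
TICKET_ID_RE = re.compile(r"\b[A-Z]{2,}-\d+\b")

def clean_phrase(text: str) -> str:
    """Remove non-verbal keywords from a candidate training phrase."""
    for pattern in (URL_RE, EMOJI_RE, TICKET_ID_RE):
        text = pattern.sub("", text)
    return " ".join(text.split())  # collapse leftover whitespace

print(clean_phrase("Hi 😅 my invoice INV-20394 is wrong, see https://example.com/inv"))
# -> "Hi my invoice is wrong, see"
```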

5. The more samples, the better!

Although we require a minimum of 5 samples per intent, it is best practice to create as many samples as possible for each intent.

Humans (like your users) use language in an extremely varied way, so provide as many varied examples as you can to improve and strengthen your dataset, and the performance of our services.

Be on the lookout for new and unusual phrases that your customers use, and add samples of these to your dataset.

6. Keep your dataset up-to-date

It's not easy to predict all of the ways that a customer might phrase a question. That's why it's a good idea to continuously update your dataset as you learn more about how your customers communicate.

Evaluating your Knowledge Base's performance

Okay, your Knowledge Base is all set up! Now it's time to see how well it works. You can quickly test the performance of your Knowledge Base by running the demo within the step that uses it, or you can test it as follows:

  1. Go to your Knowledge Base

  2. Next to the dataset that you want to test, click the vertical ellipsis (⋮), then click Test

  3. A pop-up box will appear. In the text field, enter the phrase that you would like to test against your Knowledge Base

  4. Click Predict intent to see the predicted answer (for an FAQ) or intent (for intent recognition)!

This test shows an intent dataset returning a predicted intent.

This test shows an FAQ dataset returning the predicted answer.

A good way to understand how your Knowledge Base performs over time when working with real interactions is to store the output of any step that uses this dataset in a datasheet. You can do this automatically by adding the Add row in Google Sheets step to your flow, just after the Recognize intent step.

Once you've collected enough data, you can manually inspect the results to see how well your dataset performs with real users in real conversations.

Take a look at the Google Sheet example below, where we've logged each incoming message alongside its predicted intent, and included the conversation ID so that we can review the whole conversation if we need to. By manually reviewing the results, we can check whether the ML model has been identifying intents correctly.

If there are problems with the dataset, this will be obvious from looking at the results. Review any incorrect results, and update your dataset as required.
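If you export the sheet as a CSV and add a column with the intent you assigned during manual review, a short script can summarize how often the prediction matched your judgment. This is a minimal sketch; the column names (message, predicted_intent, reviewed_intent) are assumptions about how you might lay out the sheet:

```python
import csv
from collections import Counter

def summarize(path: str) -> None:
    """Summarize logged predictions exported from the Google Sheet as CSV."""
    correct = 0
    total = 0
    misses = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            total += 1
            if row["predicted_intent"] == row["reviewed_intent"]:
                correct += 1
            else:
                misses[(row["reviewed_intent"], row["predicted_intent"])] += 1
    print(f"Accuracy: {correct}/{total}")
    for (expected, predicted), count in misses.most_common():
        print(f"{count}x expected '{expected}' but predicted '{predicted}'")

summarize("intent_log.csv")
```

Recurring confusion between two particular intents is a good sign that their example phrases overlap and need more distinct samples.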

That's it!

These are the basic ways that you can start improving prediction accuracy through datasets. If you need help working out how to optimize your chatbot or conversational automations, feel free to drop us a message!
