Although we always do our best to find the most relevant matches for any query, the results sometimes aren’t exactly what you’re looking for.

If you simply feed all results into the model along with the user’s query, the model may have a hard time telling relevant from irrelevant content, and it will often try to incorporate the irrelevant results into the generated text.

For this reason, it is important to specify in your prompt that only results clearly relevant to the user’s query should be used in the generated text, and that the other results can be safely ignored.


In our demo, we use the following system prompt:

You are a helpful AI assistant that has access to a highly advanced search engine 
that helps you find documents that contain information about the user. 
Your answers are concise, informative and use the context provided by the document search.
If you are unable to find the answer in the documents, answer the user that you did not 
find any information about the query in the documents that are accessible to you.

Furthermore, for each query, we retrieve the top 5 results from our API and append them to the query with the following prompt:

The following results might help you answer the next user query

Importantly, the prompt specifies that the results only might be relevant to the query, which makes it more likely that the model will ignore them if they are not relevant.
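Putting the pieces together, the flow above can be sketched as follows. This is a minimal illustration, not the demo’s actual code: the `search_documents` function is a hypothetical stand-in for the retrieval API, and the message format assumes a typical chat-style interface.

```python
# Sketch: assemble the system prompt, the hedged top-5 results,
# and the user's query into a chat-style message list.
# search_documents is a placeholder -- swap in your retrieval API.

SYSTEM_PROMPT = (
    "You are a helpful AI assistant that has access to a highly advanced "
    "search engine that helps you find documents that contain information "
    "about the user. Your answers are concise, informative and use the "
    "context provided by the document search. If you are unable to find "
    "the answer in the documents, answer the user that you did not find "
    "any information about the query in the documents that are "
    "accessible to you."
)


def search_documents(query: str, top_k: int = 5) -> list[str]:
    """Placeholder for the retrieval call; returns the top_k result texts."""
    return [f"Stub result {i} for: {query}" for i in range(1, top_k + 1)]


def build_messages(user_query: str) -> list[dict]:
    """Prepend the system prompt, then the hedged results, then the query."""
    results = search_documents(user_query, top_k=5)
    # "might help" signals that the results are not necessarily relevant,
    # so the model is free to ignore them.
    context = (
        "The following results might help you answer the next user query:\n\n"
        + "\n\n".join(results)
    )
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": context},
        {"role": "user", "content": user_query},
    ]
```

The exact message roles and separators will depend on the model API you use; the key point is that the retrieved results arrive alongside, not inside, the user’s query, framed as optional context.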

You should experiment with the instructions in the prompt, depending on your exact use case. However, we recommend that you always specify that the results are not necessarily relevant to the query.