Prompt Engineering
Learn what prompts work best with context retrieved from SID
Prompt
To use the results from SID in your app, most adopt the following flow:
- Get input from the user. Usually, this comes from a chat interface like ChatGPT.
- Forward that input to our query endpoint to get relevant context.
- Feed the results and the original user request to a language model like ChatGPT.
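Putting these steps together, the flow might look like the following sketch. The endpoint URL, request payload, and header names are assumptions for illustration; consult the API reference for the exact request format.

```python
import os

import requests
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def answer_with_context(user_input: str) -> str:
    # Step 1: retrieve context relevant to the user's request.
    # The URL, headers, and payload shape here are illustrative only;
    # see the query endpoint's API reference for the exact format.
    response = requests.post(
        "https://api.sid.ai/v1/users/me/query",
        headers={"Authorization": f"Bearer {os.environ['SID_API_KEY']}"},
        json={"query": user_input, "limit": 5},
    )
    results = response.json().get("results", [])

    # Step 2: pass the results and the original request to the model.
    # (Formatting the results properly is covered later on this page.)
    context = "\n\n".join(str(result) for result in results)
    completion = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": f"Context:\n{context}"},
            {"role": "user", "content": user_input},
        ],
    )
    return completion.choices[0].message.content
```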
Using Query Results as Context
Once you have received the results from the query endpoint, you will want to feed them as context to a language model like ChatGPT. We’ve collected a few tips on how to do this well.
The most important thing is the prompt. You need to inform the model that it can now access the personal files a user has connected, and can use that information to answer the request. We suggest adding a section with Instructions and a section with Context to your prompt.
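As a sketch, a prompt with these two sections could be assembled like this. The exact wording is a suggestion; adapt it to your app.

```python
def build_prompt(context: str, user_request: str) -> str:
    # The Instructions section tells the model it now has access to the
    # user's connected files; the Context section carries the query results.
    return f"""Instructions:
You have access to excerpts from the personal files this user has connected,
shown under "Context" below. Use them to answer the request. If the context
does not contain the answer, say so instead of guessing.

Context:
{context}

Request:
{user_request}"""
```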
In general, if you observe behavior that the model should not exhibit, you can add an instruction to your prompt that explicitly tells the model not to do that.
Formatting Query Results
If you’re inserting the raw JSON results from the query endpoint, the model will be inclined to write code instead of text. To circumvent this, we recommend formatting the results as plain text or markdown.
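Here is a helper function that may be useful. This is a minimal sketch that assumes each result carries `title` and `text` fields; the actual shape of the results array may differ, so adjust the field names to match your response.

```python
def format_results(results: list[dict]) -> str:
    """Render query results as markdown so the model treats them as
    text to read rather than JSON to echo back as code."""
    sections = []
    for result in results:
        # "title" and "text" are assumed field names; adapt them to
        # the actual shape of your query response.
        title = result.get("title", "Untitled")
        text = result.get("text", "")
        sections.append(f"### {title}\n{text}")
    return "\n\n".join(sections)
```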
The query endpoint may return an empty results array if there are no relevant results. Be sure to tell the model that no context was found.
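For instance, you might substitute an explicit fallback message before building the prompt. This sketch builds on the hypothetical `format_results` helper above.

```python
def context_or_fallback(results: list[dict]) -> str:
    if not results:
        # State explicitly that nothing was found so the model does not
        # invent sources or pretend to quote the user's files.
        return "No relevant context was found in the user's connected files."
    return format_results(results)
```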