Set AI

Uses an LLM-based AI model to generate a response based on the prompt given.

Available on the STARTER plan and above.

The Set AI action block is used to generate a response to the prompt supplied to it.

When using the Set AI block, the user submits a prompt, which is then processed either by AI Studio or by the LLM model to generate a response. This response is then stored in a variable.

WotNot uses OpenAI GPT APIs to generate the response.

The Set AI block works in the following way:

  1. A prompt is supplied

  2. The AI generates a response

  3. The response is stored in a variable
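
Since WotNot uses OpenAI GPT APIs, the flow above is conceptually equivalent to calling an LLM API with your prompt and keeping the returned text. The sketch below is illustrative only, using the OpenAI Python SDK; the model name, prompt, and variable name are assumptions, not WotNot's actual implementation.

```python
# Conceptual sketch of what the Set AI block does. WotNot handles this
# internally; the SDK call, model name, and variable name below are
# illustrative assumptions, not WotNot's actual implementation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1. A prompt is supplied
prompt = "Summarise the visitor's issue in one sentence: my router keeps dropping the Wi-Fi connection."

# 2. The AI generates a response
response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model, for illustration only
    messages=[{"role": "user", "content": prompt}],
)

# 3. The response is stored in a variable for later blocks to use
ai_response = response.choices[0].message.content
print(ai_response)
```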

Step 1: Define the prompt/instruction

LLMs require guidance to generate a relevant and accurate answer. Here is what you can use to guide your AI:

Instructions/Prompt

A prompt is simply an instruction that tells the LLM what to remember and follow while generating an answer.

The clearer and more concise your instruction, the more accurate the AI's answers will be.

Things to write in your prompt (see the sketch after this list):

  • Objective - What you want the AI to do.

  • Output format - Typically HTML or Markdown.

  • Writing style - How the answers should be written.

  • Don'ts - Clear instructions on what to avoid.

  • Examples - Sample questions and answers.
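
As a rough illustration of how these elements can come together, here is a sketch of a prompt skeleton. The section labels and wording are assumptions for illustration, not a structure WotNot requires.

```python
# A rough sketch only: the section labels and wording below are illustrative,
# not a format WotNot requires. Adapt each section to your own use case.
prompt = """
Objective: Answer customer questions about our product using only the provided context.
Output format: Reply in Markdown, using short paragraphs and bullet points.
Writing style: Friendly, concise, and professional.
Don'ts: Do not guess prices, and do not answer questions unrelated to the product.
Examples:
Q: How do I reset my password?
A: Go to Settings > Security and click "Reset password".
"""

print(prompt)
```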

An example prompt could be:

Given the ‘user’s question’: “[QUESTION]”

And the detailed information provided in ‘chunks’: “[CHUNKS]”

Determine whether a clarifying question is required.

Instructions:

1. Analyse the 'chunks' and the 'user's question' to identify the specificity of the query and the scope of the information in 'chunks'.

2. If the query is broad and the 'chunks' have multiple categories or types, output '#' and guide the chatbot to ask for clarification.

3. If the query aligns well with a specific part of the 'chunks' that provides a comprehensive answer, output '~'.

Output format: [Decision: '~' or '#'. If '#', clarification is required: also specify the type of information or category that would help better address the user's question, based on the 'chunks'.]

Important: if the user's question is likely to have a device-specific answer, then you should ask for more information.

If the user has already specified a device, then no clarification is needed.
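
Placeholders such as [QUESTION] and [CHUNKS] in the prompt above stand for variables that are filled in with actual values at run time, before the prompt reaches the LLM. The sketch below only illustrates that substitution step; the example values and the use of plain Python string replacement are assumptions, not WotNot's internals.

```python
# Illustrative only: shows the idea of filling prompt placeholders before the
# prompt is sent to the LLM. The example values are made up.
prompt_template = (
    'Given the user\'s question: "[QUESTION]"\n'
    'And the detailed information provided in chunks: "[CHUNKS]"\n'
    "Determine whether a clarifying question is required."
)

question = "How do I reset my router?"                          # assumed example value
chunks = "Model A reset steps: ... | Model B reset steps: ..."  # assumed example value

prompt = (
    prompt_template
    .replace("[QUESTION]", question)
    .replace("[CHUNKS]", chunks)
)

print(prompt)
```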

Step 2: Store the response in a variable

The AI processes the instruction/prompt and generates a response. To get an ideal response, make sure your instructions specify what kind of response you want to receive.

Choose the variable you want to store the response in. Learn more about variables.

For example, if you want to check whether the user has asked a follow-up question, you can write the instruction like this:

Examine the user's last utterance:

"[LAST INPUT]"

Determine if the user has asked a follow-up question by looking for:

- Interrogative words (who, what, where, when, why, how, etc.)

- Phrases that indicate a desire for additional information (e.g., "I would like to know", "Can you tell me about", "I'm interested in", "Could you explain")

- Continuation phrases or conjunctions that introduce new topics or questions (e.g., "but", "however", "also", "in addition")

- Output '1' if any of these indicators suggest a follow-up question is present.

- Output '0' if no follow-up question is detected.

[Note: Only output '1' or '0' based on this analysis.]
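
The '1' or '0' that the AI outputs is stored in the variable you chose, and later blocks in your flow can branch on that value. Purely as an illustration of that branching logic, and not of WotNot's actual builder, here is a small sketch; the variable and branch descriptions are hypothetical.

```python
# Sketch of branching on the stored output. In WotNot itself you would do this
# with a condition on the variable; the names here are hypothetical.
is_follow_up = "1"  # value produced by the Set AI block, e.g. '1' or '0'

if is_follow_up.strip() == "1":
    next_step = "answer the follow-up question"
else:
    next_step = "continue with the main flow"

print(next_step)
```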
