# Human in the Loop

## 1. Overview

The **Ask Human in Chat Interface** tool enables **Human-in-the-Loop (HITL)** workflows in StackAI.

<figure><img src="https://3697023207-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FFSlso1Kjob5CLDrh0dVn%2Fuploads%2FD6lKCI5kxUqAIxyT0ajY%2Fimage.png?alt=media&#x26;token=80e6d8fd-2084-4fcf-ae76-b901526cc992" alt=""><figcaption></figcaption></figure>

It allows an AI workflow to pause execution, request input from a human in the Chat Assistant interface, and then resume execution once the user responds.

This allows you to build **hybrid AI workflows** where automation handles most of the work but defers to human judgment at critical decision points.

Common use cases include:

* Confirming actions before executing high-impact or long-running tasks
* Clarifying ambiguous user requests
* Selecting between multiple possible options
* Collecting information that the AI cannot infer

Example:

> “Running this analysis across the full dataset may take several minutes. Would you like me to proceed, or should I limit the analysis to the most recent records?”

## 2. How It Works

**Ask Human in Chat Interface Workflow**:

1. **Tool Invocation**: The LLM calls the tool to request human input.
2. **Pause Execution**: The workflow pauses while it waits for the user's input.
3. **User Interaction**: The question appears in the Chat UI.
4. **User Response**: The user submits an answer through the chat.
5. **Resume Execution**: The workflow resumes with the user's input.
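The five steps above can be sketched as a simple pause/resume loop. This is purely illustrative: the `llm_step` and `ask_human` callables are hypothetical stand-ins, not StackAI's internal API.

```python
# Minimal sketch of the HITL cycle. Both callables are hypothetical:
# `llm_step` represents one LLM node execution, `ask_human` represents
# the chat UI collecting an answer from the user.

def run_with_hitl(llm_step, ask_human):
    """Run an LLM step; if it requests human input, pause and resume."""
    result = llm_step()                          # 1. LLM may call the tool
    while result.get("needs_human"):             # 2. execution pauses
        answer = ask_human(result["question"])   # 3-4. question shown, user answers
        result = llm_step(human_answer=answer)   # 5. workflow resumes with the input
    return result
```

The loop allows the LLM to ask follow-up questions in sequence: each resumed step may itself request human input again.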

Optimize performance by equipping an LLM node with the Ask Human tool instead of using it as a separate action in the workflow. Refer to this [guide](https://docs.stackai.com/workflow-builder/llms/llm-node/tools#prompt-optimization-with-tools) for instructions on adding tools to LLMs.

<figure><img src="https://3697023207-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FFSlso1Kjob5CLDrh0dVn%2Fuploads%2FGtUvNR7NlzuvSukSI8fC%2Fimage.png?alt=media&#x26;token=2b0c0501-56e2-4b15-b6d0-a904055db743" alt=""><figcaption></figcaption></figure>

⚠️ **Important:**\
This tool **only works inside the Chat Assistant interface**. It does **not function when workflows are triggered via API or used in Form mode**, since it relies on the interactive chat UI to collect responses.

<figure><img src="https://framerusercontent.com/images/7xdgFdLhUpGwZEJuKp1NUAs.gif?scale-down-to=2048&#x26;width=3404&#x26;height=1740" alt="" width="900"><figcaption></figcaption></figure>

## 3. Configuration

### Input Parameters

These parameters can either be:

* **Provided directly by the user**, or
* **Dynamically set by the LLM when invoking the tool from the LLM node**, just like with any other tool in a StackAI workflow.

Allowing the LLM to control these fields enables **more flexible and adaptive workflows**, while specifying them explicitly can enforce **more deterministic behavior** when required.

{% columns %}
{% column %}

<figure><img src="https://3697023207-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FFSlso1Kjob5CLDrh0dVn%2Fuploads%2Fs8hadxXozTdT2KCwB94s%2Fimage.png?alt=media&#x26;token=6db28a4f-af0b-46c3-a73c-bd1f5f228bad" alt="" width="520"><figcaption></figcaption></figure>
{% endcolumn %}

{% column %}

<figure><img src="https://3697023207-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FFSlso1Kjob5CLDrh0dVn%2Fuploads%2FElyGySTronyvcTCbXXgO%2Fimage.png?alt=media&#x26;token=d0440f98-6334-4f5f-ad24-dbbbb73d0eea" alt="" width="520"><figcaption></figcaption></figure>
{% endcolumn %}
{% endcolumns %}

<table><thead><tr><th width="140.62109375">Parameter</th><th width="88.09375">Type</th><th width="83.98046875">Required</th><th>Description</th></tr></thead><tbody><tr><td><code>question</code></td><td>string</td><td>Yes</td><td>The question presented to the human user. It should be clear and actionable.</td></tr><tr><td><code>context</code></td><td>any</td><td>No</td><td>Optional context to help the user understand the situation or decision.</td></tr><tr><td><code>response_type</code></td><td>string</td><td>No</td><td>Determines the UI input type. Options: <code>"text"</code>, <code>"approval"</code>, <code>"choice"</code>. Default is <code>"text"</code>.</td></tr><tr><td><code>choices</code></td><td>array</td><td>No</td><td>List of selectable options. Only used when <code>response_type</code> is <code>"choice"</code>.</td></tr></tbody></table>
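When the LLM sets these parameters itself, they are typically exposed to it as a function-calling schema. The sketch below shows one plausible JSON-Schema-style declaration built from the parameter table above; the tool identifier and wrapper structure are assumptions, not StackAI's actual definition.

```python
# Hypothetical function-calling schema for the tool. Only the four
# parameter names and types come from the table above; the "name" and
# surrounding structure are illustrative assumptions.
ASK_HUMAN_TOOL = {
    "name": "ask_human_in_chat",  # hypothetical identifier
    "description": "Pause the workflow and ask the human user a question in the chat UI.",
    "parameters": {
        "type": "object",
        "properties": {
            "question": {"type": "string"},
            "context": {},  # any type
            "response_type": {
                "type": "string",
                "enum": ["text", "approval", "choice"],
                "default": "text",
            },
            "choices": {"type": "array", "items": {"type": "string"}},
        },
        "required": ["question"],
    },
}
```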

### 3.1. Response Types

Set `response_type` to control what the user sees in chat.

#### Approval (`"approval"`)

Use this for a simple **yes/no** confirmation step.

<figure><img src="https://3697023207-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FFSlso1Kjob5CLDrh0dVn%2Fuploads%2FyzzEETPd1Qr8SaRltHLi%2Fimage.png?alt=media&#x26;token=b3c5bf7d-1f30-46d3-bb14-b8f84f70948b" alt=""><figcaption></figcaption></figure>

#### Choice (`"choice"`)

Use this when the user should pick from a predefined list. Supply the options via the `choices` parameter.

<figure><img src="https://3697023207-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FFSlso1Kjob5CLDrh0dVn%2Fuploads%2FGKL5y76CJkT3CIrk9Fan%2Fimage.png?alt=media&#x26;token=9109cf15-6930-43aa-8505-80252f310930" alt=""><figcaption></figcaption></figure>

#### Text (`"text"`, default)

Use this when you need free-form input from the user.

<figure><img src="https://3697023207-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FFSlso1Kjob5CLDrh0dVn%2Fuploads%2FJ0LO2BKSs9IY5spbBuRP%2Fimage.png?alt=media&#x26;token=55805e46-020f-4796-880f-0bfd921cf023" alt=""><figcaption></figcaption></figure>
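Putting the three response types together, here are illustrative tool-call arguments for each. The parameter names follow the table in section 3; the dict shapes themselves are assumptions.

```python
# Approval: simple yes/no confirmation
approval_call = {
    "question": "Send this email to all 47 customers?",
    "response_type": "approval",
}

# Choice: pick one option from a predefined list
choice_call = {
    "question": "Which John Smith did you mean?",
    "response_type": "choice",
    "choices": ["John Smith (Acme)", "John Smith (Globex)", "John Smith (Initech)"],
}

# Text: free-form input; response_type omitted, so it defaults to "text"
text_call = {
    "question": "What budget range should I use for this proposal?",
}
```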

## 4. Common Use Cases

<table><thead><tr><th width="147.93359375">Use Case</th><th>Description</th><th>Example Question from Human in the Chat</th></tr></thead><tbody><tr><td><strong>Approval Gates</strong></td><td>Pause before performing sensitive or high-impact actions and ask the user for confirmation.</td><td>“I'm about to update 47 customer records with the new pricing. Proceed?”</td></tr><tr><td><strong>Clarification Requests</strong></td><td>Resolve ambiguity when multiple interpretations are possible.</td><td>“I found three contacts named John Smith. Which one do you mean?”</td></tr><tr><td><strong>Data Collection</strong></td><td>Gather information that the AI cannot determine automatically.</td><td>“What budget range should I use for this proposal?”</td></tr><tr><td><strong>Human Review</strong></td><td>Allow a human to review AI-generated content before executing an action.</td><td>“Here's the draft email I've prepared. Should I send it as is?”</td></tr><tr><td><strong>Routing Decisions</strong></td><td>Let a human select the next step in a workflow when multiple valid paths exist.</td><td>“This support ticket could be handled as a refund, replacement, or escalation. Which approach should we take?”</td></tr></tbody></table>

## 5. Best Practices

**Guide the LLM on When to Request Human Input**

Your LLM instructions should clearly define when human input is required. Calibrate how often the agent asks for input based on how critical human confirmation or judgment is for the workflow and on the expected user behavior.

Use HITL sparingly for routine tasks, but require it for **high-impact, irreversible, or ambiguous actions**.

Example instruction:

> “Before sending emails or updating databases affecting more than 10 records, ask the user for approval using the **Ask Human in Chat Interface** tool.”

**Prefer Approvals or Choices Over Free-Text Input**

Whenever possible, use `approval` or `choice` response types instead of `text`. Even when the parameters are controlled by the LLM, guide the agent to prefer these options.

Structured responses are faster for users and reduce friction in the workflow. In many cases, users prefer **quick decisions (clicking a button or selecting an option)** rather than typing responses.

Using predefined options also helps:

* reduce ambiguity
* avoid input errors
* keep workflows moving quickly

Reserve **`text`** responses for situations where the user truly needs to provide **new or open-ended information**.
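This preference can be expressed as a small decision rule: when the possible answers are already known, emit an `approval` or `choice` call instead of free text. The helper below is an illustrative sketch, not a StackAI API.

```python
# Hypothetical helper embodying the best practice above: prefer
# structured response types whenever the answer set is known upfront.

def make_question(question, options=None):
    """Return tool arguments, preferring structured response types."""
    if options is not None and set(options) == {"yes", "no"}:
        # A pure yes/no decision maps to an approval prompt
        return {"question": question, "response_type": "approval"}
    if options:
        # A known answer set maps to a choice prompt
        return {"question": question, "response_type": "choice", "choices": list(options)}
    # Only fall back to free text when the answer is genuinely open-ended
    return {"question": question, "response_type": "text"}
```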

**Choose OpenAI**

OpenAI's GPT models consistently perform well at tool calling, which this workflow depends on. We recommend selecting them when using the Ask Human in Chat Interface tool.
