Orchestrating AI Agents
An AI agent can be as simple as a single large language model (LLM) or as complex as a multi-project construction. AI orchestration is about turning these building blocks, at every level, into powerful systems. With StackAI, you can orchestrate AI agents at the model level, within a project, and across projects.
1. Orchestrate Large Language Models (LLMs)
The simplest form of AI orchestration is to let multiple large language models (LLMs) collaborate to complete a business process. In StackAI, these LLMs are represented by the AI Agent node.
Take report generation as an example: a report-writing process generally involves a few steps, including data gathering, information synthesis, and report drafting. StackAI allows you to break a business process down into steps, where each step is carried out by a dedicated LLM, known as the "expert model".
The benefit of the expert model approach is two-fold:
It allows you to balance the trade-offs of different LLMs. Some models are faster but have a shorter context window (i.e., the amount of information the model retains), while others are slower with a longer context window. Assigning the best-suited model to each step optimizes the overall process.
Isolating the context of each step helps the expert models follow their instructions. When an LLM is given many different tasks to perform, it may struggle to follow all instructions consistently.
To connect the expert models, you can simply chain these AI Agent nodes in the sequence of the steps.

When an expert model works off the results of a previous expert model, you can pass any combination of the following outputs from one expert to another:
Citation List: The list of sources the previous expert referenced
Completion: The full output of the previous expert, including the citations if enabled
Subflow Tool Input: See section below for details on subflow tools
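The chaining pattern above can be sketched in plain Python. This is a conceptual illustration, not StackAI's API: `call_llm` is a hypothetical stand-in for any LLM client, and each call represents one expert model whose Completion feeds the next step's context.

```python
# Minimal sketch of chained expert models: each step's output
# (the "Completion") becomes part of the next step's context.
# `call_llm` is a hypothetical placeholder, not a StackAI function.

def call_llm(instructions: str, context: str) -> str:
    # Placeholder: in practice this would call a specific model,
    # chosen for that step's trade-offs (speed vs. context window).
    return f"[{instructions}] based on: {context}"

def run_report_pipeline(topic: str) -> str:
    gathered = call_llm("Gather data", topic)              # expert 1
    synthesis = call_llm("Synthesize findings", gathered)  # expert 2
    report = call_llm("Draft the report", synthesis)       # expert 3
    return report
```

Each expert sees only the context it needs, mirroring the isolation benefit described above.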

2. Orchestrate Capabilities within a Project
There may be a set of capabilities you would like an LLM to leverage dynamically. In the report-writing example, you might want to cast a wide net of websites to research and only deep-dive into a subset of articles based on certain criteria.
In the example workflow below, there are two subflow tools: research websites and article deep dive.

Each subflow tool can be as simple or complex as required by the business use case.

You can provide guidance on how to use these tools in the Instructions section.

The AI Agent node will use these tools as it sees fit. The AI Agent can use each tool in isolation or in combination.
In the report-writing example, the tools are used in combination: the AI Agent node calls them sequentially, using the website research tool before the article deep dive tool. This allows the main AI Agent to retain the overall context from the outputs of the various tools.
Consider another example of customer support, where each subflow tool checks the customer record in a different system. These tools can run in parallel and pass the results back to the main Agent.
Depending on how the subflow tools work together, the input to each subflow tool may differ at each step. For example, the input to the "website research" tool is a list of websites, whereas the input to "article deep dive" is a list of articles. As such, the Subflow Tool Input reflects the data used by the tools, not the AI Agent node.
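The parallel pattern from the customer support example can be sketched as follows. The tool names (`check_crm`, `check_billing`) are hypothetical stand-ins for subflow tools; the point is that independent lookups can run concurrently and return their results to the main agent.

```python
# Sketch of parallel tool use: independent record lookups run
# concurrently, then their results are collected for the main agent.
# Tool names are illustrative, not part of StackAI.
from concurrent.futures import ThreadPoolExecutor

def check_crm(customer_id: str) -> dict:
    return {"system": "CRM", "customer": customer_id}

def check_billing(customer_id: str) -> dict:
    return {"system": "Billing", "customer": customer_id}

def gather_customer_records(customer_id: str) -> list[dict]:
    # Each tool is independent, so the lookups can run in parallel.
    tools = [check_crm, check_billing]
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(tool, customer_id) for tool in tools]
        return [f.result() for f in futures]
```

By contrast, the report-writing tools must run sequentially because the second tool depends on the first tool's output.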

3. Orchestrate StackAI Projects
You can combine different StackAI Projects into a unified "master project". For example, your organization may have different projects in StackAI: one for IT ticket management, one to chat with HR policies, and another to get analytics from a Sales database.
Keeping these agents in separate projects works well for AI development. These projects are likely built by different business teams, and they can serve as reusable components for future AI expansion across the organization.
However, having multiple projects does not work as well for AI deployment: end users often prefer a single point of entry to interact with AI.
To set up a unified agent, you can use the AI Routing node in conjunction with the StackAI Project node. Describe the intent of the user question in natural language, and the AI Routing node will send the question to the appropriate project. See the AI Routing documentation for more details.

Within each StackAI Project node, you can select the project and configure its input. See the StackAI Project documentation for more details.
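The routing pattern can be sketched conceptually as below. The project names, intent descriptions, and `classify_intent` function are all illustrative assumptions, not StackAI's API; in StackAI, the AI Routing node performs the classification against your natural-language intent descriptions.

```python
# Sketch of intent-based routing: classify the question, then
# dispatch it to the matching project. All names are hypothetical.

PROJECTS = {
    "it_ticketing": "Questions about IT issues, tickets, or hardware",
    "hr_policies": "Questions about HR policies, leave, or benefits",
    "sales_analytics": "Questions about sales figures or analytics",
}

def classify_intent(question: str) -> str:
    # Placeholder keyword router; a real router would use an LLM to
    # match the question against the natural-language descriptions.
    q = question.lower()
    if "ticket" in q or "laptop" in q:
        return "it_ticketing"
    if "leave" in q or "policy" in q:
        return "hr_policies"
    return "sales_analytics"

def route(question: str) -> str:
    project = classify_intent(question)
    return project  # the matching StackAI Project would then run
```

The end user sees a single entry point, while each question still lands in the project built for it.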

Differences between StackAI Project and Subflow Tool
A StackAI Project node assumes a deterministic workflow: there is a rule for how many times a project needs to be called, and each item can be processed independently. Under loop mode, the StackAI Projects can be executed in parallel. See the StackAI Project documentation for more details.
In contrast, the Subflow Tool is designed for dynamic tool use: the AI Agent node can use each tool sequentially or in parallel, and it can decide to use any tool multiple times. Note that sequential tool calls add latency to the workflow.
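The contrast between the two patterns can be sketched as follows. This is a conceptual illustration with hypothetical functions: loop mode behaves like a deterministic parallel map over independent items, while dynamic tool use lets the agent decide at run time which tools to call and how often.

```python
# Sketch contrasting the two orchestration patterns.
from concurrent.futures import ThreadPoolExecutor

def run_project(item: str) -> str:
    # Stand-in for invoking one StackAI Project on one item.
    return item.upper()

def loop_mode(items: list[str]) -> list[str]:
    # Deterministic: exactly len(items) calls, each independent,
    # so the projects can run in parallel.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(run_project, items))

def dynamic_tool_use(question: str, tools: dict) -> list[str]:
    # Dynamic: the agent (here, a trivial planner) picks tools at
    # run time; the number and order of calls is not fixed up front.
    results = []
    for name, tool in tools.items():
        if name in question:
            results.append(tool(question))
    return results
```

In practice, prefer the StackAI Project node when the number of calls is known in advance, and the Subflow Tool when the agent must decide.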