# Audio Input Node

The **Audio Node** lets you upload or record an audio clip as input. The audio is transcribed to text by a speech-to-text model and the resulting transcript is passed to your model.

#### **Providers**

The Audio Node lets you choose between two providers to transcribe your audio:

* `deepgram`: Uses Deepgram's API for audio transcription. Supports multiple models and submodels.
* `whisper-1`: Uses OpenAI's Whisper v1 model. Does not support model or submodel selection (uses a default configuration).

#### **Model**

Available only when using the `deepgram` provider. Defines the main model used for transcription.

* `nova`: Legacy model, fast and lightweight.
* `nova-2`: Latest generation with improved accuracy and speed.
* `enhanced`: Optimized for high-quality audio and complex content.
* `base`: Baseline transcription model with balanced performance.

This field is disabled for `whisper-1`.

#### **Submodel**

Further refines transcription behavior. Available only with `deepgram`.

* `general`: Default submodel for general-purpose transcription.
* *Additional submodels are available, depending on the selected Deepgram model.*

This field is disabled for `whisper-1`.
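The provider, model, and submodel rules above can be summarized in a short sketch. This is purely illustrative: the provider and model names come from this page, but the `validate_audio_node` function and its defaults are hypothetical, not part of the product API.

```python
# Hypothetical validator for the provider/model rules described above.
DEEPGRAM_MODELS = {"nova", "nova-2", "enhanced", "base"}


def validate_audio_node(provider, model=None, submodel=None):
    """Check a provider/model/submodel combination against the rules above."""
    if provider == "whisper-1":
        # whisper-1 uses a default configuration; model and submodel are disabled.
        if model is not None or submodel is not None:
            raise ValueError("whisper-1 does not support model or submodel selection")
        return {"provider": "whisper-1"}
    if provider == "deepgram":
        model = model or "nova-2"  # assumed default, for illustration only
        if model not in DEEPGRAM_MODELS:
            raise ValueError(f"unknown Deepgram model: {model}")
        return {"provider": "deepgram", "model": model, "submodel": submodel or "general"}
    raise ValueError(f"unknown provider: {provider}")
```

For example, `validate_audio_node("whisper-1", model="nova")` raises, because model selection only applies to `deepgram`.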

### Audio Node Settings

If you're using your own audio-to-text provider, you can add your API key here.

### How to use it

1. Add an Audio to Text node to your flow.
2. Connect the Audio to Text node to an LLM node.
3. Mention the Audio to Text node in the LLM node by pressing **"/"** and selecting the Audio to Text node.
4. Add an Output node to your flow.
5. Connect the Output node to the LLM node.

### Expose the Audio to Text node to your users

1. Go to the **Export** tab.
2. Enable the audio node in the **Inputs** section.
3. Press **Save Interface** to save your changes.
4. Your users will now see an audio upload button in the interface.


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.stackai.com/workflow-builder/inputs/audio-input-node.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response contains a direct answer to the question, along with relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
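As a minimal sketch, the query pattern above can be exercised with only the Python standard library. The page URL comes from this document; the example question string is illustrative.

```python
from urllib.parse import quote
from urllib.request import urlopen

DOC_URL = "https://docs.stackai.com/workflow-builder/inputs/audio-input-node.md"


def build_ask_url(question: str) -> str:
    """URL-encode the question and append it as the `ask` query parameter."""
    return f"{DOC_URL}?ask={quote(question)}"


def ask_docs(question: str) -> str:
    """Perform the GET request and return the response body as text."""
    with urlopen(build_ask_url(question)) as resp:
        return resp.read().decode("utf-8")


url = build_ask_url("Which audio formats does the Audio Node accept?")
```

Encoding the question with `quote` keeps spaces and punctuation from breaking the query string.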
