# Image Node

### What is an Image Node?

The Image node allows you to generate visual content from text prompts using AI image generation models such as OpenAI’s DALL·E 3 or Stable Diffusion 3.5.

Use this node to turn text descriptions into visuals. It is well suited to creative tools, content generation, and enhancing user engagement with dynamic imagery.

Common applications include:

* Illustrating chatbot responses
* Creating product mockups or concept art
* Generating visual assets on the fly for user interfaces

### How to use it?

To use the Image node:

* **Input:** Accepts a text string (prompt), often from a user or LLM node.
* **Output:** Returns a generated image that can be previewed or used downstream.

The model processes the prompt and returns a generated image at the configured size.
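Conceptually, the node validates the incoming prompt against its settings and assembles an image-generation request. The sketch below is illustrative only: the function name and payload shape are assumptions in the style of OpenAI-compatible image APIs, not this product's internal implementation.

```python
def build_image_request(prompt: str,
                        model: str = "dall-e-3",
                        size: str = "1024x1024") -> dict:
    """Validate an Image-node prompt and settings, then assemble a
    request payload (illustrative sketch, not the product's API)."""
    supported_sizes = {"1024x1024", "1024x1792", "1792x1024"}
    if not prompt or not prompt.strip():
        raise ValueError("Prompt must be a non-empty string")
    if size not in supported_sizes:
        raise ValueError(f"Unsupported size: {size!r}")
    # One image per request; the node previews the single result.
    return {"model": model, "prompt": prompt, "size": size, "n": 1}

# Example: a prompt handed in from an upstream LLM node
payload = build_image_request("A watercolor fox in a misty forest",
                              size="1792x1024")
```

In practice the returned payload would be sent to the selected model's endpoint, and the response image flows to downstream nodes.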

### Settings

#### Configuration Options

* **Model:** Choose between available image generation models:
  * `OpenAI DALL·E 3`
  * `Stable Diffusion 3.5`
* **Image size:** Select the resolution for the generated image:
  * `1024×1024` (square)
  * `1024×1792` (portrait)
  * `1792×1024` (landscape)
* **API Key:** (Optional) Provide your own key to use a custom instance or higher tier of the selected model.

By adjusting the model and size, you can tailor visual outputs to match your product’s design or artistic needs.
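For instance, a small helper can map a desired orientation to one of the three supported resolutions listed above. The helper name is hypothetical; the size values come directly from the settings table.

```python
def pick_size(orientation: str) -> str:
    """Map an orientation keyword to one of the Image node's
    supported resolutions (hypothetical helper for illustration)."""
    sizes = {
        "square": "1024x1024",
        "portrait": "1024x1792",
        "landscape": "1792x1024",
    }
    if orientation not in sizes:
        raise ValueError(f"Unknown orientation: {orientation!r}")
    return sizes[orientation]
```

This keeps size selection in one place if your flow chooses the resolution dynamically, for example based on where the image will be displayed.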

### How to expose Images externally?

To make generated images visible to end users:

1. Go to the **Export** tab.
2. Enable the Image node in the **Outputs** section under **Fields**.
3. Click **Save Interface**.
4. The image result will now be rendered in your external interface when the flow is triggered.
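Downstream, exported images are commonly delivered either as a URL or as base64-encoded data; the exact payload shape depends on your interface, so treat the following as an assumption to verify against your export format. Assuming a base64 payload, it can be decoded and saved like this:

```python
import base64

def save_image(b64_data: str, path: str) -> int:
    """Decode a base64-encoded image payload and write it to disk.
    Returns the number of bytes written.
    (The base64 payload shape is an assumption, not a guarantee
    of this product's export format.)"""
    raw = base64.b64decode(b64_data)
    with open(path, "wb") as f:
        f.write(raw)
    return len(raw)
```

If your interface returns a URL instead, fetch the image with your HTTP client of choice and write the response body the same way.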
