# Agentic

## What's an agentic app?

An agentic app uses [*Generative Journey® v2*](https://docs.nlx.ai/platform/nlx-platform-guide/flows-and-building-blocks/overview/nodes#generative-journey-r-v2) to let an LLM complete a task by combining conversation, data capture, and tool use inside a single node. Instead of hard-mapping every branch of a workflow, you define the task, assign the tools, and let the agent decide what to collect, what to use, and when to move forward.

With Generative Journey v2, the agent can:

* gather required or optional details naturally from the user
* call tools such as custom data requests, managed integrations, knowledge bases, modalities, flows, and slot-based data capture
* complete multi-step tasks within one conversational experience
* exit the node when defined completion criteria are met or when data capture is complete

<figure><img src="https://2737319166-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FHCxYxhIU0Bqkjj942mGk%2Fuploads%2F4CS3yrdMc4SRRNCX1u7U%2Fimage.png?alt=media&#x26;token=7a6d3b41-5f9c-49ce-b540-f961057cc035" alt=""><figcaption></figcaption></figure>

Imagine a customer says:

> *"I’d like to exchange my shirt for a larger size."*

With agentic Generative Journey:

* the app invokes a flow designed for exchanges or returns
* the Generative Journey v2 node gathers the needed details, such as order number, requested size, and shipping preference
* the agent may use a custom data request to retrieve the order from your backend
* it may send a modality, such as a carousel, so the customer can choose from available replacement sizes or colors
* it may reference a knowledge base for return policy details
* it can trigger another flow or integration to complete the exchange

In practice, this one node can replace many traditional workflow nodes, making complex tasks easier to build and maintain.

## Step 1: Set up tools

Tools are the resources your Generative Journey v2 agent can use to complete its task. Before building the node, identify which tools the agent will need and make sure they are configured.&#x20;

<table data-view="cards"><thead><tr><th></th><th></th><th></th><th data-hidden data-card-target data-type="content-ref"></th></tr></thead><tbody><tr><td><i class="fa-wrench">:wrench:</i></td><td><strong>Custom data request</strong></td><td>Use custom data requests when the agent needs to call your own API or service</td><td><a href="../../integrations/types/data-requests">data-requests</a></td></tr><tr><td><i class="fa-plug">:plug:</i></td><td><strong>Managed integration</strong></td><td>Use managed integrations to connect to supported third-party services</td><td><a href="../../integrations/types/managed-integrations">managed-integrations</a></td></tr><tr><td><i class="fa-book">:book:</i></td><td><strong>Knowledge base</strong></td><td>Use knowledge bases when the agent needs factual content or policy answers</td><td><a href="../../flows-and-building-blocks/knowledge-bases">knowledge-bases</a></td></tr><tr><td><i class="fa-image">:image:</i></td><td><strong>Modalities</strong></td><td>Use modalities when the agent needs to present structured UI, such as a carousel or card</td><td><a href="../../flows-and-building-blocks/modalities">modalities</a></td></tr><tr><td><i class="fa-code-branch">:code-branch:</i></td><td><strong>Flows</strong></td><td>Use flows as tools when you want the agent to hand off to a deterministic workflow and then return</td><td><a href="../../flows-and-building-blocks/overview/flows-and-variables">flows-and-variables</a></td></tr><tr><td><i class="fa-brackets-curly">:brackets-curly:</i></td><td><strong>Data capture</strong></td><td>Use data capture when the agent needs to collect slot values directly from the user</td><td><a href="../../../flows-and-building-blocks/overview/setup#attached-slots">#attached-slots</a></td></tr></tbody></table>

## Step 2: Create a flow

Create a flow that invokes your agent. The flow runs when the AI engine detects the matching user intent, or when it's assigned to an application default behavior (e.g., Welcome or Unknown) in Step 3.

<figure><img src="https://2737319166-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FHCxYxhIU0Bqkjj942mGk%2Fuploads%2FO2uTYnNDKmERbLBFeQxM%2FScreenshot%202025-08-27%20at%2012.02.32%E2%80%AFPM.png?alt=media&#x26;token=75427c2c-0d48-4e7d-829c-a2f3e458afc7" alt=""><figcaption><p>Agentic Generative Journey node</p></figcaption></figure>

{% stepper %}
{% step %}

#### Create the flow

1. Select *Resources* in your workspace menu and choose *Flows*
2. Create a new flow or open an existing one
3. Add routing data if this flow should be invoked by user intent and attach any slots
4. Place a *Generative Journey v2* node on the Canvas and connect it to the flow
   {% endstep %}

{% step %}

#### Configure the node

5. In the node’s side panel:
   * Enter a *Prompt* that explains:
     * the task the agent is responsible for
     * the information it may need to collect
     * any tone, brand, or behavioral guidance
     * anything the agent should avoid
6. Add one or more *Exit conditions* to define when the agent should leave the node and proceed through the rest of the flow
7. Attach the tools the agent should be allowed to use
   * Add an optional *Interim* message: The agent will deliver this message before a tool is invoked (e.g., "Let me take a look," or, "One moment while I check")
8. If using *Data capture*, assign required and optional slots
   * Optionally enable *Exit when complete* for data capture if the node should exit automatically once all required slots are resolved
9. Select the *LLM model* best suited for your task and channel
10. Optionally adjust node settings such as:
    * Max steps
    * Zero-turn mode
    * Timeout
      {% endstep %}

{% step %}

#### Finish the flow

11. Connect the exit paths to the next node (such as a *Basic* node, or a *Redirect* node to a goodbye flow)
12. Click *Save*
   {% endstep %}
   {% endstepper %}
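To make the node configuration concrete, here's a hypothetical prompt for the exchange scenario described earlier. The wording, the `order lookup` data request name, and the policy rules are examples only; adapt the task, data, tone, and guidance to your own use case:

```
You help customers exchange an item for a different size or color.
Collect the order number, the requested replacement size or color, and a
shipping preference. Use the order lookup data request to retrieve the
order before confirming the exchange. Keep a friendly, concise tone.
Do not promise refund amounts; refer policy questions to the knowledge base.
```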

## Step 3: Add the app

{% @arcade/embed flowId="qr7oX89yJfinGFIVPfS5" url="https://app.arcade.software/share/qr7oX89yJfinGFIVPfS5" %}

Once your flow is ready, create an application that will use it.

1. Select *Applications* in your workspace menu
2. Click *New application*
3. Choose *Blank application*
4. Enter a name and create the application

## Step 4: Configure

The *Configuration* tab defines your AI engine, delivery channels, and flows your app will use.

<figure><img src="https://2737319166-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FHCxYxhIU0Bqkjj942mGk%2Fuploads%2FNILgWdhpP8gJTE5vfZcM%2FCapture.PNG?alt=media&#x26;token=29a93534-bd52-4b84-bc04-b8b1411ce8f6" alt=""><figcaption><p>Configuration tab of application</p></figcaption></figure>

{% stepper %}
{% step %}

#### AI Engine

Choose the AI engine that will perform intent recognition and support your application build process.

1. Use the built-in NLX model for a simple setup, or choose a provider already integrated in your workspace, such as Amazon Lex
   {% endstep %}

{% step %}

#### Delivery

Choose the channels where users will interact with your app.

2. \[Optional] For MCP setup, choose the API channel and enable *MCP interface*&#x20;
3. \[Optional] *Hosting*: Host your application as a Touchpoint app on the *conversational.app* domain, which is ideal for previewing its final look during development and for sharing your app externally with collaborators. Select Touchpoint from the default API channel and enable *Hosting* to configure the URL (e.g., `mybusiness.conversational.app`)
4. Select *+ Add channel* and choose one or more channels from the list where you want your application deployed
   * *Channel name*: Defaults to the chosen channel type, but you may overwrite with a custom name
   * *Integration*: For channels requiring a [one-time workspace integration](https://docs.nlx.ai/platform/nlx-platform-guide/integrations/types), select from the applicable integrations created in your workspace
   * *Custom conversation timeout:* Customize the timeout period (in minutes) for a conversation session to end on the selected channel
   * *Escalation*: If any [escalation channels](https://docs.nlx.ai/platform/nlx-platform-guide/flows-and-building-blocks/advanced/escalation-channels) have been created in your workspace, they will be listed here for selection
5. Remaining fields are specific to the channel type selected. See [list of channels](https://docs.nlx.ai/platform/nlx-platform-guide/ai-applications/deployment/managing-channels) for complete instructions
   {% endstep %}

{% step %}

#### Functionality + default behavior

Flows define your application’s behavior. Attach the flow that contains your Generative Journey v2 node:

6. Click *+ Add flow* and select one or more flows from your workspace for your app
7. Click <i class="fa-gear">:gear:</i> *Default behavior* and assign a flow to run during the following situations:
   * *Welcome*: Runs when a new conversation session starts. Use it to greet the user, set expectations, and collect any essentials (e.g., name or intent)
   * *Fallback*: Runs on timeouts, integration failures, state breaks, or when the [incomprehension count](#app-settings) is exceeded and not handled by a knowledge base. Route here to recover gracefully and guide the user forward
   * *Unknown*: Runs when the AI cannot match the user's response to any flow or provided choices. Use it to invoke a knowledge base and check the question against a repository of information
   * *Escalation* (optional): Runs when a node hits the escalation path. Route here to transfer to a human agent
8. Click *Save*
   {% endstep %}

{% step %}

#### Guardrails

Optionally apply [guardrails rules](https://docs.nlx.ai/platform/nlx-platform-guide/governance/guardrails) that were defined in the workspace for any user inputs and/or application outputs:

9. Select *+ Add guardrail* and choose one or more guardrails
10. Click *Save*
11. \[Optional] For applications with multiple assigned channels, you can disable guardrails on a per-channel basis. Select a channel in your app's delivery section and turn on *Skip guardrails* to stop guardrails from applying to its conversations
    {% endstep %}
    {% endstepper %}

## Step 5: Deploy

Deploying an application creates a build: a package of your flows, AI engine, settings, and delivery details in the state they exist when the build is created. You may then deploy a successful build to make it live or roll back to a previous deployment.

{% @arcade/embed flowId="xWEZITHYr3SQGsxqQTdI" url="https://app.arcade.software/share/xWEZITHYr3SQGsxqQTdI" %}

{% stepper %}
{% step %}

#### Build

1. Click the deployment status in the upper right and choose *Build and deploy*
2. Review the *Validation check* for critical errors or detected UX issues in custom flows. Before each new build initiates, a validation check runs to preview potential errors that may cause failed builds. Detected issues are listed with descriptions and potential solutions
3. Provide a *Description* of notable build edits as a changelog
4. Click *Create build*

You can now test your newest build in your workspace using any of the [test chats](https://docs.nlx.ai/platform/nlx-platform-guide/ai-applications/testing).&#x20;

{% hint style="warning" %}
Experiencing a *Failed* build? Select the *All builds* option in the deployment menu and click the failed build to view details on what caused an error.
{% endhint %}
{% endstep %}

{% step %}

#### Deploy build

Channels provide the frontend interface (how users experience your app). Deploying a build pushes your updates through any delivery channels set up on the app, effectively making your app live outside of your NLX workspace.

5. Click the deployment status in the upper right and select *All builds*
6. Choose *Deploy* on a successful build
   * \[Optional] *Deployment languages*: Select the languages to include in the deployment, if multiple are available
7. Click *Create deployment*

{% hint style="info" %}
Enable *One-click deploy* to auto-deploy every new build. From the deployment status, open *Deployment settings* and turn *One-click deploy* on. All future builds will deploy automatically.
{% endhint %}

Once a build is made, flows can be further edited without affecting a deployed application. Only deploying a new build will impact live applications. Only one build can be deployed at a time and deploying any build deactivates the previous one. You can freely alternate between newer and older builds using *Rollback* or *Deploy.*
{% endstep %}

{% step %}

#### Implement

8. Click the *Configuration* tab of your application and choose any channel assigned in the *Delivery* section
9. Choose the *Setup instructions* tab and follow instructions for installing to your [chosen communication channel](https://docs.nlx.ai/platform/nlx-platform-guide/ai-applications/deployment/managing-channels)
   * To finalize MCP setup, follow [these instructions](https://docs.nlx.ai/platform/nlx-platform-guide/deployment/mcp-server#step-3-install) for your chosen MCP client
     {% endstep %}
     {% endstepper %}

{% hint style="warning" %}
Want to take your app offline? Click the deployed build > Select *Deployment* tab in modal > Scroll to *Danger zone* and click *Delete deployment*. The app stays offline until you redeploy.
{% endhint %}

Once you deploy a build, you may use your agentic app outside the NLX workspace in two ways:

<table data-card-size="large" data-view="cards"><thead><tr><th></th><th></th></tr></thead><tbody><tr><td><strong>Delivery channel</strong></td><td>Interact with the app through the channel where it’s installed (e.g., web chat via API channel, voice, SMS)</td></tr><tr><td><strong>NLX hosted</strong></td><td>Open the hosting URL from a deployed build to chat with the app. Hosting must be enabled when you deploy</td></tr></tbody></table>

## App settings

Select your app's *Settings* tab to access the following:

<details>

<summary>General</summary>

Provide a workspace description of your app with any relevant resource tags for better filing and organization.

</details>

<details>

<summary>AI settings</summary>

* *AI description*: Input a brief description of the application for Model Context Protocol (MCP) Clients to reference when using your app as an [MCP server](https://docs.nlx.ai/platform/nlx-platform-guide/ai-applications/deployment/mcp-server)
* *NLX Boost*: Choose *Enable NLX Boost* to enhance the performance of your AI engine with generative AI [intent classification](https://docs.nlx.ai/platform/nlx-platform-guide/flows-and-building-blocks/overview/setup#routing)
  * *Allow NLX Boost to override the NLP*: Enabling this relies on NLX's built-in generative AI to detect user intent (based on your routing data) and route to a flow accordingly, regardless of the AI engine's detected match

</details>

<details>

<summary>Advanced</summary>

* *Child-directed*: For applications subject to COPPA. Utterance information will not be stored if enabled
* *Autocorrection*: For supported NLPs, spell correction will be applied to written user responses
* *Repeat on incomprehension*: If a user's response is unclear, the application will repeat its last message to the user
* *NLP confidence threshold*: When an utterance is at or above this value, your application assumes it matched to a flow with certainty
* *Negative sentiment threshold*: The flow assigned to the *Frustration* default will be triggered when negative sentiment is detected above this threshold by the NLP (e.g., profanity, sarcasm, etc.)
* *Incomprehension count*: Define the number of sequential unclear utterances the application asks to clarify before an escalation or *Unknown* flow is triggered
* *Conversation timeout (min)*: Sets the timeout period for all channels on your application. If a timeout was also configured for a channel when [managing channels](https://docs.nlx.ai/platform/nlx-platform-guide/ai-applications/deployment/managing-channels), that channel's setting will take precedence over the application's setting
* *Default project ID*: Default Project ID when using Google's Dialogflow NLP

</details>
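The threshold settings above interact roughly as follows. This is an illustrative sketch of the decision logic, not the platform's actual routing code; the function name and the sample values are assumptions:

```python
def handle_utterance(nlp_confidence: float,
                     confidence_threshold: float,
                     misses: int,
                     incomprehension_count: int) -> str:
    """Illustrative sketch of how the Advanced settings interact (not NLX's actual code)."""
    if nlp_confidence >= confidence_threshold:
        # At or above the NLP confidence threshold: treated as a certain flow match
        return "route to matched flow"
    if misses + 1 >= incomprehension_count:
        # Sequential unclear utterances exceeded the count: escalation or Unknown flow
        return "trigger escalation or Unknown flow"
    # Otherwise ask again (or repeat the last message if Repeat on incomprehension is on)
    return "ask for clarification"

print(handle_utterance(0.92, 0.75, 0, 3))  # route to matched flow
print(handle_utterance(0.40, 0.75, 2, 3))  # trigger escalation or Unknown flow
```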

<details>

<summary>Automated tests (enterprise)</summary>

See [Automated tests](https://docs.nlx.ai/platform/nlx-platform-guide/ai-applications/setup/automated-tests) for complete setup instructions

</details>

<details>

<summary>Languages (check the list of <a href="../../../flows-and-building-blocks/advanced/translations#supported-languages">supported languages</a> in your workspace)</summary>

* Choose which languages your application supports when in production. Though the flows attached to your application may be set up and developed in several languages, your application ultimately decides which to provide when released.&#x20;
* Expand either *Main language* or any *Supported language* to view advanced settings:
  * *Use native NLP*: Sends a user's utterance to the NLP directly without translation
  * *Region*: Global is chosen by default but may be toggled to EU for compliance and performance when using Dialogflow's NLP
  * *Dialogflow project ID*: If using Dialogflow's NLP, enter the project ID generated on the application's deployment tab
  * *Amazon Lex Voice*: If using Amazon's Lex NLP for voice channels, select the conversational AI voice to be used. You may listen to the selection of [Amazon Polly voices here](https://aws.amazon.com/polly/features/)

{% hint style="info" %}
When an application is created, it also inherits the list of workspace languages that have been pushed to all resources. Under *Main language*, the default language is English (US), while *Supported languages* lists any additional workspace languages applied via *Translations*.&#x20;

Need to adjust languages and translations at the workspace level? See [*Translations*](https://docs.nlx.ai/platform/nlx-platform-guide/flows-and-building-blocks/advanced/translations)*.*&#x20;
{% endhint %}

{% hint style="warning" %}
As some voice channels leverage NLP models to convert audio inputs to text, check your NLP provider's supported languages for phone-enabled applications deployed with NLX.

[**Amazon Lex V2**](https://docs.aws.amazon.com/lexv2/latest/dg/how-languages.html)

[**Google Dialogflow**](https://cloud.google.com/dialogflow/es/docs/reference/language)
{% endhint %}

</details>


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.nlx.ai/platform/nlx-platform-guide/ai-applications/types/agentic.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
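For example, a client can build such a request by URL-encoding the question into the `ask` parameter. A minimal Python sketch (the question text is just an example):

```python
from urllib.parse import urlencode

DOCS_URL = "https://docs.nlx.ai/platform/nlx-platform-guide/ai-applications/types/agentic.md"

def build_ask_url(question: str) -> str:
    # URL-encode the natural-language question into the `ask` query parameter
    return f"{DOCS_URL}?{urlencode({'ask': question})}"

url = build_ask_url("Which tools can a Generative Journey v2 node use?")
print(url)
# Issue an HTTP GET on this URL with any client to receive the answer and sources
```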
