# Voice+ scripts

### What's a Voice+ script?

A Voice+ script creates a multimodal experience that pairs predefined voice prompts with visual assets from your web, mobile, or IoT applications. These scripts guide users through tasks online with synchronized voice and on-screen interactions.

* A customer calls your service number
* Your app greets the caller and identifies their intent
* The system sends an SMS link to your web, mobile, or IoT experience
* Once opened, Voice+ mode begins. Your conversational AI delivers voice prompts in real time as the customer completes each step

{% embed url="https://player.vimeo.com/video/1021730967?app_id=58479&autopause=0&badge=0&player_id=0" %}

To access, click *Resources* in your workspace menu and choose *Voice+ scripts*:

{% @arcade/embed flowId="CLBH0speo9Wqbb3mOs6E" url="https://app.arcade.software/share/CLBH0speo9Wqbb3mOs6E" %}

## Requirements

* [ ] NLP, telephony channel, & SMS function integrated in the workspace
* [ ] Voice+ flow & application created
* [ ] Voice+ script & SDK installed

## Workspace integrations

Before creating a Voice+ script, your workspace needs a few integrations that will connect your app to the right language engine, telephony provider, and messaging function. These integrations allow your app to process user input, deliver voice responses over phone, and share a supporting link or asset over SMS.

<table data-view="cards"><thead><tr><th data-type="number"></th><th></th><th></th><th data-hidden data-card-target data-type="content-ref"></th></tr></thead><tbody><tr><td>1</td><td><strong>NLP engine</strong></td><td>Integrate Natural Language Processing to handle language understanding. Some providers may require specific engines for compatibility (e.g., Amazon Connect requires Amazon Lex)</td><td><a href="../../integrations/types/nlp-engines">nlp-engines</a></td></tr><tr><td>2</td><td><strong>Telephony channel</strong></td><td>Add a voice-enabled communication channel (such as Amazon Connect, Chime SDK, or Twilio) to enable real-time calling and Voice+ interactions</td><td><a href="../../integrations/types/channels">channels</a></td></tr><tr><td>3</td><td><strong>SMS function</strong></td><td><p>Configure a custom API to send an SMS. Include these properties in the <em>Request model</em>:</p><ul><li>Message (string)</li><li>PhoneNumber (string)</li><li>URL (string)</li></ul></td><td><a href="../../integrations/types/data-requests">data-requests</a></td></tr></tbody></table>
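The SMS function's request body follows the *Request model* above. A minimal sketch of the payload your API would receive (field values are illustrative; the message copy and link are your own):

```json
{
  "Message": "Tap the link to continue your request on screen.",
  "PhoneNumber": "+15550123456",
  "URL": "https://example.com/voice-plus?cid=abc-123"
}
```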

## Create flow

Every Voice+ script experience begins as a standard voice flow (similar to an IVR). You’ll design a conversation that greets the user, identifies their intent, and sends them an SMS with a link to your digital experience — the point where Voice+ mode takes over. Once the user opens the link, your app delivers synchronized voice prompts that guide them through each step of the on-screen task.

{% @arcade/embed flowId="14udpd1GEbZRJt1z4geY" url="https://app.arcade.software/share/14udpd1GEbZRJt1z4geY" %}

* Create a new *Flow* or select an existing one
* Add training phrases the AI model (NLP or the NLX native LLM) will use to recognize user intent
* After the greeting or intent-handling logic that begins your flow, place a *Data request* or *Action* node (depending on how your custom function is set up) to send the SMS link
* On the node:
  * Set the PhoneNumber field to the system variable `{system.UserId}`
  * In the URL field, enter your link followed by `?cid={system.conversationID}` to include the conversation ID
  * In the Message field, write a short SMS introducing the link and its purpose
* Place and link a [*Basic* node](https://docs.nlx.ai/platform/nlx-platform-guide/overview/nodes#basic) after the SMS node to confirm that a text was sent successfully and prompt the user to tap the link
* From the *Basic* node, place and link a [*Voice+* node](https://docs.nlx.ai/platform/nlx-platform-guide/overview/nodes#voice) > Click *Save*
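The URL step above appends the conversation ID as a query parameter. In plain-JavaScript terms, the link the user receives is equivalent to the following sketch (the base URL is a placeholder for your own experience link; at runtime the ID comes from `{system.conversationID}`):

```javascript
// Build the SMS link with the conversation ID appended as a query parameter,
// mirroring what the node's URL field produces at runtime.
function buildVoicePlusLink(baseUrl, conversationId) {
  const url = new URL(baseUrl);
  url.searchParams.set("cid", conversationId);
  return url.toString();
}

buildVoicePlusLink("https://example.com/checkin", "abc-123");
// → "https://example.com/checkin?cid=abc-123"
```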

## Script <a href="#add-steps" id="add-steps"></a>

The script is where you set up the series of predefined voice messages your application communicates to users. These messages, known as *Steps*, are synchronized in real time and mapped to the visual assets displayed to users from your web, mobile, or IoT application.

{% @arcade/embed flowId="Qv0V8ukxL5Ouk5EUdLyu" url="https://app.arcade.software/share/Qv0V8ukxL5Ouk5EUdLyu" %}

* Select *Resources* from the workspace menu > Choose *Voice+ scripts* > Create a new script or select an existing one
* Click *+ Add step* > Enter the voice line in the message field
* Repeat for each *Step*
* Click *Save*

{% hint style="info" %}
Avoid open-ended messaging when writing Voice+ scripts.\
Instead of asking, “Would you like to use the card on file?”, say:\
“We can use the card on file. Tap Confirm to authorize or choose another card.”
{% endhint %}

> Optional (expand each *Step* to view):
>
> * *Action*: Enable to apply one of three actions to a *Step*:
>   * *End*: Terminates the call, ending the session (may be applied on the last voice step of your script)
>   * *Escalate*: Proceeds from the *Escalation* path of your [*Voice+ node*](https://docs.nlx.ai/platform/nlx-platform-guide/overview/nodes#voice) in the flow
>   * *Continue*: Proceeds from the *Continuation* path of your *Voice+* node in the flow
> * *Analytics tags*: Assign tags to a step and view traversal rate on an [*Analytics* dashboard](https://docs.nlx.ai/platform/nlx-platform-guide/monitoring/analytics)

#### Using *Context variables* <a href="#optional-using-context-attributes" id="optional-using-context-attributes"></a>

[*Context variables*](https://docs.nlx.ai/platform/nlx-platform-guide/flows-and-building-blocks/advanced/context-variables) may be referenced in any messaging of your script. They're useful for personalizing the experience by referencing dynamic information (e.g., name, date) that is defined and set before the *Voice+* node in your flow.

{% @arcade/embed flowId="Np1C0ZAjhfxkR6vewCvg" url="https://app.arcade.software/share/Np1C0ZAjhfxkR6vewCvg" %}

* Expand an existing step or *+ Add step* > Select the messaging field
* Enter an open curly brace (`{`) and choose the appropriate *Context variable* > Click *Save*
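A step message that references context variables might read as follows (`firstName` and `appointmentDate` are hypothetical variable names set earlier in the flow):

```
Thanks, {firstName}. Tap Confirm to lock in your {appointmentDate} appointment.
```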

## Deploy script <a href="#whats-the-deployment-tab" id="whats-the-deployment-tab"></a>

A deployment packages your Voice+ script into a build that can be released to production. Once deployed, you can access the configurator to install the experience on your web, mobile, or IoT application.

After the initial deployment, AI applications connected to your Voice+ script through their attached flows don't need to be redeployed when your script changes; only changes to the flows themselves require an application redeployment. Updates to your Voice+ steps can be published directly from the *Deployment* tab of your script.

{% @arcade/embed flowId="l0X0guy9aWnRLvBAtiAP" url="https://app.arcade.software/share/l0X0guy9aWnRLvBAtiAP" %}

* Select *Create your first build*
* After a build successfully completes, select *Deploy* from the Production column
* When ready, click *Create deployment*

For failed builds, expand the *Details* link in the *Build status* column for additional information.

## Touchpoint <a href="#journey-configurator" id="journey-configurator"></a>

In preparation for installing the NLX Touchpoint SDK, be sure to download your script steps to a CSV or JSON file using the available *Download* option:

{% @arcade/embed flowId="B5V8bONng2yithmzJGqC" url="https://app.arcade.software/share/B5V8bONng2yithmzJGqC" %}

To keep Voice+ communication working properly and prevent CORS errors, whitelist the URL domain(s) in the [script's *Settings*](#voice-script-settings) and click *Save*.

On the script's *Deployment* tab:

* Select the *Details* link next to the *Deployed* status
* Under Setup instructions, click *Open Voice+ configurator*
* *API key*: If not already entered, copy the API key from your [*Voice+* script's *Settings*](#voice-script-settings) and enter it in the configurator's field
* *Conversation ID*: NLX dynamically generates this for each conversation session, so parse the ID from the `cid` query parameter of the link the user opened. Sample code: <https://developer.mozilla.org/en-US/docs/Web/API/URLSearchParams/get#examples>
* Install the code snippet with the applicable step IDs on each page of your frontend UI
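Reading the conversation ID can be done with `URLSearchParams`, as in the MDN example linked above. A minimal sketch (the `cid` parameter name matches the link built in your flow):

```javascript
// Extract the conversation ID (cid) from the link the user opened via SMS.
function getConversationId(href) {
  return new URL(href).searchParams.get("cid"); // null if the parameter is absent
}

getConversationId("https://example.com/checkin?cid=abc-123"); // → "abc-123"
```

In the browser, pass `window.location.href`.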

{% @arcade/embed flowId="JKtBi2DSVyNmj8NzEXPW" url="https://app.arcade.software/share/JKtBi2DSVyNmj8NzEXPW" %}

## Voice+ script settings

* *API key*: Enter your API key or auto-generate one for authorization
* *Whitelisted domains*: Add your URL to this section to prevent CORS errors and allow for proper Voice+ script transmission. Supports both exact string matches and regex patterns:
  * *Exact*: `https://example.com`
  * *Regex*: `^https:\\/\\/[a-z0-9\\-_]+\\.herokuapp\\.com\\/?$`
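To sanity-check a pattern before saving it, you can test it locally. A sketch in JavaScript, using the unescaped form of the Heroku pattern above (the platform's own matching behavior is assumed to be standard regex matching):

```javascript
// Regex form of the whitelist pattern above (single-escaped for a JS literal).
const pattern = /^https:\/\/[a-z0-9\-_]+\.herokuapp\.com\/?$/;

pattern.test("https://my-app.herokuapp.com"); // → true
pattern.test("https://evil.example.com");     // → false
```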

{% hint style="info" %}
To delete a *Voice+ script*, select the *Delete* option under *Danger zone*. If a [*Voice+* node](https://docs.nlx.ai/platform/nlx-platform-guide/overview/nodes#voice) is referenced in a flow, modify the affected flows and create a [new application build/redeploy](https://docs.nlx.ai/platform/nlx-platform-guide/ai-applications/types/core#deployment) to apply the change.
{% endhint %}
