Voice+ script

From start to finish, set up a multimodal Voice+ script experience with NLX

What's a Voice+ script?

A Voice+ script provides a multimodal experience that pairs defined voice prompts with visual assets from web, mobile, or IoT applications. A Voice+ script synchronizes voice prompts with the planned progression of steps a customer takes to complete a task online.

User: [Calls a phone number]
AI: How may I help you?
User: I'd like to make a return
AI: Sure, I can help with that. I've just texted you a link.
AI: Please put your phone on speaker so I can guide you.

An SMS containing a link to your web/mobile/IoT asset is texted to the customer. From there, Voice+ mode initiates, and your conversational AI application delivers voice prompts from a defined script, triggered in real time as the customer interacts with the digital experience.

Watch a Voice+ script in action:


Checklist

You'll complete the following to successfully launch your Voice+ script application:

  • Step 1: Integrations

  • Step 2: Create a Voice+ script

  • Step 3: Construct a flow

  • Step 4: Deploy application

  • Step 5: Deploy script & install Touchpoint

If desired, you may add the following to an existing phone application in your workspace.


Step 1: Integrations

A one-time integration of a Natural Language Processing (NLP) engine must be completed in your workspace. Note that some channel providers require specific engines for compatibility (e.g., Amazon Connect requires the Amazon Lex engine).

A one-time integration of a phone-enabled communication channel must be completed in your workspace.

A one-time setup of an Action or Data request that sends an SMS:

  • Create a SendSMS Action or Data request

    • Be sure your Action or Data request has the following properties defined in the Request model schema (see the sketch after this list):

      • Message (string)

      • PhoneNumber (string)

      • URL (string)
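The Action or Data request points at an API you host that performs the actual send. Below is a rough sketch of such an endpoint, assuming an Express server and Twilio as the SMS provider; both are illustrative choices, not NLX requirements, and the route name is hypothetical:

```typescript
// Hypothetical endpoint backing the SendSMS Action/Data request.
// NLX posts the Request model properties; Twilio is one possible SMS provider.
import express from "express";
import twilio from "twilio";

const app = express();
app.use(express.json());

const client = twilio(process.env.TWILIO_ACCOUNT_SID, process.env.TWILIO_AUTH_TOKEN);

// Mirrors the three Request model properties defined above.
interface SendSmsRequest {
  Message: string;     // Text to send to the customer
  PhoneNumber: string; // Destination number (the caller's phone)
  URL: string;         // Link to the Voice+ enabled page
}

app.post("/send-sms", async (req, res) => {
  const { Message, PhoneNumber, URL } = req.body as SendSmsRequest;
  try {
    await client.messages.create({
      body: `${Message} ${URL}`,            // Append the Voice+ link to the message text
      from: process.env.TWILIO_FROM_NUMBER, // Your provisioned sending number
      to: PhoneNumber,
    });
    res.status(200).json({ ok: true });
  } catch (err) {
    res.status(502).json({ ok: false, error: String(err) });
  }
});

app.listen(3000);
```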


Step 2: Create a Voice+ script

Voice+ experiences pair voice prompts from an AI assistant with a digital asset (website, mobile app, etc.) to help guide users through a self-service task. Begin by identifying the elements of your digital asset that need to be mapped to a voice line (pages, buttons, etc.) and create a complete script:

  • Select Resources in your workspace menu > Choose Voice+

  • Select New script option > Name your Voice+ experience

  • Click Save

  • Click + Add step > Enter the AI's voice line in the message field

  • Repeat for each step

  • On the final step of your Voice+ script, enable the Action toggle and choose one of the following:

    • End: Terminates the phone call after the AI assistant delivers the voice step

    • Continue: Proceeds from the Continue edge of the Voice+ node in your intent flow (see Step 3)

  • Click Save

  • Download your steps to a .csv or .json file using the Download link (illustrated below)
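The downloaded file pairs each step's generated ID with its voice line; your developer will need these step IDs in Step 5. The shape below is illustrative only, as the exact export fields may differ:

```typescript
// Illustrative only: field names are assumptions, not the exact export format.
const voicePlusSteps = [
  { stepId: "step-id-1", message: "Tap 'Start a return' at the top of the page." },
  { stepId: "step-id-2", message: "Select the item you'd like to send back." },
];
```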

🧠 Looking for more? See Add + download script


Step 3: Construct a flow

As all Voice+ script experiences begin with a traditional voice experience (IVR), you'll construct a flow that sends the SMS containing a link to your digital asset and initiates Voice+ mode. Begin by identifying the tasks your conversational AI application will automate and organize them into individual topics handled by flows. Determine the sequence of steps and messaging that the conversational application follows to assist a user with the task. The conversation workflow is assembled in a flow's Canvas with a pattern of nodes similar to a flow diagram.

Each flow is invoked when your chosen AI model identifies user intent from a user's utterance ("I want to order room service") and matches it to a flow you've created (OrderRoomService). This match is done based on the training data you provide for your AI model.

  • Select Flows from workspace menu > Choose New flow

  • Enter a descriptive name (no spaces or special characters) > Select Save

  • Complete Flow setup by attaching training data and slots

  • After adding a greeting with a Basic node to the Canvas, place an Action node (or a Data request node, if your SMS event is set up as a Data request) that will send the SMS link. This requires the custom SMS-sending API from Step 1 to already be set up in your workspace. At a minimum, be sure your Request model defines the PhoneNumber and URL properties

  • Within your flow, select the Action or Data request node > Use the system variable {system.userId} for the Phone number field of your payload on the node's side panel

  • In your URL payload field, enter your URL followed by the query parameter ?cid={system.conversationId} (see the example after this list)

  • Place and link a Basic node after the Action or Data request to indicate a text was successfully sent to the user

  • From the Basic node, place and link to a Voice+ node > Click Save
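For example, if your digital asset lived at https://example.com/returns (a hypothetical address), the URL payload field would contain:

```
https://example.com/returns?cid={system.conversationId}
```

NLX resolves the {system.conversationId} variable at send time, so each texted link carries that session's own conversation ID.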

🧠 Looking for more? See Intro to flows


Step 4: Deploy application

Now you'll create the conversational AI application users will interface with. This step involves attaching all flows you want your application to access, defining flows to handle certain behaviors, setting up the voice channel your application supports, and deploying.

  • Select Applications from workspace menu > Choose New application

  • Enter a descriptive name > Click Save

  • Click the Flows tab of the application > Select Attach flows > Attach one or more of the flows you created to make them available to your application > Click Attach selected, then Save

  • Select the Default behaviors tab of the application > Assign any attached flows to the application's behaviors (if you intend your Voice+ flow to be the only flow your application handles, assign it to the Welcome behavior) > Click Save

  • Select Channels tab of application > Expand the voice channel your application will support (e.g., Amazon Connect, Amazon Chime SDK, etc.) > Click + Create channel

  • Enter required details for voice > Click Create channel > Click Save

A build constructs the array of workflows that make up your application and updates any changes made to your flows, while deploying makes a successful build live:

  • Click Deployment tab of application > Select Create or Review & build

  • Wait for validation to complete > Select Create build*

  • When satisfied with a successful build, click Deploy

*After a build status appears as 🟢 Built, you may use the Test feature to test the conversation with your application using the latest build.

🧠 Looking for more? See Manage channels


Step 5: Deploy script & install Touchpoint

You'll need to install NLX's Voice+ SDK on each screen of your digital asset so the applicable API calls can be made to trigger voice lines where you've defined them:

  • Select Resources in your workspace menu > Choose Voice+

  • Choose the Voice+ script you completed in Step 2 > Click the Deployment tab of your Voice+ script

  • Choose Review & build > Click Create build

  • After a successful build, select Deploy from the Production column > Click Create deployment

  • Select Details link next to the Deployed status > Under Setup instructions, click Open script configurator

    • API key: You may auto-generate an API key under the Voice+ script's Settings tab, Save, and then enter it in the configurator's field

    • Conversation ID: NLX dynamically generates a conversation ID for each session with a user. You may parse the ID from the user's URL (see the sketch after this list). Sample code: https://developer.mozilla.org/en-US/docs/Web/API/URLSearchParams/get#examples

  • Install the code snippet with the applicable step IDs (downloaded in Step 2) on each page of your frontend
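On each page, you can recover the conversation ID with URLSearchParams, as in the MDN examples linked above. A minimal browser-side sketch:

```typescript
// Read the conversation ID appended as ?cid=... to the SMS link in Step 3.
const params = new URLSearchParams(window.location.search);
const conversationId = params.get("cid"); // null if the page was opened without the link
```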

🧠 Looking for more? See the Developer docs
