API
Learn to set up an API communication channel for your application in your NLX workspace
API channel settings
The API channel is automatically available for all new applications in your workspace. It serves as the default delivery channel for connecting your conversational AI to external clients or frontends.
Select the Configuration tab of your application and click the API channel:

General (API channel)
Custom conversation timeout: Set the timeout (in minutes) of user inactivity after which the conversation expires
API key: Matched against the HTTP headers of requests made to the conversational AI application
Whitelisted domains: Define which domains can make CORS calls to your API channel. Supports exact URLs or regex patterns (see the sketch after this list).
Example (Exact): https://example.com
Example (Regex): ^https:\/\/[a-z0-9\-_]+\.herokuapp\.com\/?$
Regex patterns must begin with ^ and end with $.
MCP interface: Exposes a Model Context Protocol (MCP) interface on your API channel for every flow attached to the application that also has the MCP setting enabled (a client sketch follows this list)
Once turned on, provide an AI description in the application's Settings tab for the MCP client to reference. Check out our complete MCP guide.
Streaming: When an agentic Generative Journey calls MCP-enabled flows, predefined messages within those flows are spoken aloud to the user through the native voice channel
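The regex whitelisting behavior referenced above can be illustrated with a small sketch. This is illustrative only, not NLX's actual matching code, and the entry values are hypothetical; it simply shows how an Origin can satisfy either an exact entry or a ^...$-anchored regex pattern.

```ts
// Illustrative only: a hypothetical check of an Origin header against exact
// and regex whitelist entries. NLX's real matching logic is not published;
// this just demonstrates the ^...$ anchoring requirement.
const exactEntries = ["https://example.com"];
const regexEntries = [/^https:\/\/[a-z0-9\-_]+\.herokuapp\.com\/?$/];

function isWhitelisted(origin: string): boolean {
  return (
    exactEntries.includes(origin) ||
    regexEntries.some((pattern) => pattern.test(origin))
  );
}

console.log(isWhitelisted("https://my-app.herokuapp.com")); // true (regex match)
console.log(isWhitelisted("https://evil.example.org"));     // false (no match)
```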
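To exercise the MCP interface from code, a minimal client sketch using the official MCP TypeScript SDK could look like the following. The endpoint URL, API key, and nlx-api-key header name are placeholders and assumptions; take the real values (and the complete walkthrough) from your channel's setup instructions and the MCP guide.

```ts
// Minimal sketch, assuming the official MCP TypeScript SDK and a Streamable
// HTTP endpoint. The URL and auth header are placeholders; copy the real
// values from your API channel's setup instructions.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

const client = new Client({ name: "nlx-mcp-demo", version: "1.0.0" });

const transport = new StreamableHTTPClientTransport(
  new URL("https://YOUR_MCP_ENDPOINT"), // placeholder
  { requestInit: { headers: { "nlx-api-key": "YOUR_API_KEY" } } }, // assumed header name
);

await client.connect(transport);

// Each MCP-enabled flow attached to the application should surface as a tool.
const { tools } = await client.listTools();
console.log(tools.map((tool) => tool.name));
```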
Voice (API channel)
ON by default for new applications. Equips your application with voice capabilities and allows your users to converse with your AI application through their microphone/speaker setup. See the list of supported languages
Text-to-Speech converts your AI’s responses into spoken voice for your end users. This determines how your voice agent sounds when it speaks.
Provider: Choose a TTS service (Inworld AI, ElevenLabs, Hume, OpenAI, or Amazon Polly). If you’ve set up your own Inworld instance, create an integration first, then select it here
Voice: Pick from available voice personas offered by the selected provider. When selecting a persona, supported language tags will appear, showing which languages that voice can speak or automatically translate your AI’s responses into
Interruptible: Allow users to interrupt the AI while it’s speaking for a more natural conversational flow
Speech-to-Text converts a user’s spoken input into text for your AI to understand. This determines how your voice agent listens and interprets speech.
Provider: Defaults to the built-in Deepgram service
Model: Choose from available Deepgram models to fine-tune how incoming audio is processed and transcribed
Touchpoint (API channel)
Hosting: Host your application as a Touchpoint app via the conversational.app domain. Ideal for previewing its final look during development and sharing your app externally with collaborators. Enable Hosting to configure the URL (e.g., mybusiness.conversational.app)
Communication style: Choose how users will interact with your Touchpoint app. Text for chat-only experiences, or Voice to enable voice input/output
Layout: Choose how Touchpoint appears on your site or hosted page
Full screen: A fully immersive standalone interface
Embedded: ON by default. Allows you to embed Touchpoint directly inside your webpage or application without a launcher icon. Ideal for NLX hosting (conversational.app domain)
Half page: A split-screen layout ideal for companion experiences
Color mode: Choose Light or Dark mode to match your brand or product environment
Font: Set the typeface used in the Touchpoint UI. This controls all visible text inside the hosted interface
Accent color: Pick the highlight color used for buttons, links, and interface accents
Access setup instructions & Touchpoint SDK
Be sure to create an application build and deploy it.
Once deployed, select the Configuration tab of your application again and choose the default API channel. Click the Setup instructions tab in the modal. Implement using either of the following:
NLX Touchpoint SDK (recommended, includes UI + Voice+; see the sketch after this list)
Your own custom front-end built with REST/WebSocket calls to the API channel
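As a rough illustration of the SDK path, a browser embed might look like the sketch below. The package import, create() call, and option names (applicationUrl, the nlx-api-key header, languageCode, colorMode, theme) are assumptions rather than guaranteed API, and all values are placeholders; treat the Setup instructions modal as the source of truth for your workspace.

```ts
// Illustrative sketch of embedding Touchpoint in a web page. The import,
// option names, and header name are assumptions -- copy the authoritative
// snippet from the Setup instructions modal.
import { create } from "@nlxai/touchpoint-ui";

const touchpoint = await create({
  config: {
    applicationUrl: "YOUR_APPLICATION_URL", // from the setup instructions
    headers: { "nlx-api-key": "YOUR_API_KEY" }, // the channel's API key
    languageCode: "en-US",
  },
  // Appearance options mirroring the Touchpoint settings above (names assumed).
  colorMode: "dark",
  theme: { accentColor: "#FF4B4B", fontFamily: "Inter, sans-serif" },
});
```

A custom front-end takes the other route listed above, making its own REST/WebSocket calls to the API channel, authenticated with the same API key.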