# API

{% hint style="info" %}
Check out our [REST API documentation](https://docs.nlx.ai/platform/nlx-platform-guide/ai-applications/deployment/managing-channels/creating-an-api-channel/rest-api) for more info.
{% endhint %}

### API channel settings

The API channel is automatically available for all new applications in your workspace. It serves as the default delivery channel for connecting your conversational AI to external clients or frontends.

Select the *Configuration* tab of your application and click the API channel:

<figure><img src="https://2737319166-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FHCxYxhIU0Bqkjj942mGk%2Fuploads%2F7nc4vpyZWbKexjj7fDgd%2FAPI%20channel.png?alt=media&#x26;token=d25b6b42-2a32-4b10-b846-47e871311698" alt=""><figcaption></figcaption></figure>

#### **General (API channel)**

* *Custom conversation timeout*: Set the timeout (in minutes) after which a conversation expires due to user inactivity
* *API key*: Matched against the HTTP headers from the conversational AI application
* *Whitelisted domains*: Define which domains can make CORS calls to your API channel.
  * Supports exact URLs or regex patterns.
  * Example (Exact): `https://example.com`
  * Example (Regex): `^https:\/\/[a-z0-9\-_]+\.herokuapp\.com\/?$`
  * Regex patterns must begin with `^` and end with `$`.
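As a quick sanity check, a whitelisting pattern can be exercised directly in JavaScript. The pattern below mirrors the Heroku example above; the test origins are illustrative:

```javascript
// Whitelisted-domain regex: must begin with ^ and end with $.
const pattern = /^https:\/\/[a-z0-9\-_]+\.herokuapp\.com\/?$/;

// Illustrative origins to check against the pattern.
const origins = [
  "https://my-app.herokuapp.com",  // allowed
  "https://my-app.herokuapp.com/", // allowed (optional trailing slash)
  "http://my-app.herokuapp.com",   // rejected: not https
  "https://example.com",           // rejected: different domain
];

for (const origin of origins) {
  console.log(origin, pattern.test(origin));
}
```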
* *MCP interface*: Exposes a Model Context Protocol (MCP) interface on your API channel for all flows attached to the application that have the [MCP setting enabled](https://docs.nlx.ai/platform/nlx-platform-guide/flows-and-building-blocks/overview/setup#model-context-protocol-mcp)
  * Once turned on, provide an [AI description](https://docs.nlx.ai/platform/nlx-platform-guide/setup#application-settings) in the application's *Settings* tab for the MCP client to reference. Check out our complete [MCP guide](https://docs.nlx.ai/platform/nlx-platform-guide/ai-applications/deployment/mcp-server)
* *Streaming*: When an [*agentic Generative Journey*](https://docs.nlx.ai/platform/nlx-platform-guide/ai-applications/types/agentic) calls MCP-enabled flows, predefined messages within those flows are spoken aloud to the user through the native voice channel

#### **Voice (API channel)**

{% hint style="info" %}
Want to *hear* how your AI delivers before going to production? Enable voice mode in a [test session](https://docs.nlx.ai/platform/nlx-platform-guide/testing#application-test).
{% endhint %}

ON by default for new applications. Equips your application with voice capabilities, allowing your users to converse with your AI application through their microphone and speaker setup. See the list of [supported languages](https://docs.nlx.ai/platform/nlx-platform-guide/flows-and-building-blocks/advanced/translations#supported-languages).

*Text-to-Speech* converts your AI’s responses into spoken voice for your end users. This determines how your voice agent sounds when it speaks.

* *Provider*: Choose a TTS service (Inworld AI, ElevenLabs, Hume, OpenAI, Rime, or Amazon Polly).\
  If you’ve set up your own Inworld instance, create an integration first, then select it here
* *Voice*: Pick from available voice personas offered by the selected provider. When selecting a persona, supported language tags will appear, showing which languages that voice can speak or automatically translate your AI’s responses into
* *Interruptible*: Allow users to interrupt the AI while it’s speaking for a more natural conversational flow

*Speech-to-Text* converts a user’s spoken input into text for your AI to understand. This determines how your voice agent listens and interprets speech.

* *Provider*: Defaults to the built-in Deepgram service
* *Model*: Choose from [available Deepgram models](https://developers.deepgram.com/docs/model) to fine-tune how incoming audio is processed and transcribed

#### Touchpoint (API channel)

* *Hosting*: Host your application as a Touchpoint app via the *conversational.app* domain. Ideal for previewing its final look during development and sharing your app externally with collaborators. Enable *Hosting* to configure the URL (e.g., `mybusiness.conversational.app`)
* *Communication style*: Choose how users will interact with your Touchpoint app: *Text* for chat-only experiences, or *Voice* to enable voice input/output
* *Layout*: Choose how Touchpoint appears on your site or hosted page
  * *Full screen*: A fully immersive standalone interface
    * *Embedded*: ON by default. Allows you to embed Touchpoint directly inside your webpage or application without a launcher icon. Ideal for NLX hosting (conversational.app domain)
  * *Half page*: A split-screen layout ideal for companion experiences
  * *Mini*: A traditional small interface that's unobtrusive
* *Color mode*: Choose Light or Dark mode to match your brand or product environment
* *Font*: Set the typeface used in the Touchpoint UI. This controls all visible text inside the hosted interface
* *Accent color*: Pick the highlight color used for buttons, links, and interface accents
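The appearance settings above can be gathered into a single configuration object when embedding Touchpoint programmatically. The property names below are illustrative assumptions, not the SDK's authoritative option names; check the Touchpoint SDK documentation for the exact shape:

```javascript
// Hypothetical sketch of Touchpoint appearance settings as one config object.
// All property names here are assumptions for illustration only.
const touchpointAppearance = {
  input: "text",     // communication style: "text" or "voice"
  colorMode: "dark", // "light" or "dark", matched to your brand
  theme: {
    fontFamily: "Inter, sans-serif", // typeface for all visible Touchpoint text
    accentColor: "#2663da",          // highlight color for buttons and links
  },
};

console.log(JSON.stringify(touchpointAppearance, null, 2));
```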

### Access setup instructions & Touchpoint SDK

1. Be sure to create an [application build and deploy it](https://docs.nlx.ai/platform/nlx-platform-guide/setup#deployment).
2. Once deployed, select the *Configuration* tab of your application again and choose the default API channel. Click the *Setup instructions* tab in the modal. Implement using:
   * [NLX Touchpoint SDK](https://app.gitbook.com/s/2VnkvXtkrR2qhkVBmB1l/conversation-sdk/touchpoint) (recommended, includes UI + Voice+)
   * Your own custom front-end built with [REST/WebSocket](https://app.gitbook.com/s/2VnkvXtkrR2qhkVBmB1l/) calls to the API channel
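For the custom front-end route, a minimal request sketch might look like the following. The endpoint URL, header name, and payload shape are placeholders, not real NLX values; copy the actual values from the Setup instructions modal and the REST API documentation:

```javascript
// Hypothetical helper that builds a request to the API channel.
// CHANNEL_URL and the header name are placeholders for illustration only.
const CHANNEL_URL = "https://example.invalid/your-api-channel"; // placeholder
const API_KEY = "your-api-key"; // matched against the channel's API key setting

function buildMessageRequest(text) {
  return {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "x-api-key": API_KEY, // header name is an assumption; check Setup instructions
    },
    // Payload shape is illustrative; see the REST API documentation.
    body: JSON.stringify({ text }),
  };
}

// Usage: fetch(CHANNEL_URL, buildMessageRequest("Hello"));
```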

