# MCP server

### What's an application integrated through MCP?

*Model Context Protocol (MCP)* is a standardized protocol for how Large Language Models (LLMs) integrate and engage with external systems. Before MCP, builders had to provide a multitude of tools via REST APIs to give an LLM the ability to perform even the simplest of tasks, such as:

* Check the bank balance of a user
* Provide an activity recommendation local to a user

NLX’s MCP support allows you to turn any NLX application into an MCP Server, giving an MCP Client the ability to follow and perform tasks outlined in the application's [flow(s)](https://docs.nlx.ai/platform/nlx-platform-guide/flows-and-building-blocks/overview) and to easily [pass context](https://docs.nlx.ai/platform/nlx-platform-guide/flows-and-building-blocks/overview/flows-and-variables#externally-defined) to NLX when the MCP Client interfaces with a user.

Your NLX flows effectively become *MCP tools* that you provide to a supporting MCP Client to boost its capabilities. This process is done entirely without writing code or exposing your systems and services to new vulnerabilities.
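For a sense of what this looks like in practice, here is a rough sketch of the JSON-RPC request an MCP Client might send to invoke one of your flows as a tool. The flow name and arguments below are hypothetical and mirror the food-recommendation example used later in this guide:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "MCPFoodRecommendation",
    "arguments": {
      "cuisine": "Portuguese",
      "location": "Atlanta"
    }
  }
}
```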

### Requirements

* [ ] Create MCP flow
* [ ] Deploy app
* [ ] Provide to MCP Client

### Step 1: Create MCP flow

{% hint style="success" %}
Est. time to complete: \~10 minutes
{% endhint %}

Begin by identifying the tasks your conversational AI application will automate and organize them into individual topics handled by [flows](https://docs.nlx.ai/platform/nlx-platform-guide/flows-and-building-blocks/overview). Determine the sequence of steps and messaging that the conversational application follows to assist a user with the task. The conversation workflow is assembled in a flow's Canvas with a pattern of nodes similar to a flow diagram.

Each flow is invoked when your chosen AI model identifies customer intent from a user's query ("Where can I get Portuguese food in Atlanta?") and matches it to a flow you've created (MCPFoodRecommendation).

Variables the flow needs, which the LLM should extract in conversation and pass along to NLX, are set up first in the flow's *Settings*:

<figure><img src="https://2737319166-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FHCxYxhIU0Bqkjj942mGk%2Fuploads%2FJemUkaM77UHLGndlo2KA%2Fimage.png?alt=media&#x26;token=751719d0-300a-4d75-ad38-f22037820858" alt=""><figcaption></figcaption></figure>

1. Select *Resources* from workspace menu > Choose *Flows* > Click *New flow*
2. Enter a descriptive name (no spaces or special characters) > Select *Save*
3. Choose *Settings* (gear icon) in flow toolbar
4. From the *Routing* tab:
   * In the *AI description* field, enter a concise description of the flow's purpose; the LLM references this description when deciding to invoke the flow (e.g., *A tool for providing food recommendations based on cuisine and location*)
5. From the *MCP* tab:
   * Enable *MCP* toggle
     * Provide a unique and concise input name (no spaces or special characters)
       * Enter any optional input schema containing [variable(s)](https://docs.nlx.ai/platform/nlx-platform-guide/flows-and-building-blocks/overview/flows-and-variables) that the LLM interfacing with a user will pass along (see the example schema after these steps)
       * Enter a brief description for each property so the LLM understands its purpose and context (e.g., a `cuisine` property in a food recommendation flow might have the description `The cuisine for the recommendation`)

<figure><img src="https://2737319166-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FHCxYxhIU0Bqkjj942mGk%2Fuploads%2Fa3z4JXo4pGLOatXQH9H1%2Fimage.png?alt=media&#x26;token=d8907dcc-b19b-45f2-9eeb-1d9ed54a5c29" alt=""><figcaption></figcaption></figure>

6. Click *Save*
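As a rough sketch of what the optional input schema from step 5 might look like for the food-recommendation example: MCP tool inputs are conventionally described with JSON Schema, so a minimal schema could resemble the following. The property names, types, and descriptions here are illustrative, and the exact format NLX expects may differ:

```json
{
  "type": "object",
  "properties": {
    "cuisine": {
      "type": "string",
      "description": "The cuisine for the recommendation"
    },
    "location": {
      "type": "string",
      "description": "The city or area to search in"
    }
  },
  "required": ["cuisine", "location"]
}
```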

On any node of the flow, enter an open curly brace `{` and reference the MCP input variable you want to use as an output in messaging, payload fields, *Split* node conditions, etc.:

<figure><img src="https://2737319166-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FHCxYxhIU0Bqkjj942mGk%2Fuploads%2FOVKR7lzBfVHXYbGO9NAV%2Fimage.png?alt=media&#x26;token=01c6e71d-3604-4768-af82-7adef92af975" alt=""><figcaption></figcaption></figure>
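For example, a node message might read as follows, assuming the hypothetical `cuisine` and `location` inputs from the schema sketch above and that inline `{variable}` references resolve at runtime:

```
Here are a few {cuisine} options near {location} you might enjoy.
```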

### Step 2: Deploy app

{% hint style="success" %}
Est. time to complete: \~2 minutes
{% endhint %}

Now you'll set up and deploy an application that will serve as an MCP server.

1. Select *Applications* from workspace menu > Choose *New application*
2. Click *Blank application* from the available options > Choose *Custom*
3. Provide a name for your application > Click *Create application*
4. On *Configuration* tab of application:
   * Under *Delivery* section, select the MCP channel
     * Enable *MCP interface* toggle > Click *Update channel*
   * Under *Functionality* section, attach one or more flows created in previous step to make available to your application
     * Click *Default behavior* (gear icon) > Assign any attached flow to the application's behaviors
5. Select *Settings* tab of application > Under *AI settings*, enter a concise description of the application's purpose into the *AI description* field
6. Click *Save*

A build now constructs a package of your application with a snapshot of the current state of your flows, languages, and application setup.

7. Select deployment status in upper right > Choose *Build and deploy*
8. Review the *Validation check* for critical errors or detected UX issues in custom flows. Before each new build initiates, a validation check runs to preview potential errors that could cause the build to fail. Detected issues are listed with descriptions and potential solutions.
   * You may provide a *Description* of notable build edits as a changelog
9. Click *Create build*
10. Click deployment status in upper right > Select *All builds*
11. Choose *Deploy* on successful build > Click *Create deployment*

### Step 3: Provide to MCP Client

{% hint style="success" %}
Est. time to complete: \~3 minutes
{% endhint %}

The final step is to prepare your MCP Client to access and use your NLX application as an MCP Server.

{% @arcade/embed flowId="Ck72joWhJFXyIn36kK74" url="https://app.arcade.software/share/Ck72joWhJFXyIn36kK74" %}

1. From your application in your NLX workspace, choose its *Configuration* tab
2. Select the MCP channel listed under the *Delivery* section
3. Follow instructions for your preferred MCP Client:

{% tabs %}
{% tab title="Claude Desktop" %}
4\. Follow these [instructions for setting up Claude Desktop](https://modelcontextprotocol.io/quickstart/user)
5\. In Claude Desktop, select the *Settings* menu > Choose *Developer* tab > Click *Edit Config*
6\. Claude Desktop will open the file explorer to the `claude_desktop_config.json` file

* Paste the MCP server JSON copied from the setup instructions into the `claude_desktop_config.json` file

7. Relaunch Claude Desktop and open the conversation settings > Check that your new MCP server is enabled

Claude Desktop can now use your NLX MCP server when relevant.

Example MCP server JSON configuration:

```json
{
  "mcpServers": {
    "YourApplicationName": {
      "command": "npx",
      "args": [
        "-y",
        "@nlxai/mcp-nodejs-server"
      ],
      "env": {
        "NLX_API_KEY": "your API key",
        "NLX_APP_URL": "your MCP URL"
      }
    }
  }
}
```

{% endtab %}

{% tab title="Cursor" %}
4\. Follow these [instructions for setting up Cursor](https://docs.cursor.com/chat/tools#mcp-servers)
5\. Click the *Add to Cursor* button under the MCP instructions from your application's MCP channel
6\. Test by opening a new chat tab and asking Cursor to make a request to your new MCP tool directly

Cursor can now use your NLX MCP server when relevant.

Example MCP server JSON configuration:

```json
{
  "mcpServers": {
    "YourApplicationName": {
      "command": "npx",
      "args": [
        "-y",
        "@nlxai/mcp-nodejs-server"
      ],
      "env": {
        "NLX_API_KEY": "your API key",
        "NLX_APP_URL": "your MCP URL"
      }
    }
  }
}
```

{% endtab %}

{% tab title="Postman" %}
4\. Follow these [instructions for setting up Postman](https://learning.postman.com/docs/postman-ai-agent-builder/mcp-requests/create/)
5\. In Postman, select *New Request* > Click *MCP*
6\. Postman will open a new MCP request tab

* Paste the MCP server JSON copied from the MCP instructions into Postman
* Click *Connect*

7. Click *Load Capabilities* to explore your MCP-enabled flows and test them

Postman can now use your NLX MCP server when relevant.

Example MCP server JSON configuration:

```json
{
  "mcpServers": {
    "YourApplicationName": {
      "command": "npx",
      "args": [
        "-y",
        "@nlxai/mcp-nodejs-server"
      ],
      "env": {
        "NLX_API_KEY": "your API key",
        "NLX_APP_URL": "your MCP URL"
      }
    }
  }
}
```

{% endtab %}
{% endtabs %}

{% hint style="info" %}
Want to take your app offline? Click the deployed build > Select *Deployment* tab in the modal > Scroll to *Danger zone* and click *Delete deployment*. The app stays offline until you redeploy.
{% endhint %}


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.nlx.ai/platform/nlx-platform-guide/ai-applications/deployment/mcp-server.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
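For example, a request asking about this page might look like the following; the question itself is only an illustration and must be URL-encoded:

```
GET https://docs.nlx.ai/platform/nlx-platform-guide/ai-applications/deployment/mcp-server.md?ask=How%20do%20I%20enable%20the%20MCP%20interface%20toggle%20for%20an%20application%3F
```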
