Ingest content
Easily set up your knowledge base data in NLX
What is the content of a knowledge base?
Content, in its simplest form, refers to the knowledge base data that your conversational AI draws from to deliver a helpful response when a user's query is matched. A knowledge base (KB) in NLX provides the retrieval layer of a Retrieval-Augmented Generation (RAG) pipeline. A KB's content serves as the authoritative information source your conversational AI uses to ground its answers.
When a user asks a question, NLX performs semantic retrieval to locate relevant content, then uses that retrieved material to generate or deliver a grounded response. This process activates when the user’s utterance does not match a flow, slot value, external value, or UI element.
How NLX performs retrieval
When properly configured, your knowledge base powers semantic search:
The user’s query is embedded into a vector
NLX compares that vector to stored knowledge base vectors
The closest matches are retrieved and passed into the response generation process
This retrieval step supplies grounding context for the AI’s final answer
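The retrieval steps above can be sketched with a toy embedding and cosine similarity. The character-frequency "embedding" and the sample corpus below are placeholders for illustration only; NLX uses a real learned embedding model internally.

```python
import math

# Toy "embedding": map text to a character-frequency vector, then normalize.
# A production system uses a learned embedding model; this is a stand-in.
def embed(text: str) -> list[float]:
    text = text.lower()
    vec = [float(text.count(ch)) for ch in "abcdefghijklmnopqrstuvwxyz"]
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

# Cosine similarity of two unit vectors is just their dot product.
def cosine(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

# Knowledge base content with precomputed vectors
corpus = [
    "How do I reset my password?",
    "What are your store hours?",
    "How can I track my order?",
]
kb_vectors = [embed(doc) for doc in corpus]

# 1) Embed the user's query  2) Compare against stored KB vectors
# 3) Rank so the closest matches can be retrieved for grounding
query_vec = embed("When is the store open?")
ranked = sorted(
    zip(corpus, kb_vectors),
    key=lambda pair: cosine(query_vec, pair[1]),
    reverse=True,
)
print(ranked[0][0])  # best-matching knowledge base entry
```

The same shape applies regardless of which ingestion method below you choose; only where the vectors live changes.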
Ingesting content
When setting up your knowledge base, you may choose one of the following methods to ingest and host content.
Q&A
Content that's entered locally as question-and-answer pairs and stored by NLX
Documents
File content that, once uploaded through NLX, is automatically managed for you in S3
External (custom)
Content hosted in your own custom vector store via S3 bucket hosting
Integration
Content hosted through a managed vector provider (e.g., Amazon Bedrock, Azure AI Search, Zendesk Sunshine, etc.)
Q&A
A lightweight way to supply structured question–answer pairs directly to the NLX platform. NLX stores this content and uses semantic search to surface the best-matching answer during RAG retrieval.
To ingest via Q&A:
Select Resources from the workspace menu > Choose Knowledge bases > New knowledge base
Provide a name for your Knowledge base
Select Q&A as content type > Click Create knowledge base
Upload a CSV or JSON file, or select + Add article to manually enter question-and-answer pairs
Sample JSON format of a question and answer:
[
{
"question": {
"text": "How are you doing?"
},
"responses": [
{
"type": "text",
"body": "It's another great day for me."
},
{
"type": "text",
"body": "How may I help you?"
}
]
}
]
Optional
Payload: Provide a URL along with the response. Payload format:
https://google.c
Click Save
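If you generate Q&A files programmatically, a short script can emit JSON in the format shown above. The field names (`question.text`, `responses[].type`, `responses[].body`) mirror the sample; the helper function and output filename are illustrative choices, not part of NLX.

```python
import json

def qa_entry(question: str, answers: list[str]) -> dict:
    """Build one Q&A article matching the sample format above."""
    return {
        "question": {"text": question},
        "responses": [{"type": "text", "body": a} for a in answers],
    }

# Assemble articles, including the sample pair from the documentation
articles = [
    qa_entry(
        "How are you doing?",
        ["It's another great day for me.", "How may I help you?"],
    ),
    qa_entry(
        "What are your hours?",
        ["We're open 9am to 5pm, Monday through Friday."],
    ),
]

# Write a file suitable for the JSON upload step
with open("qa_upload.json", "w") as f:
    json.dump(articles, f, indent=2)
```

The resulting `qa_upload.json` can then be uploaded in place of hand-entering each article.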
After creating your Q&A content, use the Publish tab to make your content live and to push any changes made since the last publish. These updates immediately become available to any application using the Q&A Knowledge base:
Click the Publish tab of your Q&A Knowledge base
Select the Create your first deployment button
Optional:
Enter a description of your deployment to keep track of changes
Click Publish knowledge base
Roll back a version
To roll back your Q&A Knowledge base to a previous state, select Rollback next to the desired version in the Deployment column of the Publish tab.
Documents
The Documents option provides a simple document loader that lets you ingest content from PDFs, images, and text files, which is then stored and managed for you through NLX.
Document ingestion turns your files into a fully managed, vectorized RAG corpus.
To ingest via Documents:
Select Resources from the workspace menu > Choose Knowledge bases > New knowledge base
Provide a name for your Knowledge base
Select Documents as content type > Click Create knowledge base
Click Add documents > Drop files or select to browse locally from your computer
Click Save
Once uploaded files finish processing, the Status column displays an Ingested status.
External
For enterprise teams maintaining their own custom RAG infrastructure, NLX can call your external retrieval API directly.
To ingest via External:
Select Resources from the workspace menu > Choose Knowledge bases > New knowledge base
Provide a name for your Knowledge base
Select External as content type > Click Create knowledge base
Click Save
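With an External knowledge base, your service answers retrieval requests itself. NLX's exact request/response contract is not shown in this section, so the sketch below is purely hypothetical: it only illustrates the general shape of a custom retrieval endpoint that accepts a query and returns ranked passages. The field names, passage store, and keyword scoring are all placeholder assumptions.

```python
import json

# Hypothetical passage store; in practice this would query your own
# vector store (e.g., embeddings you host in an S3-backed index).
PASSAGES = [
    {"id": "doc-1", "text": "Orders ship within 2 business days."},
    {"id": "doc-2", "text": "Returns are accepted within 30 days."},
]

def handle_retrieval_request(body: str) -> str:
    """Hypothetical handler: accept a JSON body like {"query": "..."}
    and return ranked passages. The real NLX contract may differ;
    this only sketches the role a custom retrieval API plays."""
    query = json.loads(body)["query"].lower()
    # Naive keyword scoring as a stand-in for vector similarity
    scored = sorted(
        PASSAGES,
        key=lambda p: sum(word in p["text"].lower() for word in query.split()),
        reverse=True,
    )
    return json.dumps({"passages": scored})

response = handle_retrieval_request('{"query": "return policy"}')
```

Whatever the concrete contract, the division of labor is the same: NLX embeds and routes the user's query, and your service performs the lookup against infrastructure you control.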
Integration
For teams leveraging providers like Amazon Bedrock, Azure AI Search, Google Vertex AI, or Zendesk Sunshine, NLX integrates directly with your managed vector store and orchestrates the RAG request.
To ingest via Integration:
Select Resources from the workspace menu > Choose Knowledge bases > New knowledge base
Provide a name for your Knowledge base
Select Integration as content type > Click Create knowledge base
From the dropdown, choose a supported data store provider already integrated in your workspace
Enter information into all fields (fields are unique to each provider)
Click Save