# LLM services

*LLMs (Large Language Models)* are AI services that understand natural language and generate responses without requiring hardcoded messaging. LLMs are pre-trained on massive datasets to produce natural, contextually appropriate responses.

You may use an LLM service to handle the majority of your conversation flows with your users or to resolve multiple parameters with users via the [Generative Journey® node](/platform/nlx-platform-guide/flows-and-building-blocks/overview/nodes.md#generative-journey).

{% hint style="info" %}
NLX already provides several built-in generative AI models, including LLMs, in your workspace. You may review all [native features here](/platform/nlx-platform-guide/introduction-to-nlx/platform.md#built-in-ai-capabilities). To use a preferred LLM provider, select one of the supported integrations below.
{% endhint %}

{% content-ref url="/pages/aQJNvVvQTm43XK6SM9in" %}
[Amazon Bedrock](/platform/nlx-platform-guide/integrations/types/llm-services/amazon-bedrock.md)
{% endcontent-ref %}

{% content-ref url="/pages/xvOKAOfj5RnXEXC8cAwm" %}
[Anthropic](/platform/nlx-platform-guide/integrations/types/llm-services/anthropic.md)
{% endcontent-ref %}

{% content-ref url="/pages/adh97THMtzeAaYKKei4k" %}
[Azure OpenAI](/platform/nlx-platform-guide/integrations/types/llm-services/azure-openai.md)
{% endcontent-ref %}

{% content-ref url="/pages/aiZZL3hv1PwH7p1y9P50" %}
[Cerebras](/platform/nlx-platform-guide/integrations/types/llm-services/cerebras.md)
{% endcontent-ref %}

{% content-ref url="/pages/SLD5Mt6zKELlyOBjjddz" %}
[Cohere](/platform/nlx-platform-guide/integrations/types/llm-services/cohere.md)
{% endcontent-ref %}

{% content-ref url="/pages/xua52OzEPMvF4PEglgK7" %}
[Google Vertex AI](/platform/nlx-platform-guide/integrations/types/llm-services/google-vertex-ai.md)
{% endcontent-ref %}

{% content-ref url="/pages/b9jebZczp4IL9l5BR2Wz" %}
[Groq](/platform/nlx-platform-guide/integrations/types/llm-services/groq.md)
{% endcontent-ref %}

{% content-ref url="/pages/WAg7avuis57BHhc2BlSX" %}
[NVIDIA](/platform/nlx-platform-guide/integrations/types/llm-services/nvidia.md)
{% endcontent-ref %}

{% content-ref url="/pages/WBksLqqMpFzuI8aLMONi" %}
[OpenAI](/platform/nlx-platform-guide/integrations/types/llm-services/openai.md)
{% endcontent-ref %}

{% content-ref url="/pages/8UQAvJHhY1KCbvBoFlFD" %}
[xAI](/platform/nlx-platform-guide/integrations/types/llm-services/xai.md)
{% endcontent-ref %}


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.nlx.ai/platform/nlx-platform-guide/integrations/types/llm-services.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
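As a sketch, the request above can be constructed programmatically. The question string here is only an example, and the commented-out network call assumes the endpoint is reachable from your environment:

```python
# Sketch: querying the docs endpoint with the `ask` query parameter.
from urllib.parse import urlencode
from urllib.request import urlopen

BASE = "https://docs.nlx.ai/platform/nlx-platform-guide/integrations/types/llm-services.md"

# Example question; substitute your own specific, self-contained query.
question = "Which LLM integrations does NLX support?"

# URL-encode the question so spaces and punctuation are safe in the query string.
url = f"{BASE}?{urlencode({'ask': question})}"
print(url)

# Perform the GET request (requires network access):
# with urlopen(url) as resp:
#     answer = resp.read().decode()
#     print(answer)
```

The response body contains a direct answer plus relevant excerpts and sources from the documentation.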
