Frequently Asked Questions

Which LlamaCloud services communicate with which database/queue/filestore dependencies?

  • Backend: Postgres, MongoDB, Redis, Filestore
  • Jobs Service: Postgres, MongoDB, Filestore
  • Jobs Worker: RabbitMQ, Redis, MongoDB
  • Usage: MongoDB, Redis
  • LlamaParse: Consumes from RabbitMQ; reads from and writes to the Filestore
  • LlamaParse OCR: None
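
This matrix is handy when restricting network access between services. As a minimal sketch, a Kubernetes NetworkPolicy limiting the Backend's egress to its four dependencies might look like the following (the pod label and ports are assumptions about a typical deployment, not values taken from the chart):

  apiVersion: networking.k8s.io/v1
  kind: NetworkPolicy
  metadata:
    name: backend-egress
  spec:
    podSelector:
      matchLabels:
        app: backend            # assumed label on the Backend pods
    policyTypes:
      - Egress
    egress:
      - ports:
          - protocol: TCP
            port: 5432          # Postgres (default port)
          - protocol: TCP
            port: 27017         # MongoDB (default port)
          - protocol: TCP
            port: 6379          # Redis (default port)
          - protocol: TCP
            port: 443           # Filestore, e.g. S3-compatible object storage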

Which features require an LLM, and which model?

  • Chat UI: This feature requires the customer's OpenAI key to have access to the text-only models, and to the multi-modal model as well if the index is multi-modal.

    • (As of 09.24.2024) These keys are set up via the Helm chart:

      backend:
        config:
          openAiApiKey: "<your-key>"

          # If you are using Azure OpenAI, you can configure it like this:
          # azureOpenAi:
          #   enabled: false
          #   existingSecret: ""
          #   key: ""
          #   endpoint: ""
          #   deploymentName: ""
          #   apiVersion: ""
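
      For example, a filled-in Azure OpenAI variant of the block above could look like this (a sketch; the endpoint, deployment name, and API version are placeholders to replace with your own):

      backend:
        config:
          azureOpenAi:
            enabled: true
            existingSecret: ""              # or reference a pre-created Secret instead of an inline key
            key: "<your-azure-openai-key>"
            endpoint: "https://<your-resource>.openai.azure.com/"
            deploymentName: "<your-deployment-name>"
            apiVersion: "2024-02-01"        # assumption: use an API version your Azure resource supports
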
  • Embeddings: Credentials to connect to an embedding model provider are input within the application directly during the Index creation workflow.

  • LlamaParse Fast: Text extraction only. No LLM.

  • LlamaParse Accurate: This mode uses gpt-4o under the hood; the key can be configured here:

    llamaParse:
      config:
        openAiApiKey: "<your-key>"

        # If you are using Azure OpenAI, you can configure it like this:
        # azureOpenAi:
        #   enabled: false
        #   existingSecret: ""
        #   key: ""
        #   endpoint: ""
        #   deploymentName: ""
        #   apiVersion: ""
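
    Putting the two together, a minimal values override that enables the OpenAI-backed Chat UI and LlamaParse Accurate mode might look like this (a sketch; the file name and keys are placeholders), applied with something like helm upgrade <release> <chart> -f values-llm.yaml:

    # values-llm.yaml (hypothetical file name)
    backend:
      config:
        openAiApiKey: "<your-key>"

    llamaParse:
      config:
        openAiApiKey: "<your-key>"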

LLM API Rate Limits

You may run into rate limits from your LLM provider. The easiest way to debug is to check the service logs: if you see HTTP 429 errors, increase your tokens-per-minute limit with the provider.

What auth modes are supported at the moment?

As of 09.24.2024, we only support OIDC auth for self-hosted deployments.
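
The exact values keys for configuring OIDC depend on the chart version; as a purely hypothetical sketch of the kind of settings an OIDC integration needs (an issuer discovery URL plus a client ID and secret), it might resemble:

  # HYPOTHETICAL: the key names below are assumptions, not the chart's
  # actual schema. Consult the chart's values.yaml for the real keys.
  backend:
    config:
      oidc:
        discoveryUrl: "https://idp.example.com/.well-known/openid-configuration"
        clientId: "<your-client-id>"
        clientSecret: "<your-client-secret>"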