Data Sinks
Once your input documents have been processed, they're ready to be sent to their final destination: a vector database.
If you don't want to set up and host a vector database yourself, we offer a fully managed option in which we host the vector database for you. Alternatively, you can host your own vector database and connect it to LlamaCloud:
📄️ Azure AI Search
Configure your own Azure AI Search instance as data sink.
📄️ Managed Data Sink
Use LlamaCloud managed index as data sink.
📄️ Milvus
Configure your own Milvus Vector DB instance as data sink.
📄️ MongoDB Atlas Vector Search
Configure your own MongoDB Atlas instance as data sink.
📄️ Pinecone
Configure your own Pinecone instance as data sink.
📄️ Qdrant
Configure your own Qdrant instance as data sink.
Once the vector database is set up, your documents will be stored using an Embedding Model of your choice and will be ready to use in your RAG use case ➡️
For the time being, the term "Data Sink" refers to a vector database, though this definition may expand in the future.
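To make the concept concrete, here is a minimal, self-contained sketch of what a data sink does with your processed documents: embed them and answer similarity queries. This is a toy in-memory illustration only, not the LlamaCloud API; the `fake_embed` function stands in for a real embedding model, and real sinks such as Pinecone, Qdrant, or Milvus expose analogous upsert/query operations over the network.

```python
import math

def fake_embed(text):
    # Stand-in for a real embedding model: a normalized letter-frequency vector.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

class ToyDataSink:
    """Toy in-memory vector store illustrating the data-sink role."""

    def __init__(self):
        self._store = {}  # doc_id -> (embedding vector, original text)

    def upsert(self, doc_id, text):
        # Embed the document and store it under its id.
        self._store[doc_id] = (fake_embed(text), text)

    def query(self, text, top_k=1):
        # Embed the query, then rank stored documents by cosine similarity.
        q = fake_embed(text)
        scored = [
            (sum(a * b for a, b in zip(q, vec)), doc_id)
            for doc_id, (vec, _) in self._store.items()
        ]
        scored.sort(reverse=True)
        return [doc_id for _, doc_id in scored[:top_k]]

sink = ToyDataSink()
sink.upsert("doc1", "llamas are camelids")
sink.upsert("doc2", "vector databases store embeddings")
print(sink.query("camelid llamas"))  # retrieves the most similar document id
```

A production sink replaces the dictionary with a hosted index and `fake_embed` with your chosen Embedding Model, but the upsert-then-query flow is the same one your RAG pipeline relies on.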