Parsing & Transformation
Once data is loaded from a Data Source, it is pre-processed before being sent to the Data Sink. There are many pre-processing parameters that can be tweaked to optimize the downstream retrieval performance of your index. While LlamaCloud sets you up with reasonable defaults, you can dig deeper and customize them as you see fit for your specific use case.
Parser Settings
A key step of any RAG pipeline is converting your input file into a format that can be used to generate a vector embedding. There are many parameters that can be used to tweak this conversion process to optimize for your use case. LlamaCloud sets you up from the start with reasonable defaults for your parsing configurations, but also allows you to dig deeper and customize them as you see fit for your specific application.
Transformation Settings
The transform configuration defines how data is transformed before it is ingested into the Index. It is a JSON object with two modes: auto and advanced. As the name suggests, auto mode is handled by LlamaCloud, which applies a set of default configurations, while advanced mode lets you define your own transformation.
Auto Mode
You can set the mode by passing the transform_config as shown below on index creation or update.
transform_config = {
    "mode": "auto"
}
When using the auto mode, you can also configure the chunk size used for the transformation by passing the chunk_size and chunk_overlap parameters as below.
transform_config = {
    "mode": "auto",
    "chunk_size": 1000,
    "chunk_overlap": 100
}
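Before creating or updating an index, it can be useful to sanity-check the config client-side. The helper below is purely illustrative (validate_transform_config is not part of any LlamaCloud SDK); it only enforces the constraints described on this page:

```python
def validate_transform_config(config: dict) -> None:
    # Hypothetical helper: checks an auto-mode transform config locally
    # before it is sent along with an index creation or update request.
    if config.get("mode") != "auto":
        raise ValueError("expected mode to be 'auto'")
    chunk_size = config.get("chunk_size")
    chunk_overlap = config.get("chunk_overlap")
    if chunk_size is not None and chunk_size <= 0:
        raise ValueError("chunk_size must be positive")
    if chunk_size is not None and chunk_overlap is not None and chunk_overlap >= chunk_size:
        raise ValueError("chunk_overlap must be smaller than chunk_size")

validate_transform_config({"mode": "auto", "chunk_size": 1000, "chunk_overlap": 100})
```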
Advanced Mode
The advanced mode provides a variety of configuration options for defining your own transformation. It is selected by setting the mode parameter to advanced; the segmentation_config and chunking_config parameters then define the segmentation and chunking configuration, respectively.
transform_config = {
    "mode": "advanced",
    "segmentation_config": {
        "mode": "page",
        "page_separator": "\n---\n"
    },
    "chunking_config": {
        "mode": "sentence",
        "separator": " ",
        "paragraph_separator": "\n"
    }
}
Segmentation Configuration
The segmentation configuration uses the document's structure and/or semantics to divide it into smaller parts along natural segmentation boundaries. The segmentation_config parameter supports three modes: none, page, and element.
None Segmentation Configuration
The none segmentation configuration applies no segmentation.
transform_config = {
    "mode": "advanced",
    "segmentation_config": {
        "mode": "none"
    }
}
Page Segmentation Configuration
The page segmentation configuration segments the document by page; the page_separator parameter defines the separator used to split your document into pages.
transform_config = {
    "mode": "advanced",
    "segmentation_config": {
        "mode": "page",
        "page_separator": "\n---\n"
    }
}
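Conceptually, page segmentation amounts to splitting the raw text on the configured separator. A minimal sketch of the idea (the actual segmentation runs server-side in LlamaCloud):

```python
def segment_by_page(text: str, page_separator: str = "\n---\n") -> list[str]:
    # Each resulting segment corresponds to one page of the document.
    return [page for page in text.split(page_separator) if page.strip()]

doc = "Page one text\n---\nPage two text\n---\nPage three text"
segment_by_page(doc)
# → ['Page one text', 'Page two text', 'Page three text']
```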
Element Segmentation Configuration
The element segmentation configuration segments the document by element, identifying elements such as titles, paragraphs, lists, and tables.
Element segmentation is not available with the fast parse mode.
transform_config = {
    "mode": "advanced",
    "segmentation_config": {
        "mode": "element"
    }
}
Chunking Configuration
Chunking configuration is mainly used to deal with the context window limitations of embedding models and LLMs. Conceptually, it is the step after segmentation, where segments are further broken down into smaller chunks as necessary to fit into the context window. It includes a few modes: none, character, token, sentence, and semantic.
All chunking modes also allow you to define the chunk_size and chunk_overlap parameters. The examples below do not always define these parameters, but you can always include them.
None Chunking Configuration
The none chunking configuration applies no chunking.
transform_config = {
    "mode": "advanced",
    "chunking_config": {
        "mode": "none"
    }
}
Character Chunking Configuration
The character chunking configuration chunks the text by character; the chunk_size parameter defines the size of each chunk.
transform_config = {
    "mode": "advanced",
    "chunking_config": {
        "mode": "character",
        "chunk_size": 1000
    }
}
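Character chunking can be pictured as a sliding window over the raw text; chunk_overlap (shown here even though the example above omits it) controls how much adjacent windows share. A simplified sketch, not the exact server-side implementation:

```python
def chunk_by_character(text: str, chunk_size: int, chunk_overlap: int = 0) -> list[str]:
    # Advance by chunk_size minus the overlap so consecutive chunks share characters.
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

chunk_by_character("abcdefghij", chunk_size=4, chunk_overlap=2)
# → ['abcd', 'cdef', 'efgh', 'ghij', 'ij']
```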
Token Chunking Configuration
The token chunking configuration chunks the text by token, using an OpenAI tokenizer under the hood. The chunk_size and chunk_overlap parameters define the size of each chunk and the overlap between consecutive chunks.
transform_config = {
    "mode": "advanced",
    "chunking_config": {
        "mode": "token",
        "chunk_size": 1000,
        "chunk_overlap": 100
    }
}
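The same sliding-window idea applies at the token level. The sketch below substitutes a naive whitespace tokenizer for the OpenAI tokenizer the service actually uses, so the token counts are only illustrative:

```python
def chunk_by_token(text: str, chunk_size: int, chunk_overlap: int = 0) -> list[str]:
    tokens = text.split()  # stand-in tokenizer; the real pipeline uses an OpenAI tokenizer
    step = chunk_size - chunk_overlap
    return [" ".join(tokens[i:i + chunk_size]) for i in range(0, len(tokens), step)]

chunk_by_token("one two three four five six", chunk_size=3, chunk_overlap=1)
# → ['one two three', 'three four five', 'five six']
```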
Sentence Chunking Configuration
The sentence chunking configuration chunks the text by sentence; the separator and paragraph_separator parameters define the separators between sentences and paragraphs, respectively.
transform_config = {
    "mode": "advanced",
    "chunking_config": {
        "mode": "sentence",
        "separator": " ",
        "paragraph_separator": "\n"
    }
}
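In spirit, sentence chunking splits text on the paragraph_separator first and only falls back to the finer separator when a piece is too large. The sketch below is a heavy simplification of that idea (it splits on words rather than true sentence boundaries, and treats chunk_size as a word count):

```python
def chunk_by_sentence(text: str, chunk_size: int,
                      separator: str = " ", paragraph_separator: str = "\n") -> list[str]:
    chunks = []
    for paragraph in text.split(paragraph_separator):
        words = paragraph.split(separator)
        if len(words) <= chunk_size:
            chunks.append(paragraph)  # paragraph already fits in one chunk
        else:
            # Fall back to the word-level separator for oversized paragraphs.
            chunks.extend(separator.join(words[i:i + chunk_size])
                          for i in range(0, len(words), chunk_size))
    return chunks

chunk_by_sentence("a b c\nd e f g h", chunk_size=3)
# → ['a b c', 'd e f', 'g h']
```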
Embedding Model
The embedding model allows you to construct a numerical representation of the text within your files. This is a crucial step in allowing you to search for specific information within your files. There are a wide variety of embedding models to choose from, and we support quite a few on LlamaCloud.
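As a rough intuition for how these vectors power search: retrieval compares the query's embedding against each chunk's embedding, commonly by cosine similarity, and returns the closest matches. A toy sketch with hand-made two-dimensional vectors (real embeddings have hundreds or thousands of dimensions):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    # Cosine similarity: dot product normalized by both vector lengths.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

query = [1.0, 0.0]
chunks = {"chunk about cats": [0.9, 0.1], "chunk about cars": [0.1, 0.9]}
best = max(chunks, key=lambda name: cosine_similarity(query, chunks[name]))
# best → 'chunk about cats'
```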
After Pre-Processing, your data is ready to be sent to the Data Sink ➡️