Search Pipelines
GET /api/v1/pipelines
Search for pipelines by various parameters.
Request

- Query Parameters
- Cookie Parameters

Responses

- 200: Successful Response
- 422: Validation Error
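A minimal request sketch using Python and the requests library. The base URL, the bearer-token Authorization header, and the project_id / pipeline_name query parameter names are assumptions for illustration and should be checked against the query-parameter reference.

```python
import os
import requests

# Assumed base URL and auth scheme; verify against your LlamaCloud environment.
BASE_URL = "https://api.cloud.llamaindex.ai"
API_KEY = os.environ["LLAMA_CLOUD_API_KEY"]

def search_pipelines(project_id=None, pipeline_name=None):
    """Call GET /api/v1/pipelines with optional query parameters (parameter names assumed)."""
    params = {}
    if project_id is not None:
        params["project_id"] = project_id
    if pipeline_name is not None:
        params["pipeline_name"] = pipeline_name
    resp = requests.get(
        f"{BASE_URL}/api/v1/pipelines",
        headers={"Authorization": f"Bearer {API_KEY}"},
        params=params,
    )
    resp.raise_for_status()
    return resp.json()  # JSON array of Pipeline objects (see schema below)

if __name__ == "__main__":
    for pipeline in search_pipelines():
        print(pipeline["id"], pipeline["name"], pipeline["pipeline_type"])
```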
200: Successful Response (application/json)

Schema
The response body is an array of Pipeline objects. Several fields accept one of multiple schemas:

- embedding_config (oneOf): AzureOpenAIEmbeddingConfig, CohereEmbeddingConfig, GeminiEmbeddingConfig, HuggingFaceInferenceAPIEmbeddingConfig, OpenAIEmbeddingConfig, VertexAIEmbeddingConfig, BedrockEmbeddingConfig
- configured_transformations[].component (anyOf): CharacterSplitter, PageSplitterNodeParser, CodeSplitter, SentenceSplitter, TokenTextSplitter, MarkdownNodeParser, MarkdownElementNodeParser (the last may reference an LLM, a BasePromptTemplate, and a nested NodeParser)
- config_hash: PipelineConfigurationHashes
- transform_config (anyOf): AutoTransformConfig or AdvancedModeTransformConfig (with NoneSegmentationConfig, PageSegmentationConfig, ElementSegmentationConfig and NoneChunkingConfig, CharacterChunkingConfig, TokenChunkingConfig, SentenceChunkingConfig, SemanticChunkingConfig)
- preset_retrieval_parameters.search_filters: MetadataFilters (MetadataFilter entries combined with a FilterCondition)
- llama_parse_parameters: LlamaParseParameters
- data_sink.component (anyOf): CloudPineconeVectorStore, CloudPostgresVectorStore, CloudQdrantVectorStore, CloudAzureAISearchVectorStore, CloudMongoDBAtlasVectorSearch, CloudMilvusVectorStore
Top-level Pipeline fields:

- id: Unique identifier.
- created_at (string or null): Creation datetime.
- updated_at (string or null): Update datetime.
- embedding_model_config_id (string or null): The ID of the EmbeddingModelConfig this pipeline is using.
- pipeline_type: Enum for representing the type of a pipeline. Possible values: [PLAYGROUND, MANAGED]. Default: MANAGED.
- managed_pipeline_id (string or null): The ID of the ManagedPipeline this playground pipeline is linked to.
- embedding_config (object, required): one of the following configurations (oneOf).

  AzureOpenAIEmbeddingConfig
  - Type of the embedding model. Possible values: [AZURE_EMBEDDING]. Default: AZURE_EMBEDDING.
  - component (object): Configuration for the Azure OpenAI embedding model.
    - The name of the OpenAI embedding model. Default: text-embedding-ada-002.
    - The batch size for embedding calls. Possible values: > 0 and <= 2048. Default: 10.
    - num_workers (integer or null): The number of workers to use for async embedding calls.
    - Additional kwargs for the OpenAI API.
    - api_key (string or null): The OpenAI API key.
    - The base URL for Azure deployment.
    - The version for Azure OpenAI API.
    - Maximum number of retries. Default: 10.
    - Timeout for each request. Default: 60.
    - default_headers (object or null): The default headers for API requests.
    - Reuse the OpenAI client between requests. When doing anything with large volumes of async API calls, setting this to false can improve stability. Default: true.
    - dimensions (integer or null): The number of dimensions on the output embedding vectors. Works only with v3 embedding models.
    - azure_endpoint (string or null): The Azure endpoint to use.
    - azure_deployment (string or null): The Azure deployment to use.
    - class_name: AzureOpenAIEmbedding
  CohereEmbeddingConfig
  - Type of the embedding model. Possible values: [COHERE_EMBEDDING]. Default: COHERE_EMBEDDING.
  - component (object): Configuration for the Cohere embedding model.
    - The modelId of the Cohere model to use. Default: embed-english-v3.0.
    - The batch size for embedding calls. Possible values: > 0 and <= 2048. Default: 10.
    - num_workers (integer or null): The number of workers to use for async embedding calls.
    - api_key (string or null, required): The Cohere API key.
    - Truncation type - START / END / NONE. Default: END.
    - input_type (string or null): Model input type. If not provided, search_document and search_query are used when needed.
    - Embedding type. If not provided, the float embedding_type is used when needed. Default: float.
    - class_name: CohereEmbedding
  GeminiEmbeddingConfig
  - Type of the embedding model. Possible values: [GEMINI_EMBEDDING]. Default: GEMINI_EMBEDDING.
  - component (object): Configuration for the Gemini embedding model.
    - The modelId of the Gemini model to use. Default: models/embedding-001.
    - The batch size for embedding calls. Possible values: > 0 and <= 2048. Default: 10.
    - num_workers (integer or null): The number of workers to use for async embedding calls.
    - title (string or null): Title is only applicable for retrieval_document tasks, and is used to represent a document title. For other tasks, title is invalid.
    - task_type (string or null): The task for the embedding model.
    - api_key (string or null): API key to access the model. Defaults to None.
    - api_base (string or null): API base to access the model. Defaults to None.
    - transport (string or null): Transport to access the model. Defaults to None.
    - class_name: GeminiEmbedding
  HuggingFaceInferenceAPIEmbeddingConfig
  - Type of the embedding model. Possible values: [HUGGINGFACE_API_EMBEDDING]. Default: HUGGINGFACE_API_EMBEDDING.
  - component (object): Configuration for the HuggingFace Inference API embedding model.
    - model_name (string or null): Hugging Face model name. If None, the task will be used.
    - The batch size for embedding calls. Possible values: > 0 and <= 2048. Default: 10.
    - num_workers (integer or null): The number of workers to use for async embedding calls.
    - pooling (Pooling or null): Pooling strategy. If None, the model's default pooling is used. Pooling is an enum of possible pooling choices with pooling behaviors; possible values: [cls, mean, last].
    - query_instruction (string or null): Instruction to prepend during query embedding.
    - text_instruction (string or null): Instruction to prepend during text embedding.
    - token (string, boolean, or null): Hugging Face token. Will default to the locally saved token. Pass token=False if you don't want to send your token to the server.
    - timeout (number or null): The maximum number of seconds to wait for a response from the server. Loading a new model in the Inference API can take up to several minutes. Defaults to None, meaning it will loop until the server is available.
    - headers (object or null): Additional headers to send to the server. By default only the authorization and user-agent headers are sent. Values in this dictionary will override the default values.
    - cookies (object or null): Additional cookies to send to the server.
    - task (string or null): Optional task to pick Hugging Face's recommended model, used when model_name is left as its default of None.
    - class_name: HuggingFaceInferenceAPIEmbedding
  OpenAIEmbeddingConfig
  - Type of the embedding model. Possible values: [OPENAI_EMBEDDING]. Default: OPENAI_EMBEDDING.
  - component (object): Configuration for the OpenAI embedding model.
    - The name of the OpenAI embedding model. Default: text-embedding-ada-002.
    - The batch size for embedding calls. Possible values: > 0 and <= 2048. Default: 10.
    - num_workers (integer or null): The number of workers to use for async embedding calls.
    - Additional kwargs for the OpenAI API.
    - api_key (string or null): The OpenAI API key.
    - api_base (string or null): The base URL for OpenAI API.
    - api_version (string or null): The version for OpenAI API.
    - Maximum number of retries. Default: 10.
    - Timeout for each request. Default: 60.
    - default_headers (object or null): The default headers for API requests.
    - Reuse the OpenAI client between requests. When doing anything with large volumes of async API calls, setting this to false can improve stability. Default: true.
    - dimensions (integer or null): The number of dimensions on the output embedding vectors. Works only with v3 embedding models.
    - class_name: OpenAIEmbedding
  VertexAIEmbeddingConfig
  - Type of the embedding model. Possible values: [VERTEXAI_EMBEDDING]. Default: VERTEXAI_EMBEDDING.
  - component (object): Configuration for the VertexAI embedding model.
    - The modelId of the VertexAI model to use. Default: textembedding-gecko@003.
    - The batch size for embedding calls. Possible values: > 0 and <= 2048. Default: 10.
    - num_workers (integer or null): The number of workers to use for async embedding calls.
    - The default location to use when making API calls.
    - The default GCP project to use when making Vertex API calls.
    - The embedding mode to use. Possible values: [default, classification, clustering, similarity, retrieval]. Default: retrieval.
    - Additional kwargs for the Vertex.
    - client_email (string or null, required): The client email for the VertexAI credentials.
    - token_uri (string or null, required): The token URI for the VertexAI credentials.
    - private_key_id (string or null, required): The private key ID for the VertexAI credentials.
    - private_key (string or null, required): The private key for the VertexAI credentials.
    - class_name: VertexTextEmbedding
  BedrockEmbeddingConfig
  - Type of the embedding model. Possible values: [BEDROCK_EMBEDDING]. Default: BEDROCK_EMBEDDING.
  - component (object): Configuration for the Bedrock embedding model.
    - The modelId of the Bedrock model to use. Default: amazon.titan-embed-text-v1.
    - The batch size for embedding calls. Possible values: > 0 and <= 2048. Default: 10.
    - num_workers (integer or null): The number of workers to use for async embedding calls.
    - profile_name (string or null): The name of the AWS profile to use. If not given, the default profile is used.
    - aws_access_key_id (string or null): AWS Access Key ID to use.
    - aws_secret_access_key (string or null): AWS Secret Access Key to use.
    - aws_session_token (string or null): AWS Session Token to use.
    - region_name (string or null): AWS region name to use. Uses the region configured in the AWS CLI if not passed.
    - The maximum number of API retries. Possible values: > 0. Default: 10.
    - The timeout for the Bedrock API request in seconds. It will be used for both connect and read timeouts. Default: 60.
    - Additional kwargs for the bedrock client.
    - class_name: BedrockEmbedding
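To make the shape concrete, here is a hypothetical embedding_config payload for the OPENAI_EMBEDDING variant, written as a Python dict. The type key and the nested key names model_name and embed_batch_size are assumptions (those names are elided in the schema rendering above); the defaults and constraints in the comments come from the schema.

```python
# Hypothetical OPENAI_EMBEDDING config; key names "type", "model_name",
# and "embed_batch_size" are assumed, not confirmed by this page.
openai_embedding_config = {
    "type": "OPENAI_EMBEDDING",
    "component": {
        "model_name": "text-embedding-ada-002",  # default model from the schema
        "embed_batch_size": 10,                  # > 0 and <= 2048, default 10
        "api_key": "sk-...",                     # placeholder, not a real key
        "dimensions": None,                      # only used with v3 embedding models
        "class_name": "OpenAIEmbedding",
    },
}
```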
- configured_transformations (object[]): Deprecated; do not use. List of configured transformations.
  - configurable_transformation_type: Name for the type of transformation this is (e.g. SIMPLE_NODE_PARSER). Can also be an enum instance of llama_index.ingestion.transformations.ConfigurableTransformations; this will be converted to ConfigurableTransformationNames. Possible values: [CHARACTER_SPLITTER, PAGE_SPLITTER_NODE_PARSER, CODE_NODE_PARSER, SENTENCE_AWARE_NODE_PARSER, TOKEN_AWARE_NODE_PARSER, MARKDOWN_NODE_PARSER, MARKDOWN_ELEMENT_NODE_PARSER]
  - component (object, required): Component that implements the transformation. One of the following (anyOf):

  CharacterSplitter: A splitter that splits text into characters.
    - Whether or not to consider metadata when splitting. Default: true.
    - Include prev/next node relationships. Default: true.
    - id_func (string or null): Function to generate node IDs.
    - The token chunk size for each chunk. Possible values: > 0. Default: 1024.
    - The token overlap of each chunk when splitting. Default: 200.
    - Default separator for splitting into words.
    - Separator between paragraphs.
    - secondary_chunking_regex (string or null): Backup regex for splitting into sentences.
    - class_name: SentenceSplitter
  PageSplitterNodeParser: Split text into pages.
    - Whether or not to consider metadata when splitting. Default: true.
    - Include prev/next node relationships. Default: true.
    - id_func (string or null): Function to generate node IDs.
    - page_separator (string or null): Separator to split text into pages.
    - class_name: base_component
  CodeSplitter: Split code using an AST parser. Thank you to Kevin Lu / SweepAI for suggesting this elegant code splitting solution. https://docs.sweep.dev/blogs/chunking-2m-files
    - Whether or not to consider metadata when splitting. Default: true.
    - Include prev/next node relationships. Default: true.
    - id_func (string or null): Function to generate node IDs.
    - The programming language of the code being split.
    - The number of lines to include in each chunk. Possible values: > 0. Default: 40.
    - How many lines of code each chunk overlaps with. Possible values: > 0. Default: 15.
    - Maximum number of characters per chunk. Possible values: > 0. Default: 1500.
    - class_name: CodeSplitter
  SentenceSplitter: Parse text with a preference for complete sentences. In general, this class tries to keep sentences and paragraphs together. Therefore, compared to the original TokenTextSplitter, it is less likely to leave hanging sentences or parts of sentences at the end of the node chunk.
    - Whether or not to consider metadata when splitting. Default: true.
    - Include prev/next node relationships. Default: true.
    - id_func (string or null): Function to generate node IDs.
    - The token chunk size for each chunk. Possible values: > 0. Default: 1024.
    - The token overlap of each chunk when splitting. Default: 200.
    - Default separator for splitting into words.
    - Separator between paragraphs.
    - secondary_chunking_regex (string or null): Backup regex for splitting into sentences.
    - class_name: SentenceSplitter
  TokenTextSplitter: Implementation of splitting text that looks at word tokens.
    - Whether or not to consider metadata when splitting. Default: true.
    - Include prev/next node relationships. Default: true.
    - id_func (string or null): Function to generate node IDs.
    - The token chunk size for each chunk. Possible values: > 0. Default: 1024.
    - The token overlap of each chunk when splitting. Default: 20.
    - Default separator for splitting into words.
    - Additional separators for splitting.
    - class_name: TokenTextSplitter
  MarkdownNodeParser: Markdown node parser. Splits a document into Nodes using Markdown header-based splitting logic. Each node contains its text content and the path of headers leading to it. Args: include_metadata (bool): whether to include metadata in nodes; include_prev_next_rel (bool): whether to include prev/next relationships.
    - Whether or not to consider metadata when splitting. Default: true.
    - Include prev/next node relationships. Default: true.
    - id_func (string or null): Function to generate node IDs.
    - class_name: base_component
  MarkdownElementNodeParser: Markdown element node parser. Splits a markdown document into Text Nodes and Index Nodes corresponding to embedded objects (e.g. tables).
    - Whether or not to consider metadata when splitting. Default: true.
    - Include prev/next node relationships. Default: true.
    - id_func (string or null): Function to generate node IDs.
    - llm (LLM or null): LLM model to use for summarization. The LLM class is the main class for interacting with language models. Attributes: system_prompt (Optional[str]): system prompt for LLM calls; messages_to_prompt (Callable): function to convert a list of messages to an LLM prompt; completion_to_prompt (Callable): function to convert a completion to an LLM prompt; output_parser (Optional[BaseOutputParser]): output parser to parse, validate, and correct errors programmatically; pydantic_program_mode (PydanticProgramMode): Pydantic program mode to use for structured prediction.
      - system_prompt (string or null): System prompt for LLM calls.
      - Function to convert a list of messages to an LLM prompt.
      - Function to convert a completion to an LLM prompt.
      - output_parser (object or null): Output parser to parse, validate, and correct errors programmatically.
      - Pydantic program mode. Possible values: [default, openai, llm, function, guidance, lm-format-enforcer]. Default: default.
      - query_wrapper_prompt (BasePromptTemplate or null): Query wrapper prompt for LLM calls.
        - kwargs (object, required)
        - output_parser (object, required)
        - template_var_mappings (object or null): Template variable mappings (Optional).
        - function_mappings (object or null): Function mappings (Optional). This is a mapping from template variable names to functions that take in the current kwargs and return a string.
    - Query string to use for summarization. Default: "What is this table about? Give a very concise summary (imagine you are adding a new caption and summary for this table), and output the real/existing table title/caption if context provided.and output the real/existing table id if context provided.and also output whether or not the table should be kept."
    - Num of workers for async jobs. Default: 4.
    - Whether to show progress. Default: true.
    - nested_node_parser (NodeParser or null): Other types of node parsers to handle some types of nodes. NodeParser is the base interface for node parsers, with the same consider-metadata, prev/next relationship, and id_func fields as above; class_name: base_component.
    - class_name: MarkdownElementNodeParser
- config_hash (PipelineConfigurationHashes or null): Hashes for the configuration of the pipeline.
  - embedding_config_hash (string or null): Hash of the embedding config.
  - parsing_config_hash (string or null): Hash of the llama parse parameters.
  - transform_config_hash (string or null): Hash of the transform config.
- transform_config (object): Configuration for the transformation. One of:

  AutoTransformConfig
  - Mode. Possible values: [auto]. Default: auto.
  - Chunk size for the transformation. Possible values: > 0. Default: 1024.
  - Chunk overlap for the transformation. Default: 200.

  AdvancedModeTransformConfig
  - Mode. Possible values: [advanced]. Default: advanced.
  - segmentation_config (object): Configuration for the segmentation. One of:
    - NoneSegmentationConfig: mode none. Possible values: [none].
    - PageSegmentationConfig: mode page. Possible values: [page]. Separator default: ---.
    - ElementSegmentationConfig: mode element. Possible values: [element].
  - chunking_config (object): Configuration for the chunking. One of:
    - NoneChunkingConfig: mode none. Possible values: [none].
    - CharacterChunkingConfig: mode character; chunk size > 0, default 1024; chunk overlap default 200.
    - TokenChunkingConfig: mode token; chunk size > 0, default 1024; chunk overlap default 200.
    - SentenceChunkingConfig: mode sentence; chunk size > 0, default 1024; chunk overlap default 200.
    - SemanticChunkingConfig: mode semantic; its numeric parameters default to 1 and 95.
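A sketch of the two transform_config shapes described above, written as Python dicts. The key names mode, chunk_size, and chunk_overlap are assumptions (only descriptions, constraints, and defaults appear in the schema rendering); segmentation_config and chunking_config are named in the schema.

```python
# Hypothetical key names "mode", "chunk_size", "chunk_overlap"; values follow
# the defaults and constraints listed above.
auto_transform_config = {
    "mode": "auto",
    "chunk_size": 1024,   # > 0
    "chunk_overlap": 200,
}

advanced_transform_config = {
    "mode": "advanced",
    "segmentation_config": {"mode": "page"},
    "chunking_config": {
        "mode": "sentence",
        "chunk_size": 1024,
        "chunk_overlap": 200,
    },
}
```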
- preset_retrieval_parameters (object): Preset retrieval parameters for the pipeline.
  - dense_similarity_top_k (integer or null): Number of nodes for dense retrieval. Possible values: >= 1 and <= 100.
  - dense_similarity_cutoff (number or null): Minimum similarity score with respect to the query for retrieval. Possible values: <= 1.
  - sparse_similarity_top_k (integer or null): Number of nodes for sparse retrieval. Possible values: >= 1 and <= 100.
  - enable_reranking (boolean or null): Enable reranking for retrieval.
  - rerank_top_n (integer or null): Number of reranked nodes for returning. Possible values: >= 1 and <= 100.
  - alpha (number or null): Alpha value for hybrid retrieval to determine the weights between dense and sparse retrieval. 0 is sparse retrieval and 1 is dense retrieval. Possible values: <= 1.
  - search_filters (MetadataFilters or null): Search filters for retrieval. Metadata filters for vector stores.
    - filters (object[], required): MetadataFilter entries (nested MetadataFilters groups are also allowed). MetadataFilter is a comprehensive metadata filter for vector stores to support more operators. Value uses Strict* types, as int, float and str are compatible types and were all converted to string before. See: https://docs.pydantic.dev/latest/usage/types/#strict-types
      - value (required): an integer, number, string, or a list of these.
      - Vector store filter operator. Possible values: [==, >, <, !=, >=, <=, in, nin, any, all, text_match, contains, is_empty]. Default: ==.
    - condition (FilterCondition or null): Vector store filter conditions to combine different filters. Possible values: [and, or].
  - files_top_k (integer or null): Number of files to retrieve (only for retrieval modes files_via_metadata and files_via_content). Possible values: >= 1 and <= 5.
  - retrieval_mode: The retrieval mode for the query. Possible values: [chunks, files_via_metadata, files_via_content, auto_routed]. Default: auto_routed.
  - retrieve_image_nodes: Whether to retrieve image nodes. Default: false.
  - class_name: base_component
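The retrieval parameters and metadata filters above combine as in the following sketch. The filter entry keys "key" and "operator" are assumptions based on common LlamaIndex naming; filters, value, condition, and the top-level parameter names are taken from the schema, and the values are only for illustration.

```python
# Hypothetical preset_retrieval_parameters payload; filter "key"/"operator"
# names are assumed, everything else is named in the schema above.
preset_retrieval_parameters = {
    "dense_similarity_top_k": 10,     # 1..100
    "sparse_similarity_top_k": 10,    # 1..100
    "alpha": 0.5,                     # 0 = sparse only, 1 = dense only
    "enable_reranking": True,
    "rerank_top_n": 5,                # 1..100
    "retrieval_mode": "chunks",       # chunks | files_via_metadata | files_via_content | auto_routed
    "retrieve_image_nodes": False,
    "search_filters": {
        "filters": [
            {"key": "author", "value": "Jane Doe", "operator": "=="},
            {"key": "year", "value": 2023, "operator": ">="},
        ],
        "condition": "and",
    },
}
```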
- eval_parameters (object): Eval parameters for the pipeline.
  - llm_model: The LLM model to use within eval execution. Possible values: [GPT_3_5_TURBO, GPT_4, GPT_4_TURBO, GPT_4O, GPT_4O_MINI, AZURE_OPENAI]. Default: GPT_4O.
  - qa_prompt_tmpl: The template to use for the question answering prompt. Default:
    Context information is below.
    ---------------------
    {context_str}
    ---------------------
    Given the context information and not prior knowledge, answer the query.
    Query: {query_str}
    Answer:
- llama_parse_parameters (LlamaParseParameters or null): Settings that can be configured for how to use LlamaParse to parse files within a LlamaCloud pipeline.
  - Parsing languages (at least one). Possible values: [af, az, bs, cs, cy, da, de, en, es, et, fr, ga, hr, hu, id, is, it, ku, la, lt, lv, mi, ms, mt, nl, no, oc, pi, pl, pt, ro, rs_latin, sk, sl, sq, sv, sw, tl, tr, uz, vi, ar, fa, ug, ur, bn, as, mni, ru, rs_cyrillic, be, bg, uk, mn, abq, ady, kbd, ava, dar, inh, che, lbe, lez, tab, tjk, hi, mr, ne, bh, mai, ang, bho, mah, sck, new, gom, sa, bgc, th, ch_sim, ch_tra, ja, ko, ta, te, kn]
  - page_separator (string or null)
  - project_id (string or null)
  - azure_openai_deployment_name (string or null)
  - azure_openai_endpoint (string or null)
  - azure_openai_api_version (string or null)
  - azure_openai_key (string or null)
  - input_url (string or null)
  - http_proxy (string or null)
  - auto_mode_trigger_on_regexp_in_page (string or null)
  - auto_mode_trigger_on_text_in_page (string or null)
  - A number of boolean parsing flags, most of which default to false.
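A small sketch of a llama_parse_parameters object using only fields named above; the values are illustrative assumptions, and whether these fields suffice for a given parsing setup depends on options not shown in this rendering.

```python
# Hypothetical values; field names come from the schema above.
llama_parse_parameters = {
    "page_separator": "\n---\n",                        # assumed value for illustration
    "auto_mode_trigger_on_text_in_page": "Appendix",    # assumed trigger text
    "auto_mode_trigger_on_regexp_in_page": r"Invoice #\d+",
    "project_id": None,
}
```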
- data_sink (DataSink or null): The data sink for the pipeline. If None, the pipeline will use the fully managed data sink. Schema for a data sink:
  - Unique identifier.
  - created_at (string or null): Creation datetime.
  - updated_at (string or null): Update datetime.
  - The name of the data sink.
  - Sink type. Possible values: [PINECONE, POSTGRES, QDRANT, AZUREAI_SEARCH, MONGODB_ATLAS, MILVUS]
  - component (object, required): One of the following (anyOf):

  CloudPineconeVectorStore: Cloud Pinecone Vector Store. This class is used to store the configuration for a Pinecone vector store, so that it can be created and used in LlamaCloud. Args: api_key (str): API key for authenticating with Pinecone; index_name (str): name of the Pinecone index; namespace (optional[str]): namespace to use in the Pinecone index; insert_kwargs (optional[dict]): additional kwargs to pass during insertion.
    - Possible values: [true]. Default: true.
    - namespace (string or null)
    - insert_kwargs (object or null)
    - class_name: CloudPineconeVectorStore

  CloudPostgresVectorStore
    - Possible values: [false]. Default: false.
    - hybrid_search (boolean or null)
    - class_name: CloudPostgresVectorStore

  CloudQdrantVectorStore: Cloud Qdrant Vector Store. This class is used to store the configuration for a Qdrant vector store, so that it can be created and used in LlamaCloud. Args: collection_name (str): name of the Qdrant collection; url (str): url of the Qdrant instance; api_key (str): API key for authenticating with Qdrant; max_retries (int): maximum number of retries in case of a failure, defaults to 3; client_kwargs (dict): additional kwargs to pass to the Qdrant client.
    - Possible values: [true]. Default: true.
    - Maximum number of retries in case of a failure. Default: 3.
    - class_name: CloudQdrantVectorStore

  CloudAzureAISearchVectorStore: Cloud Azure AI Search Vector Store.
    - Possible values: [true]. Default: true.
    - search_service_api_version (string or null)
    - index_name (string or null)
    - filterable_metadata_field_keys (object or null)
    - embedding_dimension (integer or null)
    - client_id (string or null)
    - client_secret (string or null)
    - tenant_id (string or null)
    - class_name: CloudAzureAISearchVectorStore

  CloudMongoDBAtlasVectorSearch: Cloud MongoDB Atlas Vector Store. This class is used to store the configuration for a MongoDB Atlas vector store, so that it can be created and used in LlamaCloud. Args: mongodb_uri (str): URI for connecting to MongoDB Atlas; db_name (str): name of the MongoDB database; collection_name (str): name of the MongoDB collection; vector_index_name (str): name of the MongoDB Atlas vector index; fulltext_index_name (str): name of the MongoDB Atlas full-text index.
    - Possible values: [false]. Default: false.
    - vector_index_name (string or null)
    - fulltext_index_name (string or null)
    - class_name: CloudMongoDBAtlasVectorSearch

  CloudMilvusVectorStore: Cloud Milvus Vector Store.
    - Possible values: [false]. Default: false.
    - collection_name (string or null)
    - token (string or null)
    - embedding_dimension (integer or null)
    - class_name: CloudMilvusVectorStore
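Below is a sketch of a data_sink entry using the CloudPineconeVectorStore component. The top-level keys name and sink_type are assumptions, while api_key, index_name, namespace, insert_kwargs, and class_name come from the schema and Args above; the values are placeholders.

```python
# Hypothetical data_sink payload; "name" and "sink_type" key names are assumed.
data_sink = {
    "name": "my-pinecone-sink",
    "sink_type": "PINECONE",            # value from the enum above
    "component": {
        "api_key": "pc-...",            # placeholder Pinecone API key
        "index_name": "my-index",
        "namespace": "prod",
        "insert_kwargs": {},
        "class_name": "CloudPineconeVectorStore",
    },
}
```

The example response below, generated from the schema, shows where these objects appear within a Pipeline: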
[
{
"id": "3fa85f64-5717-4562-b3fc-2c963f66afa6",
"created_at": "2024-07-29T15:51:28.071Z",
"updated_at": "2024-07-29T15:51:28.071Z",
"name": "string",
"project_id": "3fa85f64-5717-4562-b3fc-2c963f66afa6",
"embedding_model_config_id": "3fa85f64-5717-4562-b3fc-2c963f66afa6",
"pipeline_type": "MANAGED",
"managed_pipeline_id": "3fa85f64-5717-4562-b3fc-2c963f66afa6",
"embedding_config": {},
"configured_transformations": [
{
"id": "3fa85f64-5717-4562-b3fc-2c963f66afa6",
"configurable_transformation_type": "CHARACTER_SPLITTER",
"component": {}
}
],
"config_hash": {},
"transform_config": {},
"preset_retrieval_parameters": {
"dense_similarity_top_k": 0,
"dense_similarity_cutoff": 0,
"sparse_similarity_top_k": 0,
"enable_reranking": true,
"rerank_top_n": 0,
"alpha": 0,
"search_filters": {},
"files_top_k": 0,
"retrieval_mode": "auto_routed",
"retrieve_image_nodes": false,
"class_name": "base_component"
},
"eval_parameters": {
"llm_model": "GPT_4O",
"qa_prompt_tmpl": "Context information is below.\n---------------------\n{context_str}\n---------------------\nGiven the context information and not prior knowledge, answer the query.\nQuery: {query_str}\nAnswer: "
},
"llama_parse_parameters": {},
"data_sink": {}
}
]
422: Validation Error (application/json)

Schema

- detail (object[]):
  - loc (object[], required): each item is a string or an integer.
  - msg (string)
  - type (string)

Example (from schema)
{
"detail": [
{
"loc": [
"string",
0
],
"msg": "string",
"type": "string"
}
]
}
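A sketch of handling the 422 response shape shown above, reusing the hypothetical search_pipelines helper from the request example near the top of this page.

```python
import requests

try:
    pipelines = search_pipelines(project_id="not-a-uuid")
except requests.HTTPError as err:
    if err.response is not None and err.response.status_code == 422:
        # Each entry in "detail" carries loc (path to the offending input), msg, and type.
        for error in err.response.json()["detail"]:
            location = ".".join(str(part) for part in error["loc"])
            print(f"{location}: {error['msg']} ({error['type']})")
    else:
        raise
```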