Using in TypeScript

First, get an API key. We recommend putting your key in a file called .env that looks like this:

LLAMA_CLOUD_API_KEY=llx-xxxxxx

Set up a new TypeScript project in a new folder. We use these commands:

npm init
npm install -D typescript @types/node

LlamaParse support is built into LlamaIndex for TypeScript, so you'll need to install LlamaIndex.TS:

npm install llamaindex dotenv
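
If you want to confirm your key is being picked up before going further, a quick sanity check works (check-env.ts is a hypothetical filename; importing dotenv/config loads .env into process.env):

import "dotenv/config";

// fail fast if the key from .env is missing (hypothetical helper script)
if (!process.env.LLAMA_CLOUD_API_KEY) {
  console.error("LLAMA_CLOUD_API_KEY is not set; check your .env file");
  process.exit(1);
}
console.log("LLAMA_CLOUD_API_KEY is loaded");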

Let's create a parse.ts file and put our dependencies in it:

import {
  LlamaParseReader,
  // we'll add more here later
} from "llamaindex";
import "dotenv/config";

Now let's create our main function, which will load in fun facts about Canada and parse them:

async function main() {
  // save the file linked above as canada.pdf, or change this path to match your file
  const path = "./canada.pdf";

  // set up the LlamaParse reader
  const reader = new LlamaParseReader({ resultType: "markdown" });

  // parse the document
  const documents = await reader.loadData(path);

  // print the parsed document
  console.log(documents);
}

main().catch(console.error);
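
By default, LlamaParseReader reads your key from the LLAMA_CLOUD_API_KEY environment variable. If you'd rather pass it explicitly, a sketch (assuming your version of LlamaIndex.TS exposes an apiKey option on the reader):

// construct the reader with an explicit key instead of relying on .env (apiKey option assumed)
const reader = new LlamaParseReader({
  resultType: "markdown",
  apiKey: process.env.LLAMA_CLOUD_API_KEY, // or a value from your own secrets manager
});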

Now run the file (npx will offer to install tsx if you don't already have it):

npx tsx parse.ts

Congratulations! You've parsed the file and should see output that looks like this:

[
  Document {
    id_: '02f5e252-9dca-47fa-80b2-abdd902b911a',
    embedding: undefined,
    metadata: { file_path: './canada.pdf' },
    excludedEmbedMetadataKeys: [],
    excludedLlmMetadataKeys: [],
    relationships: {},
    text: '# Fun Facts About Canada\n' +
      '\n' +
      'We may be known as the Great White North, but
...etc...
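
Each entry is a Document whose text field holds the parsed markdown, so if you only want the content itself you can print that field instead:

// print just the parsed markdown of the first document
console.log(documents[0].text);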

Let's go a step further and query this document using an LLM. For this you will need an OpenAI API key (LlamaIndex supports dozens of LLMs, but OpenAI is the default). Get a key and add it to your .env file:

OPENAI_API_KEY=sk-proj-xxxxxx

Add the following to your imports (just below LlamaParseReader):

VectorStoreIndex,

And add this to your main function, below your console.log():

  // split the text, create embeddings, and store them in a VectorStoreIndex
  const index = await VectorStoreIndex.fromDocuments(documents);

  // query the index
  const queryEngine = index.asQueryEngine();
  const { response, sourceNodes } = await queryEngine.query({
    query: "What can you do in the Bay of Fundy?",
  });

  // output the response
  console.log(response);

Run the file again, and you should see this final output:

You can raft-surf the world's highest tides at the Bay of Fundy.
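
The sourceNodes we destructured earlier tell you which parts of the document the answer was drawn from. A minimal sketch of listing them (the exact node fields can vary between LlamaIndex.TS versions):

// list the retrieved chunks that backed the answer (field names assumed)
for (const source of sourceNodes ?? []) {
  console.log(source.score, source.node.metadata);
}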

And that's it! You've parsed a document and queried it with an LLM, and you can use the same pattern in your own TypeScript projects. Head over to the TypeScript docs to learn more about LlamaIndex in TypeScript.