The Dino 🦕, the Llama 🦙, and the Whale 🐋
For this project we'll need a few pieces:
- An environment for our language model – while you can connect to various LLM hosting services via their APIs, we'll use the Ollama framework to run language models on your local machine.
- A large language model – we will use a smaller, distilled version of DeepSeek R1 that can run locally.
- A notebook – Jupyter Notebook for interactive code and text.
- Deno – a runtime that includes a built-in Jupyter kernel. We assume a recent version is installed.
- An IDE – we'll use VSCode with its built-in Jupyter Notebook support and the Deno extension.
- An AI library/framework – LangChain.js to simplify interactions with the LLM.
- A schema validator – we'll use zod to validate the structured output we request from the LLM.
Build a custom RAG AI agent in TypeScript and Jupyter
- Retrieve and prepare several blog posts to be used by our AI agent.
- Create an AI agent that has several tools:
  - A tool to query the blog posts in the database.
  - A tool to grade whether the retrieved documents are relevant to the query.
  - The ability to rewrite and improve the query if required.
- Finally, generate a response to the query based on the gathered information.
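The steps above form a retrieve → grade → (rewrite and retry, or generate) loop. Here is a minimal sketch of that control flow with stubbed tools standing in for the real retriever, grader, rewriter, and generator (all names and behavior are illustrative, not LangChain.js APIs):

```typescript
type Doc = { text: string };

// Stub tools – in the real agent these call the vector store and the LLM.
const retrieve = (query: string): Doc[] =>
  query.includes("agent") ? [{ text: "Blog post about AI agents." }] : [];
const gradeRelevant = (query: string, docs: Doc[]): boolean => docs.length > 0;
const rewriteQuery = (query: string): string => `${query} agent`;
const generate = (query: string, docs: Doc[]): string =>
  `Answer to "${query}" using ${docs.length} document(s).`;

function answer(query: string, maxRewrites = 2): string {
  for (let attempt = 0; ; attempt++) {
    const docs = retrieve(query);
    // Grade: if the documents look relevant (or retries are exhausted), generate.
    if (gradeRelevant(query, docs) || attempt >= maxRewrites) {
      return generate(query, docs);
    }
    // Otherwise rewrite the query and try again.
    query = rewriteQuery(query);
  }
}

console.log(answer("what is an AI"));
// → Answer to "what is an AI agent" using 1 document(s).
```

The `maxRewrites` cap matters: without it, a query the retriever can never satisfy would loop forever. The real agent swaps each stub for an LLM-backed tool but keeps this same shape.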