From ee4384cec384f10fe3118c829c6b27e4c117d08f Mon Sep 17 00:00:00 2001
From: Gustaf Rydholm
Date: Fri, 26 Apr 2024 00:50:47 +0200
Subject: Add rag

---
 content/projects/retrieval-augmented-generation.md | 26 ++++++++++++++++++++++
 1 file changed, 26 insertions(+)
 create mode 100644 content/projects/retrieval-augmented-generation.md

diff --git a/content/projects/retrieval-augmented-generation.md b/content/projects/retrieval-augmented-generation.md
new file mode 100644
index 0000000..17d1358
--- /dev/null
+++ b/content/projects/retrieval-augmented-generation.md
@@ -0,0 +1,26 @@
+---
+title: "Retrieval Augmented Generation"
+date: 2024-04-26 00:36
+tags:
+  [
+    "deep learning",
+    "retrieval augmented generation",
+    "vector database",
+    "ollama",
+    "llm",
+  ]
+draft: false
+---
+
+I implemented a retrieval augmented generation (RAG)
+[program](https://github.com/aktersnurra/rag) for fun, with the goal of being able to
+search my personal library. My focus was to make it run locally with only open-source
+models, which I achieved by using `ollama` and `sentence-transformers` to download and
+run the models on my own machine. I later expanded the project to integrate with Cohere
+and its rerank and command-r+ models, since I was curious about command-r+'s
+performance. These models can also be downloaded and run locally, but it took ages for
+my computer to generate any output, since command-r+ is huge.
+
+
+Here is a [presentation](/rag.html) that gives a brief overview of what a RAG system
+is and how it can be improved with reranking.
-- 
cgit v1.2.3-70-g09d2
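
As a rough illustration of the retrieve-then-generate flow the post describes, here is a minimal sketch that uses `sentence-transformers` for embeddings and the `ollama` Python client for local generation. The corpus, model names, and prompt format are illustrative assumptions, not code from the linked repository.

```python
# Minimal local RAG sketch. Assumes the `sentence-transformers` and `ollama`
# Python packages are installed and an ollama server is running with the named
# model already pulled. Documents and model names are placeholders.
from sentence_transformers import SentenceTransformer, util
import ollama

# Stand-in for chunks of a personal library; a real setup would keep the
# embeddings in a vector database instead of an in-memory tensor.
documents = [
    "A vector database stores embeddings and supports similarity search.",
    "Reranking reorders retrieved passages by their relevance to the query.",
    "ollama serves open-source language models on local hardware.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # small open-source embedder
doc_embeddings = embedder.encode(documents, convert_to_tensor=True)


def answer(query: str, top_k: int = 2, model: str = "llama3") -> str:
    # Retrieve: embed the query and pick the most similar chunks by cosine similarity.
    query_embedding = embedder.encode(query, convert_to_tensor=True)
    scores = util.cos_sim(query_embedding, doc_embeddings)[0]
    top_idx = scores.topk(k=min(top_k, len(documents))).indices.tolist()
    context = "\n".join(documents[i] for i in top_idx)

    # Generate: hand the retrieved context to a locally served model via ollama.
    response = ollama.chat(
        model=model,
        messages=[
            {"role": "system", "content": f"Answer using only this context:\n{context}"},
            {"role": "user", "content": query},
        ],
    )
    return response["message"]["content"]


if __name__ == "__main__":
    print(answer("What does a vector database do?"))
```

A reranking step, such as the Cohere rerank model mentioned in the post, would slot in between retrieval and generation to reorder the retrieved chunks before they are packed into the prompt.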