Diffstat (limited to 'content/projects/retrieval-augmented-generation.md')
 content/projects/retrieval-augmented-generation.md | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)
diff --git a/content/projects/retrieval-augmented-generation.md b/content/projects/retrieval-augmented-generation.md
index 673a5c3..35695cc 100644
--- a/content/projects/retrieval-augmented-generation.md
+++ b/content/projects/retrieval-augmented-generation.md
@@ -20,11 +20,12 @@ source models. This was achieved with [`ollama`](https://ollama.com) and
 downloading and running these models locally.
 However, the project was expanded to
-integrate with cohere and their rerank and command-r+ models, since I was especially
-curious about the command-r+'s performance. These models can be downloaded and run
-locally, but it took ages for my computer to generate any output, since the command-r+
-model is 104B parameters. The obvious and impressive benefit of the command-r+ is that
-it generates citations from the context in its answer.
+integrate with Cohere and their rerank and
+[command-r+](https://cohere.com/blog/command-r-plus-microsoft-azure) models, since I
+was especially curious about command-r+'s performance. These models can be
+downloaded and run locally, but it took ages for my computer to generate any output,
+since the command-r+ model has 104B parameters. The obvious and impressive benefit of
+command-r+ is that it generates citations from the context in its answer.
 Here is a [presentation](/rag.html) that gives a brief overview of what a RAG system
 is, and how it can be improved with reranking.
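The reranking step the diff refers to can be sketched in a few lines. This is a toy stand-in, not the project's actual code: a real pipeline would call a hosted reranker such as Cohere's rerank endpoint (or a local cross-encoder), whereas the `score` function here is a simple term-overlap heuristic chosen only so the control flow runs without an API key.

```python
# Toy sketch of the reranking step in a RAG pipeline.
# Assumption: a real system would replace `score` with a call to a
# trained reranker (e.g. Cohere rerank); term overlap is a stand-in.

def rerank(query: str, documents: list[str], top_n: int = 2) -> list[str]:
    """Reorder retrieved documents by estimated relevance to the query."""
    query_terms = set(query.lower().split())

    def score(doc: str) -> float:
        doc_terms = set(doc.lower().split())
        if not doc_terms or not query_terms:
            return 0.0
        # Fraction of query terms that also appear in the document.
        return len(query_terms & doc_terms) / len(query_terms)

    # Sort retrieved passages by score and keep the best few for the
    # generation step, where the model answers from this context.
    ranked = sorted(documents, key=score, reverse=True)
    return ranked[:top_n]

docs = [
    "Ollama runs open-source models locally.",
    "Reranking reorders retrieved passages by relevance.",
    "Bread is best baked at high temperature.",
]
print(rerank("how does reranking improve retrieval", docs))
```

Only the top-ranked passages are passed to the generating model, which is the point of reranking: a cheap first-stage retrieval can over-fetch, and the reranker trims the context down to what is actually relevant.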