 README.md | 53 +++++++++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 51 insertions(+), 2 deletions(-)
@@ -2,9 +2,53 @@ RAG with ollama and qdrant.
-tbd
+## Usage
 
-### TODO
+### Setup
+
+#### Environment Variables
+
+Create a .env file or set the following environment variables:
+
+```.env
+CHUNK_SIZE = <CHUNK_SIZE>
+CHUNK_OVERLAP = <CHUNK_OVERLAP>
+
+ENCODER_MODEL = <ENCODER_MODEL>
+EMBEDDING_DIM = <EMBEDDING_DIM>
+
+GENERATOR_MODEL = <GENERATOR_MODEL>
+
+RAG_DB_NAME = <DOCUMENT_DB_NAME>
+RAG_DB_USER = <RAG_DB_USER>
+
+QDRANT_URL = <QDRANT_URL>
+QDRANT_COLLECTION_NAME = <QDRANT_COLLECTION_NAME>
+```
+
+### Ollama
+
+Download the encoder and generator models with ollama:
+
+```sh
+ollama pull $GENERATOR_MODEL
+ollama pull $ENCODER_MODEL
+```
+
+### Qdrant
+
+Qdrant is used to store the embeddings of the document chunks.
+
+Download and run qdrant.
+
+### Postgres
+
+Postgres is used to store hashes of the document chunks, preventing a chunk from
+being added to the vector db more than once.
+
+Download and run postgres.
+
+#### Running
 
 Build script/or FE for adding pdfs or retrieve information
@@ -12,8 +56,13 @@ Build script/or FE for adding pdfs or retrieve information
 
 [streamlit](https://github.com/streamlit/streamlit)
 
+### Notes
+
+Yes, it is inefficient/dumb to use ollama when you can just load the models with python
+in the same process.
 
 ### Inspiration
+
 I took some inspiration from these tutorials.
 
 [rag-openai-qdrant](https://colab.research.google.com/github/qdrant/examples/blob/master/rag-openai-qdrant/rag-openai-qdrant.ipynb)
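The environment variables added in the diff would typically be read once at startup. A minimal standard-library sketch of that step follows; the `Settings` class and every default value are assumptions for illustration, not part of the project — only the variable names come from the README.

```python
import os
from dataclasses import dataclass


@dataclass
class Settings:
    """Runtime configuration read from the environment (hypothetical shape)."""
    chunk_size: int
    chunk_overlap: int
    encoder_model: str
    generator_model: str
    qdrant_url: str
    qdrant_collection: str


def load_settings() -> Settings:
    """Build settings from the environment; defaults here are placeholders."""
    return Settings(
        chunk_size=int(os.environ.get("CHUNK_SIZE", "512")),
        chunk_overlap=int(os.environ.get("CHUNK_OVERLAP", "64")),
        encoder_model=os.environ.get("ENCODER_MODEL", "nomic-embed-text"),
        generator_model=os.environ.get("GENERATOR_MODEL", "llama3"),
        qdrant_url=os.environ.get("QDRANT_URL", "http://localhost:6333"),
        qdrant_collection=os.environ.get("QDRANT_COLLECTION_NAME", "docs"),
    )
```

Integer-typed values such as `CHUNK_SIZE` are cast on load so a malformed .env fails fast rather than deep inside the chunking code.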
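The "Download and run" steps for Qdrant and Postgres leave the deployment method open. One common option is Docker; the image tags, ports, and credentials below are illustrative assumptions, and the user/database names should match whatever you put in `RAG_DB_USER`/`RAG_DB_NAME`.

```sh
# Qdrant: REST API on its default port 6333 (assumes Docker is installed)
docker run -d --name qdrant -p 6333:6333 qdrant/qdrant

# Postgres: credentials here are placeholders; align them with your .env
docker run -d --name rag-postgres \
  -e POSTGRES_USER=rag \
  -e POSTGRES_PASSWORD=rag \
  -e POSTGRES_DB=rag \
  -p 5432:5432 postgres:16
```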
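The Postgres section describes deduplication by chunk hash. A sketch of that idea in Python is below; the function names and the in-memory `seen_hashes` set (standing in for the hash table in Postgres) are hypothetical, since the diff does not show the project's actual schema.

```python
import hashlib


def chunk_hash(chunk: str) -> str:
    """Stable SHA-256 hex digest serving as a chunk's identity key."""
    return hashlib.sha256(chunk.encode("utf-8")).hexdigest()


def filter_new_chunks(chunks: list[str], seen_hashes: set[str]) -> list[str]:
    """Return only chunks whose hash has not been recorded yet.

    `seen_hashes` stands in for the set of hashes stored in Postgres;
    accepted chunks have their hashes added so later calls skip them.
    """
    new = []
    for chunk in chunks:
        h = chunk_hash(chunk)
        if h not in seen_hashes:
            seen_hashes.add(h)
            new.append(chunk)
    return new
```

Before upserting embeddings into Qdrant, the ingestion step would pass each document's chunks through `filter_new_chunks`, so re-adding a PDF does not duplicate vectors in the collection.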