# Local RAG: Langchain + Ollama + Streamlit

Build your own RAG (chatPDF) and run it locally.

## Setup

1. Install Ollama from https://ollama.ai/
2. Pull the `llama3:latest` model: `ollama pull llama3:latest`
3. Install the requirements: `pip install -r requirements.txt`
4. Run the Streamlit app: `streamlit run main.py`
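The app itself wires the RAG pipeline (load PDF, chunk, retrieve, generate) with Langchain, Ollama, and Streamlit. As a dependency-free illustration of the core retrieve-then-answer idea, here is a toy retriever that scores chunks by word overlap instead of real vector embeddings. All names and data below are illustrative sketches, not code from this repo:

```python
# Toy sketch of the retrieval step in a RAG pipeline.
# The real app uses Langchain embeddings + an Ollama model; here we use
# plain word overlap so the example runs with no dependencies.

def score(query: str, chunk: str) -> int:
    """Count query words present in the chunk (stand-in for vector similarity)."""
    query_words = set(query.lower().split())
    chunk_words = set(chunk.lower().split())
    return len(query_words & chunk_words)

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Return the top-k chunks by overlap score (stand-in for a vector store query)."""
    return sorted(chunks, key=lambda c: score(query, c), reverse=True)[:k]

# Pretend these are chunks produced by splitting an uploaded PDF.
chunks = [
    "Ollama runs large language models locally.",
    "Streamlit provides the chat interface.",
    "Langchain wires the PDF loader, splitter, and retriever together.",
]

# The retrieved chunk(s) would be prepended to the user's question
# as context in the prompt sent to the LLM.
context = retrieve("Which part provides the chat interface", chunks, k=1)
print(context[0])
```

In the real app, the same shape holds: the PDF is split into chunks, the most similar chunks to the question are retrieved, and the model answers using those chunks as context.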