# llamachain

Ollama Langchain Devcontainer

## Install Docker

## Download the project

```sh
git clone git@github.com:rayedbw/llamachain.git
cd llamachain
```

## Set up VS Code

- Open the project in VS Code: `code .`
- Install the Dev Containers extension
- Press `Ctrl+Shift+P` to open the Command Palette and run **Rebuild and Reopen in Container**

## Pull the Llama 2 model

In a terminal window outside of the container, run the following to download the Llama 2 model into the Ollama container:

```sh
docker exec llamachain-ollama-1 ollama pull llama2
```

## Run the application in VS Code

Run `python src/main.py` in the integrated terminal, or press F5 to launch it with the debugger.
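For reference, here is a minimal sketch of what a LangChain + Ollama entry point can look like. This is an illustrative example, not necessarily the exact contents of this repository's `src/main.py`; it assumes the devcontainer exports `OLLAMA_BASE_URL` pointing at the Ollama service.

```python
# Minimal sketch of a LangChain + Ollama chat script (illustrative only;
# the actual src/main.py in this repo may differ).
import os

from langchain_community.chat_models.ollama import ChatOllama

# OLLAMA_BASE_URL is assumed to be set by the devcontainer,
# e.g. pointing at the compose service that runs Ollama.
llm = ChatOllama(model="llama2", base_url=os.environ["OLLAMA_BASE_URL"])

response = llm.invoke("Tell me a one-line joke about containers.")
print(response.content)
```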

## Try different LLM models

To try the Gemma model, pull it the same way:

```sh
docker exec llamachain-ollama-1 ollama pull gemma
```

Don't forget to actually use the new model in your code by changing the `model` parameter:

```python
import os

from langchain_community.chat_models.ollama import ChatOllama

llm = ChatOllama(model="gemma", base_url=os.environ["OLLAMA_BASE_URL"])
```
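The rest of the code can stay the same; for example, `llm.invoke("Hello")` will now run against Gemma instead of Llama 2.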

## To use OpenAI

- Create an API key on the OpenAI website
- Create a `.env` file at the project root
- Add `OPENAI_API_KEY=<your_api_key>` to it (see the sketch below)
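A minimal sketch of switching the code from Ollama to OpenAI is shown below. It assumes the `langchain-openai` and `python-dotenv` packages are available (add them to the project's dependencies if they are not) and that the `.env` file sits at the project root; adapt it to however the project actually loads configuration.

```python
# Sketch: swapping the local Ollama model for OpenAI (assumes langchain-openai
# and python-dotenv are installed; the model name is just an example).
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI

load_dotenv()  # reads OPENAI_API_KEY from the .env file at the project root

llm = ChatOpenAI(model="gpt-3.5-turbo")  # picks up OPENAI_API_KEY from the environment

print(llm.invoke("Say hello from OpenAI.").content)
```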
