fix-comment-typos #334

Status: Open. Wants to merge 2 commits into base: main
2 changes: 1 addition & 1 deletion ingest.py
@@ -136,7 +136,7 @@ def main(device_type):
model_kwargs={"device": device_type},
)
# change the embedding type here if you are running into issues.
- # These are much smaller embeddings and will work for most appications
+ # These are much smaller embeddings and will work for most applications
# If you use HuggingFaceEmbeddings, make sure to also use the same in the
# run_localGPT.py file.

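The comment touched by this hunk warns that whichever embeddings class ingest.py uses must also be used in run_localGPT.py. A minimal pure-Python sketch of enforcing that with one shared constant; `EMBEDDING_BACKEND` and `resolve_embeddings` are illustrative names, not part of the repo:

```python
# Illustrative sketch: both ingest.py and run_localGPT.py could import this
# single constant so the embedding backend can never drift between them.
EMBEDDING_BACKEND = "instructor"  # or "huggingface" for the smaller embeddings

def resolve_embeddings(backend: str) -> str:
    """Map a backend name to the embeddings class both scripts should build."""
    mapping = {
        "instructor": "HuggingFaceInstructEmbeddings",
        "huggingface": "HuggingFaceEmbeddings",
    }
    if backend not in mapping:
        raise ValueError(f"unknown embedding backend: {backend}")
    return mapping[backend]
```

Both scripts would then construct `resolve_embeddings(EMBEDDING_BACKEND)` instead of hard-coding a class, which is one way to honor the comment the PR is fixing.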
6 changes: 3 additions & 3 deletions run_localGPT.py
@@ -95,7 +95,7 @@ def load_model(device_type, model_id, model_basename=None):
torch_dtype=torch.float16,
low_cpu_mem_usage=True,
trust_remote_code=True,
- # max_memory={0: "15GB"} # Uncomment this line with you encounter CUDA out of memory errors
+ # max_memory={0: "15GB"} # Uncomment this line when you encounter CUDA out of memory errors
)
model.tie_weights()
else:
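The commented-out `max_memory={0: "15GB"}` line above is an opt-in cap for CUDA out-of-memory errors. A minimal sketch of wiring it up as a flag rather than an edit-and-uncomment step; `build_model_kwargs` is a hypothetical helper, not part of run_localGPT.py:

```python
# Illustrative sketch (not the repo's code): build the kwargs dict passed to
# AutoModelForCausalLM.from_pretrained, adding a per-GPU memory cap on demand.
def build_model_kwargs(limit_gpu_mem: bool, gpu_cap: str = "15GB") -> dict:
    kwargs = {
        "low_cpu_mem_usage": True,
        "trust_remote_code": True,
    }
    if limit_gpu_mem:
        # Same shape as the max_memory={0: "15GB"} hint in the diff:
        # device index 0 is capped at gpu_cap.
        kwargs["max_memory"] = {0: gpu_cap}
    return kwargs
```

With a flag like this, hitting an OOM becomes a command-line switch instead of a source edit.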
@@ -127,7 +127,7 @@ def load_model(device_type, model_id, model_basename=None):
return local_llm


- # chose device typ to run on as well as to show source documents.
+ # choose device type to run on as well as to show source documents.
@click.command()
@click.option(
"--device_type",
@@ -183,7 +183,7 @@ def main(device_type, show_sources):
# uncomment the following line if you used HuggingFaceEmbeddings in the ingest.py
# embeddings = HuggingFaceEmbeddings(model_name=EMBEDDING_MODEL_NAME)

- # load the vectorstore
+ # load the vector store
db = Chroma(
persist_directory=PERSIST_DIRECTORY,
embedding_function=embeddings,