- Follow the instructions in the section below for setting up the Jupyter Environment.
- Go to https://cloud.llamaindex.ai/ and create an account using one of the authentication providers.
- Once logged in, go to the API Key page and create an API key. Copy that generated API key to your clipboard.
- Back in LlamaCloud, create a project and initialize a new index by specifying the data source, data sink, embedding model, and (optionally) transformation parameters.
- Open one of the Jupyter notebooks in this repo (e.g. `examples/getting_started.ipynb`) and paste the API key into the first cell block that reads `os.environ["PLATFORM_API_KEY"] = "..."`.
- Copy the `index_name` and `project_name` from the deployed index into the `LlamaCloudIndex` initialization in the notebook.
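The steps above can be sketched as a first notebook cell. This is a minimal sketch, not a verbatim copy of the notebook: the key, index name, and project name are hypothetical placeholders you must replace with your own values from the LlamaCloud UI, and the exact `LlamaCloudIndex` import path may vary by `llama_index` version.

```python
import os

# Hypothetical placeholder — paste your real API key from the API Key page.
os.environ["PLATFORM_API_KEY"] = "llx-..."

from llama_index.indices.managed.llama_cloud import LlamaCloudIndex

index = LlamaCloudIndex(
    name="my_index",           # the index_name of your deployed index (placeholder)
    project_name="my_project", # the project_name it lives under (placeholder)
)

# Query the managed index like any other LlamaIndex index.
query_engine = index.as_query_engine()
response = query_engine.query("What does this index contain?")
print(response)
```

Running this requires valid LlamaCloud credentials and a deployed index, so it is not executable offline.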
That should get you started! You should now be able to build an e2e pipeline with LlamaCloud as the backend.
If you're interested in running or viewing evals, click the "Evals" tab; you have the following options:
- Cloud Eval Runs: Easily run evals on a deployed index on the cloud. Click New Cloud Run, fill out the index/prompt/required info, and add your own questions to run.
- Local Eval Runs: Take full advantage of the flexible open-source evaluation modules, and upload results to the "Local Eval Runs" tab as a report card. See `examples/batch_eval.ipynb` for an example.
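A local eval run with the open-source evaluation modules might look roughly like the sketch below. This is an assumption-laden outline, not the notebook's exact code: it presumes a `query_engine` already built from your index (as in the setup steps above), a list of your own `questions`, and an LLM configured for the evaluators; import paths can differ between `llama_index` versions, so defer to `examples/batch_eval.ipynb` for the authoritative version.

```python
import asyncio

# Import paths may vary by llama_index version.
from llama_index.evaluation import (
    BatchEvalRunner,
    FaithfulnessEvaluator,
    RelevancyEvaluator,
)

# Run several evaluators in parallel over a batch of queries.
runner = BatchEvalRunner(
    {"faithfulness": FaithfulnessEvaluator(), "relevancy": RelevancyEvaluator()},
    workers=4,
)

questions = ["What is LlamaCloud?"]  # substitute your own eval questions

# `query_engine` is assumed to come from your LlamaCloudIndex setup.
eval_results = asyncio.run(
    runner.aevaluate_queries(query_engine, queries=questions)
)
```

The resulting per-evaluator results are what you would upload to the "Local Eval Runs" tab as a report card. This sketch needs LLM credentials to actually run.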
Here are some commands for installing the Python dependencies and running Jupyter:
```shell
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
jupyter lab
```
Notebooks are in `examples/`.
Note: if you encounter package issues when running the notebook examples, please `rm -rf .venv` and repeat the above steps.