LAB / PROJECT: KALE (Kubeflow Automated PipeLines Engine) and KServe

This lab/project shows:

  • how to use KALE and KServe in a project.

Prerequisite

Steps

  • Create a new notebook server pod and connect:

    image

  • Open a Terminal to download the examples:

    image

  • Clone the Kale examples repository:

git clone https://github.com/kubeflow-kale/kale

  • Open the notebook file (kale/examples/serving/sklearn/iris.ipynb):

    image

  • Run the "pip install -r requirements.txt" cell to install the required packages.

  • After the packages are installed, restart the kernel (the dependencies can also be installed from a Terminal; see the sketch after the screenshot):

    image
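
A minimal sketch of the Terminal alternative, assuming the repository was cloned into the home directory:

# Install the example's dependencies from a Terminal
# (the path is an assumption based on where the repository was cloned)
pip install -r ~/kale/examples/serving/sklearn/requirements.txt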

  • Open the KALE panel on the left and enable KALE:

    image

  • With KALE enabled, each cell is tagged with a pipeline step (e.g. imports, pipeline-parameters, etc.); the tags can also be inspected from a Terminal (see the sketch below).
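
A minimal sketch for inspecting the tags, assuming jq is available in the image and the repository sits in the home directory (KALE stores its annotations in each cell's metadata tags):

# Print the KALE tags attached to each notebook cell
jq -r '.cells[] | .metadata.tags // empty | .[]' ~/kale/examples/serving/sklearn/iris.ipynb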

  • Use "gcr.io/arrikto/jupyter-kale-py36:develop-l0-release-1.2-pre-295-g622fe91aca" as docker image if you encounter with another image.

    image

  • Before compiling, add the "serving_model" tag; this enables KALE to create the model in Kubeflow.

    image

  • Run "Compile and Run" to create Kubeflow pipeline from the notebook (this is KALE feature)

  • It creates pipeline:

image
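
As a sketch, the resulting runs can also be listed with the KFP CLI that ships with the kfp Python package (assuming it is installed in the notebook image; run names and IDs will differ):

# List recent Kubeflow Pipelines runs from a Terminal
kfp run list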

  • When monitoring pods in K8s, a pod can be seen running and completing for each pipeline step (see the kubectl sketch below):

    image
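
A minimal kubectl sketch for watching the step pods, assuming the default MiniKF user namespace kubeflow-user (pod names differ per run; <step-pod-name> is a placeholder):

# Watch the pipeline's step pods start and complete
kubectl get pods -n kubeflow-user -w

# Inspect the logs of one step's pod (Argo runs the user step in the "main" container)
kubectl logs -n kubeflow-user <step-pod-name> -c main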

  • Details can be viewed for each task:

    image

  • For each step, logs and data are stored in Rok and MinIO (if MiniKF is used); MinIO can also be browsed via a port-forward (see the sketch below):

    image
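
As a sketch, the MinIO artifact store that ships with Kubeflow Pipelines can be reached locally via a port-forward, assuming the standard minio-service in the kubeflow namespace (the default KFP credentials are minio / minio123):

# Forward the in-cluster MinIO API to localhost:9000, then browse http://localhost:9000
kubectl port-forward -n kubeflow svc/minio-service 9000:9000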

  • After the pipeline runs, the result parameters (e.g. accuracy) can be seen:

    image

  • KServe creates the model (this can be verified with kubectl; see the sketch below):

    image
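
A minimal sketch for verifying the served model from a Terminal, assuming KServe's InferenceService CRD is installed and the model lives in the kubeflow-user namespace:

# List the InferenceServices created for the pipeline's model
kubectl get inferenceservices -n kubeflow-user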

  • Detailed information is reachable from the Models section:

    image

    image

    image

    image

  • From the Experiments section, the models' results (e.g. accuracy, F1, etc.) can be seen:

    image

  • Open the Launcher:

    image

  • Run a Terminal:

    image

  • Run the following commands to create a JSON file and send it to the model:

cat <<EOF > "./iris-input.json"
{
  "instances": [
    [6.8,  2.8,  4.8,  1.4],
    [6.0,  3.4,  4.5,  1.6]
  ]
}
EOF
curl -v http://iris-pipeline-xchh4-3630146895-2r0st.kubeflow-user.svc.cluster.local/v1/models/iris-pipeline-xchh4-3630146895-2r0st:predict -d @./iris-input.json

  • After sending the request with curl, the prediction comes back in the response; this shows that KServe is serving the model (the response body follows KServe's V1 protocol; see the note below):

    image
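
Under KServe's V1 inference protocol, the response body has the form {"predictions": [...]}, with one predicted class per input row; the actual values depend on the trained model.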

  • Reaching the served model for a different training run:

    image