# MXNet to ONNX to ML.NET

This is an example project showing how an Apache MXNet MLP model can be exported to the ONNX format and subsequently used in ML.NET. The challenge was running inference in ML.NET because, at the time of writing, the ONNX Transformer was only supported on Windows (ref).
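As a rough sketch of the export step, MXNet 1.x ships an ONNX exporter in `mxnet.contrib.onnx`. The file names and the input shape below are illustrative assumptions, not necessarily what the notebook in this repository uses:

```python
import numpy as np
from mxnet.contrib import onnx as onnx_mxnet

# Paths to the trained MLP's symbol and parameter files
# (illustrative names; the notebook may write different ones).
sym = "models/mlp-symbol.json"
params = "models/mlp-0000.params"

# One input record with four features:
# RateCode, PassengerCount, TripTime, TripDistance.
input_shape = [(1, 4)]

# Convert the MXNet model to ONNX.
onnx_file = onnx_mxnet.export_model(
    sym, params, input_shape, np.float32, "models/mlp.onnx"
)
print(f"ONNX model written to {onnx_file}")
```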

## Tutorial

An in-depth tutorial that walks through the process and shows how to productionize the whole ML pipeline on AWS is available on my blog:

MXNet to ONNX to ML.NET with SageMaker, ECS and ECR

To follow the tutorial, all you need in terms of software is a browser and an RDP client.

## Requirements

For the Modeling part, you need Docker configured to use Linux containers.

For the Inference part, you need Docker configured to use Windows containers, which is only possible on a Windows host.

If you do not have a Windows box, you can still run the modeling part.

## Modeling

The data for this example comes from the New York taxi fare dataset.

The model solves a regression problem: predicting the taxi fare. Note that the focus is on the end-to-end process rather than on minimizing the model's error.

The model artifacts are already in the repository, so running the modeling step is not strictly required.

*Figure: the MLP architecture*
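For orientation, here is a minimal Gluon sketch of an MLP regressor like the one pictured above. The layer widths and training details are assumptions, not the notebook's exact configuration:

```python
import mxnet as mx
from mxnet import autograd, gluon, nd

# A small MLP with one output unit for regression
# (layer widths are illustrative assumptions).
net = gluon.nn.HybridSequential()
net.add(
    gluon.nn.Dense(64, activation="relu"),
    gluon.nn.Dense(32, activation="relu"),
    gluon.nn.Dense(1),  # single scalar output: the predicted fare
)
net.initialize(mx.init.Xavier())

loss_fn = gluon.loss.L2Loss()  # squared error for regression
trainer = gluon.Trainer(net.collect_params(), "adam", {"learning_rate": 1e-3})

# One illustrative training step on a dummy batch of
# (RateCode, PassengerCount, TripTime, TripDistance) features.
X = nd.random.uniform(shape=(32, 4))
y = nd.random.uniform(shape=(32, 1))
with autograd.record():
    loss = loss_fn(net(X), y)
loss.backward()
trainer.step(batch_size=X.shape[0])
```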

### Usage

- Windows
  - Use PowerShell
  - Make sure Docker uses Linux containers (How to guide)
  - Build the image: `docker build -t mxnet-onnx-mlnet-modeling -f Dockerfile.modeling .`
  - Run the container: `docker run -p 8888:8888 -v ${pwd}/data:/notebook/data/:ro -v ${pwd}/models:/notebook/models mxnet-onnx-mlnet-modeling`
- *nix
  - Build the image: `docker build -t mxnet-onnx-mlnet-modeling -f Dockerfile.modeling .`
  - Run the container: `docker run -p 8888:8888 -v $(pwd)/data:/notebook/data/:ro -v $(pwd)/models:/notebook/models mxnet-onnx-mlnet-modeling`

## Inference

Use the generated ONNX model to run an inference web service.

### Usage

- Make sure Docker uses Windows containers (How to guide)
- Build the image: `docker build -t mxnet-onnx-mlnet-inference -f Dockerfile.inference .`
- Run a container based on the built image: `docker run -p 5000:80 mxnet-onnx-mlnet-inference`
- Test with a cURL call or similar (a Python equivalent is sketched below). E.g.: `curl -H "Content-Type: application/json" -d "{\"RateCode\":1.0,\"PassengerCount\":1.0,\"TripTime\":1.0,\"TripDistance\":1.0}" http://localhost:5000`
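If you prefer a scripted check over cURL, here is a minimal Python sketch. It assumes the container from the previous step is listening on `localhost:5000` and that the service returns the prediction in the response body:

```python
import requests

# Feature payload matching the model's four inputs.
payload = {
    "RateCode": 1.0,
    "PassengerCount": 1.0,
    "TripTime": 1.0,
    "TripDistance": 1.0,
}

# POST to the inference service started via `docker run -p 5000:80 ...`.
response = requests.post("http://localhost:5000", json=payload)
response.raise_for_status()
print(response.text)  # prediction returned by the service
```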