
external NFS storage support #94

Open
cloustone opened this issue Jun 12, 2018 · 13 comments

Comments

@cloustone

Hello, @FfDL
We deploy FfDL in a private environment where S3 and Swift are not available; only external NFS storage is supported. For the model definition file we can use localstack in the current dev environment, but for the training data we would like to use NFS.
The following steps are our adaptations for NFS (a minimal sketch of the PV and PVC definitions is shown after the list):

  1. Deploy an external NFS server outside of Kubernetes.
  2. Add PV declarations in the templates folder.
  3. Add a PVCs file "/etc/static-volumes/PVCs.yaml" to the LCM Docker environment.
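For reference, here is a minimal sketch of the PV and static PVC definitions this implies; the NFS server address, export path, resource names, and sizes are illustrative, not taken from FfDL:

```yaml
# Hypothetical PersistentVolume backed by the external NFS server.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-training-pv
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.5              # external NFS server, outside the cluster
    path: /exports/training-data  # exported directory holding the training data
---
# Hypothetical PVC of the kind that could be listed in /etc/static-volumes/PVCs.yaml.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-training-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""            # avoid the default StorageClass; bind statically
  volumeName: nfs-training-pv     # bind explicitly to the PV above
  resources:
    requests:
      storage: 20Gi
```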

We are still verifying the above method, but a new question has already come up:
if two models are submitted and both use NFS static external storage at the same mount point, is that a problem?

Would you please confirm the above method and answer this question, or point us to the right solution?

Thanks

@atinsood

@cloustone there's work going on to clean up that tight integration, and we should have something out relatively soon.

The thought process is that you create a PVC, load all the training data onto it, and in the manifest file provide a PVC reference id/name, similar to the way you provide S3 details in the manifest today; the learner can then mount that PVC instead of the S3 storage and use the data.
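To make that concrete, a hypothetical manifest fragment might look like the following; the layout mirrors the existing S3-style data_stores section in the examples, but the "mount_volume" type and the "name" connection key are placeholders for whatever the eventual PVC support defines, not confirmed FfDL fields:

```yaml
# Hypothetical FfDL manifest fragment: point the learner at a pre-loaded PVC
# instead of S3. Field names below are placeholders, not confirmed FfDL schema.
data_stores:
  - id: nfs-volume
    type: mount_volume              # placeholder type for a PVC-backed store
    training_data:
      container: training-data      # directory inside the volume with the data
    training_results:
      container: training-results   # directory inside the volume for results
    connection:
      name: nfs-training-pvc        # placeholder key: name of the pre-loaded PVC
```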

@cloustone
Author

@atinsood thanks for your reply. I just used dynamic external NFS storage to run model training, and it seems to work (a sketch of the dynamic setup is below).
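For context, a minimal sketch of what a dynamic NFS setup might look like, assuming an external NFS provisioner (for example the community nfs-client provisioner) is already installed in the cluster; the provisioner name, StorageClass name, and size are illustrative:

```yaml
# Hypothetical StorageClass served by an already-installed external NFS provisioner.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-dynamic
provisioner: example.com/nfs        # must match the name the provisioner registers
---
# A PVC against that StorageClass; the provisioner creates the PV automatically.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: training-data-dynamic
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-dynamic
  resources:
    requests:
      storage: 20Gi
```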

@animeshsingh

@cloustone would love to get more details about how you did this. We would love to include a PR with a doc describing how to leverage NFS, following the steps you defined above:

"The following steps are our adaptations for NFS.

Deploy an external NFS server outside of Kubernetes.
Add PV declarations in the templates folder.
Add a PVCs file "/etc/static-volumes/PVCs.yaml" to the LCM Docker environment."

@atinsood

@cloustone "I just used dynamic external NFS storage to run model training, and it seems to work." Curious how you got this going from a technical perspective :)

Thinking more about your initial suggestion, you could also have a ConfigMap with a list of PVCs that you have created beforehand, mount it as a volume in the LCM, and then the LCM can just pick one PVC and allocate it to a training job (basically change https://github.com/IBM/FfDL/blob/master/lcm/service/lcm/learner_deployment_helpers.go#L493 to add the volume mount). A sketch of such a ConfigMap is below.

I wonder if you went this route or a different one.
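A minimal sketch of what such a ConfigMap might look like; the file name, key layout, and PVC names are illustrative, and the LCM would still need matching parsing and volume-mount logic:

```yaml
# Hypothetical ConfigMap listing pre-created PVCs for the LCM to hand out to
# training jobs. The layout is illustrative only.
apiVersion: v1
kind: ConfigMap
metadata:
  name: static-volumes
data:
  PVCs.yaml: |
    pvcs:
      - nfs-training-pvc-1
      - nfs-training-pvc-2
      - nfs-training-pvc-3
```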

@cloustone
Author

@atinsood Yes, the method is almost the same as what you described:

"Thinking more about your initial suggestion, you could also have a ConfigMap with a list of PVCs that you have created beforehand, mount it as a volume in the LCM, and then the LCM can just pick one PVC and allocate it to a training job."

@atinsood

@cloustone another interesting thing you can try is the Kubernetes Volume Controller (KVC): https://ai.intel.com/kubernetes-volume-controller-kvc-data-management-tailored-for-machine-learning-workloads-in-kubernetes/

https://github.com/IntelAI/vck

We have been looking into this as well. It can help bring the data down to the nodes running the GPUs, so you end up accessing the data the same way you would access local data on those machines.

This is an interesting approach and should work well if you don't need isolation of the training data for every training job.

@cloustone
Author

@atinsood Thanks, we will try this method depending on our requirements.

@Eric-Zhang1990

"Hello, @FfDL
We deploy FfDL in a private environment where S3 and Swift are not available; only external NFS storage is supported. For the model definition file we can use localstack in the current dev environment, but for the training data we would like to use NFS.
The following steps are our adaptations for NFS.

  1. Deploy an external NFS server outside of Kubernetes.
  2. Add PV declarations in the templates folder.
  3. Add a PVCs file "/etc/static-volumes/PVCs.yaml" to the LCM Docker environment.

We are still verifying the above method, but a new question has already come up:
if two models are submitted and both use NFS static external storage at the same mount point, is that a problem?

Would you please confirm the above method and answer this question, or point us to the right solution?

Thanks"

@cloustone Can you tell me in detail how to use NFS? I also want to use NFS, but I do not know how. Which files did you change, and how? Thank you very much.

@Eric-Zhang1990

"@cloustone another interesting thing you can try is the Kubernetes Volume Controller (KVC): https://ai.intel.com/kubernetes-volume-controller-kvc-data-management-tailored-for-machine-learning-workloads-in-kubernetes/

https://github.com/IntelAI/vck

We have been looking into this as well. It can help bring the data down to the nodes running the GPUs, so you end up accessing the data the same way you would access local data on those machines.

This is an interesting approach and should work well if you don't need isolation of the training data for every training job."

@atinsood Have you added this method to FfDL? Or do you have documentation on how to use this method with FfDL? Thank you very much.

@atinsood

@Tomcli @fplk did you try the Intel VCK approach with FfDL?

@sboagibm
Contributor

@atinsood @Eric-Zhang1990 No, we do not currently have VCK integration in FfDL.

@atinsood said:

"...so you end up accessing the data the same way you would access local data on those machines."

Which I think just implies a host mount, and I believe host mounts are enabled in the current FfDL, so you could give that a try; a minimal sketch of the generic pattern is below.
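For illustration only, this is the generic Kubernetes pattern for such a host mount at the pod level; it is not the FfDL manifest syntax, and the image and paths are made up:

```yaml
# Hypothetical pod fragment showing a plain hostPath mount. FfDL would generate
# something along these lines for the learner container; names are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: learner-hostmount-example
spec:
  containers:
    - name: learner
      image: tensorflow/tensorflow:1.5.0
      volumeMounts:
        - name: training-data
          mountPath: /data               # where the learner reads the training data
  volumes:
    - name: training-data
      hostPath:
        path: /var/lib/training-data     # data already placed on the node, e.g. by VCK
```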

@cloustone said:

"Thinking more about your initial suggestion, you could also have a ConfigMap with a list of PVCs that you have created beforehand, mount it as a volume in the LCM, and then the LCM can just pick one PVC and allocate it to a training job."

We do have an internal PR that enables the use of generic PVCs for the training and result volumes. I don't think we need a ConfigMap? The idea is that PVC allocation is done by some other process, and then we just point to the training data and result data volumes by name in the manifest.

Perhaps we can go ahead and externalize this in the next few days, at least on a branch, and you could give it a try. Let me see what I can do.

@Eric-Zhang1990

Eric-Zhang1990 commented Jan 17, 2019

@sboagibm Thank you for your kind reply. You say "then we just point to the training data and result data volumes by name, in the manifest"; can you give me an example of a manifest file that uses a local path on the host?

I found a file at "https://github.com/IBM/FfDL/blob/vck-patch/etc/examples/vck-integration.md"; is what you describe similar to this manifest file? If so, can I add multiple learners in it?

Thank you very much.

@Eric-Zhang1990

@cloustone @atinsood @sboagibm How can we use NFS to store data and start training jobs? Can you provide more detailed docs for us?
Thanks.
