Update Vertex orchestrator to allow for custom disk size and type #2253
Comments
I'd like to help out here. Could you guide me on how to get started?
Hi @npv12. I'd suggest you read the CONTRIBUTING.md guide, which has general instructions. Then, if you're unfamiliar with ZenML, the top of the docs will be useful, and of course you'll need to have and use the Vertex orchestrator, documented here. Also, let us know if any of the description doesn't make sense.
I can take this issue if it's not already assigned or completed.
@AryaMoghaddam let us know if you have any questions along the way!
Open Source Contributors Welcome!
Please comment below if you would like to work on this issue!
Contact Details [Optional]
[email protected]
What happened?
Currently, the Vertex orchestrator in ZenML does not provide options to configure custom disk size and type. This limitation restricts users from optimizing their cloud resources according to specific needs, such as handling large datasets or requiring faster disk speeds.
Task Description
Update the Vertex orchestrator's configuration/settings in ZenML to include options for specifying custom disk size and type. This enhancement will allow users more flexibility and control over their cloud resources, leading to better performance and cost optimization.
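To make the proposed change concrete, here is a minimal, self-contained sketch of what the enhanced settings could look like. The field names `boot_disk_size_gb` and `boot_disk_type` mirror the names Vertex AI uses in its worker pool `DiskSpec`, but the class below is a stand-in written for illustration, not ZenML's actual `VertexOrchestratorSettings` API:

```python
from dataclasses import dataclass


@dataclass
class VertexOrchestratorSettingsSketch:
    """Hypothetical stand-in for ZenML's VertexOrchestratorSettings
    with the two proposed fields added."""

    # Defaults chosen to match Vertex AI's documented boot disk defaults.
    boot_disk_size_gb: int = 100
    boot_disk_type: str = "pd-ssd"  # e.g. "pd-ssd" or "pd-standard"

    def to_disk_spec(self) -> dict:
        # Translate to the dict shape a Vertex AI CustomJob
        # worker pool spec expects for its disk configuration.
        return {
            "boot_disk_type": self.boot_disk_type,
            "boot_disk_size_gb": self.boot_disk_size_gb,
        }


# Example: a user requesting a larger, cheaper disk for a data-heavy pipeline.
settings = VertexOrchestratorSettingsSketch(
    boot_disk_size_gb=500,
    boot_disk_type="pd-standard",
)
print(settings.to_disk_spec())
```

The actual implementation would live in the orchestrator's settings class and be plumbed through to the job spec the orchestrator builds, but the shape above captures the intent of the issue.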
Expected Outcome
Steps to Implement
Additional Context
Allowing custom disk size and type configurations aligns with ZenML's philosophy of providing flexible and scalable MLOps solutions. This update will cater to a broader range of use cases and performance requirements.
Code of Conduct