
Container Apps: vnet configuration, custom domains and removing obsolete arm properties #1019

Open
stroborobo opened this issue Feb 27, 2023 · 7 comments


@stroborobo

stroborobo commented Feb 27, 2023

Hey Farmers,

loving this lib, thank you so much for building a project like this! <3

We'd like to use it with container apps (among other things) and noticed that vnet configuration and custom domains (via Ingress) are missing (ref #993).

I'd like to add support for these, but I also noticed that the ManagedEnvironment.JsonModel method emits some ARM that might conflict, since the API has been replaced. Specifically internalLoadBalancerEnabled, which seems to have been copied from the old Microsoft.Web/kubeEnvironments. I'm also a little confused about the type field: according to the MS docs there used to be one called environmentType, so I guess it was renamed at some point?
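For reference, here's roughly how the two shapes differ. This is only a sketch: the field names are from the published Microsoft.App/managedEnvironments schema as I read it, not from a verified deployment.

```fsharp
// Old Microsoft.Web/kubeEnvironments shape (what JsonModel emits today):
let oldProperties =
    {| internalLoadBalancerEnabled = true |}

// New Microsoft.App/managedEnvironments shape: `internal` replaces the old
// flag and only makes sense together with an infrastructure subnet.
let newProperties (subnetId: string) =
    {| vnetConfiguration =
        {| ``internal`` = true
           infrastructureSubnetId = subnetId |} |}
```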

So I was wondering how I should implement this. Would it be OK to update the resource version and remove/replace the outdated API with the new one? I don't think we can migrate the old internalLoadBalancerEnabled field to the new API as-is, since the new API requires a vnet and subnets to be set in vnetConfiguration.

What about default values and required fields? The MS docs are not exactly clear on many things (or I didn't find them). For the subnets, I guess I'd split them into vnet/subnets like the ContainerGroups builder does; something like the strawman below.
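```fsharp
open Farmer
open Farmer.Builders

// Strawman syntax only: link_to_vnet / infrastructure_subnet don't exist yet;
// internal_load_balancer_state is the existing keyword.
let env = containerEnvironment {
    name "my-environment"
    internal_load_balancer_state Enabled
    link_to_vnet "my-vnet"                 // hypothetical keyword
    infrastructure_subnet "infra-subnet"   // hypothetical keyword
}
```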

Ingress for the domains seems to be very minimal at the moment and would need to be extended. It's currently implemented in Common, which is probably fine for the smaller settings, but with Ingress becoming larger and nested, should it be moved to Builders? The app's custom domain setting would need to reference a new resource type, Microsoft.App managedEnvironments/certificates, so that would need to be reachable from the Ingress type; I've sketched one possible shape below.
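```fsharp
open Farmer
open Farmer.Builders

// All of the certificate/domain keywords below are invented for illustration;
// the idea is just that the app's ingress can reference a
// Microsoft.App/managedEnvironments/certificates resource.
let cert = containerEnvironmentCertificate {   // hypothetical builder
    name "www-example-com"
    link_to_environment "my-environment"       // hypothetical keyword
}

let app = containerApp {
    name "my-app"
    ingress_custom_domain "www.example.com" cert   // hypothetical keyword
}
```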

@ninjarobot
Collaborator

@stroborobo thanks for the interest in this, help is appreciated!

We took up the Container App implementation before there was even a documented ARM API for it, so it's been through some revisions, for sure. Here are the main guiding principles:

  • Avoid breaking the builders (the DSL) - it's often possible to make changes in the ARM resource generation without affecting the builder syntax.
  • Updating the API is fine as long as it creates the same resulting resource with the new API.

In this case the underlying API was entirely deprecated, and some things don't have the same structure from an ARM resource perspective, but we need to avoid creating breakage in the DSL. If you need to pick a sane default (for example, needing a vnet and subnet to enable the internal load balancer), then use defaults similar to the Azure portal's, which make for a quick experience. However, also make sure those values can be specified explicitly, since the defaults only work in simple cases; see the sketch below.
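For example, the defaulting could look something like this (just a sketch of the idea, assuming Farmer's FeatureFlag type; not the actual implementation):

```fsharp
open Farmer

// Sketch: only emit vnetConfiguration when the user supplied a subnet, and
// let the internal load balancer flag drive the `internal` field.
let vnetConfiguration (infrastructureSubnetId: string option) (internalLb: FeatureFlag) =
    infrastructureSubnetId
    |> Option.map (fun subnetId ->
        {| ``internal`` = (internalLb = Enabled)
           infrastructureSubnetId = subnetId |})
```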

Common has the very generic type structure for resources, but usually more of these have to be defined in the IArmResource implementation, and then the builder is just used to simplify the usage of those resources by providing a simpler syntax with sane defaults.
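Roughly, from memory (the real definitions live in Farmer's Types.fs, so double-check there):

```fsharp
// The IArmResource carries the full ARM detail and the JSON shape, while the
// builder only exposes keywords and fills in defaults.
type IArmResource =
    abstract member ResourceId : ResourceId
    abstract member JsonModel : obj

type IBuilder =
    abstract member ResourceId : ResourceId
    abstract member BuildResources : Location -> IArmResource list
```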

I hope that helps, and if you want to start a draft PR to show your suggestions, please do and we can review and discuss.

Thanks again!

@stroborobo
Author

Thanks for the feedback :)

To be clear: I'll try to avoid breaking the Farmer API by supplying defaults instead, and for the Ingress I'll add a new builder but keep the old types, accepting the new one via an overload, for example? And changes in the ARM output would be OK, since it will target the new resource type version?
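I.e. something like this? (A self-contained sketch with invented types; overloaded custom operations need F# 6+.)

```fsharp
// Invented types for illustration: old call sites passing the legacy type
// keep compiling, while new code can pass the richer config.
type LegacyIngress = { TargetPort: int }
type IngressConfig = { TargetPort: int; CustomDomains: string list }

type AppState = { Ingress: IngressConfig option }

type AppBuilder() =
    member _.Yield _ = { Ingress = None }
    [<CustomOperation "set_ingress">]
    member _.SetIngress(state: AppState, i: IngressConfig) =
        { state with Ingress = Some i }
    [<CustomOperation "set_ingress">]
    member _.SetIngress(state: AppState, i: LegacyIngress) =
        // Lift the legacy type into the new one with empty custom domains.
        { state with Ingress = Some { TargetPort = i.TargetPort; CustomDomains = [] } }

let appSketch = AppBuilder()
let newIngress = { TargetPort = 8080; CustomDomains = [ "www.example.com" ] }
let configured = appSketch { set_ingress newIngress }
```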

@ninjarobot
Collaborator

> using an overload

exactly, yes.

> changes in the ARM output would be ok

Yes, just please ensure it creates the same resource using the new API/version so that people who redeploy with the same builders and syntax will not have any surprise changes.

@isaacabraham
Member

isaacabraham commented Mar 3, 2023

Just some thoughts here: what would happen for people redeploying an existing resource group that was originally deployed using the old structures, when they redeploy using the new ARM?

DSL - agreed, yes, let's try to avoid changes. If stuff needs to break for the right reason, I don't mind if old code doesn't compile, as long as there's good documentation for how to fix it.

@stroborobo
Author

I haven't gotten to dig into this yet, but I'll test it and report back; my plan is sketched below. I'd expect it to be diffed as usual, since the changes are additive (except for apiVersion, of course). It's also entirely possible that internal_load_balancer_state isn't doing anything at the moment, so maybe we could mark it as obsolete and forget about the defaults then, I guess?
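```fsharp
open Farmer
open Farmer.Builders

// Plan: generate the template with the current release and with my branch,
// then diff the JSON; apart from type/apiVersion the resources should match.
// arm, containerEnvironment and Writer.quickWrite are existing Farmer pieces;
// the environment below is just a minimal example.
let deployment = arm {
    location Location.WestEurope
    add_resource (containerEnvironment { name "my-environment" })
}

deployment |> Writer.quickWrite "managed-env"   // writes managed-env.json
```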

Feels like this also feeds into the whole versioning question (#842), I'll just dump some ideas over there in a moment.

@ninjarobot
Collaborator

The API version and the underlying resource are updated independently. When you specify the apiVersion, you are telling ARM what format your requests will be in so it knows how to route it to the correct internal resource provider endpoint and so the RP itself knows how to deserialize requests and serialize responses.

Redeploying existing resources using a new API version is intended to be no different than redeploying with the existing API. The resource provider deserializes the payload and checks for changes with the existing resource, then applies only what is changed. The API version would only affect how it goes about deserializing and validating the payload.

I expect this is even the case with a total change of resource type, as was done with container apps, although I don't personally have one created with the original resource type and API to verify this. That team had to go through a whole migration project to repopulate the ARM resources so they would appear under the new type, while the actual underlying Kubernetes environment remained the same.

@stroborobo
Author

Hey, I'll have to delay this a little longer, sorry. I'm still determined to do this and I'm going to keep your words in mind, but I need an emergency vacation right now. I hope you understand.
