Portal Dashboard: Treat MarkdownPartMetadata as untyped JSON #3123
Conversation
Does the PR have any schema changes? Found 10 breaking changes: Types
Codecov Report
Attention: Patch coverage is
Additional details and impacted files:
@@            Coverage Diff             @@
##           master    #3123      +/-   ##
==========================================
+ Coverage   60.75%   61.40%   +0.64%
==========================================
  Files          71       72       +1
  Lines       11368    11583     +215
==========================================
+ Hits         6907     7112     +205
- Misses       3898     3906       +8
- Partials      563      565       +2
View full report in Codecov by Sentry.
Force-pushed 1955a3e to 709e975.
The end result looks good, though I've got a couple of concerns.
The new schema transformation doesn't appear to be scoped to affect only the custom resource's schema: it can change any types globally. That said, it's also not complete, as it doesn't allow for transformation of the resource's schema itself. Also, the order of transformations could be an issue, as they could easily interact with each other. It would be much easier to reason about if we could apply the transformation to the types generated for just a specific resource, so that the transformation is scoped to a single gather resource context.
Secondly, the implementation of the transformation looks quite fragile and slow. We're iterating through every generated type which I expect adds quite a bit of redundant processing. We're also using a hard-coded list to try and identify the types associated with the resource. If the schema is updated this will break.
Can we make this easier to maintain in the future?
@danielrbradley All good questions - I had similar thoughts. See my answers below.
There is no such thing as types scoped to a single resource. Types are contained within a module, and multiple resources of that module can rely on overlapping types. If we wanted to get a list of all types that a given resource depends on, we'd need to traverse its properties recursively and build out a list. And yet, again, it would not be an exclusive list. I actually think that we need cross-resource and cross-module transformations. We could use them for existing hard-coded generation tweaks like SubResource.ID expansion, User Assigned Identities shape, maintained sub-resource collection, etc.
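The recursive traversal described above can be sketched with a toy type graph. The type tokens and the reference map below are illustrative, not the provider's real data structures; the point is that reachability from one resource's properties yields a set of types that may overlap with other resources:

```go
package main

import "fmt"

// Toy model: each type token maps to the type tokens its properties reference.
// These token names are hypothetical examples.
var typeRefs = map[string][]string{
	"portal:DashboardParts":        {"portal:DashboardPartMetadata"},
	"portal:DashboardPartMetadata": {"portal:MarkdownPartSettings"},
	"portal:MarkdownPartSettings":  {},
	"portal:UnrelatedType":         {},
}

// reachableTypes recursively collects every type token reachable from the
// given root tokens. Note the result is not exclusive to one resource:
// another resource in the same module may reference the same types.
func reachableTypes(roots []string) map[string]bool {
	seen := map[string]bool{}
	var visit func(token string)
	visit = func(token string) {
		if seen[token] {
			return
		}
		seen[token] = true
		for _, ref := range typeRefs[token] {
			visit(ref)
		}
	}
	for _, r := range roots {
		visit(r)
	}
	return seen
}

func main() {
	used := reachableTypes([]string{"portal:DashboardParts"})
	fmt.Println(len(used), used["portal:UnrelatedType"])
}
```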
That's straightforward to add later, I don't see why I need to do so in this PR without a use case.
That's fair. Do you have ideas on how to overcome this? I could add a test that runs transformations in a few random orders and compares the results. Since we are talking about schema types here, if the order becomes important, we will start getting spurious diffs on schema generation, which already serves as a test of sorts.
It's an iteration through map keys and parsing each key, which is ~O(N) for N in a few thousands. I expect it to take a few ms max. What am I missing?
That's fair. Do you have suggestions here? How can we transform the types without relying on their names? The best I can think of is to check the type shape and fail loud and clear if it's not what we expect, forcing a maintainer to resolve it. If you have a better approach in mind, I'm all ears!
@mikhailshilkov I could see a few alternatives here...
Force-pushed 709e975 to 5e6b940.
@danielrbradley I refactored the implementation away from all-mighty transformations. Instead, the custom resource now specifies "overrides" for schema and metadata types, which then replace the original types with the same name. Additionally, I added a final pass on the schema that deletes all unused object types. The good news is that no previous types were unused, so only the Dashboard Markdown types are being removed now. This also helps with the existing snapshot test: it elides the Dashboard types and therefore the snapshot is unchanged. Let me know if you think it's a step in the right direction.
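The override mechanism described here boils down to a name-keyed merge. A minimal sketch, using a stand-in TypeSpec struct rather than the real pulumi schema types: an override with the same token as an original type replaces it, and brand-new tokens are inserted.

```go
package main

import "fmt"

// TypeSpec stands in for a schema object type definition; the real
// provider uses richer schema and metadata types here.
type TypeSpec struct {
	Description string
}

// applyOverrides replaces (or adds) types by name: an override whose token
// matches an original type wins, and new tokens are simply added.
func applyOverrides(original, overrides map[string]TypeSpec) map[string]TypeSpec {
	merged := map[string]TypeSpec{}
	for token, spec := range original {
		merged[token] = spec
	}
	for token, spec := range overrides {
		merged[token] = spec
	}
	return merged
}

func main() {
	original := map[string]TypeSpec{
		"portal:MarkdownPartMetadata": {Description: "strongly typed"},
	}
	overrides := map[string]TypeSpec{
		"portal:MarkdownPartMetadata":  {Description: "untyped JSON"},
		"portal:DashboardPartMetadata": {Description: "generic part"},
	}
	merged := applyOverrides(original, overrides)
	fmt.Println(len(merged), merged["portal:MarkdownPartMetadata"].Description)
}
```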
Force-pushed 098633b to 793eb8e.
I think this is better, though it still feels a little obscure when reading the custom resource definition as to what it's actually doing.
The separation of Types* and MetaTypes* is not great as these are almost identical implementations, and could easily introduce bugs if they differed accidentally. It's probably a wider issue to bring these two together into a single model at some point which can be projected into both the schema & metadata as required.
Thinking from the point of view of what we're trying to express in the purpose of this custom resource, perhaps utilizing the idea of the visitor pattern could work nicely here so the custom resource can only interact with types that the original resource referenced, or add new ones. For example:
func portalDashboard() *CustomResource {
	return &CustomResource{
		// Provide a token of the resource to override.
		Token: "azure-native:portal:Dashboard",
		// Follow the types referenced from the resource, then pass them all in here to be customised.
		CustomiseTypes: func(schema map[string]*schema.ComplexTypeSpec, metadata map[string]*resources.AzureAPIType) (map[string]*schema.ComplexTypeSpec, map[string]*resources.AzureAPIType) {
			// Iterate over types & add to output with modifications where required.
			// Extra types can be added here too.
			// Unreferenced types will be dropped from the schema.
			return schema, metadata
		},
		// We could also add a `CustomiseResource` function easily to mirror this for the root resource's schema too.
	}
}
func normalizePackage(pkg *pschema.PackageSpec, metadata *resources.AzureAPIMetadata) {
	// Record all type tokens referenced from resources and functions.
	usedTypes := map[string]bool{}
	visitor := func(t string, _ pschema.ComplexTypeSpec) {
		usedTypes[t] = true
	}
	VisitPackageSpecTypes(pkg, visitor)

	// Elide unused types.
	allTypeNames := codegen.SortedKeys(pkg.Types)
	for _, typeName := range allTypeNames {
		if !usedTypes[typeName] {
			t := pkg.Types[typeName]
			if len(t.Enum) > 0 {
				continue
			}
			delete(pkg.Types, typeName)
			delete(metadata.Types, typeName)
		}
	}
}
This method would be a nice utility to have elsewhere too! Maybe another little one for the pulumi-go-provider as an "x" package?
Force-pushed 793eb8e to 03d9e7a.
Problem
As explained in #1696 and #3023, the azure-native:portal:Dashboard resource is currently unusable due to the lack of specific types for dashboard parts. The API supports a multitude of part types, but the OpenAPI spec only describes one of them. The result is that we generate a discriminated union so restricted that it can't be used in practice. We opened Azure/azure-rest-api-specs#27465 upstream but have received no feedback yet. The work to describe all the other dashboard part types looks very involved: there are many types with many, many properties. Unfortunately, we don't anticipate reliable type definitions any time soon.
Proposal
As suggested in #3023, this PR introduces a type override to remove the only strongly typed part definition and replace it with a generic DashboardMetadataPart type. The type accepts arbitrary collections of inputs and settings, which enables passing configuration for any part type.
Note that technically this is a breaking change. However, I'm quite convinced that the previous type system prevented anyone from creating any non-trivial dashboards, so we are not breaking any practical user scenarios.
Implementation details
Historically, we have implemented this kind of change as a special case in our generation logic. However, this change is tricky to do in the generation pass, because we need to introduce a new type, change an existing type, and then (ideally) remove unused types too.
Because of that, and to avoid further bloat of the inline generation logic, I decided to introduce a new mechanism in custom resources. It allows specifying "overrides" for schema and metadata types, which then replace the original types with the same name.
Additionally, I added a final pass on the schema that deletes all unused object types. The good news is that no previous types were unused, so only the Dashboard Markdown types are being removed now. This also helps with the existing snapshot test: it elides the Dashboard types and therefore the snapshot is unchanged.
Testing
In addition to a few unit tests, I added two end-to-end tests that create a dashboard from TypeScript and C#. The TypeScript one was created by importing an elaborate dashboard created manually in the Azure portal. Unfortunately, our C# import is broken beyond repair for untyped dictionaries, so I defined a much simpler dashboard manually.
Resolves #1696
Resolves #3023