
Adds IFluidContainer end to end tests using tinylicious #21092

Open · wants to merge 10 commits into main
Conversation

@seanimam (Contributor) commented May 15, 2024:

Description

  • Adds new IFluidContainer end-to-end tests using Tinylicious
  • Edits the original unit tests to remove the mock implementation of AppInsightsTelemetryConsumer in favor of using the actual one

@seanimam requested a review from a team on May 15, 2024 17:05
@github-actions bot added the labels area: framework, dependencies, and base: main on May 15, 2024
@@ -77,10 +77,14 @@
"format:prettier": "prettier --write . --cache --ignore-path ../../../../.prettierignore",
"lint": "fluid-build . --task lint",
"lint:fix": "fluid-build . --task eslint:fix --task format",
"start:tinylicious:test": "tinylicious > tinylicious.log 2>&1",
"test": "npm run test:mocha",
"test:mocha": "npm run test:mocha:esm && echo skipping cjs to avoid overhead - npm run test:mocha:cjs",
Contributor:
As things are configured now, if a developer runs npm run test (or test:mocha, etc.) the tests will fail due to tinylicious not running, right? If so, we may want to reconfigure the scripts to ensure they are all runnable.

I also don't think we want test:mocha (etc.), as it will result in the tests being run as a part of the global test:mocha run, which won't be correctly set up for tinylicious to run.

I would recommend removing the mocha script variants and updating test to call test:realsvc.

@Josmithr (Contributor), May 15, 2024:

That said, there might also be a case for having two separate suites of tests here:

  1. The end-to-end tests that run with tinylicious via test:realsvc
  2. The simple unit tests that run via test:mocha

The main benefit is that it remains possible to write simple unit tests that don't rely (and can't rely) on tinylicious.

Then we would just want to update test to call both of them for convenience.

Contributor:

I also like the idea of two separate suites/kinds of tests: keep the actual unit tests running under test:mocha so CI only runs those, and have test kick off both those and the realsvc ones. Although if we want the realsvc ones to run in CI, we'd need a test:realsvc:tinylicious:report script... which currently nothing except the e2e tests package has, so I'm hesitant to add it here. I'd be ok just triggering them locally through test for now.

As for how to split the two kinds of tests... I might go with *.spec.ts for the unit ones and maybe *.spec.realsvc.ts for the others, and pass the appropriate filter to mocha in each of the scripts. Although that's not something we do in other places as far as I know, so maybe split them into two subfolders of src/test/, with the *.spec.ts suffix in both cases?
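For illustration, a rough sketch of what the suffix-based option could look like in the scripts section; the *.spec.realsvc glob, the test:realsvc:run helper script, and the lib/test paths are assumptions for the example, not something this PR defines:

```json
{
	"scripts": {
		"start:tinylicious:test": "tinylicious > tinylicious.log 2>&1",
		"test:mocha": "mocha --recursive \"lib/test/**/*.spec.*js\" --ignore \"lib/test/**/*.spec.realsvc.*js\" --exit",
		"test:realsvc": "start-server-and-test start:tinylicious:test 7070 test:realsvc:run",
		"test:realsvc:run": "mocha --recursive \"lib/test/**/*.spec.realsvc.*js\" --exit"
	}
}
```

mocha's --ignore flag keeps the realsvc specs out of the plain test:mocha run, while test:realsvc starts Tinylicious before running only those specs.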

@seanimam (Author), May 21, 2024:

Updated with split tests:

  • test --> spins up Tinylicious and runs all unit and e2e tests
  • test:mocha --> runs the unit tests only
  • test:realsvc --> spins up Tinylicious and runs only the e2e tests

…s to await a promise rather than cycle a while loop
@alexvy86 (Contributor) left a comment:

Left a few comments to simplify the tests; they're on one test specifically, but they can be applied to all of them.

Comment on lines 73 to 89
const actualConsumedConnectedTelemetry: ContainerConnectedTelemetry[] =
telemetryConsumerConsumeSpy
.getCalls()
.map((spyCall) => spyCall.args[0] as IFluidTelemetry)
.filter(
(telemetry) =>
telemetry.eventName === ContainerTelemetryEventNames.CONNECTED,
) as ContainerConnectedTelemetry[];

if (actualConsumedConnectedTelemetry.length === 0) {
console.log("Failed to find expected telemetry");
reject(
new Error(
"Expected TestTelemetryConsumer.consume() to be called alteast once with expected container telemetry but was not.",
),
);
}
Contributor:

We should expect a single stable telemetry event for connected and nothing else, right? Would it be simpler and more robust to aggressively assert that the calls array has exactly one item, and that (calls[0] as IFluidTelemetry).eventName === ContainerTelemetryEventNames.CONNECTED?
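A minimal sketch of that stricter check, reusing the spy, type, and enum names from the snippet above (the exact placement within the test is assumed):

```typescript
// Assumes the same sinon spy and chai `expect` already set up in the test above.
const calls = telemetryConsumerConsumeSpy.getCalls();
expect(calls).to.have.lengthOf(1);
expect((calls[0].args[0] as IFluidTelemetry).eventName).to.equal(
	ContainerTelemetryEventNames.CONNECTED,
);
```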

@seanimam (Author):

My thought here is that, over time, as the framework changes, new container events may be emitted during the connection process, and this test just wants to see that the connected event fired, rather than asserting that it was the only event fired.

Contributor:

The container could start emitting more events, but until we update fluid-telemetry to leverage them, what this test should see remains the same, right? If we start emitting stable telemetry for the new events, then we might need to make updates here, but at that point that seems expected and, IMO, preferable. Right now this feels to me like a somewhat vague test... for example, it would not tell us if the container for some reason connected, disconnected, and connected again, which it shouldn't be doing based on the test setup.

@seanimam (Author):

Good points. I forgot we're filtering untracked event types.
Updated in commit 9b0f166

Comment on lines 98 to 108
try {
expect(expectedContainerTelemetry).to.deep.equal(actualContainerTelemetry);
// We won't know what the container containerInstanceId will be but we can still check that it is defined.
expect(actualContainerTelemetry.containerInstanceId)
.to.be.a("string")
.with.length.above(0);
// This will enable the test to finally complete.
resolve();
} catch (error) {
reject(error); // Reject the promise with the assertion error
}
Contributor:
If we move this out of the event listener as described above, we don't need the try/catch anymore.
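A sketch of that shape, assuming the listener only collects the telemetry and resolves, so a failed assertion afterwards simply fails the test on its own; the event name, timeout options, and overall structure here are illustrative rather than the PR's final code:

```typescript
// Resolve with the raw telemetry from inside the listener...
const actualContainerTelemetry = await timeoutPromise<ContainerConnectedTelemetry>(
	(resolve) => {
		container.on("connected", () => {
			const connected = telemetryConsumerConsumeSpy
				.getCalls()
				.map((call) => call.args[0] as IFluidTelemetry)
				.find((t) => t.eventName === ContainerTelemetryEventNames.CONNECTED);
			if (connected !== undefined) {
				resolve(connected as ContainerConnectedTelemetry);
			}
		});
	},
	// Timeout options are assumed here; adjust to whatever timeoutPromise actually accepts.
	{ durationMs: 10000, errorMsg: "Timed out waiting for connected container telemetry" },
);

// ...then assert after the await; a failed expect fails the test directly, no try/catch needed.
expect(actualContainerTelemetry.containerInstanceId).to.be.a("string").with.length.above(0);
```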

@seanimam (Author):

Updated in commit 9b0f166

"test:mocha:cjs": "mocha --recursive \"dist/test/**/*.spec.*js\" --exit",
"test:mocha:esm": "mocha --recursive \"lib/test/**/*.spec.*js\" --exit",
"start:tinylicious:test": "tinylicious > tinylicious.log 2>&1",
"test": "start-server-and-test start:tinylicious:test 7070 test:mocha:all",
Contributor:
Nit: this works, and might be more performant, but it diverges from the pattern of how CI runs the tests (as separate steps). It might be helpful to be consistent with CI and just run the standard mocha tests and the real-service ones separately, via concurrently npm:test:mocha:unit npm:test:mocha:end-to-end.
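A hypothetical sketch of that arrangement; only the test:mocha:unit and test:mocha:end-to-end names come from the comment above, while the unit/e2e subfolder globs and the test:mocha:e2e:run helper script are made up for the example:

```json
{
	"scripts": {
		"start:tinylicious:test": "tinylicious > tinylicious.log 2>&1",
		"test": "concurrently npm:test:mocha:unit npm:test:mocha:end-to-end",
		"test:mocha:unit": "mocha --recursive \"lib/test/unit/**/*.spec.*js\" --exit",
		"test:mocha:end-to-end": "start-server-and-test start:tinylicious:test 7070 test:mocha:e2e:run",
		"test:mocha:e2e:run": "mocha --recursive \"lib/test/e2e/**/*.spec.*js\" --exit"
	}
}
```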

@@ -94,10 +104,13 @@
},
"devDependencies": {
"@arethetypeswrong/cli": "^0.15.2",
"@fluid-experimental/tree": "workspace:~",
Contributor:
This is the old shared-tree, not the new one. I'm assuming we want to use the new one here?

// This test suite creates an actual IFluidContainer and confirms events are fired with the expected names during the expected events

describe("container telemetry E2E", () => {
let schema;
Contributor:
We should give this a type annotation.


// We don't know exactly when the given container events that we're looking for will fire so we have to
// wrap the container.on(...) event handler within a promise that will be awaited at the end of the test.
const { actualContainerTelemetry, expectedContainerTelemetry } = await timeoutPromise<{
Contributor:
I didn't know we had this utility. Good to know!

expectedContainerTelemetry: {
eventName: ContainerTelemetryEventNames.CONNECTED,
containerId,
containerInstanceId: containerTelemetryFromSpy.containerInstanceId,
Contributor:
Can we not get the instance ID directly from our container variable? If we did, we could declare the entire "expected" object outside of the promise. But I may be missing something.

Contributor:

I'm remembering that we don't expose the instance ID on IFluidContainer. That's a bummer. It doesn't seem quite right to use the "actual" value for it here in the "expected" object, but the alternative would be to drill into private members of the container, which likely isn't better.

Probably worth a brief comment explaining this. Alternatively, we could omit the instance IDs from the test comparisons. Either way seems reasonable to me.
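If the comparisons dropped the instance ID, a sketch could look like this (the field names come from the snippets above; the destructuring is just one way to do it):

```typescript
// Compare everything except containerInstanceId, then sanity-check the ID separately.
const { containerInstanceId, ...actualWithoutId } = actualContainerTelemetry;
expect(actualWithoutId).to.deep.equal({
	eventName: ContainerTelemetryEventNames.CONNECTED,
	containerId,
});
expect(containerInstanceId).to.be.a("string").with.length.above(0);
```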

if (actualConsumedDisposedTelemetry.length === 0) {
reject(
new Error(
"Expected TestTelemetryConsumer.consume() to be called alteast once with expected container telemetry but was not.",
Contributor:
Suggested change
"Expected TestTelemetryConsumer.consume() to be called alteast once with expected container telemetry but was not.",
"Expected TestTelemetryConsumer.consume() to be called at least once with expected container telemetry but was not.",

@Josmithr (Contributor) left a comment:

Left a few more questions and suggestions, but all in all it looks good to me!
