Requirement for test tagging is not clear #778

Open
greenatatlassian opened this issue May 12, 2022 · 11 comments
Assignees: jmtd
Labels: status/review (Scheduled for a review), type/bug

Comments

@greenatatlassian
Contributor

Describe the bug
Tests must be tagged (e.g. with the image name) in order for them to be run by cekit test behave. This requirement is not clear from the testing documentation.

To reproduce

  1. Create a test such as the one given as an example in the documentation, and an image to run the test on. The test is created in the tests/features directory alongside the image.yaml file.
  2. Run cekit test behave. The tests are not run (note in the log output that all tests are skipped).

Note: tests are run if they are explicitly called out by name, e.g. cekit test behave --name "Test name"
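For illustration, here is a minimal sketch of the kind of feature that gets skipped. The path and step wording are hypothetical (written in the style of the behave-test-steps library), not taken from a real project:

```gherkin
# tests/features/smoke.feature (hypothetical path)
Feature: Basic smoke test
  # No tag above the Feature, so `cekit test behave` skips it entirely

  Scenario: Container comes up
    When container is ready
    Then file /opt/app/run.sh should exist
```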

Expected behavior
I would expect cekit to run all untagged tests in the tests/features directory next to the image.yaml against that image when calling cekit test behave.

Only after reading the source code for behave_runner.py and observing the verbose output was I able to discover why my tests weren't being run. After doing that, I realised that the documentation for test tagging suggests that tests need to be tagged with the image name. It would be nice if this requirement were called out front and centre at the top of the testing documentation and included in the examples. Or better yet, simply not require the tag—since the tests are already organised into directories next to either images or modules, it should always be clear which image/module a given set of tests belongs to without needing the tag.
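For comparison, a sketch of the tagged variant that the current behaviour does pick up, assuming an image.yaml declaring name: acme/my-image (the exact tags cekit derives are computed in behave_runner.py, so treat the tag below as illustrative):

```gherkin
# Same feature, now tagged with the image name from image.yaml,
# so `cekit test behave` selects it
@acme/my-image
Feature: Basic smoke test

  Scenario: Container comes up
    When container is ready
    Then file /opt/app/run.sh should exist
```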

@greenatatlassian added the status/review (Scheduled for a review) and type/bug labels on May 12, 2022
@rnc assigned @jmtd on Jun 22, 2022
@greenatatlassian
Contributor Author

If we can agree on a way forward here, I think someone from my team can create a PR for this. Our preference would be that we not require the image name tag, since the developer of a module may not know the name of the image it will be used in. However, this might be considered a breaking change, so I'm not sure how everyone feels about it.

@greenatatlassian
Contributor Author

Another option, if we don't want to eliminate the tag, would be to enable some kind of universal tag, like @all_images, that could be put on a module to make sure it runs regardless of what image the module is put in.
For tests associated with images instead of modules, we could do the same, though I think just not requiring the tag still makes more sense in the image case.
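A sketch of how that might look on a module's feature file. The @all_images tag, the path, and the step wording are all hypothetical; cekit does not recognise such a tag today:

```gherkin
# modules/my-module/tests/features/my-module.feature (hypothetical)
# @all_images is the proposed universal tag, not an existing cekit feature
@all_images
Feature: my-module behaves the same in every image that includes it

  Scenario: Module entrypoint is installed
    When container is ready
    Then file /usr/local/bin/my-module.sh should exist
```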

@rnc
Contributor

rnc commented Dec 5, 2022

This is a good point. Also, if possible, there should be better logging of which tests are collected and which are ignored. I am leaning slightly towards the universal @always tag idea but have asked @jmtd, @spolti, @luck3y to comment.

@spolti
Contributor

spolti commented Dec 6, 2022

Hi @greenatatlassian, thanks for opening this issue.
IIRC the behave tests work this way because of requirements we had in the past. Let me add some context here :)

When this tool was first created, we were just starting to play with containers and we addressed mainly our own use case.
We had a set of images that all used the same shared modules, and a given test would be run against a few images but not others. The way we handled this was by running only the tests whose annotation matched the image 'name:tag' defined in the image.yaml.

It is still working well for our use case, but I see and understand your point.
Let me add my 2 cents.

I would stick with the option of adding a new tag that is triggered regardless of the image's 'name:tag'. @always is a good call. Another option aiming at the same behavior would be to expose a new parameter, like:

cekit test ... behave --all

The pros: it would continue to play well with our current test structure, would not require any changes to existing tests on the user side, and would run all features and scenarios. On the other hand, you can't specify a specific feature or scenario like you can with the annotation.

The second option, which IMHO could be a good fit as well, is to do as you suggest and run all tests/features for the image or a specific module, but aimed only at modules, so it behaves very similarly to the Bats tests, like this example.
This would take a bit more effort to implement, I guess, and would not fully address the problem, but IMHO it is a nice feature to have.

Thanks.

@luck3y
Contributor

luck3y commented Dec 6, 2022

I think the best approach is:

(1) Don't require an image name on tests; untagged tests are run by default.
(2) Allow tests to be excluded by both image name (currently supported) and tag (cekit test --exclude=sometag, perhaps?).

We do tend to share and reuse tests across different images that have quite a bit in common, and I understand the above would break older images. I'm not sure there is a good way of preserving backwards compatibility, though perhaps introducing a new tester (behave2, or something) and creating the new semantics there might be useful.
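A sketch of how the exclusion idea might read in practice. Both the --exclude flag and the @slow tag are hypothetical here, neither exists in cekit today:

```gherkin
# Hypothetical: excluded on demand with `cekit test behave --exclude=slow`,
# while untagged features run by default under proposal (1)
@slow
Feature: Long-running upgrade checks

  Scenario: Data survives an upgrade
    When container is ready
    Then file /opt/app/data/migrated should exist
```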

@jmtd
Collaborator

jmtd commented Dec 15, 2022

Thanks @spolti for the historical context. Indeed, besides having shared and unshared tests in one repository, we also had all the image descriptors in the same repository. We've since moved away from that.

Personally, I think we should go hard for a change to "untagged means always run", rather than something short of that. I'd rather live with a short-term transition period than the long-term legacy of adding more flags, or more semantics, to cekit. That said, I like @luck3y's suggestion of an --exclude option, as it would be useful anyway. I think "untagged means always run" is the more intuitive choice, and @greenatatlassian's experience would seem to corroborate that.

The work users need to do to transition from the old to the new behaviour is largely a one-shot: adding tags to scenarios or features. It can be done prior to the behavioural change (explicitly tagging all relevant tests with image names works under both the old and new semantics). Worst case, the breakage observed will be running too many tests, not too few, which is the better outcome, and a failing (new) test is quick to spot.

@greenatatlassian
Contributor Author

Thanks everyone for the great discussion.

@spolti Thanks for the context. A few questions:

> We had a set of images that all used the same shared modules, and a given test would be run against a few images but not others. The way we handled this was by running only the tests whose annotation matched the image 'name:tag' defined in the image.yaml.

If I understand you correctly, you have a repository containing shared modules, but those modules are not used in all images, and you want to make sure that tests for modules not included in a certain image are not run? This is certainly the behaviour I would expect as well; i.e. if I haven't used a module in an image, I wouldn't want that module's tests to run against the image (they would presumably fail anyway). However, at least in the current implementation, I don't see why that requires the tagging.

Before running the tests, CEKit creates a temporary directory and copies in only the modules that are needed to build that image. So there is no need to filter out tests: tests for modules not used in the image are never present in the directory that behave searches. (Maybe my understanding of this bit of the code is wrong; it's been a while since I read it closely.)

Therefore, if we go with "untagged means always run" as @jmtd suggests, then I don't think that should really affect people even if they have other modules that aren't used in a given image hanging around. Also, presumably all existing tests are tagged (otherwise they wouldn't run) so running untagged tests shouldn't affect properly written existing test suites at all (assuming we can/do make the change only apply to untagged tests).

I may be able to put together a PR with this in the coming week or so, since we've just been bitten by this again. (We renamed some images and didn't notice that suddenly no tests were running; perhaps we should also throw an error if the test suite is empty?)

@greenatatlassian
Contributor Author

greenatatlassian commented Jan 10, 2023

Just realised this is very closely related/almost a dupe of #421

@rnc
Contributor

rnc commented Jan 18, 2023

I agree that #421, which @goldmann filed, does sound similar, and that a pattern of running all untagged tests automatically would be straightforward.
Regarding throwing an error if the test suite is empty: I'm not sure about that; perhaps a warning? Thinking of other systems, e.g. JUnit doesn't throw an error if no tests are run.

@spolti
Contributor

spolti commented Jan 24, 2023

Hi @greenatatlassian

> Thanks everyone for the great discussion.
>
> @spolti Thanks for the context. A few questions:
>
> > We had a set of images that all used the same shared modules, and a given test would be run against a few images but not others. The way we handled this was by running only the tests whose annotation matched the image 'name:tag' defined in the image.yaml.
>
> If I understand you correctly, you have a repository containing shared modules, but those modules are not used in all images, and you want to make sure that tests for modules not included in a certain image are not run? This is certainly the behaviour I would expect as well; i.e. if I haven't used a module in an image, I wouldn't want that module's tests to run against the image (they would presumably fail anyway). However, at least in the current implementation, I don't see why that requires the tagging.

We do not add tests per module; instead we have a tests directory that contains all the tests for all the images we have.
What I mean is that we control which features are executed via the annotations: only features whose annotation matches the image name are run. Our normal flow is: build the image, then run the behave tests.

E.g. for a test that needs to run on all images, today we do it this way: https://github.com/kiegroup/kogito-images/blob/main/tests/features/common.feature
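To illustrate the pattern, such a "common" feature has to stack one tag per image it applies to, roughly like this (image names invented; see the linked common.feature for the real list):

```gherkin
# Today: one tag per image the shared feature should run against
@acme/image-a
@acme/image-b
@acme/image-c
Feature: Common checks shared by all images

  Scenario: OS release metadata is present
    When container is ready
    Then file /etc/os-release should exist
```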

But I would agree that no annotation on this one would be clearer and easier to control :)
You can explore the repo to see how it is structured.

Hope this helps, thanks.

@rnc
Contributor

rnc commented Feb 8, 2023

@greenatatlassian If you do want to make a PR, I'd be happy to roll a release out with that included :-)
