Requirement for test tagging is not clear #778
Comments
If we can agree on a way forward here, I think someone from my team can create a PR for this. Our preference would be that we not require the image-name tag, since the developer of a module may not know the name of the image it will be used in. However, this might be considered a breaking change, so I'm not sure how everyone feels about it.
Another option, if we don't want to eliminate the tag, would be to enable some kind of universal tag.
Hi @greenatatlassian, thanks for opening this issue. At the beginning of this tool's development we were just starting to play with containers, and we addressed mainly our own use case. It is still working well for that use case, but I see and understand your point. I would stick with the option of adding a new tag that is triggered regardless of the image's 'name:tag'.
The pros are that it will continue to play well with our current test structure, needs no changes to existing tests on the user's side, and will run all features and scenarios. On the other hand, you can't target a specific feature or scenario like you could using the annotation. The second option, which IMHO could be a good fit as well, is to do as you suggest and run all tests/features for the image or a specific module, but targeting only the modules so it behaves very similarly to the Bats tests, like this example. Thanks.
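The universal-tag option above can be sketched as a small selection predicate. This is only an illustration of the proposed semantics, not cekit's actual code; the tag name `all` and the function name are hypothetical:

```python
def should_run(scenario_tags, image_name, universal_tag="all"):
    """Return True if a scenario should run for the given image.

    A scenario is selected when it carries the hypothetical
    universal tag, or when it is tagged with the image name.
    Untagged scenarios stay skipped, matching current behaviour.
    """
    return universal_tag in scenario_tags or image_name in scenario_tags


# Tagged with the image name: runs for that image only.
assert should_run({"myorg/myimage"}, "myorg/myimage")
assert not should_run({"myorg/other"}, "myorg/myimage")
# Universal tag: runs regardless of the image's name:tag.
assert should_run({"all"}, "myorg/myimage")
# Untagged: still skipped, as today.
assert not should_run(set(), "myorg/myimage")
```

This keeps existing image-name tags working unchanged, which is why it would need no changes to existing test suites.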
I think the best approach is: (1) don't require an image name on tests, and run these by default. We do tend to share and reuse tests across different images that have quite a bit in common. I understand the above would break older images, and I'm not sure there is a good way of preserving backwards compatibility, though perhaps introducing a new tester (behave2, or something) and creating the new semantics there might be useful.
Thanks @spolti for the historic context. Indeed, besides having shared and unshared tests in one repository, we also had all the image descriptors in the same repository. We've since moved away from that.

Personally, I think we should go hard for a change to "untagged means always run", rather than something short of that. I'd rather live with a short-term transition period than the long-term legacy of adding more flags, or more semantics, to cekit, although I like @luck3y's suggestion of a universal tag.

The work needed for users to transition from the old to the new behaviour is largely a one-shot: adding tags to scenarios or features. It can be done prior to this behavioural change (explicitly tagging all relevant tests with image names will work with both the old and new semantics). Worst case, the breakage observed will be running too many tests, not too few, which is the better outcome, and a failing (new) test is quick to spot.
Thanks everyone for the great discussion. @spolti, thanks for the context. A few questions:
If I understand you correctly, you have a repository containing shared modules, but those modules are not used in all images, and you want to make sure that tests for modules not included in a certain image are not run? This is certainly the behaviour I would expect as well: if I haven't used a module in an image, I wouldn't want that module's tests to run against the image (they would presumably fail anyway).

However, at least in the current implementation, I don't see why that requires tagging. Before running the tests, CEKit creates a temporary directory and copies in only the modules needed to build the image. So there is no need to filter out tests: no tests for modules unused by the image are present in the directory that behave searches. (Maybe my understanding of this bit of the code is wrong; it's been a while since I read it closely.)

Therefore, if we go with "untagged means always run" as @jmtd suggests, I don't think it should really affect people even if they have other modules hanging around that aren't used in a given image. Also, presumably all existing tests are tagged (otherwise they wouldn't run), so running untagged tests shouldn't affect properly written existing test suites at all (assuming we can and do make the change apply only to untagged tests).

I may be able to put together a PR for this in the coming week or so, since we've just been bitten by it again. (We renamed some images and didn't notice that suddenly no tests were running. Perhaps we should also throw an error if the test suite is empty?)
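The "untagged means always run" semantics, together with the empty-suite check floated above, could look roughly like the following. This is a sketch of the proposal, not cekit's actual implementation; all names and the error message are illustrative:

```python
def should_run(scenario_tags, image_name):
    # Proposed semantics: untagged scenarios always run;
    # tagged scenarios run only when a tag matches the image name.
    return not scenario_tags or image_name in scenario_tags


def select_scenarios(scenarios, image_name):
    # scenarios: iterable of (name, tag_set) pairs.
    selected = [name for name, tags in scenarios
                if should_run(tags, image_name)]
    if not selected:
        # Fail loudly instead of silently skipping everything,
        # e.g. after an image rename leaves stale tags behind.
        raise RuntimeError("No tests selected for image %s" % image_name)
    return selected


suite = [("untagged smoke test", set()),
         ("old image test", {"myorg/oldname"})]
# After a rename, only the untagged test survives under the new rules.
assert select_scenarios(suite, "myorg/newname") == ["untagged smoke test"]
```

Under the old semantics the same rename would have silently selected nothing, which is exactly the failure mode described above.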
Just realised this is very closely related to, and almost a duplicate of, #421
I agree that #421, which @goldmann entered, does sound similar, and that a pattern of running all untagged tests automatically would be straightforward.
We do not add tests per module; instead we have a tests directory that contains all the tests for all the images we have. E.g. for a test that needs to run on all images, today we do it this way: https://github.com/kiegroup/kogito-images/blob/main/tests/features/common.feature But I would agree that no annotation on this one would be clearer and easier to control :) Hope this helps, thanks.
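To make a shared feature like the linked `common.feature` run on every image today, each image name has to be listed as a tag; under the proposed change the tag line could simply be dropped. A minimal sketch, with made-up image names and step wording (not taken from the actual kogito feature file):

```gherkin
# Current behaviour: the feature runs only for the images tagged here,
# so every new image must be added to this line by hand.
@myorg/image-a @myorg/image-b
Feature: Common checks shared by all images
  Scenario: Container starts
    # Step wording is illustrative; real steps come from the
    # behave test steps library used with cekit.
    ...
```

With "untagged means always run", the `@myorg/...` tag line above would no longer be needed for tests intended to run everywhere.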
@greenatatlassian If you do want to make a PR, I'd be happy to roll a release out with that included :-) |
Describe the bug

Tests must be tagged (e.g. with the image name) in order for them to be run by `cekit test behave`. This requirement is not clear from the testing documentation.

To reproduce

1. Write untagged tests in a `tests/features` directory alongside the `image.yaml` file.
2. Run `cekit test behave`. Tests are not run (note in the log output that all tests are skipped).

Note: tests are run if they are explicitly called out by name, e.g. `cekit test behave --name "Test name"`.

Expected behavior

I would expect cekit to run all untagged tests in the `tests/features` directory next to the `image.yaml` on that image when calling `cekit test behave`.

Only after reading the source code for `behave_runner.py` and observing the verbose output was I able to discover why my tests weren't being run. After doing that, I realised that the documentation for test tagging suggests that tests need to be tagged with the image name. It would be nice if this requirement were called out front and centre at the top of the testing documentation and included in the examples. Or better yet, simply don't require the tag: since the tests are already organised into directories next to either images or modules, it should always be clear which image/module a given set of tests belongs to without needing the tag.
and observing the verbose output was I able to discover why my tests weren't being run. After doing that, I realised that the documentation for test tagging suggests that tests need to be tagged with the image name. It would be nice if this requirement were called out front and centre at the top of the testing documentation and included in the examples. Or better yet, simply not require the tag—since the tests are already organised into directories next to either images or modules, it should always be clear which image/module a given set of tests belongs to without needing the tag.The text was updated successfully, but these errors were encountered: