Start service when testing #50

Open
rudigiesler opened this issue Oct 27, 2015 · 0 comments
This issue is to address the concerns raised in #19. The main issue is that we don't call startService during testing. Because startService is not called, Twisted skips code that would normally run outside of testing, which makes our tests less trustworthy than they could be.
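A minimal sketch of the gap (hypothetical, stripped-down class names, not this project's actual code): a test that builds the service but never calls startService skips whatever setup happens there, while a test that calls it exercises that code.

```python
# Hypothetical stand-in for a Twisted-style service; the real
# classes in this project differ, but the startService pattern
# (setup runs only when the service is started) is the same.
class ChannelService:
    def __init__(self):
        self.running = False
        self.connection = None

    def startService(self):
        # Setup that only happens on a real start -- exactly the
        # code that is skipped when tests never call startService.
        self.running = True
        self.connection = "amqp-connection"


def test_without_start():
    service = ChannelService()
    # The connection setup never ran, so the test exercises less code.
    assert service.connection is None


def test_with_start():
    service = ChannelService()
    service.startService()
    assert service.running
    assert service.connection == "amqp-connection"


test_without_start()
test_with_start()
```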

The purpose of this issue is to propose solutions, discuss them, and finally implement one. The main solution discussed so far is to replace the default WorkerCreator with one that launches workers against a fake AMQP connection. There are multiple ways we could go about this:

  • Stash the WorkerCreator on the API, and have the API pass it to the Channel every time a channel is created. This would also involve modifying the service to not create an API if it already has one, because we would need to create our own API and replace the default WorkerCreator with our test one.
  • Stash the WorkerCreator on the service, since the Channel already has access to the service when it starts or stops a channel, in order to add or remove itself as a child. A default WorkerCreator would be created on the service, so we could call startService and then simply replace the WorkerCreator before starting any channels.
  • Specify the WorkerCreator class on the Channel as a class variable, and swap that variable for our testing WorkerCreator before running the tests, making sure to clean up properly afterwards.

Note that with any of these methods there may be rabbit holes that only surface during implementation, since some code (especially the tests) may assume things work the way they currently do.
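The second option can be sketched roughly as follows (all names here are hypothetical illustrations, not this project's real classes): the service owns a default WorkerCreator, startService runs as normal, and the test swaps in a fake creator before any channel starts.

```python
class WorkerCreator:
    """Sketch of the default creator used in production."""
    def create_worker(self, config):
        raise RuntimeError("would connect to a real AMQP broker")


class FakeAMQPWorkerCreator:
    """Test double: creates workers bound to fake AMQP machinery."""
    def create_worker(self, config):
        return ("fake-worker", config)


class Service:
    def __init__(self):
        # Default creator stashed on the service (option 2).
        self.worker_creator = WorkerCreator()
        self.running = False

    def startService(self):
        self.running = True


class Channel:
    def __init__(self, service):
        # The channel already holds the service for parent/child
        # bookkeeping, so it can reach the creator through it.
        self.service = service

    def start(self):
        return self.service.worker_creator.create_worker({"id": 1})


service = Service()
service.startService()  # startService now runs during tests
service.worker_creator = FakeAMQPWorkerCreator()  # swap before channels start
worker = Channel(service).start()
assert worker == ("fake-worker", {"id": 1})
```

The appeal of this layout is that the swap is a single attribute assignment after startService, with no need to construct a replacement API object.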
