
Constant Throughput Timer with shared algorithm generates wrong throughput when target throughput is large #6278

Open
onionzz opened this issue May 17, 2024 · 5 comments · May be fixed by #6280

Comments

@onionzz

onionzz commented May 17, 2024

Expected behavior

No response

Actual behavior

No response

Steps to reproduce the problem

For example,
1. When I set the target throughput to 30,000 req/min (500 TPS) or below, everything works fine. But when the target throughput is set to 40,000 (~667 TPS), the resulting throughput is still 500 TPS.
2. When the target throughput is set between 60,000 (1,000 TPS) and 150,000 (2,500 TPS), the resulting throughput is always 1,000 TPS.
3. When the target throughput is set beyond 150,000 (2,500 TPS), the resulting throughput can't be controlled and will be as high as if the Constant Throughput Timer were not enabled.

I think the cause may be in ConstantThroughputTimer.java:

```java
private static final double MILLISEC_PER_MIN = 60000.0;

double msPerRequest = MILLISEC_PER_MIN / getThroughput();

Math.round(msPerRequest)
```
I guess that when the target throughput is set to a large value, Math.round may produce the same fixed delay for different targets, so the resulting throughput stays the same across a range of target throughput settings.
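The rounding effect can be reproduced with a few lines of plain Java (just the arithmetic from the snippet above, not the actual JMeter code):

```java
public class RoundingDemo {
    private static final double MILLISEC_PER_MIN = 60000.0;

    public static void main(String[] args) {
        // Targets in requests per minute, matching the values reported above
        long[] targets = {30000, 40000, 60000, 150000, 200000};
        for (long t : targets) {
            double msPerRequest = MILLISEC_PER_MIN / t;
            long delayMs = Math.round(msPerRequest);
            System.out.println(t + " req/min => ideal delay " + msPerRequest
                    + " ms, rounded delay " + delayMs + " ms");
        }
    }
}
```

Both 30,000 and 40,000 round to a 2 ms delay (hence the 500 TPS plateau), 60,000 through 150,000 round to 1 ms (the 1,000 TPS plateau), and anything beyond 150,000 rounds to 0 ms, i.e. no throttling at all.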

JMeter Version

5.6.3

Java Version

No response

OS Version

No response

@FSchumacher
Contributor

Does this happen with all shared modes? I would think it is most problematic when calculateSharedDelay(ThroughputInfo, long) is used; the delay would be rounded too early there.
Those are the modes labeled (shared).

@FSchumacher
Contributor

And another question: how many threads did your thread group have?

@onionzz onionzz linked a pull request May 20, 2024 that will close this issue
@FSchumacher
Contributor

After looking a bit deeper here, I think millisecond resolution for the calculated delay is not enough when we aim for a high throughput rate with a low thread count.
For example, suppose we set 30,000 or 40,000 requests per minute as a target with one thread and use active-threads mode. Then the calculation for the two targets would be:

30,000 => 60,000/30,000 = 2 => rounded to 2
40,000 => 60,000/40,000 = 1.5 => rounded to 2

It doesn't change when we calculate the same with microseconds instead and still round to milliseconds at the end, as it would be:

30,000 => 60,000,000/30,000 = 2,000 => round to milliseconds => 2
40,000 => 60,000,000/40,000 = 1,500 => round to milliseconds => 2

Apart from this, it is probably still a good idea to change the resolution.
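To make the arithmetic above concrete: with one thread, the rounded delay alone determines the achievable rate, so both targets collapse onto the same effective throughput. A back-of-the-envelope sketch (hypothetical arithmetic, not the actual JMeter implementation):

```java
public class EffectiveRateDemo {
    public static void main(String[] args) {
        // One thread; the delay is rounded to whole milliseconds before use.
        for (long targetPerMin : new long[] {30000, 40000}) {
            long delayMs = Math.round(60000.0 / targetPerMin);
            // Effective rate implied by the rounded delay
            double effectivePerMin = 60000.0 / delayMs;
            System.out.println(targetPerMin + " req/min target => delay " + delayMs
                    + " ms => effective " + effectivePerMin + " req/min");
        }
    }
}
```

Both targets yield a 2 ms delay and therefore an effective 30,000 req/min (500 TPS), which matches the plateau reported in the issue.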

@onionzz
Author

onionzz commented May 21, 2024

Does this happen with all shared modes? I would think, that it is most problematic, when calculateSharedDelay(ThroughputInfo, long) is used. The delay would be rounded too early there. That would be the modes with (shared).

It happens with all shared modes. And I think you are right: the root cause is that the delay is rounded too early. Calculating with microseconds can't solve the problem thoroughly.

@onionzz
Author

onionzz commented May 21, 2024

After looking a bit deeper here, I think the resolution of milliseconds for the calculated delay is not enough, when we are trying a high throughput rate with a low thread count. For example, if we set 30,000 or 40,000 requests as a target with one thread and use active thread as the mode. Then the calculation for the two targets would be:

30,000 => 60,000/30,000 = 2 => rounded to 2
40,000 => 60,000/40,000 = 1.5 => rounded to 2

It doesn't change, when we calculate the same with microseconds instead and still round at the end, as it would be:

30,000 => 60,000,000/30,000 = 2,000 => round to milliseconds => 2
40,000 => 60,000,000/40,000 = 1,500 => round to milliseconds => 2

Apart from this, it is probably still a good idea to change the resolution.

Using microseconds may change the result of Math.max(now, nextRequestTime), which in turn affects the delay. But calculating with microseconds will still have the problem when the throughput rate is high enough.
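One possible direction (a hypothetical sketch, not necessarily what the linked PR does) is to keep the shared schedule, including the Math.max(now, nextRequestTime) step, in nanoseconds, and convert to a sleepable delay only once at the end. The class and field names here are illustrative, not JMeter's:

```java
import java.util.concurrent.TimeUnit;

// Hypothetical shared-delay calculation at nanosecond resolution.
// The per-request interval is computed once in nanos, so 40,000 req/min
// becomes exactly 1,500,000 ns instead of a 2 ms rounded delay.
class SharedDelaySketch {
    private long nextRequestTimeNanos = System.nanoTime();
    private final long nanosPerRequest;

    SharedDelaySketch(double throughputPerMin) {
        this.nanosPerRequest = Math.round(TimeUnit.MINUTES.toNanos(1) / throughputPerMin);
    }

    synchronized long calculateDelayMillis() {
        long now = System.nanoTime();
        // Schedule relative to the later of "now" and the previous slot, so an
        // idle period does not cause a burst of catch-up requests afterwards.
        nextRequestTimeNanos = Math.max(now, nextRequestTimeNanos) + nanosPerRequest;
        // Conversion to milliseconds (and its truncation) happens only here.
        return TimeUnit.NANOSECONDS.toMillis(Math.max(0, nextRequestTimeNanos - now));
    }
}
```

Because the rounding error no longer accumulates into the shared nextRequestTime, individual delays may still alternate between 1 ms and 2 ms for a 1.5 ms interval, but the long-run average converges to the target instead of plateauing.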
