Reduced performance with scaled-down data #14

Open · uricohen opened this issue Nov 9, 2023 · 2 comments

Comments

uricohen commented Nov 9, 2023

I'm a very happy user of Clarabel and have now moved away from all my previous choices (ECOS, SCS, quadprog).

I am using it from Python, through the CVXPY API and qpsolvers, to solve large-scale problems, e.g. 256 variables and 100K linear equality and inequality constraints.

I have now run into an issue where scaling the problem data by a factor of 100 changes the results considerably. Clarabel seems to work well when the data mean is of order 1, but its performance degrades considerably when the data is 100 times smaller, even though the two problems are equivalent.
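
For concreteness, the kind of comparison I am describing looks roughly like the sketch below (toy random data and smaller dimensions stand in for my actual problem; it assumes a recent CVXPY with the built-in Clarabel interface):

```python
import cvxpy as cp
import numpy as np

# Toy stand-in for the real problem (the real one has ~100K constraints).
rng = np.random.default_rng(0)
n, m_eq, m_ineq = 256, 100, 5_000
x0 = rng.standard_normal(n)                      # a known feasible point
P = np.eye(n)
q = rng.standard_normal(n)
A = rng.standard_normal((m_eq, n));   b = A @ x0
G = rng.standard_normal((m_ineq, n)); h = G @ x0 + 1.0

def solve(scale):
    # Multiplying all data by a positive constant leaves the problem equivalent:
    # the feasible set and the argmin are unchanged.
    x = cp.Variable(n)
    objective = cp.Minimize(0.5 * cp.quad_form(x, scale * P) + (scale * q) @ x)
    constraints = [(scale * A) @ x == scale * b, (scale * G) @ x <= scale * h]
    cp.Problem(objective, constraints).solve(solver=cp.CLARABEL)
    return x.value

x_unit  = solve(1.0)    # data of order 1: solves accurately
x_small = solve(0.01)   # same problem, 100x smaller data: noticeably worse result
print(np.linalg.norm(x_unit - x_small, np.inf))
```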

Is it a tolerance issue?
Should I scale the data myself?
What's your recommendation on this issue?

Well done, and best wishes.

@goulart-paul (Member) commented:

It is hard to say without seeing an example, but it is reasonable to guess that it is related to tolerances. Which tolerances are to blame is harder to say, though. We would be interested to see a test case if you have one you can share.

When you say "performance is reduced considerably", do you mean that the solver does not converge to full accuracy, or that it requires more iterations, or both / something else?

We do internal data scaling on $P$ and $A$ only, in an attempt to improve conditioning of the KKT matrix that we factor at every iteration. That doesn't take into account the scaling of the linear terms though, i.e. the linear part of the cost or RHS of the constraints.
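
For intuition, the kind of scaling we mean is sketched below in the style of Ruiz equilibration; this is a simplified illustration rather than our actual implementation, and note that the linear terms never enter it:

```python
import numpy as np

def ruiz_equilibrate(P, A, iters=10, eps=1e-8):
    """Simplified Ruiz-style equilibration of the KKT blocks [[P, A.T], [A, 0]].

    Returns diagonal scalings d (variables) and e (constraints) so that the
    scaled blocks D P D and E A D have rows/columns of roughly unit magnitude.
    The linear cost and the constraint RHS are never touched here.
    """
    n, m = P.shape[0], A.shape[0]
    d, e = np.ones(n), np.ones(m)
    for _ in range(iters):
        Ps = d[:, None] * P * d[None, :]        # D P D
        As = e[:, None] * A * d[None, :]        # E A D
        # inf-norms of the KKT columns associated with x and with y
        col_x = np.maximum(np.abs(Ps).max(axis=0), np.abs(As).max(axis=0))
        col_y = np.abs(As).max(axis=1)
        d /= np.sqrt(np.maximum(col_x, eps))
        e /= np.sqrt(np.maximum(col_y, eps))
    return d, e
```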

As an example of the type of issue this might cause: if you had a very large LP, say, then very small linear terms could produce poor performance because the duality gap would be very small (because the objective itself is very small) relative to the absolute duality gap tolerance `tol_gap_abs`. If this is what is happening, then scaling up all of the objective terms (making them all norm 1, say) could help.
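
As a rough sketch of that workaround (made-up data; CVXPY should pass unrecognized solver keyword arguments through to our settings, but please check against your installed versions):

```python
import cvxpy as cp
import numpy as np

# LP with a deliberately tiny linear cost, standing in for the situation above.
rng = np.random.default_rng(0)
n = 256
q = 1e-2 * rng.standard_normal(n)      # very small objective terms
x0 = rng.standard_normal(n)
A = rng.standard_normal((100, n));  b = A @ x0

x = cp.Variable(n)
scale = 1.0 / np.linalg.norm(q)        # bring the objective up to roughly norm 1
prob = cp.Problem(cp.Minimize((scale * q) @ x),
                  [A @ x == b, cp.norm(x, "inf") <= 10])

# Optionally also tighten the absolute gap tolerance via a solver keyword.
prob.solve(solver=cp.CLARABEL, tol_gap_abs=1e-12)
print(prob.status, prob.value / scale)  # divide back to recover the original objective
```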

@uricohen (Author) commented:

I will try to create a minimal reproduction of the issue. If you would like to close this until I have one, please go ahead.
