flaky test: potentially flaky distance::dot::tests::test_dot_f32 #2243
```rust
use num_traits::{AsPrimitive, Float};

// Accuracy of the dot product depends on the size of the components
// of the vectors.
// Imagine that each `x_i` can vary by `є * |x_i|`; similarly for `y_i`.
// (That is, the stored value lies in `(1 ± є) * x_i`.)
// The error for a sum `x + y` is `є_x + є_y`; the error for a product
// `x * y` is `є_x * y + є_y * x`.
// See: https://www.geol.lsu.edu/jlorenzo/geophysics/uncertainties/Uncertaintiespart2.html
// So the product of `x_i` and `y_i` can vary by
// `(є * |x_i|) * |y_i| + (є * |y_i|) * |x_i|`, which simplifies to
// `2 * є * |x_i| * |y_i|`.
// The error for the sum of all the products is therefore
// `2 * є * sum(|x_i| * |y_i|)`.
fn max_error<T: Float + AsPrimitive<f64>>(x: &[f64], y: &[f64]) -> f32 {
    let dot = x
        .iter()
        .cloned()
        .zip(y.iter().cloned())
        .map(|(x, y)| x.abs() * y.abs())
        .sum::<f64>();
    (2.0 * T::epsilon().as_() * dot) as f32
}
```

Actually, `T::epsilon()` is a constant. However, in IEEE 754 the precision of a floating-point representation is not constant, and it is not linear in the value's magnitude either: for values close to 1 the spacing between consecutive representable numbers is small (i.e., they have high precision), while for very large values the gap between consecutive representable numbers can be quite large (i.e., the precision is lower). The reason is that the significand has a fixed number of bits, so the absolute spacing between neighboring floats (one ULP) scales with the value's exponent.
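The magnitude-dependence of the spacing can be checked directly. The sketch below is not from the test; `ulp` is a hypothetical helper that measures the gap to the next representable `f32` by bumping the bit pattern:

```rust
// Gap between a positive finite f32 and the next representable f32,
// computed by incrementing the raw bit pattern (IEEE 754 binary32
// neighbors differ by 1 in their bit encoding).
fn ulp(x: f32) -> f32 {
    assert!(x.is_finite() && x > 0.0);
    f32::from_bits(x.to_bits() + 1) - x
}

fn main() {
    // Near 1.0 the spacing is 2^-23, exactly f32::EPSILON.
    println!("ulp(1.0) = {:e}", ulp(1.0));
    // Near 1e6 (exponent 19) the spacing is 2^(19-23) = 0.0625,
    // roughly half a million times larger.
    println!("ulp(1e6) = {:e}", ulp(1.0e6));
}
```

This is why a tolerance derived from a single `T::epsilon()` can be too tight for vectors with large components: the achievable accuracy degrades with the magnitude of the intermediate sums, not just with the machine epsilon.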
https://github.com/lancedb/lance/actions/runs/8794212752/job/24133303595