Make inc() flush subnormals to zero when DAZ is enabled
Previously the smallest negative normalized floating-point value would
become a negative subnormal when calling inc() on it, regardless of
whether denormals-are-zero is enabled. Calling inc() again on that
result would then produce the smallest positive normalized value.
This change instead flushes the subnormal to zero if DAZ is enabled.
This is useful when iterating over a range and we want to know whether
the iterator value is an integer. A single-precision subnormal would
compare equal to zero, but when cast to a double it would no longer be
subnormal, and thus produce a result different from when zero is used
as input.
Note that this change entangles DAZ and FTZ, in the sense that FTZ
behavior is achieved when DAZ is enabled. Thus we expect either both to
be enabled, or neither.
Bug: b/169904252
Change-Id: I1204466cd1793cb9e3011c3549f672faf25d9ddc
Reviewed-on: https://swiftshader-review.googlesource.com/c/SwiftShader/+/63168
Kokoro-Result: kokoro <noreply+kokoro@google.com>
Tested-by: Nicolas Capens <nicolascapens@google.com>
Reviewed-by: Alexis Hétu <sugoi@google.com>
diff --git a/src/System/Math.hpp b/src/System/Math.hpp
index bafa046..19f45b7 100644
--- a/src/System/Math.hpp
+++ b/src/System/Math.hpp
@@ -397,7 +397,11 @@
x1 += (x1 >= 0) ? 1 : -1;
}
- return bit_cast<float>(x1);
+ float y = bit_cast<float>(x1);
+
+ // If we have a value which compares equal to 0.0, return 0.0. This ensures
+ // subnormal values get flushed to zero when denormals-are-zero is enabled.
+ return (y == 0.0f) ? +0.0f : y;
}
} // namespace sw