In floating-point arithmetic, doing an operation in double precision and then rounding to float usually gives the correctly rounded float result. One exception is fused multiply-add (fma), where "a * b + c" is computed with a single rounding; this requires the equivalent of extra intermediate precision inside the operation. If a float fma is implemented as a double fma rounded to float, then for some well-chosen arguments whose exact result is near a halfway case in *float*, an incorrect answer is computed due to double rounding. In more detail, the double result rounds up and the cast to float rounds up again, whereas a single rounding of the exact answer to float would round up only once.
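To make the two roundings concrete, here is a minimal Java sketch of the double-rounded approach described above; the method name fmaViaDouble is made up for illustration and is not the JDK's code.

    // Illustration only: a float fma built on top of the double fma.
    // Math.fma(double, double, double) rounds the exact a*b + c once to double;
    // the cast to float then rounds a second time.  When the exact result lies
    // very close to a halfway point between adjacent floats, these two roundings
    // can land on a different float than a single correct rounding would.
    static float fmaViaDouble(float a, float b, float c) {
        return (float) Math.fma((double) a, (double) b, (double) c);
    }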
The new float fma implementation does the exact arithmetic using BigDecimal where possible, with guards to handle the non-finite and signed-zero IEEE 754 details.

-------------

Commit messages:
 - 8253409: Double-rounding possibility in float fma

Changes: https://git.openjdk.java.net/jdk/pull/2684/files
 Webrev: https://webrevs.openjdk.java.net/?repo=jdk&pr=2684&range=00
  Issue: https://bugs.openjdk.java.net/browse/JDK-8253409
  Stats: 31 lines in 2 files changed: 3 ins; 11 del; 17 mod
  Patch: https://git.openjdk.java.net/jdk/pull/2684.diff
  Fetch: git fetch https://git.openjdk.java.net/jdk pull/2684/head:pull/2684

PR: https://git.openjdk.java.net/jdk/pull/2684
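Regarding the BigDecimal approach mentioned in the description, a rough sketch of the idea follows; it is an illustration rather than the exact code in the pull request, and the IEEE 754 guards are only indicated by a comment.

    import java.math.BigDecimal;

    // Sketch of the single-rounding idea.  float-to-double widening is exact,
    // the BigDecimal(double) constructor, multiply, and add are all exact, and
    // BigDecimal.floatValue() performs the one and only rounding to float.
    static float fmaSingleRounding(float a, float b, float c) {
        // Guards for NaN, infinities, and the sign of zero would go here,
        // since BigDecimal cannot represent those IEEE 754 values.
        BigDecimal product = new BigDecimal((double) a).multiply(new BigDecimal((double) b));
        return product.add(new BigDecimal((double) c)).floatValue();
    }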
Joe Darcy has updated the pull request incrementally with one additional commit since the last revision:

  Add a jtreg run command to disable any fma intrinsic so the Java code is tested.

-------------

Changes:
  - all: https://git.openjdk.java.net/jdk/pull/2684/files
  - new: https://git.openjdk.java.net/jdk/pull/2684/files/9d26b312..ee2ea23a

Webrevs:
 - full: https://webrevs.openjdk.java.net/?repo=jdk&pr=2684&range=01
 - incr: https://webrevs.openjdk.java.net/?repo=jdk&pr=2684&range=00-01

Stats: 1 line in 1 file changed: 1 ins; 0 del; 0 mod
Patch: https://git.openjdk.java.net/jdk/pull/2684.diff
Fetch: git fetch https://git.openjdk.java.net/jdk pull/2684/head:pull/2684

PR: https://git.openjdk.java.net/jdk/pull/2684
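For readers unfamiliar with jtreg, the extra run line mentioned in the commit forces an othervm run with the HotSpot fma intrinsics switched off, so the pure-Java implementation is actually exercised. A sketch of what such a tag might look like in the test's jtreg header; the test name and the exact flag are assumptions here, not copied from the patch:

     * @run main FusedMultiplyAddTests
     * @run main/othervm -XX:-UseFMA FusedMultiplyAddTests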
On Tue, 23 Feb 2021 07:00:07 GMT, Joe Darcy <[hidden email]> wrote:
> Joe Darcy has updated the pull request incrementally with one additional commit since the last revision:
>
>   Add a jtreg run command to disable any fma intrinsic so the Java code is tested.

Looks fine. Presumably the updated test fails without the source change.

-------------

Marked as reviewed by bpb (Reviewer).

PR: https://git.openjdk.java.net/jdk/pull/2684
On Tue, 23 Feb 2021 19:11:06 GMT, Brian Burkhalter <[hidden email]> wrote:
> Looks fine. Presumably the updated test fails without the source change.

Right; the added test case is the failing one from the bug report. It will fail if the old non-intrinsic implementation, that is, the Java implementation, is used.

-------------

PR: https://git.openjdk.java.net/jdk/pull/2684
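Since the bug report's specific failing arguments are not reproduced in this thread, a rough way to look for such cases is to compare the double-rounded path against the exact BigDecimal path over random finite floats; any reported difference is a double-rounding error. The class name and seed below are made up for illustration, and hits are rare with purely random inputs.

    import java.math.BigDecimal;
    import java.util.Random;

    public class FmaDoubleRoundingSearch {
        public static void main(String[] args) {
            Random r = new Random(42);
            for (int i = 0; i < 1_000_000; i++) {
                float a = Float.intBitsToFloat(r.nextInt());
                float b = Float.intBitsToFloat(r.nextInt());
                float c = Float.intBitsToFloat(r.nextInt());
                if (!Float.isFinite(a) || !Float.isFinite(b) || !Float.isFinite(c))
                    continue;  // BigDecimal cannot represent NaN or infinities
                float viaDouble = (float) Math.fma((double) a, (double) b, (double) c);
                float exact = new BigDecimal((double) a)
                        .multiply(new BigDecimal((double) b))
                        .add(new BigDecimal((double) c))
                        .floatValue();
                // != treats +0.0 and -0.0 as equal, so only rounding differences
                // (not the signed-zero detail) are reported.
                if (viaDouble != exact) {
                    System.out.println("double rounding: fma(" + a + ", " + b + ", " + c
                            + ") -> " + viaDouble + " vs " + exact);
                }
            }
        }
    }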
On Tue, 23 Feb 2021 03:58:46 GMT, Joe Darcy <[hidden email]> wrote:
This pull request has now been integrated.

Changeset: e5304b3a
Author:    Joe Darcy <[hidden email]>
URL:       https://git.openjdk.java.net/jdk/commit/e5304b3a
Stats:     32 lines in 2 files changed: 4 ins; 11 del; 17 mod

8253409: Double-rounding possibility in float fma

Reviewed-by: bpb

-------------

PR: https://git.openjdk.java.net/jdk/pull/2684