
Mixed Abstractions for Floating-Point Arithmetic


floating point

p. 1. Floating point is essential for avionics. Got it. The main problem with floating point is rounding (it makes addition non-associative, for example). Two existing approaches: (1) abstract interpretation, where the program is partially executed over an abstract domain such as real intervals (which I don't fully understand), and (2) proof assistants, tools that prove theorems about programs under human guidance. The authors instead encode floating-point operations as functions on bit-vectors and use SAT solvers, but a direct encoding has proven intractable in practice: the resulting formulas are very large and hard to solve. So they combine overapproximation and underapproximation of a formula A: if an overapproximation of A is unsat, then A is unsat (refined CEGAR-style when the result is inconclusive), and if an underapproximation of A is sat, then A is sat.

p. 2. Termination of the algorithm is guaranteed.
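The paper's approximations act on the bit-level encoding of the floating-point operations themselves; as a toy analogue of the loop (my own construction for illustration, not the paper's algorithm), here is a propositional version in which dropping clauses overapproximates the set of models and freezing variables underapproximates it:

```python
from itertools import product

def solve(clauses, n):
    """Brute-force SAT check over n boolean variables.
    A clause is a list of ints: +i is variable i, -i its negation."""
    for bits in product([False, True], repeat=n):
        if all(any(bits[abs(lit) - 1] != (lit < 0) for lit in c) for c in clauses):
            return bits  # a satisfying assignment
    return None

def check(clauses, n):
    """Alternate over- and underapproximations until one is conclusive."""
    for k in range(max(n, len(clauses)) + 1):
        # Overapproximation: drop all but the first k clauses. It has
        # more models than the original, so unsat here implies unsat.
        if solve(clauses[:k], n) is None:
            return "unsat"
        # Underapproximation: freeze variables k+1..n to False. It has
        # fewer models than the original, so sat here implies sat.
        if solve(clauses + [[-(i + 1)] for i in range(k, n)], n) is not None:
            return "sat"
        # Both inconclusive: refine by increasing k. At the last k both
        # approximations coincide with the original, so this terminates.

print(check([[1, 2], [-1], [-2]], 2))  # unsat
print(check([[1], [2]], 2))            # sat
```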

The rounding function can operate in (1) round-up or (2) round-down mode; by default it returns the nearest representable number. Every floating-point operation approximates its exact result: x +fp y = round(x + y). Each FP operation can be modeled as a formula in propositional logic, so a formula in floating-point arithmetic can be translated into a propositional formula and passed to a SAT solver.
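Both facts are easy to check in Python: Fraction gives the exact real value of a double, and converting an exact sum back to float rounds to the nearest representable number (the default binary64 rounding mode):

```python
from fractions import Fraction

x, y, z = 0.1, 0.2, 0.3

# x +fp y is exactly round(x + y): the machine sum equals the exact
# rational sum rounded back to the nearest double.
assert x + y == float(Fraction(x) + Fraction(y))

# Rounding makes addition non-associative.
print((x + y) + z)  # 0.6000000000000001
print(x + (y + z))  # 0.6
```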

p. 3. But (!) complexity is a bottleneck. In the encoding of addition/subtraction for double precision you need 1404 propositional variables for the align stage, 826 for the operation itself, and 2923 for rounding; multiplication/division needs about five times as many!
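For scale, each double is already a 64-bit vector (1 sign bit, 11 exponent bits, 52 significand bits), and the encoding must introduce fresh variables for the intermediate results of every stage. A small illustration of the bit-vector view the translation starts from:

```python
import struct

def double_bits(x):
    """Split a binary64 float into its (sign, exponent, significand)
    bit-fields: 1 + 11 + 52 = 64 bits per value."""
    bits = struct.unpack(">Q", struct.pack(">d", x))[0]
    return bits >> 63, (bits >> 52) & 0x7FF, bits & ((1 << 52) - 1)

print(double_bits(0.1))  # (0, 1019, 2702159776422298)
# Every FP operation is a function on these bit-fields, and it is
# this function that gets unrolled into propositional variables.
```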

p. 5. The paper is 8 pages; skipping the rest.
