## A Comparative Study of Incremental Constraint Solving Approaches in Symbolic Execution

The stack-based approach is often much faster than the cache-based one. They use KLEE. There are three alternatives to incremental solving:

*First alternative* to incremental solving: solve only the parts of the constraint that changed relative to the original, reusing the already-computed solution for variables from the unchanged parts. But this works *only if paths are explored in order (depth-first search) and not all variables are dependent*. *Second alternative*: cache the solution of every independent expression observed in every path constraint; this costs memory overhead and time. Used by KLEE. *Third alternative*: use the SMT solver's built-in support for solving similar constraints (CVC4, MathSAT5, Yices, and Z3 learn lemmas that can be reused).
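The second alternative can be sketched as follows. This is a minimal stdlib-only toy, not KLEE's actual implementation: a path constraint is split into independent factors (conjuncts sharing no variables), and a satisfying assignment is cached per factor; the constraint representation, the variable sets, and the `solve` callback are all hypothetical stand-ins.

```python
# Toy sketch of the cache-based alternative: split a path constraint
# into independent factors (conjuncts that share no variables), then
# cache a satisfying assignment per factor so repeated factors never
# hit the solver again. Not KLEE's real data structures.

def independent_parts(conjuncts):
    """Group (constraint, variable-set) pairs into factors with disjoint variables."""
    groups = []
    for c, vars_ in conjuncts:
        hits = [g for g in groups if g["vars"] & vars_]
        merged = {"vars": set(vars_), "cs": [c]}
        for g in hits:                 # merge every group this conjunct touches
            merged["vars"] |= g["vars"]
            merged["cs"] += g["cs"]
            groups.remove(g)
        groups.append(merged)
    return groups

cache = {}                             # factor -> cached satisfying assignment

def solve_with_cache(conjuncts, solve):
    """Solve each independent factor, reusing cached assignments where possible."""
    assignment = {}
    for g in independent_parts(conjuncts):
        key = frozenset(g["cs"])
        if key not in cache:
            cache[key] = solve(g["cs"])    # only uncached factors reach the solver
        assignment.update(cache[key])
    return assignment
```

With a path constraint like `x>0 ∧ y<5 ∧ x<10`, the two `x` conjuncts form one factor and `y<5` another, so a later path that shares the `y` factor gets it for free from the cache.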

Their tests are real and randomly generated programs (using RUGRAT). The stack-based approach gives a **5× speedup**, but they want to improve it; their own improvement is a 1.11× speedup. SE uses solving to (1) check path feasibility and (2) generate inputs. Only CVC4, MathSAT, and Z3 are incremental solvers (they provide an assertion stack via the `push` and `pop` commands). Note that `(exit)` destroys the solver context, including lemmas learned in previous computations.
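The assertion-stack interface can be modeled in a few lines. This is a sketch of the *interface* that incremental solvers expose through `push`/`pop`, not of any real solver's internals: `pop` discards only the assertions added since the matching `push`, so the shared prefix of two sibling paths stays asserted (and, in a real solver, its learned lemmas stay usable), whereas `exit` would throw everything away.

```python
# Minimal model of an SMT assertion stack (`push`/`pop`), as provided
# by CVC4, MathSAT, and Z3. Assertions are plain strings here; a real
# solver would also keep learned lemmas alive across `pop`.

class AssertionStack:
    def __init__(self):
        self.scopes = [[]]            # one list of assertions per scope

    def add(self, assertion):
        self.scopes[-1].append(assertion)

    def push(self):
        self.scopes.append([])        # open a new backtracking scope

    def pop(self):
        self.scopes.pop()             # drop only the newest scope

    def assertions(self):
        return [a for scope in self.scopes for a in scope]

s = AssertionStack()
s.add("x > 0")           # shared prefix of two sibling paths
s.push()
s.add("y > x")           # first path's extra constraint
first = s.assertions()   # ["x > 0", "y > x"]
s.pop()                  # backtrack; the prefix survives
s.add("y <= x")          # second path reuses the prefix
second = s.assertions()  # ["x > 0", "y <= x"]
```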

They just eliminate common subexpressions from input constraints. First, they claim SE converts the fragment `a = x + y; if (a + z) > 10` to `x + y + z > 10`, but that is not true. And their optimization is just introducing intermediate variables for that — all our tools already do this. Does any tool not? OK, they also rename variables *before* `push` to get more shared assertions. They tested both bitvector and integer constraints.
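The intermediate-variable idea looks roughly like this. A toy string-based sketch only — real tools rewrite the constraint AST, and the `t0`-style temporary names are my invention, not the paper's: every subexpression that occurs more than once is bound to a fresh variable so the solver sees it a single time.

```python
# Toy common-subexpression elimination over constraint strings:
# parenthesized subexpressions that repeat are bound to fresh
# intermediate variables (t0, t1, ...). String matching stands in
# for real AST rewriting.
import re
from collections import Counter

def eliminate_cse(constraints):
    counts = Counter()
    for c in constraints:
        counts.update(re.findall(r"\([^()]+\)", c))
    # only subexpressions appearing more than once are worth naming
    bindings = {e: f"t{i}"
                for i, e in enumerate(sorted(e for e, n in counts.items() if n > 1))}
    rewritten = []
    for c in constraints:
        for e, t in bindings.items():
            c = c.replace(e, t)
        rewritten.append(c)
    defs = [f"{t} = {e}" for e, t in bindings.items()]
    return defs + rewritten
```

For `["(x + y) > 10", "(x + y) < z"]` this yields one definition `t0 = (x + y)` plus the two rewritten constraints, which is exactly the "intermediate variables" form the notes say existing tools already produce.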

If we send a new query to the solver each time, more time is spent on expression construction (not on parsing). The main problem is not avoiding text-trace parsing but how to reuse the solver's internal state for a new query. Their technique spends extra time renaming variables. They use static source-code transformations to produce SSA form, plus inlining and loop unrolling. Linearization is significantly more expensive than SSA. Oh, and they don't investigate how KLEE's optimizations interfere with theirs. See the Green solver infrastructure.
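The SSA renaming step can be illustrated on straight-line code. A minimal sketch under strong assumptions — straight-line assignments only, identifier-level regex renaming, no phi nodes at control-flow joins, and version suffixes (`x1`, `x2`, …) of my own choosing rather than the paper's notation:

```python
# Toy SSA conversion for straight-line assignments: each assignment
# creates a fresh version of its target, so reassignment becomes a
# distinct symbol for the solver. Real SSA additionally needs phi
# nodes where control flow merges.
import re

def to_ssa(statements):
    version = {}                      # current version number per variable

    def use(name):
        # unassigned variables read as version 0 (an assumption of this toy)
        return f"{name}{version.get(name, 0)}"

    out = []
    for stmt in statements:
        target, expr = (s.strip() for s in stmt.split("=", 1))
        # rename right-hand-side uses to their current versions
        expr = re.sub(r"[a-zA-Z_]\w*", lambda m: use(m.group()), expr)
        version[target] = version.get(target, 0) + 1
        out.append(f"{use(target)} = {expr}")
    return out
```

So `x = 1; x = x + 1; y = x * 2` becomes `x1 = 1; x2 = x1 + 1; y1 = x2 * 2` — each name is assigned once, which is what makes the resulting constraints easy to push onto the solver's assertion stack.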

OK, it's interesting, but does it work for random search or only for depth-first search? Ask them about it.

Sergey Vartanov, 2007–2020