## Green: Reducing, Reusing and Recycling Constraints in Program Analysis

1 They propose a persistent cache shared between SE runs, even across different programs and even different analyses. 2 Four phases: slicing, canonization, reuse, and translation. *Slicing* removes independent constraints; *canonization* renames variables and reorders terms. 3 They assume that SE invokes a SAT solver on every branch; I don't think that's true. Canonization turns semantic equivalence of formulae into syntactic equivalence. Slicing helps reuse because sliced constraints mention fewer variables. See DiSE path-sensitive regression analysis. Three different container implementations show strong reuse.
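A minimal end-to-end sketch of how the phases fit together. The function name `green`, the token-based slicing, and the string key are my own illustration under simplifying assumptions, not the paper's API:

```python
def green(constraints, target_vars, solver, cache):
    """Hypothetical sketch of the pipeline: slice, canonize, reuse.

    constraints: list of constraint strings, e.g. "x < y"
    target_vars: variables the current query depends on
    cache: any get/put store (a dict here; Redis in the paper)
    """
    # Slicing (crude): keep constraints mentioning a target variable.
    sliced = [c for c in constraints if set(c.split()) & target_vars]
    # Canonization (crude): a deterministic string key.
    key = " && ".join(sorted(sliced))
    # Reuse: call the solver only on a cache miss.
    if key not in cache:
        cache[key] = solver(key)  # returns "SAT" or "UNSAT"
    return cache[key]
```

A second run of the same path condition, even with constraints reordered, produces the same key and becomes a cache hit that never reaches the solver.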

Very good: 4 Any path condition can be split into a *known* part and an *unknown* part. In classic SE every branch introduces a new constraint. In DSE you negate one of the conditions and drop the ones that follow it. The unknown part does not depend on any of the known part. They construct a graph: vertices are variables, and edges indicate that two variables occur in the same constraint.
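The slicing step can be sketched with exactly that graph: union variables that co-occur in a constraint, then keep only the constraints connected to the variables of interest. The `(text, vars)` pair representation and the helper names are my assumptions, not the paper's data structures:

```python
def slice_pc(constraints, target_vars):
    """Keep only constraints transitively connected to target_vars.

    Each constraint is a (text, set_of_vars) pair -- a hypothetical
    representation for illustration.
    """
    parent = {}                        # union-find over variables
    def find(v):
        parent.setdefault(v, v)
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path compression
            v = parent[v]
        return v
    def union(a, b):
        parent[find(a)] = find(b)
    for _, vs in constraints:          # all variables of one constraint
        vs = list(vs)                  # end up in one component
        for a, b in zip(vs, vs[1:]):
            union(a, b)
    roots = {find(v) for v in target_vars}
    return [c for c in constraints if {find(v) for v in c[1]} & roots]

pc = [("x < y", {"x", "y"}),
      ("y <= 9", {"y"}),
      ("a == b + 1", {"a", "b"})]
print([c[0] for c in slice_pc(pc, {"x"})])  # the a/b constraint is gone
```

`y <= 9` survives even though it does not mention `x` directly, because it is connected to `x` through `y`: slicing is transitive over shared variables.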

Canonization: it works only for **linear arithmetic**; constraints are rewritten, with a lexicographic variable ordering, into the form `ax + by + cz + ... + k [==|!=|<=] 0`. 5 Their store is Redis, but any database with `get(key)` and `put(key, value)` would do (`key` is a string representation of the formula, `value` is just SAT/UNSAT). See Symbolic PathFinder. 7 Small changes to a program can dramatically reduce the potential for reuse. There is a spectrum of related programs (binary tree, tree map).
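A toy canonizer in that spirit, plus the get/put cache. The coefficient-dict representation and the `v0, v1, ...` renaming are my simplification of Green's actual rules:

```python
def canonize(coeffs, const, op):
    """Render a linear constraint as 'a*v0 + b*v1 + ... + k OP 0'.

    coeffs: {variable_name: coefficient}. Variables are sorted
    lexicographically and renamed v0, v1, ... so alpha-renamed
    formulae collapse to the same string (a sketch, not the
    paper's full canonization).
    """
    names = sorted(coeffs)
    rename = {n: f"v{i}" for i, n in enumerate(names)}
    terms = [f"{coeffs[n]}*{rename[n]}" for n in names]
    return " + ".join(terms + [str(const)]) + f" {op} 0"

cache = {}  # stand-in for Redis: any store with get/put works

def check(coeffs, const, op, solver):
    key = canonize(coeffs, const, op)        # canonical string = cache key
    if key not in cache:
        cache[key] = solver(coeffs, const, op)  # "SAT" / "UNSAT"
    return cache[key]
```

For example, `canonize({'y': 2, 'x': 1}, -3, '<=')` and `canonize({'b': 2, 'a': 1}, -3, '<=')` produce the same key, so the second query is a cache hit: syntactically different but alpha-equivalent formulae share one stored verdict.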

9 Slicing has the biggest impact. 10 The canonical form exposes semantic equivalence as syntactic equivalence.

Sergey Vartanov, 2007–2020