On a related note, I recently learned about the relationship between theorem provers and unification [1][2].
I was actually just trying to understand "plain" unification (still working through it!). I just find it exciting to discover cool applications of unification that I haven't come across before.
(Btw, I'm trying to figure out how unification can be used as a building block for a Datalog-like query language for IndexedDB.)
To understand unification I recommend Robinson's seminal paper on resolution, "A machine-oriented logic based on the resolution principle", which is also referenced in your first link. It's a widely influential paper and most modern work on unification, theorem proving, logic programming etc derives from it to some extent.
It's an old paper and it might look scary at first sight, but it's actually pretty readable. As an aside, I'm told that Robinson's paper was in review for well over a year in a constant back-and-forth between him and referees who kept demanding improvements. Which of course was very frustrating for Robinson, but it seems to have done the job and resulted in a very readable, very clear paper. Give it a try!
Btw, I'm a bit put off by the terminology chosen in your first link that treats predicates as functions. Robinson of course describes unification in first-order logic. Propositional logic doesn't have functions, so it doesn't make a lot of sense to speak of unification with functions but without predicates.
For me the best introduction to unification was creating a microKanren[1] implementation based on the "Hello, declarative world" article[2], but even just reading the article can help enormously to understand how unification works.
Basically, if you have unification then you already have a simple query engine, one that can handle the kind of simple equality constraints usually found in SQL WHERE clauses. For example "select * where pokemon = pikachu" can be handled by such a system.
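If it helps, here's a minimal sketch of that idea in Python: a toy substitution-based unifier plus a made-up fact table, roughly mirroring the "pokemon = pikachu" query. This is my own illustration, not microKanren itself, and the relation and data are invented.

```python
# Toy unification over tuples, enough to answer "select * where pokemon = pikachu".
# All names and data below are illustrative, not from any real system.

class Var:
    def __init__(self, name):
        self.name = name
    def __repr__(self):
        return f"?{self.name}"

def walk(term, subst):
    # Follow variable bindings until we reach a value or an unbound variable.
    while isinstance(term, Var) and term in subst:
        term = subst[term]
    return term

def unify(a, b, subst):
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if isinstance(a, Var):
        return {**subst, a: b}
    if isinstance(b, Var):
        return {**subst, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None  # the terms clash; no substitution makes them equal

# A tiny "table" of facts and a query pattern with one bound column and one variable.
pokemon = [("pikachu", "electric"), ("bulbasaur", "grass"), ("raichu", "electric")]

def query(pattern, facts):
    for fact in facts:
        s = unify(pattern, fact, {})
        if s is not None:
            yield {v.name: walk(v, s) for v in pattern if isinstance(v, Var)}

t = Var("type")
print(list(query(("pikachu", t), pokemon)))   # [{'type': 'electric'}]
```

Each row that unifies with the query pattern yields a binding for the variables, which is exactly the "WHERE equality" behaviour described above.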
Of course this only gives a bare bones query engine, so if you want something more advanced then you will need to implement a constraint system on top of this base unification system. cKanren is a good example of such a system.
A few years ago I had the same kind of question about the relationship between databases and logic programming languages like microKanren, and as a result I created LogikalDB[3], which is an embedded query language for FoundationDB.
The article series Modern SAT Solvers: Fast, Neat, and Underused[1] argues that we don't use generic problem solvers enough in our daily engineering work.
A generic solution won't beat a hand-optimised specific solution, but if you're good at phrasing your problem in the right way, you can get 80 % of the way there with 20 % of the effort by using a generic solver.
Z3, being an even more generic SMT solver rather than "just" a SAT solver, should allow you to do even more with less.
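For a taste of what that looks like in practice, here's a tiny sketch using Z3's Python bindings (the z3-solver package); the constraints themselves are made up for illustration.

```python
# A tiny SMT example with Z3's Python API: ask for integers satisfying a few
# constraints instead of writing the search by hand.
from z3 import Ints, Solver, sat

x, y = Ints("x y")
s = Solver()
s.add(x + y == 10)      # the "specification": what any solution must look like
s.add(x > y, y >= 0)
s.add(x % 3 == 1)

if s.check() == sat:    # Z3 works out *how* to find a solution
    print(s.model())    # e.g. [x = 7, y = 3]
else:
    print("no solution")
```

The point is the separation: you state what a solution is, and the solver owns the search strategy.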
I've long wanted to integrate this type of solver into my workflow more, because I think that if I can make it into more of a habit, it would provide insane leverage for prototyping. Does anyone have experience using these techniques for that?
> A generic solution won't beat a hand-optimised specific solution, but if you're good at phrasing your problem in the right way, you can get 80 % of the way there with 20 % of the effort by using a generic solver.
Using generic solvers is especially useful, because when the problem slightly changes, you don't need to re-write your hand-optimised specific solution from scratch.
Generic solvers also naturally make you separate the specification of your solution from your method of finding that solution.
With hand-crafted approaches, the implementation (of the solver) almost always becomes the de facto specification, and it's hard to tell which accidental behaviours of the solver are now being relied on by other parts of the system or by users.
I have more experience with (Mixed) Integer Programming, but the same principle applies.
I would say that the benefits you get from a habit of using SMT solvers depends a lot on the kind of problems you are working on.
If your problem is rather small and self-contained, you can really win a lot with these solvers. E.g. I used an ILP solver to fairly distribute tasks to a group of people based on a heuristic of familiarity between each person and the tasks. That's only a few constraints in the solver, and it saved me a lot of manual or suboptimal coding work. Similarly, it was easy enough to quickly check whether two graphs satisfy a custom condition, close to (but not quite) isomorphism, that I wanted to try out.
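Roughly the shape of that assignment model, sketched here with PuLP and invented familiarity scores (the original used a different solver and a real heuristic, so treat this as an illustration only):

```python
# Assign tasks to people, maximizing total familiarity, with a balanced load.
# Uses PuLP's bundled CBC solver; people, tasks, and scores are invented.
from pulp import LpProblem, LpMaximize, LpVariable, LpBinary, lpSum
import random

random.seed(0)
people = ["ann", "bob", "cem"]
tasks = ["t1", "t2", "t3", "t4", "t5", "t6"]
familiarity = {(p, t): random.randint(0, 9) for p in people for t in tasks}

# One binary variable per (person, task) pair: does this person get this task?
x = LpVariable.dicts("assign", (people, tasks), cat=LpBinary)

prob = LpProblem("task_assignment", LpMaximize)
prob += lpSum(familiarity[p, t] * x[p][t] for p in people for t in tasks)

for t in tasks:                              # every task goes to exactly one person
    prob += lpSum(x[p][t] for p in people) == 1
for p in people:                             # keep the load roughly balanced
    prob += lpSum(x[p][t] for t in tasks) <= -(-len(tasks) // len(people))

prob.solve()
for p in people:
    print(p, [t for t in tasks if x[p][t].value() == 1])
```

A handful of constraints replaces what would otherwise be a fiddly hand-written assignment heuristic.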
If you are thinking of bigger, more complex tasks, the efficiency gains you get might vary more. If your problem produces many large solver queries, the chances are good that you will wait a long time for results. Without much practice, adjusting/augmenting your specification (and specifically in the case of ILP solvers: tuning the solver parameters) to reduce your solving times is a very non-trivial task. Either way, the solver probably won't bring you to 80% of the execution speed of a custom solution: I once sped up a task by 100-200x by replacing a Gurobi LP with a dumb, theoretically exponential, but easy to optimize custom implementation (the solver was however helpful for testing that the implementation was correct).
Lastly, in my experience, once a constraint system/model reaches a certain complexity, it is a lot more difficult to debug than code in a programming language. There are useful techniques and tricks for debugging solver models, but the tooling and the sequential execution of real code make debugging traditional code less of a headache.
Big disadvantage of custom implementations is the fact that they are custom. If you change the constraints / requirements then suddenly they do not work.
Tailored implementations are great for problems that you have to solve frequently with no changes to the structure. In reality, requirements change frequently, breaking many previously valid assumptions for efficiently filtering the feasible space.
> I've long wanted to integrate this type of solver into my workflow more, because I think that if I can make it into more of a habit, it would provide insane leverage for prototyping. Does anyone have experience using these techniques for that?
In my experience (and I'd love alternate viewpoints here, just because I want to use solvers more!), there's a limited set of areas where SAT, SMT, and MILP solvers are a reasonable choice, but in these situations they really shine. Any time you need to model and reason about something in terms of rules and constraints (business logic validation perhaps, schedules, layout engines, etc) solvers will help you cleanly solve the problem. Of course, then there's the additional problem that large models take large amounts of time to converge, so large problems may require more creative approaches. I've used MILP solvers in the past to determine optimal (given a model) average thread pool sizes, and average batch job wait times as well, but operating conditions usually change so frequently that the models quickly go stale and perform poorly.
One of my favorite uses of Z3, though, is ZetZ[1], a language that adds a verification layer atop C.
You're probably not wrong, because what you're saying seems to be conventional wisdom.
However, there's a part of me that can't help thinking, "...but that's only because we're not used to thinking of our problems in terms of rules and constraints. If we got into the habit of doing it, we'd soon find out that more and more of our problems are of the rules-and-constraints kind, if only looked at the right way."
There's also an argument that you can turn any problem into rules and constraints, but doing so might not help you. I'm taking the stance here that maybe for a larger class of problems than we think it is actually helpful to think of them as rules and constraints. And then it's a small step to apply a generic solver as the first, baseline approach.
So my argument is not really that "we should apply solvers wherever they are currently immediately obviously useful" but more that "there might be a high-potential paradigm shift hidden here where we as a profession can adopt a new way of thinking based on off-the-shelf solvers for generic problems, if only we learn to view the world that way, and this might turn out to be a better way to think of the lower-level components of software engineering."
Have a look at github.com/Shelwien/sudoku-cbmc for a “neat hack” that abuses the C bounded model checker to, basically, run a sat solver on a problem written in C.
off topic but I clicked this immediately because of the research.google.com domain, and I wanted to see what the folks at google are using z3 for, but it's just a colab notebook
TLA+ has been used at Amazon for proving high-level properties of their distributed systems. Very useful, because a fix in prod for a protocol is very costly. ACL2 was used to verify some floating-point hardware that AMD used a couple of decades ago, haha. GitHub recently bought the formal methods company Semmle (something like that) for static analysis of repos.
Formal verification is used in critical software, airplanes and so on.
The most popular form of formal methods is type systems, which I would say a normal SWE should learn about and understand. They're not typically characterised that way, but a type system is a proof, inscribed in the program, that certain properties hold... Sounds pretty formal methodsy to me :-).
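A trivial illustration of that framing in Python, assuming a checker such as mypy is run over the file (the function and data are my own toy example):

```python
# The annotation is a small machine-checked claim about the program:
# every call site is verified against it before anything runs.

def mean(values: list[float]) -> float:
    return sum(values) / len(values)

mean([1.0, 2.0, 3.0])   # accepted: the claim holds at this call site
# mean("not a list")    # a type checker rejects this before the program ever runs
```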
It's a specialised area of study, but isn't that the definition of an area of study performed at a university-level?
Having a two-year master's degree on the subject, I can give it a go:
- FAANG companies, top research universities, spinoffs of top research universities
- Yes and no. Yes, because FM orbits around philosophical questions like correctness, soundness, etc., which are not usually taught in your normal web shop. No, because it is fundamental CS, i.e. generic.
- To prove properties, e.g. that a piece of software is bug-free. Usually this means things like reachability of states, which can be checked with e.g. TLA+. This may be a requirement in a critical system, where your normal support ticket could instead mean the death of someone. Asynchronous, distributed, and multi-core systems, which might be tough to wrap your head around, are also likely application areas.
- This is super opinionated, but I would say learn about SMT solvers and forget the rest. For me, it clicked with SavileRow. Constraint programming was much more like an AI to me than any actual AI, since I just described the constraints and my program solved itself (see the tiny sketch below). This way you get used to propositional logic, which is then easier to extend into TLA+ and into partial formal verification tools that prove state properties of generic software with propositional logic and Hoare triples.
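To show what "describe the constraints and let it solve itself" feels like, here's a tiny constraint model written against Z3's Python bindings rather than SavileRow (the talks, slots, and constraints are invented):

```python
# Three talks, three slots, a couple of constraints; the solver finds a schedule.
from z3 import Ints, Solver, Distinct, And, sat

a, b, c = Ints("a b c")              # the slot (0..2) assigned to each talk
s = Solver()
s.add(And(0 <= a, a <= 2, 0 <= b, b <= 2, 0 <= c, c <= 2))
s.add(Distinct(a, b, c))             # no two talks share a slot
s.add(a < c)                         # talk a must come before talk c

if s.check() == sat:
    print(s.model())                 # e.g. [a = 0, b = 2, c = 1]
```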
>> Constraint programming was much more like an AI to me than any actual AI, since I just described the constraints and my program solved itself.
Theorem proving and SAT solving are classic AI subjects, don't forget. If we don't think of them this way today that's the usual "if it works, it's not AI" paradox (it's got a name I think, but I don't remember it).
I'd go as far as to say that theorem proving is honest-to-god AI, much more so than pattern recognition. After all, we know many animals that can recognise sights and sounds, but we only know of one animal that can state and prove theorems.
Formal Methods is a big field, and there's lots of different pieces with varying usefulness. I'm working on a bigger intro right now, but a couple years back I wrote an overview on why people don't use it: https://www.hillelwayne.com/post/why-dont-people-use-formal-...
Right now, I'd say that it is still a specialized area of study, and most normal SWEs don't need huge swathes of it. Sure, it might help you think better about problems, but lots of things do that, too, and some are even directly useful! But there are still useful parts, like specifying abstract distributed systems, where it saves more in preemptive bugfixing than it costs in time spent. Disclaimer, I make money off teaching it, so of course I'm biased ¯\_(ツ)_/¯
Formally verified operating systems such as seL4, used in mission-critical scenarios, rely on formal methods to prove the correctness of the system. Definitely a real-world use case. Worth knowing for a normal SWE? Probably not.
Formal methods are useful where the cost of bugs greatly exceeds the cost of formal verification.
Most software systems do not fall into this category, but some do: space launches, transport safety systems, medical equipment and particle colliders are good examples. The LHC has several of its subsystems formally verified, for example.
Worth learning as a normal SWE? Heck yes, at least to some degree.
It makes you a lot better at writing understandable, modular, composable code with sensible abstraction boundaries. Also makes you better at writing tests. At least this was my experience.
Honestly, the only specific thing I can think of is basic formal logic (predicate calculus, sets, functions, formal proofs, and so forth). (For ex: https://www.logicmatters.net/ifl/)
A specific technique may require something more, but generally any resource starts by assuming you know nothing beyond formal logic (if that) and builds from there.
Since everyone else is being optimistic, I'll jump in as the bitter, cynical, devil's advocate:
- To a first order of approximation: nowhere. People mention an Amazon (research?) group using TLA+, there's AdaCore with GNAT, INRIA with Frama-C, various dependently-typed language groups, but in practice I have not heard of any extensive use of formal methods.
- Yes and no. No, in the sense that I was introduced to it as an (early) undergrad and it forms a good chunk of my intuitions and how I think about some problems. On the other hand, I don't believe anyone still does that kind of thing, and formal methods have become a separate, diverging area of study, particularly if you are experienced and looking to add it to what you already know.
- By and large, it's not needed. The current technological culture does not apply any form of pressure to encourage the extra effort needed in general. There is some such pressure in some areas---like the Amazon TLA network protocol stuff (it's really hard to debug your way out of a bad network protocol :-))---but on the leading edge, innovation is preferred to reliability, and on the trailing edge, low price is. (In a sense, it's "move fast, break things" writ large.) Since the consequences of technical failures don't generally end up on the technical community's doorstep and major failures are management problems, the "need" for formal methods is entirely theoretical.
- By and large, no, in the sense that future earnings provide the value of your education. You would be much better rewarded spending the time learning some particular domain or more popular technologies.
Now, as I said, I grew up with formal methods and they are the base of how I think about things. Plus, they're just plain fun. (No, really, Coq (https://coq.inria.fr/) is the best computer game I've ever seen. And Frama-C is just brilliant, if you can get it working and are living within its constraints.) I've spent a lot of time and effort learning formal methods and I wouldn't do anything differently. But then, I have a lower long-term salary history than many of my peers. For what it's worth...
- Frama-C is developed by CEA (French nuclear research agency; Inria participated in the beginning of the project, and is still a partner, but development has always been led by CEA);
- Your first order of approximation is a bit severe; there are formal methods groups within large companies, such as Airbus using Astrée, Frama-C, and CompCert; Framatome (French nuclear reactor business) using Astrée; EDF (French electrical company) using Frama-C; plus several railway companies using formal methods such as B; and NASA, Thales, Scania... It's peanuts compared to the amount of money being poured into blockchains, AI, and the latest hype, but formal methods are used in some industries.
Overall, I agree with you: it's not the best time investment for many SWEs, but if you like maths, formal methods do provide some satisfaction and long-term hope that we'll stop patching bugs and rewriting things all the time, and build some longer-lasting software.