Hacker News
MiniZinc: free and open-source constraint modeling language (minizinc.org)
179 points by bjourne on Jan 20, 2018 | 33 comments



Every time some OR or constraint programming language comes up, Håkan Kjellerstrand is an obligatory mention. He has a whole page dedicated to problems and examples for those, including MiniZinc: http://www.hakank.org/minizinc/index.html


Yes, an impressive set of problems solved with an impressive set of solvers. Quite a bit of experience stored on that site.

One thing I'm interested to know (as someone with less experience in constraint programming) is when one should employ constraint programming engines like Choco, Gecode, MiniZinc, etc., vs. when metaheuristics (such as simulated annealing, genetic algorithms, etc.) are a more practical solution. Is there a rule of thumb regarding the size of the search space, type of problem, etc.?


Constraint satisfaction is NP-complete, and the optimization version is NP-hard. In practice that means predicting how many steps solving a problem will take is about as hard as simply solving it.

Local search will usually outperform complete (global) search, but there's no guarantee of a solution, let alone an optimal one. So it really depends on your needs and how long you're prepared to wait. Modern CSP solvers can handle impressively large problems with many thousands of constraints (millions of clauses for SAT), but it really depends on what your constraints look like.


One easy rule: use CP when one of these is true.

You need to know the actual optimal answer (metaheuristics can never prove they have found the optimum).

You need all solutions to your problem.

Your problem is just true/false, not optimisation, and there is no obvious way to measure the quality of a partial solution (so metaheuristics will struggle).

In general, CP does better on smaller, harder problems.


A constraint solver typically works by:

1. Having a finite set of variables, each with a finite set of possible values

2. Having a finite set of propagators which are monotonic on variables (that is, each one either removes possible values from a variable's domain or leaves it unchanged; it never adds values)

3. A space describing a full set of variables and propagators

4. A way of making a choice that reduces the number of possible values once all propagators have hit a fixpoint for some space

5. A way of rewinding state when some variable runs out of possible values

And then it basically (BASICALLY) runs this loop:

  while (!solved(space)) {
      if (failed(space)) {
          rewind(space);
      }
      while (!fixpoint(space)) {
          for (p in propagators(space)) {
              p(space);
          }
      }
      make_choice(space);
  }

It's mainly used for solving NP-hard problems by being smart when attacking the full search tree.
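A minimal runnable sketch of that loop in Python (names are made up for illustration; domains are sets, propagators shrink them in place and report whether anything changed, and "rewind" is done by copying the space before branching):

```python
def solve(domains, propagators):
    """domains: dict var -> set of values; propagators: callables that
    shrink domains in place and return True if anything changed."""
    # Run every propagator until a fixpoint (no domain changes).
    changed = True
    while changed:
        changed = False
        for prop in propagators:
            if prop(domains):
                changed = True
    if any(len(d) == 0 for d in domains.values()):
        return None  # failed space: caller backtracks ("rewind")
    if all(len(d) == 1 for d in domains.values()):
        return {v: next(iter(d)) for v, d in domains.items()}  # solved
    # make_choice: branch on the first unfixed variable.
    var = next(v for v, d in domains.items() if len(d) > 1)
    val = min(domains[var])
    left = {v: set(d) for v, d in domains.items()}  # copy = cheap rewind
    left[var] = {val}
    result = solve(left, propagators)
    if result is not None:
        return result
    right = {v: set(d) for v, d in domains.items()}
    right[var].discard(val)
    return solve(right, propagators)

def less_than(x, y):
    """Propagator for x < y: prune x-values >= max(y) and y-values <= min(x)."""
    def prop(domains):
        if not domains[x] or not domains[y]:
            return False
        new_x = {v for v in domains[x] if v < max(domains[y])}
        new_y = {v for v in domains[y] if v > min(domains[x])}
        changed = (new_x != domains[x]) or (new_y != domains[y])
        domains[x], domains[y] = new_x, new_y
        return changed
    return prop

print(solve({"a": {1, 2, 3}, "b": {1, 2, 3}, "c": {1, 2, 3}},
            [less_than("a", "b"), less_than("b", "c")]))
# propagation alone fixes a=1, b=2, c=3, no branching needed
```

Real solvers like Gecode do the copying and scheduling of propagators far more cleverly, but the shape of the loop is the same.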


So what differentiates the solvers then? Is it in how they apply the propagators? I'm interested because I naively wrote the exact loop you just posted, trying to solve a very hard problem. I spent ages trying different heuristics, but I knew I was floundering. Then I discovered Z3, which solves the problem in a tiny fraction of the time. It seems like pure magic, and I'm keen to understand how it works.


Z3 is an SMT solver, which means it combines a boolean SAT solver, used to solve the boolean structure of the problem, with theory-specific solvers, e.g. for bit vectors or linear arithmetic.

Modern SAT solvers are based on the DPLL algorithm (usually extended with conflict-driven clause learning), which has in turn influenced modern CSP solvers. Wikipedia has a great page on DPLL:

https://en.m.wikipedia.org/wiki/DPLL_algorithm
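For flavor, the core of DPLL (unit propagation plus splitting) fits in a few lines of Python. This is a toy sketch with DIMACS-style integer literals, not how a real solver represents things:

```python
def dpll(clauses, assignment=None):
    """clauses: list of lists of nonzero ints, DIMACS-style (-3 means
    variable 3 negated). Returns a satisfying assignment dict or None."""
    assignment = dict(assignment or {})
    while True:
        simplified, unit = [], None
        for clause in clauses:
            lits, satisfied = [], False
            for lit in clause:
                val = assignment.get(abs(lit))
                if val is None:
                    lits.append(lit)          # still undecided
                elif (lit > 0) == val:
                    satisfied = True          # clause already true
                    break
            if satisfied:
                continue
            if not lits:
                return None                   # empty clause: conflict
            if len(lits) == 1 and unit is None:
                unit = lits[0]
            simplified.append(lits)
        clauses = simplified
        if unit is None:
            break
        assignment[abs(unit)] = unit > 0      # unit propagation
    if not clauses:
        return assignment                     # all clauses satisfied
    var = abs(clauses[0][0])
    for value in (True, False):               # split; backtrack via recursion
        result = dpll(clauses, {**assignment, var: value})
        if result is not None:
            return result
    return None
```

Production solvers add clause learning, watched literals, and restarts on top of this skeleton, which is where most of the "magic" speed comes from.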


The two points where clever heuristics can be applied are in make_choice (i.e., which variable to set such that the search space is maximally reduced) and in rewind (i.e., which variable to unset to make the problem hopefully satisfiable again).

Different heuristics work best for different workloads, so you get solvers that are specialized for a single domain.
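As a concrete example of a make_choice heuristic, "first fail" branches on the variable with the smallest remaining domain, so dead ends are discovered near the top of the search tree (a sketch, with domains as Python sets):

```python
def first_fail(domains):
    """Pick the unfixed variable with the fewest values left, or None if
    every variable is already fixed (ties broken by name for determinism)."""
    unfixed = [(len(d), v) for v, d in domains.items() if len(d) > 1]
    return min(unfixed)[1] if unfixed else None

print(first_fail({"a": {1, 2, 3}, "b": {4, 5}, "c": {6}}))  # prints b
```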


I haven’t used all the different solver engines, and I haven’t kept up with the field, but many years ago I had a license for IBM ILOG CPLEX, and it was interesting how much faster it was than some others. Gurobi is also very fast (I think it was started by the people who sold ILOG CPLEX to IBM), but I haven’t benchmarked it against CPLEX.


So I haven't looked deeply into the capabilities of Gurobi and CPLEX, but I was under the impression that they focus on integer linear programming. How do they do with problems for which finding a feasible solution is very difficult?


There's a good MOOC on Coursera on applied discrete optimization that uses MiniZinc: https://www.coursera.org/learn/basic-modeling


Fun fact: We're using MiniZinc and Gecode in production code to solve volume/product puzzles on oil tankers.


Tried to use MiniZinc to write a model checker [1] for C programs a while back [2], essentially checking the equivalence of C functions that use only bit operations. It translates the programs into MiniZinc, then attempts to derive a case where the same input produces differing output. It was a fun way to experiment with using a constraint solver for a practical application, though unfortunately it didn't work as well on larger, more complicated functions.

1) https://en.wikipedia.org/wiki/Model_checking

2) https://github.com/dbunker/ArchStatMzn


I'd expect an SMT (satisfiability modulo theories) solver like Z3 to work better for this use case; might be worth giving it a try. E.g.

  x = z3.BitVec('x', 32)
  y = z3.BitVec('y', 32)
  i1 = x | y
  i2 = ~(~x | ~y)
  s = z3.Solver()
  s.add(i1 != i2)
  if s.check() == z3.sat:
      print("found counterexample")
      print(s.model())
  else:
      print("always equal")


People need to specify what types of constraints their solver handles. It's a rather large space, and not even mentioning the supported constraint types on the page is a major oversight.


MiniZinc itself is only a specification language, which a large number of solvers can use.


Hear, hear. Is this good for optimization problems over the reals, or only finite-domain problems? If the former, can it solve nonlinear problems?


Some of the FlatZinc solvers can handle (nonlinear) finite-domain problems, and some just linear MIP problems with floats.

Some solvers, e.g. Gecode and JaCoP, can handle nonlinear problems with floats as well as finite-domain problems.


the hakank?


:-) Yes.


It depends on the solver backend. Some only solve finite domain problems. I think all of them (for general constraint programming) can solve non-linear problems. Obviously if you use a backend that only solves mixed integer linear programs, then you are more restricted.


Whenever I come across a post about constraint solvers, I refer back to this article for a great introduction to the domain - complete with examples and different progressive approaches - http://www.jasq.org/just-another-scala-quant/new-agey-interv...


I've never heard of constraint programming languages before, and I kind of wish I had, because I recently had a problem where I needed to sequence a bunch of elements based on a lot of arbitrary rules. Sequencing based on one rule was easy, but applying all of them simultaneously was a huge PITA and resulted in many headaches; this tool looks like it could have solved the problem quickly. Definitely putting this in my bookmark tool-set for future use.


I would love to hear about other HN readers' applications of optimization techniques.

(I used constrained quadratic optimization to fit transition matrices to population data for my dissertation, and for some other demographic estimation projects.)


What is a constraint modeling language exactly? I briefly looked at the first example in the documentation, and it looks like you specify a problem and some type of solver solves the problem automatically? What algorithm is it running exactly?


It can run using a number of solvers; you can find a list here: http://www.minizinc.org/software.html


It's a language for specifying problems that can be solved through constraint satisfaction. I'm unsure whether such languages are even Turing complete. Their main advantage is insanely succinct problem specifications, e.g. the n-queens problem: https://github.com/MiniZinc/minizinc-benchmarks/blob/master/... MiniZinc solves it for boards up to 100x100 in under a second.
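The model in that benchmark boils down to three alldifferent constraints: rows distinct, and both diagonal directions distinct. A brute-force Python check of the same constraints, just to show their shape (a CP solver propagates and prunes instead of enumerating permutations, which is what makes 100x100 feasible):

```python
from itertools import permutations

def n_queens(n):
    # q[i] is the row of the queen in column i; using permutations makes
    # the rows all-different by construction.
    for q in permutations(range(n)):
        if len({q[i] + i for i in range(n)}) == n and \
           len({q[i] - i for i in range(n)}) == n:
            return q  # both diagonal directions are also all-different
    return None
```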


“Constraint Programming” refers, for the most part, to the solving of finite-domain problems via backtracking and constraint propagation and is typically concerned with performing a complete, global search of the solution space.

https://en.m.wikipedia.org/wiki/Constraint_satisfaction


I like this answer. However, by solution space do you mean domain space?


Yes, sorry, the search space.


MiniZinc is a modeling language, used for problem specification. Then there are solvers that understand MiniZinc (Gecode, ECLiPSe, Google OR-Tools, ...), and they use a variety of algorithms (mixed integer linear programming, constraint-specific algorithms, local search, SAT-solving techniques, ...). You can use the same specification of the problem in MiniZinc with different solvers.


Possibly a simplex solve for a large linear problem.


There are a lot of different types of solvers that can be used to solve a MiniZinc model: MIP, constraint programming, SAT, hybrid SAT+CP, local search, etc.




