17 Jun 2017
Planet Lisp
Paul Khuong: Chubanov's Projection Methods for 0/1 Programming
I've long felt that compilers (and symbolic processing in general) would benefit from embedding integer programming solvers. However, I was never comfortable with actually doing so for a production system that others would have to run: industrial-strength integer linear programming solvers are large systems with complex runtime behaviour, and that's not the kind of black box you want to impose on people who just want to build their project. (That's also true of SAT solvers, though, so maybe embedding complicated black boxes is the new normal?)
However, if we had something simple enough to implement natively in the compiler, we could hope for the maintainers to understand what the ILP solver is doing. This seems realistic to me mostly because the generic complexity tends to lie in the continuous optimisation part. Branching, bound propagation, etc., are basic, sometimes domain-specific, combinatorial logic; cut generation is probably the most prominent exception, and even that tends to be fairly combinatorial. (Maybe that's why we seem to be growing comfortable with SAT solvers: no scary analysis.) So, for the past couple of years, I've been looking for simple enough specialised solvers I could use in branch-and-bound for large 0/1 ILPs.
Some stuff with augmented Lagrangians and specialised methods for box-constrained QPs almost panned out, but nested optimisation sucks when the inner solver is approximate: you never know if you should be more precise in the lower level or if you should aim for more outer iterations.
A subroutine in Chubanov's polynomial-time linear programming algorithm [PDF] (related journal version) seems promising, especially since it doesn't suffer from the numerical issues inherent to log barriers.
Chubanov's subroutine in branch-and-bound
Chubanov's "Basic Subroutine" accepts a problem of the form \(Ax = 0\), \(x > 0\), and either:
- returns a solution;
- returns a non-empty subset of variables that must be 0 in any feasible solution;
- returns a non-empty subset of variables \(x\sb{i}\) that always satisfy \(x\sb{i} \leq u\) in feasible solutions with \(x\sp{\star} \in [0, 1]\), for some constant \(u < 1\) (Chubanov sets \(u = \frac{1}{2}\)).
The class of homogeneous problems seems useless (never mind the non-deterministic return value), but we can convert "regular" 0/1 problems to that form with a bit of algebra.
Let's start with \(Ax = b\), \(0 \leq x \leq 1\); we can reformulate that in the homogeneous form:
\[Ax - by = 0,\] \[x + s - \mathbf{1}y = 0,\] \[x, s, y \geq 0.\]
Any solution to the original problem in \([0, 1]\) may be translated to the homogeneous form (let \(y = 1\) and \(s = \mathbf{1} - x\)). Crucially, any 0/1 (binary) solution to the original problem is still 0/1 in the homogeneous form. In the other direction, any solution with \(y > 0\) may be converted to the box-constrained problem by dividing everything by \(y\).
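To make the embedding concrete, here's a small numpy sketch of the reformulation (the names `homogenize` and `recover` are mine, not Chubanov's; the code just stacks the two block equations above):

```python
import numpy as np

def homogenize(A, b):
    """Embed Ax = b, 0 <= x <= 1 into homogeneous form M v = 0, v >= 0,
    where v = (x, s, y) stacks the original variables, the slacks, and
    the homogenizing variable y."""
    m, n = A.shape
    top = np.hstack([A, np.zeros((m, n)), -b.reshape(-1, 1)])     # Ax - by = 0
    bottom = np.hstack([np.eye(n), np.eye(n), -np.ones((n, 1))])  # x + s - 1y = 0
    return np.vstack([top, bottom])

def recover(v, n):
    """Given a homogeneous solution with y > 0, rescale back to box form."""
    x, y = v[:n], v[-1]
    assert y > 0, "y = 0 tells us nothing about the box-constrained problem"
    return x / y
```

Feeding in a feasible point of the original problem as \(v = (x, \mathbf{1} - x, 1)\) should satisfy \(Mv = 0\) exactly.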
If we try to solve the homogeneous form with Chubanov's subroutine, we may get:
- a strictly positive (for all elements) solution. In that case, \(y > 0\) and we can recover a solution to the box-constrained problem.
- a subset of variables that must be 0 in any feasible solution. If that subset includes \(y\), the box-constrained problem is infeasible. Otherwise, we can take out the variables and try again.
- a subset of variables that are always strictly less than 1 in feasible solutions. We exploit the fact that we only really care about 0/1 solutions (to the original problem or to the homogeneous reformulation) to also fix these variables to 0; if the subset includes \(y\), the 0/1 problem is infeasible.
As soon as we invoke the third case to recursively solve a smaller problem, we end up solving an interesting ill-specified relaxation of the initial 0/1 linear program: it's still a valid relaxation of the binary problem, but it is stricter than the usual box linear relaxation.
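The three-way case analysis might be driven by a loop along these lines (a hypothetical sketch: the tagged-result interface for `basic_subroutine` is my invention, and the last column of the constraint matrix is assumed to be \(y\)):

```python
import numpy as np

def solve_homogeneous(M, basic_subroutine):
    """Repeatedly call a Chubanov-style subroutine on M v = 0, v > 0,
    fixing variables to 0 until it yields a solution or proves
    0/1-infeasibility.  basic_subroutine is assumed to return
    ("solution", v), ("zero", idx) or ("small", idx), with idx
    relative to the columns it was given."""
    n = M.shape[1]
    active = list(range(n))  # columns not yet fixed to 0
    while True:
        tag, payload = basic_subroutine(M[:, active])
        if tag == "solution":
            v = np.zeros(n)
            v[active] = payload
            return v
        # Cases 2 and 3 coincide here: in 0/1 solutions, the
        # reported variables must be 0.
        fixed = {active[i] for i in payload}
        if n - 1 in fixed:  # y itself forced to 0: no 0/1 solution
            return None
        active = [j for j in active if j not in fixed]
```

Treating case 3 as "fix to 0" is only valid because we restrict our attention to 0/1 solutions, as argued above.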
That's more than enough to drive a branch-and-bound process. In practice, branch-and-bound is much more about proving the (near) optimality of an existing solution than coming up with strong feasible solutions. That's why the fact that the subroutine "only" solves feasibility isn't a blocker. We only need to prove the absence of 0/1 solutions (much) better than the incumbent solution, and that's a constraint on the objective value. If we get such a proof, we can prune away that whole search subtree; if we don't, the subroutine might have fixed some variables to 0 or 1 (always useful), and we definitely have a fractional solution. That solution to the relaxation could be useful for primal heuristics, and will definitely be used for branching (solving the natural LP relaxation of constraint satisfaction problems ends up performing basic propagation for us, so we get some domain propagation for free by only branching on variables with fractional values).
At the root, if we don't have any primal solution yet, we should probably run some binary search on the objective value and feed the resulting fractional solutions to rounding heuristics. However, we can't use the variables fixed by the subroutine: until we have a feasible binary solution with objective value \(Z\sp{\star}\), we can't assume that we're only interested in binary solutions with objective value \(Z < Z\sp{\star}\), so the subroutine might fix some variables simply because there is no 0/1 solution that satisfies \(Z < k\) (case 3 is vacuously valid if there is no 0/1 solution to the homogeneous problem).
That suffices to convince me of correctness. I still have to understand Chubanov's "Basic Subroutine."
Understanding the basic subroutine
This note by Cornelis/Kees Roos helped me understand what makes the subroutine tick.
The basic procedure updates a dual vector \(y\) (not the same \(y\) as the one I had in the reformulation... sorry) such that \(y \geq 0\) and \(\|y\|\sb{1} = 1\), and constantly derives from the dual vector a tentative solution \(z = P\sb{A}y\), where \(P\sb{A}\) projects (orthogonally) onto the null space of the homogeneous constraint matrix \(A\) (the tentative solution is \(x\) in Chubanov's paper).
At any time, if \(z > 0\), we have a solution to the homogeneous system.
If \(z = P\sb{A}y = 0\), we can exploit the fact that, for any feasible solution \(x\), \(x = P\sb{A}x\): any feasible solution is already in the null space of \(A\). We have
\[x\sp{\top}y = x\sp{\top}P\sb{A}y = x\sp{\top}\mathbf{0} = 0\]
(the projection matrix is symmetric). The solution \(x\) is strictly positive and \(y\) is non-negative, so this must mean that, for every component with \(y\sb{k} > 0\), \(x\sb{k} = 0\). There is at least one such component since \(\|y\|\sb{1} = 1\).
The last condition is how we bound the number of iterations. For any feasible solution \(x\) and any component \(j\),
\[y\sb{j}x\sb{j} \leq y\sp{\top}x = y\sp{\top}P\sb{A}x \leq \|x\| \|P\sb{A}y\| \leq \sqrt{n}\|z\|.\]
Let's say the max element of \(y\), \(y\sb{j} \geq 2 \sqrt{n}\|z\|\). In that case, we have \[x\sb{j} \leq \frac{\sqrt{n}\|z\|}{y\sb{j}} \leq \frac{1}{2}.\]
Chubanov uses this criterion, along with a potential argument on \(\|z\|\), to bound the number of iterations. However, we can apply the result at any iteration where we find that \(x\sp{\top}z < y\sb{j}\): any such \(x\sb{j} = 0\) in binary solutions. In general, we may upper bound the left-hand side with \(x\sp{\top}z \leq \|x\| \|z\| \leq \sqrt{n}\|z\|\), but we can always exploit the structure of the problem to obtain a tighter bound (e.g., by encoding clique constraints \(x\sb{1} + x\sb{2} + … = 1\) directly in the homogeneous reformulation).
The rest is mostly applying lines 9-12 of the basic procedure in Kees's note. Find the set \(K\) of all indices such that \(\forall k\in K,\ z\sb{k} \leq 0\) (Kees's criterion is more relaxed, but that's what he uses in experiments), project the vector \(\frac{1}{|K|} \sum\sb{k\in K}e\sb{k}\) in the null space of \(A\) to obtain \(p\sb{K}\), and update \(y\) and \(z\).
The potential argument here is that, after updating \(z\), \(\frac{1}{\|z\|\sp{2}}\) has increased by at least \(|K| \geq 1\). We also know that \(\max y \geq \frac{1}{n}\), so we can fix a variable to 0 as soon as \(\sqrt{n}\|z\| < \frac{1}{n}\), or, equivalently, \(\frac{1}{\|z\|} > n\sp{3/2}\). We need to increase \(\frac{1}{\|z\|\sp{2}}\) to at most \(n\sp{3}\), so we will go through at most \(1 + n\sp{3}\) iterations of the basic procedure before it terminates; if the set \(K\) includes more than one coordinate, we should need fewer iterations to reach the same limit.
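Putting the pieces together, here's my dense-matrix reading of the basic procedure (the step size is the minimiser of \(\|z\|\) on the segment between \(z\) and \(p\sb{K}\); treat this as an illustrative sketch of Kees's note, not a verified implementation):

```python
import numpy as np

def basic_procedure(A, max_iter=None):
    """Basic procedure for Ax = 0, x > 0 (dense sketch).

    Returns ("solution", z) with z > 0 in the null space of A, or
    ("small", j): x_j <= 1/2 in all feasible solutions in [0, 1]^n."""
    _, n = A.shape
    # Orthogonal projector onto null(A); O(n^3), as in Chubanov's bound.
    P = np.eye(n) - A.T @ np.linalg.pinv(A @ A.T) @ A
    y = np.full(n, 1.0 / n)  # dual iterate: y >= 0, ||y||_1 = 1
    z = P @ y                # tentative solution
    max_iter = max_iter or 1 + n ** 3
    for _ in range(max_iter):
        if np.all(z > 0):
            return ("solution", z)
        j = int(np.argmax(y))
        if np.sqrt(n) * np.linalg.norm(z) <= y[j] / 2:
            return ("small", j)
        # Average the unit vectors for the nonpositive components of z...
        K = np.flatnonzero(z <= 0)
        e_K = np.zeros(n)
        e_K[K] = 1.0 / len(K)
        p_K = P @ e_K  # ...and project into null(A).
        # Convex step minimising ||alpha z + (1 - alpha) p_K||.
        d = z - p_K
        alpha = float(np.clip(-(p_K @ d) / (d @ d), 0.0, 1.0))
        y = alpha * y + (1 - alpha) * e_K
        z = alpha * z + (1 - alpha) * p_K
    return ("limit", None)  # shouldn't happen with the default bound
```

The sketch folds the \(z = 0\) case into the "small" return, which remains valid for the 0/1 use described above.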
Chubanov shows how to embed the basic procedure in a basic iterative method to solve binary LPs. The interesting bit is that we reuse the dual vector \(y\) as much as we can in order to bound the total number of iterations in the basic procedure. We fix at least one variable to \(0\) after a call to the basic procedure that does not yield a fractional solution; there are thus at most \(n\) such calls.
Next step
In contrast to regular numerical algorithms, the number of iterations and calls so far have all had exact (non-asymptotic) bounds. The asymptotics hide in the projection step, where we average elementary unit vectors and project them in the null space of \(A\). We know there will be few (at most \(n\)) calls to the basic procedure, so we can expend a lot of time on matrix factorisation. In fact, Chubanov outright computes the projection matrix in \(\mathcal{O}(n\sp{3})\) time to get his complexity bound of \(\mathcal{O}(n\sp{4})\). In practice, this approach is likely to generate a lot of fill-in, and thus run out of RAM.
I'd start with the sparse projection code in SuiteSparse. The direct sparse solver spends less time on precomputation than fully building the projection matrix (good if we don't expect to always hit the worst-case iteration bound), and should preserve sparsity (good for memory usage). In return, computing projections is slower, which brings the worst-case complexity to something like \(\mathcal{O}(n\sp{5})\), but that can be parallelised, should be more proportional to the number of non-zeros in the constraint matrix (\(\mathcal{O}(n)\) in practice), and may even exploit sparsity in the right-hand side. Moreover, we can hope that the \(n\sp{3}\) iteration bound is pessimistic; that certainly seems to be the case for most experiments with random matrices.
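For reference, the projection never needs the explicit matrix \(P\sb{A}\): it's a least-squares solve. Here's a dense numpy stand-in (in production, the `lstsq` call would be replaced by a sparse QR factorisation from SuiteSparse, factored once and reused across iterations):

```python
import numpy as np

def project_null(A, v):
    """Compute P_A v = v - A^T w, where w is a least-squares solution
    of A^T w ~ v; the residual is exactly the orthogonal projection of
    v onto null(A)."""
    w, *_ = np.linalg.lstsq(A.T, v, rcond=None)
    return v - A.T @ w
```

Since the basic procedure projects against the same \(A\) throughout, factoring once and only re-solving for new right-hand sides is where the precomputation/solve trade-off above shows up.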
The worst-case complexity, between \(\mathcal{O}(n\sp{4})\) and \(\mathcal{O}(n\sp{5})\), doesn't compare that well to interior point methods (\(\mathcal{O}(\sqrt{n})\) sparse linear solves). However, that's all worst-case behaviour (even for IPMs). We also have different goals when embedding linear programming solvers in branch-and-bound methods. Warm starts and the ability to find solutions close to their bounds are key to efficient branch-and-bound; that's why we still use simplex methods there. Chubanov's projection routine seems like it might come close to the simplex's good fit in branch-and-bound, while improving efficiency and parallelisability on large LPs.
17 Jun 2017 7:24pm GMT
12 Jun 2017
Planet Lisp
McCLIM: Progress report #8
Dear Community,
During this iteration we had many valuable contributions. It's a joy to see how McCLIM gains more mindshare and people are willing to put their time and wallet in fixing issues and writing applications in McCLIM.
Some highlights for this iteration:
- many Listener fixes,
- major tab layout extension refactor,
- new extension for Bezier curves (based on older internal implementation),
- interactor improvements,
- layout improvements,
- fixes for redisplay and transformations,
- documentation cleanups,
- cleanup of the issues (closed the obsolete and fixed ones).
All McCLIM bounties (both active and already solved) may be found here.
Bounties solved this iteration:
- [$200] Interactor CLI prompt print problem.
  Fixed by Gabriel Laddel. Waiting for a pull request and a bounty claim.
- [$200] Problem with coordinate swizzling (probably).
  Fixed by Alessandro Serra and merged. Waiting for a bounty claim.
- [$100] Menu for input-prompt in Lisp listener does not disappear after use.
  Fixed by Alessandro Serra and merged. Waiting for a bounty claim.
Active bounties:
- [$150] When flowing text in a FORMATTING-TABLE, the pane size is used instead of the column size.
- [$150] clx: input: english layout. (someone already works on it).
- [$100] Caps lock affects non-alphabetic keys. (new)
- [$100] Add PDF file generation (PDF backend). (new)
- [$100] Keystroke accelerators may shadow keys even if inactive. (new)
Our current financial status is $1,429 for bounties and $267 recurring monthly contributions from the supporters (thanks!).
Suggestions as to which other issues should have a bounty on them are appreciated and welcome. Please note that Bountysource has a functionality "Suggest an Issue" which may be found on the bounties page. If you feel that you may solve some problem, but there is no bounty on it, feel free to suggest it too.
If you have any questions, doubts or suggestions, please contact me either by email (daniel@turtleware.eu) or on IRC (my nick is jackdaniel).
Sincerely yours,
Daniel Kochmański
12 Jun 2017 1:00am GMT
11 Jun 2017
Planet Lisp
ABCL Dev: ABCL 1.5.0
Due to the lack of a publicly available Java 5 implementation, with this release we drop support for that platform, and henceforth support running on Java 6, Java 7, and Java 8.
In addition to consolidating eight months of bug fixes, the following notable features are now also present in the implementation.
The compiler now records more complete debugging information on the SYS:SOURCE symbol property.
ABCL-INTROSPECT offers improved inspection of backtraces to the point that local variables may be inspected in Lisp debug frames. Patches to SLIME to use this feature are in the process of being merged into the upstream repository. The OBJECTWEB system allows the user to disassemble JVM bytecode via dependencies managed by Maven.
JSS now contains a syntax for accessing Java static and member fields.
For declaring dependencies on Java artifacts with ABCL-ASDF, we have added an experimental syntax to address JRE/JDK artifacts via the ASDF:JDK-JAR class, as well as the ability to more finely control Maven dependencies with the ASDF:MVN-MODULE class.
A complete list of changes may be viewed in the source repository.
Binaries for this release may either be downloaded directly from http://abcl.org/releases/1.5.0, retrieved from the distributed Maven POM graph, or run from Docker via

docker run -it easye/abcl:1.5.0

Many thanks to all who have contributed to nurturing the Bear's execution of conforming ANSI Common Lisp on the Java Virtual Machine.
11 Jun 2017 10:41am GMT