First a very silly trick.

During IAEPC, there was a problem that could be modeled as a bipartite graph with $10^6$ vertices on each side. And each vertex had like $O(\log n)$ edges or something.

Somehow maroonrk managed to squeeze a solution under the TL on that.

His trick was to reverse all the edges and compute the cost from the sink to the source.

The test case creator did not consider that and therefore the tests were weak and he managed to scam.

Now for more legitimate tricks.

Bitset Kuhn’s

We know that for unweighted bipartite matching, it is $O(VE)$ using Kuhn's and $O(E \sqrt{V})$ using Dinic's.

But actually there is a way to do it in $O(V^3 / w)$ using bitsets, where $w$ is the machine word size. It is useful when the graph is very dense. The main idea is to store the adjacency list of each left vertex as a bitset. Then, during the DFS, for each left vertex we use the bitset to find the unvisited vertices on the right side.

Although I have no idea how practical this idea is compared to using Dinic's.
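A minimal sketch of the bitset idea in Python, using big integers as bitsets (bit $j$ of `adj[v]` set means there is an edge from left vertex $v$ to right vertex $j$); all names here are my own:

```python
def bitset_kuhn(adj, n_right):
    """adj[v] is an int bitmask of the right neighbours of left vertex v.
    Returns the size of a maximum matching."""
    match_r = [-1] * n_right          # match_r[j] = left vertex matched to j

    def try_augment(v, free):
        # free[0] is the bitmask of right vertices not yet visited in this DFS.
        while True:
            cand = adj[v] & free[0]   # unvisited neighbours of v, in O(V/w) word ops
            if not cand:
                return False
            j = (cand & -cand).bit_length() - 1   # lowest set bit
            free[0] &= ~(1 << j)                  # mark j as visited
            if match_r[j] == -1 or try_augment(match_r[j], free):
                match_r[j] = v
                return True

    matching = 0
    for v in range(len(adj)):
        free = [(1 << n_right) - 1]   # all right vertices unvisited
        if try_augment(v, free):
            matching += 1
    return matching
```

The point is that each DFS step finds the next unvisited neighbour with a single AND plus a lowest-set-bit extraction, instead of scanning edges one by one.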

Kuhn Speedup when Flow is small

Note that we only need to actually reset the visited array from the DFS when we successfully find an augmenting path. (If a DFS fails, the matching is unchanged, so every vertex that failed DFS visited would just fail again in later DFS calls.)

Therefore, the time complexity of Kuhn's is really $O(FE)$, where $F$ is the size of the maximum matching.

If we know that $F \ll V$, this can be a significant speedup.
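A sketch of this optimization in Python (names are my own); the only change from plain Kuhn's is that `visited` is cleared exclusively after a successful augment:

```python
def kuhn_fast(adj, n_right):
    """adj[v] is the list of right neighbours of left vertex v.
    Returns the size of a maximum matching in O(FE)."""
    match_r = [-1] * n_right
    visited = [False] * n_right

    def dfs(v):
        for j in adj[v]:
            if not visited[j]:
                visited[j] = True
                if match_r[j] == -1 or dfs(match_r[j]):
                    match_r[j] = v
                    return True
        return False

    flow = 0
    for v in range(len(adj)):
        if dfs(v):
            flow += 1
            visited = [False] * n_right   # reset ONLY on success
        # on failure, keep visited: those vertices would just fail again
    return flow
```

Between two successful augments, all the failed DFS calls share one visited array, so they do $O(E)$ total work; with $F$ successes the whole thing is $O(FE)$.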

Special Case of Weighted Bipartite Matching

Suppose we have a bipartite matching problem where, if the edge $(L_i, R_j)$ is in the matching, then we pay an additional cost of $C_i$ (which depends only on the left endpoint of the matched edge).

It turns out we can use Kuhn's to solve this min-cost max-flow problem. And we are actually performing the MCMF algorithm when doing so.

The algorithm is to sort the vertices on the left side by $C_i$. Then we attempt to push an augmenting path through each vertex on the left side, from the lowest to the highest value of $C_i$.

If one models this as a MCMF graph, we can see that this is the strategy that MCMF will take, since the edges with cost are all directly connected to the source.

So we claim that the only viable augmenting paths are those that go through exactly 1 edge that is incident to the source.

Suppose otherwise, and there is a viable augmenting walk that passes through the source twice: $s \to a \to \cdots \to b \to s \to c \to \cdots \to t$.

But we know that $s \to a \to \cdots \to b \to s$ cannot be a negative-weight cycle, from the properties of MCMF (the residual graph never contains a negative cycle).

So it is at least as cheap to take the path $s \to c \to \cdots \to t$ directly anyways.

So each augmenting path only uses 1 edge incident to the source.

So the algorithm can be seen as picking the vertex on the left side with the minimum $C_i$ such that we can push an augmenting path through it.
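The sorted-by-$C_i$ strategy might look like this in Python (a sketch with made-up names):

```python
def min_cost_special(adj, costs, n_right):
    """adj[v] lists the right neighbours of left vertex v; matching left
    vertex v costs costs[v]. Returns (matching size, minimum total cost)."""
    match_r = [-1] * n_right

    def dfs(v, visited):
        for j in adj[v]:
            if j not in visited:
                visited.add(j)
                if match_r[j] == -1 or dfs(match_r[j], visited):
                    match_r[j] = v
                    return True
        return False

    size, total = 0, 0
    # try left vertices in increasing cost order
    for v in sorted(range(len(adj)), key=lambda v: costs[v]):
        if dfs(v, set()):
            size += 1
            total += costs[v]
    return size, total
```

Note that once a left vertex is matched, later augmenting paths may reroute which right vertex it is matched to, but never unmatch it (a standard property of Kuhn's), so each $C_v$ is paid exactly once.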

There is another more ad-hoc way to think about this.

Suppose $C_1 \le C_2 \le \cdots \le C_n$ and our algorithm picks the vertices $x_1 < x_2 < \cdots < x_k$ on the left side.

We will show inductively that an optimal solution must contain the vertices $x_1 < x_2 < \cdots < x_i$.

Suppose we already know the optimal solution must contain the vertices $x_1 < x_2 < \cdots < x_i$, and the optimal solution is actually $x_1 < x_2 < \cdots < x_i < y_{i+1} < y_{i+2} < \cdots < y_k$.

Firstly, we must have $x_{i+1} \le y_{i+1}$, because otherwise, from the algorithm we know that taking $x_1, x_2, \ldots, x_i, y_{i+1}$ on the left side is impossible.

Now we will show that we can change one of the $y_j$ to become $x_{i+1}$.

Now, consider the matching $M$ when the vertices on the left side are $x_1, x_2, \ldots, x_i, x_{i+1}$, and the matching $M'$ when the vertices on the left side are $x_1, x_2, \ldots, x_i, y_{i+1}, y_{i+2}, \ldots, y_k$.

Consider the union $M \cup M'$. The component containing $x_{i+1}$ is clearly a path. If the path ends on the right side of the graph, then $M'$ isn't a maximum matching, because we found an $M'$-augmenting path.

Otherwise, the path ends in one of $y_{i+1}, y_{i+2}, \ldots, y_k$, say $y_j$. Since $x_{i+1} \le y_{i+1} < \cdots < y_k$, we have $C_{x_{i+1}} \le C_{y_j}$, so it is optimal to flip that alternating path, swapping $y_j$ out for $x_{i+1}$.

Therefore, we have two different but probably very similar ways of proving our claim.

These last two optimizations are used in CF1728F, if you want to solve that.

EDIT: Patrik Pavic told me that you can prove this easily since the sets of valid $x$ form a matroid…

