Boundary-based segmentation with Multicut

What it is and why you need it

This workflow allows you to segment images based on boundary information. Given a boundary probability map, it breaks the image up into superpixels and then merges them to recover segments bounded by closed surfaces (no dangling edges). The main algorithm, known as multicut or correlation clustering, was presented in this paper by B. Andres. In biological image analysis it can be applied, for example, to connectomics data, to bright field and phase contrast images, or to any other kind of imaging which relies on membrane staining.

Boundary evidence

Start by creating a boundary probability map. This can be done with ilastik’s Pixel Classification workflow or by an outside program. The images below illustrate the boundary map creation in ilastik for a very small stack of electron microscopy images of a mouse brain (data from Graham Knott’s lab, EPFL).

Pixel Classification: labels
Pixel Classification: predictions

Load the data

If you now start the Multicut workflow and load the raw data and this probability map, you’ll see something like this:


If you have already computed superpixels on your data, you can also load them in the corresponding tab and skip the superpixel creation step described below. If you also have a ground truth segmentation, load it as well and the method will use it for training instead of interactively provided labels. The trained workflow can, as usual, be applied to other datasets in batch or headless mode.

Superpixels – DT Watershed

We compute superpixels by the watershed algorithm, running it on the distance transform of the boundary probability map. The seeds are computed from the local maxima of the distance transform (each maximum then gives rise to a separate superpixel). The motivation for this approach is as follows: commonly used superpixel algorithms, for example SLIC, group pixels based on their similarity in brightness. This is, however, not desired here, since it would result in superpixels which lie on the boundaries rather than being separated by them. Instead, for our use case, superpixels should group pixels based on which object they belong to. To achieve this, the high-contrast boundaries can be used. Here, the technique of choice is a watershed.

The most common approach is to calculate the watershed directly on the boundary prediction. However, this only works if the boundary prediction perfectly separates each object from its neighbors. The smallest hole in the prediction can lead to merging different objects into the same superpixel. Obtaining a perfect edge prediction is hard in itself and is often further complicated by boundary gaps due to errors in the sample preparation procedure. Consequently, we would prefer an algorithm which is robust to boundary holes.

This can be achieved by performing the watershed on the distance transform of the boundary prediction. Similar concepts have long been used to deal with missing edge information in other applications (for example here). Performing the watershed on the distance transform ensures that all gaps smaller than the object diameter are closed by the superpixels. In practice this is almost always the case, so high quality segmentations can be obtained even for a low number of superpixels.
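
For illustration, here is a minimal sketch of such a distance-transform watershed built from scipy and scikit-image primitives. This is a toy under our own assumptions, not ilastik’s implementation: `pmap` stands for a 2D boundary probability map in [0, 1], and the parameter values are arbitrary.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def dt_watershed(pmap, threshold=0.5):
    """Watershed on the distance transform of a thresholded boundary map."""
    boundaries = pmap > threshold  # binary boundary mask
    distance = ndi.distance_transform_edt(~boundaries)  # distance to nearest boundary
    # Each local maximum of the distance transform seeds one superpixel.
    # A hole in the boundary only causes a merge if it is wider than the
    # objects it separates, which is what makes this robust to small gaps.
    peaks = peak_local_max(distance, min_distance=5)
    seeds = np.zeros(pmap.shape, dtype=np.int64)
    seeds[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    # Flood the inverted distance transform, so basins grow from object
    # centers and meet at the boundary ridges.
    return watershed(-distance, seeds)
```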

Our approach is further described in the Supplementary materials of this publication and, in great detail, in this Master’s thesis.

Let’s go through the controls of this applet from top to bottom (a sketch mapping them onto common image-processing operations follows the list):

  1. Input Channel – which channel of the probability map data contains the boundary probability? You can view different channels by enabling the “Probability Map” layer in the lower left corner. Thresholding of the current channel is displayed by default (here in green); just choose a different value in the combobox if the wrong channel is shown.

  2. Threshold – threshold for the probability map. Pixels with values above the threshold belong to the boundary, the rest belong to the background. You can see the results of the thresholding operation in the “Thresholded input” layer (on by default, as shown on the left).

  3. Min Boundary Size – size filter to get rid of single pixel noise or tiny fake boundary pieces.

  4. Presmooth before Seeds – how much to smooth the distance transform before computing the seeds (specify the sigma). The more you smooth, the fewer seeds you will get. With fewer seeds you get fewer, but larger, superpixels.

  5. Seed Labeling – when local maxima pixels are merged into seeds, which neighborhood is used? The Connected option corresponds to the direct 4-neighborhood in 2D and the 6-neighborhood in 3D. The Clustered option groups pixels together based on a distance heuristic.

  6. Min Superpixel Size – the resulting superpixels must be at least this large, measured in pixels.

  7. Preserve Thin Structures – this checkbox activates a trick to deal with large connected areas of high boundary probability. While not dangerous in general, such areas can lead to wrong seeding and partitioning of thin structures. If your data has long thin objects or objects connected by long thin “tentacles” and your boundary probability map is wide, we recommend turning this setting on. If, however, your foreground objects are roundish and the long thin areas between them belong to the background, don’t activate this setting. The images below attempt to illustrate the difference. The thin corridor inside the black rectangle is partitioned between its neighbors in the superpixel creation procedure (middle image). There is no way clustering of these superpixels will recover the corridor. On the right, however, the purple, yellow and grey superpixels in the black rectangle can be combined to reconstruct the corridor. The superpixels on the right, produced with this option “on”, are thus better than those in the middle, where it was not activated.

  8. Show Debug Layers – show intermediate layers of superpixel computation. This setting is useful if you want to understand the algorithm in detail, but it’s not necessary for casual use.
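
To connect these controls to concrete operations, here is a hedged sketch that extends the one above with the size filter, presmoothing and seed-grouping steps. Function and parameter names are ours, the defaults are arbitrary, and ilastik’s internal implementation may differ in detail.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.morphology import remove_small_objects
from skimage.segmentation import watershed

def dt_watershed_superpixels(pmap, threshold=0.5, min_boundary_size=10,
                             presmooth_sigma=3.0, min_superpixel_size=50):
    # 1./2. Input Channel and Threshold: `pmap` is the selected boundary
    # channel; binarize it.
    boundaries = pmap > threshold
    # 3. Min Boundary Size: drop tiny spurious boundary pieces.
    boundaries = remove_small_objects(boundaries, min_size=min_boundary_size)
    distance = ndi.distance_transform_edt(~boundaries)
    # 4. Presmooth before Seeds: a larger sigma merges nearby maxima,
    # giving fewer but bigger superpixels.
    smoothed = ndi.gaussian_filter(distance, sigma=presmooth_sigma)
    # 5. Seed Labeling ("Connected"): group touching local-maximum pixels
    # with ndi.label, whose default structuring element is the direct
    # neighborhood (4-neighborhood in 2D, 6-neighborhood in 3D).
    peak_mask = np.zeros(smoothed.shape, dtype=bool)
    peak_mask[tuple(peak_local_max(smoothed).T)] = True
    seeds, _ = ndi.label(peak_mask)
    labels = watershed(-smoothed, seeds)
    # 6. Min Superpixel Size: remove the seeds of undersized basins and
    # flood again, so their area is absorbed by neighboring superpixels.
    sizes = np.bincount(labels.ravel())
    seeds[sizes[labels] < min_superpixel_size] = 0
    return watershed(-smoothed, seeds)
```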

Edge training and multicut

Now that we have superpixels, we need to train the algorithm to decide which of them should be merged and which not. The general approach we use was first described in this publication. Briefly, given the superpixels computed in the previous step, we now compute features on the edges of adjacent superpixels. These features include, for example, the summed intensity of the edge and the minimal and maximal intensity along it, as well as statistics of the probability map and of the intensity inside the superpixels (the “Select Features” button brings up a dialog which lets you choose features). After the features are computed, we predict – for every edge independently – if this edge should be dropped or preserved to achieve a correct segmentation. The “naive” way to proceed would be to then only take the edges which are classified as “to preserve” and use those as the final segmentation. This, however, would lead to an inconsistent segmentation with dangling edges inside the objects. Instead, we formulate a so-called multicut problem, where special constraints ensure no dangling edges are present and all segmented objects are closed surfaces, while following the classifier preferences for which edges to keep. This problem is NP-hard in general, but for highly structured graphs, such as our superpixel region adjacency graphs, it can usually be solved fairly fast. Besides, excellent approximate solvers exist and are also accessible through this applet.
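
As a rough sketch of the per-edge feature and classification step (one toy feature only; ilastik computes a much richer set, and its internals differ), assuming `superpixels` and `pmap` as in the sketches above:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def mean_boundary_probability(superpixels, pmap):
    """One toy edge feature: mean boundary probability along each
    adjacency edge between two superpixels."""
    sums, counts = {}, {}
    for axis in range(superpixels.ndim):
        lo = tuple(slice(None, -1) if a == axis else slice(None)
                   for a in range(superpixels.ndim))
        hi = tuple(slice(1, None) if a == axis else slice(None)
                   for a in range(superpixels.ndim))
        u, v = superpixels[lo], superpixels[hi]
        p = 0.5 * (pmap[lo] + pmap[hi])
        mask = u != v  # pixel pairs straddling a superpixel edge
        for a, b, prob in zip(u[mask], v[mask], p[mask]):
            e = (min(a, b), max(a, b))
            sums[e] = sums.get(e, 0.0) + prob
            counts[e] = counts.get(e, 0) + 1
    edges = sorted(sums)
    X = np.array([[sums[e] / counts[e]] for e in edges])
    return edges, X

def train_edge_classifier(X, labeled_idx, y):
    """labeled_idx selects the user-labeled edges; y = 1 means preserve
    (red click), y = 0 means drop (green click). Returns per-edge
    probabilities of being a true boundary."""
    clf = RandomForestClassifier(n_estimators=100)
    clf.fit(X[labeled_idx], y)
    return clf.predict_proba(X)[:, 1]
```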

Training

If you already have a ground truth segmentation, you can load it in the “Input Data” applet; it will then be used as labels here when you click the “Auto-label” button. If not, you can label interactively, as described below.

You can do it in this applet by clicking on the edges between superpixels. A left click turns an edge green and marks it as one to be dropped; a right click turns an edge red and marks it as a true segmentation edge to be preserved. The initial view is shown on the left, while on the right you can see the same image with some edges labeled. To label multiple edges in one sweep, brush across them with either mouse button pressed.

As usual in ilastik, pressing “Live Predict” will show you the edge probabilities and update them live as you click more edges (left figure). The right figure above shows the naive segmentation obtained by simply preserving all red edges. Now let’s apply the multicut and get a consistent segmentation.

Multicut

Beta – this parameter controls the preference for under- or over-segmentation. With a lower beta, the algorithm merges more aggressively. The default of 0.5 should handle most common situations correctly.

Solver – the final segmentation is the result of solving an integer linear program (ILP). The available solvers are listed in this dropdown menu; the exact entries depend on whether certain outside libraries are installed on your machine. The Nifty_FmGreedy solver is the most basic of them and should be available in all ilastik installations. If you have CPLEX or Gurobi installed (see the installation instructions for more details), you will see other solvers as well. Intersection-based approximate solvers have always worked very well for us in practice. The optimality gap usually cannot be noticed in the final results, but just in case we also provide an exact solver which solves the problem to global optimality.
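
For the curious, one common form of the multicut objective from the correlation clustering literature (ilastik’s exact weighting may differ) shows how beta biases the result. With a binary cut indicator $y_e$ per edge and classifier probability $p_e$ that the edge is a true boundary, the problem reads

$$\min_{y \in \{0,1\}^E} \sum_{e \in E} w_e \, y_e, \qquad w_e = \log\frac{1 - p_e}{p_e} + \log\frac{1 - \beta}{\beta},$$

subject to the cycle constraints $y_e \le \sum_{e' \in C \setminus \{e\}} y_{e'}$ for every cycle $C$ and every edge $e \in C$. The constraints forbid a cycle from containing exactly one cut edge, which is what rules out dangling edges. With $\beta = 0.5$ the bias term vanishes; lowering $\beta$ makes every weight more positive, so cuts become more expensive and the solver merges more aggressively.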

On the left you see the results of the multicut; edges of the final segmentation are shown in blue. On the right, you see another interesting view into the segmentation algorithm – in cyan we show the edges where the results of the multicut disagree with the predictions of the edge classifier. If you press the “Live Multicut” button, the segmentation will be recomputed every time the edge probabilities change following your labels. To only update occasionally, press the “Update Now” button instead.

Once you are happy with the segmentation, export it and/or use the classifier you trained in the Batch Processing applet or in headless mode.