Training time depends on the number of training samples (e.g. annotated pixels). Adding many very similar-looking pixels will not improve the classifier, but will increase training time considerably. It is good practice to start with few annotations, switch to live update, and correct the classifier where it is wrong.
Lazy access (and parallelization) requires file formats that store volumes in chunks (square tiles or blocks).
File formats that allow efficient reading of sub-volumes will perform better.
In ilastik we support .h5 (HDF5) for small/medium data, and .n5 for large data.
How to convert your data? Use our Fiji plugin (this can be done efficiently in a macro), or convert from Python, e.g. in a Jupyter notebook.
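As an illustration of the Python route, the following sketch uses the h5py library to write a NumPy volume to a chunked .h5 file and read back a single sub-block. File and dataset names here are just examples, not ilastik requirements:

```python
# Sketch: converting an in-memory volume to chunked HDF5 (names are illustrative).
import numpy as np
import h5py

volume = np.random.randint(0, 255, size=(256, 256, 256), dtype=np.uint8)

# Store the volume in 64^3 chunks so sub-volumes can be read lazily.
with h5py.File("volume.h5", "w") as f:
    f.create_dataset("data", data=volume, chunks=(64, 64, 64),
                     compression="gzip")

# Reading a sub-block only touches the chunks that overlap it,
# so the whole volume never has to be loaded into memory.
with h5py.File("volume.h5", "r") as f:
    block = f["data"][0:64, 0:64, 0:64]

print(block.shape)
```

Chunked storage is exactly what makes block-wise, parallel processing possible: each worker can read just the blocks it needs.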
When exporting Probabilities, go to Export Image Settings, tick Convert to Data Type (choose integer 8-bit), as well as Renormalize [min, max] (from 0...1.0 to 0...255).
Intuitively, we think of probabilities as values between 0 and 1, or maybe we multiply those values by 100 to obtain percentages.
For computations, these fractions are represented as 32-bit floating-point values.
The output of the random forest classifier is not continuous between 0 and 1, however.
It can only take discrete values corresponding to integer percentages: 0.00 (0%), 0.01 (1%), 0.02 (2%), etc., but not, for example, 0.015 (1.5%) or anything else that would correspond to a fractional percentage.
This means the values can be converted to 8-bit integers without losing information.
Working with 8-bit integers instead of 32-bit floating point numbers is faster, and the resulting exported files are smaller to store.
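A short NumPy sketch (with illustrative values) shows why the conversion preserves the information: probabilities that are multiples of 0.01 map to distinct 8-bit values, so the original percentages can be recovered:

```python
import numpy as np

# Random forest probabilities are multiples of 0.01 (integer percentages).
probs = np.arange(0, 101) / 100.0          # 0.00, 0.01, ..., 1.00

# Renormalize [0, 1] -> [0, 255] and convert to 8-bit integers,
# mirroring the Convert to Data Type / Renormalize export settings.
as_uint8 = np.round(probs * 255).astype(np.uint8)

# Each percentage gets its own 8-bit value, so the mapping is invertible:
recovered = np.round(as_uint8 / 255.0, 2)  # back to two-decimal fractions
print(np.allclose(recovered, probs))
```

Since consecutive percentages are 2.55 apart on the 0...255 scale, no two of them collapse onto the same 8-bit value.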
Computations in ilastik are done in parallel whenever possible. Having a CPU with multiple cores will result in faster performance.
Block-wise computations become more efficient with increasing block size. Having more RAM available means ilastik can work more efficiently. 3D data will in general require more RAM; e.g., we would not recommend attempting to process 3D data in the Autocontext workflow with less than 32 GB of RAM.
Currently only workflows that use deep neural networks (Neural Network Workflow, Trainable Domain Adaptation) support doing calculations on a GPU.
If you have an NVIDIA graphics card, download and install the -gpu builds from our download page to gain vastly improved performance in these workflows.
Other workflows, like Pixel or Object Classification, do not use the GPU for calculations.
Apple Silicon Hardware is fully supported in the latest beta release.