Image segmentation by neural networks trained on synthetic data

Claudia Redenbach (TU Kaiserslautern)

Neural networks have become common tools for image segmentation. Training a network requires a sufficient amount of training data, that is, images along with the desired segmentation result. Generating training data by manual annotation of images is still common practice. In the case of 3D image data as obtained by micro-computed tomography (µCT) or focused ion beam scanning electron microscopy (FIB-SEM) imaging, manual annotation is time-consuming and error-prone. We suggest performing the training on synthetic image data instead. For their simulation, virtual microstructures are generated as realizations of suitable models from stochastic geometry. In a second step, a model of the imaging process is applied to simulate realistic images of the synthetic structures. Additionally, the structures are discretized into binary images to obtain a ground truth for the segmentation. The resulting pairs are then used to train the neural network. We present two examples of application: segmentation of cracks in µCT images of concrete, and segmentation of FIB-SEM images of porous structures.
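The two-step pipeline described above (synthetic structure generation followed by a simulated imaging process) can be sketched in a few lines of Python. This is a minimal toy illustration, not the authors' actual models: the 2D Boolean model of random discs, the grey values, and the blur-plus-noise imaging model are all assumptions chosen for simplicity.

```python
# Toy sketch of generating one synthetic training pair:
# a binary ground-truth image plus a simulated "measured" image.
# The Boolean model of discs and the Gaussian blur + noise imaging
# model are illustrative assumptions, not the method from the abstract.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

def boolean_model(size=128, n_grains=30, r_min=4, r_max=12):
    """Binary ground truth: union of random discs (a 2D Boolean model)."""
    yy, xx = np.mgrid[0:size, 0:size]
    img = np.zeros((size, size), dtype=bool)
    for _ in range(n_grains):
        cx, cy = rng.uniform(0, size, 2)       # random germ (disc centre)
        r = rng.uniform(r_min, r_max)          # random grain radius
        img |= (xx - cx) ** 2 + (yy - cy) ** 2 <= r ** 2
    return img

def simulate_imaging(gt, blur_sigma=1.5, noise_sigma=0.1):
    """Toy imaging model: two-phase contrast, blur, additive noise."""
    img = np.where(gt, 0.8, 0.2)               # grey values per phase
    img = gaussian_filter(img, blur_sigma)     # partial-volume blur
    img += rng.normal(0.0, noise_sigma, img.shape)
    return img

gt = boolean_model()            # ground truth for the segmentation
measured = simulate_imaging(gt) # realistic-looking input image
# (measured, gt) is one training pair for a segmentation network
```

In practice the same idea extends to 3D volumes and to structure models fitted to real µCT or FIB-SEM data; the key point is that the ground truth comes for free from the simulation.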