Deep learning for automated size and shape analysis of nanoparticles in scanning electron microscopy

https://doi.org/10.1039/D2RA07812K

“The automated analysis of nanoparticles, imaged by scanning electron microscopy, was implemented by a deep-learning (artificial intelligence) procedure based on convolutional neural networks (CNNs). It is possible to extract quantitative information on particle size distributions and particle shapes from pseudo-three-dimensional secondary electron micrographs (SE) as well as from two-dimensional scanning transmission electron micrographs (STEM). After separation of particles from the background (segmentation), the particles were cut out from the image to be classified by their shape (e.g. sphere or cube). The segmentation ability of STEM images was considerably enhanced by introducing distance- and intensity-based pixel weight loss maps. This forced the neural network to put emphasis on areas which separate adjacent particles. Partially covered particles were recognized by training and excluded from the analysis. The separation of overlapping particles, quality control procedures to exclude agglomerates, and the computation of quantitative particle size distribution data (equivalent particle diameter, Feret diameter, circularity) were included into the routine.”
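The distance-based pixel weight maps mentioned in the abstract follow the idea introduced with the original U-Net: background pixels lying in narrow gaps between adjacent particles are up-weighted in the loss, so the network concentrates on the ridges that separate touching particles. Below is a minimal, illustrative sketch of such a distance-based map; this is not the authors' code, the function name and the defaults `w0` and `sigma` are assumptions, and the paper's additional intensity-based component as well as the usual class-balancing term are omitted.

```python
import numpy as np
from scipy import ndimage

def distance_weight_map(labels, w0=10.0, sigma=5.0):
    """Distance-based pixel weight map in the spirit of the U-Net
    weighting scheme. Background pixels in narrow gaps between
    adjacent particles receive large weights.

    labels: 2D integer array, 0 = background, 1..N = particle labels.
    w0, sigma: illustrative defaults (taken from the U-Net paper).
    """
    particle_ids = np.unique(labels)
    particle_ids = particle_ids[particle_ids > 0]
    weights = np.ones(labels.shape, dtype=np.float32)
    if len(particle_ids) < 2:
        return weights  # no inter-particle gaps to emphasize

    # Distance from every pixel to each individual particle
    # (simple but memory-hungry; fine for a sketch).
    dists = np.stack([
        ndimage.distance_transform_edt(labels != pid)
        for pid in particle_ids
    ])
    dists.sort(axis=0)
    d1, d2 = dists[0], dists[1]  # nearest / second-nearest particle

    gap_term = w0 * np.exp(-((d1 + d2) ** 2) / (2.0 * sigma ** 2))
    weights[labels == 0] += gap_term[labels == 0]  # background only
    return weights
```

During training, such a map would be multiplied pixel-wise into the per-pixel loss (e.g. cross-entropy), so that mis-segmented gap pixels between adjacent particles are penalized more strongly than easy background pixels.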

SEM images are two-dimensional projections of particles that have a three-dimensional shape, so geometric information is inevitably lost. This is often tacitly ignored: circular particles are assumed to be spherical, square particles are assumed to be cubic, and so on. In fact, a circular outline may also belong to a disc-like particle, because particles usually settle on their largest face during sample preparation for SEM. Most discs will therefore appear lying on their circular face rather than standing on their edge. This is a fundamental problem that can only be addressed by recording SEM data from different viewing angles.

The segmentation training dataset consisted of 30 SE images and 12 STEM images; in addition, we used 32 SE images published by Ruehle et al. [21]. The validation datasets contained 16 SE images and 3 STEM images. These images had typical sizes of 2000 × 1600 pixels. In both image types, adjacent particles were typically separated by only 1 to 3 pixels, i.e. the particle density was high (as is common in scanning electron microscopy).

Because the input size of UNet++ was fixed to 512 × 512 pixels, we randomly cut patches out of the training images. For data augmentation, each image was artificially altered by random rotation, flipping, intensity variation, shearing, and zooming (each up to 15%) before the patches were cut out. The number of patches per image depended on the image size: larger images yielded more patches. Approximately 450 patches were cut out of the 30 SE images and approximately 180 patches out of the 12 STEM images. The random extraction of patches from each image was repeated in every epoch; an epoch was finished after every patch of every image had been processed once by the CNN. Fig. 1 illustrates this step.

Fig. 1 Representative SEM image from the training dataset for segmentation containing ZnO nanorods (2048 × 1886 pixels; SE mode). Orange boxes show typical cut-out patches used for training.
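A minimal sketch of this augment-then-crop step, using NumPy/SciPy, is shown below. The function names and the exact transform order are assumptions; the paper specifies only the transform types and the up-to-15% range, and the rotation range used here is likewise an assumption. For segmentation training, the same geometric transforms would also be applied to the ground-truth masks (with nearest-neighbour interpolation and without the intensity change).

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng()

def augment(image):
    """Randomly rotate, flip, shear, zoom and vary the intensity of a
    full training image (shear/zoom/intensity each up to 15%)."""
    if rng.random() < 0.5:
        image = np.fliplr(image)
    if rng.random() < 0.5:
        image = np.flipud(image)
    # Rotation range is an assumption; the paper does not state it.
    image = ndimage.rotate(image, rng.uniform(-180, 180),
                           reshape=False, mode="reflect")
    zoom = 1.0 + rng.uniform(-0.15, 0.15)
    shear = rng.uniform(-0.15, 0.15)
    matrix = np.array([[1.0 / zoom, shear],
                       [0.0, 1.0 / zoom]])  # combined zoom + shear
    image = ndimage.affine_transform(image, matrix, mode="reflect")
    return image * (1.0 + rng.uniform(-0.15, 0.15))  # intensity variation

def random_patches(image, n_patches, size=512):
    """Cut random size x size windows (the UNet++ input size) out of
    one augmented image; n_patches scales with the image area."""
    h, w = image.shape
    for _ in range(n_patches):
        y = rng.integers(0, h - size + 1)
        x = rng.integers(0, w - size + 1)
        yield image[y:y + size, x:x + size]
```

Re-running both functions at the start of every epoch reproduces the behaviour described above: each epoch sees a fresh random set of augmented 512 × 512 patches from every training image.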
