Description
Daniela Ushizima (1,2), Silvia Miramontes-Lizarraga (1,3), Michael Macneil (1), Dilworth Parkinson (1)

1. Lawrence Berkeley National Laboratory, Berkeley, California, United States
2. BIDS, University of California, Berkeley, Berkeley, California, United States
3. Applied Math, University of California, Berkeley, Berkeley, California, United States

Increasing X-ray brilliance and ever-faster snapshot acquisition, combined with advances in machine learning, create new opportunities to streamline the characterization of materials structures as part of the design of new compounds. From industry to national laboratories, X-ray imaging has become fundamental for measuring the function and resilience of new materials and for probing their dynamic properties. However, analyzing these rich datasets at scale requires further research in automation that combines computational and experimental methods.

A major challenge is to couple increasingly high-data-rate experiments to new data science algorithms in support of quantitative image analysis that can automatically drive scientific discovery. Our efforts in deep learning applied to image representation and structural fingerprints have made sample sorting and ranking possible, allowing automated identification of special materials configurations from databases of millions of entries. These complex networks recognize events in data gathered in two regimes: experimental and simulated. While such methods successfully bypass hand-engineered features, their extension to three-dimensional imagery seldom meets standards comparable to manual curation. Additionally, labeling large 3D datasets is practically impossible.
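The sort-and-rank idea can be sketched with a minimal, hypothetical stand-in for a learned fingerprint: here a simple intensity histogram replaces the deep network's embedding, but any function mapping a volume to a fixed-length vector supports the same similarity-ranking machinery. This is an illustration only, not the authors' network or pipeline.

```python
import numpy as np

def fingerprint(volume, bins=16):
    """Map a 3D volume to a fixed-length, unit-norm feature vector.

    A stand-in for a learned embedding: an intensity histogram is used
    here purely for illustration.
    """
    hist, _ = np.histogram(volume, bins=bins, range=(0.0, 1.0), density=True)
    return hist / (np.linalg.norm(hist) + 1e-12)

def rank_by_similarity(query, database):
    """Return database indices sorted by cosine similarity to the query."""
    q = fingerprint(query)
    scores = [float(fingerprint(v) @ q) for v in database]
    return np.argsort(scores)[::-1]  # most similar first

# Tiny synthetic "database" of volumes; one has a distinct intensity profile.
rng = np.random.default_rng(0)
db = [rng.random((32, 32, 32)) for _ in range(5)]
db[2] = db[2] ** 3
order = rank_by_similarity(db[2], db)
# order[0] == 2: the query volume ranks itself first
```

With a real embedding network swapped in for `fingerprint`, the same ranking loop scales to million-entry databases via approximate nearest-neighbor indices.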

For example, the inspection of material deformation using X-ray attenuation contrast data from microtomography often generates volumes of 2,000^3 voxels. Creating millions of labeled volumes would therefore mean manually handling eight billion voxels per time step for a single experimental setting. Our research efforts consequently also include the creation of next-generation curation tools based on advanced computer vision algorithms addressing fundamental problems, such as multiresolution algorithms for image segmentation (e.g., graph-based classification and convolutional neural networks), stereological analysis, and enumeration of particles within microtomography imagery.
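Particle enumeration, one of the tasks named above, reduces at its simplest to thresholding and 3D connected-component labeling. A minimal sketch on a small synthetic volume (the real data would be 2,000^3 voxels; the threshold and connectivity here are illustrative assumptions, not the authors' settings):

```python
import numpy as np
from scipy import ndimage

# Synthetic stand-in for a microtomography volume: two bright "particles"
# embedded in a dark matrix, plus a little attenuation noise.
vol = np.zeros((64, 64, 64), dtype=np.float32)
vol[10:20, 10:20, 10:20] = 1.0
vol[40:50, 40:50, 40:50] = 1.0
vol += 0.05 * np.random.default_rng(0).random(vol.shape)

# Threshold, then enumerate connected components (26-connectivity).
mask = vol > 0.5
labels, n_particles = ndimage.label(mask, structure=np.ones((3, 3, 3)))

# Per-particle voxel counts (skip label 0, the background).
sizes = np.bincount(labels.ravel())[1:]
# n_particles == 2; each synthetic particle occupies 10*10*10 = 1000 voxels
```

On real data the fixed threshold would be replaced by the segmentation output of the graph-based or CNN classifiers mentioned above; the enumeration step stays the same.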

The contributions of our team include: (a) the development of numerical schemes to analyze data that stem from physical experiments; (b) the construction of new software tools to empower materials scientists and constrain parameter space, particularly given prior knowledge from experimental settings; and (c) the reproducibility of experiments by recognizing the importance of open-source codes and availability of benchmark datasets of scientific images coming from advanced instruments.

This talk will present computational tools for recognizing patterns in scientific images, coming both from synchrotron-based X-ray instruments and from simulations run as HPC codes. It will include scripts for visual analysis and for interacting with extracted 3D geometries, to be shared with the audience and illustrated on scientific imagery from open-data projects. Use cases will demonstrate our advances in inspecting hierarchical materials that consist of many individual strands bundled within a matrix to achieve high mechanical strength and durability.
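Extracting a 3D geometry from a segmented volume can be sketched without any specialized library: the surface of a region is the set of its voxels whose 6-neighborhood leaves the region, yielding a point cloud ready for a 3D viewer. This is a simplified illustration (the shared scripts would typically build proper meshes, e.g. via marching cubes), with a voxelized sphere standing in for a segmented strand:

```python
import numpy as np

# Synthetic segmented region: a voxelized sphere, a stand-in for one
# strand segmented out of a tomography volume.
n = 48
z, y, x = np.mgrid[:n, :n, :n]
mask = (x - n / 2) ** 2 + (y - n / 2) ** 2 + (z - n / 2) ** 2 < (n / 4) ** 2

def surface_voxels(mask):
    """Return the (N, 3) coordinates of boundary voxels: voxels in the
    region with at least one 6-neighbor outside it."""
    interior = mask.copy()
    for axis in range(3):
        for shift in (1, -1):
            interior &= np.roll(mask, shift, axis=axis)
    return np.argwhere(mask & ~interior)

pts = surface_voxels(mask)
# pts is an (N, 3) point cloud of the surface, ready for 3D plotting
# or export to a point-cloud/mesh viewer.
```

Note the `np.roll` wrap-around is harmless here because the object does not touch the volume boundary; production code would pad the mask first.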
