Finding efficient means of quantitatively describing material microstructure is a critical step towards harnessing data-centric machine learning approaches to understand and predict processing-microstructure-property relationships. Common quantitative descriptors of microstructure tend to capture only specific, narrow features such as grain size or phase fractions, discarding vast amounts of information. As machine learning and computer vision have gained traction, more abstract methods for describing image data concisely and quantitatively have become available, but they have yet to be fully exploited within materials science. We present results from investigations of several of these methods, focusing on Variational Autoencoders (VAEs) and Vision Transformers (ViTs) as tools for constructing compressed numerical descriptions of microstructural image data, referred to here as “microstructural fingerprints”. We quantify their potential for mining correlations between processing parameters and mechanical properties using image data from steel Jominy end-quench samples, which inherently provide natural variation in processing conditions and mechanical properties within a single sample.
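To make the fingerprinting idea concrete, the sketch below shows how a VAE encoder can compress a micrograph into a fixed-length latent vector. This is a minimal illustration only, not the authors' implementation: the PyTorch framework, layer sizes, input resolution, and the 32-dimensional latent space are all assumptions chosen for clarity.

```python
# Minimal sketch (illustrative assumptions throughout): a convolutional VAE
# encoder that maps a 1-channel 128x128 micrograph to a latent vector whose
# mean can serve as a compact "microstructural fingerprint".
import torch
import torch.nn as nn

class ConvVAEEncoder(nn.Module):
    def __init__(self, latent_dim: int = 32):
        super().__init__()
        # Progressively downsample the micrograph to a compact feature map.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=4, stride=2, padding=1),   # -> 64x64
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2, padding=1),  # -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1),  # -> 16x16
            nn.ReLU(),
            nn.Flatten(),
        )
        # The VAE encoder outputs the parameters of a latent Gaussian.
        self.fc_mu = nn.Linear(64 * 16 * 16, latent_dim)      # latent mean
        self.fc_logvar = nn.Linear(64 * 16 * 16, latent_dim)  # latent log-variance

    def forward(self, x: torch.Tensor):
        h = self.features(x)
        return self.fc_mu(h), self.fc_logvar(h)

if __name__ == "__main__":
    encoder = ConvVAEEncoder(latent_dim=32)
    micrograph = torch.rand(1, 1, 128, 128)  # placeholder image batch
    mu, logvar = encoder(micrograph)
    fingerprint = mu.detach()  # latent mean used as the fingerprint vector
    print(fingerprint.shape)   # torch.Size([1, 32])
```

In a setup like this, the latent mean vector provides the compressed numerical description: downstream analysis would correlate such vectors with position along the Jominy bar (and hence cooling rate and hardness) rather than with hand-picked metrics such as grain size.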