A Computational Framework for Predicting Appearance Differences

Michael Ludwig, Professor Gary Meyer, July 2018

A dissertation submitted to the faculty of the University of Minnesota in partial fulfillment of the requirements for the degree of Doctor of Philosophy.

Abstract

Quantifying the perceived difference in appearance between two surfaces is an important industrial problem that is currently solved through visual inspection. The field of design has always employed trained experts to manually compare appearances to verify manufacturing quality or match design intent. More recently, the advancement of 3D printing is being held back by an inability to evaluate appearance tolerances. Much like color science greatly accelerated the design of conventional printers, a computational solution to the appearance difference problem would aid the development of advanced 3D printing technology. Past research has produced analytical expressions for restricted versions of the problem by focusing on a single attribute like color or by requiring homogeneous materials. But the prediction of spatially-varying appearance differences is a far more difficult problem because the domain is highly multi-dimensional.

This dissertation develops a computational framework for solving the general form of the appearance comparison problem. To begin, a method-of-adjustment task is used to measure the effects of surface structure on the overall perceived brightness of a material. In the case considered, the spatial variations of an appearance are limited to shading and highlights produced by height changes across its surface. All stimuli are rendered using computer graphics techniques in order to be viewed virtually, thus increasing the number of appearances evaluated per subject. Results suggest that an image-space model of brightness is an accurate approximation, justifying the later image-based models that address more general appearance evaluations.

Next, a visual search study is performed to measure the perceived uniformity of 3D printed materials. This study creates a large dataset of realistic materials by using state-of-the-art material scanners to digitize numerous tiles 3D printed with spatially-varying patterns in height, color, and shininess. After scanning, additional appearances are created by modifying the reflectance descriptions of the tiles to produce variations that cannot yet be physically manufactured with the same level of control. The visual search task is shown to efficiently measure changes in appearance uniformity resulting from these modifications.
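(Illustrative sketch, not from the dissertation: search efficiency in tasks of this kind is commonly summarized as the slope of response time against display set size, with a shallow slope indicating that a non-uniformity "pops out". The per-trial numbers below are hypothetical.)

    import numpy as np

    def search_slope(set_sizes, reaction_times):
        """Least-squares slope of reaction time vs. set size (ms per item).

        A shallow slope means the target pops out against a uniform
        background; a steep slope means the non-uniformity forces
        item-by-item inspection of the display.
        """
        x = np.asarray(set_sizes, dtype=float)
        y = np.asarray(reaction_times, dtype=float)
        slope, intercept = np.polyfit(x, y, deg=1)
        return slope, intercept

    # Hypothetical trials: display set size and response time in ms.
    sizes = [4, 4, 8, 8, 12, 12, 16, 16]
    rts = [520, 540, 610, 590, 700, 680, 760, 780]
    slope, _ = search_slope(sizes, rts)
    print("search slope: %.1f ms/item" % slope)  # ~20 ms/item here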

A follow-up experiment augments the collected uniformity measurements from the visual search study. A forced-choice task measures the rate of change between two appearances by interpolating along curves defined in the high-dimensional appearance space. Repeated comparisons are controlled by a Bayesian process to efficiently find the just noticeable difference thresholds between appearances. Gradients reconstructed from the measured thresholds are used to estimate perceived distances between very similar appearances, something hard to measure directly with human subjects. A neural network model is then trained to accurately predict uniformity from features extracted from the non-uniform appearance and target uniform appearance images.
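(Illustrative sketch: one common way to realize such a Bayesian procedure is a grid posterior over the threshold of an assumed psychometric function, updated after every forced-choice response and used to pick the next comparison level. The dissertation's exact procedure may differ; all parameters and the run_trial hook below are hypothetical.)

    import numpy as np

    def p_correct(x, thresh, slope=0.1, guess=0.5):
        """Logistic psychometric function for a 2AFC task: performance
        starts at chance `guess` and rises toward 1 as the physical
        difference x exceeds the threshold `thresh`."""
        return guess + (1.0 - guess) / (1.0 + np.exp(-(x - thresh) / slope))

    # Grid of candidate JND thresholds with a uniform prior.
    thresholds = np.linspace(0.0, 1.0, 201)
    posterior = np.ones_like(thresholds) / thresholds.size

    def next_level():
        """Test at the posterior-mean threshold, where the next
        response is expected to be most informative."""
        return float(np.sum(posterior * thresholds))

    def update(x, correct):
        """Bayes update of the threshold posterior after one trial."""
        global posterior
        likelihood = p_correct(x, thresholds)
        if not correct:
            likelihood = 1.0 - likelihood
        posterior = posterior * likelihood
        posterior /= posterior.sum()

    # Hypothetical loop: run_trial(x) would present two appearances
    # separated by difference x and return whether the observer was correct.
    # for _ in range(40):
    #     x = next_level()
    #     update(x, run_trial(x))
    # jnd_estimate = next_level()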

Finally, the computational framework for predicting general appearance differences is fully developed. Relying on the previously generated 3D printed appearances, a crowd-sourced ranking task is used to simultaneously measure the relative similarities of multiple stimuli against a reference appearance. Crowd-sourcing the perceptual data collection allows the many complex interactions between bumpiness, color, glossiness, and pattern to be evaluated efficiently. Generalized non-metric multidimensional scaling is used to estimate a metric embedding that respects the collected appearance rankings. The embedding is sampled and used to train a deep convolutional neural network to predict the perceived distance between two appearance images. While the learned model and experiments focus on 3D printed materials, the presented approaches can apply to arbitrary material classes. The success of this computational approach creates a promising path for future work in quantifying appearance differences.
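(Illustrative sketch: generalized non-metric MDS can be posed as finding embedding coordinates that satisfy the collected rankings up to a margin. The hinge-loss gradient descent and the triplet data below are a minimal stand-in, not the solver or data actually used.)

    import numpy as np

    def gnmds_embed(n_items, triplets, dim=2, margin=1.0,
                    lr=0.05, epochs=500, seed=0):
        """Embed items so that for each ranking triplet (r, i, j),
        item i lands closer to reference r than item j does, via
        gradient descent on a hinge loss over squared distances."""
        rng = np.random.default_rng(seed)
        X = rng.normal(scale=0.1, size=(n_items, dim))
        for _ in range(epochs):
            grad = np.zeros_like(X)
            for r, i, j in triplets:
                d_ri = X[r] - X[i]
                d_rj = X[r] - X[j]
                # Hinge: penalize when i is not closer than j by `margin`.
                if margin + d_ri @ d_ri - d_rj @ d_rj > 0.0:
                    grad[r] += 2.0 * (d_ri - d_rj)
                    grad[i] -= 2.0 * d_ri
                    grad[j] += 2.0 * d_rj
            X -= lr * grad / max(len(triplets), 1)
        return X

    # Hypothetical rankings: (reference, judged-closer, judged-farther).
    triplets = [(0, 1, 2), (0, 1, 3), (1, 0, 3), (2, 3, 0)]
    X = gnmds_embed(n_items=4, triplets=triplets)

Distances between points in such an embedding supply the training targets for the convolutional network, which learns to reproduce them directly from pairs of appearance images.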