{"id":304,"date":"2021-07-07T00:01:18","date_gmt":"2021-07-06T17:01:18","guid":{"rendered":"https:\/\/lhkbob.com\/blog\/?page_id=304"},"modified":"2021-07-07T00:01:18","modified_gmt":"2021-07-06T17:01:18","slug":"a-computational-framework-for-predicting-appearance-differences","status":"publish","type":"page","link":"https:\/\/lhkbob.com\/blog\/publications\/a-computational-framework-for-predicting-appearance-differences\/","title":{"rendered":"A Computational Framework for Predicting Appearance Differences"},"content":{"rendered":"\n<figure class=\"wp-block-image\"><img decoding=\"async\" loading=\"lazy\" width=\"1024\" height=\"234\" src=\"https:\/\/lhkbob.com\/blog\/wp-content\/uploads\/2021\/07\/Screen-Shot-2021-07-06-at-12.55.54-PM-1024x234.png\" alt=\"\" class=\"wp-image-305\" srcset=\"https:\/\/lhkbob.com\/blog\/wp-content\/uploads\/2021\/07\/Screen-Shot-2021-07-06-at-12.55.54-PM-1024x234.png 1024w, https:\/\/lhkbob.com\/blog\/wp-content\/uploads\/2021\/07\/Screen-Shot-2021-07-06-at-12.55.54-PM-300x68.png 300w, https:\/\/lhkbob.com\/blog\/wp-content\/uploads\/2021\/07\/Screen-Shot-2021-07-06-at-12.55.54-PM-768x175.png 768w, https:\/\/lhkbob.com\/blog\/wp-content\/uploads\/2021\/07\/Screen-Shot-2021-07-06-at-12.55.54-PM.png 1798w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<p><strong>Michael Ludwig<\/strong>, Professor Gary Meyer, July 2018<\/p>\n\n\n\n<p>A dissertation submitted to the faculty of the University of Minnesota, in partial fulfillment of the requirements for the degree of doctor of philosophy.<\/p>\n\n\n\n<ul><li><a href=\"https:\/\/hdl.handle.net\/11299\/206226\">Official Version<\/a><\/li><li><a href=\"https:\/\/lhkbob.com\/static\/ludwig-2018-dissertation.pdf\">Author Version (hi-res, 319MB)<\/a><\/li><li><a href=\"https:\/\/lhkbob.com\/static\/ludwig-2018-dissertation-lowres.pdf\">Author Version (low-res, 20MB)<\/a><\/li><\/ul>\n\n\n\n<h3>Abstract<\/h3>\n\n\n\n<p>Quantifying the perceived difference in appearance 
between two surfaces is an important industrial problem that currently is solved through visual inspection. The field of\ndesign has always employed trained experts to manually compare appearances to verify\nmanufacturing quality or match design intent. More recently, the advancement of 3D\nprinting is being held back by an inability to evaluate appearance tolerances. Much like\ncolor science greatly accelerated the design of conventional printers, a computational\nsolution to the appearance difference problem would aid the development of advanced\n3D printing technology. Past research has produced analytical expressions for restricted\nversions of the problem by focusing on a single attribute like color or by requiring homogeneous materials. But the prediction of spatially-varying appearance differences is a far\nmore difficult problem because the domain is highly multi-dimensional.\n<\/p>\n\n\n\n<p>This dissertation develops a computational framework for solving the general form\nof the appearance comparison problem. To begin, a method-of-adjustment task is used to\nmeasure the effects of surface structure on the overall perceived brightness of a material.\nIn the case considered, the spatial variations of an appearance are limited to shading\nand highlights produced by height changes across its surface. All stimuli are rendered\nusing computer graphics techniques in order to be viewed virtually, thus increasing the\nnumber of appearances evaluated per subject. Results suggest that an image-space model\nof brightness is an accurate approximation, justifying the later image-based models that\naddress more general appearance evaluations.\n<\/p>\n\n\n\n<p>Next, a visual search study is performed to measure the perceived uniformity of 3D\nprinted materials. 
This study creates a large dataset of realistic materials by using state-of-the-art material scanners to digitize numerous tiles 3D printed with spatially-varying patterns in height, color, and shininess. After scanning, additional appearances\nare created by modifying the reflectance descriptions of the tiles to produce variations\nthat cannot yet be physically manufactured with the same level of control. The visual\nsearch task is shown to efficiently measure changes in appearance uniformity resulting\nfrom these modifications.\n<\/p>\n\n\n\n<p>A follow-up experiment augments the collected uniformity measurements from the visual search study. A forced-choice task measures the rate of change between two appearances by interpolating along curves defined in the high-dimensional appearance space. Repeated comparisons are controlled by a Bayesian process to efficiently find the just noticeable difference thresholds between appearances. Gradients reconstructed from the measured thresholds are used to estimate perceived distances between very similar appearances, something hard to measure directly with human subjects. A neural network model is then trained to accurately predict uniformity from features extracted from the non-uniform appearance and target uniform appearance images. <\/p>\n\n\n\n<p>Finally, the computational framework for predicting general appearance differences\nis fully developed. Relying on the previously generated 3D printed appearances, a crowd-sourced ranking task is used to simultaneously measure the relative similarities of multiple stimuli against a reference appearance. Crowd-sourcing the perceptual data collection\nallows the many complex interactions between bumpiness, color, glossiness, and pattern\nto be evaluated efficiently. Generalized non-metric multidimensional scaling is used to\nestimate a metric embedding that respects the collected appearance rankings. 
The embedding is sampled and used to train a deep convolutional neural network to predict the\nperceived distance between two appearance images. While the learned model and experiments focus on 3D printed materials, the presented approaches can apply to arbitrary\nmaterial classes. The success of this computational approach creates a promising path for\nfuture work in quantifying appearance differences.\n<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Michael Ludwig, Professor Gary Meyer, July 2018 A dissertation submitted to the faculty of the University of Minnesota, in partial fulfillment of the requirements for&#8230;<\/p>\n","protected":false},"author":1,"featured_media":0,"parent":191,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0},"_links":{"self":[{"href":"https:\/\/lhkbob.com\/blog\/wp-json\/wp\/v2\/pages\/304"}],"collection":[{"href":"https:\/\/lhkbob.com\/blog\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/lhkbob.com\/blog\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/lhkbob.com\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/lhkbob.com\/blog\/wp-json\/wp\/v2\/comments?post=304"}],"version-history":[{"count":4,"href":"https:\/\/lhkbob.com\/blog\/wp-json\/wp\/v2\/pages\/304\/revisions"}],"predecessor-version":[{"id":309,"href":"https:\/\/lhkbob.com\/blog\/wp-json\/wp\/v2\/pages\/304\/revisions\/309"}],"up":[{"embeddable":true,"href":"https:\/\/lhkbob.com\/blog\/wp-json\/wp\/v2\/pages\/191"}],"wp:attachment":[{"href":"https:\/\/lhkbob.com\/blog\/wp-json\/wp\/v2\/media?parent=304"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}