The power of visual impact

Image: The new approach determines a user’s response to an image or scene in real time based on eye movements, particularly saccades, the super-fast eye movements that dart between points before fixating on a target.

Credit: ACM SIGGRAPH

What motivates or drives the human eye to focus on a target, and how is that visual image then perceived? What is the lag between our visual acuity and our response to an observation? In the burgeoning fields of immersive virtual reality (VR) and augmented reality (AR), connecting the dots in real time between eye movement, visual targets, and decision-making is the driving force behind a new computational model developed by a team of computer scientists at New York University, Princeton University, and NVIDIA.

The new approach determines a user’s response to an image or scene in real time based on eye movements, particularly saccades, the super-fast movements of the eye that dart between points before fixating on an image or object. Saccades allow frequent shifts of attention that help us understand our surroundings and locate objects of interest. Understanding the mechanism and behavior of saccades is fundamental to understanding human performance in visual environments, an exciting area of research in computer graphics.
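In eye-tracking practice, saccades are typically identified by their unusually high angular velocity relative to fixations. As a rough illustration only, the sketch below applies a simple velocity-threshold classifier to hypothetical gaze samples; the function name, the 30 deg/s cutoff, and the data layout are illustrative assumptions, not details from the paper.

```python
import numpy as np

def detect_saccades(gaze_deg: np.ndarray, timestamps_s: np.ndarray,
                    velocity_threshold: float = 30.0) -> np.ndarray:
    """Mark eye-tracker samples that belong to a saccade.

    gaze_deg: (N, 2) gaze angles in degrees; timestamps_s: (N,) seconds.
    Uses a velocity-threshold (I-VT) rule: samples moving faster than
    `velocity_threshold` deg/s are classified as saccadic.
    """
    dt = np.diff(timestamps_s)                                # seconds between samples
    step = np.linalg.norm(np.diff(gaze_deg, axis=0), axis=1)  # degrees moved per step
    velocity = step / dt                                      # angular velocity, deg/s
    # The first sample has no preceding velocity estimate; treat it as fixation.
    return np.concatenate([[False], velocity > velocity_threshold])
```

A span of consecutive `True` samples then corresponds to one saccade, and the time from stimulus onset to the first such sample gives the kind of saccade latency the researchers model.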

The researchers will present their new work, titled “Image Features Influence Reaction Time: A Learned Probabilistic Perceptual Model for Saccade Latency,” at SIGGRAPH 2022, August 8-11 in Vancouver, BC, Canada. The annual conference, held both in person and virtually this year, will spotlight the world’s leading professionals, academics, and creative minds at the forefront of computer graphics and interactive techniques.

“Recently, there has been extensive research on measuring the perceived visual quality for humans, especially for VR/AR displays,” says the paper’s senior author Qi Sun, PhD, assistant professor of computer science and engineering at New York University’s Tandon School of Engineering.

“However, we have yet to explore how displayed content can affect our behavior, even perceivably, and how we might leverage these displays to push the limits of our performance beyond what is otherwise possible.”

Inspired by how the human brain transmits data and makes decisions, the researchers implement a neurologically inspired probabilistic model that mimics the accumulation of “cognitive confidence” that leads to a human decision and action. They performed a psychophysical experiment with parameterized stimuli to observe and measure the correlation between image features and the processing time required to initiate a saccade, and whether and how that correlation differs from the one governing visual acuity.
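Evidence-accumulation models of this kind are often formalized as drift-diffusion processes: an internal confidence variable drifts toward a decision threshold at a rate set by the stimulus, perturbed by noise, and the saccade fires when the threshold is crossed. The sketch below is a minimal simulation of that idea; the parameter names and values (drift rate, noise level, threshold) are illustrative assumptions, not the paper’s fitted model.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_saccade_latency(drift_rate: float, noise_sd: float = 1.0,
                             threshold: float = 10.0, dt: float = 0.001,
                             max_time: float = 1.0) -> float:
    """Return one simulated saccade latency in seconds.

    Cognitive confidence accumulates at `drift_rate` per second (stronger
    image features -> faster drift) plus Gaussian noise; the saccade is
    triggered when confidence reaches `threshold`. Returns `max_time` if
    the threshold is never crossed within the trial.
    """
    confidence, t = 0.0, 0.0
    while t < max_time:
        confidence += drift_rate * dt + noise_sd * np.sqrt(dt) * rng.standard_normal()
        t += dt
        if confidence >= threshold:
            return t
    return max_time

# Repeating the simulation yields a latency distribution; in this framing,
# image features that speed up the drift shift the whole distribution earlier.
latencies = [simulate_saccade_latency(drift_rate=60.0) for _ in range(1000)]
print(f"median simulated latency: {np.median(latencies) * 1000:.0f} ms")
```

Because the model is probabilistic, it predicts a distribution of latencies for a given stimulus rather than a single number, which is what allows it to be compared against the spread of measured trials.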

They validate the model with data from more than 10,000 user-experiment trials on an eye-tracked VR display, to understand and articulate the correlation between visual content and the speed of image-based decision-making. The results show that the new model’s predictions accurately reflect human behavior in the real world.

The proposed model can serve as a metric for predicting and altering users’ eye-to-image response times in interactive computer graphics applications, and can also help improve the design of VR experiences and player performance in esports. In other industries, such as healthcare and automotive, the new model could help estimate a doctor’s or driver’s ability to react quickly to emergencies. In esports, it could be used to ensure fair competition between players or to better understand how to maximize a player’s performance when reaction times come down to milliseconds.

In future work, the team plans to explore cross-modal effects, such as combined visual and audio cues, that jointly influence our cognition in scenarios such as driving. They are also interested in extending the work to better understand and represent how visual content influences the accuracy of human actions.

The paper’s authors, Budmonde Duinkharjav (NYU), Praneeth Chakravarthula (Princeton), Rachel Brown (NVIDIA), Anjul Patney (NVIDIA), and Qi Sun (NYU), are set to demonstrate their new method on August 11 at SIGGRAPH as part of the Roundtable Session: Perception. You can find the paper here.

About ACM SIGGRAPH
ACM SIGGRAPH is an international community of researchers, artists, developers, filmmakers, scientists, and business professionals with a common interest in computer graphics and interactive techniques. A special interest group of the Association for Computing Machinery (ACM), the world’s first and largest computing society, our mission is to nurture, support, and connect like-minded researchers and practitioners to accelerate innovation in computer graphics and interactive techniques.

