Technique tells machines which graphs can be understood by humans


To give computers some idea of where on the readability scale their chart output might sit, and how to improve it if need be, researchers at Columbia University and Tufts University have devised a technique called ‘pixel approximate entropy’.

“This is a brand new approach to working with line charts with many different potential applications,” said engineer Gabriel Ryan. “Our method gives visualisation systems a way to measure how difficult line charts are to read, so now we can design these systems to automatically simplify or summarise charts that would be hard to read on their own.”

It is particularly suited for identifying graphs where trends are buried in fast high-amplitude noise.
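The measure builds on the classic approximate-entropy (ApEn) statistic, which scores how irregular and hard to predict a sequence is; the paper applies it to pixel representations of line charts. As a rough illustration only, here is a minimal ApEn sketch over a raw 1-D series (the function name and parameters are our own, not from the paper):

```python
import math

def approx_entropy(series, m=2, r=0.2):
    """Approximate entropy (ApEn) of a 1-D sequence.
    Higher values indicate a noisier, harder-to-read signal."""
    n = len(series)

    def phi(m):
        # All length-m sliding windows of the series.
        windows = [series[i:i + m] for i in range(n - m + 1)]
        log_fracs = []
        for w1 in windows:
            # Fraction of windows within tolerance r (Chebyshev distance).
            c = sum(1 for w2 in windows
                    if max(abs(a - b) for a, b in zip(w1, w2)) <= r)
            log_fracs.append(math.log(c / len(windows)))
        return sum(log_fracs) / len(windows)

    # ApEn: drop in log-likelihood when extending matches from m to m+1.
    return phi(m) - phi(m + 1)
```

A perfectly regular series (e.g. an alternating 0/1 square wave) scores near zero, while a random series scores substantially higher, which is the intuition the chart-readability measure exploits.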

“For instance, in industrial control an operator may need to observe and react to trends in readouts from a variety of system monitors over time, such as at a chemical or power plant,” said Ryan. “A system that is aware of chart complexity could adapt readouts to ensure the operator can identify important trends and reduce fatigue from trying to interpret potentially noisy signals.”

The technique has been made open source, and the development team expects it to be useful to data scientists and engineers who are developing AI-driven data science systems.

Other scenarios that might benefit, said Columbia, are doctors reading EEGs in emergency rooms, first responders viewing the outputs of multiple sensors in a disaster zone, and brokers buying and selling.

Pixel approximate entropy will be presented at the IEEE VIS 2018 conference in Berlin on 25 October.

A pixel approximate entropy demonstration video is available.


Image: Example from the study, where users had to classify charts based on their shape.
Intuitively, the chart on the right is more difficult to read than the chart on the left; it would also receive a higher pixel approximate entropy score, allowing a visualisation programme to enhance important aspects of the data to make it easier to read.
