MOTO’s simple touchscreen test, round 2

MOTO has done another benchmarking test comparing mobile touchscreens: Robot Touchscreen Analysis. (I wrote about the previous one here.) This time they've used a robot-controlled (simulated) finger instead of a human one. The test involves drawing diagonal lines and measuring how linear each screen's reported touch path is.
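To make "linearity" concrete, here's a minimal sketch of the kind of metric such a test might compute. The scoring method (RMS perpendicular deviation from a best-fit line) and the function name are my assumptions for illustration; MOTO's actual scoring may differ.

```python
import numpy as np

def linearity_error(points: np.ndarray) -> float:
    """RMS perpendicular distance of touch samples from their best-fit
    line, in the same units as the input (e.g. mm)."""
    centered = points - points.mean(axis=0)
    # SVD of the centered samples: the first right singular vector is
    # the stroke's principal direction, the second is the line's normal.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    deviations = centered @ vt[1]
    return float(np.sqrt(np.mean(deviations ** 2)))

# Example: a perfect 50 mm diagonal stroke plus simulated sensor wobble.
t = np.linspace(0, 50, 200)
ideal = np.stack([t, t], axis=1)
wobble = np.random.default_rng(0).normal(0, 0.3, ideal.shape)
print(f"linearity error: {linearity_error(ideal + wobble):.2f} mm")
```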

I think it's great to get data, but these results are being overhyped in my opinion. (And — disclosure! — I work at Synaptics, though I don't speak for Synaptics here.) This test measures one performance characteristic but misses others.

The link between performance on this test and actual performance on user tasks is difficult to see. If the user is drawing lines, then linearity matters, but what if they are just pushing buttons? This test doesn't tell us whether the accuracy of simple touch-and-lift contact varies the same way.

Latency and system speed make a huge difference in touch interface performance, and this test doesn't really control for either. Similarly, all the host processing and filtering of touch data can produce different performance from the same sensor. You could easily score better on this test, for instance, by adding a smoothing filter; since the test doesn't look at latency, it misses the downside of smoothing entirely. The sketch below illustrates the trade-off.
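Here's a toy illustration of that trade-off, using an exponential moving average as a stand-in for whatever filtering a host might actually apply. The filter choice, the alpha value, and the noise model are all my assumptions, not any vendor's real pipeline:

```python
import numpy as np

def ema(samples: np.ndarray, alpha: float) -> np.ndarray:
    """First-order IIR smoother: smaller alpha means smoother output
    but more lag behind the input."""
    out = np.empty_like(samples)
    out[0] = samples[0]
    for i in range(1, len(samples)):
        out[i] = alpha * samples[i] + (1 - alpha) * out[i - 1]
    return out

# A finger moving along a perfect diagonal, reported with sensor noise.
rng = np.random.default_rng(1)
t = np.linspace(0, 50, 500)                    # mm of travel
raw = np.stack([t, t], axis=1) + rng.normal(0, 0.4, (500, 2))
smooth = ema(raw, alpha=0.15)

# Smoothing looks better on a linearity-style score...
print("raw deviation:     ", np.std(raw[:, 1] - raw[:, 0]))
print("smoothed deviation:", np.std(smooth[:, 1] - smooth[:, 0]))
# ...but the smoothed position trails the true fingertip position:
# added latency that a linearity-only test never measures.
print("lag at end of stroke (mm):", float(t[-1] - smooth[-1, 0]))
```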

In a way, I think their previous test was more realistic because it used real fingers, which are larger than the robot probes in the second test. An adult finger makes a contact patch roughly 10 mm in diameter, versus the 7 mm and 4 mm probes used here. The 4 mm results look especially bad, but they may not represent real use.

I don't have a big problem with their overall result (the iPhone clearly works well), but the test conveys a false sense of authority, and I don't think the authors did enough to point out its limitations.
