Monday, November 20, 2017

DO Trust OnePlus 5T Benchmarks in Reviews

Earlier this year, we published a report where we found that a couple of smartphone vendors, including OnePlus, had begun cheating on benchmarks. This came as a bit of a surprise: there was a massive backlash in 2013 when smartphone vendors previously attempted to cheat on benchmarks, and the practice had largely fallen by the wayside in the meantime. Every new phone brings new information, however, and for the OnePlus 5T, benchmarks will look a little different than they did for the OnePlus 5.

As of today, we are pleased to announce that OnePlus will no longer cheat on benchmarks! Starting with the launch of the OnePlus 5T, OnePlus will no longer target benchmarks with performance modes that do not represent day-to-day usage.

OnePlus' statement on why they originally implemented app targeting on the OnePlus 5:

"We have set the OnePlus 5 to run benchmarks at a high-performance level that is both natural and sustainable for all devices, media and consumer, so that users can see the true potential of the device, when running resource intensive apps and games. At no point do we overclock the CPU, nor do we set a CPU frequency floor.

We are confident our approach best displays the true performance capabilities of the OnePlus 5."

When the benchmark cheating was discovered earlier this year, in our articles and on All About Android we invited OnePlus to offer the alternate CPU scaling method to all apps if they truly felt it was a worthwhile improvement to the user experience. We proposed doing this through a user-selectable list, much like what HTC, Samsung, and LG offer with HTC Boost+, Samsung Game Tools, and LG Game Tools (though those three apps let you select reduced performance and resolution modes in order to save battery). This would let users apply the increased power usage mode to any app they like, and would avoid the problem of a hard-coded app list falling out of date as new apps are released. However, we question the usefulness of this boost. In our testing, we found minimal performance improvements in benchmarks that model real-world usage (like Geekbench 4) when the benchmark cheating was active.

That all brings us back to the question of what you are hoping to accomplish when benchmarking. If you are looking to set overclocking high scores and compete to see who can get their processor to run the fastest, then by all means, enable a performance governor. If, however, you are looking for a way to compare devices in regular usage, you will want to see each device using the same normal performance scaling it uses in every other app.
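
If you want to check for yourself which governor your device is currently running, Linux exposes it through the standard cpufreq sysfs files. The short Python sketch below assumes a shell that can read sysfs (adb shell or a terminal app, with root on devices that restrict these files); it simply prints each core's governor and current frequency, and is an illustration rather than a OnePlus-specific tool.

    # Minimal sketch: list each CPU core's frequency governor and current clock
    # using the standard Linux cpufreq sysfs paths. May require adb shell or root
    # on Android, depending on the device's sysfs permissions.
    import glob

    for path in sorted(glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq")):
        core = path.split("/")[-2]                  # e.g. "cpu0"
        with open(path + "/scaling_governor") as f:
            governor = f.read().strip()             # e.g. "schedutil" or "performance"
        with open(path + "/scaling_cur_freq") as f:
            cur_khz = int(f.read().strip())         # reported in kHz
        print(f"{core}: governor={governor}, current={cur_khz / 1000:.0f} MHz")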

Benchmarks are an important tool in determining device performance, but they are only one part of the conversation. They are not the be-all and end-all of which phone is the fastest, but they do help establish a baseline to compare against. If devices with your phone's processor usually score in a certain range and your phone is scoring half that, it's an easy way to confirm that something is wrong with your phone's software or hardware. If a device line consistently scores higher or lower than other device lines with the same processor, that can give a bit of insight into the manufacturer's coding efficiency and performance/power trade-offs. If one processor consistently scores higher or lower than another, that can give insight into the processors' relative performance in those workloads.
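
As a purely illustrative sketch of that first baseline check, here is what the comparison amounts to in code. The score range and thresholds are made-up numbers standing in for wherever published results for a given chipset actually cluster; nothing here comes from our test data.

    # Toy sketch of the baseline check described above. TYPICAL_RANGE holds a
    # hypothetical score range for a given SoC; a real range would come from
    # aggregating published results for devices on that chipset.
    TYPICAL_RANGE = {"Snapdragon 835": (6200, 6900)}   # hypothetical multi-core scores

    def check_score(soc: str, measured: int) -> str:
        low, high = TYPICAL_RANGE[soc]
        if measured < low * 0.75:
            return "well below the usual range -- worth investigating software or thermals"
        if measured > high * 1.10:
            return "well above the usual range -- check whether a performance mode is active"
        return "within the range other devices on this SoC typically post"

    print(check_score("Snapdragon 835", 3300))   # roughly half the expected score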

And that is an important distinction. Benchmarks are repeatable data, but they are only as useful as the context you put them in. Some benchmarks, like Geekbench 4, attempt to model real-world use cases in repeatable ways in order to provide a consistent basis for comparison across devices. Some benchmarks, like Prime95, attempt to show how quickly the processor can perform one specific task (in Prime95's case, primality testing). And some benchmarks, like our charging and frame time tests, are simply repeatable measurements of real-world use.
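
To make the middle category concrete, here is a toy single-task benchmark in the same spirit as Prime95: it times a fixed amount of primality testing and nothing else. It is only a sketch of the idea; a narrow workload like this says little about overall device speed on its own.

    # Toy single-task benchmark: time how long the CPU takes to trial-divide its
    # way through a fixed amount of primality testing. Illustrative only.
    import time

    def is_prime(n: int) -> bool:
        if n < 2:
            return False
        i = 2
        while i * i <= n:
            if n % i == 0:
                return False
            i += 1
        return True

    start = time.perf_counter()
    count = sum(1 for n in range(2, 200_000) if is_prime(n))
    elapsed = time.perf_counter() - start
    print(f"found {count} primes below 200,000 in {elapsed:.2f} s")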

Benchmarks are a great way to gather data that can feed further comparisons, but by creating a benchmark cheating system, OnePlus corrupted that data. Instead of apps like Geekbench providing comparisons of how the OnePlus 3 and OnePlus 3T performed in everyday usage (under the same CPU scaling and behavior as other applications), they were providing comparisons based on the altered performance scaling and power usage of the benchmark cheating mode that only a select list of apps received. So we would like to thank OnePlus for fixing this issue and making scores comparable again. While OnePlus may still hold the same opinion regarding the OnePlus 5T benchmarks as they did for the OnePlus 5 and OnePlus 3T, they still changed their practices based on user feedback. Hopefully, going forward OnePlus will maintain this openness to feedback that has helped them earn the reputation they have today.
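
OnePlus has not published how its app targeting was implemented, but mechanisms of this kind generally boil down to checking the foreground app's package name against a built-in list. The sketch below uses entirely hypothetical package names and a placeholder apply_performance_mode() helper; it is meant only to show why a hard-coded list distorts scores for the apps it contains while leaving every other app on normal scaling, not to represent OnePlus's actual code.

    # Hypothetical sketch of package-name-based app targeting. The package names
    # and the apply_performance_mode() helper are illustrative placeholders.
    TARGETED_PACKAGES = {
        "com.example.benchmark.one",   # hypothetical benchmark package names
        "com.example.benchmark.two",
    }

    def apply_performance_mode(enabled: bool) -> None:
        # Placeholder for whatever CPU scaling changes a vendor's performance mode applies.
        print("performance mode:", "on" if enabled else "off")

    def on_foreground_app_changed(package_name: str) -> None:
        # Only apps on the hard-coded list get the boosted scaling, which is exactly
        # why their scores stop being comparable to every other app on the device.
        apply_performance_mode(package_name in TARGETED_PACKAGES)

    on_foreground_app_changed("com.example.benchmark.one")   # boosted scaling
    on_foreground_app_changed("com.example.some.game")       # normal scaling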



from xda-developers http://ift.tt/2AWRQPW
