big.LITTLE: ARM's Strategy For Efficient Computing

To date, Intel has eschewed a big.LITTLE approach in favor of DVFS -- Dynamic Voltage and Frequency Scaling. As the name implies, Intel uses DVFS to reduce power consumption by dropping the CPU into the lowest possible power state, transitioning out of that state when needed, and returning to it when the need for additional performance has dropped off again. One of the problems with comparing the two approaches is that "performance," in this case, covers task completion time, the total energy consumed while completing that task, and how effectively the operating system manages the power-conservation features of the CPU.
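The trade-off DVFS exploits can be illustrated with a toy calculation (this is not Intel's actual power-management logic, and every constant below is hypothetical): dynamic power scales roughly with C·V²·f, and because voltage must rise with frequency, finishing a task slowly can cost less dynamic energy -- while static leakage, paid for the whole runtime, pushes toward "racing to idle" at full speed instead.

```python
# Toy model of the DVFS trade-off described above.
# All constants are hypothetical, chosen only for illustration.

def task_energy(freq_hz, cycles, p_static_w,
                c_eff=1e-9, v_min=0.6, v_slope=0.4 / 2e9):
    """Energy (joules) to run `cycles` cycles at `freq_hz`.

    Dynamic power ~ C * V^2 * f, with V rising linearly with f.
    Static (leakage) power is paid for the entire runtime.
    """
    volts = v_min + v_slope * freq_hz
    p_dynamic = c_eff * volts ** 2 * freq_hz
    runtime_s = cycles / freq_hz
    return (p_dynamic + p_static_w) * runtime_s

CYCLES = 1_000_000_000  # a 1-billion-cycle task

# With modest leakage, running slower at lower voltage saves energy:
print(task_energy(1e9, CYCLES, p_static_w=0.5))  # slow:  1.14 J
print(task_energy(2e9, CYCLES, p_static_w=0.5))  # fast:  1.25 J

# With heavy leakage, "racing to idle" at full speed wins instead:
print(task_energy(1e9, CYCLES, p_static_w=1.0))  # slow:  1.64 J
print(task_energy(2e9, CYCLES, p_static_w=1.0))  # fast:  1.50 J
```

Which side wins depends on the leakage characteristics of the process node -- one reason the article notes that foundry technology shapes which strategy makes sense.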

Samsung's Exynos Octa was supposed to be big.LITTLE's major debut, but all available evidence suggests that the CPU's implementation is broken. That would explain why Samsung's flagship, the Galaxy S4, recently launched with a Qualcomm processor inside the US version, with the much-touted Exynos 5 Octa relegated to the international versions of the phone and the Korean model. Reports indicate that the CCI-400 (Cache Coherent Interconnect) module that makes big.LITTLE possible is disabled on the device and can't be enabled via software.

As far as triumphant debuts are concerned, that's problematic -- but it doesn't say anything about the underlying usefulness of big.LITTLE as a whole. ARM showed us demos of asymmetric configurations in action, and it's clear that chips that implement global task scheduling (GTS) can save power compared to those that don't.

[Image: big.LITTLE MP -- Global Task Scheduling on an asymmetric core implementation. Cumulative energy for a conventional dual-core vs. three A7s plus two A15s is shown in the center.]

The other fact that's worth pointing out is that while big.LITTLE is an alternative to the kind of frequency and voltage scaling that Intel uses, ARM processors are compatible with DVFS techniques as well. A manufacturer like Qualcomm or Samsung could implement a chip to use a DVFS approach rather than a big.LITTLE option, or could even implement both. Again, this is somewhat dependent on available foundry technology from TSMC or GlobalFoundries, but it's far from impossible.
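To make the "both at once" idea concrete, here is a minimal sketch of a policy that first picks a cluster the way a global task scheduler might, then applies DVFS within that cluster. This is not any vendor's actual scheduler; the clusters, thresholds, and frequency tables are invented for illustration.

```python
# Hypothetical policy combining big.LITTLE cluster selection with
# per-cluster DVFS. Frequency tables (Hz) are invented.
FREQS = {
    "A7":  [200e6, 400e6, 600e6, 1000e6],   # LITTLE cluster
    "A15": [800e6, 1200e6, 1600e6, 2000e6], # big cluster
}

def schedule(demand_hz):
    """Return (cluster, frequency) for a given cycles-per-second demand.

    Light loads stay on the LITTLE (A7) cluster; only when demand
    exceeds the A7's top speed does the task migrate to the big
    (A15) cluster. Within a cluster, DVFS then picks the lowest
    operating point that still meets demand.
    """
    cluster = "A7" if demand_hz <= max(FREQS["A7"]) else "A15"
    for f in FREQS[cluster]:
        if f >= demand_hz:
            return cluster, f
    return cluster, max(FREQS[cluster])  # saturate at top speed

print(schedule(150e6))   # light load: slowest LITTLE operating point
print(schedule(900e6))   # heavier load: fastest LITTLE operating point
print(schedule(1500e6))  # too much for the A7s: migrate to the big cluster
```

The two mechanisms are orthogonal: cluster migration handles the coarse big-versus-LITTLE decision, while DVFS fine-tunes within whichever cluster is active.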

So where does that leave us? Waiting for the next round of products, on both sides. Intel unquestionably needs Bay Trail to be a major success story; continuing softness in the PC market threatens the company's bottom line, and it needs to demonstrate a chip that can compete squarely on ARM's turf. big.LITTLE, meanwhile, only has a few adopters. That will likely change once the relevant patches are merged into Android and software support picks up, but this sort of chicken-and-egg scenario is always a slow process.

deathdemon89 one year ago

That was a great read, thanks!

karanm one year ago

These articles have been great so far, very informative.

AKnudson one year ago

I hadn't realized the full logic behind what Nvidia did with the Tegra 3 chip: a quad-core chip with a fifth "ghost" or backup core.

If you have a DVFS system on a big.LITTLE configuration, its only really effective use would be inside each individual cluster: it would have one range of effectiveness for the A15s and another range for the A7s. This opens up a low-energy/just-enough-computing-power balance that is unmatched, unless.....

Unless this sort of methodology is a band-aid fix for a larger problem. Using both methods adds another layer of inefficiency that could be cut down with a lot of fine tuning.

The problem with that is that while the fine tuning occurs, helping bring the big.LITTLE/DVFS config up to par, you might be left in the dust entirely by a new innovation.

A strangely common occurrence in this industry. I believe they call it opportunity cost.

KlausKnegg one year ago

Great, but I think you need SMARTER software also ... much smarter, and LEARNING.

A lot of stuff can be turned off when you turn off (or time out) the display. You probably do not need xG, WiFi, or Bluetooth connectivity all the time when the display is turned off. The best would be if the phone learned from your use of the phone. If the phone learns that you seldom read Gmail instantly when email arrives, should the phone then be less aggressive about polling the servers when the display is black? I think so. Then only a few manual "overrides," where the "learnings" are wrong, would be needed to fit your usage pattern.

One manual "override" I would like to have is connecting the "display off" button on my Android phone to "closing all my open GUI apps". That would free up memory and avoiding the garbage collector CPU hogs when memory hits the wall. Closing the apps would be smarter.

TimEmerson one year ago

Nice article, very intriguing. I'm interested to see how this kind of technology is built upon in the future.
