Energy-efficient processors and memory reduce your carbon footprint

The tech industry runs on silicon chips; therefore, any initiative to reduce the carbon footprint and create greener options must start with the chips at the heart of all smart devices. Every solution must find a way to reduce the energy consumption of the billions of laptops, phones, tablets and embedded systems in use everywhere.

The good news is that energy consumption has long been a priority for the industry. Users of mobile devices and laptops demand long battery life, and lower power consumption from chips is one of the easiest ways to deliver a device that will run for hours without being plugged in. Over the past few decades, steady advances have produced smartphones that deliver billions of times more computing power for the same amount of electricity.

At the same time, data center operators are well aware that electricity consumption costs them twice. First, they want chips that deliver the most performance per watt, because electricity is one of the biggest line items in running the centers. Second, nearly all the electricity that enters the computers is converted to heat, so the center has to pay to remove it from the building as well.
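
To see why the bill effectively doubles, it helps to run the numbers. Here is a minimal sketch in Python; the server wattage, electricity price and PUE (power usage effectiveness, the ratio of total facility power to IT power) are illustrative assumptions, not figures from any particular operator.

```python
# A minimal sketch, with hypothetical numbers, of the "pay twice" effect:
# every watt a server draws must also be removed from the building as heat.
# PUE (power usage effectiveness) captures that cooling/overhead multiplier.

SERVER_POWER_KW = 0.5    # assumed average draw of one server
PUE = 1.6                # assumed: 1.0 of IT load plus 0.6 of cooling/overhead
PRICE_PER_KWH = 0.10     # assumed electricity price in dollars
HOURS_PER_YEAR = 24 * 365

it_energy_kwh = SERVER_POWER_KW * HOURS_PER_YEAR
total_energy_kwh = it_energy_kwh * PUE   # cooling scales with the IT load

print(f"IT load alone:      ${it_energy_kwh * PRICE_PER_KWH:,.0f} per year")
print(f"With cooling (PUE): ${total_energy_kwh * PRICE_PER_KWH:,.0f} per year")
```

A chip that does the same work with fewer watts therefore saves on both halves of the bill at once.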

These two market forces have pushed chip manufacturers to produce greener chips with a lower environmental footprint. Even if the goal were only to save money, the economic drive is closely aligned with environmental requirements.

Many companies are not shy about their environmental awareness. For example, Apple says it is carbon neutral for its global business operations, and by 2030 it aims to achieve “net zero climate impact across the entire company, including manufacturing supply chains and all product life cycles.”

These forces are reflected in the market in different and sometimes divergent ways. Here are some of the key ways the chip industry is building new hardware that minimizes the environmental footprint of computers.

Pushing the ‘integrated’ part of integrated circuits (ICs)

Apple’s new M1 and M2 chips offer a new solution that integrates on a single main chip the processors used for general purpose computing (CPU) and graphics processing (GPU). Most computers today use separate chips, often on separate circuit boards, to perform the same tasks. The Apple chips share the same memory and use the tight integration to deliver significantly faster speeds for many tasks that require both the CPU and GPU to work closely together.

While much of the sales literature focuses on speed, the tight integration and shared resources also dramatically reduce energy consumption. In its announcement of the M2, Apple boasted that when it compared its new laptops to a PC competitor (a Samsung Galaxy laptop with an Intel Core i7), the chip “matches its peak performance at one-fifth the power.”
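
The claim is really about performance per watt. The tiny, hypothetical calculation below shows how the “same peak performance at one-fifth the power” framing translates into that metric; the benchmark score and wattage are made up for illustration.

```python
# Hypothetical numbers; only the "same performance at one-fifth the power"
# ratio comes from Apple's announcement.

competitor_score = 10_000          # assumed benchmark score of the Core i7 laptop
competitor_watts = 50.0            # assumed package power at that score

m2_score = competitor_score        # "matches its peak performance..."
m2_watts = competitor_watts / 5    # "...at one-fifth the power"

def perf_per_watt(score: float, watts: float) -> float:
    return score / watts

advantage = perf_per_watt(m2_score, m2_watts) / perf_per_watt(competitor_score, competitor_watts)
print(f"Performance-per-watt advantage: {advantage:.0f}x")   # -> 5x
```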

Embracing ARM chips

While the CPU market has been dominated for decades by the venerable Intel x86 architecture, a number of users have recently moved to the ARM (Acorn RISC Machine) architecture, in part because chips that use its instruction set provide more computing power for less electricity.

For example, Amazon touts its Graviton2 and Graviton3 chips, which it designed itself and installed in its data centers. When users recompile their code for the new instruction set, Amazon estimates that it is not uncommon for the code to use 60% less electricity compared to the regular versions. Actual savings will vary from application to application.
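
For compiled languages, moving to Graviton means recompiling for the aarch64 instruction set; for interpreted stacks, it mostly means making sure native dependencies ship aarch64 builds. As a minimal sketch, a deployment script might verify at startup that the workload actually landed on an ARM machine. The check below uses only the Python standard library; nothing here is an AWS API.

```python
# A minimal, hypothetical startup check: confirm the process is actually
# running on an ARM (aarch64) machine after a migration to Graviton.

import platform

machine = platform.machine()   # "aarch64" on ARM Linux, "x86_64" on Intel/AMD
if machine in ("aarch64", "arm64"):
    print(f"Running on ARM ({machine}); native builds are in effect.")
else:
    print(f"Running on {machine}; rebuild and redeploy for ARM to get the savings.")
```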

AWS users never see the utility bill, but Amazon passes the savings on through its pricing. Its ARM machines are said to cost about 30% less for roughly the same computing power. In some cases, users are not even aware that they are seeing the savings: AWS has quietly moved some of its managed services, such as the Aurora database, to ARM instances.

Amazon is far from the only company exploring the ARM architecture. A number of other companies build ARM-based servers and report similar successes in power reduction. Qualcomm, for example, is reportedly partnering with cloud providers large and small to use its ARM chips. It also recently acquired Nuvia, a startup that was designing its own ARM chips.

Meanwhile, in April, Microsoft launched a preview of virtual machines running Ampere’s Altra chips, with the claim that these chips can offer as much as 50% better performance.

Using GPUs for big jobs

Graphics processing units (GPUs) started out as chips designed to help gamers enjoy faster frame rates, but they have evolved into critical tools for solving large computational tasks with less energy. That may come as a shock to gamers accustomed to installing bulky GPUs that demand 600, 700, 800 watts or more from the power supply. The top-end GPUs keep getting hungrier.

However, the real measure is the power per unit of work. The chunky GPUs may chew through electricity, but they do even more work along the way. Overall, they can be an efficient way to compute, which is why GPUs are now commonly used for large, parallel processing tasks in a wide variety of scientific and financial applications. Many machine learning algorithms also rely on GPUs.
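
A toy calculation makes the point. The wattages and runtimes below are hypothetical, but they show how a hungrier chip can still win on energy per completed job:

```python
# Hypothetical wattages and runtimes; the metric that matters is
# energy per completed job (joules), not instantaneous power (watts).

def energy_per_job(watts: float, seconds_per_job: float) -> float:
    """Joules consumed to finish one job: power x time."""
    return watts * seconds_per_job

gpu_joules = energy_per_job(watts=600.0, seconds_per_job=10.0)    # hungry but fast
cpu_joules = energy_per_job(watts=100.0, seconds_per_job=120.0)   # frugal but slow

print(f"GPU: {gpu_joules:,.0f} J per job")   # 6,000 J
print(f"CPU: {cpu_joules:,.0f} J per job")   # 12,000 J -> the 600 W part wins here
```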

The best GPUs are often in high demand from cryptocurrency miners because they provide some of the most efficient calculation per unit of energy. The market has evolved around them in part because mining algorithms offer a predictable, stable workload.

Making specialized chips

For some applications, there is enough demand to justify creating custom chips designed to solve their problems faster and more efficiently. Recently, companies like Google and Amazon have been building special chips to accelerate machine learning.

Google’s Tensor Processing Units (TPUs) are the foundation for many of the company’s machine learning experiments. They are highly efficient, and Google credits them with helping keep data center energy use as low as possible. The company rents out TPUs for customer workloads, but also deploys them internally for tasks such as demand forecasting to manage power consumption.

“Today, a Google data center is on average twice as energy efficient as a typical enterprise data center,” bragged Urs Hölzle, senior vice president for technical infrastructure at Google. “And compared to five years ago, we now deliver about seven times as much computing power with the same amount of electrical power.”

In a presentation at the AWS Summit in San Francisco in April 2022, Ali Saidi, senior principal engineer at AWS, spoke about the energy savings of Inferentia, a chip designed to apply machine learning models as quickly as possible. These models are often used extensively in front-line applications for classification or detection. Of particular importance is speeding up the search for the trigger word used by voice interfaces such as Siri or Alexa.

“[Inferentia] achieves between 1.5x and 3x better energy efficiency, compared to [Nvidia’s Turing T4], with an average energy efficiency improvement of about two times,” Saidi told the audience. “This means that Inf1 instances are greener and cheaper to run, and as always we use that to pass the cost savings on to our customers.”
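
To make the quoted ratio concrete, here is a back-of-the-envelope conversion into energy per million inferences. Only the “about two times” figure comes from the talk; the baseline joules-per-inference number is an assumption for illustration.

```python
# Only the ~2x average improvement comes from the talk; the baseline
# joules-per-inference figure is an assumption for illustration.

BASELINE_J_PER_INFERENCE = 0.05   # assumed for the GPU baseline
IMPROVEMENT = 2.0                 # "average energy efficiency improvement of about two times"

def kwh_per_million(joules_per_inference: float) -> float:
    return joules_per_inference * 1_000_000 / 3_600_000   # 1 kWh = 3.6 MJ

print(f"Baseline:   {kwh_per_million(BASELINE_J_PER_INFERENCE):.4f} kWh per million inferences")
print(f"Inferentia: {kwh_per_million(BASELINE_J_PER_INFERENCE / IMPROVEMENT):.4f} kWh per million inferences")
```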

Right-sizing chips

When Intel started building x86 chips for cheaper laptops, it stripped away the extra features the average user doesn’t need to open a few browser windows. Low-end chips like the Atom and Celeron lines may not chew through computation like high-end server parts, but the average user doesn’t need that power to check email. The savings multiply because the batteries can also be smaller and still last a long time.

Working with lower precision

When Amazon designed its Graviton3 processor, it added bfloat16, a special lower-precision floating point format. The chip can perform four of these operations in the time and energy it uses for one standard, double-precision floating point calculation. Some machine learning algorithms don’t seem to mind the difference, so they can run on these chips at a quarter of the power.
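
The trade-off is easy to see in code. bfloat16 keeps float32’s eight exponent bits but only seven mantissa bits, so it covers the same range with far fewer significant digits. This numpy sketch emulates it by truncating the low bits of a float32 (real hardware rounds to nearest, but the precision loss looks the same):

```python
import numpy as np

def to_bfloat16(values: np.ndarray) -> np.ndarray:
    """Emulate bfloat16 by zeroing the low 16 bits of float32 values.

    bfloat16 keeps float32's sign and 8 exponent bits but only 7 of the
    23 mantissa bits; hardware rounds to nearest, truncation is close enough.
    """
    bits = np.asarray(values, dtype=np.float32).view(np.uint32)
    return (bits & np.uint32(0xFFFF0000)).view(np.float32)

x = np.array([3.14159265, 0.001, 12345.678], dtype=np.float32)
print(x)               # full float32 precision
print(to_bfloat16(x))  # same range, roughly 2-3 significant decimal digits
```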

Improving memory

CPUs are not the only focus for engineers looking for lower power consumption. The latest RAM standard, DDR5, runs at faster speeds but lower voltages, allowing it to save electricity and complete calculations sooner. The voltage difference is small (1.2V for DDR4 vs. 1.1V for DDR5) but it adds up over time.
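
The reason a 0.1V drop matters is that dynamic power in CMOS circuits scales roughly with the square of the supply voltage. A one-line estimate, as a first-order approximation that ignores frequency and workload effects:

```python
# First-order estimate: dynamic CMOS power scales with the square of the
# supply voltage (P ~ C * V^2 * f). Frequency and workload effects ignored.

V_DDR4 = 1.2   # volts
V_DDR5 = 1.1   # volts

relative_power = (V_DDR5 / V_DDR4) ** 2
print(f"Dynamic power at 1.1 V vs. 1.2 V: {relative_power:.1%}")   # ~84%, a ~16% cut
```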

Others adapt the architecture of the memory chips themselves to improve power consumption. One option, Load-Reduced Dual Inline Memory Modules (LRDIMMs), adds a memory buffer that can respond more quickly and reduce the load on the communication circuits between the memory and the CPU. These are commonly found in data center servers with high, constant usage.

Drawing thinner lines

As silicon production lines develop better processes, the amount of energy used for each calculation decreases. Thinner lines need fewer electrons to saturate them. While many think Moore’s Law and the relentless shrinking of each transistor is all about speed, the savings in electricity are an added bonus. Newer chips built on the latest manufacturing technology tend to use less power than older ones: chips built on the 5nm process sip less power than those built on the 7nm process, and so on.

Going beyond mechanical storage

Many of the best servers and laptops use solid-state “disks” with flash memory to store information, mainly because they are much faster. However, the older spinning magnetic drives remain competitive by offering a lower price per byte of storage.

That is shifting as more data centers take energy costs into account. When VAST Data rolled out its latest storage solution, it emphasized that energy costs should be a big part of why a company would want to buy its flash memory-filled storage racks.

“From an energy perspective, our solution is much more efficient. You can save about 10x compared to what customers would otherwise pay for a hard drive-based infrastructure,” said Jeff Denworth in a Q&A with VentureBeat. “This infrastructure density always results in cost savings. When you add up the efficiency, the energy savings, the data center space savings and the cost savings, we believe we have finally reached cost parity with a hard drive-based infrastructure and have essentially eliminated the last argument for mechanical media in the enterprise.”
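
One way to sanity-check claims like these is to compare watts per usable terabyte. The drive wattages and capacities below are illustrative assumptions, not VAST’s figures:

```python
# Hypothetical drive specs, not VAST's figures; the point is that dense
# flash spreads its watts across far more bytes than a spinning disk.

drives = {
    "Spinning HDD": {"watts": 8.0, "terabytes": 18},    # assumed enterprise disk
    "QLC flash":    {"watts": 7.0, "terabytes": 61},    # assumed high-density SSD
}

for name, spec in drives.items():
    print(f"{name}: {spec['watts'] / spec['terabytes']:.2f} W/TB")
# ~0.44 W/TB for disk vs. ~0.11 W/TB for flash, before any cooling multiplier
```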

Shutting down where possible

Sometimes the best chips are the ones that aren’t doing anything at all. Smartphone designers balance the demand for more performance against the practical need for long battery life. The chips in smartphones are all optimized to consume as little power as possible while still delivering high-resolution video and always-on communication.

One of the main strategies is for the chip to shut down extra processor cores or subsystems when they are not in use. Smartphone users can watch their battery life and track how much power their phone consumes, and the smartest phones are the ones that use the least power while sitting in a pocket.
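
On Linux, this “turn it off” strategy is even visible from user space: the kernel exposes per-core hotplug state in sysfs. A minimal sketch that lists which cores are currently powered on (Linux-only; on other systems it simply prints nothing):

```python
# Linux-only sketch: the kernel exposes per-core hotplug state in sysfs,
# so you can watch the "turn it off" strategy from user space.

from pathlib import Path

for cpu in sorted(Path("/sys/devices/system/cpu").glob("cpu[0-9]*")):
    online_file = cpu / "online"   # cpu0 often lacks this file: it can't be unplugged
    state = online_file.read_text().strip() if online_file.exists() else "1"
    print(f"{cpu.name}: {'online' if state == '1' else 'offline'}")
```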