How to make computers faster and climate friendly

Your smartphone is far more powerful than the NASA computers that put Neil Armstrong and Buzz Aldrin on the moon in 1969, but it is also an energy hog. In computing, energy use is often treated as secondary to speed and storage, but given the pace and direction of technological advancement, it is becoming a growing environmental concern.

When the cryptocurrency mining company Hut 8 opened Canada’s largest bitcoin mining project outside Medicine Hat, Alta., environmentalists sounded the alarm. The plant consumes 10 times more electricity, largely produced by a natural gas-fired power plant, than any other facility in the city.

Globally, greenhouse gas (GHG) emissions from the information and communications technology (ICT) sector are forecast to reach the equivalent of 1.4 gigatonnes (billion metric tonnes) of carbon dioxide annually by 2020. That’s 2.7% of global GHGs and roughly double Canada’s total annual greenhouse gas output.

By designing energy-efficient computer processors, we could reduce energy consumption and, in places where electricity comes from fossil fuels, cut GHG emissions. As a computer engineer specialising in computer architecture and arithmetic, I am confident, as are my colleagues, that these positive effects can be achieved with almost no impact on computer performance or user convenience.

Powerful connections

The internet of things (IoT) — made up of the connected computing devices embedded into everyday objects — is already delivering positive economic and social impacts, transforming our societies, the environment and our food supply chains for the better.

These devices are monitoring and reducing air pollution, improving water conservation and feeding a hungry world. They’re also making our homes and businesses more efficient, controlling thermostats, lighting, water heaters, refrigerators and washing machines.

With the number of connected devices, not counting computers and phones, set to top 11 billion in 2018, the IoT will generate enormous volumes of data that demand equally heavy computation.

Making computation more energy efficient would save money and reduce energy use. It would also allow the batteries that provide power in computing systems to be smaller or run longer. In addition, calculations could run faster, so computing systems would generate less heat.

Approximate computing

Today’s computing systems are designed to deliver exact solutions at a high energy cost. But many error-resilient workloads, such as image, sound and video processing, data mining, sensor data analysis and deep learning, do not require exact answers.

This unnecessary accuracy and excessive energy expenditure is wasteful. There are limitations to human perception — we don’t always need 100% accuracy to be satisfied with the outcome. For example, minor changes in the quality of images and videos often go unnoticed.

Computing systems can take advantage of these limitations to reduce energy use without having a negative impact on the user experience. “Approximate computing” is a computation technique that trades a small amount of accuracy for less work, sometimes returning slightly inaccurate results; that makes it useful for applications where an approximate answer is sufficient.

At the University of Saskatchewan’s computer engineering lab, we are proposing to design and implement these approximate computing solutions, so that they can optimally trade off accuracy and efficiency across software and hardware. When we applied these solutions to a core computing component of the processor, we found that power consumption dropped by more than 50% with almost no drop in performance.
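
The article does not spell out the circuit-level details, but a minimal software sketch of one common approximate-computing trick, ignoring the low-order bits of an addition, shows how a small, bounded error can be traded for less work. The function names and the choice of eight truncated bits below are illustrative assumptions, not the lab’s actual design.

```python
# Minimal sketch of an approximate adder that ignores the low-order bits of
# its operands. Real designs do this in hardware to shorten carry chains and
# save switching energy; here we only model the numerical effect.

def exact_add(a: int, b: int) -> int:
    """Ordinary exact integer addition."""
    return a + b


def approx_add(a: int, b: int, truncated_bits: int = 8) -> int:
    """Add two non-negative integers while ignoring their low-order bits.

    Dropping the low bits lets a hardware adder use a shorter carry chain,
    at the cost of a small, bounded error in the result.
    """
    mask = ~((1 << truncated_bits) - 1)   # clears the low-order bits
    return (a & mask) + (b & mask)


if __name__ == "__main__":
    import random

    random.seed(0)
    pairs = [(random.randrange(1 << 24), random.randrange(1 << 24))
             for _ in range(10_000)]

    # The relative error stays small because only 8 of roughly 24 bits
    # in each operand are discarded.
    worst = max(abs(exact_add(a, b) - approx_add(a, b)) / max(exact_add(a, b), 1)
                for a, b in pairs)
    print(f"worst relative error over {len(pairs):,} sums: {worst:.4%}")
```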

Flexible precision

Nowadays, most personal computers use a standard 64-bit numerical format. This means they represent each number with 64 binary digits (each a zero or a one) when performing computations.

3D graphics, virtual reality and augmented reality require the 64-bit format to work. But basic audio and image processing can be done with a 32-bit format and still provide satisfactory results. Moreover, deep learning applications can even use 16-bit or 8-bit formats, thanks to their error resilience.

The shorter the numerical format, the less energy is used to perform the calculation. We can design flexible, yet precise, computing solutions that run each application with the most appropriate numerical format, promoting energy efficiency.
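
As a rough illustration of this idea, the sketch below (assuming the NumPy library is available) applies the same gain to a synthetic audio-like signal in 64-, 32- and 16-bit floating point and reports the worst error each shorter format introduces. The signal and gain value are made up for the example; they are not data from the article’s experiments.

```python
# Run the same signal-processing step (applying a gain to audio-like
# samples) in 64-, 32- and 16-bit floating point and compare the error
# each shorter format introduces relative to full scale.

import numpy as np

rng = np.random.default_rng(42)
samples = rng.uniform(-1.0, 1.0, size=48_000)   # one second of fake audio
gain = 0.7943                                   # roughly -2 dB of attenuation

reference = samples * gain                      # 64-bit result, our baseline

for dtype in (np.float64, np.float32, np.float16):
    scaled = samples.astype(dtype) * dtype(gain)
    worst_err = np.max(np.abs(scaled.astype(np.float64) - reference))
    bits = np.dtype(dtype).itemsize * 8
    print(f"{bits:2d}-bit format: worst error vs. full scale = {worst_err:.1e}")
```

The 32- and 16-bit results differ from the 64-bit baseline by amounts far below what a listener would notice, which is the kind of headroom flexible-precision hardware can exploit.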

For example, a deep learning application using this flexible computing solution could reduce energy consumption by 15%, according to our preliminary experiment. In addition, the proposed solutions can be reconfigured to simultaneously perform multiple operations requiring low numerical precision and improve performance.
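
One rough software analogue of performing several low-precision operations at once is the “SIMD within a register” trick sketched below, which packs two independent 8-bit additions into a single wider integer addition. The lane layout and guard bits are illustrative assumptions, not the reconfigurable hardware described above.

```python
# Pack two independent 8-bit additions into one wider integer add, with
# 8 unused guard bits between the lanes so a carry out of the low lane
# cannot corrupt the high one.

LANE_BITS = 8           # width of each low-precision operand
LANE_SHIFT = 16         # lane spacing: 8 data bits + 8 guard bits
LANE_MASK = (1 << LANE_BITS) - 1

def pack(lo: int, hi: int) -> int:
    """Pack two 8-bit values into one wide word, one value per lane."""
    return (lo & LANE_MASK) | ((hi & LANE_MASK) << LANE_SHIFT)

def packed_add(x: int, y: int) -> tuple[int, int]:
    """Add two packed words with a single addition, then unpack both sums."""
    s = x + y                                    # one add covers both lanes
    return s & 0x1FF, (s >> LANE_SHIFT) & 0x1FF  # each 8-bit sum fits in 9 bits

if __name__ == "__main__":
    a_lo, a_hi = 200, 17
    b_lo, b_hi = 99, 240
    lo_sum, hi_sum = packed_add(pack(a_lo, a_hi), pack(b_lo, b_hi))
    assert (lo_sum, hi_sum) == (a_lo + b_lo, a_hi + b_hi)
    print(f"low lane: {lo_sum}, high lane: {hi_sum}")
```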

The IoT holds a great deal of promise, but we must also think about the costs of processing all of this data. With smarter, greener processors, we could help address these environmental concerns and slow, or even reduce, computing’s contribution to climate change.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Source: The Conversation Africa

The Conversation Africa is an independent source of news and views from the academic and research community. Its aim is to promote better understanding of current affairs and complex issues, and allow for a better quality of public discourse and conversation.

Go to: https://theconversation.com/africa

About Seokbum Ko

Seokbum Ko, professor, University of Saskatchewan