
The Processing Power Problem: AI and the Growing Computational Demands

AI fuels innovation, but demands rapidly increasing processing power.

2023-08-21 · 3 min read · Fetch.ai

AI fuels innovation, drives growth, and presents an array of exciting possibilities. However, a critical challenge lurks behind the curtains of algorithms and computations: the rapidly increasing demand for processing power. As AI models grow in complexity, the requirement for robust computing power intensifies, stretching existing hardware capacities to their limits.

There is, however, one solution.

You’re gonna need a bigger computer

It's well known that giants like Nvidia and AMD dominate the graphics processing unit (GPU) market. Thanks to their ability to perform billions of simple calculations in parallel, GPUs are also widely used in AI development. They have become so essential to AI development that Chinese tech companies recently scrambled to acquire over $5 billion worth of Nvidia chips over fears the US would impose export controls to weaken China's AI industry. Another example is machine learning startup Inflection AI's $1.3 billion fundraising round, arguably a result of Nvidia giving the company access to 22,000 of its H100 GPUs.

That may sound like a lot, but it's a drop in the ocean compared to how much processing power the industry needs. The problem is that AI models are growing exponentially, but the hardware to train and run these behemoths hasn't advanced as quickly. This is compounded by the fact that, so far, the larger and deeper these models are, the better they perform.

This challenge is two-pronged. On one hand, AI development requires a huge amount of processing power. Even fine-tuning a 65-billion-parameter model in standard 16-bit precision calls for roughly ten A100 80GB GPUs, and models keep growing: GPT-3, the model behind the original ChatGPT, has 175 billion parameters, and frontier models are larger still. With a whole raft of LLMs and other generative AI models in development right now, demand is outpacing supply.
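The "ten A100s" figure follows from simple arithmetic. A rough sketch: mixed-precision Adam fine-tuning keeps roughly 12 bytes of 32-bit optimizer state (master weights plus two Adam moments) per parameter, on top of the 16-bit working weights and gradients. The exact byte counts vary by setup; this is an illustrative estimate, not a measured figure.

```python
# Back-of-envelope GPU memory for full fine-tuning of a 65B-parameter
# model with mixed-precision Adam (illustrative assumption):
#   fp32 master weights (4 B) + Adam m (4 B) + Adam v (4 B) = 12 B/param
# (16-bit working weights and gradients add a few hundred GB on top)
params = 65e9
bytes_per_param = 12
total_gb = params * bytes_per_param / 1e9
print(f"{total_gb:.0f} GB of optimizer state")   # 780 GB
a100s = total_gb / 80
print(f"~{a100s:.0f} x A100 80GB GPUs")          # ~10 GPUs
```

Even before activations or gradients are counted, the optimizer state alone fills the memory of about ten 80 GB accelerators.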

Then there’s another challenge.

Knowledge is power

There’s a tendency to imagine the digital world as divorced from the physical, but AI development needs data centers, hardware, and energy infrastructure that comes with a heavy environmental cost. We need the energy to power the centers, water to cool them, and carbon-intense raw materials to build them.

It has been estimated that ChatGPT's energy consumption in January 2023 was as much as 23 million kWh. As LLMs become incorporated into search engines, this could increase by up to 5x. In another study, researchers estimated that for every 20-50 ChatGPT questions, the equivalent of a 500 ml bottle of water is “drunk” by data center cooling.

AI mustn’t be allowed to become another driver of climate change and environmental degradation, but how do we escape the spiraling demands of energy and processing power?

Open innovations, possible solutions

In AI development, fine-tuning is a process where a pre-trained model is further trained on a smaller dataset to perform more specific functions. For LLMs, that means they can specialize in a task or domain.

There are two major advantages of fine-tuning:

  1. It lets researchers leverage the vast amounts of data that an LLM has learned, saving the need to start from scratch.
  2. It requires less computing power, making it more accessible.

Even then, fine-tuning LLMs requires access to high-end hardware. But several open innovations are working to fix this problem. One of them is Quantized Low-Rank Adaptation (QLoRA). QLoRA's innovations, such as 4-bit NormalFloat (NF4) quantization and double quantization, significantly reduce the memory footprint of large language models (LLMs), making fine-tuning possible even on consumer-grade hardware.
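To make the idea concrete, here is a minimal pure-Python sketch of blockwise 4-bit quantization in the spirit of QLoRA. The level placement is a simplification of NF4 (equally spaced normal quantiles rather than the paper's exact construction), and the block sizes are illustrative assumptions; real workflows use a library such as bitsandbytes rather than hand-rolled code.

```python
import random
from statistics import NormalDist

def nf4_levels():
    """Approximate NF4: 16 levels at equally spaced quantiles of a
    standard normal, rescaled so the extremes sit at -1 and +1.
    (A simplification of the exact NF4 level placement.)"""
    nd = NormalDist()
    qs = [nd.inv_cdf((i + 0.5) / 16) for i in range(16)]
    m = max(abs(q) for q in qs)
    return [q / m for q in qs]

def quantize_block(block, levels):
    """Blockwise absmax quantization: one fp32 scale per block plus a
    4-bit level index per value."""
    absmax = max(abs(x) for x in block) or 1.0
    idxs = [min(range(16), key=lambda i: abs(levels[i] - x / absmax))
            for x in block]
    return idxs, absmax

def dequantize_block(idxs, absmax, levels):
    return [levels[i] * absmax for i in idxs]

levels = nf4_levels()
random.seed(0)
weights = [random.gauss(0, 0.02) for _ in range(64)]   # one 64-value block
idxs, absmax = quantize_block(weights, levels)
restored = dequantize_block(idxs, absmax, levels)

# Storage cost: 4 bits/value + one fp32 scale per 64-value block
bits_per_param = 4 + 32 / 64                 # 4.5 bits, vs 32 for fp32
# Double quantization: quantize the scales themselves to 8-bit, keeping
# one fp32 "scale of scales" per 256 blocks, shrinking the overhead:
bits_per_param_dq = 4 + 8 / 64 + 32 / (64 * 256)
print(bits_per_param, round(bits_per_param_dq, 3))
```

Blockwise absmax storage works out to about 4.5 bits per parameter, and double quantization trims the scale overhead further, to roughly 4.13 bits. That is how the weights of a 65-billion-parameter model can fit in tens rather than hundreds of gigabytes, bringing fine-tuning within reach of a single workstation GPU.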

Why does this matter?

We're navigating exciting times, filled with opportunities, challenges, and a critical juncture in AI's growth. Innovations like QLoRa and others are unlocking possibilities and setting the stage for what comes next.

The need for responsible growth that doesn't excessively hinge on specific companies or solutions is clear. By embracing a broad spectrum of techniques, we're fostering an environment where creativity thrives, and the future of AI is accessible to all, without sacrificing our planet's health.

This pivotal moment in AI development prompts eager anticipation, not just for what's on the horizon but for the incredible journey that will take us there.

