Difference Between CPU and GPU

CPU, the acronym for Central Processing Unit, is the brain of a computing system: it performs the “computations” given as instructions through a computer program. Therefore, having a CPU is meaningful only when the computing system is “programmable”, so that it can execute instructions. Note also that the CPU is the “Central” processing unit, the unit that controls the other units and parts of the computing system. In today’s context, a CPU is typically contained in a single silicon chip, also known as a microprocessor.

On the other hand, GPU, the acronym for Graphics Processing Unit, is designed to offload computationally intensive graphics processing tasks from the CPU. The ultimate goal of such tasks is to project graphics onto a display unit such as a monitor. Because these tasks are well known and specific, they do not necessarily need to be programmed, and they are also inherently parallel due to the nature of display units. In the current context, less capable GPUs are typically located on the same silicon chip as the CPU (a setup known as an integrated GPU), while more capable, powerful GPUs sit on their own silicon chip, typically on a separate PCB (Printed Circuit Board).

What is a CPU?

The term CPU has been used in computing for more than five decades now, and the CPU was the only processing unit in early computers until “other” processing units (such as GPUs) were introduced to complement its processing power. The two major components of a CPU are its Arithmetic Logic Unit (aka ALU) and its Control Unit (aka CU). The ALU is responsible for the arithmetic and logical operations of the computing system, while the CU is responsible for fetching the program’s instructions from memory, decoding them, and instructing the other units, such as the ALU, to execute them. It is therefore the control unit that earns the CPU its status as the “central” processing unit. For the CU to fetch instructions from memory, the instructions have to be stored there as programs; such a system is consequently known as a “stored-program” computer. Note that the CU does not execute the instructions itself; rather, it facilitates execution by communicating with the appropriate units, such as the ALU.
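To make the stored-program idea concrete, here is a toy sketch of a fetch-decode-execute loop. It is plain host-side C++ (compilable alongside the CUDA examples further below), and its tiny instruction set and encoding are invented purely for illustration; a real CPU is of course vastly more complex.

```cuda
// A toy stored-program machine, sketched to illustrate the fetch-decode-execute
// cycle described above. The instruction set and encoding are hypothetical,
// invented only for this example.
#include <cstdio>
#include <vector>

enum class Op { LOAD, ADD, SUB, HALT };   // a minimal, made-up instruction set

struct Instr { Op op; int reg; int value; };

int main() {
    // "Stored program": the instructions live in memory alongside the data.
    std::vector<Instr> program = {
        { Op::LOAD, 0, 5 },   // r0 = 5
        { Op::LOAD, 1, 7 },   // r1 = 7
        { Op::ADD,  0, 1 },   // r0 = r0 + r1  (the ALU would do this arithmetic)
        { Op::HALT, 0, 0 }
    };

    int regs[4] = {0};
    std::size_t pc = 0;                   // program counter, maintained by the CU

    // The control unit's loop: fetch an instruction, decode it, and dispatch
    // the actual work (the arithmetic in each case) to the ALU.
    while (true) {
        Instr instr = program[pc++];      // fetch
        switch (instr.op) {               // decode + dispatch
            case Op::LOAD: regs[instr.reg] = instr.value;         break;
            case Op::ADD:  regs[instr.reg] += regs[instr.value];  break;
            case Op::SUB:  regs[instr.reg] -= regs[instr.value];  break;
            case Op::HALT: std::printf("r0 = %d\n", regs[0]);     return 0;
        }
    }
}
```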

What is a GPU (aka VPU)?

The term Graphics Processing Unit (GPU) was introduced in the late nineties by NVIDIA, a GPU manufacturing company, which claimed to have marketed the world’s first GPU (the GeForce 256) in 1999. According to Wikipedia, at the time of the GeForce 256, NVIDIA defined a GPU as follows: “a single-chip processor with integrated transform, lighting, triangle setup/clipping, and rendering engines that is capable of processing a minimum of 10 million polygons per second”. A couple of years later, NVIDIA’s rival ATI, another graphics chip maker, released a similar processor (the Radeon R300) under the term VPU, for Visual Processing Unit. However, the term GPU has clearly become far more popular than the term VPU.

Today, GPUs are deployed everywhere: in embedded systems, mobile phones, personal computers and laptops, and game consoles. Modern GPUs are extremely powerful at manipulating graphics, and they are made programmable so that they can be adapted to different situations and applications. Even so, typical GPUs are still programmed at the factory through what is known as firmware. Generally, GPUs are more effective than CPUs for algorithms in which large blocks of data are processed in parallel. This is to be expected, since GPUs are designed to manipulate computer graphics, a workload that is extremely parallel in nature.
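As an illustration of the kind of data parallelism described above, the following minimal CUDA sketch processes a large buffer of pixel intensities by launching one GPU thread per element. The kernel name, buffer size, and scaling factor are hypothetical, chosen only for this example.

```cuda
// A minimal sketch of data parallelism on a GPU: every thread processes one
// element of a large buffer independently (here, scaling pixel intensities).
// Buffer size, kernel name, and scaling factor are illustrative only.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void scale_pixels(float* pixels, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per pixel
    if (i < n) {
        pixels[i] *= factor;
    }
}

int main() {
    const int n = 1 << 20;                          // ~1 million pixel values
    float* d_pixels;
    cudaMalloc(&d_pixels, n * sizeof(float));
    cudaMemset(d_pixels, 0, n * sizeof(float));     // placeholder image data

    // Launch enough threads to cover the whole buffer; the GPU schedules
    // them across its many cores.
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    scale_pixels<<<blocks, threads>>>(d_pixels, 1.5f, n);
    cudaDeviceSynchronize();

    cudaFree(d_pixels);
    std::printf("processed %d pixel values on the GPU\n", n);
    return 0;
}
```

Each thread does a tiny, independent piece of work, which is exactly the pattern that maps well onto a GPU’s many cores.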

There is also a newer concept known as GPGPU (General-Purpose computing on GPUs), in which GPUs are used to exploit the data parallelism available in certain applications (such as bioinformatics), thereby performing non-graphics processing on the GPU. However, GPGPU is not considered in this comparison.

 

What is the difference between CPU and GPU?

• While a CPU is deployed to act as the brain of a computing system, a GPU is introduced as a complementary processing unit that handles the computation-intensive graphics processing required to project graphics onto the display units.

• Graphics processing is inherently parallel and can therefore be easily parallelised and accelerated.

• In the era of multi-core systems, CPUs are designed with only a few cores that can handle a few software threads, which an application program can exploit (instruction- and thread-level parallelism). GPUs, in contrast, are designed with hundreds of cores to utilize the available parallelism, as sketched in the example below.
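The contrast in the last point can be sketched roughly as follows: a few CPU software threads each loop over a large chunk of the data, while a single GPU kernel launch spreads the same work across thousands of lightweight threads. The helper names and sizes below are illustrative assumptions, not part of the original comparison.

```cuda
// A rough contrast: a handful of CPU software threads each take a large chunk
// of the work, while the GPU spreads the same work over thousands of threads.
// All names and sizes here are illustrative assumptions.
#include <cuda_runtime.h>
#include <cstdio>
#include <functional>
#include <thread>
#include <vector>

__global__ void add_one_gpu(int* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // one element per thread
    if (i < n) data[i] += 1;
}

void add_one_cpu_chunk(std::vector<int>& data, int begin, int end) {
    for (int i = begin; i < end; ++i) data[i] += 1;  // one big chunk per thread
}

int main() {
    const int n = 1 << 20;

    // CPU side: a few cores, a few software threads, each looping over many elements.
    std::vector<int> host_data(n, 0);
    const int num_cpu_threads = 4;
    const int chunk = n / num_cpu_threads;
    std::vector<std::thread> workers;
    for (int t = 0; t < num_cpu_threads; ++t) {
        const int begin = t * chunk;
        const int end = (t == num_cpu_threads - 1) ? n : begin + chunk;
        workers.emplace_back(add_one_cpu_chunk, std::ref(host_data), begin, end);
    }
    for (auto& w : workers) w.join();

    // GPU side: hundreds of cores, thousands of threads, one element each.
    int* d_data;
    cudaMalloc(&d_data, n * sizeof(int));
    cudaMemcpy(d_data, host_data.data(), n * sizeof(int), cudaMemcpyHostToDevice);
    add_one_gpu<<<(n + 255) / 256, 256>>>(d_data, n);
    cudaDeviceSynchronize();
    cudaFree(d_data);

    std::printf("CPU used %d threads; GPU launched %d threads\n",
                num_cpu_threads, ((n + 255) / 256) * 256);
    return 0;
}
```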