In any work system where several elements interact with one another, whether they are people or automated elements such as machines and processors, what matters is not only the individual performance of each element, but also that the communication between them is adequate at all times so that the system delivers the maximum possible performance. This is where what we know as the quality of service, or QoS, of a processor comes into play.
What do we understand by quality of service or QoS in a CPU?
The term quality of service in CPUs comes from the world of telecommunications, where it refers to the overall performance of a telephone network and therefore its ability to provide adequate service to the different terminals that participate in it. This definition is extrapolated to the world of multicore CPUs and APUs, where it refers to the communication of the different elements of the processor with each other and with the RAM through a common central hub, traditionally known as the Northbridge. The QoS of a multicore CPU therefore depends on the performance of that central hub, whether it is a classic crossbar switch or the switches and network interfaces (NICs or SmartNICs) of a network-on-chip (NoC).
The idea of quality of service is that not only the cores, but also the rest of the components of the processor, have the communication lines they need to function properly at all times. The design must therefore ensure that the whole interconnect system holds up not only at the expected performance peaks, but also that no imbalances appear in the internal communication of the CPU.
In a multithreaded system in which each core of the CPU is in charge of a different execution thread, it is important for performance that no core has communication problems with the rest of the cores and components, especially because of the intercommunication between the different execution threads of the programs. This is even more important for server CPUs that run multiple operating systems in virtualized form for multiple remote clients.
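As a rough illustration of the cost of inter-thread communication (a minimal sketch, not a benchmark of any specific CPU; all names are our own), the following Python snippet measures the round-trip time of a message between two execution threads. On real hardware, that round-trip cost depends directly on the interconnect and its quality of service:

```python
import threading
import queue
import time

def worker(inbox: queue.Queue, outbox: queue.Queue) -> None:
    """Echo worker: receive a message and send it straight back."""
    while True:
        msg = inbox.get()
        if msg is None:  # sentinel: shut down
            break
        outbox.put(msg)

to_worker: queue.Queue = queue.Queue()
from_worker: queue.Queue = queue.Queue()
t = threading.Thread(target=worker, args=(to_worker, from_worker))
t.start()

# Measure the average round-trip time of a message between two threads.
ROUNDS = 1000
start = time.perf_counter()
for i in range(ROUNDS):
    to_worker.put(i)
    from_worker.get()
elapsed = time.perf_counter() - start

to_worker.put(None)  # tell the worker to stop
t.join()

print(f"avg round-trip: {elapsed / ROUNDS * 1e6:.1f} microseconds")
```

Every message here crosses thread boundaries twice, so the measured time bundles scheduling, synchronization and, underneath it all, the traffic through the processor's internal interconnect.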
Parkinson’s Law and the QoS of a CPU
Obviously, for a multicore processor to offer quality of service, we have to ensure that the bandwidth and latency of each of the elements are adequate to extract the maximum possible performance. Today the cores depend on the integrated Northbridge to achieve this. How do we measure that performance? Unlike performance laws in computing such as Amdahl's Law, what applies here is Parkinson's Law, whose general statement is as follows:
Work expands so as to fill the time available for its completion.
In principle this does not seem to have much to do with computing, but bear in mind that the QoS of a CPU depends on how the shipments of information between the different parts are organized. Just as in real life, when we plan something, unexpected elements appear that delay our project; the same happens in the internal communication of a CPU. To guarantee quality of service, every element that can increase latency or reduce internal bandwidth must therefore be taken into account.
However, when designing a new processor, not all the elements that determine quality of service depend on the hardware.
The quality of service of a CPU cannot be guaranteed
Our hardware is useless without software to run, and at the same time software is nothing without hardware to run on. They are two parts that complement each other symbiotically, and neither can live without the other. This implies that there are factors in the software that also affect the quality of service of a CPU.
On the hardware side there are elements such as the latency between the different memory pools in a non-unified addressing system, the communication of the cache with the different cores and the way they use it, as well as other architecture-level factors that, due to various design limitations, could not be implemented in a better way.
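The memory-latency point can be made concrete with a small sketch (the values of N and STRIDE are arbitrary choices of ours): it performs exactly the same additions over the same list, first sequentially and then with a large stride. In CPython the interpreter overhead dominates the timings, but in a compiled language the same contrast exposes cache and prefetcher behaviour, which is part of what a CPU's internal communication has to absorb:

```python
import time

N = 1 << 20            # about one million elements
data = list(range(N))

# Sequential pass: the access pattern that caches and prefetchers like best.
start = time.perf_counter()
sequential_sum = 0
for x in data:
    sequential_sum += x
sequential_time = time.perf_counter() - start

# Strided pass: the same additions, but jumping through memory.
STRIDE = 4099          # prime stride, so every element is still visited exactly once
strided_sum = 0
start = time.perf_counter()
for offset in range(STRIDE):
    for i in range(offset, N, STRIDE):
        strided_sum += data[i]
strided_time = time.perf_counter() - start

# Same work, same result: only the order of the memory accesses differs.
assert sequential_sum == strided_sum
print(f"sequential: {sequential_time:.3f} s, strided: {strided_time:.3f} s")
```

The arithmetic is identical in both passes; any difference in time comes purely from how the data is reached, which is exactly the kind of hardware-level factor the paragraph above describes.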
On the software side, the program in charge of managing the execution threads of the different programs is the operating system, so its scheduling will affect the quality of service a CPU can provide. Depending on how well optimized, and therefore how well programmed, an operating system is, we will see different performance between different operating systems running on the same CPU.
Nor can we forget the quality of the code used when developing the programs. We can write a million different programs to perform a specific task, but not all of them use equally efficient algorithms and data structures, nor will the third-party libraries and external APIs they rely on be equally good. All of this means that perfect quality of service in a CPU is not achievable today; it is really a utopia.
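The point about code quality can be illustrated with two functions that solve the same task, checking a list for duplicates, with very different algorithmic cost (the function names and input size are ours, chosen for illustration):

```python
import time

def contains_duplicate_quadratic(values):
    """O(n^2): compare every pair of elements."""
    for i in range(len(values)):
        for j in range(i + 1, len(values)):
            if values[i] == values[j]:
                return True
    return False

def contains_duplicate_linear(values):
    """O(n): a single pass with a hash set."""
    seen = set()
    for v in values:
        if v in seen:
            return True
        seen.add(v)
    return False

data = list(range(3000))  # no duplicates: the worst case for both functions

start = time.perf_counter()
slow_result = contains_duplicate_quadratic(data)
slow_time = time.perf_counter() - start

start = time.perf_counter()
fast_result = contains_duplicate_linear(data)
fast_time = time.perf_counter() - start

# Identical answers, very different cost on the very same CPU.
assert slow_result == fast_result
print(f"quadratic: {slow_time:.4f} s, linear: {fast_time:.4f} s")
```

Both programs are "correct", yet one loads the processor thousands of times more than the other, which is why no amount of hardware-side QoS can compensate for poorly written software.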
Is there an ideal QoS CPU?
First of all, we must bear in mind that ideal quality of service does not exist, since it is impossible to reach the theoretical maximums required for 100% communication performance between the different parts. What would a CPU with perfect quality of service have to offer? The following elements:
- There are no problems in the CPU's access to data and instructions; everything works correctly. We know this is impossible in the fetching of data and instructions by the CPU.
- There are no hardware or software interrupts that stop running processes. This is impossible, since without interrupts the rest of the hardware would not be able to interact with the CPU.
- The operating system has perfect code when it comes to managing the different threads of the CPU in an orderly way. This is another thing that is impossible to achieve in an operating system.
So the ideal QoS CPU does not really exist, but getting as close as possible to it remains a challenge for engineers.