
Why would we end up using a second graphics card for streaming?

We have to bear in mind that streaming works in reverse to cloud gaming: instead of us receiving video generated by a server that runs the game, it is our PC that generates that content and broadcasts it to third parties. That means the PC needs enough power and memory not only to play the game, but also to encode the video. If your graphics card has too little memory or is not powerful enough, the broadcast suffers. The intuitive conclusion is that we need more powerful graphics hardware; however, this does not necessarily have to be the case.

Using the second graphics card for video streaming

Graphics cards are not only used to generate beautiful graphics; thanks to their compute pipeline they have many more uses. Most of those uses, however, belong to worlds far removed from gaming, such as scientific computing or mining, and when two GPUs are paired for games they are usually required to be identical models. Yet there is one application that is extremely useful, that will let you put a second graphics card to work streaming content, and that will be crucial in the coming years.


We are referring to encoding on the GPU, and by that we do not mean using the small fixed-function video encoder these cards include. Rather, the idea is to use the full compute power of the second graphics card for streaming, instead of having it assist the first card in generating the game's graphics. The point is to exploit the massively parallel processors of the graphics card to convert the blocks of each image into the compressed blocks of a video format.
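The block-by-block nature of video compression described above is what makes it map so well onto parallel hardware. The following is a toy sketch, not real encoder or GPU code: each tile of a frame is "compressed" independently (here, reduced to its average value), with a thread pool standing in for the GPU's many compute units.

```python
# Conceptual sketch only: real codecs use transforms and entropy coding,
# but the key property shown here is the same -- every block of the frame
# can be processed independently, and therefore in parallel.
from concurrent.futures import ThreadPoolExecutor

def compress_block(block):
    """Toy 'compression': reduce a tile of pixels to its mean value."""
    flat = [px for row in block for px in row]
    return sum(flat) // len(flat)

def encode_frame(frame, block_size=2):
    """Split the frame into block_size x block_size tiles and compress
    each tile independently -- the part a GPU would do in parallel."""
    blocks = []
    for y in range(0, len(frame), block_size):
        for x in range(0, len(frame[0]), block_size):
            blocks.append([row[x:x + block_size]
                           for row in frame[y:y + block_size]])
    with ThreadPoolExecutor() as pool:
        return list(pool.map(compress_block, blocks))

frame = [[10, 10, 50, 50],
         [10, 10, 50, 50],
         [90, 90, 30, 30],
         [90, 90, 30, 30]]
print(encode_frame(frame))  # four 2x2 tiles -> [10, 50, 90, 30]
```

Because no tile depends on another, adding more compute units scales the work almost linearly, which is exactly why a GPU's thousands of ALUs suit the task.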

Of course, ideally we would dedicate all of one card's power to encoding, but then none would be left for the game, and vice versa. Hence the idea of separating the two functions onto two different chips so that there is no contention: each GPU can work smoothly with its own video memory.

How does it work?

This can be done through a very simple process:

  • Graphics card A generates the game frame, which is stored in its video memory, ready to be streamed.
  • Through a DMA channel, graphics card B communicates with A's memory, reads the latest frame, and copies the data to its own memory over the PCI Express interface.
  • With the information B now has, the video encoding process begins using all of B's power, freeing the first graphics card from this task and not requiring the central processor at any point, which in turn can work more comfortably.
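The three steps above can be simulated in a few lines. This is purely illustrative Python, not driver code: a list stands in for card A's video memory, a queue stands in for the DMA transfer over the PCI Express link, and card B's "encoder" just tags each frame it receives.

```python
# Toy simulation of the rendering -> DMA copy -> encoding pipeline.
# No real GPUs are involved; the names mirror the article's description.
from queue import Queue

def render_on_card_a(n_frames):
    """Card A: produce frames and keep them in 'its video memory'."""
    return [f"frame-{i}" for i in range(n_frames)]

def dma_copy(vram_a, pcie_link):
    """DMA engine: read frames from A's memory and push copies across
    the simulated PCI Express link."""
    for frame in vram_a:
        pcie_link.put(frame)
    pcie_link.put(None)  # sentinel: end of stream

def encode_on_card_b(pcie_link):
    """Card B: pull frames from its local copy and 'encode' them,
    with no CPU involvement in the hot path."""
    encoded = []
    while (frame := pcie_link.get()) is not None:
        encoded.append(f"h264({frame})")
    return encoded

vram_a = render_on_card_a(3)
link = Queue()
dma_copy(vram_a, link)
print(encode_on_card_b(link))  # ['h264(frame-0)', 'h264(frame-1)', 'h264(frame-2)']
```

The design point is that A never waits on the encoder: it only writes frames to its own memory, and everything downstream is B's problem.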

The process does not require a high-powered graphics card; even a model that draws only the 75 W supplied by the PCI Express slot could do this job, and so could the graphics integrated into your processor.

So why isn’t it used more often?

On paper this all sounds very good, but the programs responsible for broadcasting content over the internet need to be designed for it, which means optimizing their code for the use of a second graphics card, and believe us, that is not easy. It requires using the DMA engines on both graphics cards and synchronizing them manually, and NVIDIA, Intel, and AMD each have their own engines with their own instruction sets. In the end, nine versions of the same program would be needed just for the synchronization between the GPU that generates the frame and the one that encodes it.
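The "nine versions" figure is simple combinatorics: three possible vendors on the rendering side times the same three on the encoding side.

```python
# Enumerate every (rendering, encoding) vendor pairing a streaming app
# would have to support with vendor-specific synchronization code.
from itertools import product

vendors = ["Intel ARC", "AMD Radeon", "NVIDIA GeForce"]
pairs = list(product(vendors, repeat=2))  # (rendering, encoding) pairs
print(len(pairs))  # 9
```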

Rendering GPU      Encoding GPU
Intel ARC          Intel ARC
Intel ARC          AMD Radeon
Intel ARC          NVIDIA GeForce
AMD Radeon         Intel ARC
AMD Radeon         AMD Radeon
AMD Radeon         NVIDIA GeForce
NVIDIA GeForce     Intel ARC
NVIDIA GeForce     AMD Radeon
NVIDIA GeForce     NVIDIA GeForce

The programs in charge of encoding the video are not the problem, since they can be written in high-level shader languages such as HLSL or GLSL and therefore cover every combination with common code. The complication lies rather in the synchronization between the two GPUs needed to carry out the task, and for that, close collaboration between the manufacturers is necessary.
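One software-side way to tame the combination problem is the classic one: keep the encoder kernels portable, as the shader-language point above suggests, and hide each vendor's DMA and synchronization quirks behind a single interface. The class names below are purely illustrative, not any real driver API.

```python
# Hypothetical abstraction layer: the streaming app codes against one
# interface, and only the thin vendor adapters differ per GPU.

class DmaEngine:
    """Vendor-neutral interface for moving a frame between GPUs."""
    def copy_frame(self, frame):
        raise NotImplementedError

class IntelDma(DmaEngine):
    def copy_frame(self, frame):
        return ("intel-dma", frame)

class AmdDma(DmaEngine):
    def copy_frame(self, frame):
        return ("amd-dma", frame)

class NvidiaDma(DmaEngine):
    def copy_frame(self, frame):
        return ("nvidia-dma", frame)

def stream_frame(src: DmaEngine, frame):
    """App-level code: one code path regardless of which vendors are paired."""
    transport, data = src.copy_frame(frame)
    return f"encoded({data}) via {transport}"

print(stream_frame(AmdDma(), "frame-0"))  # encoded(frame-0) via amd-dma
```

With an abstraction like this, the app itself needs one version, and only the three small adapters are vendor-specific, which is precisely where manufacturer cooperation would matter.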

Currently, widely used applications such as Streamlabs OBS do not have this capability, and the only way to achieve it is with a second PC, which means an additional cost for anyone interested in broadcasting over the internet. The ideal would be not to complicate things, but to make them more accessible and simple.

The secret weapon of Intel and AMD: use the integrated GPU for streaming

As we said before, streaming with a second graphics card does not require much power, so it would even be possible to do it with the GPU built into the processor. The problem is the need for DMA units that communicate between the two, which normally do not exist: when the more powerful discrete graphics card takes over rendering the game, the integrated GPU simply sits idle. However, this is something that can be solved in future models.

One of the things Intel wants to do with the pairing of its ARC graphics and its Core processors is what it has christened Deep Link, whose main function is precisely to use the iGPU to assist in video encoding for content streaming. That means the user does not have to buy a second graphics card. It is also an ideal scenario for Intel, since it takes the work off the shoulders of streaming-application developers and gives users a reason to buy an all-Intel pairing.

The other big manufacturer that can do this is AMD; let's not forget that both Ryzen CPUs and Radeon graphics cards come from the same company, and we have already seen similar moves with SmartShift, which works along the same lines as Intel's Deep Link. For the moment AMD has not announced this functionality, but there is little doubt that Lisa Su's company will also apply it: after all, both vendors' interest is that you buy both products under their brand.

Is this the end of video capture?

In the professional world of internet video broadcasting, hardly anyone uses video capture devices anymore, since the power of graphics cards and their ability to work on large amounts of data in parallel make them ideal for this type of task. What's more, they achieve far better results than even several internal capture cards, at a much lower infrastructure cost.


If we turn to the consumer market, most capture devices have the problem of being external and depending on the speed of the USB port, which adds latency to the process; they also do not have much power, especially those that cannot use USB-C. In many cases this makes them a burden on the CPU, since they do not handle the encoding well either. So the idea of having a second graphics card for streaming is not unreasonable, and even less so when that GPU is the one integrated into the CPU itself, implying no additional cost for the user.

The only problem we see with this approach? The graphics hardware in laptop processors is much more powerful than that found inside desktop processors. In any case, we can connect a second card to the motherboard of our tower, and this could be a way out for entry-level cards, usually relegated to office and school machines. That market is in danger of disappearing, and this would undoubtedly be a way to safeguard it. Of course, it will be up to the manufacturers to automate certain processes in their drivers, and up to application developers to make the relevant changes.
