Representing a three-dimensional world on a two-dimensional screen has its limitations, not only because of the limited viewing angle, but also because a series of perspective aberrations appear. The ideal solution would be a graphics system designed for a virtual reality headset rather than for a conventional flat screen. Here, however, we are going to focus on the problem of textures.
What is the oblique texture problem?
One of the visual limitations of current games, at least those that render polygonal 3D scenes, has to do with textures on surfaces seen at certain angles relative to the camera. In many cases they lose definition and end up looking blurred, which greatly tarnishes the visual quality of a game.
Of course, we have to start from the fact that a texture is a 2D image that is mapped, texel by texel, onto the fragments, which are nothing more than the pixel-sized pieces of the polygons that make up each element of the scene once projected into two-dimensional screen space. The problem arises when a surface sits at an oblique angle to the camera: standard texture filtering then looks really bad, blurring away even fine detail. However, this has a very simple explanation.
Let’s imagine for a moment a 3D scene that shows a flat, uniformly colored wall with a painting in the center. If we look at that painting head-on, there is a 1:1 mapping between screen pixels and texels, and therefore it looks right. On the other hand, if we rotate the camera and with it the viewing angle, that relationship is completely broken. Each pixel’s footprint on the texture is no longer square, but has become longer and thinner at the same time. That is, anisotropy has appeared, and the sampling characteristics change with the viewing angle.
For example, a surface whose texture covers 256 x 256 screen pixels when viewed from the front covers far fewer pixels when viewed obliquely. The renderer therefore samples the texture at fewer points and effectively reconstructs it at a lower resolution. In other words, we are really looking at a surface with much less definition, because there are fewer information points; add the interpolation of texture filtering on top of that, and we can already imagine what the problem with oblique textures is.
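The shrinking described above can be sketched with a little arithmetic. This is an illustrative approximation, not how a real GPU computes it: it assumes a 1:1 pixel-to-texel mapping when the surface is viewed head-on, and shrinks the projected width roughly with the cosine of the tilt angle.

```python
import math

def visible_texels(texture_width, angle_degrees):
    """Rough estimate of how many screen pixels a texture's width covers
    when the surface is tilted away from the camera.

    Assumes 1:1 pixel-to-texel mapping at 0 degrees; the projected
    width shrinks approximately with the cosine of the tilt angle.
    """
    return int(texture_width * math.cos(math.radians(angle_degrees)))

# A 256-texel-wide surface viewed head-on and at increasing tilt:
for angle in (0, 45, 60, 80):
    print(angle, visible_texels(256, angle))
# At 60 degrees only about half the texels survive; at 80 degrees,
# fewer than a fifth.
```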
Graphics card hardware is certainly limited
In each of the GPU cores there is a texture unit. What it does is take the 4 adjacent texels around a sample point and blend them with an interpolation algorithm called bilinear filtering, which also serves as the building block for more complex modes such as anisotropic filtering. These are features developers do not have to program into their games, because they are automated by these fixed-function units.
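To make the blending concrete, here is a minimal software sketch of bilinear filtering on a single-channel texture. Real texture units work in normalized coordinates, handle wrap modes, and operate on multi-channel colors; this just shows the core 4-texel interpolation.

```python
def bilinear_sample(texture, u, v):
    """Sample a 2D texture (list of rows of floats) at fractional
    texel coordinates (u, v) by blending the 4 surrounding texels."""
    x0, y0 = int(u), int(v)                  # top-left of the 2x2 footprint
    x1 = min(x0 + 1, len(texture[0]) - 1)    # clamp at the texture edge
    y1 = min(y0 + 1, len(texture) - 1)
    fx, fy = u - x0, v - y0                  # fractional position inside it

    # Interpolate horizontally on both rows, then vertically between them.
    top = texture[y0][x0] * (1 - fx) + texture[y0][x1] * fx
    bottom = texture[y1][x0] * (1 - fx) + texture[y1][x1] * fx
    return top * (1 - fy) + bottom * fy

# A tiny 2x2 "texture": sampling halfway between all 4 texels averages them.
tex = [[0.0, 1.0],
       [1.0, 0.0]]
print(bilinear_sample(tex, 0.5, 0.5))  # 0.5
```

The GPU does exactly this kind of weighted average in fixed-function hardware, once per sample, which is why it costs the game programmer nothing.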
To avoid recomputing the same texture at lower resolutions on the fly, pre-scaled versions of it, known as mip maps, are loaded into memory alongside the original; these are generated during the creation of the game. Mip maps are not only used for distant textures: anisotropic filtering combines several mip map samples to approximate textures at oblique angles. The approximation is not perfectly precise, so it works best on nearby surfaces, but it avoids the distance-related blurring that would otherwise appear on far-away objects.
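One reason mip maps are so widely used is that the whole pre-scaled chain is cheap to store. A quick sketch of the arithmetic (for a square texture, halving each level down to 1x1):

```python
def mip_chain_texels(size):
    """Total texel count of a square texture plus its full mip chain,
    where each level halves width and height down to 1x1."""
    total = 0
    while size >= 1:
        total += size * size
        size //= 2
    return total

base = 256 * 256                 # texels in the original texture
chain = mip_chain_texels(256)    # original plus all smaller levels
print(chain / base)              # ~1.333: the chain adds only ~33% memory
```

The geometric series 1 + 1/4 + 1/16 + ... converges to 4/3, which is why the entire chain costs only about a third more memory than the base texture alone.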
The ideal would be to keep in memory a version of each texture for every possible viewing angle, but this would not only be a titanic job for artists, it would also multiply the memory required by up to 90, since we would need a version for each degree of camera rotation. Enabling anisotropic filtering, by contrast, is practically free and requires zero lines of code. However, there is a much better and more advanced solution.
The solution to oblique textures is RIP mapping, but…
RIP mapping is nothing less than the definitive solution to the oblique texture display problem and is, therefore, a more advanced alternative to the anisotropic filtering used in today’s games. Where a mip map stores only uniformly downscaled versions of a texture, a RIP map also stores versions downscaled along just one axis, which match the stretched, anisotropic pixel footprints that oblique angles produce. If it were implemented in hardware, the visual problem would disappear completely. However, there are reasons it has not been adopted yet, and they have to do not only with computing cost, but also with storage and bandwidth, which can be up to 4 times higher.
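As a back-of-the-envelope sketch of where that "4 times" figure comes from: a RIP map stores one version for every combination of per-axis halvings, not just the uniform ones a mip chain keeps.

```python
def ripmap_texels(w, h):
    """Total texels of a RIP map: one version for every combination of
    per-axis halvings (w, w/2, ..., 1) x (h, h/2, ..., 1)."""
    total = 0
    x = w
    while x >= 1:
        y = h
        while y >= 1:
            total += x * y
            y //= 2
        x //= 2
    return total

base = 256 * 256
print(ripmap_texels(256, 256) / base)   # ~4x the base texture
```

Compare that with the roughly 1.33x cost of a plain mip chain: the RIP map triples the extra storage, and every anisotropic lookup has to fetch from these extra levels, which is where the bandwidth pressure comes from.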
Current graphics cards are already limited in VRAM bandwidth, and textures have to be streamed through each core’s internal caches to sustain a high enough sample rate; now imagine pushing RIP maps through that pipeline. And no, this is not merely a hypothetical limitation. Let’s not forget that we have reached the point where it is more expensive to move data than to compute with it, and this is one of the clearest examples. There does not seem to be any intention of fixing this problem any time soon, unless a solution is found through AI. Which is a shame, since it is one of the visual problems that most tarnishes the gaming experience at a graphical level.