Nvidia held a special presentation yesterday. Brief, but with plenty of fanfare, it was reserved for a select few specialized technology outlets and showcased their new upscaling technology: DLSS 5. Everyone was eagerly anticipating it because, despite its flaws, DLSS is a great tool. It allows less capable computers to run cutting-edge video games at resolutions higher than they could otherwise manage, with minimal compromises.
The problem is that DLSS 5 is not that. What they presented is something very different from DLSS 4.5, and something that has been almost universally hated for its results. But why? To answer that, we need to understand what DLSS is, what it is not, and what they are trying to sell us.
What is DLSS?
DLSS, Deep Learning Super Sampling, is a technology that Nvidia launched in September 2018 as a key feature of the GeForce RTX 20 series GPUs. Its key contribution was the ability to use deep learning to upscale lower-resolution images to a higher resolution without losing fidelity or graphical quality. This allowed for higher graphical settings and FPS: the game could be rendered at a relatively low resolution that the system handled comfortably, then upscaled to the resolution of the TV or monitor in use.
Each new iteration has introduced new elements. Version 2.0 added its own anti-aliasing, smoothing out the jagged edges seen in the first iteration. Version 3.0 introduced frame generation, the ability to invent frames between existing ones to gain fluidity. And version 4.0 reduced memory usage and emphasized lighting detail, particularly through ray tracing.
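To make the basic idea concrete, here is a minimal sketch of the naive baseline that a learned upscaler like DLSS competes against: rendering at a low internal resolution and scaling up with plain bilinear interpolation. This is purely illustrative (DLSS's actual network is proprietary and far more sophisticated); the function below operates on a small grid of brightness values.

```python
# Illustrative sketch only: the naive bilinear baseline that a learned
# upscaler aims to beat. Render at a low internal resolution, then
# scale up to the display resolution.

def bilinear_upscale(image, scale):
    """Upscale a 2D grid of brightness values by a scale factor."""
    src_h, src_w = len(image), len(image[0])
    dst_h, dst_w = int(src_h * scale), int(src_w * scale)
    out = []
    for y in range(dst_h):
        # Map each output pixel back to fractional source coordinates.
        sy = min(y / scale, src_h - 1)
        y0, y1 = int(sy), min(int(sy) + 1, src_h - 1)
        fy = sy - y0
        row = []
        for x in range(dst_w):
            sx = min(x / scale, src_w - 1)
            x0, x1 = int(sx), min(int(sx) + 1, src_w - 1)
            fx = sx - x0
            # Blend the four nearest source pixels.
            top = image[y0][x0] * (1 - fx) + image[y0][x1] * fx
            bot = image[y1][x0] * (1 - fx) + image[y1][x1] * fx
            row.append(top * (1 - fy) + bot * fy)
        out.append(row)
    return out

# Render internally at 2x2, display at 4x4 (a 2x upscale).
low_res = [[0.0, 1.0],
           [1.0, 0.0]]
high_res = bilinear_upscale(low_res, 2)
print(len(high_res), len(high_res[0]))  # 4 4
```

Bilinear interpolation can only blur between existing pixels; the whole pitch of DLSS was that a trained model could instead reconstruct plausible detail the low-resolution render never contained.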
Although DLSS has obvious advantages, allowing relatively modest computers to run current games, it also has disadvantages that cannot be overlooked. Frame generation adds input lag: the commands we issue through the controller take a certain number of frames to translate into images, which can be fatal both for competitive games and for games that demand reflexes or precision. And because DLSS does such a good job of keeping games running, many developers have stopped optimizing their games efficiently, relying on DLSS or other upscaling technologies to do that work for them.
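A rough back-of-envelope model shows why frame generation costs latency (this is a simplification for illustration, not Nvidia's actual pipeline): interpolation-style frame generation must hold back the newest real frame until the in-between frame is computed, so input-to-photon delay grows by roughly one real frame time even as the displayed FPS doubles.

```python
# Simplified latency model: generating one interpolated frame between
# each pair of real frames doubles displayed FPS, but the pipeline
# must buffer one real frame before anything in between can be shown.

def frame_time_ms(fps):
    return 1000.0 / fps

base_fps = 60
displayed_fps = base_fps * 2             # one generated frame per real frame
added_latency = frame_time_ms(base_fps)  # roughly one buffered real frame

print(f"displayed: {displayed_fps} fps")
print(f"extra input lag: ~{added_latency:.1f} ms")
```

Under these assumptions, a 60 FPS game presented at 120 FPS still responds with roughly 17 ms of extra delay, which is exactly the gap a competitive player feels.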
That is why there was such enthusiasm to see what the new version of DLSS would bring. Each version has improved on the previous one, even if there have been issues since 3.0 that perhaps have not been discussed in the detail they deserve. And now, with the presentation of 5.0, the public has erupted.
The disaster of DLSS 5
Nvidia presented DLSS 5 with a series of videos and explanations that can be summarized as follows: the new version is less efficient, more resource-demanding, and its contribution is a generative AI filter applied over games. Although they rushed to insist that developers still have control over their images, that is neither what the presentation video showed nor what is being sold to us: DLSS 5 completely changes the scenes, making us play a very different game. And for the vast majority of the audience, a much uglier one.
The most obvious problem is the faces. The technology was demonstrated with Resident Evil Requiem, Hogwarts Legacy, Starfield, and EA FC 26, among others, and if one thing stood out throughout the presentation, it was that activating DLSS 5 made the characters look different. It changed their physical features, reinterpreted their appearance, and delivered a supposedly hyper-realistic version of them. It is not.
Hyperrealism is a very specific form of art: the meticulous recreation of a photo or a scene from reality, to which non-existent details are added to enhance its narrative qualities. Hyperrealism is, by definition, not realistic: it does not aim to reproduce reality, but to extract what is hidden in it by exposing its small details. DLSS 5 is not that. DLSS 5 is a layer of generative AI that changes images through an algorithmic approximation, essentially a random calculation: there is no intention behind the changes. It does not enhance what is hidden through faithful reproduction, nor is it grounded in anything that exists in reality. It is not hyperrealistic; it is just ugly.
That is why all the faces look the same and are, very clearly, AI-generated. Because the model has biases, and because it does not seek to enhance what already exists. It turns every woman into a model with an infantilized face and every man into a subject with defined bones and muscles. If someone has dark skin, hair, or eyes, it lightens them; if a man is young or a woman is an adult, it ages or rejuvenates them. It does not interpret and does not understand. It only applies the biases it has been given.
Nor is it better when we talk about the lighting or the settings, something some people have defended more. The lighting lacks shading and is inconsistent: every object in a scene seems lit by its own LED spotlight, and instead of realistic it looks like something else entirely, as if filmed in a photography studio.
That is why it produces the uncanny valley effect and why it has received such a negative reception. There is nothing here that is attractive, eye-catching, or interesting. The faces are strange and identical to each other; the scenes are unnatural, like very, very detailed models filmed in a studio. Worse, it is not even coherent. The face of Grace Ashcroft, the protagonist of Resident Evil Requiem, appears three times during the presentation, and each time it is a different face when DLSS 5 is activated. When it is not activated, Grace Ashcroft has only one face. Grace Ashcroft’s.
A terrible reception
Despite Nvidia presenting it with all the fanfare possible, even inviting Digital Foundry to their offices to release a fifteen-minute video praising the technology (a video that, even so, could not avoid admitting it did a poor job with the faces), the reception has been disastrous. And everyone has gone out to do damage control.
According to Nvidia, it is just a prototype; they are working on it, and it will improve, because artists will retain control over the final result: they will be able to decide how aggressive the filter is. But that is not control. Choosing the intensity of a filter is not the same as deciding what changes it makes and to what extent; it leaves us with a dial and a question mark. So why present it at all? If it is just a prototype, if it is flawed, if artists will have more control, if they know all this, why present it? Because they thought people would applaud. Because Nvidia has interests at stake.
They thought so, in part, because DLSS has been celebrated almost uncritically until now. It is a technology with real disadvantages, yet more often than not any criticism of it is dismissed. That gave them the confidence to release this product. But there is more: thanks to AI, Nvidia is now the most valuable publicly traded company in the world. If anyone has an interest in pushing AI into every aspect of life, including gaming, and making sure it succeeds there, it is Nvidia. Their business and their dominance over everyone else depend on it. So it should not surprise us: we have handed them the opportunity to do whatever they want, and they have every reason to take it.
But it is normal for people to have reacted so violently against DLSS 5. It makes games look worse; it requires an obscene investment (as of today it runs on two 5090s, one dedicated solely to DLSS 5, though Nvidia has thought of that too: why not use their cloud gaming service, GeForce NOW?); and it tarnishes a technology that everyone liked. That is why we need to be more critical. And that is why we must not refuse to listen when others point out the possible problems of a technology or a company. Because DLSS 5 is a disaster, but it was a foreseeable one.