Cyberpunk 2077 2.0 is coming soon - the first showcase for DLSS 3.5 ray reconstruction, integrated into a full DLSS package including super resolution and frame generation, all combining to produce a state-of-the-art visual experience. In this roundtable discussion, we explore how ray reconstruction works, how it was developed and its applications beyond path-traced games. We also talk about the evolution of the original DLSS, the success of DLSS in the modding community and the future of machine learning in PC graphics.
Many thanks to all participants in this roundtable chat: Bryan Catanzaro, Vice President of Applied Deep Learning Research at Nvidia; Jakub Knapik, VP of Art and Global Art Director at CD Projekt RED; GeForce evangelist Jacob Freeman; and Pedro Valadas of the PCMR subreddit.
00:00:00 Introduction
00:01:10 When did the DLSS 3.5 Ray Reconstruction project start and why?
00:04:16 How did you get DLSS 3.5 Ray Reconstruction up and running?
00:06:17 What was it like to integrate DLSS 3.5 for Cyberpunk 2077?
00:10:21 What are the new game inputs for DLSS 3.5?
00:11:25 Can DLSS 3.5 be used for hybrid ray tracing titles and not just path traced ones?
00:12:41 What is the target performance budget for DLSS 3.5?
00:14:10 Is DLSS a crutch for bad performance optimisation in PC games?
00:20:19 What makes machine learning specifically useful for denoising?
00:24:00 Why is DLSS naming kind of confusing?
00:27:03 What did the new denoising enable for Cyberpunk 2077’s graphical vision?
00:32:10 Will Nvidia still focus on performance without DLSS at native resolutions?
00:38:26 What prompted the change internally at Nvidia to move away from DLSS 1.0 and pursue DLSS 2.0?
00:43:43 What do you think about DLSS mods for games that lack DLSS?
00:49:52 Where can machine learning go in the future for games beyond DLSS 3.5?
There’s a really interesting discussion linked here (~34:59) around native 4K raster performance and beauty versus AI-generated data. Here’s a snippet: