The impact of DLSS on the stunning world of Naraka: Bladepoint


If you aim to achieve clean 4K results, you can simply increase the rendering resolution to alleviate aliasing. But rendering at 8K means four times the pixels, so rendering becomes roughly four times slower, and you will likely run into texture bandwidth and memory issues as well.
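The cost of brute-force supersampling is easy to quantify, since it scales with pixel count. A small sketch (a hypothetical helper, not from the game's code) makes the 4K-to-8K jump concrete:

```csharp
// Comparing pixel counts shows why supersampling scales poorly:
// 8K (7680x4320) renders four times the pixels of 4K (3840x2160).
static class ResolutionCost
{
    public static double PixelRatio(int fromWidth, int fromHeight,
                                    int toWidth, int toHeight)
        => (double)toWidth * toHeight / ((double)fromWidth * fromHeight);
}
```

For example, `ResolutionCost.PixelRatio(3840, 2160, 7680, 4320)` evaluates to 4.0, matching the "four times slower" figure above.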

Another way to combat aliasing is Multisample Anti-Aliasing (commonly known as MSAA), which is supported directly by GPU hardware. Instead of testing only the center sample of each pixel, MSAA tests coverage at multiple sub-pixel sample positions. The triangle fragment's color contribution is then weighted by how many of the pixel's samples the primitive covers, which smooths the edges.
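The coverage-weighted resolve described above can be sketched in a few lines. This is a simplified, hypothetical helper for a single channel; real MSAA resolve happens in hardware, per color channel:

```csharp
static class MsaaSketch
{
    // A triangle covering k of a pixel's N sub-pixel samples contributes
    // k/N of its color; the remainder comes from whatever lies behind it.
    public static float ResolveCoverage(float triangleColor, float backgroundColor,
                                        int coveredSamples, int totalSamples)
    {
        float coverage = (float)coveredSamples / totalSamples;
        return triangleColor * coverage + backgroundColor * (1f - coverage);
    }
}
```

With 4x MSAA, a white triangle edge covering 2 of 4 samples over a black background resolves to mid-gray, which is exactly the edge-smoothing effect described above.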

Temporal Anti-aliasing (TAA) is another method that accumulates samples across multiple frames. This method adds a different jitter to each frame in order to change the sampling position. With the help of motion vectors, it then blends the color results between frames.
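The per-frame blend is typically an exponential moving average. A minimal per-pixel sketch (a hypothetical helper; in a real implementation this runs in a shader, and the history color is fetched by reprojecting the pixel with its motion vector):

```csharp
static class TaaSketch
{
    // Blend the reprojected history color with the current frame's color.
    // A small alpha keeps most of the accumulated history, so the pixel
    // converges toward the supersampled result over many jittered frames.
    public static float BlendTemporal(float history, float current, float alpha = 0.1f)
        => history + (current - history) * alpha;
}
```

Starting from an empty history and a stable current value, repeated blending converges on that value, which is how the jittered samples accumulate into a smooth result over time.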

If the history color for each pixel of the current frame can be located, that accumulated information can be reused instead of recomputing everything from scratch each frame.

Jitter generally means that the sampling position in the pixel is slightly adjusted, so that samples can be accumulated across frames instead of attempting to solve an undersampling problem all at once.

That’s why 24 Entertainment turned to DLSS, which let them lower the overall rendering resolution to address these performance issues while still producing high-quality results with the desired smooth edges.

Here, the recommended sample pattern is the Halton sequence, a low-discrepancy sequence that looks random but covers the sample space more evenly than purely random points.

In practice, applying Jitter Offset can be rather intuitive. Consider these steps to do it effectively:

Step 1: Generate samples from the Halton sequence for a specific camera, according to its settings. The output jitter should be between -0.5 and 0.5.

 Vector2 temporalJitter = m_HalotonSampler.Get(m_TemporalJitterIndex, samplesCount);
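The sampler used above can be sketched as a standard radical-inverse computation. This is a hypothetical standalone implementation for illustration, not 24 Entertainment's actual `m_HalotonSampler`:

```csharp
static class HaltonSketch
{
    // Radical inverse of `index` in base `b`: mirror the base-b digits of
    // the index around the decimal point to get a value in [0, 1).
    public static float Halton(int index, int b)
    {
        float result = 0f, f = 1f;
        while (index > 0)
        {
            f /= b;
            result += f * (index % b);
            index /= b;
        }
        return result;
    }

    // 2D jitter in [-0.5, 0.5): bases 2 and 3 give a low-discrepancy
    // pattern, as recommended for TAA-style sample accumulation.
    public static (float x, float y) Jitter(int frameIndex)
        => (Halton(frameIndex + 1, 2) - 0.5f,
            Halton(frameIndex + 1, 3) - 0.5f);
}
```

For base 2 the sequence runs 0.5, 0.25, 0.75, 0.125, ..., progressively filling the interval instead of clustering the way random samples can.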

Step 2: Store the jitter in a Vector4. The raw jitter goes in the xy components; the jitter multiplied by 2 and divided by the scaled resolution (converting it from pixel units to a projection-space offset) goes in the zw components.

These zw values are later used to modify the projection matrix, which globally affects the rendering result:

m_TemporalJitter = new Vector4(
    temporalJitter.x,
    temporalJitter.y,
    temporalJitter.x * 2.0f / UpSamplingTools.GetRTScalePixels(cameraData.pixelWidth),
    temporalJitter.y * 2.0f / UpSamplingTools.GetRTScalePixels(cameraData.pixelHeight));

Step 3: Apply the jitter to the projection matrix and set the resulting View-Projection matrix to the global property UNITY_MATRIX_VP. Shaders then work without any modification, because the vertex shader calls the same function to transform world positions to the screen.

var projectionMatrix = cameraData.camera.nonJitteredProjectionMatrix;
projectionMatrix.m02 += m_TemporalJitter.z;
projectionMatrix.m12 += m_TemporalJitter.w;

projectionMatrix = GL.GetGPUProjectionMatrix(projectionMatrix, true);
var jitteredVP = projectionMatrix * cameraData.viewMatrix;


