EE Rough

Research Question
How is raytracing an improvement over rasterization for realtime, physically accurate rendering?


Introduction

The world of 3D computer graphics is rapidly transforming: the recent shift towards the famed concept of realtime raytracing has generated a flurry of research. Behind the scenes, the entertainment industry has continued to discover innovative algorithms and models for accurate rendering of clouds, hair, fog, skin, and many other complex aspects of the real world.
Photorealistic rendering is a critical part of an immersive world. While many games and movies use stylized or cartoon looks for artistic purposes, this paper will focus on physically-based rendering.
Real-world measurements and units are employed so that objects look natural across a variety of conditions. Energy must be conserved: objects cannot reflect more light than they receive. Reflections and shadows must be correctly placed.

Topic in context, importance
Simulation
Entertainment is the most visible area for physically-based rendering; games and movies use it to increase the quality of their visuals.
Virtual reality is increasingly being used in diverse areas, from training emergency responders to practicing surgery. In these situations physical realism must be maintained to preserve immersion.


Theoretical / fundamental structure

Rasterization is traditionally the faster way of rendering in 3D; it has been used since the beginnings of computer graphics in the 1960s. All objects are made of triangles, which are the simplest shape to draw (they are always convex and planar). In rasterization, each triangle is individually drawn onto the screen by traversing the pixels that it covers. A depth buffer is used to ensure that closer objects are drawn on top of farther ones.

Ray tracing has long been used in non-realtime rendering, such as animated movies. It was first described by Arthur Appel in 1968. Ray tracing uses rays to calculate visibility: for each pixel on the screen, a ray is generated that points in a specific direction. Each ray is tested for intersection against the objects in the scene (which can be analytical primitives such as spheres, or triangles). The closest intersected object then transfers its color and attributes to the pixel on the screen.


Some raycasting is already present in rasterization-based pipelines: for example, Screen-Space Ambient Occlusion effectively casts short rays against the depth buffer.

Rasterization
For each triangle {
  For each pixel {
    If pixel in triangle then fill
  }
}

Ray tracing
For each pixel {
  For each triangle {
    If the pixel's ray hits the triangle then keep the closest hit
  }
  Fill pixel from the closest hit
}

Above is the high-level pseudocode for both frameworks. The main difference is the swapping of the inner and outer loops.
The outer loop can be efficiently parallelized in both frameworks: in rasterization each processing thread is assigned a triangle, while in ray tracing each thread is assigned a ray / pixel.

Texture operations have a similar structure in both frameworks: once a surface point and its texture coordinates are known, sampling and filtering proceed identically.

High-level comparison

As a rough illustration of the performance gap, a typical scene that rasterizes at 30 fps might run at around 5 fps when raytraced on the same hardware.

Rasterization
Memory cost: accessing the frame, depth, and texture buffers
Processor cost: triangle setup and calculating barycentric coordinates
The rasterization pipeline: Vertex processing => Triangle setup => Triangle projection => Pixel processing

Below is a minimal sketch of the pixel-processing step, assuming the triangle has already been projected to 2D screen coordinates. The edge-function formulation is standard, but the function names and buffer layout here are illustrative.
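#include <algorithm>

// Proportional to the signed area of triangle (a, b, p): positive when
// p lies to the left of edge a->b, so a point is inside a counter-
// clockwise triangle when all three edge functions are non-negative
// (screen coordinates with y pointing down flip the sign convention).
float edgeFunction(float ax, float ay, float bx, float by,
                   float px, float py) {
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax);
}

// Fill every pixel covered by the projected 2D triangle
// (x0,y0)-(x1,y1)-(x2,y2) in a width x height greyscale buffer.
void rasterizeTriangle(float x0, float y0, float x1, float y1,
                       float x2, float y2,
                       int width, int height, unsigned char* pixels) {
    // Only scan the triangle's bounding box, not the whole screen.
    int minX = std::max(0, (int)std::min({x0, x1, x2}));
    int maxX = std::min(width - 1, (int)std::max({x0, x1, x2}));
    int minY = std::max(0, (int)std::min({y0, y1, y2}));
    int maxY = std::min(height - 1, (int)std::max({y0, y1, y2}));
    for (int y = minY; y <= maxY; ++y) {
        for (int x = minX; x <= maxX; ++x) {
            float px = x + 0.5f, py = y + 0.5f;  // sample at pixel center
            // Each edge function is also the unnormalized barycentric
            // weight of the vertex opposite that edge.
            float w0 = edgeFunction(x1, y1, x2, y2, px, py);
            float w1 = edgeFunction(x2, y2, x0, y0, px, py);
            float w2 = edgeFunction(x0, y0, x1, y1, px, py);
            if (w0 >= 0 && w1 >= 0 && w2 >= 0)
                pixels[y * width + x] = 255;  // inside: fill the pixel
        }
    }
}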

Raytracing
Memory cost: traversing the acceleration tree (scattered, hard-to-predict reads)
Processor cost: tree traversal and ray-primitive intersection tests
The raytracing pipeline: Tree processing => Ray generation => Ray-scene intersection => Shading => More ray intersection & shading

Below is a minimal sketch of a ray-object intersection routine, using a sphere as the analytical primitive; the vector struct and helper names are illustrative.
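#include <cmath>

struct Vec3 { float x, y, z; };

float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Distance t along the ray (origin + t*dir, dir normalized) to the
// nearest hit with the sphere, or -1 on a miss. Solves the quadratic
// |origin + t*dir - center|^2 = radius^2 for t.
float intersectSphere(Vec3 origin, Vec3 dir, Vec3 center, float radius) {
    Vec3 oc = { origin.x - center.x, origin.y - center.y,
                origin.z - center.z };
    float b = dot(oc, dir);                 // half the linear coefficient
    float c = dot(oc, oc) - radius * radius;
    float disc = b * b - c;                 // quarter of the discriminant
    if (disc < 0) return -1.0f;             // no real roots: ray misses
    float t = -b - std::sqrt(disc);         // nearer of the two roots
    return (t > 0) ? t : -1.0f;             // hit behind the origin: miss
}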

Effect of greater scene complexity or more triangles
Rasterization is prone to overdraw: when many objects overlap along the view direction, the work spent shading the farther ones is wasted because they are overwritten by the closer ones. The impact of overdraw can be reduced by deferred rendering, which postpones shading until only the visible surface remains. More triangles require more threads, but each thread still does a bounded amount of work.
For raytracing, intersection testing is the dominant cost; more objects mean more candidate intersections, and this inner loop cannot be parallelized away as effectively. The bounding volume hierarchies described below reduce its cost to roughly logarithmic in the number of objects.

Effect of increasing resolution
Both methods scale with the number of pixels: rasterization must shade proportionally more fragments, while raytracing must generate and trace proportionally more primary rays. Per-triangle setup work in rasterization, however, is independent of resolution.

Optimization
A basic optimization for rasterization is frustum culling. The part of a scene visible to the camera is typically a truncated rectangular pyramid (the view frustum); objects entirely outside this volume can be skipped completely, as sketched below.
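A minimal sketch of frustum culling with bounding spheres, under the assumption that each frustum plane is stored with an inward-pointing normal (the Plane and Vec3 types are illustrative):

struct Vec3 { float x, y, z; };
struct Plane { Vec3 n; float d; };  // inward normal n; a point p is inside when dot(n, p) + d >= 0

float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Returns false when the bounding sphere lies entirely outside one of
// the six frustum planes, in which case the whole object can be skipped.
bool sphereInFrustum(const Plane planes[6], Vec3 center, float radius) {
    for (int i = 0; i < 6; ++i)
        if (dot(planes[i].n, center) + planes[i].d < -radius)
            return false;  // completely behind this plane
    return true;           // inside, or intersecting the boundary
}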
At the per-pixel level, modern tiled rasterization techniques effectively perform raytracing against the projected 2D triangles: using barycentric coordinates, the per-vertex weights for the current pixel can be computed directly.

Barycentric coordinates express a point P inside triangle ABC as a weighted average P = αA + βB + γC with α + β + γ = 1. Each weight is the ratio of the signed area of the sub-triangle opposite that vertex to the area of the whole triangle, and the same weights interpolate any per-vertex attribute (color, depth, texture coordinates) across the surface.
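A minimal sketch of computing these weights and interpolating a per-vertex attribute in 2D (function names are illustrative):

// Signed doubled area of the 2D triangle (a, b, c).
float area2(float ax, float ay, float bx, float by, float cx, float cy) {
    return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax);
}

// Interpolate a per-vertex attribute (v0, v1, v2) at the point (px, py)
// inside the triangle (x0,y0)-(x1,y1)-(x2,y2).
float interpolate(float x0, float y0, float v0,
                  float x1, float y1, float v1,
                  float x2, float y2, float v2,
                  float px, float py) {
    float total = area2(x0, y0, x1, y1, x2, y2);
    float alpha = area2(px, py, x1, y1, x2, y2) / total;  // weight of vertex 0
    float beta  = area2(x0, y0, px, py, x2, y2) / total;  // weight of vertex 1
    float gamma = 1.0f - alpha - beta;                    // weights sum to 1
    return alpha * v0 + beta * v1 + gamma * v2;
}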

Finding the intersection of a ray and objects can be accelerated by using Bounding Volume Hierarchies. Several objects can be grouped together; if the ray does not hit this collective volume then all of the included objects can be skipped.

In a bounding volume hierarchy, each primitive (triangle) is assigned a bounding box, which simplifies intersection computation. Bounding volumes that are close to each other are grouped, and a single larger bounding volume is constructed containing all of the constituent volumes; the enclosed volumes become children of the larger one. This way, if a ray does not intersect a large bounding volume, all of its children can be skipped completely, saving computation and memory accesses. The process is repeated recursively until the whole scene is contained in a single root node.

The axis-aligned bounding box (AABB) is chosen for its simplicity and compact data representation: it needs only two points describing opposite corners.

An example of a BVH class is as follows:

class boundingVolume {
    boundingVolume* children[2];  // null for leaf nodes
    float3 min_point;             // one corner of the AABB
    float3 max_point;             // the opposite corner
    int triangleIndex;            // only meaningful for leaf nodes
};

Traversal of this structure can be done recursively or iteratively. In practice, depth-first traversal with an explicit stack is the usual choice, since a ray may intersect both children of a node and each must then be visited.

// Returns the distance to the closest triangle hit (INFINITY on a miss).
// boxIntersect and intersectTriangle are assumed to exist (see below).
float traverseBVH(Ray ray, boundingVolume* root) {
    std::vector<boundingVolume*> stack{ root };
    float closest = INFINITY;
    while (!stack.empty()) {
        boundingVolume* node = stack.back();
        stack.pop_back();
        if (!boxIntersect(ray, node)) continue;  // prune the whole subtree
        if (node->children[0] == nullptr) {      // leaf: test its triangle
            closest = std::min(closest,
                               intersectTriangle(ray, node->triangleIndex));
        } else {                                 // inner: the ray may hit both children
            stack.push_back(node->children[0]);
            stack.push_back(node->children[1]);
        }
    }
    return closest;
}
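The box test used above can be implemented with the standard "slab" method. A minimal sketch operating directly on the box corners, assuming the reciprocal ray direction is precomputed once per ray:

#include <algorithm>

struct Vec3 { float x, y, z; };

// Intersect the ray with the three pairs of axis-aligned planes and
// check that the resulting t-intervals overlap. invDir is 1/dir
// componentwise.
bool boxIntersect(Vec3 origin, Vec3 invDir, Vec3 mn, Vec3 mx) {
    float t1 = (mn.x - origin.x) * invDir.x;
    float t2 = (mx.x - origin.x) * invDir.x;
    float tmin = std::min(t1, t2);
    float tmax = std::max(t1, t2);

    t1 = (mn.y - origin.y) * invDir.y;
    t2 = (mx.y - origin.y) * invDir.y;
    tmin = std::max(tmin, std::min(t1, t2));
    tmax = std::min(tmax, std::max(t1, t2));

    t1 = (mn.z - origin.z) * invDir.z;
    t2 = (mx.z - origin.z) * invDir.z;
    tmin = std::max(tmin, std::min(t1, t2));
    tmax = std::min(tmax, std::max(t1, t2));

    return tmax >= std::max(tmin, 0.0f);  // overlap in front of the origin
}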


Lighting
With rasterization, shadows are produced with shadow maps or stencil shadow volumes. While shadow maps are faster, they exhibit aliasing artifacts at lower resolutions.
With raytracing, a ray is cast from the surface point towards the light to determine whether it is shadowed. This requires creating one additional ray per light and intersecting it with the scene.
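A minimal sketch of such a shadow test, assuming an intersectScene(origin, dir, maxDist) routine exists that reports whether anything lies along the ray closer than maxDist:

#include <cmath>

struct Vec3 { float x, y, z; };

// Assumed to exist: true if any object intersects the ray closer than maxDist.
bool intersectScene(Vec3 origin, Vec3 dir, float maxDist);

// True if `point` is shadowed with respect to a point light. The small
// offset along the normal keeps the ray from re-hitting its own surface
// ("shadow acne").
bool inShadow(Vec3 point, Vec3 normal, Vec3 lightPos) {
    Vec3 toLight = { lightPos.x - point.x, lightPos.y - point.y,
                     lightPos.z - point.z };
    float dist = std::sqrt(toLight.x * toLight.x + toLight.y * toLight.y +
                           toLight.z * toLight.z);
    Vec3 dir = { toLight.x / dist, toLight.y / dist, toLight.z / dist };
    Vec3 origin = { point.x + normal.x * 1e-4f, point.y + normal.y * 1e-4f,
                    point.z + normal.z * 1e-4f };
    return intersectScene(origin, dir, dist);  // any blocker casts shadow
}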

Global Illumination

Global illumination is an integral part of physically realistic rendering. Light bounces around the world, picking up color from the objects it hits. Simulating this complex interaction creates much more appealing and realistic visual results.

With rasterization, global illumination is typically pre-computed and 'baked' into an extra texture (a lightmap).
A finite-element (radiosity-style) method is commonly used for this baking: for each vertex or surface patch, the scene is rendered onto a hemicube around it, and the incoming light is integrated according to the rendering equation. The result is multiplied by the diffuse color of the vertex to get its final color. The process must be repeated for every light bounce, which makes it very slow.

With raytracing, a number of rays can be cast from the surface in question to sample the hemisphere of incoming directions; these samples are then combined according to the rendering equation. This is a Monte Carlo method in which the rays are randomly generated, and sufficient samples are needed to reduce noise. Naive uniform random sampling, however, converges slowly.
Importance sampling addresses this: if there is a bright light in the scene, most of the illumination will come from its direction, so more rays should be cast towards it and each sample weighted accordingly.

The rendering equation:
L_o(x, ω_o) = L_e(x, ω_o) + ∫_Ω f_r(x, ω_i, ω_o) · L_i(x, ω_i) · (ω_i · n) dω_i
where L_o is the outgoing radiance, L_e the emitted radiance, f_r the BRDF, L_i the incoming radiance, and (ω_i · n) the cosine of the angle between the incoming direction and the surface normal.
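A minimal sketch of a Monte Carlo estimator for the diffuse part of this equation, assuming uniform hemisphere sampling; sampleHemisphere and traceRadiance are illustrative placeholders, not a real API:

struct Vec3 { float x, y, z; };

// Assumed helpers: a uniformly distributed direction on the hemisphere
// around n, and the radiance arriving at `point` from direction `dir`.
Vec3 sampleHemisphere(Vec3 n);
Vec3 traceRadiance(Vec3 point, Vec3 dir);

float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// L_o ≈ (1/N) Σ f_r · L_i(ω_i) · cosθ_i / pdf(ω_i), with the Lambertian
// BRDF f_r = albedo/π and pdf = 1/(2π) for uniform sampling; the π's
// cancel, so each sample contributes 2 · albedo · L_i · cosθ.
Vec3 estimateDiffuse(Vec3 point, Vec3 n, Vec3 albedo, int N) {
    Vec3 sum = {0, 0, 0};
    for (int i = 0; i < N; ++i) {
        Vec3 wi = sampleHemisphere(n);
        float k = 2.0f * dot(wi, n) / N;  // (cosθ / pdf) / (π · N)
        Vec3 Li = traceRadiance(point, wi);
        sum.x += k * albedo.x * Li.x;
        sum.y += k * albedo.y * Li.y;
        sum.z += k * albedo.z * Li.z;
    }
    return sum;
}

With importance sampling, the directions would instead be drawn from a pdf that favors bright lights, and each sample divided by that pdf.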

Specular reflection
In a physically-based rendering framework, every material has a Bidirectional Reflectance Distribution Function (BRDF) that dictates how much light is scattered at each pair of incoming and outgoing angles; this is typically determined empirically from measurements. It makes the light contribution view-dependent (the brightness of a particular spot changes with the angle from which it is viewed), unlike the Lambertian diffuse model.
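As a hedged illustration of view dependence, the sketch below pairs a Lambertian diffuse term with a simple Blinn-Phong specular lobe; real measured BRDFs are far more complex:

#include <cmath>

struct Vec3 { float x, y, z; };

float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Reflected intensity for normalized light direction l, view direction v
// and surface normal n. The diffuse term depends only on l; the specular
// term also depends on v, so the same spot brightens and darkens as the
// camera moves.
float shade(Vec3 n, Vec3 l, Vec3 v, float kd, float ks, float shininess) {
    float diffuse = std::fmax(dot(n, l), 0.0f);    // Lambert: view-independent
    Vec3 h = { l.x + v.x, l.y + v.y, l.z + v.z };  // half vector
    float len = std::sqrt(dot(h, h));
    h = { h.x / len, h.y / len, h.z / len };
    float specular = std::pow(std::fmax(dot(n, h), 0.0f), shininess);
    return kd * diffuse + ks * specular;           // view-dependent total
}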


Reflections
A cubemap reflection is only accurate for the single point where it was captured, and planar reflections require re-rendering the scene from a mirrored camera, which is expensive. Raytraced reflections are correct for arbitrary surfaces and viewpoints, and by perturbing the reflected rays they can also capture anisotropic highlights such as those on brushed metal.

Volumetrics
Using rasterization, effects such as fog and light shafts can be approximated using depth peeling and screen-space techniques.
Deep opacity shadow maps can be used to determine volumetric shadowing: instead of a single depth per texel, they store how opacity accumulates with depth through the medium, so partially transparent occluders such as hair and smoke cast soft, graded shadows.

Raytracing allows physically accurate rendering of volumetrics. Raymarching with a fixed or adaptive number of samples is used for the lighting calculation: the ray is advanced through the volume in small steps, at each step accumulating the light scattered toward the camera and attenuating it by the transmittance of the medium traversed so far (the Beer-Lambert law).
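A minimal sketch of raymarching a homogeneous medium with a fixed number of steps; sampleLight is an assumed helper returning the light reaching a point inside the volume:

#include <cmath>

struct Vec3 { float x, y, z; };

// Assumed to exist: light arriving at point p inside the medium,
// including any volumetric shadowing.
float sampleLight(Vec3 p);

// March `steps` samples along a ray segment of length `dist` through a
// homogeneous medium with extinction coefficient sigma. Transmittance
// follows the Beer-Lambert law T = exp(-sigma * distance); in-scattered
// light at each step is dimmed by the transmittance accumulated so far.
float raymarch(Vec3 origin, Vec3 dir, float dist, int steps, float sigma) {
    float dt = dist / steps;
    float transmittance = 1.0f;
    float radiance = 0.0f;
    for (int i = 0; i < steps; ++i) {
        float t = (i + 0.5f) * dt;  // sample the middle of each step
        Vec3 p = { origin.x + dir.x * t, origin.y + dir.y * t,
                   origin.z + dir.z * t };
        radiance += transmittance * sigma * sampleLight(p) * dt;
        transmittance *= std::exp(-sigma * dt);
    }
    return radiance;
}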

Both require many samples for good quality.

Screen space effects
Ambient occlusion darkens creases, corners, and contact points. Screen-Space Ambient Occlusion approximates it by sampling the depth buffer in a neighborhood around each pixel to estimate how enclosed the point is; this is cheap, but limited to information visible on screen.


Low-level comparison

Vectorization: barycentric coordinate calculations map well to SIMD hardware, since the same arithmetic is repeated for every pixel.
Cache coherence: rasterization reads pixels and texels in regular patterns, helped further by level-of-detail mipmaps and tiled memory layouts, while BVH traversal in raytracing produces scattered, hard-to-predict accesses.
Memory hierarchy: register <= cache <= main memory, with each level larger but slower; coherent access patterns keep working data in the faster levels.


Shortcuts

When objects are far away and small, they can be rendered at a lower level of detail, since high-frequency features will not be visible.

Machine learning can reduce the need for supersampling in raytracing. Neural networks are trained using low-resolution and high-resolution images, then deployed to convert low-resolution input into higher-resolution output. They can also be trained to recognize noise caused by illumination sampling and smooth it out.

Effect of hardware
The rasterization pipeline has greatly evolved; modern GPUs have specialized circuitry for it. Render output units blend and write pixels to the framebuffer, texture mapping units fetch and filter texels, and unified shader cores execute vertex and pixel shaders.
Hardware-accelerated raytracing is a very recent development. Such hardware includes specialized units for generating rays, computing intersections, and traversing the bounding volume hierarchy.

General-purpose compute APIs such as OpenCL have been used extensively to implement raytracers. Dedicated API support is more recent: Microsoft's proprietary DirectX exposes raytracing through DirectX Raytracing (DXR), and the newer cross-platform Vulkan API provides raytracing support as well.

Results / high level



Conclusion

Raytracing is much more computationally intensive but delivers far more physically accurate results. With the increasing power of modern graphics cards and dedicated hardware support, fully realtime raytracing is bound to become a reality. However, the performance of rasterization will likely ensure its continued prevalence in mobile and embedded applications.


Evaluation

Bibliography
Tomas Akenine-Möller, Eric Haines, Naty Hoffman, Angelo Pesce, Michał Iwanicki, Sébastien Hillaire. (2018). Real-Time Rendering, Fourth Edition. http://www.realtimerendering.com

NVIDIA Corporation. (2008). GPU Gems. https://developer.nvidia.com/gpugems/GPUGems

Matt Pharr, Wenzel Jakob, and Greg Humphreys. (2016). Physically Based Rendering: From Theory To Implementation. http://www.pbr-book.org

CryTek. (2018). CryEngine V Manual. https://docs.cryengine.com/display/CEMANUAL/CRYENGINE+V+Manual

Epic Games. (2016). Engine Features. https://docs.unrealengine.com/en-us/Engine

Romain Guy, Mathias Agopian. (2019). Physically-Based Rendering in Filament. https://google.github.io/filament/Filament.md.html

Erin Catto. (2019). Dynamic Bounding Volume Hierarchies. GDC 2019. http://box2d.org/files/GDC2019/ErinCatto_DynamicBVH_GDC2019.pdf