The algorithm starts at the camera (the viewer's position) and shoots rays into the scene. Each ray corresponds to a pixel on the screen, and that pixel's color is determined by how the ray interacts with the objects it encounters.
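As a minimal sketch (assuming a pinhole camera at the origin looking down the negative z-axis; the function name and default field of view are illustrative), generating a primary ray for a pixel might look like this:

```python
import numpy as np

def primary_ray(px, py, width, height, fov_deg=60.0):
    """Map a pixel (px, py) to a ray through a pinhole camera at the origin,
    looking down -z. Returns (origin, normalized direction)."""
    aspect = width / height
    scale = np.tan(np.radians(fov_deg) * 0.5)
    # Convert pixel centers to normalized device coordinates in [-1, 1].
    x = (2 * (px + 0.5) / width - 1) * aspect * scale
    y = (1 - 2 * (py + 0.5) / height) * scale
    direction = np.array([x, y, -1.0])
    return np.zeros(3), direction / np.linalg.norm(direction)

# One ray per pixel: the color returned by tracing this ray becomes the pixel's color.
origin, direction = primary_ray(400, 300, 800, 600)
```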
The algorithm checks whether the ray intersects any object in the scene, using analytic intersection formulas (e.g., the ray-sphere equation, or ray-triangle intersection via the Möller-Trumbore algorithm), and keeps only the nearest hit to ensure correct depth ordering.
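A sketch of ray-triangle intersection with Möller-Trumbore, plus a nearest-hit loop over a hypothetical list of triangles:

```python
import numpy as np

def moller_trumbore(orig, dir, v0, v1, v2, eps=1e-8):
    """Ray-triangle intersection; returns the distance t along the ray, or None."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(dir, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:               # ray is parallel to the triangle plane
        return None
    inv_det = 1.0 / det
    s = orig - v0
    u = np.dot(s, p) * inv_det
    if u < 0 or u > 1:
        return None
    q = np.cross(s, e1)
    v = np.dot(dir, q) * inv_det
    if v < 0 or u + v > 1:
        return None
    t = np.dot(e2, q) * inv_det
    return t if t > eps else None

def nearest_hit(orig, dir, triangles):
    """Keep only the closest intersection so nearer surfaces occlude farther ones."""
    best = None
    for tri in triangles:
        t = moller_trumbore(orig, dir, *tri)
        if t is not None and (best is None or t < best):
            best = t
    return best

tri = (np.zeros(3), np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]))
print(nearest_hit(np.array([0.2, 0.2, 1.0]), np.array([0.0, 0.0, -1.0]), [tri]))
```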
At the intersection point, the algorithm determines the surface properties: it computes the normal for shading and applies Phong illumination, Lambertian reflection, or a Physically-Based Rendering (PBR) model.
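A minimal sketch of Phong illumination at a single shading point (the coefficient values and example directions are illustrative):

```python
import numpy as np

def phong(normal, view_dir, light_dir, base_color, light_color,
          ka=0.1, kd=0.7, ks=0.2, shininess=32):
    """Classic Phong illumination at one point (directions are unit length
    and point away from the surface)."""
    n = normal / np.linalg.norm(normal)
    ambient = ka * base_color
    diffuse = kd * max(np.dot(n, light_dir), 0.0) * base_color       # Lambertian term
    reflect = 2 * np.dot(n, light_dir) * n - light_dir               # light_dir mirrored about n
    specular = ks * max(np.dot(reflect, view_dir), 0.0) ** shininess * light_color
    return ambient + diffuse + specular

color = phong(np.array([0.0, 1.0, 0.0]),        # surface normal
              np.array([0.0, 0.707, 0.707]),    # toward the viewer
              np.array([0.0, 0.707, -0.707]),   # toward the light
              base_color=np.array([0.8, 0.2, 0.2]),
              light_color=np.array([1.0, 1.0, 1.0]))
```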
If the surface is reflective (like a mirror or polished metal), secondary rays bounce off it, capturing reflections of the environment. If it is transparent (like glass or water), refraction is computed using Snell's Law, bending light according to the material's refractive index.
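A sketch of both operations, assuming unit vectors with the incoming direction pointing into the surface and the normal pointing out of it:

```python
import numpy as np

def reflect(d, n):
    """Mirror the incoming direction d about the surface normal n."""
    return d - 2 * np.dot(d, n) * n

def refract(d, n, eta):
    """Bend d through the surface using Snell's law; eta = n_outside / n_inside.
    Returns None on total internal reflection."""
    cos_i = -np.dot(d, n)
    sin2_t = eta * eta * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:                 # total internal reflection: no transmitted ray
        return None
    cos_t = np.sqrt(1.0 - sin2_t)
    return eta * d + (eta * cos_i - cos_t) * n

# Example: a ray entering glass (refractive index ~1.5) from air.
d = np.array([0.0, -0.707, -0.707])      # incoming direction
n = np.array([0.0, 1.0, 0.0])            # surface normal
print(reflect(d, n), refract(d, n, 1.0 / 1.5))
```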
The algorithm shoots shadow rays from the intersection point toward each light source. If a ray hits an object before reaching the light, the point is in shadow. Area lights extend this to soft shadows.
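A sketch of a hard shadow-ray test; `any_hit(origin, dir, max_t)` is an assumed scene query that reports whether anything lies between the point and the light:

```python
import numpy as np

def in_shadow(point, normal, light_pos, any_hit, eps=1e-4):
    """Cast a ray from the shading point toward the light; if anything is hit
    before the light is reached, the point is in shadow."""
    to_light = light_pos - point
    dist = np.linalg.norm(to_light)
    direction = to_light / dist
    origin = point + normal * eps            # offset to avoid self-intersection ("shadow acne")
    return any_hit(origin, direction, dist)  # assumed query: any hit closer than dist?
```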
Advanced ray tracing uses Monte Carlo path tracing to simulate indirect lighting. Rays bounce between surfaces, accumulating complex lighting effects like color bleeding, caustics, and ambient occlusion.
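A minimal sketch of a diffuse-only path tracer; the `scene.intersect` return value `(hit_point, normal, albedo, emission)` is an assumed interface, and uniform hemisphere sampling is used for simplicity:

```python
import numpy as np

def trace(ray_o, ray_d, scene, rng, depth=0, max_depth=5):
    """Monte Carlo path tracing for diffuse surfaces: recursively follow one
    random bounce per hit and accumulate the light it carries back."""
    if depth >= max_depth:
        return np.zeros(3)
    hit = scene.intersect(ray_o, ray_d)      # assumed scene query
    if hit is None:
        return np.zeros(3)                   # background contributes no light
    point, normal, albedo, emission = hit
    # Sample a random bounce direction in the hemisphere around the normal.
    d = rng.normal(size=3)
    d /= np.linalg.norm(d)
    if np.dot(d, normal) < 0:
        d = -d
    # Indirect light gathered recursively; repeated bounces accumulate
    # color bleeding and other global-illumination effects.
    incoming = trace(point + normal * 1e-4, d, scene, rng, depth + 1, max_depth)
    cosine = np.dot(d, normal)
    pdf = 1.0 / (2.0 * np.pi)                # uniform hemisphere sampling
    brdf = albedo / np.pi                    # Lambertian BRDF
    return emission + brdf * incoming * cosine / pdf
```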
Ray-sphere intersection calculates where a ray of light meets a sphere; from that hit point the surface normal, shading, and reflections are derived.
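A sketch of ray-sphere intersection via the quadratic formula, assuming a normalized ray direction:

```python
import numpy as np

def intersect_sphere(orig, dir, center, radius):
    """Solve |orig + t*dir - center|^2 = radius^2 for the nearest positive t.
    Returns (t, surface normal at the hit) or None if the ray misses."""
    oc = orig - center
    b = np.dot(oc, dir)                      # dir is unit length, so a = 1
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - c
    if disc < 0:
        return None
    t = -b - np.sqrt(disc)
    if t < 0:
        t = -b + np.sqrt(disc)               # ray starts inside the sphere
    if t < 0:
        return None
    hit = orig + t * dir
    normal = (hit - center) / radius         # used for shading and reflections
    return t, normal

print(intersect_sphere(np.zeros(3), np.array([0.0, 0.0, -1.0]),
                       np.array([0.0, 0.0, -5.0]), 1.0))
```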
Using various light types (point, directional, ambient) helps create realistic highlights, shadows, and reflections.
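One possible way to evaluate those light types at a shading point; the dictionary layout and intensity values are illustrative:

```python
import numpy as np

def light_at(light, point):
    """Return (direction toward the light, incoming light color) at a point."""
    if light["type"] == "ambient":
        return None, light["color"] * light["intensity"]     # constant fill light, no direction
    if light["type"] == "directional":
        return -light["direction"], light["color"] * light["intensity"]
    if light["type"] == "point":
        to_light = light["position"] - point
        dist = np.linalg.norm(to_light)
        falloff = 1.0 / (dist * dist)                         # inverse-square attenuation
        return to_light / dist, light["color"] * light["intensity"] * falloff
    raise ValueError(light["type"])

lights = [
    {"type": "ambient", "color": np.array([1.0, 1.0, 1.0]), "intensity": 0.1},
    {"type": "directional", "direction": np.array([0.0, -1.0, 0.0]),
     "color": np.array([1.0, 1.0, 1.0]), "intensity": 0.8},
    {"type": "point", "position": np.array([2.0, 3.0, 0.0]),
     "color": np.array([1.0, 0.9, 0.8]), "intensity": 5.0},
]
for light in lights:
    print(light_at(light, np.zeros(3)))
```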
Area lighting simulates soft shadows by treating lights as surfaces rather than single points, improving realism.
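A sketch of soft-shadow estimation for a rectangular area light; `any_hit(origin, dir, max_t)` is again an assumed occlusion query:

```python
import numpy as np

def soft_shadow(point, light_corner, edge_u, edge_v, any_hit, rng, samples=16):
    """Estimate the visible fraction of a rectangular area light by shooting
    shadow rays to random points on its surface; values between 0 and 1
    produce the penumbra."""
    visible = 0
    for _ in range(samples):
        u, v = rng.random(), rng.random()
        sample = light_corner + u * edge_u + v * edge_v    # random point on the light
        to_light = sample - point
        dist = np.linalg.norm(to_light)
        if not any_hit(point, to_light / dist, dist):
            visible += 1
    return visible / samples
```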
Lambertian (diffuse) reflection models how light scatters across surfaces, creating natural-looking shading on matte materials.
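One common way to model this scattering in a path tracer is cosine-weighted hemisphere sampling; a sketch (the function name and basis construction are one standard choice):

```python
import numpy as np

def cosine_weighted_scatter(normal, rng):
    """Scatter a ray off a matte (Lambertian) surface: new directions are
    distributed proportionally to cos(theta) around the normal, so grazing
    directions carry less light, which gives matte shading its soft look."""
    u1, u2 = rng.random(), rng.random()
    r = np.sqrt(u1)
    phi = 2.0 * np.pi * u2
    local = np.array([r * np.cos(phi), r * np.sin(phi), np.sqrt(1.0 - u1)])
    # Build an orthonormal basis around the normal and rotate the sample into it.
    a = np.array([1.0, 0.0, 0.0]) if abs(normal[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    tangent = np.cross(normal, a)
    tangent /= np.linalg.norm(tangent)
    bitangent = np.cross(normal, tangent)
    return local[0] * tangent + local[1] * bitangent + local[2] * normal

rng = np.random.default_rng(0)
print(cosine_weighted_scatter(np.array([0.0, 1.0, 0.0]), rng))
```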
Shadow rays cast from surface points toward the light sources determine how soft and how accurate the shadows are.