Commit d9b51656 authored by Katarina Furmanova's avatar Katarina Furmanova

raytracing cont

parent 1d0705ed
@@ -232,12 +232,12 @@
                </div><br>
            The modern raytracing algorithm was developed by Arthur Appel in 1968. While intuitively one might think of tracing rays from light sources to the camera,
            in practice this is not efficient, as most rays from light sources do not reach the camera. Therefore, the common approach is to trace rays from the camera 
            position \(S\) (center of perspective projection) through the center of each pixel \(P\) of the projection plane raster grid into the scene.
            Such a ray can be defined as:
            \[X(t) = P + t \cdot d \quad \quad t\geq 0 \]
            where 
            \[d = \frac{P - S}{||P - S||}\] 
            is the normalized direction vector from the camera position \(S\) to the pixel center \(P\).
            <div style="margin: auto; width: 100%;">
                    <figure style="margin: auto; width:45%; display: flex;">
                        <img src="img/12/raytr1.svg" alt="" class="img-fluid"
                </div><br>
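As a minimal sketch of this construction (in Python; plain tuples stand in for vectors purely for illustration):

```python
import math

def primary_ray(S, P):
    """Primary ray through pixel center P, cast from camera position S.

    Returns (origin, d) with d = (P - S) / ||P - S||, so the ray is
    X(t) = P + t*d for t >= 0, matching the definition above.
    """
    diff = tuple(p - s for p, s in zip(P, S))
    norm = math.sqrt(sum(c * c for c in diff))
    return P, tuple(c / norm for c in diff)

# Camera at the origin, pixel center at (0, 0, 1):
origin, d = primary_ray((0.0, 0.0, 0.0), (0.0, 0.0, 1.0))
# d == (0.0, 0.0, 1.0)
```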


            When a ray intersects an object, the color of the intersection point is determined based on the material properties of the object and the lighting conditions.
            We often use local illumination models (e.g., Phong, Blinn-Phong) to compute the color \(C_{\text{local}}\) at the intersection point.
            To determine the lighting conditions, a so-called <b>shadow ray</b> is cast from the intersection point towards each light source to 
            check for light visibility. If the shadow ray intersects another object before reaching the light source, the point is in shadow. 
            In that case, we either skip the color calculation for this point entirely, or only the ambient light is considered (if we assume 
            an illumination model with an ambient component). Otherwise, the local color contribution is computed using the chosen illumination model.
            <br><br>
            However, the calculation does not stop here. To account for global illumination effects, additional <b>secondary</b> rays are traced from the intersection point.
            These rays can be of different types, depending on the desired effects:
            <ul>    
                <li><b>Reflection rays:</b> These rays are traced in the reflection direction \(R\) of the incident ray
                    with respect to the surface normal \(N\). This is not to be confused with the light reflection vector in the Phong lighting model.
                    The color contribution from the reflection ray is computed recursively by tracing the ray further into the scene.
                    Reflection rays are typically cast for most objects in the scene.</li>
                <li><b>Refraction rays:</b> These rays are traced through transparent or translucent materials to simulate light bending 
                    and transmission. Similar to reflection rays, the color contribution from refraction rays is computed recursively.
                    However, refraction rays are only computed for transparent or translucent objects.</li>
            </ul>
            The recursion continues until a maximum recursion depth is reached. A recursion branch can also terminate early if its ray does not intersect any object. 
            The animation below illustrates the raytracing process with primary, shadow, reflection, and refraction rays up to depth 3.
            At each intersection we cast shadow rays to determine light visibility and then cast a reflection ray 
            and, if applicable (i.e., for transparent objects), a refraction ray.

            <div style="margin: auto; width: 100%;">
                    <figure style="margin: auto; width:65%; display: flex;">
                        <img src="img/12/raytracing.gif" alt="" class="img-fluid"
                            style="width: 50%; height:auto; object-fit:contain;">
                            <div style="width: 10%;"></div>
                        <img src="img/12/raytr_legend.svg" alt="" class="img-fluid"
                            style="width: 40%; height:auto; object-fit:contain;">
                    </figure>
                </div><br>
            In this way we build a ray-tracing tree, where each node represents a ray intersection and its color contribution.
            The edges represent the secondary rays traced from each intersection point.
            The final color of the pixel is obtained by combining the color contributions from all the rays traced from the camera through that pixel.
            For example, to get the final color \(C_A\) at the first intersection point \(A\) in the figure below, we 
            combine the local color \(C_{\text{localA}}\) with the color contributions from the reflection ray 
            \(\color{#00B050}C_{R}\) (i.e., from point \(B\)) and the refraction ray \(\color{#B88C00}C_{T}\) (i.e., from point \(C\)). The reflection and refraction contributions are typically weighted 
            by the reflection and refraction coefficients (\(\color{#00B050}k_r\) and \(\color{#B88C00}k_t\)), which represent the material's reflectance and transmittance properties.
            \[C_A = C_{\text{localA}} + {\color{#00B050}k_r \odot C_{R}} + {\color{#B88C00}k_t \odot C_{T}}\]
            However, to get the color contributions of point \(B\) (\(C_{R}\)) and point \(C\) (\(C_{T}\)), we need to recursively compute the 
            colors from their respective secondary rays too, as illustrated in the ray-tracing tree below.
            <div style="margin: auto; width: 100%;">
                    <figure style="margin: auto; width:70%; display: flex;">
                        <img src="img/12/raytr2.svg" alt="" class="img-fluid"
                            style="width: 35%; height:auto; object-fit:contain;">
                            <div style="width: 10%;"></div>
                        <img src="img/12/raytr3.svg" alt="" class="img-fluid"
                            style="width: 55%; height:auto; object-fit:contain;">
                    </figure>
                </div><br>
            To compute the local color \(C_{\text{local}}\) at each intersection point, we can use any local illumination model, such as the Phong model 
            from the previous lecture:
            \[C_{\text{local}} = I_a \odot k_a + \sum_{i=0}^{n-1} (I_i \odot k_d) (N \cdot L_i) + (I_i \odot k_s) (R_i \cdot V)^h  \]
            However, whereas in the previous lecture the direction \(V\) pointed from the point to the viewer, in raytracing it 
            is the direction opposite to the incident ray direction, as illustrated below.
            For the primary ray it is still the direction towards the camera (since the primary ray originates at the camera).
            <div style="margin: auto; width: 100%;">
                    <figure style="margin: auto; width:35%; display: flex;">
                        <img src="img/12/raytr4.svg" alt="" class="img-fluid"
                            style="width: 100%; height:auto; object-fit:contain;">
                    </figure>
                </div><br>
            Furthermore, we have two reflection directions in this case: one for the local illumination model, \(R_i\)
            (the reflection of the light direction \(L_i\) with respect to the surface normal \(N\)), used to calculate the specular 
            component of the local color:
            \[R_i = 2 (N \cdot L_i) N - L_i \]
            and one for tracing the reflection ray \(R\) (reflection of incident ray with respect to surface normal \(N\)):
            \[R = 2 (N \cdot V) N - V \]
            where \(V\) is the vector opposite to the incident ray direction. Both can be computed using the same formula 
            that we derived in the previous lecture, but with different input vectors. Computation of the <b>refraction ray direction</b>
            is more complex, as it involves <b>Snell's law</b> to account for light bending at the interface of two media 
            with different refractive indices. The law states that the ratio of the sines of the angles of incidence 
            and refraction is equal to the ratio of the refractive indices of the two media:
            \[\frac{\sin \alpha'}{\sin \alpha} = \frac{n_1}{n_2} = n_{12}\]
            where \(n_1\) and \(n_2\) are the refractive indices of the incident and transmitted media, respectively.
            This law enables us to express \(\sin \alpha'\) based on \(\sin \alpha\):
            \[\sin \alpha' = n_{12} \sin \alpha\]
            This relationship is crucial for calculating the refraction ray direction \(T\). Consider the figure below:

            <div style="margin: auto; width: 100%;">
                    <figure style="margin: auto; width:25%; display: flex;">
                        <img src="img/12/raytr5.svg" alt="" class="img-fluid"
                            style="width: 100%; height:auto; object-fit:contain;">
                    </figure>
                </div><br>
            Assuming we have the light direction \(L\), surface normal \(N\), and their angle \(\alpha\),
            we can compute two new vectors \(\color{#1220A5}u\) and \(\color{#8F3AA9}v\). Vector \(\color{#1220A5}u\) is the projection of \(L\) onto the plane defined by normal \(N\):
            \[{\color{#1220A5}u} = {\color{#FF4F4F}L} - {\color{#FB0DF0} N \cos \alpha } ={\color{#FF4F4F}L} - {\color{#FB0DF0} N (N \cdot L)  } \]
            Vector \(\color{#8F3AA9}v\) is the projection of the refraction ray \(T\) onto the normal \(N\) (pointing in the opposite direction).
            While we do not yet know the direction of \(T\), we can express \(\color{#8F3AA9}v\) using the cosine of the refraction angle \(\alpha'\).
            And from Snell's law, we know how to compute \(\cos \alpha'\) from \(\cos \alpha\): 
            \[{\color{#8F3AA9}v = -N \cos \alpha' }\]

            \[\cos \alpha' = \sqrt{1 - \sin^2 \alpha'} = \sqrt{1 - (n_{12} \sin \alpha)^2} = \sqrt{1 - n_{12}^2 (1 - \cos^2 \alpha)} = \sqrt{1 - n_{12}^2 (1 - (N \cdot L)^2)}\]
            
            Finally, we can compute the refraction ray direction \(T\) as the composition of the vector \(\color{#8F3AA9}v\) and 
            the normalized vector opposite to \(\color{#1220A5}u\), scaled by \(\sin \alpha'\):
            \[T = {\color{#8F3AA9}v} + {\color{#B88C00} \sin \alpha' \frac{{-u}}{||u||} }\]
            Again, we can express \(\sin \alpha'\) using Snell's law and then express \(\sin \alpha\) using \(\cos \alpha\): 
            \[\sin \alpha' = n_{12} \sin \alpha = n_{12} \sqrt{1 - \cos^2 \alpha} = n_{12} \sqrt{1 - (N \cdot L)^2}\]
            Note also that \(||u|| = \sin \alpha\), since \(\color{#1220A5}u\) is the tangential component of the unit vector \(L\), so the second term of \(T\) simplifies to \(-n_{12}\,{\color{#1220A5}u}\).
            After substituting all the equations and some rearrangement, we
            get the final formula for the refraction ray direction:
            \[T = -n_{12}L + \left(n_{12} (N \cdot L) - \sqrt{1 - n_{12}^2 (1 - (N \cdot L)^2)} \right) N \]
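Both direction computations can be sketched in Python as follows (a sketch assuming unit input vectors; the names mirror the `reflectionVector`/`refractionVector` helpers used in the pseudocode below, and the handling of a negative radicand, i.e., total internal reflection, is an addition not covered by the derivation above):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def reflection_vector(N, V):
    """R = 2 (N·V) N - V; V is the unit vector opposite the incident ray."""
    k = 2.0 * dot(N, V)
    return tuple(k * n - v for n, v in zip(N, V))

def refraction_vector(N, L, n12):
    """T = -n12 L + (n12 (N·L) - cos(alpha')) N, with n12 = n1 / n2.

    L is the unit vector pointing from the surface point back along the
    incoming ray. Returns None when the radicand is negative (total
    internal reflection: no transmitted ray exists).
    """
    cos_a = dot(N, L)
    radicand = 1.0 - n12 * n12 * (1.0 - cos_a * cos_a)
    if radicand < 0.0:
        return None  # total internal reflection
    k = n12 * cos_a - math.sqrt(radicand)
    return tuple(k * n - n12 * l for n, l in zip(N, L))
```

A quick sanity check: for \(n_{12} = 1\) (identical media) the formula reduces to \(T = -L\), i.e., the ray continues straight through the surface.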
            Now we have all the components needed to implement a basic raytracing algorithm.
            The pseudocode of the algorithm is as follows. 
            The main function, <i>rayTracing</i>, iterates over all pixels of the projection plane raster,
            computes the primary ray for each pixel, and calls the recursive function <i>traceRay</i> to compute the color of that ray:
                <div style="background-color: rgb(206, 206, 206); padding: 10px;">
                <pre><code>rayTracing(Camera, Scene) -> colors for all pixels:
	pixels = []
	<b>foreach</b> pixel coordinates <b>P</b> in projection screen:
		pixels[P] = <b>traceRay</b>(P, unitVector(P-Camera.origin), Scene, 0)
	<b>return</b> pixels
</code></pre>
</div>
The recursive function <i>traceRay</i> computes the color for a given ray: it finds the closest intersection with scene objects,
casts shadow rays to determine light visibility (one ray per light in the scene), computes the local illumination, and traces reflection and refraction rays if applicable:
                <div style="background-color: rgb(206, 206, 206); padding: 10px;">
                <pre><code>traceRay(P, d, Scene, depth) -> colour C computed for the passed ray:
	Q = firstIntersectionRayScene(P, d, Scene)	 //find closest ray-scene intersection
	<b>if</b> Q is None: <b>return</b> BLACK			 //no intersection found
	N, M = normalAt(Q, Scene), surfaceMaterial(Q, Scene)
	lighting = []
	<b>foreach</b> light in Scene.lights:			//cast shadow ray to test light visibility
		L = unitVector(light.position - Q)
		<b>if</b> L⋅N > 0 and firstIntersectionRayScene(Q, L, Scene) is light.position:
			lighting += light			 //add light if it's visible from Q
	C = localPhongLightingModel(M, Q, -d, N, lighting)	 //local illumination
	<b>if</b> depth ≥ MAX_DEPTH: <b>return</b> C
	C += M.k<sub>r</sub> ⨀ traceRay(Q, reflectionVector(N, -d), Scene, depth + 1)	 //reflection
	C += M.k<sub>t</sub> ⨀ traceRay(Q, refractionVector(N, -d, M.n<sub>12</sub>), Scene, depth + 1)	 //refraction
	<b>return</b> C
</code></pre>
</div>

            <h2 id="improvements">Raytracing improvements</h2>
            <h6>Speed-up techniques</h6>
            The pseudocode above includes a function <i>firstIntersectionRayScene</i>, which finds the closest intersection of a ray with all objects in the scene.
            A naive implementation would test the ray against all objects (i.e., all triangles) in the scene, which can be computationally expensive for complex scenes.
            Therefore, various acceleration structures are used to speed up the ray-scene intersection tests.
            As a first simple speed-up technique, we can use <b>axis-aligned bounding boxes (AABBs)</b> to quickly eliminate large 
            groups of objects that do not intersect a given ray. In an axis-aligned bounding box, the edges of the box are aligned with the coordinate axes.
            The bounding box is defined by two points: the minimum \(L\) and maximum \(H\) corners, which represent the smallest and largest 
            coordinates of the enclosed object along each axis.
            <div style="margin: auto; width: 100%;">
                    <figure style="margin: auto; width:25%; display: flex;">
                        <img src="img/12/bb1.svg" alt="" class="img-fluid"
                            style="width: 100%; height:auto; object-fit:contain;">
                    </figure>
                </div><br>
            We can express the ray using parametric equation:
            \[X(t) = P + t d \quad \quad t\geq 0 \]
            where \(P\) is the ray origin and \(d\) is the normalized ray direction.
            To test the intersection of the ray with the AABB, we can compute the entry and exit points of the ray for
            each pair of axis-aligned planes forming the sides of the box.
            For example, we can start with the back-side plane of the box (the YZ plane at position \(x = L_x\)).
            The normal vector of this plane is \((1, 0, 0)\) and the plane passes through point \(L\).
            Let us consider an intersection point of the ray with this plane at parameter \(t_i\):
            \[P + t_i  d \]
            Then the vector from point \(L\) to the intersection point must be orthogonal to the plane normal, so their dot product is zero:
            \[((P + t_i d) - L) \cdot (1, 0, 0) = 0\]
            From this we can derive the parameter \(t_i\):
            \[t_i = \frac{L_x - P_x}{d_x}\]
            We can do the same for the front-side plane of the cube (YZ plane at position \(x = H_x\)). Let us denote the 
            intersection parameter with this plane as \(t_i'\):
            \[t_i' = \frac{H_x - P_x}{d_x}\]
            Now we can sort the two parameters to get the entry and exit points along the x-axis. We will denote 
            the entry point as \(t_x\) and exit point as \(t_X\):
            \[t_x = \min(t_i, t_i') \quad \quad t_X = \max(t_i, t_i')\]
            
            <div style="margin: auto; width: 100%;">
                    <figure style="margin: auto; width:25%; display: flex;">
                        <img src="img/12/bb2.svg" alt="" class="img-fluid"
                            style="width: 100%; height:auto; object-fit:contain;">
                    </figure>
                </div><br>
            Note that if \(d_x = 0\), the ray is parallel to these planes, and we need to check whether the ray origin \(P_x\) lies within 
            the bounds of the box along the x-axis:
            \[L_x \leq P_x \leq H_x\]
            If not, the ray does not intersect the box. Otherwise, we can set \(t_x = -\infty\) and \(t_X = +\infty\) to 
            indicate that the ray is always inside the box with respect to the x-axis.
            <br><br>
            We can repeat the same process for the other two axes (y and z) to get \(t_y, t_Y\) and \(t_z, t_Z\).
            <div style="margin: auto; width: 100%;">
                    <figure style="margin: auto; width:55%; display: flex;">
                        <img src="img/12/bb3.svg" alt="" class="img-fluid"
                            style="width: 100%; height:auto; object-fit:contain;">
                    </figure>
                </div><br>

            Finally, we can compute the overall entry (\(t_0)\) and exit (\(t_1)\) points of the ray with the box.
            We are looking for the interval, where all three axis-aligned intervals \([t_x, t_X], [t_y, t_Y], [t_z, t_Z]\) overlap.
            This can be found by taking the maximum of the entry points and the minimum of the exit points:
            \[t_{0} = \max(t_x, t_y, t_z) \quad \quad t_{1} = \min(t_X, t_Y, t_Z)\]
            <div style="margin: auto; width: 100%;">
                    <figure style="margin: auto; width:45%; display: flex;">
                        <img src="img/12/bb4.svg" alt="" class="img-fluid"
                            style="width: 100%; height:auto; object-fit:contain;">
                    </figure>
                </div><br>
            If \(t_{0} > t_{1}\), the ray does not intersect the box: there is no interval in which all three axis-aligned intervals overlap.
            <div style="margin: auto; width: 100%;">
                    <figure style="margin: auto; width:45%; display: flex;">
                        <img src="img/12/bb5.svg" alt="" class="img-fluid"
                            style="width: 100%; height:auto; object-fit:contain;">
                    </figure>
                </div>
            If \(t_{1} < 0\), the box is located behind the ray origin (behind point \(P\)), so there is no intersection in the ray direction.
            <br>
            If \(t_{0} < 0\) but \(t_{1} > 0\), the ray origin is inside the box, and the ray exits the box at \(t_{1}\).
            The first intersection point \(Q\) would thus be:
            \[Q = P + t_{1} d\]
            <div style="margin: auto; width: 100%;">
                    <figure style="margin: auto; width:35%; display: flex;">
                        <img src="img/12/bb6.svg" alt="" class="img-fluid"
                            style="width: 100%; height:auto; object-fit:contain;">
                    </figure>
                </div><br>    
            Otherwise, the ray intersects the box in the interval \([t_{0}, t_{1}]\).
            The first intersection point \(Q\) would be:
            \[Q = P + t_{0} d\]
            <div style="margin: auto; width: 100%;">
                    <figure style="margin: auto; width:35%; display: flex;">
                        <img src="img/12/bb7.svg" alt="" class="img-fluid"
                            style="width: 100%; height:auto; object-fit:contain;">
                    </figure>
                </div><br> 
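The whole slab test can be summarized in Python (a minimal sketch; the ray origin, direction, and the box corners \(L\), \(H\) are plain 3-tuples):

```python
import math

def ray_aabb(P, d, L, H):
    """Slab test for the ray X(t) = P + t*d against the AABB [L, H].

    Returns the interval (t0, t1) over which the ray is inside the box,
    or None if the ray misses the box or the box lies behind the origin.
    """
    t0, t1 = -math.inf, math.inf
    for a in range(3):
        if d[a] == 0.0:
            # Ray parallel to this pair of slabs: the origin must lie
            # between them, otherwise there is no intersection at all.
            if not (L[a] <= P[a] <= H[a]):
                return None
        else:
            ti = (L[a] - P[a]) / d[a]
            ti2 = (H[a] - P[a]) / d[a]
            # Entry/exit along this axis, then intersect the intervals.
            t0 = max(t0, min(ti, ti2))
            t1 = min(t1, max(ti, ti2))
    if t0 > t1 or t1 < 0.0:  # intervals do not overlap, or box is behind P
        return None
    return t0, t1

# Ray from (0, 0, -5) along +z against the box [-1, 1]^3:
print(ray_aabb((0.0, 0.0, -5.0), (0.0, 0.0, 1.0), (-1.0,) * 3, (1.0,) * 3))
# → (4.0, 6.0)
```

The first hit point is then \(Q = P + t_0 d\), or \(Q = P + t_1 d\) when \(t_0 < 0\) (origin inside the box), exactly as in the case analysis above.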

            Using AABBs can significantly reduce the number of ray-object intersection tests, especially in complex scenes with many objects.
            However, an AABB may not fit tightly around the object it encloses, leading to false positives.
            A slightly better approach is to use <b>oriented bounding boxes (OBBs)</b>, which can be rotated to fit the object shape more tightly.
            OBBs are still composed of six mutually orthogonal planes, but these are not aligned with the coordinate axes.
            The intersection test with an OBB is thus more complex than with an AABB. However, because the planes are still orthogonal,
            we can use a similar approach as for AABBs: we first transform the ray into the OBB's local coordinate system, 
            where the OBB is axis-aligned, and then perform the intersection test the same way as in the AABB case.
            The found intersection point is then transformed back to the world coordinate system.
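A sketch of that change of frame (assuming the OBB is described by its center `c` and its three unit axis vectors `axes` in world coordinates; both names are illustrative):

```python
def ray_to_obb_local(P, d, axes, c):
    """Transform the ray X(t) = P + t*d into an OBB's local frame.

    `axes` are the box's three unit axis vectors in world space and `c`
    is the box center. The inverse of a pure rotation is its transpose,
    so a world vector is expressed locally by dotting it with each axis.
    In the local frame the box is axis-aligned and the AABB test described
    above applies unchanged; since rotation preserves the parameter t,
    the same t values give the world-space hit point Q = P + t*d.
    """
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    P_rel = tuple(p - ci for p, ci in zip(P, c))
    P_local = tuple(dot(axis, P_rel) for axis in axes)
    d_local = tuple(dot(axis, d) for axis in axes)  # directions rotate only
    return P_local, d_local
```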
            <div style="margin: auto; width: 100%;">
                    <figure style="margin: auto; width:45%; display: flex;">
                        <img src="img/12/bb8.svg" alt="" class="img-fluid"
                            style="width: 100%; height:auto; object-fit:contain;">
                    </figure>
                </div><br> 
            For even better performance, especially in scenes with a large number of objects, we can use hierarchical acceleration structures,
            such as <b>Bounding Volume Hierarchies (BVH)</b> or <b>space partitioning</b>.
            The idea of a BVH is to recursively group objects into a tree structure, where each node represents a bounding volume that encloses its child nodes or objects.
            The root node represents the bounding volume for the entire scene, and the leaf nodes represent individual objects.
            The bounding volumes can be AABBs, OBBs, bounding spheres, or other shapes. Each type has its own advantages and disadvantages (e.g., 
            tightness of fit vs. computational cost of intersection tests).
            The example below illustrates a BVH structure for a simple scene with several objects:
             <div style="margin: auto; width: 100%;">
                    <figure style="margin: auto; width:100%; display: flex;">
                        <img src="img/12/bvh1.svg" alt="" class="img-fluid"
                            style="width: 20%; height:auto; object-fit:contain;">
                            <div style="width: 2%;"> </div>
                            <img src="img/12/arrow.svg" alt="" class="img-fluid"
                            style="width: 3%; height:auto; object-fit:contain;">
                            <div style="width: 2%;"> </div>
                        <img src="img/12/bvh2.svg" alt="" class="img-fluid"
                            style="width: 20%; height:auto; object-fit:contain;">
                            <div style="width: 2%;"> </div>
                            <img src="img/12/arrow.svg" alt="" class="img-fluid"
                            style="width: 3%; height:auto; object-fit:contain;">
                            <div style="width: 2%;"> </div>
                        <img src="img/12/bvh3.svg" alt="" class="img-fluid"
                            style="width: 20%; height:auto; object-fit:contain;">

                            <div style="width: 10%;"> </div>
                        <img src="img/12/bvh4.svg" alt="" class="img-fluid"
                            style="width: 15%; height:auto; object-fit:contain;">
                    </figure>
                </div><br>
            An alternative approach to BVH is <b>space partitioning</b>, where the 3D space is recursively divided into smaller regions using planes or grids.
            Common space partitioning structures include <b>BSP-trees</b> and <b>octrees</b> (resp. quadtrees for 2D scenes), which we have discussed in previous lectures.
            <div style="margin: auto; width: 100%;">
                    <figure style="margin: auto; width:20%; display: flex;">
                        <img src="img/12/quadtree.svg" alt="" class="img-fluid"
                            style="width: 100%; height:auto; object-fit:contain;">
                    </figure>
                </div><br> 

            During raytracing, the ray is tested against the bounding volumes or space partitions first.
            If the ray intersects a bounding volume or partition, we proceed to test the ray against the child nodes or objects within that volume or partition.
            If the ray does not intersect the bounding volume or partition, we can skip testing the objects within it, thus reducing the number of intersection tests and improving performance.
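This pruning traversal can be sketched as follows (a hypothetical `Node` with a bounding `box` and either `children` or, in a leaf, a list of `objects`; `ray_box` and `ray_object` stand in for the bounding-volume and object intersection tests discussed above, each returning a hit parameter t or None):

```python
def closest_hit(node, P, d, ray_box, ray_object):
    """Return the smallest hit parameter t in this subtree, or None."""
    if ray_box(node.box, P, d) is None:
        return None  # ray misses the bounding volume: prune the whole subtree
    if node.objects is not None:  # leaf node: test the enclosed objects
        hits = (t for o in node.objects
                if (t := ray_object(o, P, d)) is not None)
    else:  # inner node: recurse into the children
        hits = (t for ch in node.children
                if (t := closest_hit(ch, P, d, ray_box, ray_object)) is not None)
    return min(hits, default=None)
```

A production traverser would additionally visit the nearer child first and stop descending once a hit closer than a child volume's entry point is known; the sketch keeps only the pruning idea.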
            
            <h2 id="radiosity">Radiosity</h2>
