Putting It All Together: BRDFs, Object Lights and Path Tracing


 

In this post I will explain three concepts that are very important for ray tracing, yet relatively easy to implement given their importance (they still need care, though).


Bidirectional Reflectance Distribution Functions (AKA BRDFs)

Let us recall the reflectance term of the rendering equation. The f inside the integral is the BRDF.
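As a reminder, in the ASCII notation I will use throughout this post it reads:

Lo(x, wo) = Le(x, wo) + integral over the hemisphere of f(x, wi, wo) * Li(x, wi) * cos(theta_i) dwi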

Up to now I have been using a model called the Blinn-Phong shading model. This is not the only shading model we can use, and we should support others because in real life there are all kinds of materials. This is where Bidirectional Reflectance Distribution Functions come in. We can think of them as an abstract interface for all the shading models.



Figure 1: Rendering hemisphere [1]

These functions take 3 arguments: the point x that is being shaded, the incoming light ray direction wi, and the outgoing (eye) ray direction wo, and they return the amount of light that is reflected at that point (for each wavelength of light, of course). For simplicity we will be implementing isotropic surfaces and hence can drop the x term. Below you can see the names of the terms I will be using for BRDFs throughout the rest of this blog post. I won't be explaining much of the theoretical background of most of the BRDFs, because that would simply be rewriting their original papers. I will instead try to show the differences between them.

Figure 2: BRDF symbols [2]
 

Blinn-Phong BRDF 

As I mentioned, I have actually been using this model since the beginning of my ray tracer implementation, so starting with it will be a warm-up for us.
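Written in the notation of Figure 2 (kd and ks are the diffuse and specular reflectances, p is the Phong exponent; this is my reconstruction, so the course slides [2] may use slightly different symbols):

f(wi, wo) = kd + ks * cos(alpha)^p / cos(theta_i)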

 

Where alpha is the angle between the half vector (wh) and the surface normal. Basically this is the Blinn-Phong shading model we are used to, in BRDF form. You might have noticed that there is now a cosine term in the denominator, but that is not actually new: formerly we were not using the rendering equation directly, and since the rendering equation contains a cosine term as well, the two were cancelling each other out. Here is the good old Blinn-Phong model:

Figure 3: Sphere Blinn-Phong original

Modified Blinn-Phong

Since I started with Blinn-Phong, I also want to mention the modified Blinn-Phong BRDF before moving on to the others.
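In the same notation as before, that is:

f(wi, wo) = kd + ks * cos(alpha)^p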

This is basically the Blinn-Phong BRDF with the cosine term removed. When we implement it we get the following result.

Figure 4: Sphere Blinn-Phong modified

I don't know if it is noticeable, but the shiny spot on top of the sphere was a little bigger with the original Blinn-Phong BRDF.

Normalized Modified Blinn-Phong BRDF

Then comes the normalized version of the modified Blinn-Phong. BRDF normalization came to life because people noticed that BRDFs may reflect more light than they actually receive; since this is physically impossible, people started to normalize their BRDFs whenever the model wasn't energy conserving to begin with.
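With the standard approximate normalization factor of (p + 8) / (8 * pi) for the specular lobe, the form should be:

f(wi, wo) = kd / pi + ks * (p + 8) / (8 * pi) * cos(alpha)^p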


The only difference from the non-normalized version is that we also multiply by a normalization factor (in the slides both p and n denote the Phong exponent). Normalized Blinn-Phong looks like the following.

Figure 5: Sphere Blinn-Phong modified and normalized

As you can see it is much darker: since the BRDF is energy conserving, only the spots that face the light directly stay bright.


Phong BRDF

Now comes the Phong BRDF. 
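In the same notation:

f(wi, wo) = kd + ks * cos(alpha_r)^p / cos(theta_i)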

Notice that the alpha_r term here is not the angle between the half vector and the normal: it is the angle between wo and the perfect reflection of wi about the normal n. We get the following result when we render the same sphere again with the Phong BRDF.

 

Figure 6: Sphere Phong original

Modified Phong

And now we have the modified Phong; just like the modified Blinn-Phong, it drops the cosine term in the denominator.
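That is:

f(wi, wo) = kd + ks * cos(alpha_r)^p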

Here is the output of this BRDF.

Figure 7: Sphere modified Phong
 

Normalized Modified Phong BRDF

Again we have the normalized version of the modified Phong BRDF, modelled like the following; we just scale by a normalization factor to conserve energy.
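With the standard (p + 2) / (2 * pi) normalization for the Phong lobe:

f(wi, wo) = kd / pi + ks * (p + 2) / (2 * pi) * cos(alpha_r)^p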

And it produces the following output.

Figure 8: Sphere with normalized modified Phong BRDF


Torrance-Sparrow BRDF

This BRDF is a little different from the others, so I want to mention the idea behind it. It is born from the observation that in reality no surface is perfectly smooth: every surface has some degree of roughness, made up of micro-facets. You can see surfaces with micro-facets below.

Figure 9: Surfaces with different types of micro-facets [3]


This model suggests that some of these micro-facets will reflect the light and some will mask it, depending on their geometry. Below you can see the formulated version of this model.
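This is the standard micro-facet form (the diffuse term may be written slightly differently in the course slides [2]); wh is the half vector and beta is the angle between wi and wh:

f(wi, wo) = kd / pi + D(alpha) * F(beta) * G(wi, wo) / (4 * cos(theta_i) * cos(theta_o))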

This model seems complex but it is actually simple; it just has many components to compute. The functions D, F, and G are new to us, so let us go through them one by one.

Figure 10: Terms for the Torrance-Sparrow BRDF [2]

D is the probability distribution of the micro-facet orientations as a function of alpha. There are various distribution functions out there, but we will use the popular Blinn distribution.
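Blinn's distribution reuses the Phong exponent p:

D(alpha) = (p + 2) / (2 * pi) * cos(alpha)^p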

G is the geometry term. It models how the micro-facets mask and shadow each other. It is computed as follows.
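In the standard Torrance-Sparrow form:

G(wi, wo) = min( 1, 2 * (n . wh) * (n . wo) / (wo . wh), 2 * (n . wh) * (n . wi) / (wo . wh) )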



Lastly, F is the Fresnel reflection term. We can compute it using Schlick's approximation.
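With beta the angle between wi and the half vector wh (some sources use the angle with the normal instead):

F(beta) = R0 + (1 - R0) * (1 - cos(beta))^5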

where R0 is computed by
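R0 = ((eta - 1) / (eta + 1))^2

(the dielectric form, since only a refractive index is given).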

Eta here is the refractive index of the surface. After all these computations we've got ourselves a Torrance-Sparrow BRDF.

 

Figure 11: Torrance-Sparrow BRDF on a sphere


And as a final image I want to show a comparison of a killeroo model with the Blinn-Phong BRDF on the left and the Torrance-Sparrow BRDF on the right. Everything else is the same in both scenes.

Figure 12: Killeroo BRDF comparison


Figure 13: Killeroo BRDF comparison closeup

As you can see, the Blinn-Phong model displays more pronounced specular lighting, whereas Torrance-Sparrow looks almost like a fully diffuse object at a Phong exponent of 40.



Object Lights 

Up to now I have implemented different types of lights, but apart from area lights and environment lights we can't exactly say they were realistically convincing. Now we will implement light sources that are actual spheres or meshes; with mesh lights we can have lights of any shape.

In order to have object lights we will again need multisampling in our scenes, because we have to sample points on our light sources, just as with area lights or environment lights.

Let us start off with mesh lights. To model a mesh light, at each sample we first need to sample a triangle on the mesh, and then a point on that triangle.

To sample a triangle we could just select a random triangle on the mesh with uniform probability, but this would be statistically incorrect: if a mesh has triangles of different sizes, we should give the bigger triangles higher probability. This is actually really easy to do. The steps we need to follow for each mesh light are:

Steps at preprocess:

1 - Before rendering, pre-compute every triangle's area.
2 - Compute the mesh's total surface area.
3 - Find the probability of each triangle by dividing its surface area by the mesh's total surface area.
 
Steps while sampling triangles:
1 - Sample a front-facing (with respect to the shaded point) triangle using the pre-computed probabilities.
2 - Uniformly sample a point on the triangle.
3 - Return the point, the triangle's area, its surface normal, and its probability (a sketch of this sampling follows below).
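A minimal sketch of the preprocess and the area-proportional pick (the class and names here are illustrative, not my actual code; the front-facing check is omitted):

#include <cstddef>
#include <vector>
#include <algorithm>

// Placeholder triangle record; in a real ray tracer this would also carry
// the vertices and the surface normal.
struct Triangle { float area; };

struct MeshLightSampler {
    std::vector<float> cdf;   // running sum of triangle areas, normalized to 1
    float totalArea = 0.0f;

    explicit MeshLightSampler(const std::vector<Triangle>& tris) {
        for (const Triangle& t : tris) {
            totalArea += t.area;          // steps 1 + 2: areas and total area
            cdf.push_back(totalArea);
        }
        for (float& c : cdf) c /= totalArea; // step 3: per-triangle probability
    }

    // Pick a triangle index with probability proportional to its area.
    // xi is a uniform random number in [0, 1).
    std::size_t sampleIndex(float xi) const {
        return std::size_t(std::lower_bound(cdf.begin(), cdf.end(), xi) - cdf.begin());
    }
};

The lower_bound lookup on the normalized running sums is exactly the inverse-CDF step; the probability returned for shading is area / totalArea.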

The step we need to be careful about is sampling a point on the triangle, because we don't want to introduce a bias. We sample two random variables; using the first one we uniformly sample a point between two corners (A and B) of the triangle. Then, to pick the final point inside the triangle, we need inverse transform sampling: if we were to uniformly sample a point between the point we just found and the remaining corner, we would bias the samples towards that side of the triangle.

Figure 14: Triangle sampling

So we first compute q, a uniform point on the edge AB, and then the final point p:

q = A * xi1 + B * (1 - xi1)
p = q * sqrt(xi2) + C * (1 - sqrt(xi2))

where xi1 and xi2 are uniformly sampled numbers between 0 and 1.
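In code, those two lines map directly (Vec3 here is a minimal stand-in for my vector class):

#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 operator*(const Vec3& v, float s) { return {v.x * s, v.y * s, v.z * s}; }
static Vec3 operator+(const Vec3& a, const Vec3& b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }

// A, B, C are the triangle corners; xi1, xi2 are uniform in [0, 1).
static Vec3 samplePointOnTriangle(const Vec3& A, const Vec3& B, const Vec3& C,
                                  float xi1, float xi2) {
    Vec3 q = A * xi1 + B * (1.0f - xi1);  // uniform point on the edge AB
    float s = std::sqrt(xi2);             // inverse transform sampling
    return q * s + C * (1.0f - s);        // unbiased point inside the triangle
}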

After this, all that's left is shading, and it is exactly like area lights; we already know how to do that, and we have all we need (the sampled light point, the light normal, and the sampled triangle's probability). The small detail we need to attend to is that after sampling the triangle we must divide the radiance emitted by the mesh by the probability of the sampled triangle (again, to preserve statistical soundness). Finally, we've got ourselves really cool area lights.

Figure 15: Diffuse materials under a ceiling light

Figure 16: Glossy materials under a ceiling light


Figure 17: Glossy materials with a small mesh light


Let us move on to sphere lights. Again we need to find a point on the sphere that is visible from the point we are shading, and again we don't want to introduce a bias, so we use the following formulas to sample a direction towards a sphere light.

wc = pc - p

thetaMax = arcsin(r / length(wc))

phi = 2 * PI * xi1

theta = arccos(1 - xi2 + xi2 * cos(thetaMax))

Then we construct an orthonormal basis with its w axis aligned with wc

and find the sampled ray direction by:

lightDir = w * cos(theta) + u * sin(theta) * cos(phi) + v * sin(theta) * sin(phi)

Figure 18: Sampling sphere lights [4]



After sampling the direction we just need to use the ray-sphere intersection formula to find the point, and we are done.
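A sketch of these steps in code (reusing the Vec3 type and operators from the triangle-sampling snippet; the basis-construction trick is one common choice, not necessarily the one I used):

#include <cmath>
#include <algorithm>

static Vec3 operator-(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 cross(const Vec3& a, const Vec3& b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static float length(const Vec3& v) { return std::sqrt(dot(v, v)); }
static Vec3 normalize(const Vec3& v) { return v * (1.0f / length(v)); }

static const float kPi = 3.14159265358979f;

// p: shaded point, pc: sphere center, r: sphere radius,
// xi1, xi2: uniform random numbers in [0, 1).
static Vec3 sampleSphereLightDir(const Vec3& p, const Vec3& pc, float r,
                                 float xi1, float xi2) {
    Vec3 wc = pc - p;
    float cosThetaMax = std::cos(std::asin(r / length(wc))); // thetaMax = arcsin(r / |wc|)
    float phi = 2.0f * kPi * xi1;
    float cosTheta = 1.0f - xi2 + xi2 * cosThetaMax;
    float sinTheta = std::sqrt(std::max(0.0f, 1.0f - cosTheta * cosTheta));
    Vec3 w = normalize(wc);                                  // basis w axis along wc
    Vec3 a = std::fabs(w.x) > 0.9f ? Vec3{0.0f, 1.0f, 0.0f} : Vec3{1.0f, 0.0f, 0.0f};
    Vec3 u = normalize(cross(a, w));
    Vec3 v = cross(w, u);
    return normalize(w * cosTheta + u * (sinTheta * std::cos(phi))
                                  + v * (sinTheta * std::sin(phi)));
}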

 

Figure 19: Sphere light source

And if you first use the inverse transformation to carry the shaded point into the sphere's local coordinates, and then apply the sphere's transformation to the point we found, you can get ellipsoids to work as lights as well.

Figure 20: Ellipsoid light source


Path Tracing

Now we've come to another ray tracing concept: a ray tracer built on Monte Carlo integration, also known as a path tracer. In a path tracer we get rid of the ambient lighting term, because it is physically wrong. Instead, at each ray hit point we send new random global illumination rays. Previously we sent recursive rays only at mirror, conductor, or dielectric objects, but in reality all objects reflect light, hence they all interact with each other.

In order to create global illumination rays we need to sample a direction in the hemisphere around the surface normal. To uniformly sample a hemisphere we again need inverse transform sampling. Since it is fairly simple I will directly share my C++ code; if anyone wants to know more, the pbr-book explains all of this really well.

This function takes two uniformly sampled random numbers between 0 and 1 and returns a direction in the upper hemisphere, where z is the up direction.
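A minimal sketch of such a function (my reconstruction of the idea, with Vec3 as in the earlier snippets):

#include <cmath>
#include <algorithm>

// Uniform hemisphere sampling around +z; the pdf is the constant 1 / (2 * pi).
static Vec3 uniformSampleHemisphere(float xi1, float xi2) {
    float z = xi1;                                     // cos(theta) = xi1
    float r = std::sqrt(std::max(0.0f, 1.0f - z * z)); // sin(theta)
    float phi = 2.0f * 3.14159265f * xi2;
    return {r * std::cos(phi), r * std::sin(phi), z};
}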

Importance Sampling

Of course, this is not the only way to do it. In statistics there is a method called importance sampling that we can use instead of uniform sampling. Why would we want to use it? Because of the cos(theta) term in the rendering equation, the rays that are closer to the surface normal are going to contribute more to a point's lighting; using this information we can give a slight bias towards those directions and do cosine-weighted sampling. For cosine-weighted sampling I used the method suggested by the pbr-book: sample a unit disk, then project the disk onto the hemisphere. This gives us hemisphere samples with cosine weights and is also known as Malley's method. For details see https://www.pbr-book.org/3ed-2018/Monte_Carlo_Integration/2D_Sampling_with_Multidimensional_Transformations

Here is the code for sampling a hemisphere with cosine weights.
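Again a sketch rather than my exact code (the pbr-book samples the disk with a concentric mapping, which preserves stratification better; the simple polar mapping below is equally unbiased):

// Cosine-weighted hemisphere sampling via Malley's method; pdf is cos(theta) / pi.
static Vec3 cosineSampleHemisphere(float xi1, float xi2) {
    float r = std::sqrt(xi1);                                  // uniform point on the unit disk
    float phi = 2.0f * 3.14159265f * xi2;
    float x = r * std::cos(phi);
    float y = r * std::sin(phi);
    float z = std::sqrt(std::max(0.0f, 1.0f - x * x - y * y)); // project the disk up
    return {x, y, z};
}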

We get the following results in a Cornell box, with and without importance sampling, at 100 samples per pixel.

Figure 21: Cornell box with pure path tracing

Figure 22: Cornell box with importance sampling


I don't know if it is noticeable, but if you switch directly between them you can see that with importance sampling the diffuse spheres' shading looks a little different, a little more realistic if you ask me.

Next Event Estimation

Next up we have a method called next event estimation. While sending global illumination rays we also directly sample our light sources; but if a global illumination ray then hits the sampled object light, we must discard one of the two contributions, because otherwise we would count that light twice and introduce a bias.
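To make that double-counting rule concrete, here is an illustrative sketch (the types and function names are stubs made up for the example, not my actual interfaces):

struct Color { float r = 0.0f, g = 0.0f, b = 0.0f; };
static Color operator+(Color a, Color b) { return {a.r + b.r, a.g + b.g, a.b + b.b}; }

struct Hit { bool isLight; Color emitted; };

static Color sampleLightDirectly(const Hit&) { return {}; } // shadow-ray stub
static Color traceIndirect(const Hit&)       { return {}; } // GI-ray stub

// 'cameFromIndirectRay' is true when this hit was found by a global
// illumination ray; such rays ignore emission, because that light was
// already counted by the explicit light sample at the previous bounce.
static Color shade(const Hit& hit, bool cameFromIndirectRay) {
    Color result;
    if (hit.isLight && !cameFromIndirectRay)
        result = result + hit.emitted;
    result = result + sampleLightDirectly(hit); // next event estimation
    result = result + traceIndirect(hit);       // random GI bounce
    return result;
}

You can see some of my results with next event estimation below.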

 

Figure 23: Diffuse materials without next event estimation

 
Figure 24: Diffuse materials with next event estimation

Russian Roulette

And finally we have a method called Russian roulette. While it has a cool name, it is actually a very simple concept. Instead of killing rays at a max recursion depth, we kill them based on a probability. Usually this probability depends on the ray's throughput, and most of the time throughput is defined by how much the ray would contribute to the illumination of a point if it hits an object. So for instance, if our ray bounces off an object with 0.8 reflectance it will have 0.8 throughput; when it later bounces off a surface with 0.5 reflectance it will contribute 0.4 to the initial point, and as this goes on a ray's contribution decreases until it eventually dies. One should note that this method increases noise, because it kills rays randomly, but it is statistically more sound.
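As a sketch of the idea (the exact survival probability is a design choice; tying it to throughput like this is the common approach):

#include <random>
#include <algorithm>

// Decide whether to kill a path, based on its throughput (the product of
// the reflectances picked up along the path so far).
static bool russianRouletteKill(float throughput, std::mt19937& rng) {
    std::uniform_real_distribution<float> uni(0.0f, 1.0f);
    float survival = std::min(1.0f, throughput);   // survival probability
    if (uni(rng) >= survival)
        return true;                               // terminate the path
    // Survivors must be reweighted by 1 / survival so the estimator stays
    // unbiased; the caller scales the ray's contribution accordingly.
    return false;
}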

Figure 25: Diffuse materials with only Russian roulette

You can see that with Russian roulette I got lucky and my ceiling has lower levels of noise; this is of course due to having a white ceiling (it reflects most of the light), but it may change from run to run.

Here are my other outputs with combinations of these methods. All outputs are rendered with 100 samples unless stated otherwise.

Figure 26: Diffuse materials, importance sampling & Russian roulette


Figure 27: Diffuse materials, next event estimation, importance sampling & Russian roulette

Figure 28: Diffuse materials, next event estimation & Russian roulette

In some scenes I combined Russian roulette with a max recursion depth to get a better result, though most of the time it was just luck.


Figure 29: Glass, importance sampling & Russian roulette


Figure 30: Glass, next event estimation
Figure 31: Glass, next event estimation & importance sampling

Figure 32: Glass, next event estimation, importance sampling & Russian roulette

 
Figure 33: Glass, next event estimation & Russian roulette

As you can see, this was an unlucky output: most of the rays didn't even make it through the glass, so it looks just like a mirror object and we can't see its emissiveness.

And as final outputs I want to show the difference between different sample counts.

Figure 34: Glass, next event estimation & importance sampling, 100 samples


 
Figure 35: Glass, next event estimation & importance sampling, 1024 samples


Figure 36: Glass, next event estimation & importance sampling, 2500 samples



Figure 37: Glass, next event estimation & importance sampling, 16384 samples


As you can see, increasing the sample count makes a huge difference, although I admit I could do this more efficiently. I didn't use jittered sampling, for instance; maybe in the future I will use jittered sampling for the global illumination rays and get even better results in much shorter times. The last image took seven and a half hours to render.

Bugs


Figure 38: Forgetting the cos(theta) term in the rendering equation




While sampling meshes I made some errors sampling front-facing triangles. I also encountered another bug while trying to sample a front-facing triangle: in the scene below, some points on the ceiling (the ones that are in the light) cannot see any front-facing triangle at all, so rejection sampling goes into an infinite loop. To solve this I reduced my sampling space every time I sampled a triangle, so that a rejected triangle would not be sampled again; if I ran out of triangles I simply returned the last sampled one, and if it was back-facing I let the shading computation handle it so that it would not illuminate the point. Below you can see the artifacts created by the biased sampling.

Figure 39: Not sampling front-facing triangles correctly






References

1 - https://en.wikipedia.org/wiki/Rendering_equation

2 - Akyüz 2022, BRDF summary, Advanced Ray Tracing course materials

3 - https://www.pbr-book.org/3ed-2018/Reflection_Models/Microfacet_Models 

4 - https://www.pbr-book.org/3ed-2018/Light_Transport_I_Surface_Reflection/Sampling_Light_Sources
