OpenGL point light shadows & Atmospheric FX hacks

So asset creation, in the form of modeling the final environment that’ll be used in the game, took up the past few days.  It’s amazing how much faster it went this time compared to the first scene (the city street).  I chalk this up to the experience I’ve picked up now that I’ve been doing this for a while, and my understanding of which things are very time-consuming to model versus which things I can quickly grab, import, texture, and place from turbosquid.  Balancing both techniques let me finish the dark night alley street scene in just a few days.

Where I spent the most time on this scene was the shaders.  I really wanted quality rendering and to go out with a bang, since this’ll likely be the last environment that I do for the game.  I want the dark alleyway to appear foggy and dark, like it’s just about to rain.  That includes moist, shiny cobblestones, and a damp fog effect that feels very palpable to the player.

This idea of mine posed two major problems that I hadn’t had to tackle until now.

Point Light Shadows

The first major problem is that I could no longer “cheat” by using a single directional light source representing the “sun” in the scene.  Instead, I had to place a street lamp every building or so and have these be the only sources of light in the scene.  This means that, in order to keep my shadows working, I now had to implement shadows from point light sources.  In some ways this was easier than the directional case (no longer necessary to compute an optimal shadow bounding box to be used for the light’s ortho matrix projection).  However, I now needed to render to, and sample from, cube maps to store the shadow depth information.  Surprisingly, there was very little comprehensive information on the web about how to properly do PCF shadows using cubemaps.

What I found that works is the following.

  • Create a cubemap renderer positioned at the exact location of one of the point lights - this special vgl engine object renders the scene into the 6 faces of a cubemap using 90-degree FOV projections to properly capture the entire scene from “all angles”.
  • Format the cubemap texture so that each face holds floating-point depth-component values, and keep the size small (256×256) since there will be a lot of these.
  • Define an “override” shader to be used by the above cubemap renderer, ensuring a simple specialized “light render” shader is used when rendering the scene into the cubemap faces.
  • Set the cubemap compare mode (GL_TEXTURE_COMPARE_MODE) for the rendered cubemap texture to GL_COMPARE_REF_TO_TEXTURE, which enables hardware percentage-closer filtering (PCF) for smoother, linearly interpolated shadow edges (see the setup sketch just after this list).
  • Lastly, pass each light’s rendered shadow cubemap to the scene shaders as a samplerCubeShadow uniform.
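For reference, a minimal sketch of that cubemap texture setup in raw OpenGL (C) calls might look like the following.  The actual engine wraps all of this in vgl objects, so the structure and names here are my own:

    GLuint shadowCube;
    glGenTextures(1, &shadowCube);
    glBindTexture(GL_TEXTURE_CUBE_MAP, shadowCube);

    /* Small faces, since there will be a lot of these cubemaps. */
    const GLsizei size = 256;
    for (int face = 0; face < 6; face++) {
        glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, 0,
                     GL_DEPTH_COMPONENT32F, size, size, 0,
                     GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
    }

    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);

    /* The key part: reference-compare mode turns samplerCubeShadow lookups
       into depth comparisons, and GL_LINEAR filtering on a depth-compare
       texture is what gives the hardware PCF mentioned above. */
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_COMPARE_MODE,
                    GL_COMPARE_REF_TO_TEXTURE);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);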

The “light render” shader mentioned above which is used when rendering the depth information to the cubemaps looks like so:

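A minimal sketch of such a shader, with my own (assumed) names for the varying and the uniforms:

    #version 330 core

    in vec3 worldPos;        // surface position passed in from the vertex shader

    uniform vec3 lightPos;   // position of the point light being rendered
    uniform float farPlane;  // far plane of the cubemap renderer's projection

    void main()
    {
        // Store the light-to-surface distance, normalized to [0, 1],
        // as this fragment's depth value.
        float dist = length(worldPos - lightPos);
        gl_FragDepth = dist / farPlane;
    }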
Basically, this encodes the distance from the surface point to the light in the form of normalized depth information.  This is done by manually overriding the depth value stored in gl_FragDepth.  Each light has its own cubemap renderer with a custom-tailored shader like this (using the correct light index to compute the distance from).

The per-fragment lighting shader code that utilizes the cube shadow maps looks like:

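A sketch of the relevant lookup, again with assumed names (the real shader wraps the full lighting model around this):

    uniform samplerCubeShadow shadowCube;  // depth cubemap rendered for this light
    uniform vec3 lightPos;
    uniform float farPlane;

    float shadowFactor(vec3 worldPos)
    {
        vec3 lightToFrag = worldPos - lightPos;

        // Reference depth: the same normalized distance that the
        // "light render" shader wrote into the cubemap.
        float refDepth = length(lightToFrag) / farPlane;

        // The shadow sampler compares refDepth (minus a small bias to
        // avoid acne) against the stored depth along the lookup direction,
        // returning a filtered 0..1 visibility value.
        return texture(shadowCube, vec4(normalize(lightToFrag), refDepth - 0.005));
    }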
Essentially, the code above compares the distance from the surface point to the light against the depth value stored in the cube texture, using the light-to-fragment vector as a lookup value into the cubemap.  Because we’re sampling using a special “shadow” version of the cubemap sampler, the result will be properly interpolated between shadow texels to avoid ugly edges between shadowed and non-shadowed areas.

Luckily, I was able to build this into the Verto Studio editor and associated graphics system and test it out with relatively little trouble.  Even though I have 4 or 5 statically rendered cubemap shadow textures in the entire scene, I was able to keep performance high by building versions of the shaders for each side of the street, so that each individual shader only has to shade with at most 3 lights at a time.  This worked out better than I had expected.

Light-Fog Interaction (Volume FX)

This part was tricky.  I had an idea in my head of how I wanted this to look.  So I did something that usually leads to trouble: I tried to come up with a volume rendering technique on my own, from scratch, implement it, and just kinda see how it goes.

The basic idea stems from what I’ve observed in real life on foggy, dark, damp nights.  Essentially, fog at night is black… IF there is no light around to interact with it.  Naturally, if the water droplets don’t have any light interacting with them, you won’t see them and the fog will appear black.  However, if any light interacts with the water vapor in the air, it’ll create the illusion of a whiter, denser fog.  So this is what I set out to emulate with my shader.

Now, atmospheric shader effects often lead to the necessity of raymarching and heavy iteration in the fragment shader to simulate the accumulation of light-atmosphere interaction.  To this I said “hell no”, since ray marching of any kind in a shader is terrible for performance.  I quickly realized that I could avoid raymarching entirely if I used a simple model to represent the light-fog interaction that I was going for.

In my case, it turned out I could do the whole effect using something as simple as a sphere intersection test.  Basically, when I’m shading a pixel (a point on the surface), I’m interested in what happens to the light on its way back from the surface to the viewer, along the surface-to-viewer vector.  If the atmosphere affects the light at any point along this vector, I’ll need to compute that.  In other words, if the ray from the surface to the viewer intersects a sphere centered at the light, then the fog affects the light on its way back to the viewer.  How much?  Well, if I calculate the length of the segment between the entry and exit points of the ray intersection (how much of the sphere the ray pierces), that length turns out to be proportional to both the perceived increase in the density of the fog and the brightening of the fog color.

This algorithm is given below in fragment shader code:

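What follows is a minimal sketch of that test as a GLSL function.  The uniform names and the fog-sphere radius are my own assumptions; the actual shader in the build folds this into the full fog computation:

    uniform vec3 lightPos;    // point light position (world space)
    uniform vec3 cameraPos;   // viewer position (world space)
    uniform float fogRadius;  // radius of the lit "glow sphere" around the light

    // Returns how much of the surface-to-viewer ray passes through the
    // sphere of lit fog around the light, normalized to [0, 1].
    float fogAmount(vec3 surfacePos)
    {
        vec3 ray = cameraPos - surfacePos;
        float rayLen = length(ray);
        vec3 dir = ray / rayLen;

        // Standard ray-sphere intersection: solve |o + t*dir - c|^2 = r^2.
        vec3 oc = surfacePos - lightPos;
        float b = dot(oc, dir);
        float c = dot(oc, oc) - fogRadius * fogRadius;
        float disc = b * b - c;
        if (disc <= 0.0)
            return 0.0;  // the ray misses the sphere entirely

        float sq = sqrt(disc);

        // Clamp the entry/exit distances to the surface-to-viewer segment.
        float t0 = clamp(-b - sq, 0.0, rayLen);
        float t1 = clamp(-b + sq, 0.0, rayLen);

        // Length of the pierced segment, normalized by the sphere diameter.
        // This drives both the fog density boost and the brightening.
        return (t1 - t0) / (2.0 * fogRadius);
    }

The returned value can then be used to both raise the fog density and whiten the fog color for that fragment.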

I’m sure I’m not the first to come up with this idea, but it still felt pretty cool to reason my way through a shading problem like this.  The visual result looks amazing.

The progress of all this is shown below in a gallery.
