Triple-A game assets such as characters, weapons, and environments rely heavily on normal mapping in order to look good. But what is normal mapping...and why is its use so widespread?
Balancing Quality and Performance
Games must run in real time, ideally at 60+ frames per second. This means that the high-poly models commonly used for 3D printing or animation would not work well in a game: rendering them would be a huge drain on the game's responsiveness.
Low-poly models are needed in order to ensure a game's smooth performance.
But at the same time, gamers expect beautiful visuals with a high level of detail. Game devs need to be able to use many models in one scene to achieve this, not to mention costly lights, physics, and effects.
So how can we compromise between smooth performance and realistic visuals?
Normal maps are the best compromise we have between model quality and game performance. The secret is that they give us more apparent detail while using less geometry.
Faking Geometry With Textures
How does it work? Normal and displacement maps are special kinds of image textures that influence how light is calculated across a surface. They create an illusion of depth by telling the light to bounce off simulated features of the surface, even though those features are not actually there.
The light bouncing off a normal mapped surface bounces according to the texture normal instead of the surface normal.
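To see why redirecting the normal changes the shading, here is a minimal sketch of Lambertian diffuse lighting in Python. The surface normal, perturbed texture normal, and light direction are all made-up illustrative values, not data from any particular engine:

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def lambert(normal, light_dir):
    # Diffuse intensity is dot(N, L), clamped to zero for back-facing light.
    n, l = normalize(normal), normalize(light_dir)
    return max(0.0, sum(a * b for a, b in zip(n, l)))

surface_normal = (0.0, 0.0, 1.0)   # the flat face points straight at the viewer
texture_normal = (0.3, 0.0, 0.95)  # perturbed normal sampled from the map
light = (0.5, 0.0, 1.0)

flat = lambert(surface_normal, light)
mapped = lambert(texture_normal, light)
# The same flat face now shades as if it were tilted toward the light.
```

The geometry never changes; only the normal fed into the lighting equation does, which is the entire trick.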
This allows us to achieve a high level of detail without burdening the game engine with intricate geometry. Sounds too good to be true? Keep in mind that there are some limitations to this technique.
Normal Maps vs. Height Maps
While both normal and height maps give our low-poly models the appearance of more detail, they are used for distinctly different purposes.
The most obvious difference is that height maps are greyscale, because they portray only height differences. Black is 'down', white is 'up', and 0.5 grey means no change in height.
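As a rough sketch, a renderer might decode such a greyscale value into a signed offset like this. The function name and scale factor are illustrative, not any particular engine's API:

```python
# Sketch: reading a greyscale height value as a signed offset, with 0.5
# grey as the neutral midpoint.
def height_offset(grey, scale=1.0):
    # 0.0 (black) -> pushed down, 1.0 (white) -> pushed up, 0.5 -> unchanged
    return (grey - 0.5) * 2.0 * scale
```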
There are a couple different types of height maps that you might be familiar with already.
Bump maps are the old-school way of adding detail to low-poly objects. They are used in the same way as normal maps, except they contain only height information, not angle information.
Displacement maps are sometimes used to change the location of actual vertices in a mesh. This kind of displacement doesn't fake any additional detail. Instead, it moves real geometry, and is used to generate otherwise complex objects. A common example is terrain generated from a texture.
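The terrain example can be sketched in a few lines: each vertex of a flat grid is pushed up by the height sampled from the texture. The 4x4 "texture" and the scale factor are made-up illustrative values:

```python
# Sketch: displacing a flat grid of vertices by a greyscale height map.
height_map = [
    [0.5, 0.5, 0.6, 0.5],
    [0.5, 0.7, 0.8, 0.6],
    [0.5, 0.6, 0.7, 0.5],
    [0.5, 0.5, 0.5, 0.5],
]
max_height = 10.0  # world units at a texture value of 1.0

terrain = []
for z, row in enumerate(height_map):
    for x, value in enumerate(row):
        # Each flat grid vertex is raised by the sampled texture value.
        terrain.append((float(x), value * max_height, float(z)))
```

Unlike a normal map, this produces real geometry that changes the silhouette of the mesh.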
Another use for a displacement map is parallax mapping (also called virtual displacement mapping), a more advanced technique in which the game engine offsets the texture coordinates based on the viewing angle to fake depth. This is fairly computationally expensive, but it can produce good-looking results.
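The core idea of the basic parallax offset can be sketched as follows, assuming a view direction expressed in tangent space (z pointing out of the surface); the height scale and sample values are illustrative, not engine defaults:

```python
# Sketch: shifting texture coordinates along the view direction in
# proportion to the sampled height, so raised areas appear to lean
# toward the viewer.
def parallax_offset(uv, view_dir, height, height_scale=0.05):
    u, v = uv
    vx, vy, vz = view_dir  # tangent-space view vector; vz points outward
    offset_u = (vx / vz) * height * height_scale
    offset_v = (vy / vz) * height * height_scale
    return (u - offset_u, v - offset_v)

# A raised point sampled at a grazing angle shifts noticeably...
shifted = parallax_offset((0.5, 0.5), (0.7, 0.0, 0.3), height=1.0)
# ...while a point at zero height does not shift at all.
flat = parallax_offset((0.5, 0.5), (0.7, 0.0, 0.3), height=0.0)
```

Real implementations refine this with multiple samples (steep parallax mapping or parallax occlusion mapping), which is where the cost comes from.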
Normal maps, on the other hand, do not contain any height information whatsoever. Instead, they contain angle information. They are colorful because the RGB value tells the renderer which direction the slope is facing and how steep that slope is.
The most important advantage of this is that we can use angle information to artificially bend edges of adjacent faces toward each other to produce a bevelling effect. This cannot be done with only height information, because the renderer would have no way of knowing in which direction the edges should be bent.
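The RGB-to-angle decoding works by remapping each colour channel from its [0, 1] range back to [-1, 1], giving one component of the direction vector. A minimal sketch, assuming standard 8-bit channels:

```python
# Sketch: decoding one normal-map texel into a direction vector. The
# renderer remaps each 0-255 channel from [0, 1] back to [-1, 1].
def decode_normal(r, g, b):
    x = (r / 255.0) * 2.0 - 1.0
    y = (g / 255.0) * 2.0 - 1.0
    z = (b / 255.0) * 2.0 - 1.0
    return (x, y, z)

# The classic flat "normal map blue" (128, 128, 255) decodes to roughly
# (0, 0, 1): a slope pointing straight along the surface normal.
flat = decode_normal(128, 128, 255)
```

This is why an untouched normal map is that familiar uniform lavender-blue: every texel points straight out of the surface.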
Softening sharp edges sounds simple, but it gives a surprisingly large jump in visual quality. This is because in real life nothing has completely sharp edges (except for maybe graphene). On the other hand, in CG, everything has an infinitely sharp edge since each line has zero thickness.
You're not likely to notice the actual thickness of an edge in game, but you can notice the fact that at least some light should be glancing off those edges.
Beyond just appearing more natural, this edge-glint light greatly helps to show the form of the object, especially if seen from a distance. Since many game objects are displayed quite small on a screen, game artists often exaggerate their bevels in order to help the player see everything clearly or to help emphasize important objects.
3 Types of Normal Maps
There are three types of normal maps. They all achieve the same effect, but each is calculated slightly differently.
Tangent space normal maps, as the name implies, are based on the tangent direction of each face. These maps are always made up of a combination of three colors.
- Blue shows the slope along the face's normal direction
- Red shows the slope along the left-right tangent direction
- Green shows the slope along the up-down tangent direction. OpenGL normal maps treat green as sloping up, while DirectX maps treat green as sloping down.
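Because the two conventions differ only in the green channel, converting a map between OpenGL and DirectX style is just a matter of inverting green. A sketch with made-up pixel values, stored as (R, G, B) tuples:

```python
# Sketch: converting a tangent-space normal map between OpenGL and
# DirectX conventions by inverting the green channel (255 - G).
def flip_green(pixels):
    return [(r, 255 - g, b) for (r, g, b) in pixels]

# A made-up 1x2 "image": one texel sloping up, one sloping down.
opengl_pixels = [(128, 200, 255), (128, 55, 255)]
directx_pixels = flip_green(opengl_pixels)
```

Most texturing tools expose this as a one-click "flip G / invert Y" option when exporting for a particular engine.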
Object space normal maps are based on the entire object instead of each face individually. This is slightly faster for a graphics card to compute, but it does have some drawbacks.
Since the right side will be a different color than the left side, no UVs can be mirrored, meaning that a lot of texture space will be wasted on symmetrical models. It also means that if the object twists around, the shading will appear flipped.
World space normal maps are the least flexible of all. Since they are based on global coordinates, the object cannot rotate at all without breaking the shading. This type of normal map is only used for large, static, asymmetrical objects like environments, or temporarily in programs like Substance Painter as a means of calculating weathering effects.
The most common type of normal map you will see and work with is tangent space, since it is the most flexible, but it's useful to understand the other types as well so that you can use them if needed. For an in-depth look at normal maps and how to model for and bake them, dive into my Introduction to Normal Map Modeling course.