{{tag>graphic skin texture normal}}
<WRAP youarehere>
[[:start|Start Page]] >> [[:gamedev|Game Development with the Drag[en]gine]] >> [[gamedev:skinproperties|Skin Texture Properties]] >> [[gamedev:texturepropertylist|Texture Property Database]] >> **normal**
</WRAP>
  
====== Skin Texture Property: normal ======

Defines the surface normal in tangent space.

|Expected Data Source|3 component image|
|Data Range|0 to 1 for all image components|
|Default Value|(0.5, 0.5, 1)|
|Affected Modules|Graphic, Physics|
|Linked Properties|[[gamedev:textureproperties:normal_strength|normal.strength]]|

====== Description ======
The **normal** texture property is used to add high resolution lighting detail to objects that cannot be represented with a low resolution game model. Normal based lighting has long been a standard in game development. The basic idea is to use textures to store high resolution surface normals not found on the low resolution model. Such normal map textures are no ordinary color textures: they store normal vectors packed into 8-bit colors. A normal map can be produced with various tools, but typically two main ways exist.
  
The default value for this texture property is (0.5, 0.5, 1), or (127, 127, 255) in RGB, representing an unaltered normal.
  
Normals can be modified using other texture properties after reading them from this texture property. If you need faint normal map effects, for example for a slightly uneven window surface, you can run into resolution problems with 8-bit images. In this case store the normals exaggerated and use the **[[gamedev:textureproperties:normal_strength|normal.strength]]** texture property to scale down the effect.
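The exaggerate-then-scale-down idea can be sketched as follows. This is only an illustration of the concept: the assumption that **normal.strength** scales the tangent plane components of the normal before renormalizing is mine, and the engine's exact formula may differ.

```python
# Illustrative sketch (assumption: normal.strength scales the tangent plane
# components and renormalizes; the engine's exact formula may differ).
import math

def scale_normal_strength(normal, strength):
    """Scale the x and y (tangent plane) components of a unit normal and
    renormalize, weakening (strength < 1) or strengthening the bump effect."""
    x, y, z = normal
    x *= strength
    y *= strength
    length = math.sqrt(x * x + y * y + z * z)
    return (x / length, y / length, z / length)

# An exaggerated normal scaled down by 0.25 yields a much fainter slope,
# while the stored 8-bit map still uses its full resolution.
print(scale_normal_strength((0.6, 0.0, 0.8), 0.25))
```

Storing the exaggerated normal keeps more of the 8-bit precision; the scale-down happens in floating point at render time, where precision is no longer a problem.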

===== History =====

Normal based lighting has long been a standard in game development and cinema. A game without normal mapping tends to look flat because objects lack the dynamic play of light and shadow we know from the real world. To deal with this problem a number of tricks have been developed over the years.

It all started with Bump-Mapping. The idea is to encode the change in surface height across the texture in the red and green channels. The red channel stores the difference in height between a pixel and the pixel right next to it. The green channel stores the difference in height between a pixel and the pixel right below it. The resulting texture can hence be interpreted as a height gradient map for the object. From the gradients a normal can be reconstructed that is then used instead of the real surface normal for lighting calculations. Converting a height difference to a normal like this is not a perfect match and causes problems if the height difference is too large. For adding bumpy details to an object, though, this technique works well and is still used in production environments today, as it is a cheap and fast solution that still delivers good results.

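The gradient-to-normal reconstruction described above can be sketched like this. It is an illustration, not engine code; the convention that the byte value 128 means "flat" and the `strength` parameter are assumptions for the example.

```python
# Illustrative sketch: reconstructing a normal from the height gradients
# stored in a bump map's red and green channels. Channel values 0..255
# encode signed gradients; 128 is assumed to mean "flat" here.
import math

def normal_from_gradients(red, green, strength=1.0):
    """Convert one bump map texel (red, green in 0..255) to a unit normal."""
    dhdx = (red / 255.0 * 2.0 - 1.0) * strength    # height change to the right
    dhdy = (green / 255.0 * 2.0 - 1.0) * strength  # height change downwards
    # A surface with slope (dhdx, dhdy) has the unnormalized normal
    # (-dhdx, -dhdy, 1); normalize it to unit length.
    length = math.sqrt(dhdx * dhdx + dhdy * dhdy + 1.0)
    return (-dhdx / length, -dhdy / length, 1.0 / length)

# A (nearly) flat texel yields a normal pointing almost straight up.
print(normal_from_gradients(128, 128))
```

This also shows the limitation mentioned above: for large gradients the reconstructed normal flattens out towards the tangent plane instead of matching the true surface orientation.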
Later on, the next step was true normal mapping. Here the normal itself is stored in the texture instead of the height gradients. This removes the conversion and reconstruction calculations, resulting in higher quality lighting. Furthermore, with this technique mapping a high resolution mesh onto a low resolution mesh became interesting and efficient. Organic models in particular suffered under the Bump-Mapping technique, since authoring a gradient map is not easy for a complex organic model. With a normal map the high resolution detail is stored directly, resulting in higher quality.

Variations exist in how normal maps are stored, which components are reconstructed, and whether a high dynamic range image is used to store the normal map. In the end, though, all these techniques amount to the same usage. Today normal mapping using these techniques is the de facto standard in game development.
  
===== Normal Maps from Images =====

The quick and often dirty way is to use a normal map plugin for GIMP or other paint applications. These plugins take a color or grayscale image and convert it to a normal map. Different operators can be used to calculate the resulting normal map; they vary in crispness, strength and how broadly neighboring pixels flow into the result. Results obtained with a normal map plugin can vary drastically. There are tricks to improve the results, for example using Photo Normal Reconstruction or applications specialized in generating normal maps from images. This technique is well suited for detail normal maps, that is bumpy surface details of small scale and depth like scratches, leather patterns or embossed text. It is also often used to create normal maps from real world images. For the latter, though, using Photo Normal Reconstruction is recommended, but the results vary. For geometry normals as required for complex organic models this technique is not well suited at all. Here the second way to create normal maps is the better choice.
  
===== Normal Maps from High Resolution Model =====

The second way uses a normal map generation tool and a set of models. With this technique a high resolution model is used as the source of normal information and mapped onto the texture of a low resolution model. This process is called "baking", and applications like Blender3D can do this for you, as can external tools specialized in normal map generation from a high polygon model. Using this technique can be tricky as various problems can occur.
  
  
===== Tangent Space Problem =====

A main problem is the splitting of normals and especially tangents along the low resolution model. The normals have to be transformed relative to a coordinate system located at the point of interest on the low resolution model. This coordinate system is typically called the "tangent space" and depends on the normal and tangent vectors at this very point. Normals as well as tangents are interpolated in the Drag[en]gine across the faces of a low resolution model. Without proper splitting the normals and tangents of neighboring faces are averaged, resulting in singularities. Even without these singularities, normals and tangents can interpolate across the low resolution model in unexpected ways. Normal map generation tools take care of these peculiarities, but there is an annoying problem: these tools usually work with their own specific definition of tangent space and of how tangents are split across models. These definitions vary only slightly, but small differences can have huge visible artifacts as consequences. It is thus crucial to ensure the normal map generation tool produces output fully consistent with the Drag[en]gine normal definition.
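To make the tangent space idea concrete, the following sketch maps a tangent space normal into model space using the surface normal and tangent at a point. The bitangent convention used here (normal x tangent) and the handedness are assumptions for the example; as noted above, exactly such convention choices are what differ between tools.

```python
# Illustrative sketch (assumed convention: bitangent = normal x tangent):
# transforming a tangent space normal into model space using the
# interpolated surface normal and tangent at a point on the mesh.

def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def tangent_to_model_space(ts_normal, surf_normal, surf_tangent):
    """Map a tangent space normal (x, y, z) into model space.

    The tangent space basis is (tangent, bitangent, normal); the encoded
    normal's components are weights along these basis vectors.
    """
    bitangent = cross(surf_normal, surf_tangent)
    x, y, z = ts_normal
    return tuple(
        x * surf_tangent[i] + y * bitangent[i] + z * surf_normal[i]
        for i in range(3)
    )

# The unaltered tangent space normal (0, 0, 1) maps back onto the
# interpolated surface normal itself.
print(tangent_to_model_space((0.0, 0.0, 1.0), (0.0, 1.0, 0.0), (1.0, 0.0, 0.0)))
# → (0.0, 1.0, 0.0), i.e. the surface normal
```

Because the basis vectors are interpolated per pixel, any disagreement between the baking tool's basis and the engine's basis bends every decoded normal slightly, which is why the visible artifacts appear even for tiny definition differences.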
  
===== Normal Map Encoding =====

Another problem is the encoding of normal maps. The components of normals are in the range -1 to 1 while those of colors are in the range 0 to 1. The most common solution is to map each normal component directly from -1..1 to 0..1, using one color component per normal component. Between the choice of which color component maps to which normal component and whether or not the result is flipped, there are many different ways normals can be mapped. The Drag[en]gine requires the normals to be encoded in the following way to work properly.
|Normal Component|-1 Normal|0 Normal|1 Normal|Color Component|
|X|0|0.5|1|Red|
|Y|0|0.5|1|Green|
|Z|0|0.5|1|Blue|
If this holds true, chances are good your normal map is usable from the encoding point of view. Make sure you watch the Tangent Space Problem too.
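The range mapping itself can be sketched per component. This is an illustration of the -1..1 to 0..1 mapping described above, not engine code; the truncation to integers is chosen so the unaltered normal (0, 0, 1) lands on (127, 127, 255) as stated elsewhere on this page.

```python
# Illustrative sketch: packing a normal component from -1..1 into an
# 8-bit color value 0..255 and back, per the mapping described above.

def encode_component(n):
    """Map a normal component from -1..1 to an 8-bit color value 0..255."""
    return int((n * 0.5 + 0.5) * 255.0)

def decode_component(c):
    """Map an 8-bit color value 0..255 back to the -1..1 normal range."""
    return c / 255.0 * 2.0 - 1.0

# The unaltered normal (0, 0, 1) encodes to (127, 127, 255). Note that a
# zero component lands exactly on 127.5, the ambiguous middle value the
# squish correction below is concerned with.
print([encode_component(n) for n in (0.0, 0.0, 1.0)])  # → [127, 127, 255]
```

The round trip is lossy: 8 bits per component can only represent 256 distinct slopes per axis, which is the resolution problem mentioned in the description for faint normal map effects.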
  
===== Blender3D GBuffer NormGen Script =====

The Blender3D export scripts contain a GBuffer based normal generator script. This script converts the normals generated by the Blender baking process into normals working with the Drag[en]gine. Normal map generators in general have a problem with the middle value: a (0, 0, 1) normal converts to 127.5. Slight irregularities result in slightly different color values being chosen for nearly the same encoded normal. Invisible to the naked eye, they turn into artifacts while rendering. Texture compression in particular reacts badly to such small irregularities in normal maps.

The export scripts contain a **squish correction** to ensure that (0, 0, 1) and similar normals always map properly to (127, 127, 255). The image below shows what to avoid.

<WRAP center 100%>
<WRAP center box 550px>
{{ :gamedev:blender3d:normgen_squish_correction.png |NormGen squish correction}}
<WRAP centeralign>NormGen squish correction. The red color component is shown in gray scale, scaled to display the values 125-129 better. Without squish correction similar normals end up with slightly different values. With squish correction the result is stable.</WRAP>
</WRAP>
</WRAP>
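The instability and the idea behind the correction can be demonstrated in a few lines. This is not the actual NormGen implementation; the rounding behavior and the `epsilon` threshold are assumptions for the demonstration.

```python
# Illustrative sketch (not the actual NormGen script): near-zero normal
# components land near the 127.5 midpoint, so tiny numerical noise flips
# them between 127 and 128. A "squish" correction snaps such values to
# one stable byte before the map is written.

def encode_plain(n):
    """Encode a -1..1 normal component to 0..255 by rounding."""
    return round((n * 0.5 + 0.5) * 255.0)

def encode_with_squish(n, epsilon=0.004):
    """Like encode_plain, but snap near-zero components to a stable 127."""
    if abs(n) < epsilon:  # hypothetical threshold; treat as exactly zero
        return 127
    return encode_plain(n)

# Noisy near-zero components flip between bytes without the correction
# and all land on the same byte with it.
print({encode_plain(n) for n in (-0.002, 0.0, 0.002)})        # → {127, 128}
print({encode_with_squish(n) for n in (-0.002, 0.0, 0.002)})  # → {127}
```

A single-byte flip between neighboring texels is exactly the kind of high-frequency noise that block-based texture compression amplifies into visible artifacts.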
  
====== Physics Module ======

The **normal** texture property can be used for **[[gamedev:particleemitters|particles]]** to influence their bouncing direction. This aligns the physical response of small particles bouncing off surfaces with the graphical representation of the material.

This kind of in depth simulation is usually not used by physics modules, as the effect is subtle and using just the interpolated surface normal is faster.
  
====== Examples ======

<WRAP column 45%>
<WRAP center box 450px>
{{ :gamedev:textureproperties:normal.png |Manhole cover with and without normal map}}
<WRAP centeralign>Manhole cover with and without normal map. The difference between the left side without normal map and the right side with normal map is clearly visible.</WRAP>
</WRAP>
</WRAP>

<WRAP column 45%>
<WRAP center box 550px>
{{ :gamedev:textureproperties:normal2.png |Sample normal map image of a storm drain inlet}}
<WRAP centeralign>Sample normal map image of a storm drain inlet. This shows how a normal map has to look concerning the red and green channel orientation.</WRAP>
</WRAP>
</WRAP>

<WRAP clear></WRAP>
gamedev/textureproperties/normal · Last modified: 2019/05/24 23:43 by dragonlord