There's no cheap and dirty way to do normal mapping is there? Like, on a cpu, not in a shader
-
@eniko Kinda is? But depends how much quality you want out of it.
The high-level "simple" way to do it from heightmaps is to subtract the values of the neighboring pixels on each axis to get the slopes (you need to decide what the ratio of 1 pixel coordinate to 1 increment of height value is though!), then take the cross product of those 2 slope vectors to get the normal, and normalize it.
Using SIMD ops and rsqrt for the normalize will drastically speed that up.
Gets messy at edges though.
@eniko ... unless you mean doing like dot-product-and-clamp type lighting on existing normal maps?
In general, almost everything you can do in a shader, you can also do with SIMD ops on the CPU. It's definitely not going to be as fast, but not actually that much more complicated (with intrinsics), and it's a lot faster than doing one pixel at a time.
(Sometimes the compiler can auto-vectorize the code well too, but you really need to use "restrict" so it knows the input and output don't overlap.)
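A minimal example of the `restrict` point above: a dot-product-and-clamp style loop written so the compiler can prove `src` and `dst` don't alias and auto-vectorize it (the function and its layout are my own illustration):

```c
/* Convert precomputed N.L values to 8-bit brightness, clamped to [0,1].
   "restrict" promises the compiler the arrays never overlap, which is
   what lets it vectorize the loop without runtime overlap checks. */
static void light_pixels(const float *restrict ndotl,
                         unsigned char *restrict dst, int count)
{
    for (int i = 0; i < count; ++i) {
        float v = ndotl[i];
        if (v < 0.0f) v = 0.0f;   /* clamp: back-facing -> black */
        if (v > 1.0f) v = 1.0f;
        dst[i] = (unsigned char)(v * 255.0f);
    }
}
```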
-
@EricLasota mostly thinking of like an actual normal map. But the normals from the texture map need to be rotated to match the normal of the surface it's mapped on
-
@EricLasota AFAIK the way to do that in a shader is to make a TBN matrix, interpolate it between all 3 vertices, and then multiply it with the normal from the texture every pixel
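The per-pixel step described above, as a sketch (assuming the interpolated tangent, bitangent, and normal have already been computed for this pixel; the helper name is mine):

```c
/* Transform a tangent-space normal "ts" into world space using the
   interpolated TBN basis: out = T*ts.x + B*ts.y + N*ts.z, i.e. a
   3x3 matrix multiply with T, B, N as the columns. */
static void tbn_transform(const float t[3], const float b[3],
                          const float n[3], const float ts[3],
                          float out[3])
{
    for (int i = 0; i < 3; ++i)
        out[i] = t[i] * ts[0] + b[i] * ts[1] + n[i] * ts[2];
}
```

(The result generally still needs renormalizing, since interpolated bases drift off unit length.)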
-
@eniko What's the use case for this? A software renderer?
-
@eniko The simplest way on a SW renderer is you make a tiny hemisphere map (e.g. 5x5), light those vectors before you start the triangle, and then each texel of the triangle's texture has a (0-24) index into that hemisphere map. Works great on flat tris, combining it with gouraud shading is a bit trickier though.
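A sketch of that scheme, splitting the work as described: lighting the 25 hemisphere normals happens once per triangle, and the per-texel cost is just a table lookup (function names and the dot-and-clamp lighting model are my assumptions):

```c
/* Once per triangle: light each of the 5x5 hemisphere normals against the
   light direction, producing a 25-entry brightness table. */
static void light_hemisphere(const float normals[25][3],
                             const float light[3], float lit[25])
{
    for (int i = 0; i < 25; ++i) {
        float d = normals[i][0] * light[0]
                + normals[i][1] * light[1]
                + normals[i][2] * light[2];
        lit[i] = d > 0.0f ? d : 0.0f;  /* dot and clamp */
    }
}

/* Per texel: the texture stores a 0-24 index into the lit table. */
static float shade_texel(const unsigned char *indices, int texel,
                         const float lit[25])
{
    return lit[indices[texel]];
}
```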
-
@eniko The other way is to do Ye Olde Emboss Bumpmapping just like we did (but on GPUs) in about 1998 or so.
You have two sets of UV coords, the second shifted TOWARDS the light by less than a texel. The texture holds a height value. You read the height value twice, once with each UV coord. Then you subtract one height from the other, and that's your brightness.
This makes no sense written down, I know. But it works!
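It does make sense in code. A sketch of the emboss trick, simplified to a whole-texel shift for brevity (the real technique shifts by a *fraction* of a texel and samples bilinearly; the function name and array layout are my own):

```c
/* Emboss bumpmapping: sample the height twice, once unshifted and once
   shifted toward the light, and the difference is the brightness.
   (dx, dy) points toward the light; "height" is a w-wide heightmap. */
static float emboss_brightness(const float *height, int w,
                               int x, int y, int dx, int dy)
{
    float h0 = height[y * w + x];                 /* original UV sample    */
    float h1 = height[(y + dy) * w + (x + dx)];   /* shifted toward light  */
    return h1 - h0;  /* positive where the surface slopes toward the light */
}
```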
-
@eniko This was great on GPUs, but may not be faster than the hemisphere version on a CPU because you have to interpolate twice as many UVs and sample twice, which gets expensive.
-
@TomF wait is this basically a matcap?
-
@eniko Yeah, a very tiny one that is lit dynamically.
-
@TomF that's very clever. I guess that'd work particularly well for directional lights?
-
@TomF and I guess instead of reorienting the normals per pixel to match the surface normal you can just reorient the light instead?
-
@eniko You can use SIMD ops for that too but the biggest problem for SW rendering is sampling+edge cull. AVX2 has gather-load ops (to pull texture data from spread-out locations) and conditional store ops (to avoid storing outside of the tri edge), which make it much less problematic.
There are writeups out there for how to compute barycentric coordinates for each pixel, which will let you interpolate the coordinates+matrices of each vert. Beyond that it's the same as using shaders.
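One of the writeups' core pieces, sketched: computing per-pixel barycentric coordinates from edge functions, which is what lets you interpolate per-vertex attributes like the TBN matrices (a standard formulation; the function name and argument layout are mine):

```c
/* Barycentric coordinates of point P inside triangle ABC, via signed
   areas (edge functions). out[0..2] are the weights of A, B, C and
   sum to 1; attribute interp is then w0*vA + w1*vB + w2*vC. */
static void barycentric(float ax, float ay, float bx, float by,
                        float cx, float cy, float px, float py,
                        float out[3])
{
    float area = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax);
    float w0 = ((bx - px) * (cy - py) - (by - py) * (cx - px)) / area;
    float w1 = ((cx - px) * (ay - py) - (cy - py) * (ax - px)) / area;
    out[0] = w0;
    out[1] = w1;
    out[2] = 1.0f - w0 - w1;
}
```

In a real rasterizer you'd evaluate the edge functions incrementally across the scanline (they're linear in x and y), and negative weights double as the inside/outside test for the conditional stores mentioned above.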