There's no cheap and dirty way to do normal mapping is there? Like, on a cpu, not in a shader
-
@EricLasota AFAIK the way to do that in a shader is to make a TBN matrix, interpolate it between all 3 vertices, and then multiply the normal from the texture by it every pixel
@eniko What's the use case for this? A software renderer?
-
@eniko The simplest way on a SW renderer is you make a tiny hemisphere map (e.g. 5x5), light those vectors before you start the triangle, and then each texel of the triangle's texture has a (0-24) index into that hemisphere map. Works great for flat tris, combining it with gouraud shading is a bit trickier though.
-
@eniko The other way is to do Ye Olde Emboss Bumpmapping just like we did (but on GPUs) in about 1998 or so.
You have two sets of UV coords, the second shifted TOWARDS the light by less than a texel. The texture holds a height value. You read the height value twice, once with each UV coord. Then you subtract one height from the other, and that's your brightness.
This makes no sense written down, I know. But it works!
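A rough sketch of the two-sample emboss trick, with a made-up 4x4 height map. The real version shifts the UVs toward the light by a sub-texel amount via the interpolants; here a whole-texel shift keeps the demo integer-only.

```c
enum { W = 4, H = 4 };

/* Illustrative height texture: a ramp rising toward +u. */
static const unsigned char heightmap[H][W] = {
    { 10, 20, 30, 40 },
    { 10, 20, 30, 40 },
    { 10, 20, 30, 40 },
    { 10, 20, 30, 40 },
};

static unsigned char sample(int u, int v)
{
    return heightmap[v & (H - 1)][u & (W - 1)];   /* wrap (power-of-two sizes) */
}

/* Sample twice, the second UV shifted toward the light, and subtract:
   a slope facing the light reads as a positive brightness delta. */
static int emboss(int u, int v, int light_du, int light_dv)
{
    return (int)sample(u + light_du, v + light_dv) - (int)sample(u, v);
}
```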
-
@eniko This was great on GPUs, but may not be faster than the hemisphere version on a CPU because you have to interpolate twice as many UVs and sample twice, which gets expensive.
-
@TomF wait is this basically a matcap?
-
@eniko Yeah, a very tiny one that is lit dynamically.
-
@TomF that's very clever. I guess that'd work particularly well for directional lights?
-
@TomF and I guess instead of reorienting the normals per pixel to match the surface normal you can just reorient the light instead?
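Reorienting the light instead amounts to multiplying it by the transpose of the TBN, once per triangle for a directional light, so the fetched normals never need transforming. A sketch under those assumptions (names illustrative):

```c
typedef struct { float x, y, z; } V3;

static float dot3(V3 a, V3 b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

/* Multiply by the transpose of TBN: world space -> tangent space.
   Done once per triangle; per pixel, brightness is then just
   dot(tangent_light, map_normal). */
static V3 world_to_tangent(V3 t, V3 b, V3 n, V3 light)
{
    V3 out = { dot3(t, light), dot3(b, light), dot3(n, light) };
    return out;
}
```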
-
@eniko You can use SIMD ops for that too but the biggest problem for SW rendering is sampling+edge cull. AVX2 has gather-load ops (to pull texture data from spread-out locations) and conditional store ops (to avoid storing outside of the tri edge), which make it much less problematic.
There are writeups out there for how to compute barycentric coordinates for each pixel, which will let you interpolate the coordinates+matrices of each vert. Beyond that it's the same as using shaders.
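One common barycentric setup uses edge functions; a minimal sketch (coordinates and names are illustrative, not from any particular writeup):

```c
typedef struct { float x, y; } P2;

/* Signed area-style edge function: positive when p is left of a->b. */
static float edge(P2 a, P2 b, P2 p)
{
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

/* Returns 1 and fills w[3] if p is inside CCW triangle abc; returns 0
   otherwise -- this sign test is the edge cull mentioned above. */
static int barycentric(P2 a, P2 b, P2 c, P2 p, float w[3])
{
    float area = edge(a, b, c);
    float w0 = edge(b, c, p), w1 = edge(c, a, p), w2 = edge(a, b, p);
    if (w0 < 0 || w1 < 0 || w2 < 0)
        return 0;
    w[0] = w0 / area; w[1] = w1 / area; w[2] = w2 / area;
    return 1;
}
```

Any per-vertex attribute (UVs, TBN rows) then interpolates as `w[0]*A0 + w[1]*A1 + w[2]*A2`, and the weights step linearly in x and y, which is what makes the SIMD version straightforward.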