how do you "turn off denormals"
-
@aeva yeah. it's where I learned about this problem. I'd experienced a few side effects of it before but never really understood the underlying cause. (sometimes my code runs *near* signal processing but I've never done it directly)
also this is a banger too, just to appreciate a totally different and much wackier way that denormals can really hurt you https://issues.chromium.org/issues/382005099#comment10
@glyph I realized it was probably the problem when I was digging through @TomF 's blog the other day. I was looking for his post imploring people not to do the thing I'm doing in another project, to find the tolerances on the thing, and instead found a different post on a completely different topic that obliquely mentioned shutting off denormals to improve perf. I was like, wait, I remember hearing something about that somewheres
-
@glyph oh god i just started reading this. i hate the "we must make computers unusably slow at all costs. for security." people so so very much
-
@oblomov @aeva @glyph Sure, but usually normal audio processing doesn't produce denormals. The denormals arise because there's a feedback loop with a sub-1.0 gain, which eventually drains down to very small numbers, and then denormals. If you flush to zero, the signal just snaps to zero as soon as it decays that far.
-
@TomF @glyph specifically it was the one about using fixed-point coordinates for objects. I'm still considering how to make that work, but I'd like to use Jolt, and Jolt only speaks floats, and I'm not expecting GLES2 GLSL to provide everything I need to do what I was doing before on the rendering side either. I'm probably going to use a tile-relative system on the shader side instead, and whatever requires as little translation as possible on the CPU side.
-
@giuseppebilotta @TomF @glyph what I'm planning right now is putting everything into world tiles, and then rather than putting everything into a common world space, go straight to relative coordinates by treating whichever tile the view origin happens to be in as the current origin tile. this adjustment would be done on the CPU, and thus everything on the GPU would just receive a uniform that applies the current tile transform
-
@giuseppebilotta @TomF @glyph the tile coordinates on the CPU will probably just use integer coordinates and follow a regular grid of some kind, which at least avoids having to worry about precision while figuring out the relative adjustment
-
yep, that sounds very similar to what we do in GPUSPH. In our case it's a particle system, and we need an auxiliary grid for binning/neighbor search. The particles are first generated on the CPU in world coordinates in double precision, and then the positions are split into a cell index plus a single-precision cell fraction (still on the CPU), and that is the information that is uploaded and used on the GPU.
Of course in our case the entire particle system is resident on the GPU, so we keep the cell index as a per-particle property on the GPU (it's needed to compute the distance to neighboring particles in adjacent cells), but if you're only processing one tile at a time in the shader it makes sense to move that to a uniform — if it's needed at all on the GPU?
-
@giuseppebilotta @TomF @glyph i think so. most of the environment is just going to be a small set of models kit bashed with instancing. the instance point sets are to be generated offline (probably in blender) per tile, and then all that's needed is to move the tile relative to the camera's tile. i figure particle systems could work the same way if I end up having any