I recently added a rendering pass in my engine to compute volumetric shapes via raymarching.
-
I'm not sure what I could do better, without temporal anti-aliasing. I'm open to suggestions ☺️
@froyok what about variable rate compute shader dispatch? https://youtu.be/mvCoqCic3nE
-
@froyok Plenty of tricks! Like raymarching the volumetric to the closest z value, then continuing from there for your per-pixel rays. I know the edge pixels are the most expensive ones, so maybe a bad idea from that perspective.
@breakin Thanks for sharing that idea still! That sounds interesting! :)
-
@froyok Have you tried this: https://c0de517e.blogspot.com/2016/02/downsampled-effects-with-depth-aware.html
The important parts are using a min/max depth checkerboard pattern for the downscaled depth, and then choosing between bilinear and bilateral upsampling based on the depth discontinuity at a given pixel. Randomly offsetting the rays with blue noise and then running a bilateral blur over the volumetric target before upsampling also helps. It lets you get away with huge steps during raymarching.
@blurple I haven't tried the checkerboard, but I tried different combinations of bilinear and nearest filtering with custom weights. No luck so far.
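For anyone skimming the thread, here is roughly what the depth-aware upsampling in that link boils down to, as a NumPy sketch. The function name, the naive per-pixel loop, and using a plain mean as a stand-in for bilinear filtering are my simplifications, not the article's actual implementation:

```python
import numpy as np

def depth_aware_upsample(color_half, depth_half, depth_full, threshold=0.1):
    """Upsample a half-res volumetric buffer to full resolution.

    Where the low-res depth neighbourhood matches the full-res depth,
    plain bilinear filtering is safe; across a depth discontinuity,
    fall back to the low-res sample whose depth is closest to the
    full-res pixel (the bilateral case), which prevents halos at edges.
    """
    H, W = depth_full.shape
    out = np.empty((H, W), dtype=color_half.dtype)
    for y in range(H):
        for x in range(W):
            # Clamped 2x2 low-res footprint of this full-res pixel.
            fy = min(y // 2, depth_half.shape[0] - 2)
            fx = min(x // 2, depth_half.shape[1] - 2)
            d = depth_half[fy:fy + 2, fx:fx + 2]
            c = color_half[fy:fy + 2, fx:fx + 2]
            if d.max() - d.min() < threshold:
                out[y, x] = c.mean()  # stand-in for bilinear filtering
            else:
                err = np.abs(d - depth_full[y, x])
                out[y, x] = c.flat[err.argmin()]  # nearest-depth fallback
    return out
```

A real implementation would do this in the upsample shader with hardware bilinear taps; the point is only the per-pixel choice between the two filters.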
-
@froyok what about variable rate compute shader dispatch? https://youtu.be/mvCoqCic3nE
@WaitForPresent Not doable with the framework I'm working with, I believe, plus I'm sticking to OpenGL currently. Otherwise I would need to emulate it myself in compute.
-
I'm not sure what I could do better, without temporal anti-aliasing. I'm open to suggestions ☺️
To illustrate a bit, this is how it looks in the worst-case scenario:
-
To illustrate a bit, this is how it looks in the worst-case scenario:
I use a 3x3 bilateral filter; each sample is weighted by both spatial distance (sub-pixel position within the half-res grid) and depth similarity (to prevent light bleeding across silhouettes).
-
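A minimal sketch of how those two weights combine for one 3x3 neighbourhood, in NumPy. The Gaussian falloffs and the sigma values are illustrative assumptions on my part, not the actual constants used in the engine:

```python
import numpy as np

def bilateral_weights(center_depth, neighbor_depths, subpixel_offset,
                      sigma_spatial=1.0, sigma_depth=0.1):
    """Normalized weights for a 3x3 bilateral filter tap pattern.

    spatial term: Gaussian around the pixel's sub-pixel position in the
    half-res grid. depth term: Gaussian on depth similarity, which drives
    weights to ~0 across silhouettes and prevents light bleeding.
    """
    ys, xs = np.mgrid[-1:2, -1:2].astype(float)
    ys -= subpixel_offset[0]
    xs -= subpixel_offset[1]
    spatial = np.exp(-(xs**2 + ys**2) / (2.0 * sigma_spatial**2))
    depth = np.exp(-((neighbor_depths - center_depth)**2)
                   / (2.0 * sigma_depth**2))
    w = spatial * depth
    return w / w.sum()
```

The filtered value is then just the weighted sum of the 3x3 color taps with these weights.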
@breakin Thanks for sharing that idea still! That sounds interesting! :)
@froyok This recent link would help if you had SDFs. Maybe some inspiration there: https://pointersgonewild.com/2026-03-06-a-recursive-algorithm-to-render-signed-distance-fields/
-
I'm not sure what I could do better, without temporal anti-aliasing. I'm open to suggestions ☺️
@froyok Instead of 2x2 binning (aka half resolution), would it work to do two versions, like 1x8 and 8x1, and then accumulate those into the full-resolution final result? Directly inspired by box blur.
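If I'm reading the suggestion right, a toy NumPy version of the two-pass binning would look like the following. The nearest-neighbour upsampling and the plain average of the two passes are my simplifications of the idea:

```python
import numpy as np

def separable_bin_accumulate(img, k=8):
    """Approximate an image with two cheap anisotropic passes.

    Pass 1 bins k-wide horizontally (1xk), pass 2 bins k-tall vertically
    (kx1); both are upsampled back to full resolution (nearest) and
    averaged, instead of a single 2x2 half-res pass. Assumes the image
    dimensions are divisible by k.
    """
    h, w = img.shape
    horiz = img.reshape(h, w // k, k).mean(axis=2)   # 1xk binning
    vert = img.reshape(h // k, k, w).mean(axis=1)    # kx1 binning
    horiz_up = np.repeat(horiz, k, axis=1)           # nearest upsample
    vert_up = np.repeat(vert, k, axis=0)
    return 0.5 * (horiz_up + vert_up)
```

Each pass only touches 1/k of the pixels, so the two together cost 2/k of full resolution, at the price of directional blur along each binning axis.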
-
@froyok Instead of 2x2 binning (aka half resolution), would it work to do two versions, like 1x8 and 8x1, and then accumulate those into the full-resolution final result? Directly inspired by box blur.
@mirth Hmm, I'm not sure I see how I could transcribe the effect into this method. What would be the point of making it separable, outside of the performance cost?
-
@mirth Hmm, I'm not sure I see how I could transcribe the effect into this method. What would be the point of making it separable, outside of the performance cost?
@froyok Performance, and a full-resolution (if blurry) result, potentially solving the upscale question. I know a lot more about vision than graphics, so I'm not sure if this would be a sensible trade-off.
-
@froyok Performance, and a full-resolution (if blurry) result, potentially solving the upscale question. I know a lot more about vision than graphics, so I'm not sure if this would be a sensible trade-off.
@mirth I will scratch my head a bit about it ;)
-
@mirth I will scratch my head a bit about it ;)
@froyok I think trying a version of this rescaling + threshold on a 2D black-and-white silhouette image gives a rough idea of the behavior:
-
@froyok I think trying a version of this rescaling + threshold on a 2D black-and-white silhouette image gives a rough idea of the behavior:
@froyok The math would take some thought, but it might be possible to sample the vertically and horizontally squished versions at render time, do the threshold, and skip the larger intermediate texture entirely. Then the shadows would have fewer artifacts, and the overall memory bandwidth used should be smaller.
(this feels like the kind of idea that one of the elder graphics people will chime in and be able to point out some prior art and improvements from 25 years ago)
-
@froyok The math would take some thought, but it might be possible to sample the vertically and horizontally squished versions at render time, do the threshold, and skip the larger intermediate texture entirely. Then the shadows would have fewer artifacts, and the overall memory bandwidth used should be smaller.
(this feels like the kind of idea that one of the elder graphics people will chime in and be able to point out some prior art and improvements from 25 years ago)
@mirth Given that the critical part here is the edges, the demonstration you propose doesn't help with this issue.
One solution could then be to compute bits at half res where there aren't edges (so edges stay at full res), but this sounds a lot like the MSAA or variable-rate solutions proposed in other replies, which are unfortunately difficult to implement in my engine.
-
oblomov@sociale.network shared this topic