Scaling Trick

mudd1

I really didn't understand how rendering something twice to a small buffer can help with visible aliasing when scaling the buffer back up (any more than just rendering twice as many pixels in the first place, that is). Did anyone get that, or does anyone know the trick he's referring to?

I am not a graphics person at all, so I'm probably wrong, but I believe he was referring to the concept of mipmapping or anisotropic filtering. In any case, reading the Wikipedia articles on those topics can give you an idea of how rendering something small can improve image quality.

Essentially, instead of rendering a very large number of pixels all the time (which would be computationally expensive), you pre-render a bunch of different sizes of the image and blend between them based on the needs of the scene.
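From what I gather, the blend between levels works something like this toy sketch (1-D, single channel, entirely made up for illustration; real GPUs do this in hardware as trilinear filtering):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

// Toy 1-D "texture" with a precomputed mip chain: each level is half the
// size of the previous one. A real engine builds these once at load time.
struct MipTexture {
    std::vector<std::vector<float>> levels; // levels[0] is full resolution
    float sample(int level, float u) const {
        const std::vector<float>& img =
            levels[std::min(level, (int)levels.size() - 1)];
        int x = std::min((int)(u * img.size()), (int)img.size() - 1);
        return img[x];
    }
};

// Trilinear-style lookup: pick the two mip levels that bracket the detail
// the screen pixel needs, sample both, and blend so the level change never
// pops visibly.
float sampleTrilinear(const MipTexture& tex, float u, float texelsPerPixel) {
    float lod = std::log2(std::max(texelsPerPixel, 1.0f));
    int lo = (int)lod;
    float t = lod - (float)lo;
    return (1.0f - t) * tex.sample(lo, u) + t * tex.sample(lo + 1, u);
}

int main() {
    MipTexture tex{{{0, 1, 0, 1, 0, 1, 0, 1},   // level 0: fine stripes
                    {0.5f, 0.5f, 0.5f, 0.5f},   // level 1: pairs averaged
                    {0.5f, 0.5f},               // level 2
                    {0.5f}}};                   // level 3
    // A distant surface covering ~3 texels per pixel lands between levels 1 and 2.
    std::printf("%f\n", sampleTrilinear(tex, 0.3f, 3.0f));
}
```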

> I really didn't understand how rendering something twice to a small buffer can help with visible aliasing when scaling the buffer back up (any more than just rendering twice as many pixels in the first place, that is). Did anyone get that, or does anyone know the trick he's referring to?

Here's a guess. One of the most successful tricks for anti-aliasing is to render the same view multiple times with the camera jittered slightly each time (randomly moved around by less than a full pixel) and then average the colors you get. Hard lines become softer, jagged edges smooth out a bit, and all the high-frequency effects are just more accurately sampled this way. It sounds like this is what they were describing (or some other sort of sub-pixel sampling).
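As a toy illustration (jittering the sample position rather than the whole camera, which comes to the same thing for a single pixel; none of these names are from the game):

```cpp
#include <cstdio>
#include <cstdlib>

struct Color { float r, g, b; };

// Toy scene: a hard diagonal edge, white above the line y = x and black
// below. With one centered sample per pixel this gives a jagged staircase.
Color renderScene(float sampleX, float sampleY) {
    return (sampleY > sampleX) ? Color{1, 1, 1} : Color{0, 0, 0};
}

// Jittered supersampling: shade the same pixel several times with the
// sample point nudged by less than a pixel each time, then average.
// Pixels the edge passes through come out gray in proportion to coverage.
Color antialiasedPixel(int x, int y, int samples) {
    Color sum{0, 0, 0};
    for (int i = 0; i < samples; ++i) {
        float dx = (float)std::rand() / RAND_MAX - 0.5f; // offset in [-0.5, 0.5)
        float dy = (float)std::rand() / RAND_MAX - 0.5f;
        Color c = renderScene(x + dx, y + dy);
        sum.r += c.r; sum.g += c.g; sum.b += c.b;
    }
    return {sum.r / samples, sum.g / samples, sum.b / samples};
}

int main() {
    // A pixel sitting right on the edge averages out to roughly 0.5.
    Color c = antialiasedPixel(10, 10, 64);
    std::printf("%.2f %.2f %.2f\n", c.r, c.g, c.b);
}
```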

Mathematically this could work as follows:

- Use a buffer with dimensions that are half of the target resolution.

- That's 1/4 the number of pixels (half width, half height).

- Rendering it twice (2/4 of the original pixels) is still half as much work as rendering at the target resolution.

I'm sure the effect is not perfectly pixel-bound, so the gain in speed won't work out exactly that way (it may be better, it may be worse), but you can see the physical number of pixels still ends up being half of what it would be, even with rendering it twice. You could even render it three times and still be at 75% of the original pixel count; the arithmetic is spelled out in the sketch below.
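Spelled out as code, assuming a 1920x1080 target just for concreteness:

```cpp
#include <cstdio>

int main() {
    const long W = 1920, H = 1080;  // assumed target resolution, just for the example
    const long full = W * H;        // pixels shaded when rendering at full res
    for (int n = 1; n <= 3; ++n) {
        long lowRes = (W / 2) * (H / 2) * n;  // n passes over the half-size buffer
        std::printf("%d pass(es): %ld pixels (%ld%% of full res)\n",
                    n, lowRes, lowRes * 100 / full);
    }
}
```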

Seth B.

> I really didn't understand how rendering something twice to a small buffer can help with visible aliasing when scaling the buffer back up (any more than just rendering twice as many pixels in the first place, that is). Did anyone get that, or does anyone know the trick he's referring to?

> Here's a guess. One of the most successful tricks for anti-aliasing is to render the same view multiple times with the camera jittered slightly each time (randomly moved around by less than a full pixel) and then average the colors you get. Hard lines become softer, jagged edges smooth out a bit, and all the high-frequency effects are just more accurately sampled this way. It sounds like this is what they were describing (or some other sort of sub-pixel sampling).

I was thinking of something like that, especially when he says that there are grainy artifacts to his method. That sounds very much like some kind of Monte Carlo sampling. But then it would make no sense at all to add additional jitter or to sample a low-resolution buffer twice. In fact, for every method that would use some sort of sub-pixel accuracy, I can think of a better approach than sampling a regular grid twice with a small offset. I mean, I can see how regular grids can be much easier and faster to handle, but I can't imagine that using just two of them does much to mitigate aliasing.

> I am not a graphics person at all, so I'm probably wrong, but I believe he was referring to the concept of mipmapping or anisotropic filtering. In any case, reading the Wikipedia articles on those topics can give you an idea of how rendering something small can improve image quality.

But since he talks about some dynamic effect, mipmapping would lead to more computation, not less.

Hey guys, I might be able to clear up some of the confusion. 2PP did a great job editing the interview, but some details did get cut.

The buffer we render light beams into is 1/16 the screen area (1/4 width and height), allowing for many more ray marching steps than we could do at full res. "Rendering it twice" actually means writing out two values: the beams for the closest of the 16 screen pixels covered by the smaller buffer's pixel, and the beams for the farthest. Then, when we add the beams to the final full-res buffer, we can choose which of the two values to use based on each pixel's own distance. This is important for edges, where aliasing would cause beams to be clearly wrong due to large changes in depth.

Calculating the close and far values in the same shader pass is very efficient, because farMarches == closeMarches + extra: I just need to calculate those few extra marches to get both the close and far results. Twice the information, for a fraction of the cycles.
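In rough C-style pseudocode, the low-res pass works something like the sketch below. To be clear, this is a simplified stand-in with made-up names and toy depth/march functions, not our actual shader:

```cpp
#include <algorithm>
#include <cfloat>
#include <cstdio>

// Stand-ins so the sketch runs; the real game reads its depth buffer and
// marches through actual light volumes.
float sceneDepth(int x, int y) {              // toy depth: a slanted wall
    return 5.0f + 0.1f * x + 0.05f * y;
}
float marchSegment(float fromD, float toD) {  // toy beam accumulation:
    float light = 0.0f;                       // fixed-size steps through the segment
    for (float d = fromD; d < toD; d += 0.25f)
        light += 0.01f;                       // pretend each step gathers some light
    return light;
}

struct BeamResult { float close, far; };

// One low-res pixel covers a 4x4 block of screen pixels (1/4 width and height
// = 1/16 the area). Find the nearest and farthest scene depths in the block,
// march up to the near depth, then continue the few extra steps to the far
// depth: farMarches == closeMarches + extra, so one pass yields both values.
BeamResult lowResBeams(int lx, int ly) {
    float nearD = FLT_MAX, farD = 0.0f;
    for (int dy = 0; dy < 4; ++dy)
        for (int dx = 0; dx < 4; ++dx) {
            float d = sceneDepth(lx * 4 + dx, ly * 4 + dy);
            nearD = std::min(nearD, d);
            farD  = std::max(farD, d);
        }
    float closeBeam = marchSegment(0.0f, nearD);
    float extraBeam = marchSegment(nearD, farD);  // only the extra marches
    return { closeBeam, closeBeam + extraBeam };
}

int main() {
    BeamResult r = lowResBeams(10, 10);
    std::printf("close=%f far=%f\n", r.close, r.far);
}
```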

In practice, this works very well. Some very complex edges aren't completely accurate (like on plants), but it's hard to notice. The most distracting aliasing (on long angled edges) is much cleaner now.

I'll pretend I just understood what you wrote and call it "very clever"! It sounds very clever ... and very efficient.

But more importantly, YOU know it's very clever!

Thanks Matt!

(also, "first post" - I sincerely hope, you enlighten us from time to time about your rendering adventures, just like I hope Paul Du Bois, keeps posting about advanced functional programming techniques) :)

> Hey guys, I might be able to clear up some of the confusion. 2PP did a great job editing the interview, but some details did get cut.
>
> The buffer we render light beams into is 1/16 the screen area (1/4 width and height), allowing for many more ray marching steps than we could do at full res. "Rendering it twice" actually means writing out two values: the beams for the closest of the 16 screen pixels covered by the smaller buffer's pixel, and the beams for the farthest. Then, when we add the beams to the final full-res buffer, we can choose which of the two values to use based on each pixel's own distance. This is important for edges, where aliasing would cause beams to be clearly wrong due to large changes in depth.
>
> Calculating the close and far values in the same shader pass is very efficient, because farMarches == closeMarches + extra: I just need to calculate those few extra marches to get both the close and far results. Twice the information, for a fraction of the cycles.
>
> In practice, this works very well. Some very complex edges aren't completely accurate (like on plants), but it's hard to notice. The most distracting aliasing (on long angled edges) is much cleaner now.

Well, thanks for the explanation. Now it does actually make perfect sense :)

Is there a reason, though, why you only use one of the two values and don't do some linear interpolation between them? Maybe I didn't understand the problem well enough, but my gut feeling is that this would be reasonable enough.

Good catch: I do indeed use interpolation. This also requires storing the distances of the two march results, which fit in the remaining channels of the render target. I left this part out of the earlier explanation to keep it from getting even more complicated.
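For the curious, the full-res composite then amounts to something like this (same caveats as before: simplified stand-in code, not the real shader):

```cpp
#include <algorithm>
#include <cstdio>

// Everything the low-res pass writes for one coarse pixel: the two beam
// results plus the two depths they were marched to (the depths live in the
// render target's remaining channels).
struct BeamSample { float closeBeam, farBeam, closeDepth, farDepth; };

// Full-res composite: instead of hard-picking the close or far value, weight
// them by where this screen pixel's own depth falls between the two stored
// depths.
float beamsForPixel(const BeamSample& s, float pixelDepth) {
    float range = s.farDepth - s.closeDepth;
    if (range <= 0.0f) return s.closeBeam;  // flat block: both results agree
    float t = std::clamp((pixelDepth - s.closeDepth) / range, 0.0f, 1.0f);
    return s.closeBeam + t * (s.farBeam - s.closeBeam);
}

int main() {
    BeamSample s{0.44f, 0.46f, 11.0f, 11.45f};
    std::printf("%f\n", beamsForPixel(s, 11.2f));  // a pixel partway between
}
```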

> Good catch: I do indeed use interpolation. This also requires storing the distances of the two march results, which fit in the remaining channels of the render target. I left this part out of the earlier explanation to keep it from getting even more complicated.

OK, great. I think I get what you did then. It makes total sense, and thanks a lot for taking the time to explain it :)
