Double Fine Action Forums
DF Oliver

Programming Update 8: The Data Pipeline

Recommended Posts

I didn't make an analogy; I was saying that I don't like either technique.

No matter how often I've used mipmaps, I just don't like the concept, in the same way I don't like LOD, pre-baked lighting, pre-baked occlusion culling, static pathfinding, ... you get the idea. I prefer dynamic realtime solutions, or at least JIT solutions, whenever it makes sense for the project and target platform. I know about the ups and downs. ;O)

The core "concept" of mipmaps is image processing and filtering theory... the basis of digital image synthesis. Lumping mipmaps into the same "static" class as LODs, pre-baked lighting, occlusion, pathfinding, etc. seems misguided. There's no plausible realtime alternative, nor would you really want one. Perhaps higher-quality filtering of your mipped images (both offline and online, like doing a real approximation of EWA filtering instead of bilinear), but none at all? Welcome to the PS1! Folks even take realtime-computed quantities (like procedurally defined textures/displacement maps) and bake them down just so they can perform mipmap-like filtering on them to reduce aliasing (for example, take a look at some of the uses of Ptex inside Disney, or the brick maps provided by PRMan). It's hard to imagine how you could dislike the idea of making your rendered images look better for trivial cost, with a huge performance increase as a bonus!
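To make the "prefiltering" idea concrete, here is a minimal sketch (not anyone's production code) of how a mip chain is built: each level is a 2x2 box-filtered, half-size copy of the previous one, down to 1x1. Real tools use better kernels (Kaiser, Lanczos) and gamma-correct averaging; this only shows the structure.

```python
# Build a mipmap chain for a square, power-of-two grayscale image
# using a simple 2x2 box filter. Each level halves the resolution.

def downsample(img):
    """Average each 2x2 block of a square grayscale image (list of rows)."""
    n = len(img) // 2
    return [[(img[2*y][2*x] + img[2*y][2*x+1] +
              img[2*y+1][2*x] + img[2*y+1][2*x+1]) / 4.0
             for x in range(n)] for y in range(n)]

def build_mip_chain(img):
    """Return [level0, level1, ...] down to a 1x1 image."""
    chain = [img]
    while len(chain[-1]) > 1:
        chain.append(downsample(chain[-1]))
    return chain

base = [[float((x + y) % 2) for x in range(4)] for y in range(4)]  # 4x4 checker
chain = build_mip_chain(base)
print([len(level) for level in chain])  # → [4, 2, 1]
```

The 1x1 tail of the chain is the average of the whole image, which is exactly the "fully prefiltered" sample you want when a texture shrinks to a single pixel on screen.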


Maybe my wording was a bit unlucky regarding mipmaps. What I was primarily trying to say is that I dislike generating them up front and thereby storing redundant information (they can still have their discrete issues at specific distances/sizes), wasting memory; and if you need to alter the textures on the fly, you have to recalculate them anyway. Obviously you're using more energy this way, but I like the more dynamic aspect of things. Nonetheless, pre-calculated data has its place.

You should see my statement as detached from what the best solution is in this case.


Some day Oliver will write a book about coding for games and it'll become a #1 best seller.

DF gold.


I agree, this was a fascinating and comprehensible update, even for a non-programmer! Put me down for a copy of that book as well :)

Maybe my wording was a bit unlucky regarding mipmaps. What I was primarily trying to say is that I dislike generating them up front and thereby storing redundant information (they can still have their discrete issues at specific distances/sizes), wasting memory; and if you need to alter the textures on the fly, you have to recalculate them anyway. Obviously you're using more energy this way, but I like the more dynamic aspect of things. Nonetheless, pre-calculated data has its place.

You should see my statement as detached from what the best solution is in this case.

So I'm trying to see if I can't correct some misconceptions about mipmaps that are unfortunately common. While computing and storing a meager ~30% extra memory may be distasteful to some, mipmaps are actually a fantastically clever and easy solution to a difficult sampling/filtering problem in graphics (both realtime and not): they make images look way nicer while providing a really big speedup and reduction in power consumption.
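For what it's worth, that extra-memory figure comes straight from a geometric series: each mip level has a quarter of the previous level's texels, so the whole chain adds 1/4 + 1/16 + ... = 1/3 on top of the base image. A quick sanity check:

```python
# The mip chain overhead as a fraction of the base texture: each level
# has a quarter of the previous level's texels, so the extra cost
# converges to 1/3 (~33%).

def mip_overhead(size):
    """Extra texels in the mip chain of a size x size texture,
    as a fraction of the base level."""
    base = size * size
    extra, level = 0, size // 2
    while level >= 1:
        extra += level * level
        level //= 2
    return extra / base

print(mip_overhead(1024))  # → 0.333... (one third extra)
```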

So I see your statement detached, but you need to consider the reality of doing rendering... when rendering good-looking, filtered and stable images, prefiltered data [mipmaps = prefiltered texture data] is key! Computing it online or offline are both fine. Many current/next-generation realtime and offline rendering techniques use the mipmap-like technique of computing multiresolution pre-filtered data (for texture and model data, sparse voxel octrees and virtual texturing are easy examples; for surface shading, check out LEAN/CLEAN mapping). I hope people reading these posts who are interested in computer graphics aren't thrown off by someone disliking the idea of mipmaps, and realize how cool they are.

PS: If you don't like generating them yourself, nvdxt/crunch/squish/pvrtextool/dxtextool/compressonator/any other tool out there will do a good job for you. Generating them on the fly for a dynamic texture with an uncompressed texel format is easy and fast: glGenerateMipmap.


The book is totally right of course. In fact there are two potential problems:

1) Geometry inaccuracy

2) Texture filtering discontinuity

Our solutions to the two problems are as follows:

1) We were very careful when choosing our coordinate system. IEEE floating-point numbers are more accurate in certain number ranges; ideally you don't want to use fractional coordinates at all. Because of this we made the decision that the vertex positions of the clip geometry have to be integer values.

2) This is in fact the reason why we perform the mip-map generation before the chunking. You also want an additional line of pixels on the bottom and right side of the texture, so that the bilinear filter returns the correct values.
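As an illustration of that second point (a hypothetical sketch, not the actual pipeline code): when a texture is cut into chunks, each chunk can copy one extra row/column from its right and bottom neighbours, so bilinear filtering at a chunk edge sees the same neighbours it would in the uncut image.

```python
# Cut a square image into chunks, each padded with one extra row and
# column of "gutter" pixels taken from the source image (clamped at
# the image border) so bilinear filtering across seams stays correct.

def cut_with_gutter(img, chunk):
    """img: square image as a list of rows. Returns a dict mapping
    (tx, ty) chunk origins to (chunk+1) x (chunk+1) tiles."""
    n = len(img)
    tiles = {}
    for ty in range(0, n, chunk):
        for tx in range(0, n, chunk):
            tile = [[img[min(y, n - 1)][min(x, n - 1)]
                     for x in range(tx, tx + chunk + 1)]
                    for y in range(ty, ty + chunk + 1)]
            tiles[(tx, ty)] = tile
    return tiles

img = [[y * 4 + x for x in range(4)] for y in range(4)]
tiles = cut_with_gutter(img, 2)
# The top-left 2x2 chunk becomes 3x3; its extra row/column come from
# the neighbouring chunks, matching what the uncut image contains.
print(tiles[(0, 0)])  # → [[0, 1, 2], [4, 5, 6], [8, 9, 10]]
```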

Thank you so much for the explanation. I like the idea of the integer values and the additional line of pixels on the bottom and right side. Thanks!


@BecauseMyCatSaidSo

Are you Lance Williams? :o)

I don't think there are misconceptions that need to be corrected. Again, mipmaps have their pros and cons like any other discrete (precalculated) data. They provide great bang for the buck in avoiding aliasing in a number of situations, given the hardware limitations computers still have. But while widely supported, they aren't quite what you actually want; they have their disadvantages, and if you don't spice them up well enough (maybe calculating in linear space, using a Kaiser filter, trilinear filtering), they can look rather terrible. And even more complexity is needed when you step from 2D into 3D space with texturing (anisotropic filtering).

I guess my love for a dynamic/generative, code-based approach partly comes from fighting for every byte when trying to squeeze as much as possible out of a bootblock/boottrack on the Amiga.

Anyway, to end this discussion: every system has different bottlenecks, and you'll always try to find the best solution for a specific platform/problem, if you can afford going different routes. Due to a faster CPU and less memory, the BBC Micro, for instance, invited you to solve certain problems (like geometry calculations) with fast implementations or wacky approximations. On the C64 the situation was the opposite: more memory and a lower clock rate forced you to use more lookup tables. The approaches differ in order to provide the best experience on each system. Personally I'm just more fond of the code-based solution (even if it's only used to generate the data on the fly), okay?


I'm still not sure I've understood why you need different sizes of the same image. I mean, yes, obviously if it's further away it can be smaller, I just don't see how that is ever going to be the case. Take for example the background layer -- isn't it always going to be the same size? The only thing that changes sizes is the character as it walks deeper into the picture. The rest of the images will stay as they are, and what's more, you know what they will be like, because they drew them, so can't you just pick the right size from the get-go?

Sorry for the stupid question, but it doesn't seem immediately obvious to me. :P

Edit: Apart from having hugely different screen resolutions, of course. That point is clear. But I meant in regard to what you talked about in your OP.

I'm still not sure I've understood why you need different sizes of the same image. I mean, yes, obviously if it's further away it can be smaller, I just don't see how that is ever going to be the case. Take for example the background layer -- isn't it always going to be the same size? The only thing that changes sizes is the character as it walks deeper into the picture. The rest of the images will stay as they are, and what's more, you know what they will be like, because they drew them, so can't you just pick the right size from the get-go?

Sorry for the stupid question, but it doesn't seem immediately obvious to me. :P

Edit: Apart from having hugely different screen resolutions, of course. That point is clear. But I meant in regard to what you talked about in your OP.

There are two main reasons why you would see a texture at different sizes:

1) Camera zoom: The camera will move in and out to focus on different things (e.g. scene overview vs. closeup of a face).

2) Screen resolution: Drawing a background image on an older iOS device at 480 x 320 will use a smaller mip-map level than rendering the same shot on an HD MacBook Pro screen. The important thing to notice here is that the GPU does try to keep the texel-to-pixel ratio at 1. That means a higher screen resolution will use a bigger mip level, because the screen has more pixels and the best mip-map therefore should also have more texels.

Does that make any sense?
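A rough sketch of that texel-to-pixel rule (simplified: the hardware actually uses per-pixel UV derivatives, not whole-image sizes, and the numbers here are illustrative):

```python
# Pick the mip level whose resolution best matches a texture's
# on-screen footprint. Level 0 is the full-size texture; each level
# halves the size, so the level is roughly log2(texels per pixel).

import math

def pick_mip_level(texture_size, pixels_on_screen):
    """Mip level that keeps the texel-to-pixel ratio closest to 1."""
    ratio = texture_size / max(pixels_on_screen, 1)
    return max(0, round(math.log2(ratio)))

# A 1024-texel-wide background shown across 1024 screen pixels uses
# level 0; on a 480-pixel-wide screen it drops to a smaller level.
print(pick_mip_level(1024, 1024))  # → 0
print(pick_mip_level(1024, 480))   # → 1
print(pick_mip_level(1024, 128))   # → 3
```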


Yeah, completely. I simply didn't know we had camera zooms -- and I totally forgot there might be close-ups in the game. :lol:

Thanks for clarifying.

2) Screen resolution: Drawing a background image on an older iOS device at 480 x 320 will use a smaller mip-map level than rendering the same shot on an HD MacBook Pro screen. The important thing to notice here is that the GPU does try to keep the texel-to-pixel ratio at 1. That means a higher screen resolution will use a bigger mip level, because the screen has more pixels and the best mip-map therefore should also have more texels.

I'm curious... does this rule apply to PCs? How low does the resolution need to be for the engine to start using those smaller maps?

I mean, obviously, old adventure games (think Full Throttle) ran in the old DOS resolutions of... what was it? 320x200? PCs these days work with really high resolutions, but every once in a blue moon... you get to see a system with 800x600 or less (my aunt has a REALLY old monitor and uses that resolution O_o). Do such cases also warrant the use of said mip-maps, or do you need to go even lower?

I'm curious... does this rule apply to PCs? How low does the resolution need to be for the engine to start using those smaller maps?
I'm not Oliver (surprise!) but I think the same rule applies: the GPU will try to keep the texel-to-pixel ratio at 1, so it's going to use the smallest mipmap that provides enough texels, and that will depend on the resolution of the screen and the size of the plane being rendered in the scene.

(Since I had to look it up to be sure: a "texel" is a single pixel in the texture itself, while "pixel" refers to a displayed pixel. And that makes me wonder if there's a different term for a pixel on a monitor, because if you have your monitor set to a non-native resolution, then your displayed pixels will be larger than your monitor's pixels.)


.....

(Since I had to look it up to be sure: a "texel" is a single pixel in the texture itself, while "pixel" refers to a displayed pixel. And that makes me wonder if there's a different term for a pixel on a monitor, because if you have your monitor set to a non-native resolution, then your displayed pixels will be larger than your monitor's pixels.)

In OpenGL, each sample in the output buffer (the result of the rendering) is referred to as a fragment, and it can cover more or less than one actual pixel depending on the resolution.


While I never really retain much of these posts, thanks for all the hard work!

And pretty pictures. And arrows.

Smiles


Thank you for this fascinating read.

I had previously never really thought about rendering, presuming that game-logic and determining what to draw were the areas where most could be done through optimization. This gives a whole new perspective.

Do you know how much of the frame time is taken up by the actual rendering? Does this change significantly with different devices? And how much does optimization affect this?


Do you know how much of the frame time is taken up by the actual rendering? Does this change significantly with different devices? And how much does optimization affect this?

When it comes to rendering there are two important phases which are related to the overall performance:

1) Draw submission: This is the time spent on the CPU sending all the rendering-related data to the GPU.

2) Draw execution: This is the time spent on the GPU actually drawing stuff.

To make things more complicated these two phases are interleaved, because the GPU will start rendering while the CPU is still sending information. In other words both phases have to be optimized individually and with the other phase in mind. Did I mention that efficient rendering is a pretty difficult problem to solve... :-)

As far as actual timings go, you are right that it does HEAVILY depend on the actual device. For example, the PC is hardly breaking a sweat right now, while some of the lower-end handheld devices are struggling to get everything done in time.

Since Moai is (kind of) single-threaded right now, both the game-play and the rendering (draw submission + draw execution) have to be done within 33 milliseconds in order to achieve at least 30 frames per second. Right now our budget is 16 milliseconds for game-play and 16 milliseconds for rendering. It's a soft target though, as some scenes are heavier on script execution while others have a lot of fancy shaders...
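Those budget numbers fall straight out of the target frame rate; a quick sketch of the arithmetic:

```python
# At a given frames-per-second target, a single-threaded frame
# (game-play + rendering) must fit inside 1000/fps milliseconds;
# the 16 ms + 16 ms split above is roughly half of that each.

def frame_budget_ms(fps):
    """Total time available per frame, in milliseconds."""
    return 1000.0 / fps

total = frame_budget_ms(30)
print(round(total, 1))      # → 33.3 ms per frame at 30 fps
print(round(total / 2, 1))  # → 16.7 ms each for game-play and rendering
print(round(frame_budget_ms(60), 1))  # → 16.7 ms total at 60 fps
```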


@DF Oliver

Will there be a chance to get 60fps on fast enough systems?

What's the lowest system (performance wise) you're trying to support (maybe 3GS iPhone)?

As for timing, what's it gonna be? Delta timing, fixed time steps based on a 30fps experience, different timing methods for logic and rendering, ...?

@DF Oliver

Will there be a chance to get 60fps on fast enough systems?

Absolutely!

What's the lowest system (performance wise) you're trying to support (maybe 3GS iPhone)?

We are not sure yet. Right now we are aiming to support at least the iPhone 4 and higher, and equivalent Android devices. While performance is okay on these devices right now, we might run into issues in the future, so as usual I can only say: let's see... :-)

As for timing, what's it gonna be? Delta timing, fixed time steps based on a 30fps experience, different timing methods for logic and rendering, ...?

That is a great question. Working on console titles has skewed me towards fixed time steps, so that is how it's set up right now. I'll experiment with free timing; in theory that should be fine, but I haven't tried it yet.


Hey, cool about the 60fps support, it just makes a difference. :o) Try to use a minimum platform you really feel comfortable with; supporting this many devices (especially on Android) will be groovy enough. If you can rely on defined platforms or can guarantee a certain minimum stable performance, fixed steps can be easier to deal with and spare you the glitchy look you can run into with too-large deltas (no idea if Moai produces hiccups).


Wonderful read, I second the earlier post about you writing a book - the game development learning space could definitely use it!


Fantastic post! I always wondered about a lot of this stuff. Nice to finally have it explained in clear English. Thanks!


Nice! These technical updates are really appreciated - there is art in computer programming.

I'm a bit surprised that so many types of texture compression were needed, but then I have not written very much graphics code. I wonder when ASTC compression will be an option though - it looks very nice, on paper at least.


Very very useful and insightful post.

We are actually using different sprites for our game to display on different platforms. I thought about using one HQ image for each sprite and enabling a few mipmap levels, but I was not too sure how good the end result would look. You just gave me the encouragement to try that method out. Thanks Oliver! :)


Thanks, nice post.

As far as I know, the 2x2 pixel quads are required to generate the derivatives of input values, like UV coordinates, with which the required mip-map level and/or anisotropic samples are determined.
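For the curious, a simplified version of that computation (loosely following the OpenGL spec's isotropic LOD formula; function and variable names here are illustrative):

```python
# Why rasterizers shade in 2x2 quads: neighbouring pixels in the quad
# give finite-difference derivatives of the UVs, and the mip level is
# (roughly, ignoring anisotropy) log2 of the largest texel footprint
# a pixel covers.

import math

def mip_lod(uv00, uv10, uv01, tex_size):
    """uv00/uv10/uv01: UVs at a pixel and its right/below neighbours
    in the 2x2 quad. Returns the (unclamped) level-of-detail."""
    dudx = (uv10[0] - uv00[0]) * tex_size
    dvdx = (uv10[1] - uv00[1]) * tex_size
    dudy = (uv01[0] - uv00[0]) * tex_size
    dvdy = (uv01[1] - uv00[1]) * tex_size
    rho = max(math.hypot(dudx, dvdx), math.hypot(dudy, dvdy))
    return math.log2(max(rho, 1e-8))

# A 256-texel texture mapped 1:1 to pixels -> LOD 0; minified 4x -> LOD 2.
print(mip_lod((0, 0), (1/256, 0), (0, 1/256), 256))  # → 0.0
print(mip_lod((0, 0), (4/256, 0), (0, 4/256), 256))  # → 2.0
```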


That was the most useful behind-the-scenes description of low-level technical art asset work that I have encountered. Thank you!


Amazing post Oliver... making the guts of low-level computer graphics seem interesting and accessible to a broad audience, that's impressive.

As a programmer, I feel inadequate in your presence.


I really enjoyed this post, enough that I thought a comment to that effect was appropriate. As a web developer starting to get into game development, this type of 'beyond the basics' content is invaluable. Keep up the good work! I hope to see more soon.

