Double Fine Action Forums
DF Oliver

Programming Update 8: The Data Pipeline

Recommended Posts

This was an awesome, informative post!

Please do more technical posts in the future. They are really interesting to me.


This was really quite enjoyable and informative to read. Thank you so much for all the effort you must have put into writing this (not to mention your work on Broken Age :D)!


Thanks for the reply and I totally understand, but one last question (one that I'm sure everyone who is a programmer wants to know):

Also, since I have the feeling that you know about as much about the PS4 as the rest of the gaming community: does the switch to x86 and more PC-gaming-like features really make you happy, or is there still a need for heavy optimization for each platform? I wish I could borrow you for a day and a half just to learn about all this stuff. Thanks again.

No problem, I'm glad you guys are digging the tech updates. :-)

It'll make a lot of things much easier, that's for sure. I haven't had any hands-on experience with the new consoles, but my gut tells me that there will still be quite a bit of platform-specific work to get the most out of the hardware. My hope is that there will be less of it, though, and that a similar architecture means it'll be easier to write an engine core. We shall see...

See, my friends are some of the few people who actually bemoaned the x86 announcement for the PS4; they were all hoping for a RISC architecture. It would be leagues easier than the arcane nonsense of the PS3 and more efficient than x86. /tangent


Mostly for memory reasons. As you said, ETC1 doesn't support an alpha channel, which means that you have to encode it in a second ETC1 texture. So the memory almost (but not quite) doubles (compared to, say, DXT), and it also adds overhead because of data transfer (texture uploads) and shader latency (a second texture look-up).

It's not strictly mandatory, but we feel very strongly about making the most of the hardware we are running on (as I said before :-) )

Does it actually take twice the memory? I think an ETC1 texture is half the size of a DXT5 or ATC texture of the same dimensions, so adding the alpha channel would bring them on par (PVR, however, is actually cheaper). In the past, I've added the alpha channel in the same texture so it's a little better organized, although you do pay (a bit) extra in shader complexity.

Okay okay... it's 30% more memory. :-) ETC1 has a compression ratio of 1:6 (3 channels) whereas DXT5 compresses at 1:4 (4 channels).

You said you add the alpha channel to the same texture. How does that work? Do you compress B and A in one channel? If so what is the quality like?


See, my friends are some of the few people who actually bemoaned the x86 announcement for the PS4; they were all hoping for a RISC architecture. It would be leagues easier than the arcane nonsense of the PS3 and more efficient than x86. /tangent

Hehe... as an engine programmer I like a good tech challenge, and the idea of the SPUs appealed to me a lot in the beginning... but over the years I've had to fix many PS3-only bugs, so now I definitely welcome the decision that Sony made.


Well, I guess the other point which I should have emphasized a bit more is that the game will have to run at many different resolutions. Old iOS devices have a resolution of 480 x 320; the new iPad, on the other hand, supports 2048 x 1536 pixels. And I haven't even mentioned desktop computers at this point, which let you run the game at every possible resolution.

Mips also help you in this case because the goal of mip filtering is to keep the texel-to-pixel ratio as close to 1 as possible. So in other words even for 2D games mip maps are hugely beneficial.
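To make that concrete, here's a small Python sketch (purely illustrative, not engine code) of how the texel-to-pixel ratio maps to the mip level that gets sampled:

```python
import math

def mip_level(texels_per_pixel: float) -> float:
    """Mip level for a given texture-to-screen scale factor:
    level 0 at a 1:1 texel-to-pixel ratio, one level deeper
    for every halving of the on-screen size."""
    return max(0.0, math.log2(texels_per_pixel))

# A 1024px-wide texture drawn 256px wide covers 4 texels per pixel:
print(mip_level(4.0))  # -> 2.0 (the quarter-resolution mip is sampled)
print(mip_level(1.0))  # -> 0.0 (full-resolution art at native scale)
```

So on a 480 x 320 device the small mips do the work, while the retina iPad sits near level 0 of the same chain.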

I hope that clears things up a bit.

Yes, definitely. Thank you!


Nice presentation, but I dislike mipmaps; I've never liked the concept (same with LOD) and they can be a pain in a number of aspects (quality-, memory- or auto-generation-wise).

@Oliver

How do you calculate your mipmaps (in Photoshop, Nvidia lib, third-party tools, own/engine lib, ...)?

Will there also be quality settings like compressed vs. uncompressed formats, trilinear vs. bilinear filtering?

Is this a result of the different potential GPUs an Android device might have?

That's exactly it, in a nutshell. Almost every GPU manufacturer has their own preferred format that will yield the best performance with their particular GPU architecture, so this needs to be accounted for if the game is to run well across a broad range of devices.
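As a rough illustration of what such detection might look like at startup (the extension strings are real GLES extension names, but the function and the priority order here are made up for this sketch, not Double Fine's actual code):

```python
# Hypothetical sketch: pick a compressed-texture format from the GL
# extension string, the way an Android engine might at startup.
PREFERRED = [
    ("GL_IMG_texture_compression_pvrtc",    "pvr"),   # PowerVR GPUs
    ("GL_AMD_compressed_ATC_texture",       "atc"),   # Qualcomm Adreno
    ("GL_EXT_texture_compression_s3tc",     "dxt"),   # NVIDIA Tegra
    ("GL_OES_compressed_ETC1_RGB8_texture", "etc1"),  # baseline, no alpha
]

def pick_texture_format(gl_extensions: str) -> str:
    """Return the first supported format in preference order."""
    for ext, fmt in PREFERRED:
        if ext in gl_extensions:
            return fmt
    return "uncompressed"

print(pick_texture_format(
    "GL_OES_compressed_ETC1_RGB8_texture GL_AMD_compressed_ATC_texture"
))  # -> atc
```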

Does the software detect this and load in different assets? I mean, all the users buy from the same place, I'd imagine.

You said you add the alpha channel to the same texture. How does that work? Do you compress B and A in one channel? If so what is the quality like?

We didn't get that fancy :)

It's simply atlasing the alpha channel together with the color info. In practice I don't think it would make much difference from having two textures (except that it limits the maximum size we can have).
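A tiny sketch of how the UVs might be remapped under that kind of layout. Note the top-half/bottom-half arrangement here is my assumption for illustration, not necessarily how their atlas is actually organised:

```python
# Assumed layout: RGB in the top half of the texture, the alpha
# channel atlased as grey into the bottom half. A shader would do
# two look-ups into the same texture with these two UVs.
def atlas_uvs(u: float, v: float):
    """Map a sprite UV to its color texel and its matching alpha texel."""
    color_uv = (u, v * 0.5)          # top half holds RGB
    alpha_uv = (u, 0.5 + v * 0.5)    # bottom half holds alpha
    return color_uv, alpha_uv

color, alpha = atlas_uvs(0.25, 0.5)
print(color)  # -> (0.25, 0.25)
print(alpha)  # -> (0.25, 0.75)
```

This is also where the "limits the maximum size" caveat comes from: the color half can only be half the maximum texture height.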

Okay okay... it's 30% more memory. :-) ETC1 has a compression ratio of 1:6 (3 channels) whereas DXT5 compresses at 1:4 (4 channels).

And I don't think this is actually true. ETC1 is 6x compression, but on 24 bits instead of 32, so each pixel takes 4 bits. DXT5 is 4x on 32 bits, so each pixel is 8 bits. Two ETC1 textures (or one double the size) then equal one DXT5 :)
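For anyone following along, the bits-per-pixel arithmetic checks out like this:

```python
# Checking the numbers above: bits per pixel for each scheme.
def bits_per_pixel(source_bits: int, ratio: int) -> float:
    return source_bits / ratio

etc1_rgb   = bits_per_pixel(24, 6)  # ETC1: 6:1 on 24-bit RGB -> 4 bpp
etc1_alpha = bits_per_pixel(24, 6)  # second ETC1 texture carrying alpha
dxt5       = bits_per_pixel(32, 4)  # DXT5: 4:1 on 32-bit RGBA -> 8 bpp

print(etc1_rgb + etc1_alpha)  # -> 8.0, which is the same as...
print(dxt5)                   # -> 8.0
```

So two ETC1 textures and one DXT5 texture of the same dimensions really do cost the same memory.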


How do you calculate your mipmaps (in Photoshop, Nvidia lib, third-party tools, own/engine lib, ...)?

Will there also be quality settings like compressed vs. uncompressed formats, trilinear vs. bilinear filtering?

We use our own code. It's just a simple box filter right now, but we may change the algorithm later on.
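A minimal box-filter mip step looks something like this (a grayscale Python sketch of the general technique, not Double Fine's actual code; a real pipeline would do this per channel):

```python
# Box filter: each output pixel is the average of the 2x2 block
# directly above it in the previous mip level.
def box_downsample(img):
    h, w = len(img), len(img[0])
    return [
        [(img[2*y][2*x] + img[2*y][2*x+1] +
          img[2*y+1][2*x] + img[2*y+1][2*x+1]) // 4
         for x in range(w // 2)]
        for y in range(h // 2)
    ]

level0 = [[0, 4],
          [8, 12]]
print(box_downsample(level0))  # -> [[6]]
```

Fancier kernels (Kaiser, Lanczos) preserve more detail in the small mips, which is presumably what "we may change the algorithm later on" refers to.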

The discussion about settings is still in progress, so I'm not 100% sure. I won't be able to give you an option for compressed vs. uncompressed, because the data will already be in a compressed format. Decompressing it at runtime doesn't really make sense.


Does the software detect this, and load in different assets? I mean all the users buy from the same place I'd imagine,

Yep, the Android stores (e.g. Google Play, Amazon) are device-aware and will return a different package based on the hardware making the request. It's pretty cool actually...


And I don't think this is actually true. ETC1 is 6x compression, but on 24 bits instead of 32, so each pixel takes 4 bits. DXT5 is 4x on 32 bits, so each pixel is 8 bits. Two ETC1 textures (or one double the size) then equal one DXT5 :)

You are right of course! I totally forgot to ignore the alpha channel when doing the math. I hang my head in shame... :-)


@DF Oliver

I was just hoping for some truecolour mode as well (for the textures where it makes sense). Depending on your image data it can vary from looking significantly better to not being much of an issue compared to something like DXT5 (if you use the alpha channel). Please, no artifacts and as little blurring as possible, at least for the computer systems. It would be a shame to lose the correct pixels after you put so much effort into producing them.

@DF Oliver

I was just hoping for some truecolour mode as well (for the textures where it makes sense). Depending on your image data it can vary from looking significantly better to not being much of an issue compared to something like DXT5 (if you use the alpha channel). Please, no artifacts and as little blurring as possible, at least for the computer systems. It would be a shame to lose the correct pixels after you put so much effort into producing them.

You don't have to worry about artifacts. Tim and the art team won't let me ruin their artwork! (And rightly so.) :-)


Must say superb article! Very well presented. I love all the follow up questions and answers :D

Sure hope to read one of your future books. I'd definitely buy one.


Thanks everyone! I'm glad you guys enjoyed this article and I appreciate the questions and kind remarks a lot. :-)


Very fascinating stuff. I only grasped the basics of what you wrote, which kind of blows my mind given all the work that goes on behind what was described. The other part of interest was the many formats to code for on Android, which makes the whole 'platform of anarchy' make even more sense, and explains why many people don't feel up to delving into programming for it. So thank you for diverting a little of your time to explain the goings-on to us plebeians.


That's far more than interesting; it's very useful most of all! And it's explained very clearly for such a short post!

Have you ever considered writing a book about all this?


Thanks for this, Oliver!

Maybe this is a stupid noob question but... :)

Don't you get flicker lines where the different chunks of the background meet when you pan or scroll them? How do you fix this?

I'm reading a book called "Learn Cocos2d 2 - Game Development for iOS" and there's a chapter where you learn to make an infinite scroll effect. The book explains the flicker as: "This line appears because of rounding errors in their positions in combination with subpixel rendering." And it's fixed by overlapping the images by one pixel, but I imagine this creates another artifact, because you will see where the images merge.

Could you explain something about this, if this doesn't take too much of your time?

Thanks again, and sorry if it's a stupid question again :)


+1 on the "write a book" suggestions. This is such a fascinating topic. I've always wanted to learn about the deeper parts of graphics programming, but most of the material I've seen has been painful to go through. It tends to be unnecessarily heavy reading with no regard for what the key things are that the reader really needs to know.

Oliver, I like your style. :)


Don't you get flicker lines where the different chunks of the background meet when you pan or scroll them? How do you fix this?

I'm reading a book called "Learn Cocos2d 2 - Game Development for iOS" and there's a chapter where you learn to make an infinite scroll effect. The book explains the flicker as: "This line appears because of rounding errors in their positions in combination with subpixel rendering." And it's fixed by overlapping the images by one pixel, but I imagine this creates another artifact, because you will see where the images merge.

Could you explain something about this, if this doesn't take too much of your time?

The book is totally right of course. In fact there are two potential problems:

1) Geometry inaccuracy

2) Texture filtering discontinuity

Our solutions to the two problems are as follows:

1) We were very careful when choosing our coordinate system. IEEE floating-point numbers are more accurate in certain ranges; in fact, ideally you don't want to use fractional coordinates at all. Because of this, we made the decision that the vertex positions of the clip geometry have to be integer values.

2) This is in fact the reason why we perform the mip-map generation before the chunking. You also want an additional line of pixels on the bottom and right side of the texture, so that the bilinear filter returns the correct values.
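Both fixes can be sketched roughly like this (the helper names are invented for illustration, not taken from the engine):

```python
# 1) Snap chunk positions to whole pixels so floating-point drift
#    can't open a visible seam between neighbouring chunks.
def snap_position(x: float, y: float):
    return round(x), round(y)

# 2) Pad each chunk with a duplicated bottom row and right column
#    (a "gutter"), so bilinear filtering at the chunk edge never
#    blends in texels from outside the chunk.
def pad_chunk(chunk):
    padded = [row + [row[-1]] for row in chunk]   # duplicate right column
    padded.append(list(padded[-1]))               # duplicate bottom row
    return padded

print(snap_position(99.9999, 100.0001))  # -> (100, 100)
print(pad_chunk([[1, 2],
                 [3, 4]]))  # -> [[1, 2, 2], [3, 4, 4], [3, 4, 4]]
```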

Nice presentation, but I dislike mipmaps; I've never liked the concept (same with LOD) and they can be a pain in a number of aspects (quality-, memory- or auto-generation-wise).

While the analogy mipmaps = LODs is very easy to make, it's fundamentally flawed. Mipmaps are critically important, and that analogy misses two important advantages they give you (which Oliver touches on).

The first is that mipmaps are crucial to prevent aliasing. A good example is this image: mipmapping_example.png -- It would be even worse in motion: the image on the left would be horribly flickery and shimmery as it moves, while the one on the right, while blurrier, would not flicker or shimmer. Things like anisotropic texture filtering also help with this, but without mipmaps they would not be sufficient to get rid of aliasing.

The second is simply bandwidth usage. Omitting mipmaps (and compression) can cause hugely more texture bandwidth to be used, which can be disastrous for performance, especially on mobile devices that are already bandwidth-constrained.

For quality, you'd think "high quality" rendering like film (RenderMan etc.) would not use mipmaps, because they're a "quality reduction"? Nope: http://renderman.pixar.com/resources/current/rps/txmake.1.html -- they're equally important there (or perhaps even more so, when you have hundreds of gigabytes of textures used in a single frame... without mipmaps you'd need to read all of that in).

All mipmaps really do is pre-filter, ahead of time, what you would otherwise have to do at runtime to prevent aliasing. One analogy: if you rendered a super-high-resolution image with no mips and then scaled it down, you'd approximate what you get by rendering an image with mipmaps at a regular resolution.
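And on the memory complaint raised earlier: the entire chain below the base level only costs about one third extra, since each level is a quarter of the one above (1/4 + 1/16 + ... → 1/3). A quick sum shows it:

```python
# Total pixels in a square texture plus its full mip chain,
# halving the size each level down to 1x1.
def mip_chain_pixels(size: int) -> int:
    total = 0
    while size >= 1:
        total += size * size
        size //= 2
    return total

base = 1024
ratio = mip_chain_pixels(base) / (base * base)
print(ratio)  # ~1.333: the whole chain is only ~33% more memory
```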


I didn't make an analogy; I was expressing that I don't like either technique.

No matter how often I've used mipmaps, I just don't like the concept, like I don't like LOD, pre-baked lighting, pre-baked occlusion culling, static pathfinding, ... you get the idea. I prefer dynamic real-time solutions, or at least JIT solutions, whenever it makes sense for the project and target platform. I know about the ups & downs. ;O)
