Double Fine Action Forums
DF Lee

Art Update 8: Creating Character Textures


"All textures must be square and power of 2."

Can you talk about why this is? I've heard about this pow2 pattern before, but I have never had an explanation that I comprehended. I know that it has something to do with how the textures are batched to the GPU, or something like that.

Cheers

C

It's because of mipmapping.

Mipmaps are precomputed, scaled-down versions of the base texture. In 3D graphics they are used on objects that are far away, or that otherwise take up a small portion of the screen, for two reasons: to reduce the visual artefacts that occur when a large texture gets scaled down to a small size at render time, and to reduce the processing required for large textures by substituting the scaled-down versions where appropriate. Mipmapping does introduce a memory overhead (a mipmapped texture takes up one third more RAM/VRAM than the base texture alone, to be precise), but it usually pays off greatly to mipmap the textures used in a game.
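The "one third more" figure follows from a geometric series: each mipmap level has a quarter of the pixels of the level above it, so the extra memory is 1/4 + 1/16 + 1/64 + ... ≈ 1/3 of the base texture. A quick Python sketch (mine, not from the update) confirms it:

```python
def mipmap_overhead(base_size):
    """Total pixels in all mipmap levels below a square base texture,
    as a fraction of the base texture's own pixel count."""
    size, extra = base_size, 0
    while size > 1:
        size //= 2
        extra += size * size
    return extra / (base_size * base_size)

print(mipmap_overhead(256))  # ~0.333, i.e. one third extra
```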

You're probably wondering what that has to do with powers of two.

Well, mipmaps are created by scaling down the mipmap "one level above" by a factor of two, vertically and horizontally (for the first mipmap, the "mipmap one level above" is the base texture itself). Obviously the dimensions have to be even for even a single mipmap to be creatable, but in order to be able to make all the mipmaps down to the smallest possible texture size, i.e. 1×1, the dimensions have to be equal powers of two.

This example illustrates that:

(Image: MipMap_Example_STS101.jpg)

The base texture is 256×256; the mipmaps follow on the right side, from top to bottom: 128×128, 64×64, 32×32, 16×16, 8×8, 4×4, 2×2 and 1×1 (those last three are so small you can barely see them, and the very last one is just a single pixel, which is exactly what a 1×1 image is).
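The chain of sizes in that image can be generated mechanically; here is a small sketch (a hypothetical helper, not anything from the update):

```python
def mipmap_chain(width, height):
    """List the (width, height) of every mipmap level down to 1x1."""
    levels = [(width, height)]
    while width > 1 or height > 1:
        width = max(1, width // 2)
        height = max(1, height // 2)
        levels.append((width, height))
    return levels

print(mipmap_chain(256, 256))
# nine levels: (256, 256), (128, 128), ..., (2, 2), (1, 1)
```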

I hope this explanation was understandable and made sense.


Two theories so far? I have a third one that I think makes the most sense, since it predates mipmaps: a resolution of 2^n makes texture-map lookups very fast.

Consider that looking up (x, y) in a texture requires you to compute texture[x + y * width] for every pixel drawn. Multiplications and divisions are very expensive operations, whether in time, power, and/or silicon real estate. But if your texture is 256 × 256, you only need to do texture[x + (y << 8)].

"<<" means bitshift, and in this case "<< 8" effectively multiplies by 256. A bitshift is extremely cheap compared to a full binary multiplication.

It gets better: if you want texture wrap-around, you need a lookup of the form texture[(x MOD width) + (y MOD height) * width]. But with 2^n dimensions you can avoid the two expensive modulo operations, which are practically divisions, by using binary AND operations instead: texture[(x AND 255) + ((y AND 255) << 8)]. So with this optimization we go from three multiplications/divisions per pixel to zero. :)
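A quick Python sketch (mine, not from the thread) showing that the shift-and-mask lookup computes exactly the same flat index as the multiply-and-modulo version for a 256-wide texture:

```python
WIDTH = 256  # power of two, so WIDTH - 1 == 255 works as a wrap mask

def index_slow(x, y):
    """General wrap-around lookup: two modulos and a multiply."""
    return (x % WIDTH) + (y % WIDTH) * WIDTH

def index_fast(x, y):
    """Power-of-two trick: AND replaces modulo, shift replaces multiply."""
    return (x & (WIDTH - 1)) + ((y & (WIDTH - 1)) << 8)

# Both functions agree for any coordinates, including out-of-range ones:
print(index_slow(300, 5), index_fast(300, 5))  # 1324 1324
```

On a modern CPU or GPU the difference is small, but on the early rasterizers this post describes, dropping three multiplications/divisions per pixel was a big deal.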

So by imposing this simple (and at times annoying) limit you get a far more efficient textured-polygon rasterizer. On mobile devices this would probably mean lower power usage as well as higher frame rates.


How is that a third theory? :) I was already talking about memory alignment; I just didn't go into much detail. And both of us have mentioned mipmaps. But your explanation is nice!


OpenGL has supported non-power-of-two textures for some time now: texture rectangles and more recently ARB_texture_non_power_of_two

It is less efficient, for all the reasons previously discussed here, but not drastically so; modern hardware can be made to handle the tradeoff well. I imagine the key reason behind this decision is not efficiency but rather the desire to support more devices and platforms. Specifically, versions 1.0 and 1.1 of OpenGL ES (OpenGL for embedded systems, i.e. phones) do NOT support NPO2 textures. Most newer phones do target OpenGL ES 2, which allows them, but there are still a lot of older phones out there that you can reach by making this sacrifice (the iPhone 3G / iPod Touch 2G and earlier are OpenGL ES 1.1 only, and there are still lots of those out there ... and there are probably even more Android phones in this situation). And hey, you do gain some efficiency that way, which is super important on portable devices!

I'm not as clear on why they would need to be square. Maybe this is just for simplicity and not efficiency. It seems to me that any power of two will pack well as a mip-map and will avoid memory alignment issues and fractional divisions when halving the dimensions. Width and height don't need the same power of two to achieve this. Perhaps this is a filtering thing ... hmmmm.

SethB

Two theories so far? I have a third one that I think makes the most sense, since it predates mipmaps: a resolution of 2^n makes texture-map lookups very fast.

It's probably for both of those reasons that textures with equal power-of-two dimensions are still very prevalent and those are the de facto standard requirements for texture dimensions in video games.

It makes mipmap generation simple and efficient, and it makes texturemap lookup very fast.

OpenGL has supported non-power-of-two textures for some time now: texture rectangles and more recently ARB_texture_non_power_of_two

The description of that first one says "Mipmap filtering is not permitted."

It is permitted for the second one, but I'm sure it introduces some processing overhead, and no matter how small that overhead might be, every bit of performance matters when you're dealing with high-quality graphics in modern video games. I don't see how that additional overhead could be worth the ability to use non-power-of-two textures; it's not like you gain anything, visually or otherwise, from your textures not having equal power-of-two dimensions.


Well, one can even rig the editor to export all textures with power-of-two sizes. The only issues that might cause would be for points described as percentages of width/height outside the editor.

OpenGL has supported non-power-of-two textures for some time now: texture rectangles and more recently ARB_texture_non_power_of_two

The description of that first one says "Mipmap filtering is not permitted."

It is permitted for the second one, but I'm sure it introduces some processing overhead, and no matter how small that overhead might be, every bit of performance matters when you're dealing with high-quality graphics in modern video games. I don't see how that additional overhead could be worth the ability to use non-power-of-two textures; it's not like you gain anything, visually or otherwise, from your textures not having equal power-of-two dimensions.

Well, I could see it as a memory-versus-speed tradeoff. Your powers of two are 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096 ... and that's usually where it stops (2048 is the max on iPhone). Reds is already at the max with 2048. There's a huge difference between 1024 and 2048 (about 12 MB uncompressed at 32 bits per pixel, not accounting for mipmaps). You might often be forced to jump up to the next power of two just to avoid this problem, hence more texture memory used to get a gain in texturing speed. I'm not an expert in any of this, but I wonder how important it really is on a desktop or laptop system. On a phone or tablet I could see the gain in texturing speed being important, but not on a conventional GPU (especially for Reds, which isn't exactly cutting-edge 3D rendering). I've still gotta believe this decision was primarily motivated by the need to support OpenGL ES platforms.

SethB

P.S. Isn't it the case that you only need to be byte-aligned to have fast memory access? That would just mean divisible by 8, not power-of-two. I understand that power of two is important for mipmapping, but byte alignment ought to be good enough for fast memory addressing.


I'm not as clear on why they would need to be square. Maybe this is just for simplicity and not efficiency. It seems to me that any power of two will pack well as a mip-map and will avoid memory alignment issues and fractional divisions when halving the dimensions. Width and height don't need the same power of two to achieve this. Perhaps this is a filtering thing ... hmmmm.

You guys totally got it figured out. :-)

To answer the question why the textures have to be square: It is because of PVR texture compression (iOS and some Android devices), which (unfortunately) requires it.


The updates we've been getting are incredible. They are worth the price of admission alone. You NEED to compile a book of these - Double Fine's Game Creation Master Class. I'd buy it, and I've already read them! So much priceless information.

To answer the question why the textures have to be square: It is because of PVR texture compression (iOS and some Android devices), which (unfortunately) requires it.

Oh boy! So you guys will have to use different compression techniques on different systems then, yes? I assume PVR is specific to PowerVR chips and that something like S3TC would be used on desktop systems. I did not even know of PVR until you mentioned it, but the quickly changing world of hardware texture compression is not something I've had to keep up with in my purely graphics-research world.

How much does the code change for this? Does Moai help abstract this difference away at all? I guess with so many modern engines now supporting both desktop and embedded OpenGL architectures they'd all have to deal with this in some way.

:-) Now we've pulled Oliver into the art update! Mwa ha ha!

P.S. Isn't it the case that you only need to be byte-aligned to have fast memory access? That would just mean divisible by 8, not power-of-two. I understand that power of two is important for mipmapping, but byte alignment ought to be good enough for fast memory addressing.

Every power of two >= 8 is divisible by 8. :|

P.S. Oliver is a badass.


How was a value of 8 pixels chosen for the border? Does this mean you only use the first few levels of the mipmap so that artifacts don't appear?

How was a value of 8 pixels chosen for the border? Does this mean you only use the first few levels of the mipmap so that artifacts don't appear?

Whenever the texture is scaled down by a factor of 2 horizontally and vertically (i.e. "down one mipmap level"), every distance between any two points in the texture is halved as well (pretty obvious, I think).

If they keep a distance of at least eight pixels between any two shapes in their textures, this ensures that three mipmap levels can be created without different shapes overlapping in the scaled-down texture:

8 px (base texture) -> 4 px (1st mipmap) -> 2 px (2nd mipmap) -> 1 px (3rd mipmap) -> 0.5 px (4th mipmap: the distance between two shapes is now less than a pixel, i.e. the shapes overlap)

If you have multiple shapes in a single texture, no matter how big of a distance you choose to keep between different shapes, they're going to overlap at some mipmap level. This is pretty obvious:

Suppose the dimensions of your texture are (2^n)×(2^n). That means that you will have log_2(2^n) = n mipmap levels. Level n mipmap reduces your texture to dimensions 1×1, i.e. a single pixel. This means that the distance of 2^n gets reduced exactly to 1. But in order for two or more shapes to fit into a (2^n)×(2^n) texture, the distance between them has to be less than 2^n. Since the distance of 2^n gets reduced to 1 in the level n mipmap, this means that any distance less than 2^n will get reduced to less than 1 in the level n mipmap (and possibly even a lower level mipmap), i.e. you will get shape overlap.

This means that not only will some pairs of shapes overlap at level n mipmap, but at that level ALL the shapes will overlap with each other (again, pretty obvious - because they're all crammed into a single pixel). But unless the distance you keep between different shapes is half of the dimension of the texture or higher, you will get pairs of shapes overlapping even in lower level mipmaps.
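The halving argument above boils down to one line of arithmetic: a gap of d base-texture pixels survives floor(log2(d)) halvings before dropping below one pixel. A small sketch (mine, not from the thread):

```python
import math

def levels_before_overlap(distance):
    """How many mipmap levels keep a gap of `distance` base-texture
    pixels at least one pixel wide (i.e. shapes not yet overlapping)."""
    return math.floor(math.log2(distance))

print(levels_before_overlap(8))  # 3: an 8px border survives 8 -> 4 -> 2 -> 1
```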

Every power of two >= 8 is divisible by 8. :|

Well, yes, but there are lots of multiples of eight that are not powers of two. 1024/8 = 128 and 2048/8 = 256, so there ought to be about 127 multiples of 8 between 1024 and 2048. That's a lot more choices when balancing the trade-off between texture size and texture lookup speed.

Ultimately, if you don't quite land on a multiple of 8, the worst you'd have to do is add 7 more pixels to fix the problem. That's way better than potentially having to add 1023. Of course it depends on how big your texture is, but I assume we're in an age where things aren't getting much smaller than 64×64, especially when we lay out our textures in sprite sheets like Reds does, or as skins for 3D models. The next power of two is going to require up to 63 wasted pixels there, still a far cry from 7.
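The trade-off being described can be put in numbers; a sketch (my illustration, with a made-up 65-pixel example) comparing the two padding rules:

```python
def pad_to_multiple_of_8(n):
    """Smallest multiple of 8 that is >= n (byte-alignment rule)."""
    return (n + 7) // 8 * 8

def pad_to_power_of_two(n):
    """Smallest power of two that is >= n (power-of-two rule)."""
    p = 1
    while p < n:
        p *= 2
    return p

n = 65
print(pad_to_multiple_of_8(n) - n, pad_to_power_of_two(n) - n)  # 7 63
```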

Memory is at a premium on portable devices just as much as processing power but with displays increasing pixel density to 'retina' resolutions the demand for higher-fidelity textures increases. Little trade-offs like this can't be ignored. I hope the engines like Unity and Moai take care of these gory details so programmers and art directors don't have to! When they target both portable and desktop devices with a single engine they position themselves to abstract away these concerns or provide some type of higher level interface to help you come to grips with these tradeoffs and to really decide which ones matter.

I don't think there's an answer to these questions without building some tests and benchmarking GPU performance. How much slower are arbitrary dimension textures? How much do we gain with byte alignment? With power-of-two restrictions? How much does this change between portable and desktop? We could theorize all day about this but a poorly conceived driver or hardware implementation will break all of our assumptions and we're stuck with sound theory that doesn't fit the practice. This is the hell of universal game engine design (or it could be a blessing if the engine gets it right)! Right now, people like Lee and Oliver still have to think about these things and put constraints on their artists and assets to make sure they squeeze out performance. If it were done right, I would think they wouldn't be forced to worry about such things but I'm an idealist!

I should shut up. I have no industry experience, I'm a researcher and an academic and I can only hope to understand this stuff so I can pass on the wisdom of Lee and Oliver to my students. The DFA backer forums!! It's the next best thing to industry experience! :-)

Seth B


I just wanted to take a moment to post and say "thanks." When I initially made the decision to back $100 (which was, at the time, a bit of a financial stretch for me), I felt uncertain about the decision (hey, as much as I love adventure games, I'm also broke!) - however, you guys have already given me WELL OVER $100 worth of entertainment when I consider the excellent forum posts (such as this one) and the terrific work from 2 Player Productions. I wish I'd had the cash to contribute a lot more than I did!

Thanks for all the hard work you folks at Double Fine (and 2 Player Productions) are doing. I've never felt so involved with the development of a game, and as a software developer and general technology enthusiast it's been fascinating for me learning about all the various aspects of game design.

The updates we've been getting are incredible. They are worth the price of admission alone. You NEED to compile a book of these - Double Fine's Game Creation Master Class. I'd buy it, and I've already read them! So much priceless information.
I think they'll all be in the art book. Less new stuff to write!

The updates we've been getting are incredible. They are worth the price of admission alone. You NEED to compile a book of these - Double Fine's Game Creation Master Class. I'd buy it, and I've already read them! So much priceless information.

Agreed.

The previous update that explained that the 3D models were not the final arbiter of geometry, but that the alpha channel of the texture was going to determine the outline blew my mind.

I hope Double Fine aren't giving away too much of their secret sauce, but I for one am loving it.

The previous update that explained that the 3D models were not the final arbiter of geometry, but that the alpha channel of the texture was going to determine the outline blew my mind.

That is used in 3D graphics as well, more frequently than you'd think.

All textures are rectangles (like any digital image), and most frequently they're squares, but you notice how foliage such as leaves on trees isn't rectangular, and definitely not square, but actually shaped like ... well ... leaves? The actual texture of each leaf is a square or rectangle, but everything except the leaf in it is completely transparent, so even though the engine renders a rectangular texture where the leaf is, you only see the leaf, because the whole rest of the rectangle is transparent.

I... feel like I'm inadvertently making myself sound like a smartass. :S


So nobody can explain how the textures land in the right spot? I mean, look at it: why doesn't the program mess things up?

Does every character come with a file telling the program when to use which file, plus a description of where the body etc. can be found? Like: place the body at x and y, and take pixels x to y of texture file c. If yes, the pixels would have to be counted, and I can see that solution being rather terrible for sophisticated 3D models.

If I'm the only one wondering, why can nobody give an explanation? :(

Does every character come with a file telling the program when to use which file, plus a description of where the body etc. can be found? Like: place the body at x and y, and take pixels x to y of texture file c. If yes, the pixels would have to be counted, and I can see that solution being rather terrible for sophisticated 3D models.

Well, yes, pretty much. They're called texture coordinates, and they are part of the geometry of the object. They tell the rendering engine which pixels in the texture go with which vertices in the model. They are only defined at each vertex of the model (it could be even simpler for the 2D models Reds is using), and the space in between is filled by smoothly interpolating from one to the next. Now, texture coordinates are usually normalized to go from 0.0 to 1.0. They don't address pixels directly; the hardware does that by remapping the width and height of the texture to also go from 0.0 to 1.0 ... and it does it smartly. It picks not just one pixel but several from around the desired point (which probably didn't hit one pixel exactly), and even from different mipmap levels, and blends them all together to get as close as possible to the continuous image that the discrete pixels of the texture are supposed to represent (assuming you have all those features turned on).
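A minimal sketch of that normalized-coordinate lookup, reduced to nearest-neighbour sampling (real hardware additionally blends several texels and mipmap levels, as described above; this shows only the coordinate remapping, and the function name is my own):

```python
def sample_nearest(texture, u, v):
    """Map normalized (u, v) in [0, 1] to the nearest texel.
    `texture` is a list of rows of pixel values."""
    height, width = len(texture), len(texture[0])
    x = min(int(u * width), width - 1)    # clamp so u == 1.0 stays in range
    y = min(int(v * height), height - 1)
    return texture[y][x]

tex = [[0, 1],
       [2, 3]]
print(sample_nearest(tex, 0.75, 0.75))  # 3 (bottom-right texel)
```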

This might sound like a brute-force solution (I've always thought it was), but it's the bread and butter of modern hardware graphics! It tackles the problem of realism by letting us precompute the solution offline, where it can take as long as it wants, and then substitute memory for all the time spent solving it: store the result and render it carefully, and as long as memory is cheap and continues to be so, this works! Texturing has been such a revolution in graphics because memory has become so cheap and so fast. And many clever people have figured out how to turn other problems (like shadows, lighting, reflections, even physical light simulations) into texturing problems. We've had a similar revolution recently with shaders, but they don't achieve spatial variance (variation in appearance across a surface) as well as plain old textures (or at least not without the help of well-designed textures).

Texturing has been around a long time. Research solved a lot of these problems over a decade ago, and their use in game technology is so ubiquitous that I'm sure the necessary tools and conventions are well established to keep the actual pixel resolution of the texture decoupled from the geometry. This is precisely how people can supply high-resolution textures for old games and have them work unchanged (somebody's done that, right? I seem to remember someone doing it for Doom or something).

With texturing, the devil's in the details and you can get it horribly wrong, but good tools, a well-designed engine, and a seasoned team that knows the ultimate impact of these decisions will make short work of those details!

SethB


Well, I only understood about half of it, but since you're saying my description is pretty close to reality, I'm rather shocked that it's that "simple"! Shocked, I say! Poof, the magic is gone, replaced by science. SCIENCE! ... and, well, thank you.

With people saying that the price of the game with the documentary is a steal, I must say it is, and not only because of the wonderful people of 2PP and DF, but also because of all the helpful people who have gathered here! I dread the day the project is wrapped up and finished; until then, I'll enjoy the journey :)!


Actually, I think many of the questions here are already answered by Oliver in "Programming Update #4: Animating the Jack", and much more...

Btw, has anyone played around with skinned meshes / skeletal animation in Moai?

The updates we've been getting are incredible. They are worth the price of admission alone. You NEED to compile a book of these - Double Fine's Game Creation Master Class. I'd buy it, and I've already read them! So much priceless information.
I think they'll all be in the art book. Less new stuff to write!

I hope this does get made. I would love to see an art/compilation book of the updates (not just the art updates). If it does get made, it would be good if it included some of the comments and explanations, for example the reason the textures need to have power-of-two dimensions.

Is there going to be a video of the animated model, maybe showing the wireframe and then the texture applied? I seem to remember there was a similar demo for the woodcutter's head.


Great explanation, SethB!

And I know where you're coming from, Kiwamu: just got to enjoy this Adventure while it lasts!

Btw, the artwork of the Preener Dad is outstanding! It's funny, unique, curious, and definitely in Bagel's style! If that's a hint of what the Reds universe will be like, I am totally stoked!


I keep misreading Preener as Paneer. I like to think of a man made of Indian cheese, like the tofu heads in Monkey 3.

@SethB: maybe you could have used the magic term "UV(W) mapping" :).

Maybe if we could come up with a way to pronounce 'uvw' as a single word, give it a Harry Potter sort of twist. You wave your wand, say 'uvwamus', and textures fly across the screen! Of course, the 7th-years have to learn how to do it with anti-aliasing.

;-)

Seth B

@SethB: maybe you could have used the magic term "UV(W) mapping" :).

Maybe if we could come up with a way to pronounce 'uvw' as a single word, give it a Harry Potter sort of twist. You wave your wand, say 'uvwamus', and textures fly across the screen! Of course, the 7th-years have to learn how to do it with anti-aliasing.

;-)

Seth B

:snake:


I truly love seeing the "behind the scenes" updates on just how this game is taking shape, form, color, design, and animation. Thank you so much for providing this very interesting background. When we finally all get to play the game, it will be all that more memorable, having seen much of the inner workings along the way - simply fascinating, especially the gorgeous artwork!
