DF Oliver

Programming Update 8: The Data Pipeline


Hi guys!

I want to apologize for the extended radio silence from the programming team. We have been super busy recently, which unfortunately left no time for a forum post. Getting the game ready for the HoF meant a lot of bug fixing and polishing. We also worked really hard to get Broken Age running on all of the platforms we are planning to release it on. Finally, the announcement of the name and the trailer created quite a bit of work for us. Thankfully the team has gained some more awesome programmers, but you will hear more about that later.

Today I want to tell you guys about an EXTREMELY important part of game development that is very often overlooked, because it's not as glamorous as gameplay programming or writing an engine: the data pipeline! In a nutshell, the data pipeline is responsible for preparing the assets created by our awesome artists so that they run optimally on the different hardware platforms. It's actually quite an interesting problem, and hopefully I can show you that there is way more going on than you might think.

01_Overview.png

But why do you even need a step between the art creation tools and the game? While it is true that one could load the raw data files directly into the game, that is usually not desirable, mostly for memory and performance reasons. Let's look at image assets as an example, because they are the most platform-specific data type for us right now.

02_Photoshop.png

As Lee described in a previous forum post, the artists use Adobe Photoshop to paint each scene as a multilayered image, which then gets saved as a PSD file (Photoshop's native file format). In order to see the scene in the game they have to export the relevant image data. This is done by running an export script that writes out the individual layers as PNG files with associated clip-masks. Let's ignore the clip-masks for now so that we can concentrate on the image data. This export script is the first step of the data pipeline for images, and its main benefit is that it automates a lot of (tedious) work that the artists would otherwise need to do manually. It is a general rule of thumb that a manual workflow full of monotonous steps is a huge source of errors, so it is almost always a good idea to automate the work as much as possible.

03_Photoshop_Export.png
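
For the curious, here is a minimal sketch of what such a layer-export step can look like. It uses the open-source psd-tools Python package rather than our actual in-Photoshop export script, the file and folder names are made up, and it ignores the clip-masks just like the text above.

    from pathlib import Path
    from psd_tools import PSDImage

    def export_layers(psd_path, out_dir):
        """Write every visible top-level layer of a PSD as its own PNG file."""
        out = Path(out_dir)
        out.mkdir(parents=True, exist_ok=True)
        psd = PSDImage.open(psd_path)
        for index, layer in enumerate(psd):
            if not layer.is_visible():
                continue
            image = layer.composite()              # flatten this layer to a PIL image
            image.save(out / f"{index:02d}_{layer.name}.png")

    export_layers("scene.psd", "exported/scene")   # hypothetical file/folder names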

Now that we have converted the scene from a high-level asset into individual layer PNG files (and clip-masks), we can start getting the separate images ready for the specific GPUs they will be used on. The second step of the data pipeline is actually quite complex and consists of multiple smaller steps:

1) Mip-map generation

2) Mip-map sharpening

3) Image chunking (only for scene layer images)

4) Texture compression

Mip-map generation takes the image data from the individual PNG files and generates successively smaller versions of it. The result is called an image pyramid or mip-map chain. This is important because the graphics chip automatically uses these smaller versions to efficiently draw distant or small objects without visual noise (caused by aliasing).

04_Mip_Map_Creation.png
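
To make the idea a bit more concrete, here is a tiny Pillow-based sketch of building a mip chain. Our real munging code is more involved (and, as a reply further down points out, the filtering should ideally happen in linear light), so treat this purely as an illustration.

    from PIL import Image

    def build_mip_chain(path):
        """Return the image plus successively halved versions, down to 1 pixel."""
        mips = [Image.open(path).convert("RGBA")]
        while min(mips[-1].size) > 1:
            w, h = mips[-1].size
            mips.append(mips[-1].resize((max(1, w // 2), max(1, h // 2)), Image.LANCZOS))
        return mips

    for level, mip in enumerate(build_mip_chain("layer_00.png")):   # example file name
        mip.save(f"layer_00_mip{level}.png")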

Reducing the size of an image basically means that pixel colors are averaged (or blurred). Unfortunately this very often reduces the contrast in the smaller images. To counter this problem, the second sub-step increases the local contrast of the mip-maps using an image sharpening filter.

05_Mip_Sharpening.png
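
And a correspondingly small sketch of the sharpening sub-step, using a stock unsharp-mask filter; the radius/percent/threshold values are placeholders, not our production settings.

    from PIL import ImageFilter

    def sharpen_mip(mip):
        """Boost the local contrast that the downsampling blurred away."""
        return mip.filter(ImageFilter.UnsharpMask(radius=1.0, percent=60, threshold=2))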

The third sub-step is called chunking. Image chunking deals with the fact that GPUs prefer textures that have power-of-two resolutions. In fact, some graphics chips require the textures to also be square. It is impractical, however, for our artists to draw the scenes with these constraints in mind, so this pipeline sub-step splits the large, irregularly shaped images into smaller (square) textures with power-of-two resolutions. The appendix at the end of this forum post describes in greater detail why GPUs prefer textures with these constraints.

06_Chunking.png
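
A rough sketch of the chunking idea, assuming a fixed 512-pixel chunk size and transparent padding for the edge tiles (the real chunker is smarter about chunk sizes and borders):

    from PIL import Image

    CHUNK = 512   # power-of-two chunk edge length (assumed for this sketch)

    def chunk_image(path):
        """Split an arbitrarily sized layer into square power-of-two tiles."""
        src = Image.open(path).convert("RGBA")
        w, h = src.size
        chunks = []
        for y in range(0, h, CHUNK):
            for x in range(0, w, CHUNK):
                tile = Image.new("RGBA", (CHUNK, CHUNK), (0, 0, 0, 0))
                # copy the covered region; edge tiles are padded with transparency
                tile.paste(src.crop((x, y, min(x + CHUNK, w), min(y + CHUNK, h))), (0, 0))
                chunks.append(((x, y), tile))
        return chunks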

The fourth sub-step converts all the image chunks into hardware-specific data formats. In order to support all the hardware platforms we are committed to, it is necessary to convert the chunks into 4 (!) different texture formats: DXT (Windows, OSX, Linux, Android), PVR (iOS, Android), ATC (Android) and ETC1 (Android). These formats have different requirements and characteristics, which actually had quite a big impact on the engine. If you are interested in why we are using these texture formats rather than loading PNG images directly into the game, check out the appendix at the end of this forum post. Be warned though, it is quite technical. :)
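
Expressed as data, the platform-to-format mapping above looks roughly like this (the table itself comes straight from the list above; how a build actually consumes it is up to the munging code):

    # Which compressed texture formats get built for which platform.
    TEXTURE_FORMATS = {
        "windows": ["DXT"],
        "osx":     ["DXT"],
        "linux":   ["DXT"],
        "ios":     ["PVR"],
        "android": ["DXT", "PVR", "ATC", "ETC1"],   # depends on the device's GPU
    }

    def formats_for(platform):
        return TEXTURE_FORMATS[platform.lower()]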

At this point the images are basically ready to be used in the game. Depending on the type of asset or the target platform there might be other pipeline steps though (e.g. file compression using gzip).

Here at Double Fine we call the second pipeline step "munging". Other names for this process are "data cooking", "content processing" or "data builds". Here is a list of some of the asset types we use with their associated data pipelines (a toy sketch of a driver for these steps follows the list):

Images (character textures, scene layers, other textures)

1. Export from Photoshop

2. Munging: a. Mip-map generation, b. Mip-map sharpening, c. Chunking (scene layers only), d. Texture compression

3. File compression (iOS PVR files only)

Character models

1. Export from Maya: a. Extract hierarchy of joint-transforms, b. Extract meshes and group them by materials, c. Calculate normalized skin-weights for all vertices

2. Munging: Count number of active joints per subset

Animations

1. Export from Maya: a. Extract joint-transformation for each frame, b. Extract subset visibility for each frame

2. Munging: a. Strip constant animation tracks in rest position, b. Strip delta-trans transformations (for non-cutscene animations) c. Remove redundant key-frames

Shaders

1. Munging: a. Resolve file includes, b. Generate shader permutations, c. Optimize shaders for target GPU (e.g. standard OpenGL vs. OpenGL ES 2.0), d. Identify and remove redundant shaders

Sequences (cutscenes, visual effects, animation events)

1. Export sequence from sequence editor

2. Munging: a. Group commands into sections, b. Sort commands based on execution priority, c. Remove redundant data (e.g. fields with default values)
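
Here is the toy driver sketch mentioned above; the step names are just labels for the bullet points, not our actual tool or function names.

    # Route each asset type through its pipeline steps. Purely illustrative.
    PIPELINES = {
        "image":     ["export_photoshop", "mipmaps", "sharpen", "chunk", "compress"],  # plus gzip for iOS PVR files
        "character": ["export_maya", "count_active_joints"],
        "animation": ["export_maya", "strip_rest_tracks", "strip_delta_trans", "dedupe_keys"],
        "shader":    ["resolve_includes", "permutations", "optimize_for_gpu", "dedupe"],
        "sequence":  ["export_sequence", "group_sections", "sort_by_priority", "strip_defaults"],
    }

    def munge(asset_type, asset_path):
        for step in PIPELINES[asset_type]:
            print(f"{asset_path}: running {step}")   # real code would invoke the step here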

This concludes this forum post about the pipelines we use to get the data ready for the many different platforms the game will eventually run on. I hope I was able to show you guys that there is actually a lot of work that has to be done to an asset before it shows up in the game. It is a very important part of game development, because the representation of the data very often has a profound impact on the memory footprint and run-time performance of the game, so getting it into the optimal format is super critical.

As usual please feel free to ask questions. Also make sure to check out the appendix for the gory technical details of efficient texture representations.

TECHNICAL APPENDIX: Optimal texture representation

This appendix describes why games generally don't use standard image formats like PNG, JPG or TGA to store textures. I will also talk about why GPUs tend to prefer square images with power-of-two (POT) resolutions (2ⁿ => 1, 2, 4, 8, 16, 32, 64, 128, 256, 512 …).

It all comes down to data-access speed. While the execution speed of GPUs (and CPUs) has steadily increased, both by fitting more transistors onto the chip and by adding multiple execution cores, the latency of reading and writing data in memory hasn't improved to the same degree. At the same time, modern games require more and more data to achieve high visual fidelity. Higher screen resolutions require larger textures in order to keep the texel-to-pixel ratio constant. That leaves us in a bad place: we have super fast GPU cores that need to access ever more data very quickly in order to render a frame efficiently.

Thankfully there are several things that can be done to improve the situation. As I mentioned above, textures are very often represented as an image pyramid rather than a single image. The smaller versions of the image are called mip-maps, and each one requires only a quarter of the memory of its parent image. The GPU can leverage the smaller memory footprint of the lower mip-maps for surfaces with smaller screen-space coverage (e.g. objects that are far away or viewed at a grazing angle).
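
As a quick back-of-the-envelope check (a sketch, not our code): because each level is a quarter the size of its parent, the whole chain costs only about a third more memory than the base image.

    def mip_chain_bytes(width, height, bytes_per_pixel=4):
        """Total memory of an image plus all of its mip levels."""
        total, w, h = 0, width, height
        while True:
            total += w * h * bytes_per_pixel
            if w == 1 and h == 1:
                break
            w, h = max(1, w // 2), max(1, h // 2)
        return total

    # 1024 x 1024 RGBA: 4 MB for the base level alone, ~5.6 MB with the full chain (~4/3 of the base).
    print(mip_chain_bytes(1024, 1024) / 2**20)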

And this is where the square and POT requirements come in handy. Textures that abide by these rules simplify a lot of the computations required to look up texture pixels (called texels) at different mip-map levels. That means the GPU can find out very quickly what color a pixel should be on screen. There are additional benefits for POT textures too, like simplified coordinate wrapping and clamping.

To speed up data access even further, the GPU (just like the CPU) relies on several levels of memory caches. The level-1 cache is the smallest level in the memory hierarchy, but it is the fastest to access. If the processor can't find the requested data in that cache, it searches the next level, which is slightly slower. If the data can't be found in any cache, a slow non-cached memory access is issued. Rather than just retrieving the requested piece of information, additional nearby values are fetched and copied into the caches. This is done in order to benefit from data locality.

Locality exploits the observation that when a value is used in a computation, data stored near it will very often be needed by the following operations. The important thing is that the cache implementation is very low-level and generally doesn't know about the type of data (e.g. vertices, textures), so the memory controller simply copies a linear section of memory into the cache around the address that was accessed.

07_Memory_Hierarchy.png

Unfortunately, images are rarely accessed in a linear fashion. For example, texture filtering combines multiple adjacent texels into one resulting color. Also, graphics chips usually draw 2x2 pixel blocks at a time in order to leverage the parallel nature of the rendering computations. These observations finally lead us to texture compression, because a major goal of this technique is to lay out the texture data in the most efficient way. Rather than expressing an image as a linear array of pixels, the data is converted into a block-based (or swizzled) representation. This image shows the difference between the two data layouts:

08_Texture_Memory_Layouts.png
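
For illustration only, here is the classic Z-order (Morton) pattern that most swizzled layouts are variations of; the exact layout differs per GPU, but the idea is the same: texels that are neighbours in 2D end up close together in memory.

    def morton_index(x, y, bits=16):
        """Interleave the bits of x and y to get a Z-order (swizzled) memory index."""
        index = 0
        for i in range(bits):
            index |= ((x >> i) & 1) << (2 * i)        # x bits go to the even positions
            index |= ((y >> i) & 1) << (2 * i + 1)    # y bits go to the odd positions
        return index

    # Vertical neighbours stay close: texels (2, 2) and (2, 3) map to indices 12 and 14,
    # whereas in a row-major 1024-wide image they would be a full 1024 entries apart.
    print(morton_index(2, 2), morton_index(2, 3))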

In addition to a cache-friendly data layout, texture formats like DXT also compress the pixel data by exploiting the fact that the color of neighboring pixels very often doesn't change very much. That means the differences between adjacent texels can be expressed with fewer bits, which reduces the memory required to represent an image. So the GPU has to deal with less data, and that data is formatted in an optimal way! Hooray!
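
A tiny footprint comparison makes the win concrete (this sketch assumes DXT1's 8 bytes per 4x4 block versus 4 bytes per texel for raw RGBA; the same numbers come up again in one of the replies below):

    def rgba_bytes(width, height):
        return width * height * 4                             # 4 bytes per texel

    def dxt1_bytes(width, height):
        return ((width + 3) // 4) * ((height + 3) // 4) * 8   # 8 bytes per 4x4 block

    print(rgba_bytes(1024, 1024) / 2**20)   # 4.0 MB raw
    print(dxt1_bytes(1024, 1024) / 2**20)   # 0.5 MB as DXT1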

But that still doesn’t explain why we don’t use standard image formats like PNG directly though. Well they simply aren’t designed to represent images in the optimal fashion described above. Usually these formats don’t support multiple surfaces necessary for mip-maps and the image is expressed as an linear array of raw RGBA values. In theory one could load an image from a PNG file and compress it before sending it to graphics memory but the compression requires a lot of CPU and memory overhead. Also this transformation really should only be done once rather than every time an image is loaded.

This leads us back to the data pipeline which is the main topic of this forum post. One of the most important steps of image munging is in fact texture compression, which will convert the raw image data into the data representation preferred by the different GPUs.

I hope you enjoyed this appendix and that I was able to convince you that compressed textures are great and should almost always be used instead of raw images!


That was a super interesting and great explanation of something I knew nothing about. I had no idea how complex the process is to go from art to just getting something to display on screen.


Why are there more texture formats necessary for Android? Is this a result of the different potential GPUs an Android device might have? Or is it because each format is better for a different type of texture? Thanks, fascinating as always.


Wonderful post. Thank you for all the effort to organize/present this update. Really interesting!

Is this a result of the different potential GPUs an Android device might have?

That's exactly it, in a nutshell. Almost every GPU manufacturer has their own preferred format that will yield the best performance with their particular GPU architecture, so this needs to be accounted for if the game is to run well across a broad range of devices.


Loss of contrast: Make sure you do the mip filtering in linear light space! That was the single most important quality improvement I ever did to mipmapping code on an engine, even more important than fancy filter kernels.

1) Gamma should be in the PNG header, but I fall back to 2.3 or 1.9 if it is missing or reported as 1.0, depending on whether the original artwork was previewed on a TV or a computer monitor.

2) Do the transform from gamma space into linear space, make sure you use float values to avoid losing precision in dark areas.

3) Filter down to the desired mip level. I actually prefer a box filter, because you don't get ringing artefacts. But let artists decide. An important test is that you don't gain or lose energy with your filter, so a pure white or pure black texture should not change.

4) Do the transform from linear to gamma space and round back to integer. You could just use the inverse of the value you used before, or you could select whatever your target platform uses.

Try this, especially without your contrast filter, and compare the results!
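
For reference, that recipe boils down to something like the following minimal sketch (assuming a plain power-law gamma and a simple 2x2 box filter, with NumPy and Pillow standing in for real pipeline code):

    import numpy as np
    from PIL import Image

    GAMMA = 2.2   # assumed; as suggested above, read it from the PNG header when present

    def halve_in_linear_space(path):
        srgb = np.asarray(Image.open(path).convert("RGB"), dtype=np.float64) / 255.0
        linear = srgb ** GAMMA                              # step 2: gamma -> linear, in floats
        h, w, _ = linear.shape
        h, w = h - h % 2, w - w % 2                         # crop odd edges for the 2x2 box filter
        box = (linear[0:h:2, 0:w:2] + linear[1:h:2, 0:w:2] +
               linear[0:h:2, 1:w:2] + linear[1:h:2, 1:w:2]) / 4.0   # step 3: box filter
        out = np.clip(box ** (1.0 / GAMMA), 0.0, 1.0)       # step 4: linear -> gamma
        return Image.fromarray((out * 255.0 + 0.5).astype(np.uint8))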


I love these technical posts, thanks a lot for them.

One silly question: in my projects (which are ridiculously small compared to Broken Age), I used mipmaps only for textures in fully 3D interfaces, to avoid the visual noise produced by bilinear/nearest-neighbour filtering. For 2D, on the other hand, I usually used just pure bilinear or even nearest-neighbour filtering, to preserve a crisp look.

You write in the post:

This is important because the graphics chip automatically uses these smaller versions to efficiently draw distant or small objects without visual noise

That is why it surprises me to read about mip maps in relation to huge displayed screen elements (not distant or small ones). Even the first mipmap level is 2 times smaller, which would make the background look noticeably "worse", even when used in a smooth camera "zoom".

I hope my question is not too rude; the game looks awesome and I am 100% sure you know what you are doing. I am just curious, as I have observed that picking the correct filtering can make a difference.


I love this, as I am in a graphics programming course and a game engine course right now in college. I do have a question: what OpenGL packages do you use, if you can tell us? For those who don't know what I am talking about (or to help clarify my question): OpenGL does not have a single SDK download that sets you up the way DirectX does. There are multiple different packages that are used to get the framework. For example, for (said graphics class) I use GLEW, GLM and GLFW. GLEW is a group of extensions for OpenGL, GLM is a math library, and GLFW is the OpenGL framework. There are tons of these that all do about the same thing, just with different ways of accessing them, I think. (Please let me know if I am wrong, as I do like learning about this stuff.)

Also, a different question: is everything at Double Fine OpenGL (I understand it for this project, since you can make "one" engine for all of the supported devices), or is there some Direct3D development happening as well?

Thanks Oliver and please keep up with these awesome updates. I love this technical stuff.

Loss of contrast: Make sure you do the mip filtering in linear light space! That was the single most important quality improvement I ever did to mipmapping code on an engine, even more important than fancy filter kernels.

1) Gamma should be in the PNG header, but I fall back to 2.3 or 1.9 if it is missing or reported as 1.0, depending on whether the original artwork was previewed on a TV or a computer monitor.

2) Do the transform from gamma space into linear space, make sure you use float values to avoid losing precision in dark areas.

3) Filter down to the desired mip level. I actually prefer a box filter, because you don't get ringing artefacts. But let artists decide. An important test is that you don't gain or lose energy with your filter, so a pure white or pure black texture should not change.

4) Do the transform from linear to gamma space and round back to integer. You could just use the inverse of the value you used before, or you could select whatever your target platform uses.

Try this, especially without your contrast filter, and compare the results!

That's a good point. I have to check how we are doing on being gamma correct. Thanks for reminding me. :-)


That is why it surprises me to read about mip maps in relation to huge displayed screen elements (not distant or small ones). Even the first mipmap level is 2 times smaller, which would make the background look noticeably "worse", even when used in a smooth camera "zoom".

Well, I guess the other point, which I should have emphasized a bit more, is that the game will have to run at many different resolutions. Old iOS devices have a resolution of 480 x 320; the new iPad, on the other hand, supports 2048 x 1536 pixels. And I haven't even mentioned desktop computers at this point, which let you run the game at every possible resolution.

Mips also help you in this case, because the goal of mip filtering is to keep the texel-to-pixel ratio as close to 1 as possible. In other words, even for 2D games mip maps are hugely beneficial.
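
As a rough illustration (not our actual selection logic), the "right" mip level for a layer is basically the base-2 log of that texel-to-pixel ratio:

    import math

    def mip_level(texture_width, pixels_covered_on_screen):
        """Pick the mip whose texel-to-pixel ratio is closest to 1."""
        ratio = texture_width / max(1, pixels_covered_on_screen)
        return max(0, round(math.log2(ratio)))

    print(mip_level(2048, 480))    # old 480-wide iOS screen -> roughly mip level 2
    print(mip_level(2048, 2048))   # Retina iPad             -> mip level 0 (full resolution)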

I hope that clears things up a bit.


No wonder it takes that long between posts, this entire post looks like a significant amount of work.

Very well written and easy to understand.

I love this, as I am in a graphics programming course and a game engine course right now in college. I do have a question: what OpenGL packages do you use, if you can tell us? For those who don't know what I am talking about (or to help clarify my question): OpenGL does not have a single SDK download that sets you up the way DirectX does. There are multiple different packages that are used to get the framework. For example, for (said graphics class) I use GLEW, GLM and GLFW. GLEW is a group of extensions for OpenGL, GLM is a math library, and GLFW is the OpenGL framework. There are tons of these that all do about the same thing, just with different ways of accessing them, I think. (Please let me know if I am wrong, as I do like learning about this stuff.)

Also, a different question: is everything at Double Fine OpenGL (I understand it for this project, since you can make "one" engine for all of the supported devices), or is there some Direct3D development happening as well?

Thanks Oliver and please keep up with these awesome updates. I love this technical stuff.

The non-OpenGL-ES version used for desktop computers uses GLEW to bind the required extensions. Apart from that we aren't using any other library or framework. In fact, I feel much better if I can be as close to the metal as possible when it comes to something as performance-critical as rendering.

As mentioned in previous posts we are using (a modified version of) the Moai engine, which is open source: http://getmoai.com/

Our other internal engine, which was used for the previous games (e.g. Brütal Legend, The Cave, Once Upon a Monster), does support Direct3D (because it runs on the Xbox).

No wonder it takes that long between posts, this entire post looks like a significant amount of work.

Very well written and easy to understand.

Thanks gsm. I'm glad you like it! :-)


The non-OpenGL-ES version used for desktop computers uses GLEW to bind the required extensions. Apart from that we aren't using any other library or framework. In fact, I feel much better if I can be as close to the metal as possible when it comes to something as performance-critical as rendering.

As mentioned in previous posts we are using (a modified version of) the Moai engine, which is open source: http://getmoai.com/

Our other internal engine, which was used for the previous games (e.g. Brütal Legend, The Cave, Once Upon a Monster), does support Direct3D (because it runs on the Xbox).

Thanks Oliver for your prompt reply. I also liked your memory hierarchy bit at the end, and it has prompted me to ask another question. Does the fact that game consoles have different layouts (not the standard Registers -> L1 -> L2 -> RAM -> GPU (GPU RAM)) cause issues with the run times of different algorithms, or even with how you choose to pass things in or do calculations? I know that with the Xbox 360 the layout is not traditional, as the GPU sits before the RAM, and on a cache miss you have to go through the GPU to get to the RAM. If you don't know, that is fine; I am just curious and love that you even take the time to make the original post. Happy programming!


Thanks Oliver for your prompt reply. I also liked your memory hierarchy bit at the end, and it has prompted me to ask another question. Does the fact that game consoles have different layouts (not the standard Registers -> L1 -> L2 -> RAM -> GPU (GPU RAM)) cause issues with the run times of different algorithms, or even with how you choose to pass things in or do calculations? I know that with the Xbox 360 the layout is not traditional, as the GPU sits before the RAM, and on a cache miss you have to go through the GPU to get to the RAM. If you don't know, that is fine; I am just curious and love that you even take the time to make the original post. Happy programming!

Oh yeah, the hardware architecture of consoles is very often quite different, which means that you have to change quite a few things to get the most out of the hardware. I'm not allowed to talk about specifics, but both the PS3 and Xbox had their pros and cons in terms of video memory access.


Thanks Oliver for your prompt reply. I also liked your memory hierarchy bit at the end, and it has prompted me to ask another question. Does the fact that game consoles have different layouts (not the standard Registers -> L1 -> L2 -> RAM -> GPU (GPU RAM)) cause issues with the run times of different algorithms, or even with how you choose to pass things in or do calculations? I know that with the Xbox 360 the layout is not traditional, as the GPU sits before the RAM, and on a cache miss you have to go through the GPU to get to the RAM. If you don't know, that is fine; I am just curious and love that you even take the time to make the original post. Happy programming!

Oh yeah, the hardware architecture of consoles is very often quite different, which means that you have to change quite a few things to get the most out of the hardware. I'm not allowed to talk about specifics, but both the PS3 and Xbox had their pros and cons in terms of video memory access.

Thanks for the reply, and I totally understand, but one last one (that I am sure every programmer wants to know).

Also, since I have the feeling that you know just about as much about the PS4 as the rest of the gaming community: does the switch to x86 and more PC-gaming-like features really make you happy, or is there still a need for heavy optimizations for each console? I wish I could borrow you for a day and a half just to learn about all this stuff. Thanks again.


Thanks for the reply, and I totally understand, but one last one (that I am sure every programmer wants to know).

Also, since I have the feeling that you know just about as much about the PS4 as the rest of the gaming community: does the switch to x86 and more PC-gaming-like features really make you happy, or is there still a need for heavy optimizations for each console? I wish I could borrow you for a day and a half just to learn about all this stuff. Thanks again.

No problem, I'm glad you guys are digging the tech updates. :-)

It'll make a lot of things much easier, that's for sure. I haven't had any hands-on experience with the new consoles, but my gut tells me that there will still be quite some platform-specific work required to get the most out of the hardware. My hope is that there will be less of it, though, and that a similar architecture means it will be easier to write an engine core. We shall see...


This is great stuff! The only thing that would make these updates even more awesome would be relevant snippets of code showing how you deal with the problems you're faced with. Would that be possible in future programming updates?


I know that this has been said before, but I'd love for you to write a book on programming and game development. Your posts are so clear, and I've gone through a few books/training videos on the topic that don't even touch on some of the topics you bring up, especially multi-platform workflow. (They also have a severe lack of red bot.) Thanks again!


Really interesting stuff!

As most programmers know, the key to progress is to maintain a good balance between optimization and usefulness. Going from taking pride in the most optimized solution possible to stopping when it is good enough is something you usually learn from senior coders in the first year of your career.

I understand that this is an engine that you are making and that you want to do it right from the start. I am just a little curious how much of an impact these optimizations have these days. It is my impression that the devices you are targeting are pretty powerful, but perhaps that is just an illusion created by smartphone developers using these kinds of optimizations?

What is it in the game that consumes the most rendering time? I remember that you put in some 3D models like the windmill-thingie, but apart from that there can't be that many quads to render, right? Would the FPS be less than 100 if you simply used PNG files on the targeted device with the worst performance?

I guess on handheld devices, more optimized code leads to longer battery life and therefore longer possible play times, which would of course make players happy :)


Question about the compression.

The first thing that comes to my mind when I hear the word compression is reducing file size, but in this case it sounds like it refers to two things. One is doing things like swizzling and getting textures into the format most suitable for the GPU, to make reading the data more efficient; the other is reducing the file size (which is what I imagine people typically think of) using things like GZIP and the trick of adjacent texels being similar colours. That's not the question.

My question is: doesn't compression of the type that reduces file size also require decompression when the asset is initially read, before it is sent to graphics memory? That is one of the things you try to avoid by not using standard formats like PNG, which would necessitate compression (of the read-data-more-efficiently type) before sending the data to graphics memory.

I guess the obvious answer is that there's no benefit to be gained by keeping the texture saved as a PNG, whereas saving the texture in one of these other formats means you can skip things like swizzling, and the file size saved by compressing for space is a worthwhile trade-off for having to spend some extra time decompressing, and the decompression is probably very fast anyway.

But aren't memory and storage generally considered cheaper resources than CPU cycles?

Or maybe reading that smaller file from disk actually saves enough time that, despite requiring decompression, it's still faster overall, disk access being as slow as it is.

I'm also wondering how the data is represented in graphics RAM vs, I suppose, general RAM (is there a technical term to differentiate it from graphics RAM?) vs disk. On disk it can clearly be represented in whatever file format you like. Once it gets to memory, does the graphics card require a certain representation, or can the way the graphics card reads from general RAM be programmed? I would imagine that there isn't much choice in how the image data is represented in graphics RAM, and that it has to be in a certain format for the graphics card to do its job. Can the graphics RAM even be directly manipulated?

Now that I think of it, nothing written above talked about the non-graphics RAM or the disk, so those diagrams of memory layout were for the graphics RAM. Are these layouts decisions made by Double Fine, or is this an explanation of decisions the graphics card designers made about the memory layout, which in turn means it makes sense to store the files in a similar format?

Edit:

Also, a more practical, less low-level question: are there programs/tools/libraries out there designed to do all this mip-map generating, sharpening, chunking, and compression, or is everybody just writing their own? Can we have yours? (I ask that in a joking way, though it would be cool if the answer is yes.)


I understand that this is an engine that you are making and that you want to do it right from the start. I am just a little curious how much of an impact these optimizations have these days. It is my impression that the devices you are targeting are pretty powerful, but perhaps that is just an illusion created by smartphone developers using these kinds of optimizations?

I think even developers of relatively small mobile games use texture compression. For example, Middle Manager of Justice would not run without compressed textures on iOS devices; the game would simply run out of memory, in which case the OS terminates the process.

I only touched on this in the article a little bit, but compressed textures actually require less graphics memory than raw textures. So let's say you have a 1024 x 1024 RGBA texture. A raw representation would require 4MB of graphics memory, whereas the DXT1 version of the same image would only 'eat' 0.5MB. At the end of the day that means you can use more and/or larger textures, which is always good.

What is it in the game that consumes the most rendering time? I remember that you put in some 3D models like the windmill-thingie, but apart from that there can't be that many quads to render, right? Would the FPS be less than 100 if you simply used PNG files on the targeted device with the worst performance?

That is a non-trivial question and there isn't one answer (unfortunately). Here are some of the main offenders:

- Expensive fragment shaders

- Uploading textures and constants to graphics memory

- State changes

I know that this has been said before, but I'd love for you to write a book on programming and game development. Your posts are so clear, and I've gone through a few books/training videos on the topic that don't even touch on some of the topics you bring up, especially multi-platform workflow. (They also have a severe lack of red bot.) Thanks again!

Thanks dude. :-)


My question is: doesn't compression of the type that reduces file size also require decompression when the asset is initially read, before it is sent to graphics memory? That is one of the things you try to avoid by not using standard formats like PNG, which would necessitate compression (of the read-data-more-efficiently type) before sending the data to graphics memory.

For traditional file compression techniques (e.g. ZIP) that is true. Compressed textures as described in the article do not have to be decompressed. In fact the goal of the data pipeline is to prepare the data so that it can be used by the GPU directly the way it is.

As a side note: traditional file compression can still make sense if it reduces IO time. If it's faster to decompress the data in memory than to read it uncompressed from a storage medium, then it is still a good choice. For example, we use this for iOS devices, because IO speed isn't a particularly strong suit of the hardware.
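
Just to illustrate, that kind of file compression is ordinary gzip-style compression over the already-munged file; a minimal sketch with Python's gzip module (the file names are just examples):

    import gzip
    import shutil

    def gzip_asset(src_path, dst_path):
        """Write a gzip-compressed copy of a munged asset."""
        with open(src_path, "rb") as src, gzip.open(dst_path, "wb", compresslevel=6) as dst:
            shutil.copyfileobj(src, dst)

    gzip_asset("scene_chunk_03.pvr", "scene_chunk_03.pvr.gz")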

I'm also wondering how the data is represented in graphics RAM vs, I suppose, general RAM (is there a technical term to differentiate it from graphics RAM?) vs disk. On disk it can clearly be represented in whatever file format you like. Once it gets to memory, does the graphics card require a certain representation, or can the way the graphics card reads from general RAM be programmed? I would imagine that there isn't much choice in how the image data is represented in graphics RAM, and that it has to be in a certain format for the graphics card to do its job. Can the graphics RAM even be directly manipulated?

Now that I think of it, nothing written above talked about the non-graphics RAM or the disk, so those diagrams of memory layout were for the graphics RAM. Are these layouts decisions made by Double Fine, or is this an explanation of decisions the graphics card designers made about the memory layout, which in turn means it makes sense to store the files in a similar format?

Whether or not graphics memory can be manipulated directly depends on the device. ;-) For this project we don't have the benefit of a shared memory architecture, so we have to rely on OpenGL (or rather the driver implementation) to communicate with graphics memory.

The swizzled layout is something designed (and recommended) by the GPU manufacturers. We are merely trying to get the most out of the hardware the game will run on.

Also, a more practical, less low-level question: are there programs/tools/libraries out there designed to do all this mip-map generating, sharpening, chunking, and compression, or is everybody just writing their own? Can we have yours? (I ask that in a joking way, though it would be cool if the answer is yes.)

Unfortunately it's not simple to share that code, because it's designed with our data build servers in mind, which means it contains a lot of extra stuff that wouldn't make sense for somebody else.

Having said that, there are a lot of free and open-source tools available that do (part of) the work discussed above. Check out nvdxt and PVRTexTool, for example...


It's happened before, but DFA has definitely provided me with the densest accumulation of instances fitting the description, "I am incredibly interested in this despite having practically no idea what's going on."
