Double Fine Action Forums
DF Oliver

Programming Update 10: Broken Age's Approach to Scalability

Recommended Posts

- How many light sources are supported in the game? I guess there is a loop inside the fragment shaders, as usual, to average the directions and colors of these lights.

The game supports an infinite number of light sources, since accumulation is done on the CPU. All the GPU ever 'sees' are the averaged values.
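
To make the idea concrete, here is a hypothetical sketch of CPU-side light accumulation (not Double Fine's actual code; the struct layout and falloff are my assumptions): each frame, the lights affecting a character are averaged into a single color, so the shader only ever sees one value regardless of how many lights exist.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Light { float x, y; float r, g, b; float radius; };
struct Color { float r, g, b; };

// Accumulate all lights at (px, py), weighting each by a simple linear
// falloff. The per-light loop runs on the CPU, not in the shader, so
// the GPU cost is constant no matter how many lights the scene has.
Color accumulateLights(const std::vector<Light>& lights, float px, float py) {
    Color sum{0, 0, 0};
    float totalWeight = 0.0f;
    for (const Light& l : lights) {
        float dist = std::hypot(l.x - px, l.y - py);
        float w = std::max(0.0f, 1.0f - dist / l.radius); // linear falloff
        sum.r += l.r * w; sum.g += l.g * w; sum.b += l.b * w;
        totalWeight += w;
    }
    if (totalWeight > 0.0f) {
        sum.r /= totalWeight; sum.g /= totalWeight; sum.b /= totalWeight;
    }
    return sum; // uploaded as a single uniform to the shader
}
```

The averaged color would then be passed to the GPU as one uniform, which is why the light count can be "infinite".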

- Your explanation of the shadow blobs, and of the rough polygon that encompasses and distorts them, was very insightful, but I wonder how you apply it to non-flat floors. Are there stairs in the game?

Our artists have a lot of control over the shadows (and the lighting in general), so they tend to make it work by reducing the size and/or opacity of the shadows. Everything in the game is 2D, and the code doesn't have any semantic knowledge of what certain pixels in the artwork mean.

- When you compared the performance difference between using chunks and not using them for environment objects, how did you run the experiment without the clip masks? Was it just by loading large image files without any splitting?

Testing without the clip masks was easy: instead of loading the polygon, we simply created a full rectangle. As far as the performance difference goes, I remember that the iPad was running at 10 FPS in a scene with 8 or 9 layers before we added the optimization. The game now runs at 30 FPS in the same scene.
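
A guess at how that toggle might look in code (names and signature are hypothetical): when clip masks are disabled for the test, each chunk simply gets a quad covering the full texture rectangle instead of its tight clip polygon.

```cpp
#include <vector>

struct Vec2 { float x, y; };

// Hypothetical helper: choose the geometry used to render one chunk.
// With the clip mask enabled we use the tight polygon (less overdraw);
// without it we fall back to a full rectangle covering the texture.
std::vector<Vec2> chunkGeometry(const std::vector<Vec2>& clipPolygon,
                                float w, float h, bool useClipMask) {
    if (useClipMask && !clipPolygon.empty())
        return clipPolygon;                      // tight polygon
    return { {0, 0}, {w, 0}, {w, h}, {0, h} };  // full-rectangle fallback
}
```

The full rectangle forces the GPU to shade every transparent pixel of the texture, which is exactly the overdraw the clip masks avoid.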


- How many light sources are supported in the game? I guess there is a loop inside the fragment shaders, as usual, to average the directions and colors of these lights.

The game supports an infinite number of light sources, since accumulation is done on the CPU. All the GPU ever 'sees' are the averaged values.

Cool.

- Your explanation of the shadow blobs, and of the rough polygon that encompasses and distorts them, was very insightful, but I wonder how you apply it to non-flat floors. Are there stairs in the game?

Our artists have a lot of control over the shadows (and the lighting in general), so they tend to make it work by reducing the size and/or opacity of the shadows. Everything in the game is 2D, and the code doesn't have any semantic knowledge of what certain pixels in the artwork mean.

Interesting, so the engine doesn't know whether certain parts are floors or not?

I guess the size/opacity of the shadows can be adjusted in-game based on nearby light sources, and the animation data will somehow affect the final result as well, such as casting less shadow when the legs are away from the floor. But I suspect a lot of work needs to be done by the artists to make it look good. Is that the case?

- When you compared the performance difference between using chunks and not using them for environment objects, how did you run the experiment without the clip masks? Was it just by loading large image files without any splitting?

Testing without the clip masks was easy: instead of loading the polygon, we simply created a full rectangle. As far as the performance difference goes, I remember that the iPad was running at 10 FPS in a scene with 8 or 9 layers before we added the optimization. The game now runs at 30 FPS in the same scene.

Full rectangles that contain those large textures. Makes sense. I'm sure you can get a big performance increase by addressing that.

One thing I'm wondering: when you split the data into chunks, do you consider how tiled rendering is done on specific hardware in order to make better chunks?

One thing I'm wondering: when you split the data into chunks, do you consider how tiled rendering is done on specific hardware in order to make better chunks?
Perhaps I misunderstood, but I thought he mentioned a piece of software that automated and optimized generation of the chunks.

One thing I'm wondering: when you split the data into chunks, do you consider how tiled rendering is done on specific hardware in order to make better chunks?
Perhaps I misunderstood, but I thought he mentioned a piece of software that automated and optimized generation of the chunks.

Yes, he did. I am just wondering whether the tile size used in tiled rendering is one factor in choosing the chunk size.

Maybe if each chunk corresponds to one tile, then it will be faster. Maybe.


Using chunks which are smaller than a tile will most likely be slower (since that would increase the number of chunks involved in rendering a single tile), but AFAICT there is no reason to use chunks that small anyway...
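
A toy illustration of that point (the alignment assumption and the formula are mine, not from the talk): counting how many grid chunks can intersect one hardware tile shows why chunks smaller than a tile inflate the per-tile work list.

```cpp
// Upper bound on the number of size-`chunkSize` grid chunks that an
// arbitrarily placed square tile of size `tileSize` can intersect.
// An unaligned interval of length T crosses at most ceil(T/C) + 1
// chunks of size C, so we square that per-axis bound.
int chunksOverlappingTile(int tileSize, int chunkSize) {
    int perAxis = (tileSize + chunkSize - 1) / chunkSize + 1;
    return perAxis * perAxis;
}
```

With a 32-pixel tile, 32-pixel chunks touch at most 4 chunks per tile, while 8-pixel chunks touch up to 25, which is why going much smaller than a tile only adds overhead.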

You guys will be able to choose whether or not you want to see letterbox bars, so if you want to see all of the pixels even on a 16:9 or 16:10 display you can make that happen. :-)

As a 16:10 user, I can say we definitely appreciate it.


Hi Oliver,

Thanks for posting this talk, really interesting, but I was wondering if there is any chance you would be able to make the Question and Answer section of your presentation available too? I enjoy your answers here, but it would be interesting to hear what questions the game developers at GDC brought up.


Thanks for posting this talk, really interesting, but I was wondering if there is any chance you would be able to make the Question and Answer section of your presentation available too? I enjoy your answers here, but it would be interesting to hear what questions the game developers at GDC brought up.

I wish I could. I think the Q&A section might be part of the official Vault recording, but that's not available for free unfortunately...


I don't know anything about 2D/3D programming, I do web design for a living, but this reminded me of how we've started to focus on building mobile-first websites and "responsive design." This was like responsive design for games. Very cool stuff.

I was a bit disappointed to see the lack of Windows RT/Windows Phone platforms, considering they now allow C++, which I figured might be an attraction for porting a game. Have you looked into Windows Phone OS or Windows RT?


I was a bit disappointed to see the lack of Windows RT/Windows Phone platforms, considering they now allow C++, which I figured might be an attraction for porting a game. Have you looked into Windows Phone OS or Windows RT?

Yeah, Windows Phone is unfortunately the odd one out, since it doesn't support OpenGL as far as I know. We picked OpenGL as our primary graphics API because it covers the dominant part of the market. Having said that, it would be entirely possible to port the game if there was enough interest and a big enough market to pay for the port.


Maybe I misunderstood something, but is the problem really the number of overdraws, as the heatmap you used suggests, or is it rather the total texture area a pixel causes to be pulled in? Wouldn't a pixel that needs three huge rectangles be worse for mobile GPUs than one that needs 20 tiny ones?

Maybe I misunderstood something, but is the problem really the number of overdraws, as the heatmap you used suggests, or is it rather the total texture area a pixel causes to be pulled in? Wouldn't a pixel that needs three huge rectangles be worse for mobile GPUs than one that needs 20 tiny ones?

The answer is no in most cases. A texture look-up is an incredibly expensive operation, because it can take a long time to fill the texture cache with values. In fact, you generally want to hide that latency by executing texture-independent math operations while waiting for the look-up to finish. In other words, you want to minimize the number of texture fetches. The size of the textures doesn't really matter all that much, as long as they fit into VRAM.

Also, overdraw really is a problem orthogonal to operation latency. It's bad because you are spending GPU cycles on something that can't be seen, so those cycles are wasted. In an ideal world you would only ever write to each pixel once, but that's rarely achievable.
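
A toy version of the kind of overdraw heatmap mentioned in the talk (my own sketch, not the actual tool): rasterize each layer's rectangle into a counter buffer and record how many times every pixel gets written; every write past the first is wasted fill rate.

```cpp
#include <vector>

struct Rect { int x0, y0, x1, y1; }; // half-open pixel bounds

// Count per-pixel writes across all layers; a value of N means the
// pixel was shaded N times, of which N-1 are pure overdraw.
std::vector<int> overdrawHeatmap(int width, int height,
                                 const std::vector<Rect>& layers) {
    std::vector<int> counts(width * height, 0);
    for (const Rect& r : layers)
        for (int y = r.y0; y < r.y1; ++y)
            for (int x = r.x0; x < r.x1; ++x)
                ++counts[y * width + x];
    return counts;
}
```

Visualizing this buffer as a color ramp gives exactly the heatmap style of fill-rate debugging described above.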


I'm one of the KS backers, but haven't logged into the backers' forum much. I am a programmer (operating systems/sysadmin), though, and found this particular presentation very interesting. I think this gives backers a good idea of why it takes so long to do a good job on an ambitious game like this. This video was interesting enough that I'm glad I backed the project just to hear about the challenges faced while programming it.

Mind you, I'll probably never have reason to use any of this info in my own programming, but still it was very interesting to me.

Thanks!


This was very insightful, thanks for sharing it with us. I especially liked the fillrate visualisation and the tour of your shader processing pipeline. With modern OpenGL versions you could cache the shaders as binary blobs after the first compilation, but on mobile it doesn't seem to be possible.


Thanks for sharing that with us, Oliver; I found it very informative. I especially liked how you spoke about how characters and environments were authored from a technical perspective; it was very interesting to see. Also, some of the solutions you talked about in regards to the rigging, shadows, and shader effects seemed rather clever and elegant, even if I didn't completely understand the technical jargon.

I really enjoy these information bites, so thank you :-)


Fantastic talk, Oliver! I hope it went well at GDC :) I have a few questions about the gradient lighting technique :D

1. How are the dynamic lights modelled? Is a light simply a position, colour, and range?

2. How do the dynamic lights interact with environment art? Or do environment textures ignore dynamic lights and instead toggle the hand-painted states you mentioned?

3. I'm not sure I understand the need to approximate normals. Naively, I would guess that I could achieve that gradient tint by setting the vertex colours on the character's mesh based on the average colour at that point. How do the raycasts from that "pull back point" fit into the calculations?

Sorry for all of the questions. I'm super curious! :) Cheers!


1. How are the dynamic lights modelled? Is a light simply a position, colour, and range?

The dynamic lights are 2D omni-directional, which means they have a location, an inner and an outer radius, a top- and a bottom-color, and some other parameters (e.g. flicker, ...).
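
A rough model of that light description (field names and the falloff curve are my guesses, not the shipped code): attenuation is full inside the inner radius and fades linearly to zero at the outer radius.

```cpp
#include <cmath>

// Hypothetical 2D omni light matching the parameters described above.
struct OmniLight2D {
    float x, y;
    float innerRadius, outerRadius;
    float topColor[3], bottomColor[3];
    float flicker; // plus whatever other parameters the real engine has
};

// 1.0 inside the inner radius, 0.0 beyond the outer radius,
// linear falloff in between.
float attenuation(const OmniLight2D& l, float px, float py) {
    float d = std::hypot(px - l.x, py - l.y);
    if (d <= l.innerRadius) return 1.0f;
    if (d >= l.outerRadius) return 0.0f;
    return 1.0f - (d - l.innerRadius) / (l.outerRadius - l.innerRadius);
}
```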

2. How do the dynamic lights interact with environment art? Or do environment textures ignore dynamic lights and instead toggle the hand-painted states you mentioned?

They do not influence the environment at all. The cross-fading is the only lighting technique we have for the environment.

3. I'm not sure I understand the need to approximate normals. Naively, I would guess that I could achieve that gradient tint by setting the vertex colours on the character's mesh based on the average colour at that point. How do the raycasts from that "pull back point" fit into the calculations?

Even though we calculate the average light color(s) on the CPU, all of the actual lighting is done on the GPU. There are multiple reasons for that, but the most important one is that it's way more efficient this way and can easily be disabled on weaker GPUs. We could have used a location-based gradient, meaning the verts in the head pick up the top-color whereas the feet use the bottom-color, but using normals was a more natural choice that also offers flexibility for the artists. They can easily tune how 'blobby' the character is, or in other words where the center of the gradient sits and how wide it is.

I hope this answered your questions.


1. How are the dynamic lights modelled? Is a light simply a position, colour, and range?

The dynamic lights are 2D omni-directional, which means they have a location, an inner and an outer radius, a top- and a bottom-color, and some other parameters (e.g. flicker, ...).

2. How do the dynamic lights interact with environment art? Or do environment textures ignore dynamic lights and instead toggle the hand-painted states you mentioned?

They do not influence the environment at all. The cross-fading is the only lighting technique we have for the environment.

3. I'm not sure I understand the need to approximate normals. Naively, I would guess that I could achieve that gradient tint by setting the vertex colours on the character's mesh based on the average colour at that point. How do the raycasts from that "pull back point" fit into the calculations?

Even though we calculate the average light color(s) on the CPU, all of the actual lighting is done on the GPU. There are multiple reasons for that, but the most important one is that it's way more efficient this way and can easily be disabled on weaker GPUs. We could have used a location-based gradient, meaning the verts in the head pick up the top-color whereas the feet use the bottom-color, but using normals was a more natural choice that also offers flexibility for the artists. They can easily tune how 'blobby' the character is, or in other words where the center of the gradient sits and how wide it is.

I hope this answered your questions.

Thank you for taking the time to respond, Oliver! That all makes a lot of sense :) The results are beautiful :)


I was a bit disappointed to see the lack of Windows RT/Windows Phone platforms, considering they now allow C++, which I figured might be an attraction for porting a game. Have you looked into Windows Phone OS or Windows RT?

Yeah, Windows Phone is unfortunately the odd one out, since it doesn't support OpenGL as far as I know. We picked OpenGL as our primary graphics API because it covers the dominant part of the market. Having said that, it would be entirely possible to port the game if there was enough interest and a big enough market to pay for the port.

Would this help? https://github.com/stammen/angleproject

Would be nice to see a Windows Store version at least :)
