Double Fine Action Forums
DF Lydia

Art Update 12: Character Texture Management


Hi, backers! My name is Lydia Choy, and I’m a technical artist on Broken Age.

Lee previously talked about lighting in this forum post: http://www.doublefine.com/forums/viewthread/9172/

Now I’ll go into more detail about the script that manages the various views and data that make up a character texture, using the boy as an example.

STEPS:

1) The artist first paints different views of a character (front, side, back, etc.), and then organizes the layers into folders corresponding to each view. In this case there are three textures for the back, side, and front of his head, and another three for the back, side, and front of his body. Each of these views has a folder with the “View_” prefix.

art12_01.png
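
As a rough illustration (and not Double Fine's actual script), here is a minimal Photoshop ExtendScript sketch of how a script could pick out those view folders; it assumes nothing beyond the “View_” prefix described above.

```javascript
// Minimal sketch (not the production script): collect the top-level layer groups
// that follow the "View_" naming convention.
var doc = app.activeDocument;
var viewGroups = [];

for (var i = 0; i < doc.layerSets.length; i++) {
    var group = doc.layerSets[i];
    if (group.name.indexOf("View_") === 0) {  // only groups matching the prefix are views
        viewGroups.push(group);
    }
}

for (var j = 0; j < viewGroups.length; j++) {
    $.writeln("Found view: " + viewGroups[j].name);  // e.g. View_Front, View_Side, View_Back
}
```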

2) Different parts of each view need to be separated into animatable pieces (head, eyes, upper arm, forearm, etc.), which are arranged in a reasonable manner onto a square texture and then UV-mapped to geometry. (For more on the process of UV mapping, Wikipedia has a good explanation: http://en.wikipedia.org/wiki/UV_mapping ) The script creates a folder called UV_Layouts, and each child folder corresponds to a UV texture that needs to be generated. The script crops each texture based on the layer mask, and also grabs the desired resolution of the texture from the folder name.

art12_02.png
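
The post doesn't spell out the exact folder naming, so the snippet below is only a sketch of the idea of encoding a texture's name and resolution in its folder name; the "HeadSide_512" convention is invented for illustration.

```javascript
// Sketch only: pull a texture name and target resolution out of a UV-layout folder name.
// The "Name_resolution" convention here is hypothetical; the real naming may differ.
function getLayoutInfo(folderName) {
    var match = folderName.match(/^(.+)_(\d+)$/);
    if (!match) {
        return null;  // doesn't follow the convention, so the script would ignore it
    }
    return {
        textureName: match[1],
        resolution: parseInt(match[2], 10)
    };
}

var info = getLayoutInfo("HeadSide_512");
// info.textureName === "HeadSide", info.resolution === 512
```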

3) When the character is broken up in this way, artists need a way to make changes to the texture in the original context. For example, someone wants to add more stripes to the boy’s sleeve. It is way easier to paint these stripes in the original view than in the “broken up” UV-layout view, and ideally the UV-layout view updates as well. Using naming conventions, the script takes the different body parts and creates parent folders containing Photoshop smart objects. Smart objects allow multiple instances of the same image to exist, and whenever you edit any one of those instances, all of them stay in sync. This way, when stripes are added to the boy’s sleeve in the original configuration, the images in the UV folder also get updated. The script will also add any new layers that are created to the UV layout when the artist wants to sync them up.

art12_03.png
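
For anyone curious what that step might look like in script form, here is a small sketch (again, not the production script): it converts a painted layer into a smart object and duplicates that instance into a UV-layout group, relying on the fact that a duplicated smart-object layer shares its contents with the original. All of the group and layer names here are assumptions.

```javascript
// Sketch only, with assumed group/layer names.
function convertActiveLayerToSmartObject() {
    // Action Manager event behind Layer > Smart Objects > Convert to Smart Object.
    executeAction(stringIDToTypeID("newPlacedLayer"), undefined, DialogModes.NO);
}

var doc = app.activeDocument;

// Hypothetical names, not the actual Broken Age naming.
doc.activeLayer = doc.layerSets.getByName("View_Side").artLayers.getByName("UpperArm");
convertActiveLayerToSmartObject();

// Duplicating the smart object keeps both instances pointing at the same contents,
// so painting in the view folder also updates the UV layout.
var uvGroup = doc.layerSets.getByName("UV_Layouts").layerSets.getByName("HeadSide_512");
doc.activeLayer.duplicate(uvGroup, ElementPlacement.INSIDE);
```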

4) For rim lighting (more about that in Lee’s previous posts), artists create a folder called “Rim” in each smart object, with three layers named Red, Green, and Blue, which correspond to the different directions of rim lighting (left, right, and top). They can paint these rim channels using more “natural” colors, such as a warm sunlight tone, and the script takes each layer, based on its name, and fills out the final rim texture with red, green, or blue using the alpha values of each layer.

art12_04.png
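
The actual fill code isn't shown in the post, so the snippet below is just one plausible way to do it: each named rim layer is recolored with pure red, green, or blue while its alpha is preserved. The "Rim" group and Red/Green/Blue layer names follow the convention described above; everything else is an assumption.

```javascript
// Sketch only: recolor the hand-painted rim layers into the channels the rim texture needs.
function fillLayerPreservingAlpha(doc, layer, r, g, b) {
    var color = new SolidColor();
    color.rgb.red = r;
    color.rgb.green = g;
    color.rgb.blue = b;

    doc.activeLayer = layer;
    doc.selection.selectAll();
    // The final argument preserves transparency, so only painted pixels are recolored
    // and the layer's alpha survives as the mask.
    doc.selection.fill(color, ColorBlendMode.NORMAL, 100, true);
    doc.selection.deselect();
}

var rimDoc = app.activeDocument;  // assumed to be the smart object's own document
var rim = rimDoc.layerSets.getByName("Rim");
fillLayerPreservingAlpha(rimDoc, rim.artLayers.getByName("Red"),   255, 0,   0);
fillLayerPreservingAlpha(rimDoc, rim.artLayers.getByName("Green"), 0,   255, 0);
fillLayerPreservingAlpha(rimDoc, rim.artLayers.getByName("Blue"),  0,   0,   255);
```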

5) The export script then takes each UV Layout group, and outputs a square diffuse texture with the specified resolution and name, and also a corresponding rim texture.

Here is what is generated for the Side Head view for the boy:

art12_05.png
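
As a rough sketch of what the export step could look like (the real exporter also isolates the right layer groups and writes a matching rim texture), the snippet below flattens a copy of the document, forces it to the resolution parsed from the UV-layout folder name, and writes a PNG. The output path and resolution are made-up examples.

```javascript
// Sketch only: export a square texture from a copy of the document.
function exportSquareTexture(sourceDoc, outPath, resolution) {
    var dup = sourceDoc.duplicate();
    dup.flatten();
    dup.resizeImage(UnitValue(resolution, "px"), UnitValue(resolution, "px"));
    dup.saveAs(new File(outPath), new PNGSaveOptions(), true, Extension.LOWERCASE);
    dup.close(SaveOptions.DONOTSAVECHANGES);
}

exportSquareTexture(app.activeDocument, "~/textures/boy_head_side_diffuse.png", 512);
```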

Some final notes:

All asset processing is based on naming, which is good and bad:

- Good, because it makes the script really easy to write, change, and test, and it's obvious in the file what is and isn't being processed. The script will ignore any garbage that doesn't match the naming.

- Bad, because naming errors sometimes mean some digging to figure out what went wrong, especially if parts of the naming convention have been forgotten or aren't well documented.

Photoshop scripting is both straightforward and annoying:

- You can record actions simply by performing them with the ScriptListener output turned on, but that output is basically unreadable and hard to follow if you aren't organized with your function names.

- You can use JavaScript or Visual Basic.

- Because the script runs "in place" in the UI, you run into a lot of random bugs, like Photoshop assuming certain selections and modes are active, if you don't clear them out explicitly in the script (a small sketch of that kind of cleanup follows below).
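
Here is a small sketch of that kind of cleanup, assuming only standard ExtendScript preference and selection calls; it isn't taken from the Broken Age script.

```javascript
// Sketch only: don't depend on whatever units, dialog settings, or selections the artist
// left behind, and restore them when the script is done.
function runSafely(work) {
    var oldUnits = app.preferences.rulerUnits;
    var oldDialogs = app.displayDialogs;

    app.preferences.rulerUnits = Units.PIXELS;  // bounds and crops come back in pixels
    app.displayDialogs = DialogModes.NO;        // keep stray dialogs from blocking the script

    try {
        app.activeDocument.selection.deselect();  // don't inherit a leftover selection
        work();
    } finally {
        app.preferences.rulerUnits = oldUnits;
        app.displayDialogs = oldDialogs;
    }
}

runSafely(function () {
    // ...export or sync steps would go here...
});
```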

So, why did we do things this way?

When we decided that characters would be 2D, there was no existing pipeline for constructing them. With the help of this script, we could keep all of the paint and lighting information for one character in a single file, organized in an artist-friendly layout that is easy to edit, yet kept in sync with the final texture layout that the rigged characters use, without any significant changes to the way the artists preferred to work. It also made it easy for artists to take early concepts and translate them into game-ready assets without any additional tools beyond the ones they were already using.

As someone who has written many pipeline and artist scripts, I've learned to pay attention to artists' workflows and to look for "easy wins" that scripts might give them--anything that helps automate or speed up steps in the production process so that artists can spend more time creating art instead of managing non-art-creation steps in the pipeline. It also makes it easy to hand off work to others in the process when artists agree on a common workflow that works for everyone involved.

And making it easier to make the game makes the game better!


Thanks for the update Lydia!

I've recently reached kind of a eureka moment with UV unwrapping and mapping (but in Blender, mostly creating low poly assets that I later sculpt in Zbrush), but I never really thought of using Photoshop smart objects this way to link the UV islands to an object layout that makes more real-world sense. I don't think it's super useful for my current workflow, but it is interesting, and makes total sense for Broken Age's 2D textures. I could see using the technique in the future.

You wouldn't happen to know of any good tutorials on Photoshop scripting that specifically deals with the sort of things your script is doing here (using the RGB alphas for rim lighting, keeping layers synced in the UV map, etc.), would you?

Anyway, thanks for sharing!


I like how the broken-down images look like something a serial killer would have or create. Great work as always!

Smiles

look like something a serial killer would have or create
You should see a typical face texture on a UV-unwrapped/flattened 3D model *_*

cool look into the process - thanks DF


Thanks for the update Lydia! I had been wondering how this would be automated. I was expecting that this would take the form of an exporter rather than an action thingy, but I can see that that's probably easier.

Thanks for the update Lydia!

I've recently reached kind of a eureka moment with UV unwrapping and mapping (but in Blender, mostly creating low poly assets that I later sculpt in Zbrush), but I never really thought of using Photoshop smart objects this way to link the UV islands to an object layout that makes more real-world sense. I don't think it's super useful for my current workflow, but it is interesting, and makes total sense for Broken Age's 2D textures. I could see using the technique in the future.

You wouldn't happen to know of any good tutorials on Photoshop scripting that specifically deals with the sort of things your script is doing here (using the RGB alphas for rim lighting, keeping layers synced in the UV map, etc.), would you?

Anyway, thanks for sharing!

Hi! Sorry for the slow response--I forgot that Chris posted this thing, hahaha. To answer your question--I didn't really consult any tutorials for this (other than the Photoshop scripting guide available from Adobe's site). The workflow we decided on was based on a) what would be the least disruptive way for the artists to organize layers and names, and b) what was the easiest way to keep things in sync with smart objects, after some trial and error. It was one of my first forays into Photoshop scripting in general, so I bet I still have a lot to learn! Often these script requests come up in addition to the work I already have on my plate, so once a reasonable solution is found I just kind of crank it out. I have no doubt there are faster, more efficient "offline" export methods out there, but they would have taken more investigation and more time.


Amazing Lydia! I read your post five times but I still can't figure out how the game knows which texture to call up at any given moment. (Maybe I'm over-thinking it?) Up until now, I was just assuming that the characters were going to be some variation of a standard animated sprite, with diffuse maps assigned to each piece of geometry - like any 3d model... but now I'm all confuddled again. You made me feel stupid... in a good way!? :)

Anyway, I have no idea how you do what you do - but keep on doing it!


Thanks for this. I, too, never thought of using this process and smart objects in Photoshop. It is also nice to see how you guys are managing your layers and all of the characters' assets. I am really interested in how this transitions into your engine or 3D software, and how you guys handle the geo-swapping when the character angle changes from a front to a front-third view, and so on.

Thanks again for taking the time to post this and share your process. Really exciting stuff.

Dan

Amazing Lydia! I read your post five times but I still can't figure out how the game knows which texture to call up at any given moment. (Maybe I'm over-thinking it?) Up until now, I was just assuming that the characters were going to be some variation of a standard animated sprite, with diffuse maps assigned to each piece of geometry - like any 3d model... but now I'm all confuddled again. You made me feel stupid... in a good way!? :)

Anyway, I have no idea how you do what you do - but keep on doing it!

Hi Meisjoe--I have a feeling that you and Dan in the previous post are curious about how the characters are ultimately rigged and how these textures are assigned to the mesh. It sounds like a future post about how we mix rigged geometry and flipbook (sprite) animation in our characters would be of interest! This post was more about how we take what an artist paints initially and lay out all of the pieces onto a square texture (which the engine requires) for UV-mapping onto 3D geometry, and how we keep all of those layouts in sync. In game, we have a mesh shader that samples the various channels from these textures, and final color values are calculated after lighting parameters and other inputs are passed into the shader. Perhaps that is also a good later post! Thanks for reading.


Lydia,

Thank you for the replies and insight. I too am interested in anything you are willing to share regarding your work and your process. I think the level to which you and your team have taken 2.5D animation is inspiring. In fact, I am so happy to see the steps that you are going through to make this happen. I am running a small animation company, and we have struggles similar to yours in managing budgets and artistic passion. I don't have much of an off switch when it comes to pushing the process and pipelines on our projects. Everything you guys are doing really demonstrates what is possible when really intelligent and talented people come together to make something they love happen... although I am sure there is a tonne of stress involved in this.

I am loving this and find it so helpful. Did you come up with the scripts/process to manage the UV layouts? It is a pretty brilliant fix to a traditionally very time-consuming process. What scope does your job cover--do you oversee the technical art for the whole thing, or are you focused primarily on texture management?

No pressure to answer. Appreciate the posts. Good luck!

Dan


I second Dan's post. I think where I got lost is in the custom shader work you do. I've only ever used prebuilt shaders, and have never tried to write one from scratch - so I probably don't know half of what's possible. It takes a special mind to be a programmer/technical artist... and I don't think I got it. :)

Lydia,

Thank you for the replies and insight. I too am interested in anything you are willing to share regarding your work and your process. I think the level to which you and your team have taken 2.5D animation is inspiring. In fact, I am so happy to see the steps that you are going through to make this happen. I am running a small animation company, and we have struggles similar to yours in managing budgets and artistic passion. I don't have much of an off switch when it comes to pushing the process and pipelines on our projects. Everything you guys are doing really demonstrates what is possible when really intelligent and talented people come together to make something they love happen... although I am sure there is a tonne of stress involved in this.

I am loving this and find it so helpful. Did you come up with the scripts/process to manage the UV layouts? It is a pretty brilliant fix to a traditionally very time-consuming process. What scope does your job cover--do you oversee the technical art for the whole thing, or are you focused primarily on texture management?

No pressure to answer. Appreciate the posts. Good luck!

Dan

Yeah, there was a lot of back and forth between me, Lee (the art director), and the texture artists (mostly Levi). Usually there is some general request like "hey, I wonder if there is a way to make X workflow better/more automated", and I investigate whether there is a quick solution that doesn't require much programming or scripting. In general, if something takes, say, a day or two of scripting but saves artists a few minutes or a few hours every time they need to do a specific thing--multiplied over the number of times they need to do it--it is usually a win. And yeah, I oversee the technical art on Broken Age, which includes some tools/pipeline work, VFX, and shader work. I'll see if I can do a post about shaders and materials in the future!

I'll see if I can do a post about shaders and materials in the future!

Awesome - I'm sure everybody would love to hear more if you can make time to write about it :D


oh man, this post reminds me how much I love Photoshop Smart Objects! however it also reminds me how much i hate Photoshop Smart Objects!

ha. thanks for the insight though, and hopefully you get a chance to post about shaders and materials as that's kinda my world and i'm always interested to see how it's done in other fields like games.

(cute avatar btw.)


Thanks for a very informative post.

I'm a little bit confused about this part:

(...) Using naming conventions, the script takes the different body parts and creates parent folders containing photoshop smart objects. Photoshop smart objects allow multiple instances of the same image to exist, and whenever you edit any one of these instances, it keeps all instances of this image in sync. This way, when stripes are added to the boy’s sleeve in the original configuration, the images in the uv folder also get updated.

When the artist edits a smart object later, doesn't it open in a new file, defeating the original purpose of painting the whole character together?

Thanks!


I'll take a shot at trying to answer this... (If I missed the point of your question, please disregard.) Smart objects are separate documents that appear as a single composited layer in Photoshop. So, if you edit a smart object, it applies the edit to the "master" file where the smart object exists as a layer. (It works just like Layer Compositions in After Effects.) The character is still together in the "master" file--but each layer of that file is a smart object that is saved on its own.

I'd imagine that the entire character is painted - and then broken up into smart objects. Then those layers are saved automatically, based on their name... and then... that's where I get lost. :)


if the complete character was "assembled" out of smart objects then the original question makes sense, i.e. you double-click on his arm layer and it 'opens' his arm in a separate document window and you're back to painting it out of context.

however, if your assembled character is some kind of master document, and you're painting within that, and you script the export from the various live layers to create new psd/png files on disk, then a uv layout file could be created using smart objects. that uv document will update each time you re-export the layers from the character document.

... maybe.


I think you're onto something. If the UV "master file" is made up of smart objects from the original painted file, then they could be repositioned to fit the UV layout in a separate "master" file... and any edits made to the original smart objects would "automatically" be saved to the actual texture.

This is why I suck at programming... because after a thing reaches a certain level of complexity - my head starts spinning.

It also occurs to me that Lydia is probably laughing her @ss off at all of us dullards trying to keep up. haha!


Hahaha--not at all! Yeah, the one downside to using smart objects is still having to edit a body part partially out of context when you double-click on the smart object--but then you also have the original document open with the appropriate master layers visible next to the smart object window, and when you save the smart object it updates the master document. It's a slight downside to this process, but the artists were OK with working this way in order to gain the other benefits.


Thanks for this great post, TAs ftw! You should do a more techy post on tech-artists.org. I was thinking you used Python with the win32 API to create the tool, but it was nice to see another method. Would love to see more posts on tech-art tools that were created for the pipeline of this game and others by Double Fine, if you are allowed.


Great post Lydia. I knew that you were using Photoshop from previous documentaries, but I had no idea that it was being used with all of these scripts. Thanks for sharing!

I never would have guessed that Photoshop would be used for game development. The things I'm learning from the documentary episodes and these posts have more than given me my money's worth for backing, and I haven't even played the game yet.

