Smooth Enum Space Switching with “AB Space” (working title)

In my time in rigging/animation, I've seen two widely used control schemes for multi-space switching: either an enum attribute listing all the spaces, or float attributes blending between two spaces, as more typically seen in IK/FK switches, but sometimes in space switching as well.

Enum Attributes:

The Good- Easy and clean, expandable, and the current space is always clear.

The Bad- Works like integers under the hood, so there's no smooth interpolation, which means switching isn't practical unless paired with a matching script. Even if Maya somehow changed to behave a bit more like floats between values, it's not like blending between 2 and 5 would be direct.

 

Float Attributes:

The Good- You can switch over a few frames or blend spaces rather than relying on matching scripts, making the whole thing a bit more lightweight and portable.

The Bad- This setup is typically confined to switching between two spaces (world space vs. parent space, or something like that). The only ways to expand it are usually either adding more float attributes that daisy-chain with a priority order, or constraint-switching to another controller that in turn has more float or enum attributes.

Animator preference (and in turn, the rigging tools) at Insomniac is to use the float attributes, largely because of the smooth interpolation. That said, I've personally never been a fan of that setup, largely because it ties our hands on controllers where we'd like to give the animators more options without making a messy, attribute-filled controller. So I've been tinkering with a C option.

 

A Quick Aside

The lookDevKit plugin that's been shipping with the last few Maya versions is quickly becoming one of my favorites, with its set of super useful utility nodes (god, I'm such a nerd).


The one I've been using the most is floatMath, because it conveniently combines a lot of operations into one node, so you don't have to deal with all the different nodes in the standard Maya set, with their accompanying different behaviors and attribute naming conventions. In general, the one-stop shop is my preference. Plus, I feel less wasteful for my typical use of calculating single numbers, since most of the old math nodes are designed for color, so you have to switch between xyz or rgb attributes and different naming patterns (again, that slightly different behavior can cause confusion and consternation, at least for me).
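For example, here's a minimal sketch of halving a value with a single floatMath node (the operation enum order here is from memory, so double-check it on your build; the connected attributes are hypothetical):

import pymel.core as pm

# One node type covers add/subtract/multiply/divide/min/max/power,
# with plain floatA/floatB inputs and a single outFloat output
half = pm.createNode('floatMath', n='halfRotation_div')
half.operation.set(3)  # assuming 3 = Divide; verify the enum on your build
half.floatB.set(2.0)
# sourceCtrl.rotateY.connect(half.floatA)     # hypothetical source
# half.outFloat.connect(targetJoint.rotateY)  # hypothetical destination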

In the cases where vectors/3 floats are needed, most of these nodes have color variants, and occasionally those have alpha attributes as well, so you can maybe sneak in a fourth value if needed. I dunno, I haven't messed with those, but I'm assuming it would work, though it may clamp values.

In any case, there are a lot of other useful nodes in the kit. You should check it out. Now, that's enough with the evangelizing. Back to the subject at hand.

 

Enums or Floats


(Photo credit goes to theporkchopexpress on deviantart)

To try to get the advantages of both floats and enums, I started playing around with a new scheme where there are two enum attributes, an “A Space” and a “B Space,” with a float to blend between the two.

(This video was too long to convert to a GIF, and I’m too tired to recapture)

 

This setup uses floatLogic nodes (from the lookDevKit; for people with older Maya builds, I think condition nodes would also work) and floatComposite nodes set to mix mode to create a switchboard of sorts for connecting to constraint weights. (blendTwoAttr would also work here; in fact, until I had almost finished writing this post, I was using blendTwoAttr before realizing I could use floatComposite.)


For each of the target nodes in the enum, you'll need two floatLogic nodes and a floatComposite to output to the weight attributes on the parent constraint (or whatever else you're using).

The floatLogic nodes for A and B space each output a bool of True when their enum attribute is set to the same index as that parent constraint target. Those out-bools are hooked up to the floatComposite and blended between A space and B space by the followB float attribute. That way, every weight sits at 0 except the chosen A and B space targets, whose weights blend between 0 and 1 as followB moves. For example, with aSpace set to target 0, bSpace set to target 2, and followB at 0.5, the constraint weights come out to 0.5, 0, 0.5.


One could potentially pack the floatComposite nodes down in multiples of three using colorComposite nodes, but for readability I tend to prefer sticking to one-dimensional nodes when dealing with one-dimensional attributes, and just making more of them. To each their own, though; just saying the color nodes are possible for that bit. (Using colorLogic for packing won't work, though, since it compares all three channels at once.)
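For the curious, a rough sketch of what that packing could look like; I haven't tested this, so I'm assuming colorComposite's mix operation shares floatComposite's enum index and uses the usual colorAR/colorAG/colorAB child attribute names:

import pymel.core as pm

# Pack the first three targets' A/B switch values into one colorComposite
packed = pm.createNode('colorComposite', n='spaces_0_to_2_blend')
packed.operation.set(2)  # assuming mix is index 2, as on floatComposite
# aLogic0.outBool.connect(packed.colorAR)   # hypothetical floatLogic nodes
# bLogic0.outBool.connect(packed.colorBR)
# controller.followB.connect(packed.factor)
# packed.outColorR.connect(weight_attr_0)   # hypothetical constraint weight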

Here's a quick script I hastily recreated because I'm away from my usual computer(s); I'm on vacation in Morocco /humble-brag. To use it, select the space targets first and the controller last:

import pymel.core as pm

sel = pm.selected()
controller = sel[-1]
targets = sel[:-1]

# This setup presupposes a parent group for the controller to be constrained
const = pm.parentConstraint(targets + [controller.getParent()], mo=True)  # or whatever else constraint

# A locked enum makes a nice channel box divider (attribute names can't contain spaces)
controller.addAttr('spaceSwitching', niceName='Space Switching', at='enum', enumName='----', keyable=True)
controller.spaceSwitching.lock()
controller.addAttr('aSpace', at='enum', enumName=':'.join(t.nodeName() for t in targets), keyable=True)
controller.addAttr('bSpace', at='enum', enumName=':'.join(t.nodeName() for t in targets), keyable=True)
controller.addAttr('followB', at='double', min=0, max=1, keyable=True)

if not pm.pluginInfo('lookdevKit', q=True, loaded=True):
    pm.loadPlugin('lookdevKit')

for i, target in enumerate(targets):
    # floatLogic defaults to an equals comparison, testing the enum against this target's index
    aspace_logic = pm.createNode('floatLogic', n='{}_aSpace_following_{}_equals'.format(controller, target))
    bspace_logic = pm.createNode('floatLogic', n='{}_bSpace_following_{}_equals'.format(controller, target))
    blend = pm.createNode('floatComposite', n='{}_{}_blend'.format(controller, target))

    # Set the constants
    aspace_logic.floatB.set(i)
    bspace_logic.floatB.set(i)
    blend.operation.set(2)  # mix

    # Make the connections
    controller.aSpace.connect(aspace_logic.floatA)
    controller.bSpace.connect(bspace_logic.floatA)
    aspace_logic.outBool.connect(blend.floatA)
    bspace_logic.outBool.connect(blend.floatB)
    controller.followB.connect(blend.factor)

    # The constraint's user-defined weight attributes end in W0, W1, etc.
    weight_attr = [a for a in const.listAttr(ud=True) if a.attrName().endswith('W{}'.format(i))][-1]
    blend.outFloat.connect(weight_attr)

 

So I just thought I'd share this to see what people think, because I haven't really seen it done this way before. Plus, to be honest, I'm not nuts about the attribute names as far as clarity for animators goes. That's actually the other reason I'm writing this post: to maybe crowdsource some better name ideas. So feel free to mention any ideas that come to mind while reading, even for the setup itself (I've just been calling it AB Space, but I'm not wedded to it).

Anyways, let me know what you think, whether you'd like a more tutorialized post about the setup if the way it works isn't clear enough, whether you spot any ways the setup could be less node-heavy, or, as mentioned before, if you have any bright ideas on attribute or setup naming.

Deformation and Controlling/Displaying Triangulation in Maya

Maya kinda sucks when it comes to working with triangles. Many a character artist arguing (in vain) for your studio/project to switch to 3ds Max will tell you all about it when they're not complaining about bright lights messing with their Cintiqs or how much it sucks to create hair. They have a point, but triangles don't have as much of a tendency to mess with us over in rigging world. They did mess with me a bit recently, though, and being an exclusively self-serving person (service role, schmervice role), I tried to find a solution only once it affected me.

For anyone unaware of what I'm talking about, a quick refresher/crash course: you can be working with a quad model, but under the hood, Maya (and probably just about any other 3D package) will turn those quads (or the n-gons of our sloppier character artists) into triangles. So why not just triangulate and call it a day? Because triangles suck to work with. You can't select edge loops, they screw with smoothing algorithms, and they make things visually crowded, making it hard to get those sexy smooth edge flows you see in all the Polycount threads. So most of the time, we either just roll with a quad topology and hope for the best (rarely running into any incidents), then maybe for those vital things, transfer weights over to a triangulated mesh later. But that approach gets messier and messier as you start to deal with meshes that have more than a simple skinCluster and accumulate more and more history from other deformers, mesh rivets, multiple versions of a character rig created for performance speed, etc.


Still think working in quads is working in quads? Think again, ya square.

The way Maya and many game engines do it by default is to draw the dividing line between the two closest verts in the quad. Where things start to get complicated, however, is when things deform. In Maya, the mesh dynamically retriangulates (you can show the triangles by checking Display Triangles on the shape node). So if two verts were close to each other in bind pose but, through skinning, blendshapes, and other wackiness, stray away from each other, suddenly your triangle will flip direction. When you see a bit of a visible pop in part of the surface while modeling or deforming things, that's what just happened.
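(If you'd rather flip that checkbox from script, the shape attribute is displayTriangles; a quick sketch, assuming a mesh transform is selected:)

import pymel.core as pm

# Toggle the Display Triangles checkbox on the selected mesh's shape
shape = pm.selected()[0].getShape()
shape.displayTriangles.set(not shape.displayTriangles.get())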


Did you hear? 517 and 5275 broke up. It’s a shame, they used to be so close.

That might be all well and good, as you want things to always strive to be their best. Problem is, game engines don't tend to do that. So what can end up happening is that things look fine in Maya, but once you're out in the game engine, your once-smooth silhouettes look like bread knives.

I ran into this a bit ago when working on improving the neck deformations for a character at work.


Since for NDA reasons I can't show this on the actual character (I enjoy being employed), I'll just show it on an old model I did a while back, dug up, and skinned in about five unintelligently spent minutes (so don't judge me). I was working on getting a nice sternomastoid flex on said character when they turned their neck, but the topology wasn't on my side. Luckily, the model I'll use for this post has the same problem as the one I got at work. Needless to say, by the point I got the model, it was too late to retopo on a whim for a small feature when the topology was already standard across all Tier 1 heads, otherwise working well, and rigged on multiple heads with hundreds of blendshapes already created per character.


This isn’t me confirming that Jessica Alba is in any project I’m working on. It’s confirming that this is the first result that comes up in image search for “sternomastoid” that isn’t a medical diagram or a terrifying surgery picture.

So imagine for the purposes of this exercise that the model actually supports the idea of the top end of the sternum…


So the mesh is dense enough that you might conceivably make a ridge across the diagonal faces. You could even get things looking good in Maya. But as you might be able to see if you squint or enlarge the picture, the triangles are doing us no favors. That's kind of what we're stuck with in the eyes of the game engine, since it locks edges. Thought it would be that easy?

So as we move along, things might look fine in Maya, but remember: as you're making your corrective blendshapes, it's dynamically re-triangulating.

So say we create a (very) quick corrective blendshape…


The preceding mouse-sculpted blendshape is brought to you at midnight by someone who’s somehow employed at a AAA game studio and apparently a smooth talker.

Maya has re-triangulated in a way where most of the triangles now run along the diagonal direction we want. Problem is, this won't carry over to the way the engine does things.

So, first problem to solve: How do we see the faces in Maya the same way they’ll be seen in-engine? Well, if you’re saying “Let’s just run triangulate on our main mesh,” you’re wrong….

…. but you’re close.

Problem is, it will still dynamically retriangulate like it did before, because the polyTriangulate sits at the end of the input stack, so it's still not representative. It's basically doing what Maya was already doing, but giving the triangles solid lines instead of our old implied dotted ones. But as I said, we're close.

Instead of running Triangulate on head_geo, we can run polyTriangulate on the Orig shape.


Whenever you apply a deformer to a mesh, Maya duplicates and hides a version of the mesh that feeds into the deformer as a baseline connection. So if you select that hidden mesh and triangulate it, you'll get that ugly jagginess we were all vying for.


Maya back to working against us. Just as nature intended.

Once you're done with your nice preview, you can simply delete that polyTriangulate node and go back to working in quads. Just make sure you delete it before you export. I actually wrote a tool at work that toggles this, which I use in these situations; a sketch of the idea is below.
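I can't share that tool, but a minimal sketch of the idea might look something like this (assuming a single selected mesh with deformer history; treat it as a starting point, not gospel):

import pymel.core as pm

def toggle_orig_triangulation():
    """Add or remove a preview polyTriangulate on the selected mesh's Orig shape."""
    mesh = pm.selected()[0]
    # The Orig shape is the hidden intermediate shape feeding the deformer stack
    origs = [s for s in mesh.getShapes() if s.intermediateObject.get()]
    if not origs:
        pm.warning('No Orig shape found; is the mesh actually deformed?')
        return
    orig = origs[0]
    existing = orig.listHistory(type='polyTriangulate')
    if existing:
        pm.delete(existing)  # back to quads; safe to export again
    else:
        pm.polyTriangulate(orig)  # engine-style preview; delete before export!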

But frankly, just knowing what it'll look like isn't always quite enough. It doesn't matter how much pushing and pulling of verts you do in pursuit of a slim line across perpendicular edges; there will always be an angle that has jaggies. Believe me, I've tried to find the magic shape that works. So how do we get control back after the fact and triumph over Maya's triangular tyranny? It's actually also fairly simple.

Simply duplicate the mesh in its base pose (you'll actually want to make two duplicates: keep one untouched and do your work on the other), then run Triangulate on the faces you want (or all of them, if you're a glutton for punishment), and flip or spin edges as you like. (Don't use Split Polygon: even if you don't make new verts, it'll renumber your verts and break the process.) Then clear history once you're done.


Then you take the outMesh of that shape, connect it to the inMesh of the Orig shape of the head, and delete that connection after it's taken effect.


Remember to break the connection immediately after; we're only using the connection as a sort of convoluted setAttr. This is another process that can be scripted.
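If you'd rather script that dance too, here's a sketch (assuming you select the custom-triangulated duplicate first and the deformed head last; names are placeholders):

import pymel.core as pm

tri_source, head = pm.selected()
orig = [s for s in head.getShapes() if s.intermediateObject.get()][0]

# Connect, force an evaluation so the mesh data actually transfers,
# then break the connection -- the convoluted setAttr described above
tri_source.getShape().outMesh.connect(orig.inMesh)
orig.inMesh.get()  # pull the attribute so the connection takes effect
tri_source.getShape().outMesh.disconnect(orig.inMesh)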

And there you have it: you now have a history-free, custom-triangulated mesh. It could probably use a few more triangulated faces, but you get the point.

And thanks to the second duplicate we made earlier (hopefully you remembered to do that), you can always jump back to the quad version when you want to do touch-ups on the corrective sculpts or anything else.


Anyways, hope that helps anyone in a similar jam. I've tested it in Unreal, and it works there with no hitches. It of course also works in Luna (our proprietary engine), but that most likely doesn't apply to you, unless you work at Insomniac, in which case, hi Edwin. (There's nobody at work named Edwin, but wouldn't it be bananas if we hired an Edwin after I wrote this?)

Until the next random problem comes up that I haven’t seen a more skilled and articulate person solve and write about, causing me to write a rambling bit of nonsense…

-N

Why isn’t my material showing in the 2016 Hypershade? (UPDATED)

UPDATE – This issue has been fixed in SP2 that just shipped 8/11/15. For anyone who can’t update to SP2 for whatever reason, read on.

So we're about to switch to Maya 2016 here at work, and one of the features is the new Hypershade. While it's mostly good, it's a little buggy. Earlier today, I found a bug that had me scratching my head a bit: for some reason, whenever I opened existing scenes and then opened the Hypershade, the cool new material viewer wasn't working, just displaying nothing. It was odd, because whenever I had been messing around with it in empty scenes, it always worked fine. I even referenced a problematic scene into my empty scene, and it was still working.

New Scene with Materials

Existing Asset Scene

So I started to test what was different between the two scenes. I saved the existing asset in 2016 (it was created in 2014) and re-opened it; that didn't work. Then I tried a few other things. Along the way, I noticed a few things suggesting that the viewer was affected by scene conditions, and thought, “It can't be as obvious as the scene units being different, can it?” Turns out it can be.

We use meters in our scenes rather than the Maya default of centimeters, so I switched back to centimeters, and voila. At first, I thought they had hard-coded some camera transformations for some reason, but I tested the other units and found they all worked, with the only other exception being yards, the second-largest unit. So I'm thinking it's some sort of precision error making the render camera shoot off into the stratosphere. I dunno, that's their problem (I submitted a bug report to Autodesk), but in the meantime, how do we solve it?

Turns out, if you simply set the units to centimeters, then set them back to meters after opening the Hypershade, that fixes it. So at some point, I might try poking around the actual Hypershade script (last time I did that, though, it was kind of a mess; maybe they cleaned it up with this new version), but in the meantime, calling this to open the Hypershade fixes it:

import pymel.core as pm

# Temporarily switch to centimeters, open the Hypershade, then restore the old units
old = pm.currentUnit(q=True, linear=True)
pm.currentUnit(linear='cm')
pm.mel.eval('HypershadeWindow')
pm.currentUnit(linear=old)

I'm writing this because I couldn't find even a mention of this problem when I was googling it. So just in case anyone else runs into it, this is what's causing it, and here's a quick solution until Autodesk patches it. I'll probably find a way to hot-swap some commands and shelf buttons to use this command so our artists can use it properly.

Ever miss Maya’s popupMenu Command? Simple Right-Click Menus in PySide/PyQt?

So I've recently started doing interfaces in Qt, and one thing I missed from my old Maya command scripting days was a simple, single command to create a right-click menu. A lot of the solutions I found online seemed a little weird, involving creating the entire menu inside a command run when you right-click. I dunno, maybe it's because I was used to creating the menu once and having it behave like any other menu when adding menuItems (or in this case, actions).

Anyways, I was starting to try to do something complicated, but realized I could get what I wanted in a few lines. By the way, this works with PySide and PyQt4 unadulterated (at least it did for me).

from PySide import QtCore, QtGui  # or: from PyQt4 import QtCore, QtGui


class RightClickMenu(QtGui.QMenu):

    def __init__(self, *args, **kwargs):
        super(RightClickMenu, self).__init__(*args)

        # Prepare the parent widget for using the right-click menu
        self.parentWidget().setContextMenuPolicy(QtCore.Qt.CustomContextMenu)
        self.parentWidget().customContextMenuRequested.connect(self.showMenu)

    def showMenu(self, *args):
        # Pop the menu at the current cursor position
        self.exec_(QtGui.QCursor.pos())

So all you'd have to do to implement it is use it like any other menu:

wid = QtGui.QWidget()

ppup = RightClickMenu(wid)
ppup.addAction('Testing')
ppup.addAction('Testing...')
ppup.addAction('123')

wid.show()

This might be common knowledge, but I couldn't find it out there before, so maybe it will help other Qt newbs like me who run into the same problem. And now I can have it sitting in a module, ready to go whenever I need it.

Pro users, if there's some red flag I'm not spotting, let me know as well.

Facetime (Prepare for some bad face puns)

Let's face it (rim-shot): rigging for mobile/tablet games is often (at least for now) an exercise in making things look as not-crappy as possible, in an age where people are used to seeing some pretty high-end real-time graphics.

 

While mobile GPU tech has made leaps and bounds in the last few years, raising poly counts and enabling high-res textures and normal maps, we're often stuck with PS1-to-PS2-level joint counts. This usually means a fairly hard limit of 45 skinned joints and 2 influences per vertex. After giving us just enough for a decent body, with a few fingers on each hand melted together to consolidate, that leaves very little to work with for any kind of facial animation.

 

Back in the day, when consoles and PCs were dealing with that sort of joint count, it didn't much matter that faces weren't moving. The characters all looked like their faces had gone through a belt sander, because that's all the hardware could process across the board graphically.

A tearful farewell moment from one of my favorite story-driven PS1 games. A graphical powerhouse at the time.

But now, because of the aforementioned GPU advances, we have some high-fidelity-looking characters with dead faces, not to mention modern consoles and PCs setting a benchmark. In Infinity Blade, for example, among a few other tricks, faces have been smartly covered up to make a fantastic-looking tablet game that doesn't run at 2 frames per second.

 

However, for Lexica, the game we were working on, we couldn't necessarily do that, as our game starred characters from classic literature. It might have come across as an odd choice to put a gas mask on Emma Woodhouse or a full-face helmet on Dr. Jekyll. Combine that with the fact that the game, and therefore a lot of the art, was initially planned for more of a top-down, far-away, Diablo-esque camera angle, but the repercussions of that are a whole other ball of wax. So what's a story-driven tablet game to do? The decision was made to simply go with joints for blinking and rely mostly on body animation (and no, there were no polys for a separate lid from an eyeball; it was simply grabbing verts above the eye texture space and flitting them down fast, squashing the eye texture space). If I had been there at the time (this call was made long before I joined the studio and project), I might have fought for at least one joint for eyebrows, but hindsight's 20/20, especially remote, disconnected hindsight. To the credit of our fantastic animator Kyle Kenworthy, it really did work pretty well in most cases. But there were a few characters modeled with a slight smile so as to seem pleasant who then sometimes came across as weird and/or disconnected from the dire situations unfolding around them, and a few really dramatic moments that just fell flat on their face (rim-shot) because of all our characters' blank stares. There was also no fantastic voice acting to lean on, as all dialogue was text. It was a game about reading, after all.

 

So in pre-production for our second season of the game, investigating animated faces was a high priority. First off, we had to figure out how many joints we wanted to add to the rigs, and how much we could add before our performance guy would beat us over the head with his chugging iPad.

 

So I took two characters at different ends of the stylization spectrum for our humanoids (Queequeg from Moby Dick and Maid Marian from Robin Hood) and quickly created four levels of facial rig detail, not including the blink joints: 10, 7, 5, and 3 joints. There are some physical-integrity issues, especially around the brows, that kind of make me cringe; I tried to be a little more careful with that in the actual implementation (more on that possibly in a later post), but sometimes the power's just not there for subtlety. Mobile games, brah.

 

10 – left, right, and mid eyebrows; left and right outer cheeks; left and right sneer/levator labii; left, right, and mid lips

 

7 – left and right eyebrows; left and right outer cheeks; left, right, and mid lips

 

5 – left and right eyebrows; 1 joint for both outer cheeks; left and right lips

 

3 – 1 joint for both eyebrows, 1 joint for both cheeks, 1 joint for both outer sides of lips

 

NOTE – In order to iterate faster when consolidating joints downward, in lieu of re-skinning to combine sides, I just parented the joints under a group and moved them as one, in effect making them move as one joint would. As such, showing the joints wouldn't be accurate to what's going on, just the controllers, which should be seen as visualizations of the joints. So it might be slightly hard to see what's going on visually.

So 10 obviously gave the most power and control, mixed with some nice subtlety, especially around the nasolabial folds.

 

7 saw only a little bit of a drop considering it was a 30% decrease, definitely phasing out the need for a mid-eyebrow joint and turning nasal movement into more of a luxury item than a necessity.

 

5 still looked pretty good, but there was a noticeable lack of control over the middle of the lips, because blending into a 50/50 weight split between the left and right sides meant any rotation from the sides would make the middle fly off (that 2-influence limit rearing its ugly head).

 

3 actually kind of surprised me. The difference between the 10-joint rig and the 3-joint rig really wasn't anywhere near as bad as I was secretly thinking it might be. The pitfalls were there, though. While I was pretty proud of my idea to switch the pivot from one side to the other to fake isolated side movement, the inability to ever counter-rotate the sides started to hurt, and there was no control over the middle of the mouth. Also, the big problem is having things move away from each other, as they would in a smile, for instance (both the sides of the lips and the cheeks). The best solution is scaling the joints, but a lot of engines don't support that: Unreal supports uniform scale (at least that was the case last time I used it), and Unity supports non-uniform scale on joints. We're using Unity, so there's that, but for the purposes of the test, I wanted to use only rotation and translation, so I faked it as much as I could by moving up and forward or backward.

 

Spurred on by the success of the 3-joint rig, I decided to do another quick test on Marian, seeing how much I could pull off using lots of middle joints that controlled both sides, but in a higher joint-count situation. It's a bit of an eyeful since it's all crammed in the middle; I actually moved some of the controllers out so I could tell what was going on while animating.

 

It's 7 joints – 1 for both brows, 1 for the middle-brow area, 1 for both outer cheeks, 1 for both sneer areas, 1 for the outer lips, and 1 for the middle of the mouth.

It's a little unorthodox, but I think it was pretty successful. I was especially a fan of the cheek and sneer joints, as I felt they brought a bit of cheap physical grounding to an area that doesn't need as much fine control as the eyebrows and lips.

 

After sending these around to the team, the 7-joint count was set. I was a big proponent of basically the first 7-joint setup, but replacing the 2 cheek joints with the cheek-and-sneer setup from the last Maid Marian test, as I thought it would look the best. I gave the animator both 7-joint setups to play with, but he worried about the lack of easy isolated control on the sides. Since he was going to be the one who would ultimately have to live with the decision, I quickly deferred to him on that one.

 

…Which kind of reminds me of a little philosophy thing, if I might digress from my already very long post (my bad). I was reading a fantastic article in a monthly email by Animation Mentor co-founder Shawn Kelly a few years back that always comes to mind in these situations. The post was simply titled “You Are A Tool.” Long story short (I think we're well beyond that with this post), he talks about how, in the end, animators are all tools of the director/project. While it's worth letting your opinion be known, your opinion can and probably will be superseded many times during your career. You'll work with people you disagree with, but the thing is, it's bigger than you and your ego. The fact is, while you're getting paid for your vision, in the end it's to push someone else's vision; if it's not their vision, it's wrong. That's kind of the hard truth of any business, which, in the end, is what this all is. We do it anyway, because it's still fun as hell. I'm kind of doing the article no justice, but while it was aimed at animators, it obviously applies to just about any job ever. And if the artist is the servant of the director/project, and the tech artist the servant of the artist, we're the servants of the servants. In the end, our job is to equip the artists to do their jobs as well as they can. We can let our opinions be known, but ego has no place when a call is made. We're tools who create tools for tools. The tools we create for those tools give them the tools to be better tools. Sorry, couldn't help myself.

 

Anyways, I hope you survived the post and that it was informative and helpful. Luckily for us, we had room in our hardware budget to improve by more than 3 joints, but for some projects, maybe 3 will be all that's actually available. In my next post on this subject, I'll talk about how some of the facial stuff was actually implemented based on all this experimentation.

Tools – Maya Asset Browser

Hey all,

It's been a while. I've been meaning to continue posting, but between traveling, work, and a general bit of laziness, I've been putting it off. Anyways, among other things, I want to start doing videos and posts about tools I've been working on, going as in-depth as I can with the approval of my bosses.

So in this video, I go into a Maya asset browser I made for Schell Games. It's written in PyMEL and uses XML to point to project files for use by artists and anyone else who wants them. It uses a couple of other tools I made while there that I may cover in future posts, including a toolset for saving key values for tool preferences and a set of tools to run Perforce commands and checks from Maya.

I hope this helps some people think of ways to streamline things for artists. If you have any questions, confusion, or suggestions on anything, let me know in the comments or by email.

 

Outline:

Video 1 – Me showing the tool in action.

Video 2 – Showing the way the files and things are set up and how the tool edits them as you work.

Video 3 – Showing the way some of the functionality works on a script level.

Character Modeling Course

So I normally plan on talking about rigging and tech art stuff, but I figured I'd show this other thing, because it's what I've been up to lately.

While at the CTN Expo in November, I dropped by the Gnomon booth to find out about some of the workshop videos and was lucky enough to win a free course at the Gnomon School as part of a raffle. I rarely ever win raffles, but I guess that just means when I win, I win big. I feel like my modeling skills have kind of faded over the last few years, so I decided to take a character modeling class.

It's fun so far. It's taught by a guy named Kevin Hudson (look him up, he's done everything), and it's fun to get back into modeling. It's also giving me an excuse to start learning ZBrush. It started with Maya for base mesh stuff, then covers the back-and-forth pipeline with ZBrush. Anyways, here's what I've been doing so far…

Week 1 Homework: Model something simple and fun in Maya. I found a concept of this guy in some tutorial on creating orthographic concepts in Photoshop.

Week 2 Homework: Model a head. I chose to do a couple of images of a guy Kevin used to work with. He had a big bushy mustache, so I kind of guessed at the upper (and even a bit of the lower) lip. Sorry, I won't post the images, out of respect for the guy's privacy, since I don't know him and can't ask.

Week 3-4 Homework: Model a full body for import into ZBrush. I kind of had to re-do the head, because I wanted a figure that would give me a chance to explore musculature. Luckily, I was able to just move points around on the head I did in week 2. I guess well-thought-out topology really does adapt easily. I've never really adjusted a “base model” myself before, so it was kind of cool and fun.

I'm currently working on the ZBrush bit now, my first real ZBrush project. Hopefully it won't be too cringe-worthy, since I do have Mudbox experience and there are similar skill sets involved.