Lip syncing for an Evolver character using the FaceFX plugin for Maya

Hi, everyone

I downloaded an Evolver character with a facial bone rig. Now I have to add lip syncing to that character and import it into Unity. I'm new to FaceFX. Is it possible to generate lip syncing with just the Maya plugin? FaceFX Studio Professional is too expensive...

I followed the steps in the video tutorial "FaceFX Exporter for Maya" but still can't get it working... Based on what I understand so far, you first select the bones that FaceFX is going to control, then export different bone poses to FaceFX, and then you hit Generate Animation, and the animation is generated automatically from the audio file?

Another question: when we create the bone poses in Maya, at which frames should the different bone poses be placed?

Thanks in advance!

For the Unity pipeline using Maya and the plugins, we have a dedicated video tutorial:

http://facefx.com/content/facefx-evaluation-unity-maya-evolver

That should get you up and running, but I'll answer your questions anyway for anyone looking for enlightenment.

1) It is possible to generate lip syncing with just the Maya plugin. The video tutorial demonstrates it.

2) That pipeline is correct. First you export a reference pose, then you export the bone poses, and then you can generate an animation, which creates curves. If you named your bone poses correctly, the curves will drive the bone poses you created. For more complicated setups you need connections in your Face Graph; you can use FaceFX Studio Free for that. (There's a rough sketch of the pose staging after question 3 below.)

3) "When we are creating the bone poses in Maya, which frames should different bone poses be at, respectively?"

That's up to you. When you "Batch Export", you can select a text file that provides the frame numbers and bone pose names, so if you open the Batch Export file in Studio/Samples/Src you can follow that system. But if you are using Unity and Evolver, you don't need to set up bone poses one at a time: the Unity integration can "transfer bone poses" from one Evolver character to another (assuming they have the same skeleton... Darwin Default with facial bone rig for Jake). This happens when you import the Actor XML file.
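
To make 2) concrete, here is a rough Maya Python sketch of the staging idea: one frame per bone pose, with frame 0 holding the reference pose. This is not the FaceFX plugin API; the joint names, frame numbers, and pose names are hypothetical stand-ins, and the actual pose export still happens through the FaceFX exporter.

```python
import maya.cmds as cmds

# One frame per bone pose; frame 0 holds the neutral/reference pose.
# Frame numbers and pose names are arbitrary examples.
POSE_FRAMES = {0: "ReferencePose", 10: "open", 20: "W", 30: "ShCh"}

# Hypothetical joint names -- substitute the facial bones FaceFX will drive.
face_bones = ["Jaw", "LipCorner_L", "LipCorner_R"]

for frame in sorted(POSE_FRAMES):
    cmds.currentTime(frame)
    # ...pose the bones by hand or by script at this frame...
    cmds.setKeyframe(face_bones)  # lock the pose onto its frame
    print("Frame %d staged for pose '%s'" % (frame, POSE_FRAMES[frame]))
```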
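
And for 3), a sketch of reading a batch-export-style file so you can step through the listed frames. I'm assuming a simple "frame poseName" per-line layout, which may not match the real Slade Batch Export.txt exactly; check the file in Studio/Samples/Src for the actual format. The path here is a placeholder.

```python
import maya.cmds as cmds

def read_pose_frames(path):
    """Parse lines like '10 open' into (frame, poseName) pairs."""
    poses = []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) >= 2 and parts[0].isdigit():
                poses.append((int(parts[0]), parts[1]))
    return poses

# Jump through each listed frame to eyeball the pose before exporting.
for frame, pose in read_pose_frames("C:/FaceFX/Slade Batch Export.txt"):
    cmds.currentTime(frame)
    print("Frame %d -> pose '%s'" % (frame, pose))
```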

Thanks for the reply. When you export the bone poses, what is the rule for naming the nodes? In the video tutorial, some nodes are named like phonemes, e.g. "ShCh" and "W", and some are not. Do we have to follow the naming used in the sample file "Slade Batch Export.txt"?

In the simplest case, name your bone poses after the curves in your mapping: open, W, ShCh, PBM, FV, wide. That's all you need for a basic talking character, because the curves created by the default mapping will drive those nodes by name.
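
To make the name matching concrete, here is a tiny sketch (plain Python, nothing FaceFX-specific) that flags poses whose names won't line up with the default mapping curves. I'm assuming the match is exact and case-sensitive; treat that as an assumption.

```python
# Curves created by the default mapping.
DEFAULT_MAPPING_CURVES = {"open", "W", "ShCh", "PBM", "FV", "wide"}

# The bone poses you plan to export (hypothetical example list).
my_bone_poses = {"open", "W", "ShCh", "PBM", "FV", "Wide"}

# Poses with no matching curve won't be driven by the default mapping.
for pose in sorted(my_bone_poses - DEFAULT_MAPPING_CURVES):
    print("No default-mapping curve named '%s' -- check spelling/case" % pose)
# This flags 'Wide', since the mapping curve is lowercase 'wide'.
```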

The sample content in FaceFX 2012 uses a slightly more complicated setup, because we use the Normalized Power curve to amplify the mouth. So if you use the poses exactly as they appear in the Batch Export file, you will also need to copy the Face Graph, using a template file.

The sample content also has speech gestures set up, so the next most important targets are the ones that rotate the head and eyes and the ones that blink and squint. Some Face Graph setup is also required for gestures.

If you don't want to copy the entire Slade setup, open the Slade-Face-Graph-Setup.fxl file in a text editor and copy only the commands you want into a new file for setting up your character.
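
If you want to script that copy-and-paste, a rough helper is below. It assumes the .fxl file is plain text with one command per line (consistent with the "open it in a text editor" advice above); the keywords and file names are placeholders.

```python
# Keep only the commands that mention the parts of the setup you want.
KEEP = ("Normalized Power", "Blink", "Squint")  # placeholder names

with open("Slade-Face-Graph-Setup.fxl") as src, \
     open("My-Face-Graph-Setup.fxl", "w") as dst:
    for line in src:
        if any(key in line for key in KEEP):
            dst.write(line)
```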