How to Layer Lipsync and Expression Animation

What is the recommended workflow or pipeline for getting a character to speak using FaceFX while also showing changing facial expressions like fear, anger, and love?

Would it be creating a new set of phonemes for each expression, or using one set of phonemes and handling the rest of the facial expression with separate blendshapes?

Is there any documentation or support on how to blend the lip sync and the expression changes in Unity, or is it all handled within the FaceFX software? If it can only be done in FaceFX, is there any documentation on how to handle those changes there? Thank you!

Check out text tags, and specifically emoticons, for an easy way to create simple emotional curves that synchronize with the text.

Let's say you have text like this:

I was :) feeling very happy :) But I :( just heard some terrible news :(

If you analyze with the Default analysis actor, you will get "happy" and "sad" curves in your animation. You will also get event stubs in the EmoticonEvents animation group that you can double-click to create custom animations fired by those events, if you want to get fancy (this is how the Analysis Actors/Default.facefx file does it). Events can be tricky, though, so for now let's just use the curves the Default analysis actor provides.
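FaceFX does this analysis for you, but if it helps to picture the result, here is a minimal sketch of the idea in Python. The token-to-curve mapping, the key timing, and all the names here are assumptions for illustration, not the actual analysis code:

```python
# Rough mental model of what analysis produces from emoticon tags (hypothetical,
# not FaceFX's real API): each emoticon pair brackets a span of text and
# contributes keys to an emotion curve that ramps in, holds, and ramps out.

EMOTICON_CURVES = {":)": "happy", ":(": "sad"}  # assumed mapping

def emoticons_to_curve_keys(tagged_tokens, ease=0.2):
    """tagged_tokens: list of (time_in_seconds, token) from analysis.
    Returns {curve_name: [(time, value), ...]} with simple ramp keys."""
    keys = {}
    open_spans = {}  # curve name -> time the opening emoticon appeared
    for t, token in tagged_tokens:
        curve = EMOTICON_CURVES.get(token)
        if curve is None:
            continue
        if curve not in open_spans:            # opening emoticon
            open_spans[curve] = t
        else:                                  # closing emoticon
            start = open_spans.pop(curve)
            keys.setdefault(curve, []).extend([
                (max(0.0, start - ease), 0.0), (start, 1.0),  # ramp in
                (t, 1.0), (t + ease, 0.0),                    # hold, ramp out
            ])
    return keys
```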

If you drive a "happy" bone pose with the "happy" curve directly, you might run into problems when the character speaks, especially on phonemes like P, B, M, F, and V, which are very sensitive. For this reason, many people create a "happy_upper" bone pose that only influences the eyes and eyebrows, and a "happy_lower" bone pose that drives the mouth. The two components can then be driven independently: you still have a "happy" combiner node in your Face Graph to receive the curve, and it drives the "happy_upper" and "happy_lower" targets with standard linear link functions.
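For a concrete picture of that wiring, here is a minimal sketch of a combiner node fanning out through two linear links. The evaluation model is heavily simplified; FaceFX runs its Face Graph internally, and these function names are illustrative only:

```python
# Simplified model of the Face Graph wiring described above.

def linear_link(value, m=1.0, b=0.0):
    """Standard linear link function: output = m * input + b."""
    return m * value + b

def evaluate_happy(happy_curve_value):
    """One 'happy' combiner node fans out to two bone-pose targets,
    so each half of the face can later be corrected independently."""
    return {
        "happy_upper": linear_link(happy_curve_value),  # eyes / eyebrows only
        "happy_lower": linear_link(happy_curve_value),  # mouth region
    }
```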

To fix a conflict with the PBM target, for example, create a "corrective" link from PBM to "happy_lower". Now when PBM and happy are both active, the PBM curve will drive down the happy_lower pose so it doesn't interfere with your lip sync.

Do this for the other speech targets until the character looks good speaking with that emotion curve active. You can adjust the "correction factor" in the corrective link function for a more subtle correction. Then repeat the whole process for sad, or for any other emotion that conflicts with the speech targets.
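If it helps to see the math, here is a rough sketch of what a corrective link does, with the correction factor exposed. The exact formula is an assumption; FaceFX's built-in corrective link function may differ in detail:

```python
# Sketch of a corrective link: the conflicting speech curve suppresses the
# emotion pose while it is active (formula assumed for illustration).

def corrective(pose_value, corrective_value, correction_factor=1.0):
    """correction_factor = 1.0 fully removes the pose at full PBM;
    smaller values give a more subtle correction."""
    suppression = 1.0 - correction_factor * corrective_value
    return pose_value * max(0.0, suppression)

# Example: happy is fully on and the character hits a P/B/M phoneme.
happy_lower = corrective(pose_value=1.0, corrective_value=1.0,
                         correction_factor=0.8)  # -> 0.2, mouth mostly freed
```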

To get the animation into Unity, you need to export a "collapsed" XML file. This bakes the curves out onto the targets so that Unity can play back the animation without having to run the Face Graph computations.
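Conceptually, collapsing just samples the Face Graph over time and writes the resulting per-target keys. A minimal sketch of that idea follows; the function names, sampling rate, and data layout are illustrative, not the actual exporter's format:

```python
# Sketch of "collapsing": sample the evaluated Face Graph at a fixed rate and
# record per-target keys, so the runtime only reads curves (no graph math).

def collapse(face_graph_eval, target_names, duration, fps=30):
    """face_graph_eval(time) -> {target_name: value} after running the graph.
    Returns baked {target_name: [(time, value), ...]} ready for playback."""
    baked = {name: [] for name in target_names}
    frames = int(duration * fps) + 1
    for i in range(frames):
        t = i / fps
        values = face_graph_eval(t)
        for name in target_names:
            baked[name].append((t, values.get(name, 0.0)))
    return baked
```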