Is there a way to change the settings for the default generated emoticon curve? The default curve currently takes too much time to tween between emotion states. Also, is hand-editing this curve possible? No keys appear in the curve editor to be dragged/rotated, despite the curve being marked in the timeline.
Also worth noting: attempting to use the smooth curves function to edit the curve in editor also fails.
By "default generated emoticon curve" I'm assuming you mean one of the handful of special emoticons that the Default analysis actor converts into curves. For example the :) emoticon will create a "happy" curve.
All of this is configurable in the Default analysis actor, or in the Scripts/GestureLib.py file that generates it. For example, the blend-in and blend-out times of those curves are hard-coded to 0.2 seconds. To edit them, open the Events tab of the Default.facefx file's EmoticonEvents/:) animation and select the happy event to edit its properties.
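To see why a 0.2-second blend can feel sluggish, it helps to picture the blend as a weight envelope around the event. The sketch below is illustrative only: the `blend_in`/`blend_out` names and the linear ramps are my assumptions for demonstration, not FaceFX's actual interpolation.

```python
def emotion_envelope(t, start, end, blend_in=0.2, blend_out=0.2):
    """Weight of an emotion curve at time t for an event spanning
    [start, end], ramping linearly over blend_in/blend_out seconds.

    Hypothetical model for illustration; FaceFX's real curve
    generation may use different interpolation.
    """
    if t <= start or t >= end:
        return 0.0
    if t < start + blend_in:
        # Still ramping up toward full weight.
        return (t - start) / blend_in
    if t > end - blend_out:
        # Ramping back down to zero at the event's end.
        return (end - t) / blend_out
    return 1.0
```

Shortening `blend_in`/`blend_out` in a model like this is the analogue of lowering the hard-coded 0.2-second values on the event: the emotion reaches full strength sooner.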
You can't edit this curve because it is "owned by analysis," like the phoneme curves. It was created by the analysis process, which in some cases needs the ability to update the curve when a new analysis take is generated or when the phoneme times change. If you want to break that ownership, select the curve and uncheck its "owned by analysis" checkbox to expose and edit the underlying keys.
Yes, that was it! Is there a way to configure the emotion curves to span the entire animation including the silence before/after the phonemes?
Also, is it possible to have silence drive a face graph node?
Yes and yes. Both can be done with the appropriate analysis events in the analysis actor. _PhonemeEventGroup/SIL lets you add curves to silence, and _AnimEventGroup/Anim_Entire lets you add curves that span the entire animation. These events have a duration scale that matches their length, so curves keyed between time 0 and 1 will stretch to the length of the silence or animation.
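The duration-scale behavior is easy to reason about as a linear remap of normalized key times onto the event's actual span. This is a hypothetical helper for illustration (FaceFX performs this scaling internally as part of analysis):

```python
def scale_keys(keys, event_start, event_end):
    """Map keys authored on a normalized 0-1 timeline onto the actual
    span of an analysis event (e.g. a SIL phoneme or Anim_Entire).

    keys: list of (time, value) pairs with time in [0, 1].
    Returns keys with times remapped into [event_start, event_end].
    """
    duration = event_end - event_start
    return [(event_start + t * duration, v) for t, v in keys]
```

For example, a triangular curve keyed at 0, 0.5, and 1 that lands on a three-second silence from 2.0s to 5.0s would peak at 3.5s: `scale_keys([(0.0, 0.0), (0.5, 1.0), (1.0, 0.0)], 2.0, 5.0)`.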
Working with analysis actors can be tricky, so keep your changes surgical. I'd also encourage you to script your modifications using Scripts/GestureLib.py as a base; that way you can comment your work and trace bugs back to the change that introduced them. Scripting also lets you easily rerun the script on your character's actor file, re-analyze with the analysis actor, and trace all of the generated animation back to the source events when debugging issues.
Finally, beware of trying to improve lip-sync by adding curves to silence. If you aren't getting the results you want, let us know in an email and we can help troubleshoot the issue.