A Way to Affect the Importance of Specific Phonemes?

Hi there. I was wondering if there is any way to adjust how important certain phonemes are in FaceFX. For example, I have a character saying part of a line, "One sec, almost done." She opens her mouth on "One", and then her jaw stays open in the exact same position and all she does is tongue-flap until she gets to the "m" in "almost". Obviously, this looks really odd with her not closing her jaw for the "N" and "S" sounds before that. This is what the program maps for the phonemes: "W-AH-N-S-EH-K [SIL] AA-L-M-O-S"

This is not the only place this happens, but it happens especially often during the first second of most dialogue lines. When I look at the curves, there aren't any keys on the "open" curve between "W-AH" and "M", but in my mapping for the "N" and "S" phonemes (which show up as acknowledged in the phoneme tab), open is set to 0.00. I'm guessing this has to do with the overlapping of curves for certain sounds, but I would like the software to prioritize sounds where the jaw needs to close to make a consonant, if possible. This happens whether I import the audio with accompanying text or without.

I know that I'm able to go in and manually edit these curves, but as I am working on a game with ~10,000 lines of dialogue, that isn't logistically feasible for us. So I'm wondering whether there is any way to change these defaults that I haven't discovered yet, or if this is just something the system doesn't allow. I would also like to be able to control how early she forms a phoneme shape before speaking the next word. When there is a pause in the audio, the character slowly starts forming the next phoneme long before the next word actually begins.

Thank you for your time. Please let me know if any additional information is needed.

Unfortunately, there isn't a way to tweak FaceFX results to make some phonemes more important than others. Our algorithm already treats some phonemes as more important than others automatically (more on this below), but there is no easy way to adjust this.

I believe what you are seeing with the "N" and "S" phonemes is our "New Coarticulation" algorithm trying to solve the problem of determining which phonemes are the most important. "New Coarticulation" can be turned off from the Tools->Application Options menu, or it can be turned off for individual files/folders via .fxanalysis-config files. Turning it off will let you see the results of each phoneme on the output curves more clearly, but there will be more movement overall, because the new coarticulation algorithm attempts to prevent unimportant phonemes from influencing the mouth and jaw while still letting them influence the tongue. The "N" phoneme is almost always removed, since it can easily be pronounced from any mouth shape except a fully closed one.

While editing curves can be very tedious, you might find that you get better results editing the text for an animation. If the voice artist didn't enunciate a word, it is much better to type the text as it was actually said, and in some cases typing even more abbreviated text improves results. If this doesn't give you the results you are looking for, you can also remove a few phonemes from the phoneme bar to achieve smoother results. Not ideal when you have a lot of lines, but it is far faster than editing curves and can sometimes greatly improve a troublesome stretch of speech.

Thanks a lot for the quick response! I will try messing around with the coarticulation setting and see what results I get. Perhaps that, combined with tweaking some settings in the mapping, will give me more desirable results. I am actually supplying text with the audio for enunciation but am still getting these results. I have also done a bit of deleting/moving phonemes, which is definitely faster than editing curves as you say, but it's not ideal given how many lines we have and how few animators are on the team. :)

After you change the coarticulation setting, you need to analyze a new file, or at least move a phoneme boundary on an existing file for the new setting to take effect.

Thank you! I got the setting working on an individual-animation basis and am trying it out on a few lines in our game. However, it doesn't look like our batch-exporting tool acknowledges that setting. We are using FaceFX Python commands to batch export/process all of our files; is the "use new coarticulation" setting accessible anywhere in the Python commands exposed by FaceFX?

The "use new coarticulation" option is a machine-wide setting that you need to set at least once on the machine doing the analysis. It survives a restart of the application.

The .fxanalysis-config files are the only way to control this setting such that the preference can be checked into source control alongside your audio and text.
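For the batch-processing side of this, here is a minimal sketch of the kind of driver script that could feed per-line analysis work from a folder of audio/transcript pairs. Everything FaceFX-specific is an assumption: the `analyze -audio ... -text ...` command syntax is hypothetical and should be checked against the command reference for your FaceFX version, and the comment about handing strings to the Studio scripting layer assumes the usual in-Studio Python entry point. Run outside Studio, the script only builds the command strings, so the pairing logic itself can be tested.

```python
import os

def build_analyze_commands(audio_dir):
    """Walk a folder of .wav files and build one analysis command string
    per file, attaching a matching .txt transcript when one exists.
    NOTE: the 'analyze -audio ... -text ...' syntax below is a
    hypothetical illustration, not confirmed FaceFX command syntax."""
    commands = []
    for name in sorted(os.listdir(audio_dir)):
        if not name.lower().endswith(".wav"):
            continue
        wav = os.path.join(audio_dir, name)
        txt = os.path.splitext(wav)[0] + ".txt"
        cmd = 'analyze -audio "%s"' % wav
        if os.path.exists(txt):
            # Supplying the transcript generally improves phoneme detection.
            cmd += ' -text "%s"' % txt
        commands.append(cmd)
    return commands

# Inside FaceFX Studio each string would be handed to the scripting
# layer; outside Studio this module only builds the strings.
```

Because the coarticulation preference is machine-wide (or supplied per folder via .fxanalysis-config files checked in next to the audio), a driver like this doesn't need to pass it per command; it just needs to run on a machine where the preference has been set once.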