FaceFX acting weird

Hi,

I'm posting on the forum today because I'm struggling with FaceFX. I'm running tests to see whether FaceFX could become my primary software for automated lip sync, but so far I'm disappointed.

I have a sentence that contains the word "you" twice, so the phoneme UW appears twice, but I can't figure out why I don't get the same shape both times. Why is the result so different? How can I do lip sync if I can't predict what will appear on screen? (Check the attachment to see the difference between the two "you"s in the same sentence.)

I don't know if I'm doing something wrong, but I've checked everything and I can't figure it out.

If you have an idea or if you're struggling with the same problem, let me know.

Permalink

It may be because you're using "u" rather than "you" in the text. Can you try analyzing the same file using "you" instead of "u"?

Are you using the default FaceFX phoneme mapping or are you using a custom phoneme mapping?

It's hard to tell without having the audio and text you're using. Maybe you can attach those, too (or email them to support {at} oc3ent.com).

- Jamie

Permalink

It's hard to tell exactly what's going on without the source assets. Can you send along your audio/text and FaceFX actor to support {at} oc3ent.com so we can help you get good results with FaceFX?

Permalink

The thing is, why does it work for one UW and not for the other? I also made my own mapping, but in fact the problem comes from the curves. My good U shape and my bad U shape don't have the same keys. It looks random. I can't figure out how FaceFX decides where to set keys. I mean, I have the same shape twice, so why don't I have the same keys on the curves twice?

You can find my audio and text files in my first post.

I also sent an email to the support team.

Thanks and have a nice day.

Permalink

The biggest issue I can see is with the text file. The phrase was repeated in the text file twice, so the recognizer was thrown off pretty badly. I cleaned up the text file and put in a few (optional) time hints and saw dramatically improved results on the default mapping.
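For example (hypothetically, since I'm not reproducing your actual sentence here), if the audio says the sentence once but the text file reads:

    You know what you did. You know what you did.

the recognizer will try to align both copies against the audio and drift badly. The cleaned-up file should contain the sentence exactly once:

    You know what you did.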

Also, while I don't have the mesh and the targets to see the custom mapping in action, it seems to over-map many phonemes. I can't figure out, for example, why T is mapped to Silence, MBP, and Th. If you look at our default mapping, T only drives open and tRoof (a tongue target). Other phonemes have the same possible issue. Even if the text file mostly fixes the sync problems, I'd suggest spending a bit of time paring down the mapping. This will pay big dividends if you later choose to hand-tweak some animations: you'll have far fewer keys to deal with.
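As a rough illustration (I can't see the weights in your custom mapping, so the "before" line is partial), the paring down would look something like:

    Custom (over-mapped):  T -> Silence, MBP, Th, ...
    Default (pared down):  T -> open, tRoof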

It's worth noting that mapping to silence is generally a Bad Idea. Silence has special status in the coarticulation algorithm, and while you can technically map to it, there's usually no reason to do so, since silence will produce the neutral pose (which should be modeled with a closed mouth).
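If it helps build intuition about the "same shape, different keys" question, here is a toy sketch in Python. This is not FaceFX's actual coarticulation algorithm, and the phoneme names, target names, and weights are invented for the example. The point is only that each phoneme's keys get blended with its neighbors', so the same phoneme in two different contexts produces two different sets of keys:

    # Toy illustration, NOT FaceFX's actual algorithm: a simple
    # coarticulation pass that blends each phoneme's target weights
    # with its neighbors'. All names and weights below are made up.

    MAPPING = {
        "UW":  {"narrow": 1.0},
        "T":   {"open": 0.3, "tRoof": 1.0},
        "SIL": {},  # silence drives nothing: the neutral (closed-mouth) pose
    }

    def coarticulate(phonemes, spread=0.35):
        """Return per-phoneme curve keys after blending with neighbors."""
        keys = []
        for i, ph in enumerate(phonemes):
            if ph == "SIL":
                keys.append((ph, {}))  # silence is special-cased, never blended
                continue
            blended = dict(MAPPING.get(ph, {}))
            for j in (i - 1, i + 1):  # look one phoneme to each side
                if 0 <= j < len(phonemes):
                    for target, w in MAPPING.get(phonemes[j], {}).items():
                        blended[target] = blended.get(target, 0.0) + spread * w
            keys.append((ph, blended))
        return keys

    # The two UW phonemes get different keys because their neighbors differ:
    for ph, k in coarticulate(["SIL", "UW", "T", "UW", "T"]):
        print(ph, k)

Because the two UW phonemes sit next to different neighbors, they receive different blended key values even though the mapping entry for UW is identical. That's the behavior described earlier in the thread, and it's expected rather than random.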