Mesh becomes separated in FaceFX

Hello,

So I got all the animations working properly in FaceFX from Max, but when I import my character from Max into FaceFX, the eyes and mouth both become separated from the body (they are floating above the body quite a bit).

I don't know if it has something to do with the fact that I'm using a mixture of morph targets and bones to create the default poses, but I'm still trying to figure it out by comparing my model to the sample model (Exo) that comes with the no-save version. I've attached screenshots of both the FaceFX view and my schematic view from Max showing the bone hierarchy, in case someone can point out something I'm not doing or could try differently.

Thanks!


It looks like the bind pose is incorrect.

This could be an issue with the FaceFX FBX importer, or perhaps with the FBX file itself. Does the FBX file import correctly into the Autodesk FBX viewer? That's always a good test to run. Also, the first thing I try is to import the FBX file back into Max (or Maya), then re-export it. For some reason, the process of reading and writing the FBX file with Autodesk tools can auto-resolve some issues.
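If you'd rather script that round-trip than do it by hand in Max, the Autodesk FBX SDK's Python bindings can do the same read/write pass. A minimal sketch, assuming a recent FBX SDK with the fbx Python module installed (the file names here are placeholders):

import fbx

# Create the SDK manager and attach an IO settings object.
manager = fbx.FbxManager.Create()
manager.SetIOSettings(fbx.FbxIOSettings.Create(manager, fbx.IOSROOT))

# Read the original FBX file into a scene.
importer = fbx.FbxImporter.Create(manager, "")
if not importer.Initialize("character.fbx", -1, manager.GetIOSettings()):
    raise RuntimeError("Import failed: " + importer.GetStatus().GetErrorString())
scene = fbx.FbxScene.Create(manager, "scene")
importer.Import(scene)
importer.Destroy()

# Write the scene back out; this read/write pass is what tends to
# auto-resolve problems in the file.
exporter = fbx.FbxExporter.Create(manager, "")
if not exporter.Initialize("character_roundtrip.fbx", -1, manager.GetIOSettings()):
    raise RuntimeError("Export failed: " + exporter.GetStatus().GetErrorString())
exporter.Export(scene)
exporter.Destroy()
manager.Destroy()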

If the FBX file looks good and importing/exporting it doesn't resolve the issue, there is one thing you can try.

In the Studio\Scripts\FBXImporter\ogrefbxproxy.py file, you can insert the following line into the create_render_asset function at line 43 (make sure the indentation matches the lines above and below):

'-g', "0", # Use frame 0 as the bind pose.

Then restart FaceFX Studio. That will force the exporter to create the bind pose from the position of the bones at frame 0.
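For context: judging by the flag/value form of that line, create_render_asset presumably builds a command-line argument list for the internal converter, so the edit just adds one more flag/value pair. A rough sketch of where the line lands (the surrounding entries are illustrative placeholders, not the file's real contents; only the '-g' line is the actual addition):

args = [
    converter_exe,         # placeholder: path to the internal converter
    '-fbxfile', fbx_path,  # placeholder: the FBX file being imported
    '-g', "0",             # the added line: use frame 0 as the bind pose
]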

You can also look at the fbximporter log file in Documents\FaceFX Studio 2013\Logs for clues.

I would be interested in reproducing the problem here if it isn't an issue with the FBX file itself. Can you send the file to our support (at) oc3ent (dot) com email address?


Thanks Doug!! Reimporting and re-exporting the model from 3ds Max worked like a charm.

Another quick question while I have your attention....

So we're using Unity, and I know I'll have to create a facial rig so that the facial animations can be baked back to it (after we purchase FaceFX, that is). Is there a certain facial rig setup that you all recommend? Should I use the rig setup on the Exo sample as a guide?

Also, do you have any recommendations for tools to quickly skin the facial rig to the mesh? I know there are tons of scripts out there that people have written, but have you ever used any third-party software (BonesPro, etc.) that you could recommend? We only have two 3D artists on our small team, and we both handle modeling, texturing, rigging, and animation, so I'm always on the prowl for software that can expedite the tedious tasks in the art pipeline (which is why I'm so grateful I found you guys). You'll definitely be hearing from us in the next couple of months about purchasing FaceFX.

Thanks for all your help and advice.


Glad to hear that the reimporting trick worked!

I really don't have much I can recommend as far as rigging software or tips go. I would definitely look into buying pre-rigged content (from Mixamo or Rocketbox, for example), and you could also outsource the rigging since it can be tricky.

Exo is a great character from Mixamo, but as far as the rig is concerned, I prefer when the lower lip bones are parented to the jaw.


Doug,

So I showed the character with the voice-over to my boss and his boss, and they both liked it a lot; however, they had questions regarding the rest of the pipeline. Basically, say our character has a facial rig and we're ready to publish and bake down the animations to be used in Unity. I went into the docs and read the section on publishing content, but I guess until I purchase FaceFX Studio there's really no way to ascertain the time it will take to get the animations into Unity, is that correct? How are the animations usually divided up as far as being playable in Unity? Do people typically have a separate animation for every line of dialogue, or have you found that game studios take a character's entire conversation and divide that animation up in the game engine to better control when each piece of dialogue is said?

I guess I'm just looking to solidify how long of a process it is to bake down the animations and get them into Unity, ready to use on a character. Also, is it possible to bake animations from one model to another model in FaceFX?


The easiest way to get content into Unity is to export an FBX file from FaceFX Studio (you will need a license to enable the export FBX option).

As far as how to organize the animations, that really depends on your project's requirements. I would start with each audio file having its own animation. From inside Unity, you can tell when the audio has finished playing, and you can use that to trigger the next animation.

That's the way it is done in the Unity demo (which includes a sample project):
http://www.facefx.com/page/unity-demo


Many thanks for all your help, Doug! It looks like we'll be getting in touch over the next couple of weeks to get a license for FaceFX.