Is it possible with FaceFX to set up just one base animation that receives audio at runtime and dynamically adapts to it?
We have hundreds of characters and audio files, and creating an animation for each seems like overkill.
The FaceFX Runtime (and FaceFX in general) assumes that you have one animation per audio file. Multiple different characters can play that same animation but the FaceFX Runtime requires that the Face Graphs for each character are identical except for the bone poses.
In the system you describe, would audio (and text) be dynamically analyzed? There are several problems with that approach: performance, portability, and licensing. We do not port our audio-analysis software to console platforms because it is far better from a performance standpoint to pre-analyze audio files, generate the small animation data file, and then play that animation data file at runtime.
If the system you have in mind uses pre-recorded and pre-analyzed audio and animation data files but just assembles them at runtime, then I'm not entirely clear on how it differs from the current system. Perhaps you could elaborate on what you are looking for.