Would it be possible to create a dictionary for FaceFX to refer to when we group phonemes into words?

Hey guys, I had an idea for FaceFX that would make tweaking and polishing a lot faster and easier. I've been keeping a dictionary that I refer back to while tweaking and polishing characters' FaceFX, and an idea hit me. My workflow usually consists of grouping the phonemes into words and then changing the phonemes if I need to. I've noticed that when you process a file, FaceFX usually gets the number of phonemes right; it just mixes up b with p or s with z, or puts in some randomness that makes no sense. Don't get me wrong by any means, I love that FaceFX gets you to 80 percent done fast, but that last 20 percent takes forever.

What if FaceFX could refer to a dictionary that we, the artists, could update? When we group phonemes into a word and the number of phonemes matches an entry in the dictionary, FaceFX would automatically swap them in for us, so the artist only has to adjust the length of a phoneme here or there. The beauty of this is that most of the work would already be done. Anything that doesn't match could change color so the FaceFX artist knows to check it. I find I'm fixing the same words all the time, which I have no problem with, but if I could tell FaceFX what word a group of phonemes is and it could swap them out for me, I could spend my time adjusting them to match the audio. Overall it would make the process a whole lot faster and more efficient: with games getting longer and filling up with more and more audio, we artists need a faster way to polish.
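To make the idea concrete, here's a minimal sketch of the kind of lookup I mean (Python, purely illustrative; the dictionary contents and function name are made up, and none of this is anything FaceFX actually exposes):

    # Hypothetical sketch: a word maps to one or more candidate phoneme
    # spellings, and a grouped run of phonemes only gets swapped in when
    # its length matches a candidate, so the existing timings can be kept.
    PHONEME_DICT = {
        "go": [["g", "ao"], ["g", "o"]],
        "run": [["r", "ah", "n"]],
    }

    def candidates_for(word, grouped_phonemes):
        """Return dictionary spellings whose phoneme count matches the group."""
        return [spelling
                for spelling in PHONEME_DICT.get(word.lower(), [])
                if len(spelling) == len(grouped_phonemes)]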

I also think the artist should be able to adjust the dictionary, because the words a game like Gears of War uses all the time are different from those in L.A. Noire, and a fantasy game might make up all kinds of weird words. Say someone makes a Star Trek game: they could load a Klingon dictionary into their FaceFX dictionary and be good to go. Likewise, a game set in the South, where people say things differently than usual, would want its dictionary adjusted accordingly, with some Southern pronunciations added for existing words. Ultimately it would give the artist the flexibility to adapt an already powerful tool to their own needs and make it even more powerful.

Let me give you an example. Say the audio says "go run", and FaceFX gives me jh uh eh exr ao m, which is close, but if you're watching it, it doesn't look exactly right. I select the jh and uh and hit Ctrl+G; the group window comes up and I type "go". In my dictionary, "go"'s phoneme spellings are g ao and g o, so the program looks at the dictionary, sees that one matches (maybe giving me a choice between the two), and changes it for me. You just saved me 20 to 30 seconds of changing phonemes. THAT'S HUGE. Now I only have to group exr ao m into the word "run", which might be r ah n, then Ctrl+D to change the eh into silence, play it to make sure it looks right, and I'm off to the next animation. All together you might only shave a minute off because it's a short phrase, but my project has thousands and thousands of audio files, a lot of them much bigger than that. With something like this, one person could polish all the audio for a game fast, versus needing a bunch of people polishing the same material.
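Continuing the sketch above, the "go run" example would play out something like this:

    # FaceFX produced "jh uh" for the word "go"; the lookup offers the two
    # dictionary spellings with the same phoneme count, and the artist picks one.
    print(candidates_for("go", ["jh", "uh"]))        # [['g', 'ao'], ['g', 'o']]
    print(candidates_for("run", ["exr", "ao", "m"])) # [['r', 'ah', 'n']]

Only spellings whose phoneme count matches the group come back, so the existing timings could stay in place and just be relabeled.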

Thanks for your time.

Richard

Are you using text when you analyze? When you select an audio file you should also select the text file, or type the text that's in the audio file; the phonetic transcription when you use text is *much* better. From what you're saying, it sounds like you are analyzing without text and then *manually* grouping the phonemes into words in the phoneme bar. That's not the way to work with the software. In the 2012 version, due at the end of July, we do have the ability to alter dictionaries and to have custom dictionary files in Studio Professional, but you must be using text-based analysis to do this.
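For what it's worth, pronunciation dictionaries of this kind generally pair a word with a phoneme spelling, one entry per line, CMU-dict style; the exact format of the Studio Professional custom dictionary files may differ, so treat these entries as illustrative only:

    GO       G OW
    RUN      R AH N
    KLINGON  K L IH NG G AA N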


So, I've looked at the SoundCues involved. The audio cues I was working with in Gears 3 did not have text with them, which explains why none of the phonemes were grouped. I'm working on DLC now, and I used (analyze -package "soundCueName" -group "faceFxAssetName" -gestureconfig "fast") to process all the SoundCues at once for each asset. So here's my question: where is the text supposed to live when someone processes or batches a FaceFX asset? Is it supposed to be in the SoundCue, or somewhere else? I ask because this time around the SoundCues do have text in them, and it's so much more accurate. You don't understand how much less work it is. I LOVE FACEFX. But it's not consistent. Right now the correct text for each SoundCue lives, if you open each SoundCue's properties, under the TTS tab in Spoken Text, and also under Subtitles > [0] (text="insert text here",time=0.000000). For each character, I have one package where it works great and two that don't. The weird thing is that all the SoundCues have the correct text in both places, and there is no difference between the ones that work and the ones that don't. Do you have any ideas why it would work for one and not for the others?

Thanks, guys! :D

Richard


In UE3 the text always comes from the SpokenText field. That has always just worked, so if it's not working for specific packages, I don't know what it could be; you'll need a programmer to step into the code and see what it's doing.
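In the meantime, a rough sanity check you (or a programmer) could run is to dump the relevant fields for every SoundCue and diff the working packages against the broken ones. A hypothetical sketch in Python, assuming the cues have been exported to a CSV; the file name and column names (Package, Cue, SpokenText, SubtitleText) are made up for the example:

    import csv

    # Flag cues whose SpokenText is empty or disagrees with the subtitle text,
    # since those are the two places the text is expected to live.
    with open("cues.csv", newline="") as f:
        for row in csv.DictReader(f):
            spoken = row["SpokenText"].strip()
            subtitle = row["SubtitleText"].strip()
            if not spoken:
                print(row["Package"], row["Cue"], "missing SpokenText")
            elif spoken != subtitle:
                print(row["Package"], row["Cue"], "SpokenText and subtitle disagree")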

In the future, please keep Unreal support questions on UDN or, in the case of Epic direct support, use our Epic support email alias. Thanks!