Announcing Release 0.2.3
Hey all! I’m happy to announce the latest release of YerFace! Version 0.2.3 is now available, along with experimental binaries!
I would not consider this release to be “stable” (by whatever definition of “stable” you use), but it is definitely usable. In fact, we have been using it at a rapidly increasing pace over at Markley Bros. Entertainment to power all of our character animation.
Although we have cut other releases since the project started, and we have been providing automated master branch builds for a while now, this is the first release I felt warranted its own post on the YerFace! web site.
Why? Even though the semantic version number might not suggest a milestone, this release represents one nonetheless.
Over the past two years (since the inception of the project) we have:
- Made over 475 commits.
- With over 12,500 insertions.
- Affecting over 80 files.
But more importantly, we have:
- A working, markerless facial performance capture solution.
- Two different lip synchronization methods, both based on an audio analysis of the performance. (One prioritizes low latency, the other prioritizes quality.)
- Flexible audio and video capture using cross-platform libraries.
- Real-time transmission of event data over the network (using standard WebSockets for maximum compatibility) for preview and live production purposes. (See the sketch after this list for what a minimal client might look like.)
- The capability to write captured audio, video, and event metadata to the disk for future use and for replay purposes.
- Support for handling and passing through game controller events, so clients can translate those events to character motion, and performers can “puppet” their characters as they see fit.
- Extremely high performance, with frame rates exceeding 60 FPS on appropriate hardware.
- A preview window with adjustable levels of detail, allowing performers to see what they need at a glance.
- An experimental Blender plug-in supporting both real-time and keyframe animation.
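Because the event stream uses standard WebSockets, a preview or production tool written in almost any language can subscribe to it. Here is a minimal sketch of such a client in Python; the port, URI path, and the assumption that events arrive as JSON frames are illustrative guesses on my part, not the documented protocol.

```python
# Minimal sketch of a WebSocket preview client for a YerFace!-style event stream.
# Assumptions (not the documented protocol): the server listens on localhost:9002
# and each message is a JSON object describing one event/frame.
import asyncio
import json

import websockets  # third-party: pip install websockets


async def listen(uri: str = "ws://localhost:9002") -> None:
    async with websockets.connect(uri) as ws:
        async for message in ws:
            event = json.loads(message)
            # A real client would map these values onto a character rig;
            # here we simply print whatever arrives.
            print(event)


if __name__ == "__main__":
    asyncio.run(listen())
```

A downstream tool (such as the Blender plug-in) would replace the print call with logic that applies the incoming values to bones or shape keys, either live or by inserting keyframes.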
And all of this is only the beginning! We still plan on (in no particular order):
- Unit and integration testing, to better catch regressions in fixes and functionality.
- Blender 2.80+ support.
- Support for multiple WebSocket clients at once in the Blender plug-in. (Allowing for more than one character to be animated live at a time.)
- Blender plug-in CI/CD with tests and downloadable packages.
- Automated Windows builds (currently the process is semi-manual).
- Support for macOS? (Maybe.)
- Documenting the API in detail, so other projects can integrate YerFace! into their own use cases.
- Demo character rigs and documentation/training on how to use them.
- A robust GUI for folks who want to enjoy YerFace! but don’t want to deal with the command line.
- Developing the ability to train our own facial landmark neural network models. (The out-of-the-box model we’re currently using has specific weaknesses related to our use case, which we could address if we have enough help from the community!)
- Support for tracking and reporting eyeball direction.
As you can see, we have accomplished a lot, but there is plenty left to do. Are you willing to help? Pull requests are welcome!
Come see my talk at Ohio LinuxFest to learn more.
Thanks for reading and have a great day! –Alex