This production will have a very different emphasis from previous Blender Open Movie projects, in that we are focused primarily on the storytelling and are working in a longer format (the pilot is about an hour; most episodes will be 30 minutes long). Thus, the emphasis for Lunatics will be on rapid workflow and on reusing material to get the most out of our animation and model assets, and less on extremely intricate models or photorealistic rendering.
I hope to use motion capture data and digital puppetry techniques to provide "performance-based"/"near real-time" animation. There will also be extensive physical simulation and path-based animation in this project, as we also need to simulate a lot of space hardware (this is particularly true of the pilot episode).
A bit of calculation based on the animation budgets for the Blender Open Movies ("Sintel", "Big Buck Bunny", "Elephants Dream") shows that the frame-by-frame animation techniques used in those films (and for which Blender is optimized) are simply too expensive for "Lunatics", where we plan to have 30-45 minute full-length episodes.
We'd need to raise something like $1.5 to $2 million an episode to do it that way, and I really don't think that's going to happen. Nor am I going to be able to convince animators to do that amount of work for free or on speculation of getting enough money from profit-sharing to justify their investment of time.
If that were the whole story, I'd probably have to drop the project. Fortunately, though, it's not.
Of course, not all animation is done by painstaking frame-by-frame adjustment. In 3D video games, for example, the animation is typically patched together by an artificial intelligence program from individual motions, most of which were originally recorded with motion capture. The same kinds of methods can be applied to creating animation for stories, and we are going to try to use them to keep our animation budget manageable (ideally it will be low enough that we _can_ do it on a spare-time basis and get paid on speculation from profit-sharing -- failing that, we'd at least be able to keep costs low enough to handle with a small fund-raising campaign for each "block" or "volume" of episodes).
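To make the clip-stitching idea concrete, here is a minimal Python sketch of how a game engine might join two captured motions: the tail of one clip is crossfaded into the head of the next over a short overlap window. All names and data here are invented for illustration, not taken from any real engine or from the Lunatics pipeline.

```python
# Sketch of game-style motion stitching: two motion clips (lists of
# per-frame joint-value lists) are joined by linearly crossfading the
# last `overlap` frames of clip A into the first frames of clip B.

def crossfade(clip_a, clip_b, overlap):
    """Return a single clip: A's body, a blended seam, then B's tail."""
    body = clip_a[:-overlap]
    seam = []
    for i in range(overlap):
        t = (i + 1) / (overlap + 1)  # blend weight ramps from 0 toward 1
        fa = clip_a[len(clip_a) - overlap + i]
        fb = clip_b[i]
        seam.append([(1 - t) * a + t * b for a, b in zip(fa, fb)])
    return body + seam + clip_b[overlap:]

# Two tiny "captured" clips with two joint channels each (hypothetical data):
walk = [[0.0, 10.0], [1.0, 12.0], [2.0, 14.0], [3.0, 16.0]]
turn = [[3.0, 16.0], [3.5, 20.0], [4.0, 24.0]]
stitched = crossfade(walk, turn, overlap=2)
```

The same idea scales up: a real engine blends full joint hierarchies and picks overlap points where the poses already roughly match, so the seam is invisible.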
This is probably the highest-risk issue with the production plan right now, so I'm giving it a lot of attention.
First of all, I've broken this down into three main techniques:
Machinima, broadly, is the practice of using engines designed for video games to animate dramatic presentations. It started as simple screen captures of scenes played out within games, but has evolved to include engines optimized for machinima production. More narrowly, I would say that machinima is a technique that draws heavily on simulated physics and artificial intelligence to have characters move on their own, with relatively little guidance from the animator. By that definition, even high-end systems like "Massive" (used by Peter Jackson for the big battle scenes in the "Lord of the Rings" movies) should be considered examples of "machinima". In Blender, there are some techniques that use physics and AI simulations to generate animation, as illustrated by this video:
This particular approach is probably the way to go for exterior mechanical shots, like driving a rover or flying a spacecraft. It might also be used to convert simple movement commands into actual walking patterns for characters' legs (although patching together motion capture material is another way to do that).
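As a toy illustration of this kind of simulation-driven animation -- a sketch, not the actual Lunatics pipeline -- here is a simple "seek" steering behaviour in Python: the rover is given waypoints, moves itself toward each one, and records one position per frame. Those positions could then be keyed onto an object instead of posing it by hand.

```python
# Illustrative "seek" steering: the rover steers toward each waypoint in
# turn, emitting one (x, y) position per frame. All names are hypothetical.

def drive_rover(start, waypoints, speed=1.0, tolerance=0.5):
    """Return a list of (x, y) keyframe positions, one per frame."""
    x, y = start
    frames = [(x, y)]
    for wx, wy in waypoints:
        while True:
            dx, dy = wx - x, wy - y
            dist = (dx * dx + dy * dy) ** 0.5
            if dist < tolerance:
                break  # close enough; move on to the next waypoint
            step = min(speed, dist)  # don't overshoot the target
            x += dx / dist * step
            y += dy / dist * step
            frames.append((x, y))
    return frames

# Drive east 5 units, then north 3 units:
path = drive_rover((0.0, 0.0), [(5.0, 0.0), (5.0, 3.0)])
```

The animator's input shrinks to a handful of waypoints and a speed; the simulation fills in every in-between frame.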
Another possibility would be to use some existing machinima engine, possibly the open source Open Simulator package, derived from Second Life, which is already used by a modest-sized machinima community.
Sometimes, though, you want finer control, which is easier to achieve without AI or simulation in the loop. Instead, you directly control the character using game-style control surfaces. This example video is based on an early version of Pyppet. An updated version is in the works, and funding the development and adaptation of Pyppet may well be part of the production for "Lunatics" (worth the cost, considering how much it could save relative to frame-based animation). The author, "HartsAntler", has already expressed some interest in this:
Unlike digital puppetry, which requires the operator to learn the rig and possibly control the character in artificial, learned ways, motion capture attempts to capture an actor's performance directly and allow it to be mapped onto a CG character. This is a fairly hard problem, but there are a couple of possible solutions.
Here's a Blender example using some motion capture data available from online archives:
This example of facial capture in Blender is from "Monet", a non-real-time system created by Mark Kane using Python scripting in Blender. Mark has also expressed some interest in updating and improving his code:
Honestly, I don't know which techniques will work best in Lunatics, whether a mixture will be better than sticking to one technique, or exactly how it will "feel" on screen. I do think these techniques will affect the visual style of the show, and I think I'm okay with that. Not only will they save money, but they can also lend a degree of spontaneity to the production that is difficult to achieve with traditional animation. Another advantage is that these techniques may be easier to learn, making animation a less esoteric skill.
For us to use these techniques, we're going to have to get the software to a level of stability sufficient for production, and we need to make the different packages work with each other in consistent ways and work with our models and rigs. For example, it's going to be fairly awkward if our facial capture software works entirely with "shape keys" but our digital puppetry software is based entirely on armatures. So we pretty much have to work out a single rig that can work with all the techniques if we want to be able to use them in combination -- which would allow for the greatest artistic flexibility. We'd also like to be able to fall back on frame-by-frame tweaking when the situation calls for it.
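As a rough sketch of what that unified control scheme might look like (every name below is hypothetical, and real rigs would use Blender's driver system rather than plain Python): a single control channel fans out to both a shape-key weight and a bone rotation, so puppetry, mocap, and hand keying can all write to the same controls.

```python
# Hypothetical "single rig" fan-out: one control value drives several
# targets of different kinds (shape keys and bones), mimicking what a
# Blender driver setup would compute.

def apply_controls(controls, rig_map):
    """controls: {control_name: value in 0..1}
    rig_map: {control_name: [(target_kind, target_name, scale), ...]}
    Returns a flat {(target_kind, target_name): value} mapping."""
    targets = {}
    for name, value in controls.items():
        for kind, target, scale in rig_map.get(name, []):
            targets[(kind, target)] = value * scale
    return targets

# One jaw control drives both a shape key and a bone rotation:
rig_map = {
    "jaw_open": [("shape_key", "MouthOpen", 1.0),
                 ("bone_rot_x", "Jaw", 0.4)],
}
out = apply_controls({"jaw_open": 0.5}, rig_map)
```

Whether the value comes from a puppetry controller, a mocap channel, or a hand-set keyframe, the rig sees the same "jaw_open" control -- which is the whole point of sharing one rig across techniques.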
Motion capture generally works with "BVH" motion data -- so that's another element we need to be able to include.
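For illustration, here is a minimal Python reader for the MOTION section of a BVH file, just to show the shape of the data we would be importing. A real importer (Blender ships with one) also parses the HIERARCHY section into a joint tree; the sample data below is invented.

```python
# Minimal reader for the MOTION section of a BVH file: a frame count,
# a frame time, then one whitespace-separated row of channel values
# per frame.

def read_bvh_motion(text):
    lines = [ln.strip() for ln in text.splitlines() if ln.strip()]
    start = lines.index("MOTION")
    n_frames = int(lines[start + 1].split(":")[1])
    frame_time = float(lines[start + 2].split(":")[1])
    frames = [[float(v) for v in ln.split()] for ln in lines[start + 3:]]
    assert len(frames) == n_frames
    return frame_time, frames

# A toy two-frame motion block (a real file begins with HIERARCHY):
sample = """MOTION
Frames: 2
Frame Time: 0.0333333
0.0 0.0 0.0
1.5 0.0 10.0
"""
frame_time, frames = read_bvh_motion(sample)
```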
If we work with an external machinima environment, such as OpenSim, rather than with Blender's built-in game engine, we'll probably also need to write some adapter and listener code to combine BVH animations for import into Blender.
To give you an idea of the aesthetic I'm hoping to achieve, here's the character lineup artwork that we created for our pre-production Kickstarter backers. As you can see, the background is rendered with more texture and photorealism than the foreground characters -- this is something like the "masking effect" described by Scott McCloud in "Understanding Comics". It is a very common technique in Japanese manga and anime, which are a major source of inspiration for "Lunatics".
I should mention that the appearance of the wall and floor here is not quite as realistic as I would prefer, but I was trying to keep it simple for this poster. The moonscape behind the windows has texture, and the Earth is an unmodified NASA image, so it's completely photorealistic.
I plan to make extensive use of non-photorealistic rendering (NPR), using the "Freestyle" renderer on the characters to enhance this effect. I described some of the reasons why I think Freestyle character rendering might be appropriate for Lunatics in a production blog post, "Freestyle or Not?".
Note that I think it's quite likely that we'd want to render the backgrounds with different settings, or even using the usual Blender renderer, while using Freestyle on the characters -- this would presumably involve two rendering steps followed by a compositing step.
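The arithmetic of that compositing step is just the standard "alpha over" operation. Here is a Python sketch of it on a couple of invented pixels -- real compositing would of course be done with Blender's node editor, not by hand:

```python
# "Alpha over": lay a foreground pass (with alpha) over a background pass,
# pixel by pixel. Where the foreground is transparent, the background
# shows through.

def alpha_over(fg, bg):
    """fg: list of (r, g, b, a) pixels; bg: list of (r, g, b) pixels."""
    out = []
    for (fr, fgr, fb, fa), (br, bgr, bb) in zip(fg, bg):
        out.append((fr * fa + br * (1 - fa),
                    fgr * fa + bgr * (1 - fa),
                    fb * fa + bb * (1 - fa)))
    return out

# Character pass: one opaque red pixel, one fully transparent pixel.
character = [(1.0, 0.0, 0.0, 1.0), (0.0, 0.0, 0.0, 0.0)]
# Background pass: flat grey "moonscape".
moonscape = [(0.5, 0.5, 0.5), (0.5, 0.5, 0.5)]
composite = alpha_over(character, moonscape)
```

So the Freestyle character render only needs to be produced with an alpha channel; the background render can use entirely different settings.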
Here are a few things people have done with Freestyle on other projects, to give an idea of what is possible:
This is a sequence of different renders from the same model:
Here's a sample of some possible NPR effects using Freestyle:
Animated anime-style character using Freestyle: