Source: befores & afters
If you work at a visual effects studio then you probably know that when a ‘show’ finishes, the assets tend to get archived. Sometimes a studio needs to work on a sequel or needs the assets for a showreel, so a portion of the assets will then need to be brought back online. But what if just about the entire show needs unarchiving?
Well, that’s something a number of visual effects facilities, such as Weta Digital, faced with the advent of Zack Snyder’s Justice League. Weta, which had worked on the 2017 Justice League, was called upon in particular to re-visit several sequences—including abandoned ones where some work had already begun—as well as craft whole new visual effects for what is essentially Snyder’s four-hour director’s cut of the film.
Working in collaboration with production visual effects supervisor John ‘D.J.’ DesJardin and visual effects producer Tamara Watts Kent, Weta Digital’s new visual effects shots revolved mostly around the characters Steppenwolf and Darkseid. There were new environments, too, and even scenes in which a character from the original film was replaced with a different character this time around.
To get a handle on just some of the more than 1000 visual effects shots completed by Weta Digital, befores & afters chatted to visual effects supervisors Kevin Smith and Anders Langlands and animation supervisor Simeon Duncombe about the changes.
In this article:
– Unarchiving the 2017 show
– Making a new ‘iridescent’ Steppenwolf
– iPhone facial capture for Darkseid and DeSaad
– Destroying a temple
– Crafting shots in 4:3
b&a: Congratulations on Weta Digital’s work in the new Justice League, I did sit down and watch it for four hours and I really enjoyed it.
Kevin Smith: All at once?
b&a: Two sittings.
Anders Langlands: It’s nice that they put break points through it, right? So you can have a good place to go to the toilet or go get a beer.
b&a: Yes! I’m really curious about the first thing that Weta Digital had to do when you came on board to re-visit things, was it un-archiving?
Kevin Smith: The very first step was going to the Weta Digital data managers. Because ordinarily when you go to data managers, it’s usually like, ‘Hey, we have to do something for publicity. Can we get a shot? Can we get this comp brought back online? We just need some images,’ or, ‘Hey, I need that plate in there.’ And this was having to go to them with your hat in your hand and say, ‘We need to get Justice League back online.’ And they’re like, ‘Oh, that’s cool. What shot do you need?’ And you’re like, ‘No, I need Justice League.’ ‘Like, what part?’ ‘No, the whole thing.’
Anders Langlands: And then waiting for two months while the robot did its thing. ‘When’s it coming back?’ ‘We don’t know, the robot’s working on it. It’ll be back at some point.’ And then piecing through and trying to figure out what the hell we were thinking back in 2017. It was all about going back through everything and just figuring out what stage things were at, where they were left off, which assets are reusable, what layouts and animation we can use from what shots, what stuff needs to be completely redone. Because, workflows have changed so much in the intervening time, that stuff isn’t just directly usable again.
I think Justice League was the last, if not one of the last, movies that was lit in our old Maya-based lighting tools. And everything after that went into Katana. So CG had to go through and script all of the exports, all the light rigs from our Maya tools into the Katana tools. Luckily, that all still opened up, so they could do that. But it’s a whole conversion process and getting everything up to date with the latest tools.
Kevin Smith: The inside joke is that we chase the technology so much, our pipeline changes so fast, that if your show’s long enough, the shots you final at the beginning of the show don’t work in the pipeline by the end of the show. And so bringing something back three years later might as well have been a hundred years. So it was definitely, as Anders says, a lot of work to dig through and figure out what we could use and how to make all that stuff work with all the newest rigs and lighting and puppets and simulations. You think it’s nice to not start from scratch, but in a sense it’s worse because you’re not starting from scratch and you’ve got all the baggage that comes along with that.
Anders Langlands: I was mostly responsible for the sequences, Themyscira Attacks and History Lesson, which were sequences that were done in 2017, but cut way down by Joss and then re-expanded again in this version to their original vision. And so in some ways you think, ‘Oh, well, we already did this in 2017. We’ve got finals there. We’ll just make it look like that. Easy. We don’t have to figure out what it needs to look like anymore.’
But actually–and it’s the first time I’ve ever done something like that–when you have such a set template, it’s quite difficult going in, because you realize that part of the creative process you go through when you’re developing a movie, when you’re first starting out in post-production, is figuring out with the director, with the client side supervisor, who was D.J. in this case, what you want stuff to look like together. And that’s an iterative process that works between you, between the client, between all of the artists on your team as well. So you’re not really developing the look as an abstract thing so much as developing the look as a product of all of your processes and all of your creative input into it.
So by the time you actually get to doing shots, you’re working more on intuition than anything because you’ve learnt what everything needs to look like at a gut level. So you don’t have to question your creative choices every time you do something. But when you’re trying to match to what someone else had done, as great as that is, and as great as it is having a template for it, you’re not able to make those intuitive calls anymore. You have to kind of constantly second guess yourself. Like, ‘Oh yeah, I just make this a bit brighter, but what did they do in 2017? Oh, no, they did the opposite.’
And it’s not that, ‘One way’s right or wrong,’ it’s just that everyone makes choices about that. Having that stuff done already is an advantage in a lot of ways, but also it actually meant having to pay a lot more attention creatively to the choices that you’re making, rather than just making the choices that you wanted to.
b&a: Let’s talk about Steppenwolf. What were some of the big technical and artistic challenges in revisiting the way he looked?
Simeon Duncombe: From the animation perspective, I was worried about how we’d represent that suit in animation, because it had so many pieces on it. Anatomically, that was the first thing we looked at from animation, along with the performance. So how much can we use existing animation? Is that going to break on this new character design? But thankfully, he’s still largely a biped. He’s got more hoofed feet than the standard human foot. He’s got extra fingers and thumbs in funny places. So that meant he was going to be holding his weapon a little differently than he used to, so that was going to be additional work. If we were to replace old Steppenwolf performance with the new one, we knew we’d have to change that axe grip, that sort of thing.
So, it wasn’t going to be one to one, but then creatively looking at the new design inspired us to approach the character in an entirely different way. When we did replace existing motion, we adjusted his posture and any facial performance that was in that particular shot. And for all the new stuff with the mocap performance on the stage with Isaac Hamon and Allan Henry, they approached how they portrayed that character as well in this new design, because he’s just a lot more of a formidable foe that carries himself with a lot more presence, I suppose. And so that influenced our approach to all the remaining shots with that new design.
Kevin Smith: Well, it’s not actually a new design; it’s the original design that was replaced with a new one for 2017, and now we’re going back to it. So, we didn’t have to start with a design from scratch. The suit is cool, but that’s a lot of things to move around and keep track of and to be able to move in an art directable way and tie into the new performances that Hamon was giving him. It was a lot of Houdini work, a lot of FX artists’ time in getting that to look interesting, and in distilling all the settings that make the suit move in a nice way into one ‘knob’, so that you can go, ‘More, less, more, okay, yeah, there,’ without having to give overly technical notes.
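The one-‘knob’ idea Smith describes can be sketched in a few lines: a single 0-to-1 dial that fans out into a full set of simulation parameters. This is purely illustrative; the parameter names and ranges below are hypothetical, not Weta Digital’s actual setup.

```python
# Illustrative sketch only: mapping one art-direction "knob" onto many
# simulation parameters. All parameter names and ranges are hypothetical.

def lerp(a, b, t):
    """Linear interpolation between a and b."""
    return a + (b - a) * t

# Hypothetical per-parameter ranges: knob=0.0 -> "less", knob=1.0 -> "more".
PARAM_RANGES = {
    "piece_jitter":     (0.0, 2.5),
    "motion_amplitude": (0.1, 1.0),
    "overlap_delay":    (0.5, 0.05),  # a range can also run "backwards"
    "noise_frequency":  (1.0, 8.0),
}

def apply_knob(knob):
    """Expand a single 0..1 dial into a full parameter set."""
    knob = min(max(knob, 0.0), 1.0)  # clamp, so 'more! more!' stays safe
    return {name: lerp(lo, hi, knob) for name, (lo, hi) in PARAM_RANGES.items()}

# 'More, less, more, okay, yeah, there.'
settings = apply_knob(0.7)
```

The design point is that the director only ever touches the one dial; the technical mapping from dial to dozens of solver settings stays hidden behind it.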
I think the other biggest hurdle we found was that basically he’s made of metal. Not only is he made of metal, he’s made of iridescent metal! So what looks really great in a turntable over a neutral grey background is like, ‘Oh, that looks cool.’ And then you put him into a shot, and it’s just disco all day, all the time.
Anders Langlands: You have to lean into that though. You’ve got to give yourself over to the disco.
Kevin Smith: The technical challenge wasn’t really dialling the disco back. It was getting it to behave in an interesting, art directable way for the director. Because Zack really leaned into it as well. He was always like, ‘More shine, more disco, let’s make it colorful. Let’s make it interesting.’ And since Manuka, our renderer, is very physically based, it’s quite a challenge to get that stuff to look good and be art directable.
b&a: Was there new facial capture done for the characters? Or was it based on existing stuff done in 2016, 2017?
Simeon Duncombe: There was no facial capture. There was facial reference done. And there were new lines of dialogue for all our hero characters. So with Steppenwolf, obviously we had a lot of lines in the sequence where he’s talking to DeSaad and Darkseid, and Darkseid had some lines at the end there where he’s talking to DeSaad in Apokolips and the actors that were doing the voice acting for those—they were basically just holding their iPhones in front of them. And we were getting iPhone footage of their performance. Sometimes it was a little [signalling left of face] ‘over here’.
Kevin Smith: Or at night…
Anders Langlands: …by their computer…
Kevin Smith: …upside down.
Simeon Duncombe: Yeah, so it’s far from the approach that we would have taken in a non-COVID world where they’d have a face cam and there’d be markers up and all that sort of stuff.
Instead it relied heavily on the experience of our facial animators and they love that sort of challenge. That means that they can just focus on their craft and it becomes more of the art of just looking at the actor’s performance and then portraying that in a manual approach. I think there was a lot of enjoyment in that in the work that those guys do, because it was a lot of lines of dialogue in a short amount of time. Some shots were 400 frames of facial animation. Those guys really smashed it out of the park, considering the reference and the original sort of material we had to work with was fairly basic.
b&a: With Steppenwolf and the other characters, were they shared assets?
Anders Langlands: Yes, certainly Scanline did some Steppenwolf stuff, and I think maybe DNEG did as well.
Kevin Smith: We did the lookdev and the modelling and then handed it off to the other facilities.
b&a: I always think it’s amazing in these films where you’re sharing these characters. Most audience members don’t even realize that’s going on.
Kevin Smith: It never used to be that way. So much of everything was proprietary, no one worked the same way, and the idea of sharing a shot or sharing a character was just not the way you worked. But I think now, once the big studios started to amortize the risk with lots of people working on the movie, there’s just no way one facility can do all the work on a show like Iron Man, with a main character who is digital all the time. So now that’s part and parcel of the visual effects process; we, as in the industry, have gotten really good at sharing stuff.
b&a: Tell me about Darkseid and what that character meant in terms of its own challenges.
Anders Langlands: He was a bit easier than Steppenwolf. He was finished to a reasonable level in 2017 before he was cut out of the movie. We even had a bunch of animation done with him in ‘History Lesson’ that was then in 2017 swapped out for Steppenwolf, and now we’re just putting him back again. So a lot of those shots were designed with him in mind.
We had to just finish him off and do a bunch of upres’ing work, extra texture detail, and some extra modelling detail in a few places for things like that shot where there are hero closeups on his face, and there’s the shot where he’s picking up the soil from the anti-life equation. We had to do a little bit more work on his knuckles, because we didn’t really plan for that first time around. It was the same design that Zack originally envisioned way back when he first started putting Justice League together at the concept stage. So it’s really just continuing that through until now.
Kevin Smith: The nice thing was that a lot of work was done and we got to just do the cool bits at the end. We got to just put the finishing touches and all the little tiny details that really make the character sing.
b&a: But I’m curious, when you were animating these characters, Simeon, and perhaps re-using some animation already done, did there need to be a new post-vis process to work out the ‘new’ shots?
Simeon Duncombe: It depended on the sequence. Obviously the sequences were all in various states, and some didn’t see the light of day in 2017. So they were full of very early previs, sometimes just a title card, that sort of thing. And there was different previs at different stages as well, maybe handled by different vendors.
The sequence where Steppenwolf is talking through his intergalactic telephone to DeSaad and Darkseid, that had to be entirely fleshed out by us. We had to figure out how that staging would work and figure out all the coverage. We had to wear many hats, but basically we could turn that around in a cohesive fashion so all the shots looked the same. There was some flexibility in there for Zack to see a variation of angles and pick his favorites and know when he wanted to punch in on a shot, depending on the dialogue and that sort of thing. That was an example of, we could send them a whole run, and we didn’t really have time for previs. So we considered it sort of first pass blocking that we’d pass over. And those guys, D.J. and Zack, are really trusting and thankfully they rolled with the majority of what we presented the first time around–there wasn’t a lot we needed to change to get it right.
The third act battle was a whole other beast. This is a sequence that Weta Digital wasn’t involved in the first time around. The templates in that initial turnover were finaled shots, early previs, a couple of title cards in there as well. I remember trying to break down that sequence and I just became so confused with the geography of what was going on. So we had to unpack all that and then try and make some sort of cohesive, flowing action that would happen throughout all those fight beats, making sure that they land in all the right places for continuity–not just for the film, but internally, so we’d know we were staging our characters correctly. It also meant that things like persistent destruction could be tracked correctly, which makes everyone’s process down the line easier.
b&a: What about the character DeSaad? When he’s in that molten form, what were the main FX sim issues here?
Kevin Smith: Well, it’s the first time you see him, right? He’s this molten kind of lava-y hot metal, weird version of himself. But you’ve still got to do the performance. So really, we just took the same puppet we used for the shots at the end in Apokolips, where he has his brief conversation with Darkseid, and used that as the base for the FX sim that went on top of it to make the molten plinth-y thing. I think our goal there–the brief to the FX guys–was like, ‘You can almost do whatever you want, as long as you leave his face alone so that we don’t lose that performance.’
b&a: Speaking of FX sims, Anders, did you supervise the temple collapse into the water?
Anders Langlands: That shot was one where, when we first started working on it, I was like, ‘Okay, we need to start that now, because it’s going to take the entire length of the show to get that done.’ And it did. And I think we did one pass at it, basically. We did a couple of different iterations on each stage, but there was not enough time to go back and start the concept over again. We had some work-in-progress renders going, showed some stuff at 1K, but we really only ever showed one final quality version to Zack and D.J. and they liked it. Thank God.
We didn’t have time to render it again. A large part of it was done by an FX artist called Florian Hu, who was responsible for the main RBD sim, which is all of the fragmenting cliff. We basically pre-fractured it in modelling so we could design the end shape of the gouge that we wanted left there at the end. Then we hand-modelled some large-scale pieces out of that, so we could design the pieces that we wanted lying at the bottom of it after the thing was done. FX then took that, fractured it further, and then, all in Houdini, sent it all falling down as an RBD, with lots and lots of layers of grain solver for earth falling off it and things breaking apart on top of that.
Then of course, there were huge volumetrics. The biggest problem that we had with it is what tends to happen with stuff like that: you sim the collapse, then you sim all the dust coming off of it, and you go, ‘Oh, I can’t see the thing I need to see anymore!’ But by that point, we were too far into it to massively change it. So we basically just cheated and put a spherical volume density multiplier in the middle, so you can still see the temple as it’s coming down. And then there’s big water sims and everything.
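The ‘spherical volume density multiplier’ cheat Langlands mentions amounts to scaling dust density down near a region of interest so the hero element stays visible, blending back to full density at the sphere’s edge. A minimal sketch, assuming a hypothetical smoothstep falloff (the function name and shape are illustrative, not the production setup):

```python
# Illustrative sketch only: thinning volumetric dust density inside a
# sphere so the element behind it stays readable. Names are hypothetical.
import math

def density_multiplier(p, center, radius, min_density=0.15):
    """Return a multiplier for dust density at point p.

    Inside `radius` of `center`, density is scaled down toward
    `min_density`, blending smoothly back to 1.0 at the boundary.
    """
    d = math.dist(p, center)
    if d >= radius:
        return 1.0                      # outside the sphere: untouched
    t = d / radius                      # 0 at center, 1 at boundary
    smooth = t * t * (3.0 - 2.0 * t)    # smoothstep for a soft edge
    return min_density + (1.0 - min_density) * smooth

# Dust right on the hero element is heavily thinned...
near = density_multiplier((0.0, 0.0, 0.0), (0.0, 0.0, 0.0), 10.0)
# ...while dust outside the sphere keeps its simmed density.
far = density_multiplier((20.0, 0.0, 0.0), (0.0, 0.0, 0.0), 10.0)
```

In practice a multiplier like this would be evaluated per voxel at render or post-sim time, which is what makes it a cheat: the simulation itself is untouched.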
It was a one shot deal and it worked, so I was very glad about that. One touch that I really like is the sim of all the pieces breaking up at the start, when the big chunks of the ground are lifting up and down next to one another. We actually went back and did animation on top of that. So animation were driving the shot with a camera based on Hippolyta running away from the temple. And then after we’d done all the pieces starting to break apart, we went back and did a second animation pass with her running across and jumping between the different pieces as she escapes, which I thought was a really nice touch and went really well.
b&a: I believe you needed to deliver this in 4:3 or IMAX format—how did that change the way you thought about or conceived any of the shots at all?
Simeon Duncombe: It actually did, from my perspective. Suddenly you’re framing things completely differently. In the previous aspect ratio of the film, you’re framing on thirds. And so with this 4:3, you’re suddenly pivoting to a more centred framing on everything. I know we definitely took that into account in a lot of the new shots, particularly the dialogue heavy sequence where Steppenwolf is talking to the obelisk–that’s all largely centre-framed because of the new format.
b&a: I was watching it just on my laptop. And it didn’t feel like I was missing anything. It actually felt bigger. And it really sort of almost suited that longer story. I don’t know why, but perhaps it was just a lot more attention given to framing.
Anders Langlands: Well, it’s interesting because it changes the way you compose shots a little bit. You can see in some of those big aerials, for instance, in the History Lesson where we’re looking down on the battlefield and we have the whole anti-life equation, you can actually frame individual elements a lot larger because you have the top and bottom. In widescreen you’d have to pull out so wide on things in order to frame something completely top to bottom.
So things actually do tend to feel bigger in an interesting way. And particularly characters, as well, when you can get like huge faces in there without too much space around. So it ends up with things feeling a lot bigger and yeah, your brain just turns off the black bars after a while.
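The framing effect Langlands describes comes down to simple field-of-view arithmetic: at the same horizontal field of view, a 4:3 frame has roughly 1.8 times the vertical coverage of 2.39:1 widescreen, so a subject fills the frame top-to-bottom from much further away. An illustrative calculation (the 60-degree horizontal FOV is an arbitrary example value):

```python
# Illustrative arithmetic only: why a subject reads larger in 4:3 than in
# widescreen at the same camera distance. Example numbers, not production values.
import math

def vertical_fov_deg(aspect, horizontal_fov_deg):
    """Vertical field of view for a given aspect ratio and horizontal FOV."""
    half_h = math.radians(horizontal_fov_deg) / 2.0
    return math.degrees(2.0 * math.atan(math.tan(half_h) / aspect))

# Same horizontal FOV, two delivery formats.
fov_scope = vertical_fov_deg(2.39, 60.0)   # 2.39:1 widescreen
fov_433   = vertical_fov_deg(4 / 3, 60.0)  # 4:3

# Relative height a subject fills at the same camera distance:
ratio = math.tan(math.radians(fov_433 / 2)) / math.tan(math.radians(fov_scope / 2))
# ratio works out to 2.39 / (4/3), about 1.79: nearly 1.8x more vertical
# room before you have to pull the camera out to fit something top-to-bottom.
```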