VP Land

This music video tested out every new piece of filmmaking gear

New Territory Media Season 3 Episode 10

In this episode, we dive deep into the making of Project Chimera, a music video project that blends classical Indian dance with high-tech filmmaking, with director Snehal Patel.

We dive into:
‣ Working backward to figure out when virtual production is the right approach...and when it isn't
‣ Snehal's approach to integrating AI in storyboarding and 3D element generation for a futuristic cyberpunk Mumbai (and when AI didn't work out so well)
‣ The limitations and breakthroughs encountered during the film's production, like speeding up lens calibration to get over 100 setups in 2 days
‣ Using Kino Flo MIMIK lights, SISU camera robot, and more

And a whole lot more!


📧 GET THE VP LAND NEWSLETTER 
Subscribe for free for the latest news and BTS insights on video creation 2-3x a week: 
https://ntm.link/vp_land

📺 MORE VP LAND EPISODES

The CPU Powering The Creator, Pixar, and Virtual Production [AMD's James Knight]
https://youtu.be/F-4bJaOaYlc

Fully Remote: Exploring PostHero's Blackmagic Cloud Editing Workflow
https://youtu.be/L0S9sewH61E

Imaginario: Using AI to speed up video search and editing
https://youtu.be/4WOb5Y1Qcp0

Connect with Snehal and Project Chimera:

Project Chimera - https://www.fearlessproductions.tv/project-chimera
YouTube - https://www.youtube.com/@fearlessproductionstv
TikTok - https://www.tiktok.com/@fearlessproductions.tv
Facebook - https://www.facebook.com/fearlessproductions.tv
Instagram - https://www.instagram.com/fearlessproductions.tv
LinkedIn - https://www.linkedin.com/company/fearless-productions

Snehal @ Twitter - https://twitter.com/snehalmp
Snehal @ LinkedIn - https://www.linkedin.com/in/snehalmp

#############

📝 SHOW NOTES & SOURCES

Music Video: Janaka Selekta - Piya (DnB Remix) ft. Alam Khan, Shrii, Sheela Bringi
https://www.youtube.com/watch?v=yxN3xrGlAnc

Check out vp-land.com for all the show notes for this episode.


#############

⏱ CHAPTERS

0:00 Intro
01:39 What is Project Chimera
02:56 Virtual production and AI in Project Chimera
04:10 The importance of planning and preparation
05:15 Collaborations and partnerships
07:02 Lighting in virtual production with Kino Flo MIMIK lights
09:31 Role of AI in Pre-production
10:00 ShotDeck
11:00 Concept and storyline of the video
13:20 Calibrating lenses and lens tracking with Fujinon Premista Zooms
15:15 Staging and lighting
17:15 Camera works with stYpe using Premista Zooms and RED Komodo
19:30 Lens calibration process with ZEISS and Ncam
21:15 Learning the principles of virtual production
24:25 Utilizing AI for mood boarding and world building
28:00 Collaborating with storyboard artists to create detailed storyboards
30:48 The Challenges of creating 3D environments
33:45 The Use of AI in meetings and communication with Fathom
35:15 Reflections on AI's Impact on filmmaking
36:40 Introduction and challenges with 3D scanning processes
37:30 Exploring scanning processes and technologies with DNE and Dengenuity Labs 
39:55 16 Komodo Cameras in the shoot
42:38 Volumetric 3D in the Shoot
44:00 Utilizing the SISU Cinema Robot
47:18 Post-production process with AI
52:50 Reflections on the budget and cost-saving measures
58:02 Final thoughts and acknowledgements
59:48 The potential for future cost reductions
01:04:38 Outro

You can't say, oh, I want to do a virtual production shoot. No one should care about that. They should care about what's your shot? What do you want to do? And then start working backwards. Welcome to VP Land. That's Snehal Patel. He just completed a music video that takes place in a futuristic cyberpunk Mumbai. The video is a lot of fun, but from a behind the scenes perspective, Snehal used the project to experiment with some of the latest filmmaking tech. From working with a virtual art department, to shooting in a volume, to lighting with MIMIK lights, 3D scanning his actresses and a whole bunch more. We had a limited budget, limited amount of time to pull it off, but we wanted to see if we can use all the tools that are available in Hollywood as an independent filmmaker and create a highly advanced project. And he documented the entire process so other directors, producers, and creators can learn how to use these tools for their own projects. In this episode of VP Land, we talk about some tricks he used to get over 100 setups in just two days. I have multiple cameras pre-calibrated so that we're not having any downtime switching lenses. Some of the limits he hit experimenting with this new technology. We were relying on NeRF technology and NeRF playback couldn't work in real time the way we needed it to. How he was able to use AI as part of his workflow, and when AI didn't work out so well. So I certainly don't want to use AI to do that part of it. I learned that quickly. And a whole bunch more. Links for everything we talk about are available in the description. And for the latest news, trends, and behind the scenes insights into virtual production, be sure to subscribe to VP Land over at vp-land.com. And now, let's dive into my conversation with Snehal. Well, thanks a lot for joining me, Snehal. I appreciate it. So let's start off big picture and just explain to me what Project Chimera is. You know what, I think it's pronounced Project Chimera, and I've been saying it wrong this whole time because- I was going to go with Chimera, and then I was watching some of your videos. I'm like, oh, they're calling it- He calls it Chimera. I don't want to, like, mispronounce the- the project. So- I love that. Well, you know, I'm all about starting controversy. The more people talk about it, the better. There were some comments about it, so it's hilarious what you can learn from interacting with people on social media, but- All right. Chimera. Chimera. Yeah. So it's, uh, it's a three-part, like, beast, and we have three female characters that are strong characters that are dancers, and there's a love/hate triangle between the three of them for some reason, and we're trying to figure out why. And that's what the storyline's about. There's action, there's Indian classical dance. So we really used, like, Indian mythology and iconography to add into the world that we created. So actually, it's like a cyberpunk world influenced, uh, by India. So, like, maybe Mumbai in the future. But that was the idea, to bring in this new flavor. And it's beautiful because we could use different environments. And that's what we did, is that we could have different-looking environments when we're showing, um, you know, them dancing together. And then there's this world in the backdrop where action's taking place.
So Project Chimera is this virtual production project that I wanted to accomplish, which was a technological feat in the sense that we had a limited budget, a limited amount of time to pull it off, but we wanted to see if we can use all the tools that are available in Hollywood as an independent filmmaker and create a highly advanced project and make it work. And is it even possible? Is it even affordable? Because that's really the question, right? You have a lot of new tools that have come out recently. There's a lot of technology being bandied about. There's a lot of companies trying to make money from this technology, and they're investing in it. They believe in it, and there's a lot of people that believe in it. But, you know, what about the user base? Are we comfortable, uh, employing these technologies? So what I wanted to do was, uh, do a virtual production project. I thought I could use, uh, AI tools that existed to assist in different parts of the process, storyboarding and stuff like that, and even thought that maybe it could help us with generation of 3D elements for our real-time backgrounds for virtual production. But we learned a lot of lessons along the way. It was a long process. We got a lot of sponsors and people on board that believed in us and were able to take it to the end. And we got the production done. But you have to plan ahead. You have to know what you're getting into. You have to understand Unreal Engine and how it works, or Unity, or whatever software you're using to generate your 3D. You have to know what the limitations of your team are, how to communicate with artists, how to get work done ahead of time, how to get it to run in real time, what kind of team it requires, how many people, what kind of processing power. Like, the more you know, the better. From a director's point of view, I feel like if I am very clear with what shots I need, I know which stage to hire, which capabilities I need. Do I just need a plain-Jane LED wall? Do I need frustum? Do I need tracking? Do I need XR? Do I need AR? Just know ahead of time what you need. Don't just go in there thinking you need everything. Uh, you don't for most of it. Figure out what you need, figure out how to line up the shots, and then you can go in and shop around, figure out which stage you're going to use, and then start working backwards to get there in terms of getting your VAD together and testing and getting your team together, figuring out what capabilities you want to have. I went into it wanting to have a learning experience. And that's actually why we created a TikTok and Instagram series, which we have posted on all the social medias, which outlines exactly what we did, and what steps we took, and how we asked for help, and who we turned to, and then also what the process was like and what steps you have to take ahead of time to get there. And a realistic view on what you're doing. We did get lucky. We got some sponsors, we got some help. We were able to get lenses from Fujinon. Lighting from Kino Flo, which was the MIMIKs. All this worked great with the technologies that were on the stages already. We used RED KOMODO cameras. RED actually provided a couple of KOMODO-X cameras. We shot at Fuse Technical Group in Glendale, which was great. They have an awesome stage that has a really nice curved backdrop, which is really helpful to get those shots where you want to go in and get a different angle from a different position.
You can totally do that and make an environment really pop in 3D when you can change angles quickly and use a lot more of the LED background. So it was really awesome to work with them. DNE, which is Digital Nation Entertainment, helped us volumetrically capture a couple of our actors so that we can then place pieces of them in the VAD. Like, if you want a shot where someone's in a helicopter and it's a wide shot where they're far away, you're better off having them as a 3D character. Like, you really don't want to film that on the stage. It's not going to make any sense. We had Mod Tech Labs help us take the Unreal stuff that we had created and optimize it. So that's a step that you learn about: yes, now you've created your world, but how does it play? And how fast does it play back in frames on your LED wall, and what can you do to make that better? So, yeah, a lot of, a lot of hands, uh, helped out with this. Uh, I couldn't have done it without, without, uh, this help. Certainly, Fujinon and Kino Flo really stepped up the game and helped us from the get-go. The Kino Flo lights actually can play back your content. So you can create an Unreal Engine world and then use pieces of that world to play onto the Kino Flos, and there's multiple ways of how you can make that happen, all through a computer processor that Megapixel makes. We certainly didn't try to light with the LED walls, by the way. That, that's a fool's game, because the color of light that they put out is just, is just messy and you don't want to mess with that. We really used good-quality sources in the foreground to- Like the MIMIK or like other- MIMIKs, or we had some, um, Chauvet lighting already on the stage with some spot sources, which was perfect. It's a great match, you know. Chauvet actually owns Kino Flo, so they're a great team to work together. So that was wonderful to see that in action. What really helped is lighting the foreground the way you would normally light, like in a green screen stage. That really helped, gave us a lot more control. Um, and like what I said with the MIMIKs, it's the same thing. It's like having the LED wall except it's 10,000 nits if you want it, you know, so you can just bring the brightness up like a TV screen. I think it was your video, your TikTok with the MIMIKs and loading in a Midjourney background for the lighting, was where I first stumbled on the project. Oh cool. Yeah. And it's beautiful because we could use different environments. And that's what we did, is that we could have different-looking environments when we're showing, um, you know, them dancing together. And then there's this world in the backdrop where action's taking place. And there we have lasers and a motorcycle chase scene and stuff like that. So we got to use a real motorcycle, which is an electric bike, and Tarform actually provided that. They're a manufacturer out of Brooklyn, New York. And they make these custom electric bikes that are actually influenced by cyberpunk. And then when I was explaining the idea of using, like, my Indian background- because my, you know, parents are from India. I grew up here, but I actually worked in Bollywood. I worked in the industry as an adult. So I've worked internationally and directed commercials and stuff all over, but I'm from Chicago originally. It's just that I got a taste of what it's like to work and live over there and certainly picked up some things, right?
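To make the image-based lighting idea above a little more concrete: a minimal sketch, assuming nothing about the actual Kino Flo / Megapixel pipeline, of how a region of a rendered background frame could be averaged into a single RGB level for a light panel. The `region_to_light_rgb` helper and the frame data are hypothetical; on a real stage this mapping is handled by the pixel-mapping processor, not hand-rolled code.

```python
import numpy as np

def region_to_light_rgb(frame: np.ndarray, x0: int, y0: int, x1: int, y1: int,
                        intensity_scale: float = 1.0) -> tuple:
    """Average one region of a rendered background frame into a single RGB level.

    frame: H x W x 3 array of 0-255 values (a frame exported from the real-time
    background). The region is the slice of the scene the panel should "see",
    e.g. a streetlight passing by in a driving shot.
    """
    patch = frame[y0:y1, x0:x1].astype(np.float32)
    rgb = patch.mean(axis=(0, 1)) * intensity_scale
    return tuple(int(min(255, max(0, c))) for c in rgb)

# Toy 1080p frame: mostly night sky, with one warm light fixture passing through.
frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
frame[200:400, 1500:1700] = (255, 180, 90)

print(region_to_light_rgb(frame, 1500, 200, 1700, 400))  # warm fixture region
print(region_to_light_rgb(frame, 0, 0, 200, 200))        # empty sky region
```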
And cultural things, some understandings, some education- I was able to bring that back. So I wanted to put that flavor into what I was creating, because I was like, look, this is where I'll get attention, that I'll get a much bigger crowd of people that will be interested to see this, if I could do something new and different. And if it was a challenge, how can we overcome that challenge? Because you couldn't easily find, like, cyberpunk stuff set in India, right? So initially, what really, really helped was AI, uh, to tell you the truth, because normally I would take photographs of people, um, do a lot of Photoshop, create extensive, like, lookbooks and stuff like that, mostly from found photography. I'd be looking on the internet or I'd be going to Getty Images or places like that and just finding, like, sample images that made sense with my story. Manipulating that to make it coherent and make sense, and then try to create like a lookbook or a shot list. And we all do this, right? I mean, ShotDeck, literally, the website that Larry started- uh, Larry's a fantastic guy, a phenomenal cinematographer. And it's a great idea, that instead of having to go to the internet to look for movies and then pull shots from whatever trailers and stuff- And try to actually get around the DRM issues of screenshotting- Exactly. Here, artists are giving their permission, right, and production companies are agreeing to it. Like, that's a big deal. Like, hey, yeah, go for it, take shots from our movie and use that for inspiration. That's wonderful, but what AI can do is personalize it. So when we wanted stuff from this kind of look that we were trying to generate in ideas, um, we wanted to see what kind of flavor we can come up with. And what was really cool was, uh, being able to go to AI and type in a few things, like in Midjourney, and generate some stuff and test it out. Certainly we didn't replace anyone's job, because I would have been doing it myself, right? And it wouldn't have been exactly my vision, or it would have been much more difficult to get there, right? Either you personalize it by taking photographs of your own and creating stuff, which I do all the time, or you're looking for a lot of stock images. And there's a balance to what you do per project, depending on how much you want to invest into it. So this helped. It jumpstarted us because it gave us a vision. I was able to put that into a presentation and show everybody. I was able to get sponsors on board. I'm able to get funding. I'm able to get people to work with me because there's a clear vision. And I've chosen, like, what the images are. And then it helped with the storyboard. So, the storyboarding, it's still a storyboard artist. But we're now referring to images as we go along to figure out, like, what shots will take place when, what kind of environments we want to create, stuff like that, and then just looking at ideas and communicating. Because even with the artists that created our 3D environments, it wasn't a matter of I just drew some stuff and then he went and just created these beautiful backgrounds, uh, from kitbashing stuff. No, we had to come up with an idea of what we're looking for, what kind of look we're looking for, what kind of angles and stuff like that, and it kind of helped to have these AI-generated images. So, Thomas Mathai, a friend of mine, helped with the prompting. So he's become the prompt master over the last year, doing all this prompting. And then I got some friends on board.
So the three amigos were myself, Aman Segal, and Tom Thudiyanplackal, who is a VP producer that's been working with a lot of great projects recently. He actually was one of the people that worked on will.i.am's video for F1. That was a virtual production shoot, which is really cool. Right before this conversation, I just texted a friend and I'm like, I don't know why, for some reason I have will.i.am's F1 song stuck in my head. And just very random- he just brought this up. Uh, yeah, but it had that surreal desert look and um- Okay. They were at XR Studios. That was a cool thing. And so they have XR. They have floor LEDs. They could make things that look really epic with some camera moves. And they had a crane. So- Cool, advanced stuff. That was a lot of fun, uh, watching them put that together. But, um, you know, we were trying to emulate and try to get it to look like that. And so how do we do that? But with, like, maybe, I don't know, a quarter of the budget- try to figure out how to get around that, uh, and with a lot of help. So again, having multiple lenses ahead of time- what we did was, uh, have multiple cameras so that we're not waiting, having any downtime switching lenses. And what we did is we used zooms. So, Fujinon provided Premista Zooms, all three of them. So we had a wide, a mid range, and then a long. And what that allowed us to do is have three cameras set up so we could just switch between and pick which one is going to be the frustum, essentially. And you could always use the really tight camera to shoot the background regardless if you have a frustum or not. As long as you have colors and stuff going on- 'Cause you're just soft, out of focus, so you can't really tell. Yeah. In fact, some of our best shots are those shots. Some of my favorite, um- because my cinematographer for the project was Robert McLachlan, who has a 35, 40 year career in doing VFX-heavy shows. Game of Thrones- the Red Wedding, but also when the dragons come and kill a whole bunch of army people, um, I can't remember the official name- the Wagon Heist or the Wagon Torch, um, yeah, I can't remember the official name. That scene, yeah- Yeah, exactly. And, you know, uh, he's really great to work with. Uh, he's a friend, and I reached out to him early and got him on board, and he was so happy to do it because it's a learning experience for his team as well. So he was able to bring, uh, a few of his, uh, people that he uses on his camera teams along with him, and it was really, like, great having those A-level professionals on set, just, you know, camera operating and then interacting and giving us tips, all DPing and doing the work of everything, you know, because we were just a few hands, a small team. And then our gaffer, Shawn, who worked with Kino to really understand how the lighting worked, was able to program on the fly using a laptop, a phone, an iPad. We had an operator that was quickly, you know, changing the lighting to match whatever background we wanted to enhance, which was great. When you have driving shots, you've already created, in your 3D world, like, light fixtures. You can have them go past. Just put them on the MIMIKs and have them go past, uh, with the nice brightness. And that really shows on the face and the contrast levels. It was pretty cool. It was great. It was like having special effects lighting that you really just didn't have to program ahead of time. You could just do it on the fly.
That was the one department we didn't wait for, which was lighting. We would change environments and backgrounds, backdrops, completely- from, like, night to day and a different location. They were able to be up and running, like, in 20 minutes, you know. Just move the sources to the positions that looked correct, that matched the direction of the lighting that we had. Move the big source to where the main light's supposed to be, which they ganged up some MIMIKs to do- uh, I think eight units to create a big source. And then put the other little sources wherever you want for touches. It was great. It was like having special soft banks, special effects lights, all in one. It was pretty cool. Certainly there's still a need for spotlights and fresnels. I thought you also did build- I don't know what the exact size was, like an 8-by- like, paneling together a series of MIMIKs, so you turned it into one big- Big source, yeah. But it still displays everything as one. As one big unit, yeah. That's the best part. Was that your main light source for a lot of the scenes, or was that just like a good- Yeah. That, plus like having the grid of lights above with the spots and things like that, because we had a lighting board that allowed us to control that. That helped when you needed to just spot something. And what we did is we positioned a stage in kind of the middle of the curve, uh, so that we had some height where then we can move stuff up- like, you know, you can do low-angle shots that way. Otherwise it's impossible to do low-angle shots unless your LED just keeps going or unless you have XR to do wide-angle shots. But, you know, honestly, like in a music video, you want to keep things tight. You don't want to do wide shots unless there's a lot of information and stuff going on, at least for our music video, because it's a drum and bass video. It's fast paced. Since there's a lot of cuts and things going on, you know, having emotional connection is really important. So what I found is I tended to favor, for sections of my edit, depending on the part of the storyline, the closer shots, where it wasn't important if we used the frustum or not. Honestly, where you picked up emotional cues, you picked up expressions, things like that, movements, which sometimes you lose that detail in the wide. So it's nice, we have a mix. Three camera setups, we can switch between them, we can run two or three cameras at a time if we needed to. So is the main reason for the three cameras, uh, for speed? So you wouldn't have to just keep- you could just- they just have the lens on them. They're already- And so you just- Painstakingly calibrated by stYpe, who actually loaned a technician, DJ, to come in and do it. And he was fantastic because he stayed with us during the shoot as well. He was just very interested in what we were doing. Wanted to be helpful. And we really appreciate the support from them. But basically, yeah, we had to calibrate for hours. Because a zoom is complicated; it's not a prime. Even with these zoom lenses that have the lens data in them? Yeah, but the lens data only gives you positional data, it doesn't actually tell you the distortion of the lens. You can get that from XD lenses. You can get that from Fujinon's Premista Zooms because they have the ZEISS XD technology, which gives them the ability to transmit their distortion and their exposure drop-off characteristics. But in a real-world situation, like in a virtual production situation, you have to contend with the realities of precision.
And your sensor offset is an issue, because when you mount the lens on the camera, uh, not all the camera tolerances are as exacting. We were using Komodo. Certainly a Komodo camera is not expected to have the tolerance of, let's say, a Raptor. So if we had a $65,000 body versus a sub-$10,000 body, then we would expect the mount to be better, and certainly the Raptor comes with a PL mount, you know, whereas we had to use an adapter. Stuff like that. You have to account for that. And then lens tracking, because zooms won't always track straight- either because you're not supporting them correctly, there's lens motors on it, the lens is getting older, the housing hasn't been serviced well, it's been shipped around. There's a multitude of variables that come into play where the lens doesn't track correctly, meaning that if you zoom in and out, it's not zooming in and out perfectly straight. Uh, it's off by a little bit. You know, this is a common issue, especially with broadcast lenses. So broadcast lenses, you forget about ever using any data that they have, you know, it's not going to work, because, you know, there's a lot of plastic involved, a lot of expanding and contracting of metals and plastic. So you can't trust that. But even with cinema lenses, you really have to, uh, calibrate differently. Now there are companies like ZEISS working with Ncam that are going to have their lens profiles for their prime lenses, which then are compared against the real world by doing a chart with your camera once it's set up, to see how that calibration fits, and then making adjustments to make it work. That's similar to if I calibrate a lens with the Mo-Sys system or with stYpe or anybody, and then take the lens off the camera, switch it, and then put it back again. Then I'll have to do a recal anyways, which is kind of like a confirmation calibration, where it's like, oh, did anything move around? Did anything shift? How did it shift? That's the idea. What ZEISS is doing is they'll provide ahead-of-time information to the Ncam system to tell it what the prime lens is expected to do and how it's supposed to behave, versus how it is behaving in the real world. Just line it up, you know? But if you have some data points, then you can get all the rest, whereas you don't have to do a full calibration from start to finish for like an hour or two to figure out- Every time, yeah, to figure all this out. I mean, that makes sense because they acquired Ncam right now, but it's under a different, uh- CinCraft. CinCraft, yeah, that's their brand. Okay, the new path, the new- So I used to work for ZEISS, and I used to be the head of cinema sales for North and South America. So I was involved in this project before it became public, so certainly I have a background in how virtual production interacts with camera tracking, and I understand it. I was, of course, a champion of lens metadata when I was at ZEISS and ARRI previously. So that's another reason to kind of get into this space: I can communicate in layperson's terms how this stuff is used and why and how it works. Especially as a director, I can help people creatively to get the answer, like, okay, you can put this and this together and they work together like Legos, or don't do this because you'll never need this, you know, and it's not worth it. But over the last, let's say, six months, as a filmmaker, it's a whole new thing when you're doing it for your own creative project. So the learning has accelerated.
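The "confirmation calibration" idea described above boils down to comparing where a stored lens profile says chart points should land against where they actually land after a remount. Below is a minimal sketch, with a simple two-coefficient radial distortion model standing in for a real profile; the chart layout, coefficients, offsets, and tolerance are all invented for illustration and are not any vendor's actual model.

```python
import numpy as np

def apply_radial_distortion(points, k1, k2, cx=0.0, cy=0.0):
    """Distort ideal normalized image points with a two-coefficient radial model,
    a stand-in for a stored lens profile (not a vendor's actual model)."""
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0] - cx, pts[:, 1] - cy
    r2 = x * x + y * y
    factor = 1 + k1 * r2 + k2 * r2 * r2
    return np.stack([cx + x * factor, cy + y * factor], axis=1)

def rms_residual(predicted, measured):
    d = np.asarray(predicted) - np.asarray(measured)
    return float(np.sqrt((d ** 2).sum(axis=1).mean()))

# Ideal chart corners in normalized image coordinates (made-up layout).
chart = np.array([[x, y] for x in (-0.8, -0.4, 0.0, 0.4, 0.8)
                         for y in (-0.45, 0.0, 0.45)])

# What the stored profile predicts for those corners.
profile_prediction = apply_radial_distortion(chart, k1=-0.12, k2=0.015)

# "Measured" corners after remounting the lens: same glass, but the optical
# center has shifted slightly (a sensor / mount offset).
measured = apply_radial_distortion(chart, k1=-0.12, k2=0.015, cx=0.01, cy=-0.005)

err = rms_residual(profile_prediction, measured)
print(f"RMS residual vs profile: {err:.4f}")
# In practice you would compare this against whatever tolerance the tracking
# vendor recommends before deciding whether a full recalibration is needed.
```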
So it's not like I just learned everything just for this project. I certainly had a background, and I understood the technology theoretically. It was the application that I actually learned about. And so now I feel like I want to spread the message, and I want other people to understand that this is attainable, you know. It's just that you have to understand how the pieces go together, you have to understand what you have to prepare for and what you need, and then give it the time that it needs, it requires, to get there. And I think that this is, you know, easily communicated. A lot of the training that we have for virtual production out there is very technical. It's meant to be for technicians and people that want to make the system work, whereas I think there has to be training for directors. That's certainly what I want to do over the next year: reach out to different organizations, work with the DGA, work with different directors groups and organizations. We've been doing this TikTok series and Instagram series, just breaking down the technologies and just talking about how it works. But we want people to interact, and to put together a workshop that people can join, either virtually or in person, where they'll be able to learn how this all comes together. I think that it's better if you understand what you're getting into and how the principles work as a director, as a creative, and then know where to go. Just like you would with a green screen shoot, honestly. Like, you know, you're not concerned with which stage you're going to use. Your green screen stage choice is going to be based on capabilities: what lighting they have, how much space they have, can they house your, you know, team, do you have a makeup place to do makeup and hair, you know, and craft services. The same way you should look at it in LED stages, you know- know what your requirements are. And then go into picking the different technologies you're going to use, and having a reason for it. Why? You know, because you could get away with just a VIVE Tracker and a made-up frustum that's not 100 percent accurate and with no calibration for lenses. You could do that. It's fine. It'll look great. It looks good enough. It looks really good if you know what you're doing. It's not a problem. And then if you need to step up your game- like, no, I need to have those wide-angle desert shots where my characters are super tiny in the screen- well, then you're going to need XR. So now it narrows the field of where you can go. Oh, I'm also going to need a 3D character in real time interacting with everything so that I can record it and then just be done and never do compositing later. Well, then you're going to need AR. And that's, like, exponentially more expensive, um, than just being on a stage with a VIVE Tracker. You just need to know what you're getting into. And I think that this is what is not being explained to people that are on the creative side. So it becomes daunting. It becomes scary. It's like, ah, how do I figure this out? But if you just show them, like, hey, this is all you're doing- it's like, you're doing the same thing as VFX, except you're just, you know, doing it ahead of time and just being ready with it. Put it into those terms, I think it's easier to understand. I want to run through a couple of the stages of the process and kind of just break down some of the things that you, uh, experimented with or went through.
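As a toy version of the "start from the shot and work backwards" point Snehal keeps returning to, the sketch below maps rough shot descriptions to the minimum set of stage capabilities to ask for. The capability names and shot flags are illustrative, not a real vendor spec sheet or anyone's actual checklist.

```python
def shot_needs(shot: dict) -> set:
    """Map one rough shot description to the capabilities it implies."""
    needs = set()
    if shot.get("camera_moves"):
        needs.update({"camera tracking", "inner frustum"})
    if shot.get("wide_with_floor"):           # tiny characters in a big world
        needs.add("XR / set extension")
    if shot.get("realtime_cg_character"):     # live hologram, no compositing later
        needs.add("AR layer")
    return needs

def stage_capabilities_needed(shots) -> list:
    """Union of per-shot needs; a static backdrop is enough if nothing else is asked for."""
    caps = set()
    for shot in shots:
        caps |= shot_needs(shot)
    return sorted(caps) or ["plain LED backdrop"]

shot_list = [
    {"name": "close-up dance", "camera_moves": False},
    {"name": "motorcycle lane change", "camera_moves": True},
    {"name": "hologram duet", "camera_moves": True, "realtime_cg_character": True},
]
print(stage_capabilities_needed(shot_list))
# ['AR layer', 'camera tracking', 'inner frustum']
```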
So with prep, um, you covered- you did a lot of, uh, Midjourney and AI tools to kind of convey the world building, or just kind of like mood boards. There was one post you had showing comparisons between, like, the, um, storyboard sketches and- Mm- some rendered, uh, Midjourney images. This has always been a challenge for me: the framing and stuff between your storyboard and what you're able to generate in Midjourney- they were very similar. And that's always been a struggle with, like, you have the vision, and then getting the AI tools to, like, match it, and then not letting the AI tools, like, generate something else. And you're like, uh, that's not what I wanted, but okay, I guess I'll go with that. Um, how did you guide and direct to get, like, outputs where the framing matched what you sketched out and what you envisioned? So at that time, with Midjourney, you could not do, like, lens types. That didn't really work, because if you think about it, like, a lens can be anything, right, depending on the sensor size. It's not going to necessarily always show you the exact type of output. You could say 50 millimeter- that can be medium format, can be large format, that can be Super 35. Nowadays, full frame and Super 35 are so interchangeable that it could be anything. So you can't do that. We tried, at that time, I think, even medium shot or wide shot. And even that's difficult to do. The stuff was either portrait or wide. Like, that's generally what Midjourney at that time was really giving you. For storyboards, you have to pick and choose your battles. For me, it was like, oh no, I like that vehicle. Okay, and I like how the position is and the direction it's coming from. Um, so it became more of, like, a directional thing. And then say, okay, let's, you know, create a storyboard. So it was incidental that some of the storyboards look like that. But really trying to generate, like, um, the flying vehicle in a certain position in a certain way- it's a fool's game unless, you know, you have patience and really want to do it. And you really, really want to create everything from AI. So those people that are kind of creating these full videos from AI- which is Midjourney-generated images, or some other engine generating images, and then they're using something else like RunwayML to then animate that- they're being very patient. And then they're using things like seeds, you know, where you can then use the seed over and over again and try to reframe and stuff like that, where the look is similar but in a different angle or something like that. Um, that just requires a lot of patience and time. And certainly that's going to get faster. But I don't know if really we were after that. Like, did we really want a computer to generate our storyboards? I don't think so, because when I sat with my storyboard artist, who actually helped initially with our VAD creation, our virtual art direction, it was fantastic, because I got feedback and he's like, oh yeah, well, that won't make sense with that. Oh yeah, that's right. Let's do this. You know, that storyboard session that we had together became canon, because we came up with like 90 storyboards, because we came up with a storyboard per cut. That's different when you do a music video. The way I like to do it is I want to envision all the cuts ahead of time, so that I know I covered everything- I got all the footage I need for all the different cuts. So, uh, we storyboarded the whole thing.
So, by the time we were shooting it, there were like 86 storyboards that we were marking off as we did shots. That's crazy. And we're supposed to do that in two days. But, um, the fact that we had the canon, and then we did a shot list based on the storyboards and just locked it in- and then when we're creating our backgrounds and all that stuff, we already know what environments we need, what action is going to happen in the environments, what direction we're facing. All that stuff is really well laid out. So, you know, with the storyboard shot list, you know, combo, we can now have a shot list, and you could use different software- we used Google, uh, or you can use other software to track your storyboards and VAD creation. But certainly we can separate by environments. We could separate by, uh, looks, colors, by characters that are in the frame, all kinds of stuff. I couldn't have done that without a storyboard artist. So I certainly don't want to use AI to do that part of it. I learned that quickly. Then for the actual VAD creation itself, uh, which is, you know, creating the 3D environments that your characters are going to be in, we couldn't use AI really to do that quickly. You can certainly generate things in Midjourney and then find things on the online marketplace- Kitbash, it's called- where you download 3D kits of looks and environments and then personalize them. That's what we did. Like, personalize stuff somebody else made or that's publicly available to use, and then, yeah, I did a lot of billboards. So I did a lot of work with billboards and logos and stuff like that, which is easy when you're doing, you know, cyberpunk night scenes. Uh, it's a great tool to do that quickly. But the look of what we wanted- uh, kind of like having the wide shots in Midjourney gave us an idea of how to stack things, what kind of look to create. When I wanted to communicate, hey, I'm trying to make a cyberpunky-looking slum background, then I could send that to the artists and they could look at it and go, okay, that's what we want. Also, I could have that in the storyboards as a reference along with the shot, so that everyone could see what we're talking about. And that actually helps us with the colors, too. So that helps us with costumes and wardrobes. So I actually picked color schemes for the costumes and wardrobes based on the color schemes of the environments that we were going to create. So actually the color scheme of the characters matches the environments. When our heroine is on her motorcycle, the lights around her match her outfit. Whereas the foil, when she's up in the air in her flying ship, her colors match her ship because she's in the military. So it's great. We were able to put all that together. And the dancer, of course, had her own look, which is expressed through the environment that she's in when we see her first. We were able to break everything down. We were able to have people that are interacting and telling me yes, no, maybe so. Um, there's no way I could replace that. So I would say AI was helpful in the personal processes and the communication and making things faster. It didn't help in replacing anyone's job or replacing any process that we were going to do anyways. It's as if I did spend months creating artwork and then finally was able to be like, all right, this is what I want to do- you know, instead I was able to spend a week and have an idea. So it really was a personal thing.
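A minimal sketch of the storyboard / shot-list bookkeeping described above: once every board carries its environment, characters, and palette, filtering by any of them is trivial, whether it lives in a spreadsheet or a script. Board numbers, environments, and field names here are invented, not the production's actual list.

```python
from dataclasses import dataclass

@dataclass
class Board:
    number: int
    environment: str
    characters: tuple
    palette: str
    status: str = "planned"   # planned / shot / dropped

BOARDS = [
    Board(12, "slum rooftop",  ("dancer",),         "teal-magenta"),
    Board(13, "highway chase", ("rider",),          "amber"),
    Board(14, "military deck", ("pilot",),          "cool blue"),
    Board(15, "highway chase", ("rider", "pilot"),  "amber"),
]

def boards_for(environment=None, character=None):
    """Pull the subset of boards for one environment or one character:
    the 'separate by environment, by look, by character' idea above."""
    return [b for b in BOARDS
            if (environment is None or b.environment == environment)
            and (character is None or character in b.characters)]

print([b.number for b in boards_for(environment="highway chase")])  # [13, 15]
print([b.number for b in boards_for(character="pilot")])            # [14, 15]
```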
Uh, but I still had somebody else helping me prompt, so it's not like it really saved anybody, right? I still needed a prompter, right? So it was like an extra person, really. Like, oh, okay, I could have hired an artist, I guess, to help me out, but the artist would have been tired of me, you know, after a month. Uh, and this way, you know, when it's a computer, you can just keep generating stuff. So that's the truth of the matter. And then for the VAD creation, there is no way just yet to really do this in a nice way. Yes, there are some companies out there that are doing stuff like, for example, taking images, cutting them apart, creating a 2.5D effect. You know, you can do that. So you can generate something in Midjourney and be like, oh, I really love this environment for some odd reason, and so I'm going to cut it apart in high resolution and add depth and things like that, and then use it as a 2.5D model where I can kind of move the camera just a little bit. So it adds a little bit of parallax to it. Um, but there's no way to create 3D environments quickly. Certainly people are trying, in Maya and other 3D software, to be able to generate quick 3D, like, textures onto buildings and stuff like that. So basically they're creating geometry and then just splashing stuff on it, or going back and then meticulously, uh, taking generated images and putting them onto geometrical shapes. Um, but that's, you know, still a process, and it's not quick, it's not, uh, immediate. The only immediate thing that we saw was from Verizon. They have a lab where they study stuff. And what they did was they created simply a skybox where generated images can live, that are put onto, like, a 3D shape, essentially. So it kind of looks like parallax, but really you're just stretching out the image. Imagine a skybox: the floor part of the skybox is stretching out your image to make it look like there's depth, and the top part of it too. So it stretches out the top, the bottom, and the middle's kind of there. And so it kind of looks like it's 3D. Because at the Verizon lab where they do this testing, they have a virtual production stage, but it's a floor and a wall LED, and they have it set up for XR. So it's not a very large stage. It's a small slice of a stage, but it could be made to look quite large. They place you- you stand on the stage, the small thing- and within it, it looks like you're in a bigger world. Yeah, it's just like garbage matting on green screen, where as long as the characters are in front of the LED, then you can add to the background, and you can color match- which takes some doing, but you can do that. You can color match it so it looks perfectly fine. Um, so you can make it look like I'm walking through the desert as a small little character really quickly. Um, but now with this technology, they are basically creating a 3D skybox, so that you can speak into a telephone- or, like, in my case, it was an Android phone I spoke into- and I told it what environment I wanted, and then it quickly generated something and put it into the skybox, and it makes it look kind of like 3D, kind of like there's parallax there. That's okay. But really you could only use that for, like, quick stuff- like if you just wanted something quick and dirty, you're doing a documentary, you just want some funky backgrounds. Um, for your interviews, yeah, that makes sense for doc-style shooting.
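The 2.5D trick described above (cut a generated still into layers, give each a rough depth, and shift the layers by different amounts as the camera moves) can be shown in a few lines. The layers here are plain colored rectangles standing in for cut-out image planes; this illustrates only the parallax math under those assumptions, not the Verizon lab's tool or anything used on the project.

```python
import numpy as np

def composite_25d(layers, camera_x: float, width=320, height=180):
    """Composite far-to-near layers with a horizontal shift inversely
    proportional to depth: the cheap 2.5D parallax trick.

    layers: list of (rgb, depth, x0, y0, x1, y1) rectangles standing in for
    cut-out image planes; nearer layers (smaller depth) shift more.
    """
    frame = np.zeros((height, width, 3), dtype=np.uint8)
    for rgb, depth, x0, y0, x1, y1 in sorted(layers, key=lambda l: -l[1]):
        shift = int(round(camera_x / depth))       # parallax: near layers move more
        xa, xb = np.clip([x0 + shift, x1 + shift], 0, width)
        frame[y0:y1, xa:xb] = rgb
    return frame

layers = [
    ((40, 40, 90),   50.0,   0,   0, 320,  90),   # distant skyline
    ((90, 60, 140),  10.0,  60,  60, 200, 150),   # mid-ground tower
    ((255, 120, 40),  2.0, 220, 120, 300, 180),   # foreground hoarding
]
a = composite_25d(layers, camera_x=0)
b = composite_25d(layers, camera_x=20)
# Between the two frames the foreground moved 10 px, the mid-ground 2 px,
# and the skyline 0 px, which is what sells the depth for a small camera move.
print(int(np.argmax(a[150].any(axis=1))), int(np.argmax(b[150].any(axis=1))))
```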
Certainly it doesn't make sense necessarily for having high detail, or again, you know, if I generate something else, like a different angle, it might look completely like a different location. And then we used AI for, like, our meetings. So we used the Fathom app in Zoom a lot, which was fantastic because it keeps track of your whole Zoom call by person. It annotates. You can search by text or through video, uh, find moments. You can confirm things. So it's great when you're doing a complicated project, uh, like virtual production- you have a lot of moving pieces. You can go back and say, oh yeah, they said this and we had to prepare for that. And you can do clarification really quickly. So that was cool to use. I haven't heard that. You said it was Fathom- that was the other one? Yeah. I haven't used that one. We've used, we've used Otter and, uh, Fireflies, which have, I think, pretty similar features. Pretty similar, I just like the interface for this one, so that's the one we've been using. And the videos are accessible and downloadable, so it's actually good for BTS too. So for what we want to do behind the scenes, we're like, oh cool, we want to show, like- filmmaking BTS today is, uh, Zoom calls- and do you know how many Zoom calls I did for the creation of our backgrounds? Like, it's crazy. So, you know, we record that, and so then when we're in Unreal Engine, um, we're able to pull shots and stuff like that. Um, but you know, like I said, this was all a learning experience, and we're being very open book about it because we got funding and sponsorship. Uh, this is very public. Like, anything we're doing, you know, it's open source. I don't have to ask permission to show the backgrounds or characters or anything like that, or use the music, which has been great. It allows us to tell people the truth, you know, about stuff that's going on with AI and what realistically you can do. And then, realistically, is it a threat to filmmaking? No, I don't think so. Like, for a while, like, all this AI-generated stuff was so cool. But people aren't into that as much as you would think. They're into realism. They're into connecting with human characters. So, like, we certainly couldn't have AI actors. There's no way. There's absolutely no way. The actresses we had were fantastic. They were all classically trained dancers, dancing different forms of Indian classical dance. So there's a multitude of different forms, depending on the region of the subcontinent. So they're dancing, like, two or three different forms and using movements from different Indian classical dance. I mean, they were super talented, and they brought so much to the project, and, you know, the way that they emote and their expressions, their eyes- that's the important stuff, uh, which you can't computerize, um. And people talked about, well, you digitize somebody, you know, you use them- but it's not like we used them in a different project. It's just that we wanted to be able to turn one girl into a hologram. Instead of animating that or capturing that against a green screen, which I normally would have done, it's better off doing it as 3D, because today, right now, I need a shot, and I need her in there right now- but I need it from a different angle. So if I hadn't shot this against a green screen backdrop at the lower angle that I need right now, I wouldn't have it. But as a 3D character, uh, her performance is intact.
And now I can just pick a different angle and generate, um, something else to composite in. I did want to ask about the scanning, because- so you had some of the videos and it seemed like there were- you had two different- did two different scanning processes, or- so I don't know, walk me through what were the two processes, and I'm assuming that maybe the first one didn't work? The first one was an experiment that we did in conjunction with Dengenuity Labs. And, you know, we're grateful for the experience, but, um, the content that we got at the end, uh, couldn't work in real time the way we needed it to, because we were relying on NeRF technology, and NeRF playback in real time is kind of still a pipe dream, slightly. Uh, it's getting there- I mean, it's interesting watching these videos too, because I think- when were you doing this, like, July or something? And then I'm watching them, and I'm like, ah, but you know, like, everything's advancing so quickly. So- We were trying to do that, like, by the deadline of when we were filming. Luckily our shooting got delayed, uh, and then I was able to, you know, work with Digital Nation Entertainment to do our second round of capture. Um, but Digital Nation Entertainment already specializes in creating 3D for Unreal that plays back in real time- that's their thing. So that's technology that they've proven; it works. What they're doing is displaying a 3D grid on the characters that gives them a very accurate representation in 3D of what's going on in the world, with their infrared cameras, and then mixing that with the visual. So that's basically a structured light method of doing it. It's well regarded in the industry, works really well. It's solid, and they did a fantastic job, because I could play back the content- we have versions for our phones, for tablets, for computers, uh, for different bandwidths, and then of course for playing back in Unreal. Or we can do even higher quality for, like, 3D modeling or 3D software if we wanted to go even more in detail. Um, so that was really great. Uh, what we tried to do with, um, Dengenuity Labs with their initial capture was different, because we were using visual only. So it's not structured light. In this case, the idea was to capture visually and then generate, using NeRF technology, in real time, with a plugin for Unreal that would take this data- because it's NeRF data. NeRF data is not the same as visual data. So the visual is converted into NeRF data. So now you have this cloud of information that's moving in real time, that's animated in real time. And then playing that back, and then NeRF would fill in the blanks, right? So if there's, like, a dot missing, it fills in the blanks, it makes the character look whole. Um, it's really good. It's a cool idea. Uh, it works for still images. Uh, for sure, you can NeRF stuff- like you can NeRF a room right now, an environment, or create a static 3D environment, which is just essentially a moment in time. Sure, you can do that. Uh, but to have it play back in real time is still tough right now. But it's getting there. Have you revisited this with, uh, 3D Gaussian Splats? Well, we're gonna be doing, uh, tests in December, again with Dengenuity Labs, with an updated setup. So, let's see what they come up with.
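Some back-of-envelope arithmetic behind "couldn't work in real time": a wall playing back at 24 fps needs a fresh frame roughly every 42 ms. The render times below are illustrative stand-ins echoing the rough figures mentioned in this conversation (on the order of a frame every ten seconds for the NeRF pipeline at the time, versus on the order of a hundred frames a second for Gaussian splat playback); they are not benchmarks.

```python
FPS_TARGET = 24.0
frame_budget_ms = 1000.0 / FPS_TARGET   # ~41.7 ms per frame on the LED wall

# Illustrative per-frame render times (not measurements).
render_ms = {
    "NeRF playback (at the time)": 10_000.0,   # roughly 'a frame every ten seconds'
    "Gaussian splat playback": 10.0,           # roughly 'a hundred frames a second'
}

for name, ms in render_ms.items():
    if ms <= frame_budget_ms:
        verdict = "fits the real-time budget"
    else:
        verdict = f"misses it by ~{ms / frame_budget_ms:.0f}x"
    print(f"{name}: {ms:,.0f} ms/frame vs {frame_budget_ms:.1f} ms budget -> {verdict}")
```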
If they're using Gaussian splatting or, yeah, sticking with NeRF- I've seen, what, like, a frame every ten seconds with the NeRF versus, like, a hundred frames a second with the Gaussian Splats, and, like, performance has been, like, crazy. So, yeah, I'm very curious about that one, too. You just shot with, like, 16 RED Komodos- 16 Komodo cameras, so we're shooting in 6K, so it's a lot of data. We learned a lot of lessons also about what to set your shutters at and how much light you need, and, to get the image you want, what ISO you have to keep to keep the noise down, because you really need a clean image. The cleaner the better for this kind of setup. And of course, that idea is different. That's like a, uh, setup-teardown concept, where you can set up and tear down anywhere. You can basically go to a movie set, uh, find a section of studio you can use, set up for the characters, and then begin it out. That's the idea. Or bring it to, uh, set on a truck. Uh, it's not meant to be a permanent installation or a stage. So, you know, there's a lot of technologies that have to work together for that to come together. So, yeah, we're going to be working with them again, uh, trying out this image-based capture system again in December. But there's definitely a benefit to the structured light approach with Digital Nation Entertainment. Their setup that they have is solid. It's really for creative. It's fantastic because they're very clear on what works and doesn't work. They give you clear instructions on even hairstyles, costumes, wardrobe, and all that. And before you leave the set that day, you're able to see your character live. And if you are planned enough and stay on schedule, you'll even be able to see a rendered shot before you leave and see an output from any different angle. So it's pretty cool to have an animated 3D character that's really that clean and that tight, that doesn't have a lot of, like, cleanup to do. But hey, it was, again, it was a learning experience. I mean, these people are volunteering their- Interesting. Yeah. Yeah, they're volunteering their time, their energy, and their equipment, and their expertise, and giving us really advanced, uh, you know, access to technology, because they want more independent people to understand how this works. I could take these 3D characters that we captured, uh, with DNE and then put them into our environment. So now we're seeing them on the screen live, you know, playing back as a hologram when we're at Fuse, you know, on their LED stage. And now we're capturing all the information correctly, with the metadata and everything, um, to be able to do compositing later if we want to. So that's the benefit of, like, the stYpe system with Fuji's Premista Zooms. It can read the information from the lens, so at least for focus, iris, and zoom position you don't need encoders. And then it mixes that with the profile that stYpe created and sends that information down the packet line that goes to Unreal Engine. And then when you hit the record button in Unreal Engine, it actually records that data for you, so you have that information later. So what was really cool is to see the tight integration of all the different steps, um, to be able to, you know, make this all work together. That, yes, we can mix volumetric 3D, put a character in live, capture everything correctly, and then also be able to composite later if we want to as well. Um, that's why you see a lot of movies, high-end movies, uh, capturing their characters, uh, in 3D like this.
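A rough sketch of the kind of per-frame record implied by "when you hit the record button in Unreal Engine, it actually records that data for you": camera pose plus focus/iris/zoom and a pointer to the lens calibration, stored with each take so the virtual camera can be rebuilt for compositing later. Field names and values here are hypothetical, not the stYpe or Unreal schema.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class TrackedFrame:
    """One frame of camera + lens metadata stored alongside a take.
    Field names are illustrative, not a vendor schema."""
    timecode: str
    position_m: tuple       # camera position on stage (x, y, z) in meters
    rotation_deg: tuple     # pan / tilt / roll
    focus_m: float
    iris_t: float
    zoom_mm: float
    lens_profile: str       # which calibration to apply in post

frame = TrackedFrame(
    timecode="01:02:03:12",
    position_m=(1.20, 1.55, -2.40),
    rotation_deg=(12.0, -3.5, 0.0),
    focus_m=2.8, iris_t=2.9, zoom_mm=62.0,
    lens_profile="premista_midrange_cal_v2",
)

# Logged for every frame of a take, this is what lets you rebuild the virtual
# camera later and composite new elements that line up with what was shot live.
print(json.dumps(asdict(frame), indent=2))
```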
Um, because if you can capture, like, the right type of things- like, um, when people fly away in ships or helicopters and you want them in the doorway of something- that's the kind of stuff where you don't want to risk having Scarlett Johansson do that. And certainly, why pay a stunt double just to do that shot, uh, on a green screen stage with wires? Like, that doesn't make any sense. Uh, you're better off, I think, just capturing her in 3D so that you get her performance, and not somebody else's, in the doorway of that flying-away ship. And then you could do everything in 3D. Um, and I think that's a safer option, certainly, and certainly looks better too. So, um- Yeah, there's a lot of cool applications for that tech, and that's the other thing we learned, right? We learned about- we learned AI and how it's useful and not useful. We learned about virtual production, how it practically works. But then we learned how to do volumetric and how to use those characters later. Um, it's pretty important. And now we're using it for VFX, because we're still putting her in the ship and doing shots with it. So it's great. In, uh, the production, um- so we did a two-day shoot, right? A hundred setups roughly, um, and you used, uh, the SISU Cinema Robot? That's correct. Yeah, so can you walk me through, uh, just how the production went and, uh, the use case of that, of the robot? Spacewalk Moco helped us out with that. It was great just having the team there helping us get these shots where we're trying to make the motorcycle look like it's in motion. The whole idea for the robot was to match the animation that's happening in the background with the movement of the robot. And we tried a multitude of different ways to accomplish this, and it took some doing and a couple of sessions offline to be able to do that, but essentially we were trying to add motion to the foreground, you know, in conjunction with the background moving. We had, like, a lane-change shot that I wanted to do where our character is dodging a fire blast. It's hard to do. And it's difficult to make that happen. But if you think about what the robot should do and how it should move, you can make it look like she's changing lanes and stuff like that. Because you have to kind of zoom out and move at the same time and, you know, do different techniques. But it's great with the SISU robot. You can program all the different positions that the camera's pointing, along with the focus, iris, and zoom as well. So you can do all of that, put that into a timeline, and then the motor that's driving the lens is hooked up with the SISU interface. So when you program in 3D space and you frame up, you can do everything at once. And that's exactly what Andrew was doing, uh, with the robot- adding the moves to it, which was great. And it added dynamic movement, and it's really cool to have that kind of movement. We had a dolly as well from Fisher, um, a number 11. So, you know, we had that if we needed it, but honestly, these pre-programmed moves make a lot more sense- motion control for when you're trying to do movement on a still object. So the motorcycle was not moving; it's still. So certainly, if you want to make the character look like she's moving around, you have to add tilts and stuff like that, and movement. So having sweeps and stuff like that that line up with the background really, really sold it and made it pretty cool. And it was kind of like a lo-fi way of doing it.
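A toy version of the programming idea described above: camera pose and focus/iris/zoom keyed on one timeline and interpolated together, so a "lane change" move plays back the same way every take. This sketches the concept only; it is not the SISU or Spacewalk interface, and all the keyframe values are invented.

```python
from dataclasses import dataclass

@dataclass
class Key:
    t: float         # seconds on the move timeline
    pan_deg: float
    tilt_deg: float
    zoom_mm: float
    focus_m: float

def lerp(a, b, u):
    return a + (b - a) * u

def sample(keys, t):
    """Linear interpolation between programmed keyframes: the 'everything at
    once' idea of driving pose and lens from one timeline."""
    keys = sorted(keys, key=lambda k: k.t)
    if t <= keys[0].t:
        return keys[0]
    for k0, k1 in zip(keys, keys[1:]):
        if k0.t <= t <= k1.t:
            u = (t - k0.t) / (k1.t - k0.t)
            return Key(t, lerp(k0.pan_deg, k1.pan_deg, u),
                          lerp(k0.tilt_deg, k1.tilt_deg, u),
                          lerp(k0.zoom_mm, k1.zoom_mm, u),
                          lerp(k0.focus_m, k1.focus_m, u))
    return keys[-1]

# A made-up "lane change": sweep the pan while easing the zoom wider.
move = [Key(0.0, -8.0, 2.0, 80.0, 3.0),
        Key(1.5,  0.0, 2.0, 55.0, 2.6),
        Key(3.0,  8.0, 2.0, 40.0, 2.4)]

for t in (0.0, 1.0, 2.0, 3.0):
    k = sample(move, t)
    print(f"t={t:.1f}s pan={k.pan_deg:+.1f} zoom={k.zoom_mm:.1f}mm focus={k.focus_m:.2f}m")
```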
We tried handheld, and it was impossible to make it look good. And what's cool is the process: you've got to think about, like, okay, are you a follow car, or are you a camera that's strapped to the vehicle? Like, you think of it that way- how would you have filmed this normally on the street? You know, and that's what you're trying to emulate, because you're trying to speak cinematic language, which is already established. So certainly that's what's cool about the SISU robot system- it was invented for industrial use. Uh, and of course that's a large part of its business, but, uh, it has this motion picture business, which is, of course, being integrated into sets like ours, which was pretty cool. Well, I mean, it's still a really big robot, but it's relatively portable- Yes, as far as industrial robots go- that you can move it to location. We did an interview with them at NAB, and that was one of their big points: yes, you can move this and take it to location, and it's not bolted to the ground in your studio where you're stuck with it forever. No, and then it also has a track if you want a track system as well. So you can also track, which is helpful and adds a dimension to it. So you can add a track that it moves on while it's doing its thing. Um, no, it's a pretty cool system, like you said, very portable. Uh, and, uh, can be set up anywhere. And then, uh, post-production. Anything different or unusual about, like, how you went into post or what you did? I know you mentioned, because you had the scans, you needed to get other shots, and so you had the scans, uh- you were able to do that- but anything else that was, uh, different? No, we certainly tried to see if there were any, uh, AI tools, because, you know, we were committed to this idea of, um, seeing if there's any possibilities of incorporating AI tools into anything that we did. Uh, again, nothing that would upend anyone's job. I mean, there are tools like, oh, we'll do color correction for you and looks and stuff, but I'm like, really? Like, I already know what I want. Like, I knew months ago what I wanted- like, why would I need an AI to do that for me? That doesn't make sense. Certainly that was, like, the one thing that I saw in the process- like, okay, maybe that's viable to try to experiment with. One of the coloring tools, one of the tools where it's like, oh, we'll match your shots? No, the matching thing is interesting, certainly, because, like, with music videos, you might not have the budget for a color correction session at all, and you're just doing it with the editor or whoever's editing- and sometimes it's the director that's doing it, or the DP. In that case, yes, a matching tool would be awesome, but I don't know if I would trust it, because, yeah- There are a couple out there that are like, yes, we can match the shot or match the camera- or especially, I mean, it's probably more in the documentary world, but when you have stuff shot on different cameras and it's like, we need to match to make them look- No, we do have that issue. Yeah, we do have that issue, because we had two Komodo-Xs and one Komodo, and the tight shots are Komodo, and they definitely look different than the other two, even though the settings were the same- the footage just looks different. Different sensors- certainly they behave differently, and even if you're using the same pipeline of color science, it's still gonna come out a little different.
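For the shot-matching idea being discussed here, a minimal sketch using per-channel histogram matching from scikit-image: push one camera's frame toward a reference frame's tonal distribution. The frames below are synthetic stand-ins, and this kind of global match is a blunt instrument next to a colorist's pass (or whatever the commercial matching tools actually do under the hood).

```python
import numpy as np
from skimage.exposure import match_histograms

rng = np.random.default_rng(7)

# Stand-ins for two frames of the same scene from different bodies: the second
# frame is a touch warmer and lifted compared to the reference.
reference = rng.normal(0.45, 0.10, (180, 320, 3)).clip(0, 1)
other = (reference * np.array([1.10, 1.00, 0.92]) + 0.03).clip(0, 1)

matched = match_histograms(other, reference, channel_axis=-1)

def mean_rgb(img):
    return np.round(img.reshape(-1, 3).mean(axis=0), 3)

print("reference:", mean_rgb(reference))
print("other    :", mean_rgb(other))
print("matched  :", mean_rgb(matched))  # should sit close to the reference means
```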
Normally you'd just have a colorist match them; that's usually what a colorist would do. But in that case, yes, I think I will reach out to you to find out about those softwares, and we'll give them a go to see if there's any matching stuff out there that works. We don't have too many issues with matching. I don't know the names offhand, but I know we've got links in the newsletter. Yeah, I'll look them up and send them to you. But in terms of creative work, I don't know, there are already so many tools. With Resolve you can download tons of LUTs, try a bunch of stuff really quickly, and use your eye. You know, I'm almost 30 years in the business, so I know what I'm looking for, and with my team I'm sure we'll get there. But afterwards I think I'll try out some of these AI color tools just to see what the results would have been. I'm curious. With the other stuff, I mean, we're seeing AI already incorporated into Premiere. Photoshop with Firefly, yeah, and things like that. So actually, that stuff can help. I can see how, if you want to do generative fill to fix shots, something where a boom pole came in and you just need to fix it instead of rotoscoping; there's a rough sketch of that idea below. Got to change the logo, remove the logo, yeah, that stuff has been very interesting. Yeah, I've been messing with that in Runway, but now that it's built into the tools, I feel like that's the thing with a lot of the AI features: having them in the central spot where you work the most is one of the biggest advantages. Yeah, I think if you could do away with rotoscoping altogether, that'd be great. Rotoscoping is the job that no one wants to do, and you're usually getting it done for pennies on the dollar overseas. Everyone outsources rotoscoping, from the most independent films to the highest-budget movies we have. They all do the same thing. That's a tedious task that I'm okay with not existing, and if there are ways to go around it, that's okay. It's not creative. It's nothing creative. I think that's where we have to look at AI: are you replacing a creative job, a job where a thinking person is putting thought into it, with expertise and their background and the knowledge of what works and doesn't work, or are you just replacing something that's rote and tedious and doesn't really matter? Does this free you up to do more creative tasks instead of spending time on tedious stuff that is not creative? I tell you what, even the budget that's spent on rotoscoping, if it were spent on something else, would just make the shot look better. If you could pay an artist to work another day on the shot instead of doing five days of tedious rotoscoping, what producer would not do that? I mean, honestly, wouldn't you rather pay the artists doing the creative work more, keep them longer, get the shot the way you want, and have a better output? I think, yes. And we already use software to get rid of these tedious tasks. I come from linear editing. I don't even know if you know what that is. That's tape to tape. We had to actually know our edits.
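Generative fill in Photoshop or Runway is a black box, but the "paint out the boom pole instead of rotoscoping it" idea can be approximated with classical inpainting. A minimal OpenCV sketch, with placeholder file names and a faked mask, as a lo-fi stand-in for the AI tools discussed above:

```python
import cv2
import numpy as np

# Frame with a boom pole dipping into the top of the shot (placeholder file name).
frame = cv2.imread("shot_frame.png")

# Mask of the offending pixels: white where the boom pole is, black elsewhere.
# In practice you'd paint this roughly in any image editor; here we fake a strip.
mask = np.zeros(frame.shape[:2], dtype=np.uint8)
mask[0:40, 300:900] = 255

# Fill the masked region from surrounding pixels (radius 5, Telea method).
# Fine for soft, out-of-focus intrusions; AI generative fill handles detailed
# or moving backgrounds far better.
cleaned = cv2.inpaint(frame, mask, 5, cv2.INPAINT_TELEA)

cv2.imwrite("shot_frame_cleaned.png", cleaned)
```

For anything with parallax or a moving camera you would still need per-frame masks or an AI tool, which is exactly the tedium being described.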
So why, as a director now, 30 years later, am I very good with storyboards and determined to stick to a plan? Because that's what I had to do when I started out. Literally, when I was a kid, 17 years old, in high school, I had to create a TV show. It was linear, my friend; we had to know every shot before we laid it on the tape. Because if you don't, you miss the timecode in your head, the whole project's messed up, and you've got to start all over. So you learn very quickly to plan ahead. I think that's what I've learned through this process: what damage can AI realistically do, what is it doing right now, and what's the potential in the future. Yeah, there's potential for AI, but I don't know if it's going to replace everybody just yet. That's going to take a lot of time. So, looking back and talking about budget: factoring out the sponsorships and the in-kind donations, if this was just a music video shoot going through the process you went through, where do you see these new tools and processes helping, doing things like reducing the budget or saving money? How have things changed? Can we do things now with this new technology at a lower budget point? Well, let's be realistic. Even with the in-kind support and funding that we got, it's still a $250,000 project. Your stage is expensive, your production's complicated, and you have more people on your team than you normally would, because all of your VFX team works ahead of time and they're all on set with you. It's still not a cheap endeavor, by any means. But it could have been much worse. It could have been another 15 to 20 percent more, I think, if we didn't have some of the tools and the clarity we had at the beginning. That certainly centered everything. You're not doing multiple iterations of stuff, even costumes and wardrobe. You're not going back and forth with the designer, which costs you days, weeks, hours, money, time. So you're not doing that. As simple as that. You have a very clear idea: hey, this is what we want to do. Oh, cool, what about these? Great, it works perfectly, because I know my world, I know my color scheme. That's where I think the savings came in: knowing where everything was and how it all fit. Certainly with the creation of the virtual production backgrounds, the 3D environments, we went through some headaches and some trial and error. We went through different teammates who were contributing to the whole VAD creation. And when we learned what worked and got that finalized, we were thankful for the work we had done ahead of time. So having the clear storyboards, having the clear direction, knowing what our angles are, knowing what I have to build and don't have to build, all of that totally came into play. So again, 15 to 20 percent of time and money saved over there. In post-production we're saving a ton, to tell you the truth, because everything's already mapped out, you already know what you need, so there's much less time you have to spend on visual effects. And it looks really good, because the angles and everything match. I'm not creating roads for the motorcycle to travel on afterwards. I certainly don't have to marry things together and try to figure out by eye what's going on with the angles.
I don't have to use camera tracking software to figure out what the movement is. It's so much easier to just look at it and go, oh, cool, that's great, or, I want a different angle. And luckily I paid attention and, according to my storyboards, took different angles as I needed them. So that's what helped: thinking about this stuff ahead of time. And then I think the post-production timeline is much shorter with this virtual production shoot, where I relied on a lot of in-camera visual effects, ICVFX, to finish my day. I have very few actual VFX shots to do, just a handful, maybe six. It's great, because it could have been the other way around. So pretty much most of the stuff you shot in camera was your final pixel, done? Yeah, yeah, it's great. Do you think fewer production days would go into this route, too? On stage it was, like, two days; what if this was green screen, or practical sets? You know, I think it would have taken longer on green screen, because it would have been harder to communicate what's going on, so matching your action to your environments would have been more difficult. You could get around that by creating your environments ahead of time and then going to a green screen stage that has camera tracking, like Castle Studios in Burbank, where you walk in and can see live on the monitors what's going on. So you could tape that, play it back, and show your actors: hey, you're in a spaceship and it looks like this. And they're like, oh, I get it now, it's laid out like that. That helps. Totally cool, but of course it would have been an extra step; it would have taken more time. So I think the shoot would have been longer even there. With the LED, you walk in and go, oh, okay, I'm in. And then you have the Kino Flos showing you the same environment, so you're lit by the same content and you're kind of seeing it too. I think it's much faster for the actors to get into it. These women were great. They were rehearsed, and they knew their fight choreography, which was half dance, half fight, and their dance choreography really well. We had rehearsed that ahead of time. They came on stage and they were like, okay, I'm in. And then, boom, it's there. I had very little to do there, so I think I saved time interacting with the actors. Certainly, as a director, you don't want to pull back and just concentrate on the technical. You want to be there, talk to them, and make sure you communicate the emotion, but they got it because they knew what was going on. They knew the storyline. The storyboards are clear; it's right there in front of them. They can see what's happening, they get what's going on, they have very few questions. It's also great for makeup and hair, because you don't have to deal with, oh, we can't have flyaways because it's green screen, and this and that. You can be messy, or you can get the look you really wanted in the first place without having to compromise. And you have no green spill on the actors, so you don't have to dial that out later, and in post-production you're not trying to get rid of stuff all the time. There's a lot to be said for that. You've just got to be realistic. So I would say, realistically, using some AI tools probably saved us some time and money.
But it wouldn't be like 50 percent of our budget; I don't think that would be realistic to say. Maybe it helped us a little bit, moved us along. Anything we didn't cover across this production that's worth mentioning? Well, I just want to thank, from the bottom of my heart, everyone who was involved in this project from start to finish. We had great people working on this. It was a pleasure to work with everybody and bring you all together to pull off the technological feat that we did. Yes, it was a low-budget thing, but it certainly doesn't look like it, and it doesn't feel like it. I love the fact that we got people to help us out and sponsor us, and companies and teams of people who worked with us. Please go see our TikTok and Instagram or Facebook or LinkedIn, wherever you want to go. FearlessProductions.tv is out there, and we have a lot of videos that show how we did everything, so you can get an idea of what our process was like. It seems like, Joey, you've been following us and really paying attention, so you had some very specific questions. I really appreciate that. Yeah, you've had a lot of really great behind-the-scenes videos documenting every stage of the process. Thank you. And we really want people to ask questions, so please reach out. We're happy to help. We certainly want to help filmmakers understand all this technology, and if there's anything we can do to further the use of this technology in the future, we'd love to see that happen. Certainly all the players that were involved are looking for that. They want people to use this tech, and it doesn't have to be a fifty-plus-million-dollar project. It could be your project. If you're realistic and you know what you want, you can do it. Our budget shouldn't reflect what it costs to do this, because you could do this for a lot less as well. If you knew what resources you needed to bring to bear, it would be a much more affordable proposition, and you could certainly do it a lot quicker than we did. This was a big learning experience for us. I'm actually going to expand on that: how would you do this cheaper, or less expensively? Well, you could do it at a different stage; the physical stage is the largest portion of your hard costs. So if I shrank down my shots and was able to keep the camera tracking and do XR, then certainly I could go to a much smaller stage. I could even go to a stage with floor LEDs, since we did dance; that comes into play, because we had to avoid showing feet unless we had a floor. We did have a floor, but it's not exactly the way I would want to do it. It'd be cool to do what will.i.am and J Balvin did and have a floor the dancers stand on that's actually lit up as well. That would be kind of cool to do. We'd have to use an expensive technology for that, and for dance movements, if you want enough space, you've got to go to a stage like XR Studios. But more stages are coming up that have this capability. So I would say maybe a smaller stage with floor LED built in could be a possibility. The other thing is taking less time to create your backgrounds.
So now that I have a lot more experience, especially with what I learned about Unreal, I know how to talk to artists and I know what I need. Even though we were budgeted and tried to be careful about what we created, we still created too much. There's still three miles of cityscape. We didn't really need three miles of cityscape; there's stuff that was created that was never filmed, that you never had to be in. So we should know the tricks better. We should previs earlier and know exactly what we have to create, things like that. So I would say the savings on that $250,000 budget would come from the time we spent on it. If we could lessen how much we went back and forth creating environments, we could certainly save some time, and money there too. So yeah, maybe a smaller stage, maybe a shorter timeline for VAD creation, because we're more careful about what we want. And it depends on, again, you can't just say "virtual production," you have to tell me what the shot is. I think that's the main thing I learned from this. You can't say those words. You can't say, oh, I want to do a virtual production shoot. No one should care about that. They should care about: what's your shot? What do you want to do? Oh, I want two people driving in a car for one minute on a road. Great. You don't even need camera tracking for that. We can just put you in a magic-box car process shot with ceiling and wall LEDs, you're good, you could do it there. Easy enough. Fuse Technical Group has a second stage that's really perfect for that, because they have all these panels you can put in any direction you want. It's perfect for car process. You could do that quickly, not a problem. Then you tell me, well, no, I want a spaceship to land and someone to come out of it, and it's a big wide shot and they're small in the frame. Okay, now you need something else. Now you need a big stage. Now you need XR capabilities. Now you possibly need a floor LED, right? So I think what I learned the most is: identify what the shot is, then work out how you're going to capture it, and then figure out what everyone needs. I think that's what I've become good at, and I want that to help me as a director. What I want to do is exploit these new technologies and offer that as a service, and also create my own content with it. So now I will be creative first, come up with a plan, and then know how to execute it. But it even integrates with just a regular drama. I want to do a feature film that's a straight-up thriller, and it doesn't require a lot of virtual production, but I can now see how I can use it. You can do simple stuff that people don't think about. Like, I don't want to close down a bank. Okay, cool. Go in and NeRF it. Make it 3D. How long is the shot? It's, you know, two minutes long; she needs to go into the bank and then go to the counter. What's the angle? Oh, it's this angle. Great. NeRF it from that angle. Have the 3D environment. Have her walk up to a desk that's real, created by production design. You're done. It's great. You could do that now if you needed to, right? And now you can think about it and go, oh, cool, I can execute it that way.
So this is the stuff that's going to save Hollywood and LA: the capability of directors and creatives to work this way, because you have actors that are in demand. Sometimes you need a pickup day and you don't have the ability to go back to that location, right? Or you need magic hour to match a shot, because the actor said the wrong words, it doesn't work with the edit, and now, after all this time, you've got to do a reshoot. Well, it'd be great if you had just NeRFed the background while you were there, or created a 3D 360 capture while you were there, and then could do a close-up, throw in that shot, relight it, and match it. Why not? It's going to be part of the production process in the future. It totally is, I think. I think pickup shots are going to be done this way in the future. It makes a lot of sense. Mm hmm. Yeah, it makes a lot of sense to be able to, like, cheat a close-up when you need a pickup shot of some other location to make it work. Well, I really appreciate the chat and all the insights, and I'm excited to look out for the film, or if someone's listening to this in a few weeks, it'll be out already and we'll have links to it. So yeah, really appreciate it. Thank you so much. Thank you. Thanks for having me. And that is it for this episode of VP Land. Thank you so much for watching slash listening. If you enjoyed this episode, please give it a thumbs up or five stars on whatever podcast app you're listening on. And for more up-to-date news and behind-the-scenes insights like this, be sure to subscribe to the newsletter at vp-land.com. Thanks for watching. I'll catch you in the next episode.
