I'm making this video both as an explanation of the tech, AND as a blueprint for any event organizers or content creators who may want to replicate or build upon what Aaron Parecki and I did at NAB 2024, where we shot short form interviews on the trade show floor using a professional camera, then published edited vertical videos as Instagram Reels within an hour or two of each interview. This video will explain the objectives and decisions, the camera configuration, AND the remote editing workflow behind our "Atomos News at NAB" project. Let's get into it.

Let's talk about the big picture objective of this project. First – just in case you're not familiar with the event – NAB is a huge annual trade show, actually over 100 years old, focused on the broadcast industry. Fun fact: in 1923 when it was founded, it was called the National Association of Radio Broadcasters, because – wait for it! – television hadn't even been invented yet. To this day, NAB is for content professionals, from creators to distributors to managers and so on. Now, I've been going to NAB for over 20 years, and I highly recommend it. This year's NAB had over 61,000 registrants and more than 1,300 exhibitors, making it really easy to miss something. Our "ATOMOS News at NAB" series delivered short form news stories from the NAB show floor to social media in near real-time, both for those who couldn't attend and for attendees looking for what they'd missed. Needless to say (but I'll say it anyway!), the workflow I'm about to describe can be applied to ANY event of ANY type where the objective is to get high quality news stories onto social media QUICKLY.

Let's start at the end – the final delivery, which dictates much of our format. I chose Instagram Reels as the delivery platform primarily because of its collaboration feature. When you invite a collaborator on Instagram and they accept, the Reel shows up in their Instagram feed just as it does in yours. This of course increases exposure for the video, but it also means that every company we interviewed could choose to have their interview show up on their own Instagram page. Free marketing for them, which encouraged them to give us the interview. It also meant the tagged sponsors could have any or all of the Reels on their pages as well.

We could have cross posted to TikTok like we did last year, but the simple fact is that most of the tech companies we knew we'd be interviewing, including our sponsors, simply don't have a big TikTok presence. We tried it last year, and it was largely a waste of time. Of course, for your industry or clients, that could be totally different. We would have loved to cross post on YouTube, especially since both Aaron and I have much higher subscriber counts on YouTube than on Instagram, but YouTube Shorts are currently limited to just 60 seconds, whereas Instagram Reels can be 90 – and we learned last year that the majority of the time, 60 seconds simply isn't enough to tell these tech stories. But 90 seconds… is. And finally, we could have also posted to Twitter, or X, and maybe we should have, but we decided to focus on a single platform and try to drive all roads… to Rome.

Alright. Now that we know we're delivering vertical video for Instagram Reels, let's talk about the camera's orientation and resolution. The obvious answer to vertical video is to shoot with a vertical camera.
But as anyone who's seen my Open Gate videos knows, when shooting with LUMIX cameras like the micro four thirds LUMIX GH7 or the full frame LUMIX S5IIx, shooting 6K Open Gate means you have plenty of resolution to shoot horizontally, then crop both widescreen 16:9 Ultra HD and vertical 9:16 full HD – or even vertical Ultra HD if you actually needed it – from the same 6K image. So that's settled, right? Shoot Open Gate horizontal! Well, not so fast. If we were shooting on the show floor to edit and deliver after the show, then yes, shooting Open Gate horizontal would undoubtedly be the best solution. But our PRIMARY purpose wasn't editing later. Our primary purpose was editing now… or, then. You know what I mean. So… hold that "horizontal" thought for just a minute.

I'll sidestep for a moment and introduce the remote editing aspect of this. The footage we were shooting on the NAB show floor was being converted to proxy files by an Atomos Shogun and uploaded to frame.io via a Sclera bonded cellular backpack – and don't worry, I'll come back and dive into all those details later – but the important point here is that the proxy file made by the Shogun is full HD: 1920x1080. That would mean that if we were shooting Open Gate horizontally, our editor would only have 1080-pixel-tall footage to cut vertical video from – really not enough, since the deliverable for Instagram is 1920 pixels tall.

So the solution was to shoot Open Gate… vertical. Watch: here's the 3:2 horizontal aspect ratio of full frame 6K Open Gate. When that's proxied to a 16:9 1920x1080 file, you get pillarboxes, since the aspect ratios don't match. Now if I rotate the camera vertically, the pillarboxing becomes letterboxing, and I get much more vertical resolution – still not a full 1920 pixels tall, but the usable image once you crop past the letterboxing is 1620 pixels tall, so it only needs scaling up by 19% to reach 1920 tall – and that was something we could live with.

So now you're asking: "Well… why bother shooting Open Gate at all then? Why not just shoot vertical video in full HD? That way the editor gets exactly what they need, right?" Well… because we ALSO wanted to be able to cut Ultra HD 16:9 wide videos… later. Lemme show ya how. Even when shooting VERTICAL 6K Open Gate, that vertical 6K file is still WIDER than Ultra HD – meaning we could cut full resolution 3840-wide videos after the show, while also cutting a slightly scaled vertical HD edit, from the proxy, during the show. Best of BOTH worlds.
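By the way, if you want to double check those numbers, the arithmetic is simple enough to script. Here's a little Python sanity check – the only inputs are the Shogun's fixed 1920x1080 proxy frame and the 3:2 Open Gate aspect ratio:

```python
# Sanity-checking the proxy math. The only inputs are the Shogun's
# fixed 1920x1080 proxy frame and the 3:2 Open Gate aspect ratio
# (on the S5IIx, the 6K Open Gate frame is 5952x3968).

PROXY_W, PROXY_H = 1920, 1080      # Shogun proxy is always 16:9 full HD
GATE_AR = 3 / 2                    # Open Gate sensor aspect ratio

# Horizontal camera: the 3:2 image fit inside the 16:9 proxy frame
# is pillarboxed, so the active image is only 1620x1080.
active_w = round(PROXY_H * GATE_AR)          # 1620

# Vertical camera: rotate that proxy 90 degrees in post and the active
# image becomes 1080 wide by 1620 tall inside a 1080x1920 frame.
usable_h = active_w                          # 1620 pixels tall

# Scale factor needed to fill a 1080x1920 Reel (the sides that now
# overshoot 1080 wide simply get cropped away):
print(f"Scale: {PROXY_W / usable_h:.2f}x")   # 1.19 -> the ~19% scale-up

# And the "still wider than Ultra HD" point: even the SHORT side of
# the 6K Open Gate frame exceeds UHD's 3840-pixel width.
print(3968 > 3840)                           # True
```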
Again, I'll go into more details on the actual proxy editing workflow later, but now that we KNOW the camera will be vertical, let's check out the rigging. After all, someone needed to haul this thing around the show floor for four straight days.

Joining us on the red carpet today is the LUMIX S5IIx, adorned entirely in Kondor Blue rigging. The rigging for this whole setup was all Kondor Blue, and here's how it came together. And yes, before you ask in the comments, links to all of these pieces are in the description below. The LUMIX S5IIx is of course in the S5IIx cage, and it's mounted with the grip and SD door facing down. Yes, this means the monopod mounting plate blocked the SD door, but we needed the HDMI port on the other side to be accessible. I just mounted it to the monopod plate normally, but a better solution would be to use the 501/ARCA PIVOT CAMERA PLATE and replace the quick release with an ANTI TWIST NATO CLAMP – that would provide a more secure, non-twistable solution. That would also make it quick to remove if you did need to swap out the SD cards.

Next, the top handle TALON XL was attached to the NATO rail on the cage, and an important feature of this handle is that it includes a record trigger. Since we were recording both internally to the S5IIx in Open Gate and externally to the ATOMOS Shogun for the proxy, it was critical that we start and stop recording by triggering the camera, not the Shogun. The handle is wired into the camera, so its easily accessible button was used to trigger camera recording. The camera, in turn, automatically tells the Shogun to start and stop recording, so we know we're always getting both files. On the front of the handle I mounted a NATO SWIVEL TILT MONITOR MOUNT, which is where I mounted the ATOMOS Shogun. I could have used the ATOMOS Ninja V or V+ for this, which would be smaller and lighter, but I actually wanted the bigger display, so we went for the 7-inch Shogun. The Shogun was mounted vertically. I put a small 3½ inch MONITOR CAGE NATO RAIL on the bottom – well, on the side, which is now the bottom – and it has another larger rail on the side to mount it normally if needed.

For power management, fortunately Kondor Blue had just released their new V-MOUNT PRO BATTERY PLATE, which I mounted using a MINI-LOCK QUICK-RELEASE PLATE. Here's Lukas from Kondor Blue talking about this new V-Mount plate at NAB:

[Lukas] The Pro Battery Plates come in Gold Mount and V-Mount; this is our V-Mount, completely made out of machined aluminum. The cool part is all the I/O. We've got reversible D-Taps here, so you can plug in your D-Tap going up or going down. We've got a little fighter-jet-pilot-style switch here for on and off. We've got expansion module pins here, so you can add an expansion module for 2-pin LEMO or 3-pin Fischer. Of course USB-C PD – so right now we're powering this LUMIX S5IIX with USB-C PD directly into the camera, and it'll take PD in and PD out. There's another PD right here, so you can plug this into the wall, charge your battery, and power your whole entire rig at the same time. And a myriad of different mounting options on the back. You can mount this sideways, undersling it, you could put a NATO on it; there are so many ways to mount this – it's not just for rods.

[PHOTOJOSEPH] Now to get some cables on here. Power to the Shogun comes through a COILED D-TAP TO LOCKING DC cable. This one has a straight connector, but the right-angle version would have been better here. The RIGHT ANGLE COILED HDMI CABLE goes between the camera and the Shogun. Then to power the camera, I used a D-TAP TO LUMIX DUMMY-BATTERY CABLE.

Now for audio. The next pieces to go on here are the Panasonic XLR1, and the Sennheiser AVX receiver plugged into that. We used a rented AVX-835 handheld mic, which, as you could hear in the audio earlier, is a tremendously good mic in a noisy environment like the show floor. We rented from LensRentals.com, which, if you're in the US, is a great place to rent from. On the show floor I actually ran into Ally from Lens Rentals – and we're actually using a microphone from LensRentals.com! I understand that you guys have a TON of Atomos products on your storefront as well. [ALLY] We have roughly 80 to 90 unique listed products from Atomos. We love them; we've been carrying their products since the original Ninja! [PHOTOJOSEPH] Rental is always a great way to get your hands on gear that you don't want to buy. On the cold shoe of the XLR1 I also mounted the receiver for the RØDE Wireless PRO.
Aaron and I each had one of the RØDE Wireless PRO mics pinned to ourselves, using the included lav mic to keep it relatively hidden. This was just for backup, in case the main mic dropped out for any reason. As an omni mic it of course picked up a ton of ambient noise from the show floor, but our editor did tell us that he used a few seconds of its audio, cleaned up with some AI noise reduction, where we had some unusual dropouts from the Sennheiser. So I'm really glad we had the RØDE as backup. The output from the receiver gets plugged into the mic port on the S5IIx, which can capture four-channel audio: two channels from the XLR and two from the mic input. And all of that audio of course gets saved in the proxy files as well.

For audio monitoring, I didn't want a cable going from the camera to headphones, so instead I connected this little Bluetooth transmitter to the headphone port on the Shogun and just taped it onto the back, then paired these wireless headphones to it. This made monitoring wireless, which turned out to be a really good idea. I also added a few Kondor Blue MONDO TIES to the setup, to keep cables tucked away. All locked in place with the Kondor Blue EDC MULTI-TOOL, of course.

The lens we used is the Panasonic 24-105mm f/4; a big lens, but it has excellent close focusing and a great zoom range at a constant f/4 aperture, which suited us very well on the show floor. To the lens I attached a magnetic Freewell VND filter; since it's magnetic, it was easy to pop off when it wasn't needed. And all of this was supported on the YC Onion Pineta monopod. We actually discovered this monopod last year on the show floor, and it was perfect for this project. We were constantly adjusting the height of the camera when shooting b-roll, so its single-handle release to adjust height was awesome. There's actually a new model of it now – Aaron talked to Sandy at YC Onion about it; check this out: [SANDY] As you can see, the biggest feature of the new Pineta Pro monopod is that the lock is different; you can literally control the height with just one hand.

[PHOTOJOSEPH] The next part of this puzzle is the internet connection. As anyone who's ever been to a trade show knows, IF there's WiFi available, it's useless, and if you try to use your mobile phone as a hotspot, there's so much traffic in the air that getting anything more than a web page to load is pretty challenging. So we needed something bigger. Enter Sclera Digital. Sclera rents a bonded cellular solution that sits in a backpack. It runs all day on a big battery, and the mushroom antenna to the cell towers makes for an extremely robust connection, even among all the digital garbage on the show floor. The link was so good, in fact, that we bumped the proxy recording bitrate from 6 Mbit/s to 10 Mbit/s midway through the first day, and it was still perfect. In fact, next year we may try bumping up to a 4K proxy file and increasing the bitrate even more. If that's reliable, then we won't be scaling up at all, and our Reels can look even better. The Sclera has WiFi built into it, but by far the best connection is a wired one, so here we did have an ethernet cable running from the backpack into the Shogun.
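To put some rough numbers on that bitrate decision, here's a quick back-of-envelope calculation in Python. These figures are illustrative only – real-world protocol overhead and retries will eat into the headroom:

```python
# Back-of-envelope proxy-upload math (illustrative; protocol overhead
# and retries will eat some of the headroom in practice).

bitrate_mbps = 10        # the proxy bitrate we settled on
clip_seconds = 90        # a typical Reel-length take

clip_mb = bitrate_mbps * clip_seconds / 8   # roughly 112 MB
print(f"A {clip_seconds}s clip at {bitrate_mbps} Mbit/s is ~{clip_mb:.0f} MB")

# For the upload to finish roughly as recording stops, the sustained
# uplink needs to stay above the recording bitrate:
for uplink_mbps in (6, 10, 20):
    ratio = uplink_mbps / bitrate_mbps
    verdict = "near-realtime" if ratio > 1 else "will lag behind"
    print(f"{uplink_mbps:>2} Mbit/s uplink -> {ratio:.1f}x ({verdict})")
```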
Now, I've said that the Sclera connection was really good, but what do I mean by that? To appreciate the reliability of it, you need to understand how the Atomos Shogun uses its internet connection. The Shogun is creating a proxy file in realtime of the video that it's receiving over HDMI. It's also simultaneously trying to upload that proxy file to the cloud. I'll show you later how to set up where the file goes and what size it's creating, but the Shogun is TRYING to get that proxy file up there as quickly as possible. If the internet connection is slow, or constantly dropping, the Shogun will just keep trying to push it up, piece by piece. If it completely loses internet, or even if you power cycle the Shogun, it'll pick up where it left off as soon as it can. In a best case scenario though, where the internet bandwidth is greater than the bitrate of the file, and totally reliable, the video clip will be uploaded in near realtime, WHILE you're recording. It doesn't wait until you stop shooting – it uploads while it's capturing. Which means in an ideal scenario, as soon as you stop recording, the last bits go up, the file is closed, and the upload is done. And THAT is what we experienced on the NAB show floor. Time after time, file after file, every single file got pushed to the cloud immediately. Last year we sometimes would get so backed up that we'd have to go stand by the windows to get a better signal and wait for the files to upload. This time though, whatever updates Sclera has made in the last year certainly made a huge difference. The connection was flawless.
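To make that resume-where-it-left-off behavior concrete, here's a conceptual sketch in Python. To be clear, this is NOT Atomos's actual firmware or protocol – the endpoint and chunking below are hypothetical – it just illustrates the upload-while-recording loop I described:

```python
# Conceptual sketch of upload-while-recording (NOT Atomos's actual
# code; the endpoint and chunk protocol here are hypothetical).
# Push the file as it grows, and resume from the last confirmed byte.
import os
import time
import requests

UPLOAD_URL = "https://example.com/upload"    # hypothetical endpoint
CHUNK = 1 << 20                              # 1 MiB per attempt

def push_growing_file(path, still_recording):
    sent = 0                                 # last confirmed byte offset
    while True:
        size = os.path.getsize(path)
        if sent < size:
            with open(path, "rb") as f:
                f.seek(sent)
                data = f.read(CHUNK)
            try:
                requests.put(
                    UPLOAD_URL, data=data,
                    headers={"Content-Range":
                             f"bytes {sent}-{sent + len(data) - 1}/*"},
                )
                sent += len(data)            # advance only on success
            except requests.ConnectionError:
                time.sleep(1)                # link dropped: retry same chunk
        elif still_recording():
            time.sleep(0.5)                  # wait for more frames to land
        else:
            return                           # file closed and fully uploaded
```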
Now let's talk about this whole cloud connection and how it works. I'll start with a high level overview of what this even means. The idea is that we're recording BOTH a full resolution file – either internally to the camera or into the Shogun; in our case, Open Gate in the camera – AND a lower resolution, lower bitrate proxy file into the Atomos Shogun. The Shogun then uploads that proxy file to a cloud service – ideally in near realtime, as we just talked about. A remote editor can then access that proxy file immediately after it's uploaded, and start editing it. And that's exactly what we were doing.

The ATOMOS Ninja V or Ninja V+ with the ATOMOS Connect, or the ATOMOS Shogun, are what make this possible. The hardware handles the file creation and upload, but where it ends up is handled by ATOMOS Cloud. The ATOMOS Cloud service can transfer files to a variety of destinations; in our case, to Adobe's frame.io. To connect a Ninja or Shogun to ATOMOS Cloud, you just need to look up the "three words" on the Atomos device, log into your ATOMOS Cloud account, and add the device using those three words. Once connected, you can add a destination – in our case that's frame.io – where I choose my account and the project. You can choose a variety of settings, including the bitrate, bit depth, and resolution. You can do 4K, which again we'll try next year, and most importantly, you can set your bit depth to 10-bit. By choosing 10-bit, you're creating HEVC files, which are not only higher quality, but also much more efficient. Then you can set your bitrate from one to 30 megabits per second. We used ten. And here's what those HEVC, 10-bit, ten megabit proxy files looked like. On the left you're seeing the 1080p file at 100% on this 4K timeline… in the middle, at 119% as we used it… and just for reference, on the right it's scaled up 133%, which fills this Ultra HD frame. We shot Log, and applied a color transform in Resolve. You could shoot with a look baked in if you prefer, just to simplify the workflow, but my preference is to always shoot and edit in Log.

Finally, we get to the editing part of the process. We were using DaVinci Resolve, although Adobe's frame.io can also be accessed from Adobe Premiere – obviously – or Final Cut Pro; and since you can actually access the media in frame.io from a web browser, you could use literally any NLE you like. Now, I'm not going to dive deep into the editing process here, as it's not really relevant to this workflow discussion, but I do want to show you how the media is accessed, and how the editor sent the cuts back to us for approval – also, of course, using frame.io. And I want to remind you here of the different geographies of this process. Aaron and I were in Las Vegas, shooting the interviews. Our editor lives on the other side of the country, in Ohio. We actually could have worked with an editor anywhere in the world, but we did need someone close to our timezone, as this was definitely an all-day job. And I, of course, am now in Europe, accessing the same media and same library that my editor in America used. So, let's head over to my editing system for this part of the video.

Since we did use Resolve, we were able to use the Blackmagic Cloud database, which means I have access to all of the edits that the editor made, here. First let's see how the media is accessed. Step one is to connect your frame.io account to your NLE. In Resolve, that's under the preferences. Go to Internet Accounts, scroll down to frame.io, and sign in. You'll also need to assign a cache location for any media that's downloaded. Then in the Media tab, you'll see Frame.io as just another source for media, along with your local and network drives. Toggle open Frame.io, open your project, and from here you'll have to dig down a little bit… there's the Atomos project, and all the media goes into a folder called Cloud_Devices. You open that, and you'll see folders by date, then inside of that, video… and then inside of that, finally, the ATOMOS-Cloud-Studio device. This is where the media actually shows up; as you can see, we've already organized ours into subfolders, but it would normally just show up right here. Also, if you're sitting in this folder and new media comes in, unfortunately you're not going to see it just automatically appear. If you leave and come back to the folder, it'll update, or you can force a refresh, like this: just right-click, and choose Refresh.
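As a side note, if that manual refresh ever became a bottleneck, frame.io also has a public REST API (v2 as of this writing), so you could poll a folder for newly landed proxies with a short script. A minimal sketch – the token and folder ID are placeholders you'd fill in from your own account:

```python
# Polling a frame.io folder for newly landed proxies via the v2 REST
# API (a minimal sketch; the token and folder ID are placeholders).
import requests

TOKEN = "fio-u-..."             # a frame.io developer token (placeholder)
FOLDER_ID = "folder-uuid-here"  # e.g. the Cloud_Devices date folder's ID

def list_children(folder_id):
    r = requests.get(
        f"https://api.frame.io/v2/assets/{folder_id}/children",
        headers={"Authorization": f"Bearer {TOKEN}"},
    )
    r.raise_for_status()
    return r.json()

# Print whatever has arrived so far; run again (or loop) to spot new clips.
for asset in list_children(FOLDER_ID):
    print(asset["name"], asset.get("filesize"))
```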
Since this media has already been organized, we'll jump into a folder here, like the Kondor Blue one, and add this media. Now, the first time you look at the media in frame.io, you're going to see it as red "offline" thumbnails, but as soon as you hover your mouse over them, they'll update. You don't actually have to do that, though; you can just drag and drop them into the bin. So let's go back to the Kondor Blue folder… I'll grab all of these, and drag them down. Ignore the preview for the moment – it's not gonna look right yet. All I gotta do here is select all the media, add it to a new timeline… make sure I have all four audio tracks so we can see those, and then open that timeline. Now if I jump over to the Fairlight page, you'll see all of the different audio tracks. Channels 1 and 2 are the XLR microphone – the same signal on both channels – and channels 3 and 4 are the lav mics; one would be full and one would be empty, depending on who was on camera.

Now, as you saw, the video is Log, and it also needs to be rotated and scaled. In my normal workflow, I'd work DaVinci Color Managed in Resolve and set the Input Color Space of the video clips to Panasonic V-Gamut/V-Log, and that would apply the Rec709 conversion to my Log clips automatically. Then to rotate them, I'd go into the clip attributes and rotate them 90˚. For reasons we never figured out, though – I guess it's a bug – both of these properties are lost on the frame.io footage after a Resolve relaunch! This never happens with regular imports; just the frame.io clips. It's gotta be a Resolve issue. So instead, the editor just handled both of those things on the timeline, like this. I'll go back to the Edit page, select all of these clips, go to Transform, rotate them negative 90˚, and scale them to 1.19. So now all the video clips are rotated and scaled appropriately. Then color was handled with a color space transform on the Color page. I don't need to show you that now.
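Incidentally, if you had a mountain of clips to fix, Resolve's built-in scripting API can batch that same rotate-and-scale across a whole timeline. A minimal sketch, using the TimelineItem property keys from Resolve's scripting documentation – try it on a scratch project first:

```python
# Batch-applying the rotate-and-scale fix with Resolve's scripting API
# (a sketch; run from Resolve's scripting console, or externally with
# the DaVinciResolveScript module on your Python path).
import DaVinciResolveScript as dvr

resolve = dvr.scriptapp("Resolve")
timeline = (resolve.GetProjectManager()
                   .GetCurrentProject()
                   .GetCurrentTimeline())

for track in range(1, timeline.GetTrackCount("video") + 1):
    for item in timeline.GetItemListInTrack("video", track):
        item.SetProperty("RotationAngle", -90)  # stand the proxy upright
        item.SetProperty("ZoomX", 1.19)         # the ~19% scale-up
        item.SetProperty("ZoomY", 1.19)
```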
From there: edit, grade, slap the logos on, etc., and then when it's time to send it out for review, just head to the Deliver page! From the Deliver tab, choose the preset for frame.io. Give the file a name, and then choose where you're going to upload it. When you click the Browse button, you're navigating inside of frame.io, so once again you just navigate to that project folder and find the place you want to put it. I want to ensure that my frame size is accurate – that's 1080x1920, as vertical – and I want to make sure my video codec is set to H.265 so I maintain that 10-bit. Click "Add to Render Queue", and then "Render All". And that will render and upload it.

Once it was uploaded, our editor would send us a text that there was a new video to review. So let's jump into frame.io on my phone and see that process. There's that demo folder, and there's the new timeline that I just uploaded. From here I can preview it, play it, scrub through, and even add comments. "Change this"… and I can even choose to draw on the page, and as you can see over in Resolve, those comments show up on the timeline pretty quickly. There's the drawing that I just made. So the editor can see our notes on the timeline, make the requested changes, and upload another version.

Once a new version has been uploaded, if you looked at the phone immediately, you'd see two different videos, but what you really want to see are stacked versions. This is another one of those little caveats – and I should point out that Adobe is working on a major new version of frame.io, version 4, which hopefully will address some of these issues – but for now the Resolve uploader doesn't version stack automatically. If we jump to the web browser, though, we can fix that. I've already uploaded a second version, and if we look in the demo folder, we'll see them both. So all I have to do is drag one on top of the other, and that creates a version stack. And now when I go back and look at it on the phone, I'll see that there are two versions of it, and I can even switch back to a previous version if I want to look at the older edit.

Once the edit is approved, I can go to the three dots and download it to the camera roll. I would always choose the original format to make sure I had the best quality, and once that's downloaded, I'll jump over to Instagram to upload it there. So, into Instagram… create a new Reel… add the video, and most importantly, invite the collaborators. Which, going back to the very beginning, means that once the invited collaborator accepts the invitation, my video shows up in their Instagram Reels. And then… you're on to the next video!

Alright, let's go back over there. Remember that in reality, this was flowing fast. On the show floor, Aaron and I were rapidly going from booth to booth, shooting interview after interview, and piles of b-roll. We had a dedicated camera operator this year, as well as a scout who was looking for and setting up new interviews for us, which really allowed us to make a lot more stories than last year. Our editor was busy! When he finished a video, there was always another story already waiting for him.

The whole workflow is a lot of pieces, but this isn't kid stuff. This is a really cool, relatively advanced workflow across multiple modern technologies, allowing you to do something pretty dang awesome. We did this at a tech show, but again, this could be replicated at any kind of event. In fact, if you're thinking you might want to do this yourself, either as an event organizer or as a content creator, feel free to ask questions in the comments, or in the channel's "members only" Discord, or even better, book some time with me to discuss. I can help you ensure that nothing is missing, so that you have a successful event. Visit the link here, or find this link in the pinned comment or description below. Thanks for watching, I hope you learned something, make sure you're subscribed, and… yeah, that's it. Seeya in the next video!