I just finished the months-long processing of the RubyConf Philippines 2015 videos, so I guess it’s a good time to write an article about covering tech events as an amateur.
As far as I know, I’m the only amateur videographer in the local tech/developer scene. All other event videos are either taken with camera phones or shot by paid professionals. Compare this with photography: there are still camera-phone pictures, but many events have amateur photographers with decent gear. Why is this so?
It’s not because videography is expensive. Yes, if you look at video cameras equivalent to DSLRs, you’ll get into the P50k+ range. But you can get somewhat decent cameras for much less, like the point-and-shoot equivalent that I use. Overall, my current gear is cheaper than a typical hobbyist photographer’s gear; even the “pro-level” device, the RØDE VideoMic Pro, just hovers around the price of a typical lens.
If amateur videography isn’t expensive, why aren’t we seeing more people taking videos of events?
It’s simply not fashionable.
No, this isn’t a dig at camera owners who use their devices as status symbols. It’s just the blunt truth: videography takes way too much effort compared to photography.
Let’s look at what happens when you cover a meetup talk – the most basic thing a tech event videographer will record.
First you’ll need to deal with the hassle of bringing a tripod to the venue, as the crappy ones that fit in your bag won’t be stable enough for video.
Then you’ll have to sit in a corner for the entire talk… if only it were that easy. You can’t just sit there playing with your phone or wander off to chat with the people in the back. You have to focus the whole time: sometimes you’ll have to pan toward the speaker, other times you’ll have to zoom in on the projected code. The worst part is that you can’t even speak or comment or heckle because it will be obvious in the audio (watch the RubyConf lightning talks for some examples where I stopped caring about that).
Speaking of audio, it’s a whole different beast compared to video. You can’t fix it in post-production, and if you don’t have an external microphone like my shotgun mic, you’ll have to place your camera much closer to the speaker to reduce background noise.
Then there’s the post-production. Fortunately, for talks it’s just trimming off excess video and possibly re-encoding to get a smaller file for a faster upload.
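I won’t name the exact tool I used here, but as a sketch, both the trimming and the re-encoding can be done in one pass with ffmpeg (filenames, timestamps, and encoder settings below are made up for illustration):

```shell
# Cut off the pre-talk setup (first 12 seconds) and stop at the 40-minute
# mark, then re-encode to H.264/AAC at a lower quality setting so the
# resulting file is much smaller and uploads faster.
ffmpeg -i talk_raw.mp4 -ss 00:00:12 -to 00:40:00 \
       -c:v libx264 -crf 23 -preset slow \
       -c:a aac -b:a 128k \
       talk_trimmed.mp4
```

Raising the `-crf` value trades quality for an even smaller file, which matters a lot when your upload speed is the bottleneck.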
Finally, the upload. Hope you’ve got good broadband like mine. If not, you may have to do what I did before PLDT fixed the line in my area: shove the videos onto a netbook (to save electricity) and let it upload all day.
And that’s just a user group meetup talk. What about more complicated events like a developer conference?
First you’ll need more cameras: one for the projector, and another for the speaker. Panning between the two isn’t good enough. You might even need a third camera taking a wide shot for backup purposes.
The audio is also slightly more complicated here. Having a shotgun mic really improved the audio quality in my usual videos, but on its own it isn’t enough – I ended up not using the shotgun mic audio in the RubyConf videos and instead used the backup camera’s audio, because the latter was better positioned to pick up both the speaker and the audience.
RubyConf PH’s video post-production was much more complicated, though. I ended up with over 150GB of footage which, over the course of 3 months, I converted and stitched together into 21GB of 720p video. I don’t plan to purchase professional video editing software, so I had to make do with the Blender VSE, which unfortunately is single-threaded when rendering the final video. Hence the 3-month delay.
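One small consolation: Blender can render a VSE timeline unattended from the command line, so at least the slow renders can run overnight without the GUI. A sketch (the project filename is made up; the frame range and output path come from whatever is saved in the .blend file):

```shell
# Render the sequencer timeline in background mode:
#   -b  run Blender headless (no GUI)
#   -a  render the full animation/frame range saved in the file
blender -b rubyconf_day1_talk3.blend -a
```

You can also queue several of these in a shell script and let the machine chew through a whole day’s talks while you sleep.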
Hand over a camera to a random person and ask them to take pictures of an event and many will be happy to do that for you.
Hand over a video camera and tell them to cover an entire talk (not just random B-roll) and they’ll give up less than 5 minutes in.
In case you’re wondering what a professional setup looks like, here’s what Confreaks uses when it records conferences:
- 1 – Manned HD camera (used for primary speaker(s) coverage)
- 1 – Wide shot HD camera (used for backup and synchronization with slides)
- 1 – Backup slide camera
- 1 – Hi-res slide recorder that goes in between the presenter’s laptop and the projector (records hi-resolution output of exactly what is seen during the presentation, including slides, videos, live coding, etc.)
- 1 – Audio recorder that takes an auxiliary out from the house sound system to provide a crisp, clean audio track
This brings up another requirement for video coverage: manpower. Not only does a professional setup require expensive equipment, it also needs at least 2 operators to work properly.
I’m still surprised that I was able to pull off RubyConf PH on my own, with what little gear I had, and run into only minimal problems.
This article’s already a bit long but I still haven’t covered one common comment I hear in the local tech scene regarding videos:
“I hope someone streams this event.”
To which I just keep quiet instead of explaining why it makes me ಠ_ಠ internally.
First off, you need at least 2Mbps of upload bandwidth for good-quality streaming. Most venues don’t have that.
Then there’s the camera. If you want decent video for your event, you’ll need a $200+ video capture device to connect to your video camera. Don’t have one? You’ll have to settle for webcams.
Even if you skip both and accept crappy webcam video and occasional lag, you still run into the real problem:
Nobody is watching.
Try watching streams of events and you’ll see that even the biggest local tech (i.e. non-mainstream) events get fewer than 50 viewers. Once you realize this, you shouldn’t be surprised that I don’t bother with all the effort of streaming events.
So to sum it up: videography is hard; everyone says they want video coverage, but in the end no one really watches the videos.