Live Streaming Experiences

Discussion in 'Video' started by Ed_Ingold, Mar 2, 2020.

  1. It's been an interesting two days. I have a concert to stream in a couple of weeks which involves a Klezmer band with 9-10 musicians and vocalists. I had to dig out my Midas rack mixer and 32/16 digital stage box. Their sound tech will use that box as a splitter, so she can mix FOH and I can record the individual channels, without either of us affecting the other. Despite its small size, it has all the capabilities of a studio sound board. There are almost limitless configurations, all of them wrong except one. I also need to make the live-streaming internet connection bulletproof. I will use the Teradek VidiU Go and CORE cloud service, as mentioned above, which is fairly complex in its own right.

    Not surprisingly, the best way to prepare is to set everything up, verify the configuration and operation, then check everything again. In other words, you must practice using your gear, much like a pianist memorizing a concerto. In the field, you must check every connection, both camera and audio, and make sure there is signal continuity and the levels are set. None of it is hard to do, but it's easy to forget something, have a bad connection, etc. I'm tempted to make up a checklist and put it on a kneeboard like a Navy pilot.

    In short, don't take your gear for granted, nor your knowledge of its intricacies. Looking for cables and adapters chews up precious time, and so does shaking the knots out of long cables; show time doesn't move. Use the same care when it's time to close up and go home. Put everything away in the right place and condition, as if you might need it in a storm. Believe me, every event has stormy moments.

    Practice. Practice. Practice.
     
  2. I forgot to mention a very important subject. Before you go on a job, make sure your software and firmware are up to date. Updating firmware often changes settings in the equipment. You have to check every setting and menu item to make sure they are what you want.

    I use iPads for remote control, and have been caught unawares when the software is no longer compatible with the hardware or DAW. A case in point is Avid EuControl, which allows DAW software like Pro Tools and Nuendo to be mirrored on an iPad. The app, Avid Control, is updated automatically on the iPad, but you must manually update the host software in the DAW to maintain compatibility.

    If you don't have time to check out upgrades thoroughly before a job, DON'T UPGRADE! That goes for OSX upgrades too. Windows is not the only thing prone to buggy upgrades.
     
  3. I used the Teradek VidiU Go for the first time in a real project. It worked perfectly. Despite an unreliable internet connection, I had no interruptions while streaming 1080p60 video. When Comcast went crazy, one of the cellular nodes picked up the slack. I have two cellular modems, but usually only one goes online if there's WiFi or ethernet available. There are other bonded encoders (e.g., LiveU) in the same price range. Almost anything is better than the cellular modems you get at a phone store. Phone store data plans are expensive, and throttle you once you exceed 2 GB or so (about 1.5 hours of streaming).
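
    To put that cap in perspective, here is a back-of-the-envelope data budget. The 3 Mb/s bitrate below is my assumption, chosen because it is roughly what "2 GB in about 1.5 hours" implies; plug in your own rate.

    ```python
    def hours_per_cap(cap_gb: float, bitrate_mbps: float) -> float:
        """Hours of streaming before a data cap is reached."""
        cap_megabits = cap_gb * 8_000          # 1 GB = 8,000 megabits (decimal units)
        return cap_megabits / bitrate_mbps / 3_600

    print(f"{hours_per_cap(2, 3):.1f} h")   # ~1.5 h at 3 Mb/s on a 2 GB cap
    print(f"{hours_per_cap(2, 5):.1f} h")   # ~0.9 h at 5 Mb/s, a typical 720p30 rate
    ```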

    If you're freelancing, you never know what support you will get at the venue. Sometimes it boils down to paying the money up front for reliability vs making excuses on your way out the door.
     
  4. I'm trying out a PTZOptics 30x SDI camera (Pan, Tilt, Zoom), which will give me more flexibility in a multi-camera shoot. While it's not especially light (3 lbs, 1.4 kg) or small (6.5" cubed), it is well within the load limit for a medium-duty light stand or tripod.

    The most important attribute is, of course, remote control. I find myself in situations which make it hard to move about to operate or adjust a camera. A salon-type recital would be a typical example, often in a private home or small recital hall. When shooting a play, recital or large ensemble, it's best to shoot from a balcony, where you get a direct view of everyone on stage. About half the places I shoot don't have a balcony, but I have light stands up to 22' tall which can achieve the same effect. At other times the balcony is full, especially in the first row. Then there is the opportunity to shoot from other angles while maintaining concert decorum (not being the guy dressed in black, hidden in the viola section).

    Some of the positive features I'm exploring include...
    • Remote control via infrared (useless beyond about 15', or less in a large space)
    • Remote control via ethernet, including building WiFi or (more secure) a small WiFi transmitter. There is control software for mobile devices as well as a laptop.
    • Video via HDMI, ethernet (RTSP or RTMP/s), and SDI (preferred). I have an SDI/HDMI to WiFi remote link with practically zero latency, for when cable runs are impractical.
    • POE (Power Over Ethernet) is supported (preferred). Alternatives include an AC power supply (last choice), or a video battery with D-tap connectors when cable runs are impractical or AC is not available.
    • 30x zoom lens, from 60 deg (~30 mm equivalent) to 2.3 deg (~900 mm equivalent). Perfect for working from the back of a large auditorium. (The field-of-view math is sketched at the end of this post.)
    • Outstanding video quality and sharpness, even from a 1/2.7" sensor.
    The "cons" of this camera include ...
    • No built-in microphone. Needs a line-level (-10 dBV) feed, which could be a battery-operated vlogging microphone.
    • Audio is embedded only in ethernet or HDMI, not SDI (a stunning omission). Alternative: inject audio into the SDI stream prior to broadcast or recording.
    • Real-time internet preview works only in Windows, not OSX. Alternative: use an in-line SDI monitor.
    • No time code options. In order to sync video streams in post, you need time code, audio, or a recorder which handles all video streams at once.
    • Operating instructions are a "work in progress." In at least one instructional video, the presenters look things up while on camera.
    I've put in a lot of time filling in the blanks, and I hope to work with their support team in this endeavor. I'm sure I'll have more to say on this topic.
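
    Regarding the zoom range above, the focal-length equivalents follow directly from the field of view. A quick sketch, assuming the published angles are horizontal FOV and a full-frame (36 mm wide) reference for "equivalent":

    ```python
    import math

    def equivalent_focal_mm(hfov_deg: float, sensor_width_mm: float = 36.0) -> float:
        """Full-frame-equivalent focal length from horizontal field of view."""
        return (sensor_width_mm / 2) / math.tan(math.radians(hfov_deg) / 2)

    print(f"{equivalent_focal_mm(60):.0f} mm")   # wide end: ~31 mm (quoted as ~30 mm)
    print(f"{equivalent_focal_mm(2.3):.0f} mm")  # long end: ~897 mm (quoted as 900 mm)
    ```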
     
  5. I forgot to mention an extremely useful feature. You can store up to 255 shots, each capturing position (within 0.1 deg) and zoom level, across up to 255 cameras. Ten presets are available for each camera from a number pad. Once you have identified "targets," you can lock on in a couple of seconds. It takes about 15 seconds to set up a shot with a regular camera on a tripod, if you're good. Even so, rapid pans are distracting. You can hide the motion by switching to another camera, or by freezing the frame prior to the move. The former method is esthetically better, but takes practice to coordinate the actions.

    For a church service, I might designate the pastor, reader, choir director and choir. For a symphony concert I might spot the conductor, principal violin, violin section, wind section, etc. - sections vs. frequent soloists. The B-roll would be a wide or medium shot.
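
    For the curious, preset recall is a one-line command at the protocol level. A minimal sketch, assuming the camera accepts plain VISCA over TCP on port 5678 (which PTZOptics documents for its IP cameras); the IP address below is hypothetical, and you should check your model's command reference:

    ```python
    import socket

    CAMERA_IP = "192.168.1.100"   # hypothetical address on the control LAN
    VISCA_PORT = 5678             # PTZOptics' documented VISCA-over-TCP port

    def recall_preset(preset: int) -> None:
        """Send the standard VISCA memory-recall command: 81 01 04 3F 02 pp FF."""
        command = bytes([0x81, 0x01, 0x04, 0x3F, 0x02, preset, 0xFF])
        with socket.create_connection((CAMERA_IP, VISCA_PORT), timeout=2) as s:
            s.sendall(command)
            s.recv(16)  # read (and discard) the camera's ACK/completion reply

    recall_preset(1)  # jump to preset 1, e.g. "conductor, medium shot"
    ```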
     
  6. For audio, it's usually not necessary to ride gain in classical music. Once the mix is balanced, the musicians take care of the rest. The exception would be for a speaker. Gain is usually set high for speech, so the mic picks up too much room noise and bleed when not in use. For a rock or pop group, things get complicated. It's best to make friends with their FOH (Front of House) engineer for a feed, preferably pre-fader. Most boards have a pre-fader direct out for each channel. If needed, I have a 32/16 channel stage box, configured as a digital splitter, so the same mics can be used independently for FOH and recording.

    Most of my jobs can be handled with an 8 channel recorder, usually with channels to spare.
     
  7. I have been struggling to find a way to include pre-recorded video in a live-streamed event at high quality and reliability. I think I have a solution, which I need to test thoroughly once the hardware I ordered arrives.
    • Streaming software like Wirecast ($$) and OBS (free) can insert video into the stream easily, but at relatively low quality. All the encoding must be done in the computer or laptop, a very CPU-intensive task. Secondly, you are limited to one connection to the internet, either WiFi or ethernet, which can be unreliable, especially at higher bandwidth.
    • Atomos recorder/monitors can only play back videos recorded live from a camera. You can't load an edited video and play it back.
    • My video switchers (BMD ATEM...) can store up to 20 fixed graphics and display one or two on command, but not video clips. They can interface with solid-state recorders, but those recorders can only play clips they have recorded.
    • Most HDMI output dongles for a Mac display the entire screen. That's great for instructional videos, but you don't want all the navigation tools on display in a video.
    The solution I'm hoping for is a Blackmagic UltraStudio Mini Monitor. It is an output-only device with 3G SDI and HDMI 2.0 ports, connected and powered by Thunderbolt 2 (sorry, PC users). It comes with software which will grab video from a media player, transcode it if necessary, and send it out via SDI or HDMI (or both). It will also work with the output window of Premiere Pro, Final Cut Pro and DaVinci Resolve. If it works with Wirecast, then it will be very easy to cue up videos, graphics and screenshots. The only downside is that the maximum output resolution is 1080p30.

    Blackmagic Design UltraStudio Mini Monitor Playback Device

    My plan is to use an Atomos Shogun 7 to ingest up to 4 SDI cameras plus analog mixed audio, handle the camera switching, output the program to one channel of a hardware switcher (ATEM Television Studio), and record the ISOs and program streams. A laptop with graphics and video will be connected to a second switcher channel. Output from the ATEM will go to an SDI monitor, and thence to a Teradek VidiU Go encoder and the internet.

    It sounds complicated, but will actually fit on an 18"x24" folding table. I'll post a photo or two in the near future.
     
  8. The Blackmagic UltraStudio Mini Monitor seems to be the solution I was looking for, but it's harder to use than I anticipated, and the manual is not very helpful. It took the better part of a day to figure it out. There is a newer version, not yet available, with higher resolution (1080p60) at a lower price.

    It comes with a free program called "Media Express", which offers an uncluttered way to send video through the Mini Monitor. However, it only works with video captured by a companion device, the Mini Recorder, or QuickTime video from a Blackmagic camera.

    In order to stream recorded video effectively, you need to cue up one or more clips and play them on demand.

    The good news is that the Mini Monitor is recognized by major editing programs such as Premiere Pro, Final Cut Pro, and DaVinci Resolve, which can ingest nearly any video format. It can also be used with Wirecast and OBS. The Mini Monitor will output a video as it is played in the timeline (or program screen in Wirecast and OBS) over SDI and HDMI connections. The maximum resolution is 1080p30 or 1080i60 (and the drop-frame equivalents), which you set in the editor's timeline. This setting overrides the default you may have set in the Mini Monitor driver. I'm most comfortable with Premiere Pro, but DaVinci Resolve is similar, and the vanilla version (which works great) is free, not limited to two computers like Adobe CC products.

    The downside of using a video editor for playback is that play stops if you use the laptop for any other operation, since anything which shifts focus away from the timeline halts it. The stream will go black or freeze on the last frame, depending on the option you choose in the Mini Monitor driver. It works, but you would like something more robust. To that end, Wirecast is a better solution, but not without pitfalls.

    It is very easy to import video clips into Wirecast, adding or dragging them into a Wirecast layer. Shots can be a camera, output from a switcher, a video clip, or a still graphic. Wirecast works like an M/E (Mix/Effects) switcher. You click on a "shot", which places it in the Preview Window. A separate operation moves the Preview Window contents to the Live window. The clip will start playing automatically, sending its contents to the Mini Monitor. You do not have to be live on air for this to work. That clip will play until you clear it or replace it with another "shot". You can stop (pause) and start the clip. You set the output resolution and start the Mini Monitor in the Output/UltraStudio Mini Monitor tab. The Wirecast output and canvas size are separate settings. The Mini Monitor will transcode the resolution as needed.

    Wirecast is not cheap and has many quirks and a lot of overhead. However, it offers much more control than the free product, OBS. For this purpose, I will use Wirecast as a player and as an SDI input to a hardware switcher.
     
  9. Only high-end ($$$$) switchers can transcode inputs; with anything else, all of the inputs must have the same resolution and frame rate. Finding an overlap between a Sony A7iii, Sony FS5, Sony MX0, a PTZ camera and the Mini Monitor proved challenging. The answer was 1080i60, which is not ideal, but live streaming is not yet at Academy Award level.

    60 fps is not the same as 59.94 as far as most switchers are concerned. Most video cameras are the latter, even though they're labeled 60. If there's a mismatch, the input remains black. You also have to decide whether to use SDI Level A or Level B. The former alternates frames and is compatible with i and p formats. The latter puts everything into single frames, and is compatible with p only. Again, you get strange effects if you get it wrong. I have a small SDI monitor (Atomos Shinobi) which ingests nearly any SDI signal and displays the resolution. It's a great time-saver in shakedown, as well as a convenient monitor anyplace in the signal chain.

    Another annoying problem is cables. There are three HDMI connector sizes, used in various combinations. There are at least four USB connector types, but most share the same "A" end. So far there is only one SDI connector - a simple BNC - but half a dozen cable grades depending on the bandwidth, up to 12G. Most of my audio cables are XLR, but 1/4" and 1/8" TRS connectors are not uncommon, even in professional applications. Keeping track of the small stuff is an ongoing job. Nothing wastes more time than looking for the right cables during setup, unless it's getting knots out of a 50' mic cable (cables of any sort longer than 50' I put on reels). I have two large semi-rigid cases for cables, the smaller one for most jobs and a larger one for surprises. Think Tank Photo makes durable clear zipper pouches for the small stuff.
     
  10. Ed: Can you explain why it's 59.94 rather than 60fps?
     
  11. SMPTE timecode - Wikipedia

    The 0.1% difference dates back to the introduction of color TV. NTSC color broadcasts dropped the field rate from 60 to 60/1.001 (59.94) so the color subcarrier would not interfere with the audio carrier, while remaining compatible with existing monochrome sets. Drop-frame time code exists to reconcile that rate with the clock on the wall: counting 30 frames per labeled second when only 29.97 actually pass leaves time code about 3.6 seconds behind real time every hour, so two time code numbers are skipped each minute, except every tenth minute. Time code numbers are dropped, not frames. PAL broadcast standards use a different method and different frame rates (25 and 50), corresponding to their line frequency of 50 Hz.
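
    The arithmetic is easy to verify. A quick sketch of why dropping two numbers per minute (except every tenth) is exactly the right correction:

    ```python
    # One wall-clock hour at 29.97 fps (precisely 30000/1001):
    frames_per_hour = 30_000 / 1_001 * 3_600        # ~107,892 frames

    # Non-drop time code labels 30 frames per "second", so after a real hour
    # the counter has only reached:
    nondrop_seconds = frames_per_hour / 30          # ~3,596.4 labeled seconds
    print(f"lag: {3_600 - nondrop_seconds:.1f} s per hour")      # ~3.6 s

    # Drop-frame skips 2 numbers per minute except every 10th minute:
    skipped = 2 * (60 - 6)                          # 108 numbers per hour
    print(f"recovered: {skipped / 30:.1f} s per hour")           # exactly 3.6 s
    ```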

    I don't understand much of the what and why, except that live-streaming and camera switching are an adjunct of broadcasting, hence bound by conventions that may not be totally relevant to the internet.
     
  12. While it seems obvious, it bears repeating. When streaming, make sure you check the signal flow from end to end. Check both the audio and video signals. If signals aren't reaching your encoder, they aren't going over the air. The ultimate test is whether the streaming site(s) reflect the program you are attempting to send.

    On a practical note, monitoring the destination site involves a lengthy delay, from a few seconds to over a minute, and can't be used to make real-time adjustments. It is for "proof of life", not a rescue.

    In all but the simplest podcasts, the audio and video streams are generated separately, then combined and synchronized at some point in the processing chain. Internal camera audio is of little use, unless it comes from an external microphone or mixer. If you have multiple cameras and/or multiple microphones, it's much easier and better to mix the sound separately and inject it into the program. Video processing always introduces some degree of lag, so the audio should be injected as far upstream as practical and embedded in the SDI or HDMI signal. That done, downstream processing delays the audio and video equally.
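
    A toy calculation makes the point. The per-stage latencies below are invented purely for illustration; the principle is that only the video stages upstream of the injection point contribute to the audio/video offset:

    ```python
    # Illustrative (invented) per-stage video latencies, in milliseconds.
    stages = [("camera", 20), ("switcher", 33), ("encoder", 250)]

    def audio_lead_ms(inject_at: int) -> int:
        """Audio lead over video when the mix is embedded at the input of
        stage index `inject_at`; from that point on, both travel together."""
        return sum(ms for _, ms in stages[:inject_at])

    print(audio_lead_ms(1))  # embedded at the switcher input: 20 ms lead (negligible)
    print(audio_lead_ms(2))  # analog audio fed straight to the encoder: 53 ms lead
    ```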

    I insert a small SDI monitor after the last stage before the signal enters the encoder. A relatively inexpensive Atomos Shinobi monitor (5") allows me to see the video and listen to the audio over headphones. The encoder translates the SDI signal into a form which can be broadcast over the internet. The encoder can be a computer (e.g., using Wirecast or OBS), or a specialized hardware device (e.g., Teradek VidiU Go). If I wish to record the program, I can use an Atomos Ninja V (or other model) instead of the Shinobi.

    I strongly recommend using SDI rather than HDMI, for all but the simplest setups. HDMI cables are usually thick and stiff, and the connectors are easily detached, especially the mini and micro varieties. SDI cables are always the same at each end, and lock in place (BNC connectors). While HDMI cables should be less than 15' (5 m), SDI can be up to 300' (100 m). They should be rated 3G or better for 1080p60 video.
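
    As a rule of thumb for cable ratings, the standard mapping from format to minimum SDI generation looks like this (nominal generations; marketing labels vary, so treat it as a guide):

    ```python
    # Minimum SDI generation for common formats.
    SDI_MINIMUM = {
        "1080i60 / 720p60": "HD-SDI (1.5G)",
        "1080p60":          "3G-SDI",
        "2160p30 (UHD)":    "6G-SDI",
        "2160p60 (UHD)":    "12G-SDI",
    }
    print(SDI_MINIMUM["1080p60"])  # 3G-SDI, hence the "3G or better" advice above
    ```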

    This is getting rather complicated, but you have to handle programs designed by the client, and the variations are endless. Most of my jobs are on site, so everything must be mobile yet robust. I want to have a recording of each camera plus the (switched) program stream. Rather than use a recorder on each camera, I bring each camera (and separate audio) into an Atomos Shogun 7, which can take up to 4 cameras, switch from a touch screen, and record everything to an SSD. The ATEM switcher is optional, for when I need to insert a side stream for graphics or pre-recorded video.

    A laptop can be used to provide another layer of switching between the program stream and graphics or pre-recorded clips. The Blackmagic UltraStudio Mini Monitor captures the program stream via Thunderbolt 2, and converts it into an SDI signal, which can be monitored by the Shinobi and/or sent to the Teradek encoder. Wirecast and OBS can encode and transmit on their own, but the quality is limited by processing speed, and by the speed of a single ethernet connection.

    Slide1.jpeg
     
  13. I have found that all 3G SDI may not be the same. A Teradek VidiU GO encoder may not read embedded sound in the SDI output (1080p59.94) of a BMD ATEM TV Studio HD switcher; the video is streamed without sound. The same signal is properly interpreted by the Atomos Shinobi SDI monitor described above.

    In this application, I'm using an Atomos Shogun 7 to receive and record SDI video from several cameras. The SDI output of the Shogun 7 was connected to the ATEM, which can add or superimpose graphics (e.g., an opening screen and titles), which is not possible with the Shogun. The solution was to take the ATEM out of the loop, and run the Shogun output directly to the Teradek, or through the Shinobi.

    Both the Shinobi and Teradek have a headphone jack (3.5 mm stereo). If sound is properly embedded in the video, you can hear it with headphones. In this case, the Shinobi had sound but the Teradek did not. After re-routing, I could hear sound on the Teradek, and the live stream site logged the video and sound correctly.

    When streaming, it's always best to monitor the results on the host site. The downside is that there is a delay of 30 seconds or more, which makes troubleshooting very difficult. However, it is the ultimate tool to see if the encoder, internet connection and website are working. Always make a backup recording of the program, which you can upload later without the vagaries of real-time operations. If possible, record the ISOs (individual camera outputs) too, so you can adjust switching points for the final production.

    I have used the ATEM in exactly the same manner as recently as last week without issues. When troubleshooting, it is very difficult to accept that something works, then stops working without a complete failure. The key is systematic signal tracing and thorough knowledge of the hardware and software involved. There are many points of failure, and usually only one way to get it right.

    Until I spoke with Teradek, I did not know the function of the headphone output. Not hearing a signal is not the same as not having a signal. I learned that the headphones monitor the sound going to the encoder, whether by SDI, HDMI or an analog input. Second, I learned that others have reported problems with ATEM compatibility. IMO, part of the problem may lie in a Teradek firmware update released the previous day.

    The Teradek VidiU GO is an essential tool for reliable streaming with questionable venue internet connections (or without local internet). Yesterday had perplexing consequences, but it turned out well. My forehead will heal, and the wall can be repainted ;) I will follow up with Teradek to find a more suitable resolution.
     
  14. I spent this last week live-streaming a concert series at 5 Mb/sec (clean 720p30). The sessions were successful, largely due to the use of a Teradek VidiU Go encoder and CORE cloud service (also by Teradek). I was connected by ethernet to a Comcast 300 Mb/sec service, with 20 Mb/sec upload speed. While this might seem overkill, in practice the ethernet speed was highly irregular. On one day, ethernet alone carried the load, but on another the transmission relied solely on the two cellular nodes in the Teradek. The remaining days had mixed performance. Fortunately, there were no dropouts or glitches despite this variability. If one service sags, CORE compensates using the cellular services.

    I put the Teradek on a tabletop tripod. It is fan cooled, but gets too hot to touch if placed on a flat surface. That doesn't seem to affect performance, but I can't say the same for my fingers.

    My source video was 1080p60, as usual, at 50 Mb/sec. The Teradek handled the downsampling and encoding with ease. Meanwhile, I have a high-quality 1080p60 recording of the program and ISOs for later use, made with the Shogun 7.
     
  15. I added a photo showing my most recent (and most compact) setup for live streaming and monitoring the results. Not shown is a cell phone I use to remotely control PTZ cameras, which I'm finding indispensable. More recently I no longer use the small BMD ATEM TV switcher, because of problems the VidiU Go has reading embedded audio in its SDI signal.

    From left to right, you see an Atomos Shogun 7, which receives, switches and records SDI signals from the cameras. Below it is an ATEM Television Studio switcher, not used. Buried in the stack is a Zoom F8 8-channel recorder with a Zoom FRC-8 mini mixing panel, then a POE ethernet switch with a TP-Link mini access point (for control, not connected to the internet), then a USB power supply. Moving to the right there is an Atomos Shinobi 5" monitor for the program stream, and slightly to the rear is the Teradek VidiU Go encoder, on a small tripod for cooling, resembling an Orwellian Martian invader. The laptop is connected to the internet, to monitor the live stream at the destination, and an iPad is used to control and monitor the VidiU Go status.

    Everything is portable and relatively light. However, it takes, on average, about 3 hours to set up and debug, and about 1-1/2 hours to pack up. I like to reserve the last half hour before showtime to establish and debug (if necessary) the streaming connection. That can be tricky. Most streaming sites will go into archive mode if you pause longer than a few (e.g., 5) minutes, and lock you out. Vimeo will not let you link up more than 30 minutes before showtime. YouTube and Facebook can be funky. I find an RTMP stream key connection easier to set up and more reliable than the "easy" methods for those sites.
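
    For what it's worth, a "custom RTMP" setup boils down to an ingest URL plus a stream key, and you can test it without the full rig. A sketch driving ffmpeg from Python (ffmpeg must be installed; the URL and key below are placeholders, copied in practice from the site's stream key page):

    ```python
    import subprocess

    INGEST_URL = "rtmp://live.example.com/app"   # placeholder ingest endpoint
    STREAM_KEY = "xxxx-xxxx-xxxx-xxxx"           # placeholder key from the site

    subprocess.run([
        "ffmpeg",
        "-re", "-i", "program.mp4",          # read a local recording at real-time speed
        "-c:v", "libx264", "-b:v", "4500k",  # H.264 video at a streaming-friendly bitrate
        "-c:a", "aac", "-b:a", "128k",       # AAC audio
        "-f", "flv",                         # RTMP expects an FLV container
        f"{INGEST_URL}/{STREAM_KEY}",
    ], check=True)
    ```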

    There is a wireless handheld mic on the table, which I used to record introductions and announcements. I did not have a loudspeaker on this occasion, which was a big mistake. When people speak into a mic, they expect to hear the amplified sound (as does the audience); it's unnerving to them when they don't.

    IMG_0217.jpeg
     
    Last edited: Sep 6, 2020
  16. Since the beginning of September I have live-streamed half a dozen concerts using a PTZ camera, from vocalists to chamber music to a rock band. Doing it all in real time presents some challenges, but it's been a learning experience. Using a PTZ camera is like having three cameras in one, multiplying your ability to add variety to a broadcast. It also locks you down to a central location rather than behind a camera. You need a compact yet ergonomic set of tools and practices.
    • There is no substitute for controls you can touch and feel. While you can do most things with a computer screen or iPhone, everything is visual. A mixing board has both visual and tactile advantages. You don't need to look to dip a particular channel then return it to the center position. Likewise a joystick control makes handling a PTZ camera much easier than tapping a touch screen.
    • Preset camera positions are great but with caveats. (1) Transitions are not smooth enough to use in real time. You need a fixed view to cover the changes. (2) It's hard to remember which setting is which. I try to use rules like "left to right" and "medium to close" with reasonable success. (3) Groups like the rock band tend to move around, so the presets are merely "suggestions".
    • I find I can handle about 6 presets, but rely mainly on about 3.
    • When people move around, e.g., for solos, it is almost as fast to move the camera by hand as to use a preset and make adjustments.
    • In that case, I still use the appropriate preset, make the adjustments and re-save the preset with the new values. That way I can jump back to that shot quickly.
    • A joystick controls the velocity of the change, not the end point, and there are wide steps in the control effect. The controls are much more sensitive when the lens is zoomed out. You must tap the stick to make fine adjustments. It takes practice to do this quickly and efficiently. In case I forgot to mention it, there is a noticeable delay between motion of the camera and what you see on the monitor, as much as 1/4 second.
    • Using two PTZ cameras is a temptation to move both cameras alternately. RESIST! The second camera is best used as a fixed view with benefits. I use it as a wide view which can be adjusted occasionally if there is a scene change. In a group, you never know exactly where they'll be until the audience is present and they go on stage. In live-streaming we have sound checks, not rehearsals.
    • Don't forget to switch to another camera while you're repositioning the PTZ!
    That done, you are probably ready for helicopter flying lessons, maybe an invitation to "Top Gun" flight school.
     
  17. Sooner or later you're going to want to add something special to your shooting - motion. After all, why should the subject have all the fun? Most of us look for interesting and unusual angles for otherwise routine subjects. If it works for still images, why not video? Having witnessed an attempt to live-stream a music recital, music-video style, with a bare camera held at arm's length, I thought it appropriate to come up with some suggestions. More than a little motion makes for a very unpleasant viewing experience.

    First of all, you need some means to support and smooth the camera motions. IBIS won't help much; it is designed to remove millisecond jiggles, not deliberate or accidental motion. You can start with a shoulder mount, something with double grips to keep from "Dutching", and to free up one hand for focus or zooming when needed. A skilled operator can walk, Groucho Marx style, without a lot of up and down action. Mostly, though, a shoulder mount is best used standing still, like a human tripod. It helps that the weight is borne by your shoulder, not extended arms. The viewfinder will be out of reach, but you can compensate by attaching a magnifier to the rear screen, which will be within about 3" of your eye. I use a Hoodman, attached to the camera with a couple of big rubber bands.

    If you want to move with the camera, and shoot from low to high positions, then a Steadicam-type rig is probably the best choice. The weight (which is substantial) is supported on a shoulder harness. The camera sits on a motorized gimbal, and its height is managed with a counter-balanced bracket. A more affordable and portable option is to skip the harness and rig a motorized gimbal on a stick. I have a Ronin S (medium duty) gimbal, which I put on a monopod. I can hold the monopod off the ground for mobility, raise it high above my head, or drop it to ground level, with very little effort. Best of all, I can plant the monopod on the ground rather than hold the camera, lens and 3.5 pound Ronin at arm's length, a welcome relief every 5 minutes or so. The servo mechanism can be tuned to suit the job, from rock-steady to follow-motion with butter smoothness. The Ronin can be put on a tripod and programmed to quickly pan/tilt to several positions via an iPhone and Bluetooth. It can also be programmed to shoot a 3D panorama, taking a still shot in each position.

    For streaming, you need some way to bring the video signal back to your controller. You can use an HDMI cable up to 25' long for 1080p, if you anchor it to the camera in some way. My camera has an L-bracket, which can hold a cable clamp. The camera connector is delicate, and you don't want the cable to come loose mid-program. A more elegant (and expensive) solution is to mount an HDMI to WiFi transmitter in the flash shoe, with a corresponding receiver at the switcher. Suitable systems run from about $600 to $5K. You pay more for low latency and signal integrity, as for news gathering (to the truck or satellite) or on a movie set.

    Think first before taking this plunge. Moving around is distracting to the talent; avoid unnecessary movement during a live performance. Secondly, you don't want to "star" in your own production by wandering through a static camera's shot. You can do a music video type production with a single camera if the talent performs to a pre-recorded track, which you also use for the final cut. Even amateur musicians find this relatively easy, provided you don't look too closely at the lips or fingers. With pros, you can't tell live from fooling around.

    If you haven't used a motorized gimbal before, use caution. Don't turn it on until the camera is balanced and everything is locked down. The grip must be held stationary at all times once powered up. If you forget and grab it by the camera, the handle will flail around like a wounded rabbit, possibly damaging the gimbal or hurting someone.
     
  18. I have managed to achieve greater flexibility with less desk space, not to mention less wear and tear on my bones schlepping the stuff. This setup gives me control over up to 4 SDI cameras, including a joystick controller for two or more PTZ cameras, and an 8-channel recorder with a mixing panel. It all fits on a 24x18" folding table. Each camera is recorded separately, with embedded audio from the mixer. Each audio channel is also recorded separately from a pre-fader source. A mixing board with sliders makes it much easier to make quick adjustments, including turning down the announcement mic (#6) as needed. I use faders rather than mute buttons, for smooth transitions without clicks or thumps.

    Clockwise, from the top left: Atomos Shogun 7 4-channel recorder/monitor/switcher; Zoom F8n 8-channel audio recorder; Zoom FRC-8 control panel; PTZOptics gen 3 IP camera controller. Not shown are a Teradek VidiU Go streaming encoder; a Netgear POE ethernet router (for PTZ control); and an iPad to monitor signal status and the destination websites.

    IMG_0236.jpeg
     
    Last edited: Oct 15, 2020
  19. I'm facing a problem that hardly existed before. How do I get the necessary gear from the van into the job site? In better times, I would load everything on a warehouse cart and roll it inside. These days, I'm recording in places that don't always have grade-level access, ramps, or elevators. Everything is schlepped by hand. A lot of small bags means multiple trips to the van. Microphone stands and tripods are heavy and bulky, with things sticking out all over. On top of this, the equipment needed for live streaming roughly doubles the load to schlep and set up. Cables are surprisingly heavy, and now I'm carrying SDI and ethernet cables in addition to microphone cables.

    Using larger cases, with wheels, is a viable solution, with limitations. Large cases are heavier and harder to transport than typical gig bags. Even when I'm able to use a wheeled cart, my largest case takes up nearly the full bed length. Everything else must be stacked on top of it, and it takes careful packing to keep the load stable when rolling over door thresholds, much less curbs and rough sidewalks and driveways. In the old days, most of my production gear fit in 19" rack cases, which stack nicely. Now none of my normal gear can be rack mounted.

    I wish I could say "Problem solved!" But no, I'm still seeking solutions. In the short term, I'm looking at smaller cases which will stack on the cart and in my van. An acquaintance of mine ran his entire operation out of a van, via an arm-sized cable or two into the venue. That's tempting. It would cut the setup time in half, because all the connections would be at the distal end of the cable(s), and the production gear would stay in the van, ready to go.

    Things in the event world are opening up, albeit gradually. However I doubt audiences will be the same for months if not years to come. My opportunity is to facilitate an audience at a distance for the performance (and vice versa), with no theoretical limits on its size.
     
