Live Streaming Experiences



For audio, it's usually not necessary to ride gain in classical music. Once the mix is balanced, the musicians take care of the rest. The exception is a speaker: the gain is usually high for speech, which picks up too much room noise and bleed when the mic is not in use. For a rock or pop group, things get complicated. It's best to make friends with their FOH (Front of House) engineer for a feed, preferably pre-fader. Most boards have a pre-fader direct out for each channel. If needed, I have a 32/16 channel stage box, configured as a digital splitter, so the same mics can be used independently for FOH and recording.


Most of my jobs can be handled with an 8 channel recorder, usually with channels to spare.



I have been struggling to find a way to include pre-recorded video in a live-stream event at high quality and reliability. I think I have a solution, which I need to test thoroughly once the hardware I ordered arrives.

  • Streaming software like Wirecast ($$) and OBS (free) can insert video into the stream easily, but at relatively low quality. All the encoding must be done in the computer or laptop, a very CPU-intensive task. Secondly, you are limited to one connection to the internet, either WiFi or ethernet, which is unreliable, especially at higher bandwidth.
  • Atomos recorder/monitors can only play back videos recorded live from a camera. You can't load an edited video and play it back.
  • My video switchers (BMD ATEM...) can store up to 20 fixed graphics and display one or two on command, but not video clips. They can interface with solid-state recorders, but those recorders can only play clips they have recorded.
  • Most HDMI output dongles for a Mac display the entire screen. That's great for instructional videos, but you don't want all the navigation tools on display for a video.

The solution I'm hoping for is a Blackmagic UltraStudio Mini Monitor. It is an output-only device with 3G SDI and HDMI 2.0 ports, connected and powered by Thunderbolt 2 (sorry, PC users). It comes with software which will grab video from a media player, transcode it if necessary, and send it out via SDI or HDMI (or both). It will also work with the output window of Premiere Pro, Final Cut Pro and DaVinci Resolve. If it works with Wirecast, then it will be very easy to cue up videos, graphics and screenshots. The only downside is the maximum output resolution of 1080p30.


Blackmagic Design UltraStudio Mini Monitor Playback Device


My plan is to use an Atomos Shogun 7 to ingest up to 4 SDI cameras plus analog mixed audio, handle the camera switching, and output the program to one channel of a hardware switcher (ATEM Television Studio), and record the ISO's and program streams. A laptop with graphics and video will be connected to a second switcher channel. Output from the ATEM will go to an SDI monitor, and hence to a Teradek VidiU Go encoder and the internet.


It sounds complicated, but will actually fit on an 18"x24" folding table. I'll post a photo or two in the near future.


The Blackmagic UltraStudio Mini Monitor seems to be the solution I was looking for, but it's harder to use than I anticipated, and the manual is not very helpful. It took the better part of a day to figure it out. There is a newer version, not yet available, with higher resolution (1080p60) and a lower price.


It comes with a free program called "Media Express", which offers an uncluttered way to send video through the Mini Monitor. However, it only works with video captured by a companion device, the Mini Recorder, or QuickTime video from a Blackmagic camera.


In order to stream recorded video effectively, you need to cue up one or more clips and play them on demand.


The good news is that the Mini Monitor is recognized by major editing programs such as Premiere Pro, Final Cut Pro, and DaVinci Resolve, which can ingest nearly any video format. It can also be used with Wirecast and OBS. The Mini Monitor will capture a video as it is played in the timeline (or program screen in Wirecast and OBS) and send it over SDI and HDMI connections. The maximum resolution is 1080p30 or 1080i60 (and the drop-frame equivalents), which you can set in the editor's timeline. This setting overrides the default you may set in the Mini Monitor driver. I'm most comfortable with Premiere Pro, but DaVinci Resolve is similar, and the vanilla version (which works great) is free, and not limited to two computers like Adobe CC products.


The downside of using a video editor for playback is that playback stops if you use the laptop for any other operation, that is, anything which shifts focus away from the timeline. The stream will then go black or freeze on the last frame, depending on the option you choose in the Mini Monitor driver. While this works, you would like something more robust. To that end, Wirecast is a better solution, but not without pitfalls.


It is very easy to import video clips into Wirecast, adding or dragging them into a Wirecast layer. Shots can be a camera, output from a switcher, a video clip, or a still graphic. Wirecast works like an M/E (Mix/Effects) switcher. You click on a "shot", which places it in the Preview window. A separate operation moves the Preview window contents to the Live window. The clip will start playing automatically, sending its contents to the Mini Monitor. You do not have to be live on air for this to work. That clip will play until you clear it or replace it with another "shot". You can stop (pause) and start the clip. You set the output resolution and start the Mini Monitor in the Output/UltraStudio Mini Monitor tab. The Wirecast output and canvas size are separate settings. The Mini Monitor will transcode the resolution as needed.


Wirecast is not cheap and has many quirks and a lot of overhead. However, it offers much more control than the free product, OBS. For this purpose, I will use Wirecast as a player and an SDI input to a hardware switcher.


Only high-level ($$$$) switchers can transcode inputs, so all of the inputs must have the same resolution and frame rate. Finding an overlap between a Sony A7iii, Sony FS5, Sony MX0, a PTZ camera and the Mini Monitor proved to be challenging. The answer was 1080i60, which is not ideal, but live streaming is not yet at Academy Award level.


60 fps is not the same as 59.94 as far as most switchers are concerned. Most video cameras are the latter, even though they're labeled 60. If there's a mismatch, the input remains black. You also have to decide whether to use SDI Level A or Level B. The former alternates frames and is compatible with i and p formats. The latter puts everything into single frames, and is compatible with p only. Again, you get strange effects if you get it wrong. I have a small SDI monitor (Atomos Shinobi) which ingests nearly any SDI signal and displays the resolution. It's a great time-saver during shakedown, as well as a convenient monitor anyplace in the signal chain.


Another annoying problem is cables. There are three HDMI connector sizes, used in various combinations. There are at least four USB connector types, but most share the same "A" end. So far there is only one SDI connector, a simple BNC, but half a dozen cable types depending on the bandwidth, up to 12G. Most of my audio cables are XLR, but 1/4" and 1/8" TRS connectors are not uncommon, even in professional applications. Keeping track of the small stuff is an ongoing job. Nothing wastes more time than looking for the right cables during setup, unless it's getting knots out of a 50' mic cable (cables of any sort longer than 50' I put on reels). I have two large semi-rigid cases for cables, the smaller one for most jobs and a larger one for surprises. Think Tank Photo makes durable clear zipper pouches for the small stuff.


SMPTE timecode - Wikipedia


The 0.1% difference between integer-rate (ATSC) and drop-frame (NTSC) timecode was instituted by the broadcast industry during the transition to color, for compatibility between monochrome and color broadcasts. The actual elapsed time for the same number of frames is the same whether you count them with non-drop timecode (60 fps numbering) or drop-frame timecode (59.94): timecode numbers are dropped, not frames. PAL broadcast standards use a different method and different frame rates (25 and 50), corresponding to their line frequency of 50 Hz.
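The arithmetic is easy to check. A short sketch, using the nominal 30 fps timecode against the actual 30000/1001 rate (the 60p case simply doubles the numbers):

```python
# NTSC drop-frame arithmetic: nominal 30 fps timecode vs. actual 30000/1001 fps.
# Drop-frame skips timecode numbers :00 and :01 at the start of every minute,
# except minutes divisible by 10 -- the frames themselves are never dropped.

NOMINAL_FPS = 30
ACTUAL_FPS = 30000 / 1001          # ~29.97 fps

def dropped_numbers_per_hour():
    # 2 numbers dropped in 54 of the 60 minutes (minutes 0, 10, ..., 50 are exempt)
    return 2 * (60 - 6)

frames_in_real_hour = ACTUAL_FPS * 3600                               # ~107892.1 frames shown
labels_in_df_hour = NOMINAL_FPS * 3600 - dropped_numbers_per_hour()   # 107892 timecode labels

# The two counts agree to within about a tenth of a frame per hour,
# which is why drop-frame timecode tracks the wall clock.
print(round(frames_in_real_hour, 1), labels_in_df_hour)
```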


I don't understand much of the what and why, except that live-streaming and camera switching is an adjunct of broadcasting, hence bound by conventions that may not be totally relevant to the internet.


While it seems obvious, it bears repeating. When streaming, make sure you check the signal flow from end to end. Check both the audio and video signals. If signals aren't reaching your encoder, they aren't going over the air. The ultimate test is whether the streaming site(s) reflect the program you are attempting to send.


On a practical note, monitoring the destination site involves a lengthy delay, from a few seconds to over a minute, and can't be used to make real-time adjustments. It is for "proof of life", not a rescue.


In all but the simplest podcasts, the audio and video streams are generated separately, then combined and synchronized at some point in the signal chain. Internal camera audio is of little use unless it comes from an external microphone or mixer. If you have multiple cameras and/or multiple microphones, it's much easier and better to mix the sound separately and inject it into the program. Video processing always introduces some degree of lag, so the audio should be injected as far upstream as practical and embedded in the SDI or HDMI signal. That done, downstream processing affects the latency of the audio and video equally.


I insert a small SDI monitor after the last stage, just before the signal enters the encoder. A relatively inexpensive Atomos Shinobi monitor (5") allows me to see the video and listen to the audio over headphones. The encoder translates the SDI signal into a form which can be broadcast over the internet. The encoder can be a computer (e.g., using Wirecast or OBS), or a specialized hardware device (e.g., Teradek VidiU Go). If I wish to record the program, I can use an Atomos Ninja V (or other model) instead of the Shinobi.


I strongly recommend using SDI rather than HDMI, for all but the simplest setups. HDMI cables are usually thick and stiff, and the connectors are easily detached, especially the mini and micro varieties. SDI cables are always the same at each end, and lock in place (BNC connectors). While HDMI cables should be less than 15' (5 m), SDI can be up to 300' (100 m). They should be rated 3G or better for 1080p60 video.
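The 3G requirement for 1080p60 falls out of simple arithmetic. A back-of-envelope sketch, assuming the standard 10-bit 4:2:2 HD raster:

```python
# Back-of-envelope SDI bit-rate check for 1080p60, 10-bit 4:2:2 (uncompressed).
# Assumes the standard HD raster: 2200 x 1125 total samples per frame
# (1920 x 1080 of them active picture), 10 bits each for luma and chroma.

total_w, total_h = 2200, 1125      # full raster, including blanking
active_w, active_h = 1920, 1080    # active picture
fps = 60
bits_per_sample_pair = 20          # 10-bit Y + 10-bit interleaved Cb/Cr (4:2:2)

serial_rate = total_w * total_h * fps * bits_per_sample_pair    # full serial bit rate
active_rate = active_w * active_h * fps * bits_per_sample_pair  # picture data only

# The serial rate comes out at exactly 2.97 Gb/s -- the 3G-SDI line rate.
print(f"serial: {serial_rate/1e9:.2f} Gb/s, active: {active_rate/1e9:.2f} Gb/s")
```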


This is getting rather complicated, but you have to handle programs designed by the client, and the variations are endless. Most of my jobs are on site, so everything must be mobile yet robust. I want to have a recording of each camera plus the (switched) program stream. Rather than use a recorder on each camera, I bring each camera (and separate audio) into an Atomos Shogun 7, which can take up to 4 cameras, switch from a touch screen, and record everything to an SSD. The ATEM switcher is optional, for when I need to insert a side stream for graphics or pre-recorded video.


A laptop can be used to provide another layer of switching between the program stream and graphics or pre-recorded clips. The Blackmagic UltraStudio Mini Monitor captures the program stream over Thunderbolt 2 and converts it into an SDI signal, which can be monitored by the Shinobi and/or sent to the Teradek encoder. Wirecast and OBS can encode and transmit on their own, but the quality is limited by processing speed and the speed of a single ethernet connection.




I have found that not all 3G SDI is the same. A Teradek VidiU Go encoder may not read embedded sound in the SDI output (1080p59.94) of a BMD ATEM TV Studio HD switcher; the video is streamed without sound. The same signal is properly interpreted by the Atomos Shinobi SDI monitor described above.


In this application, I'm using an Atomos Shogun 7 to receive and record SDI video from several cameras. The SDI output of the Shogun 7 was connected to the ATEM, which can add or superimpose graphics (e.g., an opening screen and titles), which is not possible with the Shogun. The solution was to take the ATEM out of the loop, and run the Shogun output directly to the Teradek, or through the Shinobi.


Both the Shinobi and Teradek have a headphone jack (3.5 mm stereo). If sound is properly embedded in the video, you can hear it with headphones. In this case, the Shinobi had sound but the Teradek did not. After re-routing, I could hear sound on the Teradek, and the live-stream site logged the video and sound correctly.


When streaming, it's always best to monitor the results on the host site. The downside is that there is a delay of 30 seconds or more, which makes troubleshooting very difficult. However it is the ultimate tool to see if the encoder, internet connection and website are working. Always make a backup recording of the program you can upload later without the vagaries of real time operations. If possible, record the ISO's (individual camera outputs) too, so you can adjust switching points for the final production.


I have used the ATEM in exactly the same manner as recently as last week without issues. When troubleshooting, it is very difficult to accept that something works, then stops working without a complete failure. The key is systematic signal tracing and thorough knowledge of the hardware and software involved. There are many points of failure, and usually only one way to get it right.


Until I spoke with Teradek, I did not know the function of the headphone output. Not hearing a signal is not the same as not having a signal. I learned that the headphone jack monitors the sound going to the encoder, whether by SDI, HDMI or an analog input. Secondly, I learned others have reported problems with ATEM compatibility. IMO, part of the problem may lie in a Teradek firmware update the previous day.


The Teradek VidiU GO is an essential tool for reliable streaming with questionable venue internet connections (or without local internet). Yesterday had perplexing consequences, but it turned out well. My forehead will heal, and the wall can be repainted ;) I will follow up with Teradek to find a more suitable resolution.


I spent this last week live-streaming a concert series at 5 Mb/sec (clean 720p30). The sessions were successful, largely due to the use of a Teradek VidiU Go encoder and the CORE cloud service (also by Teradek). I was connected by ethernet to a Comcast 300 Mb/sec service, with 20 Mb/sec upload speed. While this might seem like overkill, in practice the ethernet speed was highly irregular. On one day, ethernet alone carried the load, but on another the transmission relied solely on the two cellular nodes in the Teradek. The remaining days had mixed performance. Fortunately, there were no dropouts or glitches due to this variability. If one service sags, CORE compensates using the cellular services.


I put the Teradek on a tabletop tripod. It is fan cooled, but gets too hot to touch if placed on a flat surface. That doesn't seem to affect performance, but I can't say the same for my fingers.


My source video was 1080p60, as usual, at 50 Mb/sec. The Teradek handled downsampling and encoding with ease. Meanwhile I have a high-quality 1080p60 recording of the program and ISO's for later use, using a Shogun 7.


  • 2 weeks later...

I added a photo showing my most recent (and compact) setup for live streaming and monitoring the results. Not shown is a cell phone I used to remotely control PTZ cameras, which I'm finding indispensable. More recently I no longer use the small BMD ATEM TV switcher, because of problems the VidiU Go has reading embedded audio in the SDI signal.


From left to right, you see an Atomos Shogun 7, which receives, switches and records SDI signals from the cameras. Below it is an ATEM Television Studio switcher, not used. Buried in the stack is a Zoom F8 8-channel recorder with a Zoom FRC-8 mini mixing panel, then a POE ethernet switch with a TP-Link mini access point (for control, not connected to the internet), then a USB power supply. Moving to the right, there is an Atomos Shinobi 5" monitor for the program stream, and slightly to the rear is the Teradek VidiU Go encoder, on a small tripod for cooling, resembling an Orwellian Martian invader. The laptop is connected to the internet to monitor the live stream at the destination, and an iPad is used to control and monitor the VidiU Go status.


Everything is portable and relatively light. However, it takes, on average, about 3 hours to set up and debug, and about 1-1/2 hours to pack up. I like to reserve the last half hour before showtime to establish and debug (if necessary) the streaming connection. That can be tricky. Most streaming sites will go into archive mode if you pause longer than a few (e.g., 5) minutes, and lock you out. Vimeo will not let you link up more than 30 minutes before showtime. YouTube and Facebook can be funky. I find using an RTMP stream key connection easier to set up and more reliable than the "easy" methods for those sites.
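For reference, a custom RTMP connection boils down to an ingest URL plus a stream key from the host site. A sketch of a typical push with ffmpeg (shown as a dry run; the URL and key are placeholders, and your encoder settings will differ):

```shell
INGEST="rtmp://a.rtmp.youtube.com/live2"   # example ingest URL (placeholder)
KEY="xxxx-xxxx-xxxx-xxxx"                  # stream key from the host site (placeholder)

# Shown as a dry run: drop the leading 'echo' to actually push the stream.
# -re paces a file at real time; for a capture device, substitute the device input.
echo ffmpeg -re -i program.mp4 \
  -c:v libx264 -preset veryfast -b:v 4500k -g 60 \
  -c:a aac -b:a 160k -ar 44100 \
  -f flv "$INGEST/$KEY"
```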


There is a wireless handheld mic on the table, which I used to record introductions and announcements. I did not have a loudspeaker on this occasion, which was a big mistake. When people speak into a mic, they expect to hear the amplified sound (as does the audience), and its absence is unnerving to them.



Edited by Ed_Ingold

  • 2 weeks later...

Since the beginning of September I have live-streamed half a dozen concerts using a PTZ camera, from vocalists to chamber music to a rock band. Doing it all in real time presents some challenges, but it's been a learning experience. Using a PTZ camera is like having three cameras in one, multiplying your ability to add variety to your broadcast. It also locks you down to a central location rather than behind a camera. You need a compact yet ergonomic set of tools and practices.

  • There is no substitute for controls you can touch and feel. While you can do most things with a computer screen or iPhone, everything is visual. A mixing board has both visual and tactile advantages. You don't need to look to dip a particular channel then return it to the center position. Likewise a joystick control makes handling a PTZ camera much easier than tapping a touch screen.
  • Preset camera positions are great but with caveats. (1) Transitions are not smooth enough to use in real time. You need a fixed view to cover the changes. (2) It's hard to remember which setting is which. I try to use rules like "left to right" and "medium to close" with reasonable success. (3) Groups like the rock band tend to move around, so the presets are merely "suggestions".
  • I find I can handle about 6 presets, but rely mainly on about 3.
  • When people move around, e.g., for solos, it is almost as fast to move the camera by hand as to use a preset and make adjustments.
  • In that case, I still use the appropriate preset, make the adjustments and re-save the preset with the new values. That way I can jump back to that shot quickly.
  • A joystick controls the change velocity, not the end point, and there are wide steps in the control effect. The controls are much more sensitive when the lens is zoomed out. You must tap the stick to make fine adjustments. It takes practice to do this quickly and efficiently. In case I forgot to mention it, there is a noticeable delay between the motion of the camera and what you see on the monitor, as much as 1/4 second.
  • Using two PTZ cameras is a temptation to move both cameras alternately. RESIST! The second camera is best used as a fixed view with benefits. I use it as a wide view which can be adjusted occasionally if there is a scene change. In a group, you never know exactly where they'll be until the audience is present and they go on stage. In live-streaming we have sound checks, not rehearsals.
  • Don't forget to switch cameras while you're manipulating the PTZ!

That done, you are probably ready for helicopter flying lessons, maybe an invitation to "Top Gun" flight school.


  • 2 weeks later...

Sooner or later you're going to want to add something special to your shooting - motion. After all, why should the subject have all the fun? Most of us look for interesting and unusual angles for otherwise routine subjects. If it works for still images, why not video? Having witnessed an attempt to live-stream a music recital, music-video style, with a bare camera held at arm's length, I thought it appropriate to come up with some suggestions. More than a little motion makes for a very unpleasant viewing experience.


First of all, you need to consider some means to support and smooth the camera motions. IBIS won't help much; it is designed to remove millisecond jiggles, not deliberate or accidental motion. You can start with a shoulder mount, something with double grips to keep from "Dutching" and to free up one hand for focus or zooming when needed. A skilled operator can walk, Groucho Marx style, without a lot of up-and-down action. Mostly, though, a shoulder mount is best used standing still, like a human tripod. It helps that the weight is borne by your shoulder, not extended arms. The viewfinder will be out of reach, but you can compensate by attaching a magnifier to the rear screen, which will be within about 3" of your eye. I use a Hoodman, attached to the camera with a couple of big rubber bands.


If you want to move with the camera, and shoot from low to high positions, then a Steadicam rig is probably the best choice. The weight (which is substantial) is supported by a shoulder harness. The camera sits on a motorized gimbal, and its height is managed with a counter-balanced bracket. A more affordable and portable option is to skip the harness and rig a motorized gimbal on a stick. I have a Ronin S (medium duty) gimbal, which I put on a monopod. I can hold the monopod off the ground for mobility, raise it high above my head, or drop it down to ground level, with very little effort. Best of all, I can plant the monopod on the ground rather than hold the camera, lens and 3.5 pound Ronin at arm's length, a welcome relief every 5 minutes or so. The servo mechanism can be tuned to suit the job, from rock-steady to follow-motion with butter smoothness. The Ronin can be put on a tripod and programmed to quickly pan/tilt to several positions via an iPhone and Bluetooth. It can also be programmed to shoot a 3D panorama, taking a still shot in each position.


For streaming, you need some way to bring the video signal back to your controller. You can use an HDMI cable up to 25' long for 1080p, if you anchor it to the camera in some way. My camera has an L-bracket, which can hold a cable clamp. The camera connector is delicate, and you don't want the cable to come loose mid-program. A more elegant (and expensive) solution is to mount an HDMI-to-WiFi transmitter in the flash shoe, with a corresponding receiver at the switcher. Suitable systems run from about $600 to $5K. You pay more for low latency and signal integrity, as for news gathering (to the truck or satellite) or on a movie set.


Think first before taking this plunge. Moving around is distracting to the talent. Avoid unnecessary movement during a live performance. Secondly, you don't want to "star" in your own production via a static camera. You can do a music video type production with a single camera if the talent performs to a pre-recorded track, which you also use for the final cut. Even amateur musicians find this relatively easy, provided you don't look too closely at the lips or fingers. With pros, you can't tell live from fooling around.


If you haven't used a motorized gimbal before, use caution. Don't turn it on until the camera is balanced and everything is locked down. The grip must be held stationary at all times once powered up. If you forget and grab it by the camera, the handle will flail around like a wounded rabbit, possibly damaging the gimbal or hurting someone.


  • 3 weeks later...

I have managed to achieve greater flexibility with less desk space, not to mention less wear and tear on my bones schlepping the stuff. This setup gives me control over up to 4 SDI cameras, including a joystick controller for two or more PTZ cameras, and an 8-channel recorder with a mixing panel. It all fits on a 24x18" folding table. Each camera is recorded separately, with embedded audio from the mixer. Each audio channel is also recorded separately from a pre-fader source. A mixing board with sliders makes it much easier to make quick adjustments, including turning down the announcement mic (#6) as needed. I use faders rather than mute buttons, for smooth transitions without clicks or thumps.


Clockwise, from the top left: Atomos Shogun 7 4-channel recorder/monitor/switcher; Zoom F8n 8-channel audio recorder; Zoom FRC-8 control panel; PTZ Optics gen 3 IP camera controller. Not shown are a Teradek VidiU Go streaming encoder; a Netgear POE ethernet router (for PTZ control); and an iPad to monitor signal status and the destination websites.




I'm facing a problem that hardly existed before. How do I get the necessary gear from the van into the job site? In better times, I would load everything on a warehouse cart and roll it inside. These days, I'm recording in places that don't always have grade-level access, ramps, or elevators. Everything is schlepped by hand. A lot of small bags means multiple trips to the van. Microphone stands and tripods are heavy and bulky, with things sticking out all over. On top of this, the equipment needed for live streaming roughly doubles the things to schlep and set up. Cables are surprisingly heavy, and now I'm carrying SDI and ethernet cables in addition to microphone cables.


Using larger cases, with wheels, is a viable solution, with limitations. Large cases are heavier and harder to transport than typical gig bags. Even when I'm able to use a wheeled cart, my largest case takes up nearly the full bed length. Everything else must be stacked on top of it, and it takes careful packing to keep the load stable when rolling over doorstops, much less curbs and rough sidewalks and driveways. In the old days, most of my production gear fit in 19" rack cases, which stack nicely. Now none of my normal gear can be rack mounted.


I wish I could say "Problem solved!" But no, I'm still seeking solutions. In the short term, I'm looking at smaller cases which will stack on the cart and in my van. An acquaintance of mine ran his entire operation out of a van, via an arm-sized cable or two into the venue. That's tempting. It would cut the setup time in half, because all the connections would be at the distal end of the cable(s), and the production gear would stay in the van, ready to go.


Things in the event world are opening up, albeit gradually. However I doubt audiences will be the same for months if not years to come. My opportunity is to facilitate an audience at a distance for the performance (and vice versa), with no theoretical limits on its size.


  • 1 month later...

What is life without challenges? This week the challenge is to live-stream via Zoom Meetings with 3 cameras and 5 microphones. The event is a musical audition with possible interaction between the performer and the audition jury. The jury members will probably be working from home, and they are likely more familiar with Zoom Meetings than with other live-streaming hosts.


I need audio support for a piano (stereo), a small instrumental ensemble (stereo) and a singer (mono, discreetly placed). The camera angles will consist of a wide-angle overview (Sony A7iii + 16-35/4 Sony/Zeiss zoom) and two PTZ cameras covering the instrumental group and (of course) the singer. Two of the participants are Juilliard graduates, so this is a classy affair. Using a full-frame mirrorless camera is a relatively inexpensive way to get super-wide coverage (compared to $5K for a video-quality zoom lens for Super-35).


As usual, I will use SDI for the video, connected to an Atomos Shogun 7 recorder/monitor/switcher. The audio will be handled by a Zoom F8n recorder, with a stereo feed to the Shogun, embedded in its SDI output signal. I have a joystick controller for the PTZ cameras, and a small 8-channel mixer attached to the F8n recorder. This setup will allow me to get a full-resolution 1080p60 recording of each camera and the audio (48 kHz, 24 bit) for post processing. With this setup, the video latency is negligible with respect to the audio.


The Teradek VidiU Go is not directly compatible with Zoom Meetings, so I will use a laptop instead. I will use an AJA U-Tap SDI to USB-3 input adapter, which I have ascertained works with the Zoom program. The U-Tap appears in the drop-down boxes when you configure the video and microphone sources. There are probably other interface adapters which work too, but I can't attest to devices I don't have. I know that the inexpensive Black Magic micro SDI/HDMI recorder is NOT recognized by the software.


I am fortunate to have a high-speed ethernet connection. WiFi is unpredictable in a streaming situation, as anyone who has used Zoom Meetings from an iOS device can attest. I haven't found a way to use the VidiU Go as an access point to a cellular data network. This bears more research.


Setting up Zoom Meetings involves a few details if you want the best quality.

  • Video: Select the input device as the Camera
  • Video: Select the aspect ratio of your video, Standard (4:3) or HD (16:9)
  • Video: Do not "mirror" your video.
  • If you are recording for a client, rename your window with their name.
  • Audio: Select the input device as the Microphone
  • Audio: Turn Auto Volume off. Set the level manually using a test tone if possible. I use a 1000 Hz slate tone set at -20 dB, so that the Zoom meter reads 50%.
  • Audio: Set the "Original Sound" options on the advanced page
  • Audio: Disable echo cancellation, enable high-fidelity music, and enable "Use stereo audio"
  • Audio: Enable "Original Sound" in the main window, in the top left corner
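On the -20 dB slate tone reading 50%: that is consistent with a dB-scaled meter spanning roughly 40 dB, which is an assumption on my part, since Zoom doesn't document its meter scale. A quick sketch of the arithmetic:

```python
def dbfs_to_amplitude(dbfs):
    """Linear amplitude (1.0 = full scale) for a level in dBFS."""
    return 10 ** (dbfs / 20)

def meter_position(dbfs, meter_range_db=40):
    """Fraction of a dB-scaled meter, assuming it spans `meter_range_db` below 0 dBFS.
    (The 40 dB span is an assumption; Zoom doesn't publish its meter scale.)"""
    return max(0.0, 1 + dbfs / meter_range_db)

print(dbfs_to_amplitude(-20))   # 0.1 -- only 10% of full scale in linear amplitude
print(meter_position(-20))      # 0.5 -- mid-scale on a 40 dB meter, matching the 50% reading
```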

Someone needs to monitor the Zoom meeting from a computer or tablet, and communicate with the participants or host. Log in as a participant on a device separate from the one you are using for the meeting. Make sure the sound is not picked up by your production microphone(s) or audible to the presenter (the 1/2-second delay is stressful), preferably by using headphones or earbuds. You could use the slate mic or set up a mic for voice com, but then the others can't see you. It's easier to use a separate device.


If you do this professionally, buy a subscription to Zoom. It starts at $150/year, but gives you unlimited meeting time and more setup options.


  • 3 weeks later...

It seems like every job has new challenges, which keeps life interesting. Today I helped a client use a Slingstudio to live-stream to multiple destinations, including Facebook and Vimeo.


A Slingstudio is a compact solution when you need to control up to 4 remote cameras, use manual or automatic switching, and include graphics and play videos. There is one HDMI (Type A) input; other cameras are connected using WiFi. It is especially easy to use smart phones with built-in WiFi. Other cameras are connected through proprietary HDMI-to-WiFi adapters. The Slingstudio connects to the internet through WiFi or ethernet, but can send to only one destination at a time. Operation is controlled using an app on a smart phone or (better) a tablet.


The Slingstudio has an HDMI output, which can be programmed to transmit the program (switched) video, among other things, at the full resolution of the input. The output stream is usually scaled to lower resolution, limited by the internet connection. My thought was to connect this output to a Teradek VidiU Go hardware encoder, which also streams to only one destination at a time. However, I use the VidiU in conjunction with Teradek's CORE cloud service, which takes one input stream and distributes it to any number of destinations without imposing additional load on the source.


The VidiU has both HDMI and SDI input ports. In this case, I used an Atomos Shogun 7 to monitor and record the Slingstudio signal. The Shogun has both HDMI and SDI ports as well, and can transcode HDMI to SDI, which I sent to the VidiU. At first I used a Black Magic mini HDMI to SDI converter, so I could comfortably remain out of the way. However, no signal came through. The converter is programmable to fit any HDMI protocol, but I didn't have time to do it. Instead I used a 15' HDMI cable (the longest I care to use with HD video). This worked fine, and the show got off on time. As always, it's best to monitor the destination(s) to make sure everything is working.


There are other cloud services which can send an input to several destinations, including Vimeo. The Teradek/CORE solution has the advantage of bonding (load sharing) WiFi, ethernet, and cellular connections. On this day, the ethernet speed was only 5.6 Mb/s and variable, but I needed 9 Mb/s for HD. One cellular modem on the VidiU was enough to make up the difference.


Using a 15' HDMI cable proved to be a real stretch (LOL). I prefer 25', 50', or longer cables. I learned last week that there are HDMI cables which use fiber optics up to 200' (possibly longer). They have a transmitter at the source end, powered by HDMI, and a receiver at the display end. All the electronics are enclosed in the metal connector shells. After the show ended I tried a 50' FO cable, and it worked great, with none of the sparkles and jitter you sometimes get from regular cables. They've been around a long time at over $300 each. I bought two from Amazon for $50 each, along with full-size to mini and micro HDMI adapters. The FO cables are fast enough for uncompressed 4Kp60 (~ 12G SDI).


I was live-streaming a show choir concert last weekend, which was much more of a challenge than I anticipated. I record a lot of classical concerts, but the pace is much slower and the transitions more predictable. There are clear divisions between pieces and movements. A show is highly choreographed, with one scene running into the next with practically no separation. It takes a lot of planning and documentation to do a good job, and I'm really low on the learning curve.


There are limits to what a single operator can do. Not many jobs have the budget for a 3-5 person crew, nor the opportunity to rehearse in advance of the show. This is where I'm at, using two PTZ cameras and 1 or 2 fixed cameras.

  • Make a list of the key angles and framing. You can work from a script, consult with the director or producer, and observe during a rehearsal or warmup (I usually have only the latter).
  • Assign the key shots to the PTZ controller. There are 10 presets for each PTZ with a dedicated key (up to 254 if you use a computer).
  • List the preset numbers for each scene in an outline script.
  • Have a fixed camera on a wide shot, to be used to cover transitions in moving cameras.
  • Duplicate some medium or wide shots on two cameras to allow clean transitions to closeups.

In a perfect world, you would have a producer speaking into your ear something like "Camera two ready on set 3", then "Live on 2 in 5 seconds," then "GO on 2." Most of the time I wear several hats, drawing the line at running front-of-house (FOH**) sound. I'm sure cinematography classes cover most of these details. I have to rely on experience, books and the internet for tips. I will be fine-tuning things as I progress, but this is my outline.

  • Prepare a much-simplified script that you can follow with one eye, or rely on an inexperienced volunteer producer
  • Devise a standard set of terms for camera angles (e.g., used by the movie and broadcast industry)
  • Expect surprises!

I can say, unequivocally, that if you have two or more camera operators, you must have a producer. Unless the operators are telepathic, they will tend to do the same things, or act at the wrong times. This means you need a way to communicate, discreetly, so as not to interfere with the production. The console operator (especially a solo operator) has few opportunities to turn pages. It takes two hands, plus full attention of mind, eyes, and ears, even if other tasks are shared.


** I placed 16 microphones for this show, which I shared with the FOH guy. I have a 32/16 channel electronic splitter, so FOH and recording levels are independent. In other situations, I get a pre-fader feed from the house board with pretty much the same results. If possible, each microphone is recorded on a separate channel. FOH and recording mixes have completely different objectives.


Cables, cables, cables! There's no end to the type and length of cables you will need for live streaming, at least if you run a mobile operation. Except for extremely simple setups (or costly professional rigs), you will need to connect one or more cameras to a central location, including a video switcher, computer, encoder, or all three. Video is usually carried via HDMI, SDI (Serial Digital Interface), or occasionally ethernet. I like to have cables in 100', 50' and 25' lengths, because shorter cables are easier to lay down, and much easier to roll up when you're done.


I settled on SDI because it is relatively inexpensive, uses coax similar to that used for cable TV, and has BNC (locking) connectors. Moreover, it can be run up to 300' from the source. Most professional processing gear uses SDI. Not many consumer cameras have an SDI output, but HDMI to SDI adapters are readily available, powered by a USB battery or AC adapter. Nearly every camera, including mirrorless and DSLRs, has an HDMI output, so why not use it directly? The problem with HDMI is that many cameras have only a micro connector, which is weak and easily dislodged. HDMI cables capable of handling 1080p60 (or higher) tend to be thick and stiff, and are best kept to 15' or less for signal quality.


That paradigm has changed with the invention of affordable fiber optic HDMI cables, which are thin and flexible and, more importantly, can transmit 1080p60 for 100' or more. They are one-directional, with a transmitter at the source end, and a receiver at the destination end, both powered by the HDMI interfaces. If you need an SDI converter for compatibility, it can be located on your desk rather than near the camera. Some switchers, notably by Black Magic Design, have both HDMI and SDI inputs. BMD also has a very capable ATEM Mini which is HDMI only, with a built-in ISO and PGM stream recorder. For safety, I use a short HDMI adapter cable at the camera, and a strain relief to hold it securely.


I bought my FO cables from Amazon, which were delivered the next day. Other places may have better prices.


When you have the time, look up instructions on how to roll up cable (the over-under technique) so it doesn't have loops, kinks, or knots when you lay it down.

Edited by Ed_Ingold

  • 4 weeks later...

I have been asked on several occasions to incorporate special effects into a live stream broadcast, including graphics, PowerPoint presentations, pre-recorded sound and video, split-screen, and green-screen effects. Hardware with these features can run into five figures, and can require programming to execute more complicated functions. It would help to have a utility van or box truck to carry it around. It's worth exploring less expensive and more portable ways to get it done while meeting basic priorities:

  1. Clean inputs from multiple cameras some distance from the console, generally 25 feet or more.
  2. Full resolution recordings of individual inputs (ISO's) and the switched output (PGM).
  3. Camera switching capability
  4. Efficient, versatile effects and video processing
  5. Robust internet connection with a bandwidth of at least 6 Mb/sec

One solution I have used successfully involves the integration of a Slingstudio Hub. The Slingstudio is designed to handle switching of up to four windows, which can be connected to cameras, pre-recorded material, and graphics, loaded via a portable hard drive or flash drive. You can attach up to 10 cameras, but only four windows, including graphics, can be active at one time. There is one HDMI input port, and up to four HDMI adapters which connect to the Slingstudio through WiFi. You can also connect up to 4 (?) cell phones or tablets with a simple app via WiFi. In all, it is very powerful and a great timesaver because it does not have to be hard wired. Laying down and rolling up cable constitutes at least half of my setup time. The Slingstudio will also record the ISO's and PGM video on an SD card, but in a highly compressed MP4 format. The chief disadvantage is that the Slingstudio depends on a WiFi or wired ethernet connection to the internet. Secondly, it can only transmit to a single destination at a time.


In order to capture, record and switch high quality video, I use an Atomos Shogun 7 monitor/recorder/switcher for up to 4 SDI-connected cameras. I connect the Shogun HDMI output to the Type A HDMI (full-sized) input port on the Slingstudio. I connect audio from a Zoom F8n 8-channel recorder to the balanced stereo input on the Shogun, and the unbalanced stereo 3.5 mm jack on the Slingstudio. The video latency is on the order of 50 msec, and can usually be ignored. However, the output of the F8n can be delayed in msec increments if necessary.
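The lip-sync arithmetic here is simple; a quick sketch (using the ~50 msec figure above, which is an estimate rather than a spec) shows why the latency is usually ignorable:

```python
# Convert video-path latency into frames, to judge whether an audio delay is needed.
def frames_of_latency(latency_ms: float, fps: float) -> float:
    return latency_ms / 1000.0 * fps

# ~50 ms is 1.5 frames at p30 and 3 frames at p60 -- borderline noticeable,
# so delaying the recorder output by the same 50 ms restores sync exactly.
print(frames_of_latency(50, 30))   # 1.5
print(frames_of_latency(50, 60))   # 3.0
```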


The Slingstudio also has a Type C (mini) HDMI port for output, which can be configured to carry the PGM signal, as well as other signals. At that point, the audio is embedded in the video signal. I connect that HDMI output directly to a Teradek VidiU GO encoder and modem. As noted in earlier posts, the VidiU GO connects to the internet by WiFi, ethernet or cellular modem. All three are "bonded" by Teradek's CORE cloud service to work in cooperation for a robust internet connection. If WiFi slows down or drops out (unfortunately a common occurrence), one or both alternative services will automatically take up the slack. The CORE service can connect to at least 6 CDN's (destinations) simultaneously without imposing any additional burden on the VidiU GO.


For additional system confidence, I often connect an Atomos Shinobi HDMI/SDI monitor between the Slingstudio and the VidiU GO. The Shinobi will show the exact program stream output to the internet. You can never be too careful in real time programming.


All this requires some manual dexterity if you go it alone, about like juggling tennis balls while riding a unicycle. The Slingstudio is controlled (only) by a computer or tablet app over WiFi, from any place in range.


Instead of a Slingstudio, effects, graphics and pre-recorded material can be handled with Wirecast, VMix or OBS in a computer or laptop. The ethernet output is RTSP, which is not compatible with the VidiU GO. It is possible to capture the output window and audio. That's a little involved. More on that solution later.


  • 1 month later...

I finally had the confidence to use my Slingstudio on a real job in the field. My overall impression is quite favorable. The setup is easy, and I used Slingstudio WiFi camera adapters to attach two PTZ cameras on poles and a Sony A7Siii on a tripod. Since the Slingstudio has only one HDMI port, the adapters are the only way to connect up to four additional cameras. (You can connect up to 5 iPhones using a Slingstudio app, but only four cameras can be active at a time.) I prefer to record at 1080p60, but Slingstudio (and the adapters) downscale to p30, which does not seem to affect quality.


Ordinarily I would use SDI cables to bring video to a conventional video switcher. While the signal quality of SDI cable is excellent, the cables are fairly stiff and refuse to lie flat. Plus you need a separate cable for each camera.


The Slingstudio will switch, record and stream the output to the internet. Slingstudio uses WiFi or wired ethernet for streaming. In this session, I used it only as a recording tool. As I noted earlier in this thread, the Slingstudio has a clean HDMI output which can be used with a different encoder. I use a Teradek VidiU Go for streaming, since it has bonded connections for reliability, and can send to several destinations at once.


The Slingstudio has a very useful feature, automatic switching at fixed intervals, sequential or random. While this event was primarily a recording session, I needed to turn it around quickly for rebroadcast. Switching can be part of the story, or just for variety of angles. This was a solo piano with introductions, so the program could switch without regard to the content, except for the spoken parts. This gave me time to listen carefully and take notes, which are essential for post production.


It will also record to an SD card, or an SSD attached via a USB-3 dongle or USB-C. You can select one or more sources to record, including individual inputs, program (switched) output, a multiview screen (sources), and audio. The audio signal can be from each camera (yuck!) or a stereo source, and is also embedded in each video. You can use the individual inputs to patch the program video, or start from scratch (which takes about 4x real time).


As usual, I recorded the audio on a separate device, a Zoom F8n 8-channel recorder, with a stereo feed to the Slingstudio. I used two microphones under the lid, high and low strings, and a stereo ORTF pair on a tall stand about 6' away for "room" sound, plus a speaking mic. Each mic is recorded in a separate channel, and I spend an hour or two balancing, leveling and enhancing the sound for the final product. I used a small mixing board during the session, mainly because the speaking mic must be muted while the piano is playing, which would otherwise overload by about 20 dB.


All was not sunshine and happiness. Using PTZ cameras with a Slingstudio is difficult due to the video processing lag of about one second. It's like driving on black ice, where nothing happens at first, then boom. I used the remote control to fine-tune the camera position, one tiny bump at a time, then waited to see the response. While I can set and recall several positions for each camera, you would have to anticipate live action to follow it. Presets are okay for variety, but for real-time motion - forget it.


The Slingstudio and camera adapters can run up to two hours on battery power. Running power to the camera adapters defeats their purpose. This was a 5+ hour session, so I used portable USB battery packs to run the camera adapters. I hang them on the stand or tripod, and a 20,000 mAh battery is good for about 8 hours. Plan ahead, because the batteries take about 4 hours to charge.

I can find no Slingstudio operating manual, per se, but there are a dozen or so application and instruction videos on www.myslingstudio.com, which are well prepared and informative. Among my discoveries is that the Slingstudio has a low-latency mode for video conferencing. I will explore, and report if there are any drawbacks.


Other features included split screen, panel view and picture-in-picture. Each view can be created independently, and use other live views, video clips or graphics. However only 4 views can be ready at one time. The active views can be updated using drag-and-drop. There is also a fade-to-black option, which also silences the audio output. That's perfect for hiding the chaos prior to a broadcast, and the expressions of relief when it's over.


The premiere advantage of the Slingstudio, apart from its video processing, is the ease with which cameras can be attached remotely without wiring. I'm getting to hate laying down and rolling up hundreds of feet of cables. Cables are time-consuming, dirty, and present a navigation hazard to the talent and audience. I use carpet runners and/or yards of rigger's tape to provide a safe walkway if the cables can't be routed away from traffic. It is possible to use wireless connections for HDMI and SDI devices as well, which work at distances up to 400' (line-of-sight). 1080p60 devices are almost affordable at $1000 per pair. 4K devices cost 4-6 times as much. Teradek makes the most reliable devices I have found, but there are less expensive sets, which tend to have more latency and be subject to interference.


  • 2 weeks later...

Recordings made with a Slingstudio leave much to be desired. The quality is good enough for most purposes, although it is highly compressed. You have options to record the program (switched) stream, quad view, each individual camera (ISO), and audio.

  • The video streams are not necessarily synchronized, and can deviate much more than "lip sync" quality (~ 2 frames)
  • Only audio recorded by the camera is embedded in the ISO's (a Shogun 7 embeds both the original (if any) and master audio in each)
  • Re-syncing by watching the video only is difficult, and may be impossible. It is very simple to synchronize streams which have an audio track, even correct latency within each track.
  • All recorded tracks are divided at the 2 GB file-size point. Continuity of the video is clean and frame-accurate. External audio switchovers leave a gap of two or three frames. Grrr! If you want clean external sound, you have to record it separately and re-sync in post.
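Re-syncing by audio works because cross-correlating two tracks of the same event produces a sharp peak at their relative offset. This is a generic sketch of the idea (NumPy assumed; not specific to any recorder or editor):

```python
import numpy as np

def audio_offset(ref: np.ndarray, other: np.ndarray, rate: int = 48000) -> float:
    """Return the offset in seconds by which 'other' trails 'ref',
    estimated via FFT-based circular cross-correlation of mono tracks."""
    size = 1 << (len(ref) + len(other)).bit_length()   # zero-pad to a power of 2
    spec = np.fft.rfft(other, size) * np.conj(np.fft.rfft(ref, size))
    cc = np.fft.irfft(spec, size)
    lag = int(np.argmax(cc))
    if lag > size // 2:           # peaks past the midpoint wrap to negative lags
        lag -= size
    return lag / rate
```

Slide the late track earlier by the returned offset, then trim. Editors like Resolve and Premiere do essentially the same thing internally when you sync clips by waveform.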

Automatic sequencing of cameras is a very useful feature. It frees you to do other tasks, such as monitoring the streaming destinations, or simply checking your messages. For switching relevant to the action, turn AUTO off, and select the next source for preview, or double-click (double-tap) to move it to PROGRAM immediately. You want to hold on a speaker, presentation or soloist, but variety in a concert is often more important than any particular angle. For concerts, it's the audio that counts.


Even in low-latency mode, there is far too much video lag to use a PTZ camera as intended. I find it necessary to bump the joystick (or tap the software control) many times to get the shot centered and zoomed properly. You can set 10 or more preset positions for each camera, but it can take 30 seconds or longer to establish each shot from scratch. With or without latency, you cannot use a PTZ camera to follow action in a closeup. At best, you can use a medium shot, but nothing is as good as a camera on a tripod with a skilled operator.


Which PTZ presets to use depends on the situation and your personal taste. Try to establish a pattern which you can recall without consulting a shot sheet. I like to use presets 1-3 as a combination of wide and medium shots. If there are speakers or soloists in fixed positions, I use the second row (4-6) for 3/4 or close-up shots. With auto switching at 8-10 second intervals, there is ample time to select new presets while another camera is live. Avoid panning or zooming while the camera is live, unless there is a special (i.e., rare) artistic reason, and it can be done very smoothly.

