
Live Streaming Experiences


Ed_Ingold


I find myself well past the point of no return on the rocky road of live stream video. There are, of course, experts in this subject, and businesses built around providing streaming services for corporate meetings and educational venues. If you're willing to pay (or charge) upwards of $5K a day for a team, go for it. Working alone, or with at most one assistant on a tight budget, is on the challenging side. I am strictly mobile, going to where the work is. Nothing comfy like a studio, more like a table in a corner.

 

I'm not touting any particular brand or model. I hope to relate which features I find most important, and give some idea of the stumbles and solutions I found along the way. I can do UHD and DCI 4K, but HD is best for live streaming at this point in time.

  • Streaming is like a podcast, but in real time. Everything has to work right the first time around. You get a second chance with a delayed broadcast, but you lose the immediacy of the event.
  • If you use multiple cameras, you need a computer with multiple inputs, or a video switching device feeding a single adapter or laptop.
  • All the cameras and switching devices must be set to exactly the same resolution and frame rate! If you can't see a signal, read this line again.
  • The computer must re-package the video signal for streaming, and transmit the results. I use Wirecast for this task, on recommendation from others.
  • For HD, you also need a fast connection to the internet. Wired is best, but WiFi will work if you can reliably get 6 Mb/s upload speed (10 Mb/s for a safe margin).
  • If you want good sound, microphones on the camera won't do. They're not in the right position for good pickup, and camera microphones are hardly the best choice even if you could detach and move them.
  • I do mostly classical music concerts and contests, which typically employ between 4 and 8 microphones. In addition to feeding a stereo mix to the video stream, each microphone is recorded on a separate track, so you can really polish it up for delayed broadcast (or production).
  • It is best to use SDI connections for video. This is thin, shielded cable with locking BNC connectors at each end. Runs can be as long as 100 meters (328 feet) for 1080p60 video. HDMI is limited to 15 feet, the cables are stiff and unwieldy, and the connectors are easily dislodged. Nearly all professional cameras have SDI outputs, but HDMI to SDI adapters are relatively cheap ($50) for HD video.
  • SDI signals can be easily split and sent to multiple destinations, such as monitors and recorders.
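As a sanity check on the bandwidth bullet above, here is a quick sketch. The per-resolution bitrates are my rough H.264 assumptions, not vendor specs:

```python
# Ballpark H.264 streaming bitrates in kb/s (assumptions, not specs).
TYPICAL_KBPS = {
    ("720p", 30): 3000,
    ("1080p", 30): 4500,
    ("1080p", 60): 6000,
}

def required_upload_mbps(resolution, fps, headroom=1.6):
    """Stream bitrate plus a safety margin, in Mb/s.
    headroom=1.6 roughly matches the 6 -> 10 Mb/s margin above."""
    return TYPICAL_KBPS[(resolution, fps)] * headroom / 1000

print(round(required_upload_mbps("1080p", 60), 1))  # ~9.6 Mb/s
```

If your measured upload speed falls below the result, drop to a lower resolution or frame rate rather than risk a stalled stream.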

The switcher is the key device in the signal chain. You send one camera to the output, with another camera cued up for a switch. The transition can be a cut (instantaneous), cross-fade, or any number of creative variations called wipes. The best choice would be a hardware panel, with all the buttons, lights and gadgets in easy reach. However, hardware panels are large, heavy and expensive - impractical for mobile operations without a box truck and riggers. You also need a multiview monitor, showing what's on each camera, what's playing and what's cued up.

 

My first switcher was a Black Magic ATEM Studio Pro 4K. It's great, in that it does everything I need, and can be controlled by software or a hardware panel. The connection is by ethernet cable, so the computer/controller can be a long way off (but usually sits on top of the box, which also contains a monitor panel). Two problems emerged. First, I can't run the switcher software and Wirecast on the same computer: it takes too much time to switch screens, and Wirecast can get dicey if it doesn't have your full attention. Second, buttons on the virtual panel don't coincide with those on the multiview, so you have to look and think twice before pressing a "button."

 

Buttons on this ATEM itself don't serve a useful function. They control an auxiliary output, without any multiview or transition options.

 

My second iteration is an ATEM Television Studio HD, a simpler unit which serves as controller and switcher in one. There are four HDMI inputs and four 3G SDI inputs, for a total of eight useable ports. Buttons on the panel are red for live and green for cued, and correspond to the multiview. Unlike the new ATEM Mini, this one has a multiview output, which can be any TV or monitor screen with an HDMI or SDI input. I'm using an Atomos Shinobi, a 5" monitor intended for on-camera use. Everything fits in a small case, ready to deploy. However, neither unit can be rack-mounted, and rack cases are a lot easier to transport and set up in the field.

 

A third iteration looks promising - an Atomos Shogun 7. This is a portable monitor/recorder with a 7" screen, battery operated, with a removable SSD. The Shogun 7 has one HDMI and four SDI inputs, and can switch those inputs to a single SDI or HDMI output. Each channel can be recorded at the same time as the output (switched) channel. A 1 TB SSD would give you about 1 hour of five-channel recording. In addition to two audio channels for each camera, there are two balanced (XLR) analog inputs. Switching is via a touch screen, with various transition options. Recorded in this way, all the video and audio channels are synchronized - a huge time saver when editing. The 3000 cd/m^2 screen is bright enough to see in broad daylight.
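The storage arithmetic behind capacity estimates like this is simple; a sketch assuming roughly 220 Mb/s per stream (my ballpark for ProRes 422 HQ at 1080p60 - actual rates vary by codec flavor and frame rate, so real-world capacity can be shorter than the raw arithmetic suggests):

```python
def recording_hours(capacity_gb, streams, mbps_per_stream):
    """Hours of simultaneous multi-stream recording on an SSD.
    mbps_per_stream is the codec bitrate in megabits per second."""
    mb_per_second = streams * mbps_per_stream / 8  # total MB/s written
    seconds = capacity_gb * 1000 / mb_per_second   # treating 1 GB = 1000 MB
    return seconds / 3600

# 1 TB drive, five streams at ~220 Mb/s each:
print(round(recording_hours(1000, 5, 220), 1))  # ~2.0 hours
```

Plug in your own codec's bitrate to size a drive for a given concert length, and leave generous headroom.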



Perhaps I should have started with the application rather than an abbreviated gear review.

 

My interest in streaming technology started with a particular project. I was engaged to video stream live concerts in a small piano studio. The room is about 20' x 30' with a 12' ceiling. The customer wanted to use at least three cameras, and switch them in real time during the broadcast. The concerts involve one or two pianists, instrumentalists with piano accompaniment, and speaking - in various combinations. These are professional musicians, who also require studio quality sound, using 4 to 6 microphones. Some of the events will be studio sessions, but most will have a small audience of up to about 25 people.

 

I'm staying well within a budget of $5000, not including cameras and audio gear, which I already have and use for other projects as well. I leave behind enough infrastructure to make setup and strike fairly simple. (Anyone who has laid, taped and connected cables can sympathize.) There is one connection for each camera, and another for a computer connected to the internet.

 

Because the public is invited, wires and cables must be routed away from traffic areas, or completely taped down. I cross a doorway only as a last resort, so the cable runs range from 10 feet to 75 feet, on the outer walls, 3/4 of the way around the room. The layout and distances involved dictate the use of SDI video cable rather than HDMI. For audio, I routed an 8 channel snake to a stage box under the piano. That way microphone cables don't intrude on the public space and traffic areas. A small alcove in a rear corner is available for operations, and can be screened off from public view without obstructing my view of the talent.

 

Lighting is reasonably good, with floodlights in ceiling fixtures and several large windows facing east (no sun streaming in for afternoon or evening concerts). I have two LED panel lights with variable color for fill, to match the principal light in the room at concert time.

 

Camera placement and strategy is simple. I have a camera in each rear corner of the room, and a third, miniature camera, focused on the keyboard. That works for a single piano, with or without a soloist. Continual operation is usually not needed, other than occasional re-framing. For a piano duet, you need a camera on each soloist and one for a wide view showing both. So far I have not needed a camera operator, but that may change. How do you mic a piano (or two)? Ask any two engineers and you'll get five answers.

 

I hope this answers some questions, and invites comments.


I find that there are two versions of SDI (Serial Digital Interface), type A and type B (formally, 3G-SDI Level A and Level B), which are mutually incompatible. This is in addition to bandwidth capacity: HD-SDI (max 1080p30 or 1080i60), 3G (max 1080p60), 6G (max 4Kp30) and 12G (4Kp60 and up). Type B has two parallel data streams, one for each alternate raster line, typical of interlaced video for broadcast. Type A SDI combines those streams into one, with all the raster lines. Type B signals can carry progressive video, to be combined by the receiving device, generally with a slight delay. Lower-end cinematic cameras from Sony and Black Magic tend to use type B SDI. HDMI to SDI converters can usually be set to transmit either type A or type B.

 

Everything I do can be handled with a single 3G to 12G cable. High-end cinematic cameras may require 2 to 4 separate SDI cables, well above my pay grade.

 

Black Magic ATEM switchers accept both types, and can cross-convert depending on the output device. The Atomos Shogun 7 has a 4-channel switcher option, which accepts only type A for 1080p60 and above, but type B for single-channel use as a monitor/recorder. The Shogun switcher will work with type B SDI at 1080p30 or below. The switching option is SDI only, but either SDI or HDMI can be used for single-channel operation.

 

The Atomos Ninja V recorder/monitor is normally HDMI, but has an ATOMX SDI input/output option. The input type is automatically selected, but the output can be either A or B, with cross conversion.

 

There is nothing like a black screen with a warning to make your day, and start you on a quest for answers not even mentioned in the operating manuals.


A lot can go wrong when live-streaming video. The streaming software can freeze, you can drop the connection, and you can make mistakes switching multiple cameras. Except for the last, a recording of the stream can be used for delayed broadcast, cleaning up as necessary. I generally record each camera separately, at much higher quality than is possible over the internet (1080p60 vs 720p30 streaming). An hour-long broadcast with a single camera takes about 2 hours to edit, including the time it takes to render the results in H.264 format. I like to add titles to the stream in this process, which doesn't add much time. Editing two or more cameras into one stream takes much longer, typically between 3 and 6 hours. You have to decide where to cut, then make adjustments in order to fit the action or music better.

 

It is much simpler to record the live stream, cuts and all, in one pass. Wirecast can do this directly, but it demands more of the computer (it's recommended to stay below 40% CPU utilization), and only as good quality as the video being streamed. You can edit to clean things up, even add titles, but re-rendering means re-compression, which can cause artifacts.

 

I prefer to use a separate monitor/recorder for this job, a spare Atomos unit. Atomos recorders have loop-through capability, either HDMI or SDI, and can be inserted between the switcher and computer without loss. B&H recently had a special sale on the Atomos Shogun 7, which will record up to 4 input streams (SDI) plus the output when configured as a switching device. It will also record a balanced audio line input, in perfect sync with the video, which I take from a separate audio mixer/recorder. One downside is that a 1 TB SSD has only enough room for about 1.5 hours (ProRes 422) if all 5 streams are recorded. You are also limited to hard cuts in real time, which I find preferable to cross-fades. (How many times do you see cross-fades on TV or in the movies? Almost never.)

 

An M/E (mix/effects) switcher, such as the ATEM described previously, can add a variety of fades and wipes, in addition to picture-in-picture and fade-to-black. You also need a separate monitor with a multi-view, which shows each camera, plus pending and on-line views. This setup is a little more involved than the one-step approach cited above. It's not bad, considering you'd have needed a panel truck and a three-man crew to do the same job 10 years ago.


There are far simpler solutions to live streaming if your needs are modest. Going solo, you can stream directly to social media from a smart phone or tablet. The inexpensive app, Wirecast Go, can connect you to RTMP servers, with enhancements like extra graphics and titles. Wirecast Go can also mate with Wirecast running on a laptop, combining several smart devices with video cameras. For live switching of HDMI sources, there is the budget-priced, book-sized Black Magic ATEM Mini.

 

The next challenge I'm facing is a reliable internet connection in the absence of wired service or WiFi with adequate speed. I'm lucky to find an electrical outlet in 100 year old churches, much less internet. Forget using a smart phone as a hot spot. The service is expensive, relatively slow, and tends to drop out when voice traffic picks up. I'm looking into video-oriented cellular modems from Teradek and LiveU to fill this niche. Among other things, these devices can bond two or more services, and switch automatically if a bottleneck occurs. They are in another universe, performance-wise, compared to the black pill box modems you get from Verizon or AT&T, and are not necessarily tied to a particular carrier (it's optional). However, service is expensive, $3 to $10 per gigabyte, and an hour of HD video uses about 2 GB. I need to rent or borrow before making a decision. More on this later. Once some degree of normalcy is restored in Chicago, I'll post some photos of my setup(s).
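The data-cost arithmetic is straightforward; a quick sketch using the $/GB range and ~2 GB/hour figures quoted above:

```python
def stream_cost_per_hour(gb_per_hour=2.0, dollars_per_gb=(3.0, 10.0)):
    """Hourly cellular data cost range for an HD stream."""
    low, high = dollars_per_gb
    return gb_per_hour * low, gb_per_hour * high

lo, hi = stream_cost_per_hour()
print(f"${lo:.0f} to ${hi:.0f} per hour")  # $6 to $20 per hour
```

At those rates a two-hour concert runs $12 to $40 in data alone, which is why bonding in a cheap WiFi or wired connection matters.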


Video processing generally incurs a brief delay, on the order of 2 frames at 30 fps, but often more, which may or may not affect synchronization with the recorded sound. At some point lips no longer match words nor fingers the notes. 1/15th of a second doesn't seem like much, but that's two notes in a Chopin etude, or the time it takes sound to travel about 60 feet. The question is whether it occurs or not, and what to do if it does. It's easy enough to fix in post by slipping the sound relative to video in the timeline. In live streaming, you need another solution.
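To put numbers on the lag (the ~1125 ft/s speed of sound at room temperature is my assumption):

```python
def frames_to_ms(frames, fps=30):
    """Convert a frame count to milliseconds."""
    return frames / fps * 1000

def sound_travel_feet(ms, speed_ft_per_s=1125):
    """Distance sound covers in `ms` milliseconds (~1125 ft/s at ~20 C)."""
    return speed_ft_per_s * ms / 1000

delay = frames_to_ms(2)                 # two frames at 30 fps
print(round(delay, 1))                  # 66.7 ms
print(round(sound_travel_feet(delay)))  # about 75 feet
```

The exact footage depends on the speed-of-sound figure you assume; the point is that a two-frame lag is well into audible territory for music.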

 

Sony Alpha cameras give you the option of synchronizing the recorded sound with what you see over the top of the camera (Live), or what you see in the viewfinder (Lip Sync). The difference is just enough that you might find the lag noticeable if you're filming and listening over headphones at the same time. Unfortunately this setting affects the way sound is recorded to the video. Be sure to use Lip Sync, unless you're prepared to re-align the sound and video in post. I have not seen this option in professional video cameras, which appear to record only in "Lip Sync" mode.

 

The video switchers I discussed above have the option of recording sound from the cameras or from an independent source, through unbalanced (RCA) or balanced (XLR) connections. Wirecast software has similar options - embedded in the video stream or from a USB sound input device. Now that I have more time on my hands (not by choice), I plan to measure the effect objectively, and report back. All of the digital mixers and recorders I use can insert a delay in each channel and output, in 1 millisecond increments. That's close enough to deal with acoustic phase relationships, and more than sufficient to manage processing time lags.

 

Stay tuned.


  • 2 months later...

Please feel free to say "I could have told you so," but WiFi is unreliable for live-streaming. If at all possible, use a wired ethernet connection. I use an Eero mesh network in two venues, which tests perfectly compared to a wired connection, but it's not perfect when you use it for streaming. Using the diagnostic tools in Wirecast ("Statistics"), I see wide swings in the buffer length, 10 seconds or even more. The same buffer statistics with a wired connection run 100 msec or less. The other stats are reasonably steady, except bandwidth suffers, necessitating lowering the output stream quality.

 

Streaming is so much more demanding than ordinary computer use, even transferring large files over the internet. It's still not perfect, since home broadband is subject to demands in the neighborhood, not just in the household or business.

 

Most of the venues I will be working in don't have WiFi, much less ethernet, or it's so heavily filtered that it's useless. This includes nearly every church and public school in the Chicago metropolitan area. I'm trying out a modem/encoder by Teradek, a VidiU Go, which combines WiFi, ethernet and cellular into one unit, along with H.264 or H.265 encoding. For a nominal fee you can get "bonded" service, which shares all three media in a way that maximizes reliability and continuity. I'm using it with a special cellular data service rather than ordinary consumer 4G. The latter can throttle your speed unexpectedly.

 

Most of my clients are performing arts groups for which live broadcasting and streaming are likely to be essential in the foreseeable future. Even if venues open to concerts, audience participation is likely to be low until confidence builds. The challenge is to create a professional rendering, reliably, at reasonable cost.


  • 3 weeks later...

I decided to go with the Teradek VidiU GO encoder. With it, I can push twice the bandwidth to the internet using the same wired service as with Wirecast and a laptop, at the expense of losing lower third titles and graphics capability. As I mentioned before, a wired ethernet connection is best if it's available. The VidiU Go can handle up to two cellular modems. With one cellular modem, I can stream 720p30 HD for a data charge of less than $5/hour.

 

To further bolster the speed and reliability, I subscribed to Sharelink, a Teradek cloud service. This does two things: (1) automatically bonds multiple internet connections, sharing the load and providing a hot backup should one of the services slow down or fail, and (2) allows the stream to be broadcast to as many as 8 destinations simultaneously without loading your internet service. Sharelink costs about $2/GB on a pay-as-you-go basis. Sharing the connection with even a slow WiFi or ethernet connection reduces the cellular cost proportionately.

 

You can also share the load with up to 4 phones or tablets configured as hot spots. That chews up data plans like junk food, but you do what you gotta do.

 

Tomorrow I take streaming to the next level - lighting. Modern video cameras work miracles with all forms of lighting and a huge dynamic range, but good lighting adds a competitive edge to the production. The trick is to do it effectively with as little setup (and schlepping) as possible. I have some 10"x12" LED panels and umbrellas. More on this later.


Even though I have zero interest in video, never mind streaming, I'm finding this fascinating reading and very enlightening. Thank you and keep it up!

 

 

One area where I may be able to offer a suggestion is with regard to mobile broadband. My 'home' broadband is a WiFi to cellular router, due to a total lack of phone lines in my area.

 

I get significantly better connectivity (five times faster or more) by using a directional antenna pointed at the receiving cell tower. There are various phone apps available that can show which cell tower you are connected to and overlay its physical location on a map, allowing you to point the antenna.

Edited by steve_gallimore|1

Thank you, Steve. I see some portable Yagi antennas which give about 10 dB gain (about 10x in power). I'll keep that in mind if I find myself in problematic areas. In general, cell towers are well distributed in the Chicago metropolitan area, but there are exceptions in some suburbs. Cell towers are considered "visual pollution," and a surprisingly large number of people think they cause cancer. Cell companies try to accommodate suburbs by disguising cell towers as trees. They're easy to find, because lodgepole pines are not that common in the largely oak forests of NE Illinois.
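For reference, antenna gain in dB converts to a linear power ratio as 10^(dB/10), so 10 dB is a 10x power gain and 14 dB would be about 25x:

```python
def db_to_power_ratio(db):
    """Convert a gain in decibels to a linear power ratio."""
    return 10 ** (db / 10)

print(db_to_power_ratio(10))           # 10.0
print(round(db_to_power_ratio(14), 1))  # 25.1
```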

If you use more than one device simultaneously, including separate audio and video, you must take latency into account. Video will always lag real time by a minimum of 2 frames (@30 fps) due to processing time in the camera. Of course there is a considerable delay between the broadcast signal and appearance on the live-streaming site, from 4 to 30 seconds or more. Fortunately this does not affect audio synchronization.

 

Audio recorded in the camera may or may not be synchronized. Sony A7xxx cameras have the cryptic option for audio of "Live" or "Lip-Sync". The former passes audio through the camera without delay, making it easier to monitor the sound while directly observing the subject. The audio is still injected into the recording or HDMI signal, but leading the video by about 2 frames. You must use "Lip Sync" for the audio and video recording to be synchronized.

 

Each subsequent processing step adds further latency, including encoding for streaming, which may vary depending on the type of compression. Audio embedded in the video stream is not affected by downstream processing, so it behooves you to inject and synchronize the audio as far upstream as possible. I prefer to inject audio from a separate mixer as a separate input to the switcher. The latency is on the order of 2 frames, which can be ignored, or a delay can be added at the mixer output. If you are streaming from a laptop (e.g., using Wirecast or OBS), use the audio in the incoming video stream. If you use the mixer as an I/O device, expect huge problems synchronizing it with the processed stream.

 

All is well and good if all the video signals arrive at the same time, or within 2 frames. I find that the latency of direct connections, HDMI or SDI, is insignificant. I also find negligible latency in high-quality WiFi video connections. There is a significant exception: processing delays in iOS devices (and probably Android, et al.) can range from 2 to 10 SECONDS. Not only is the sound awry, but the video latency is too great to match with other video sources. Audio recorded in the iOS device is synchronized with its video, but the output stream won't match the others.

 

Nearly anything can be fixed in post for re-broadcast, even variable lag between cameras. Make sure you record the ISOs (isolated recordings of each source), the Program stream (switched output) and audio. I always do this because I can fix any problems in Premiere Pro, including color grading, and upload full 1080p60 without the quality compromises required for reliable live-streaming. I use the Program recording if possible, to save time. However I can do the switching more precisely in Premiere Pro - it just takes about 3x real time to execute. I usually mix multi-track audio in post and add it to the edited video.

 

CAVEAT: Invest in a good set of sound-isolating headphones. I use Sony noise-cancelling phones for extreme isolation. That way you can ignore the delays while watching a monitor. Make sure no delayed sound is heard by the talent (or picked up by the microphones). That can bring the session to a sudden and dramatic halt.


I've been on Zoom with multiple people for conferences. Suddenly, the voice received slows down and then stops after a few seconds. Then about ten seconds later it picks up again. Do you know what could cause that and how to correct it? Good luck with your live streaming. It seems really complex to do it right.

You probably have a problem with your connection speed. You can check it with a free utility such as speedtest.net. If you have the option, use a hard-wired ethernet connection to your modem/router. WiFi can look good in burst mode, but uploading speed tends to get flaky with continuous streams. Your connectivity is also affected by other users on your service, which has been problematic since everyone is working from home or streaming out of boredom. There aren't any options in Zoom that can modify the bandwidth needed. Voice is affected the same as the video, since it is VOIP.

A WiFi channel change can help if someone nearby is using the same one. Your phone should skip channels if that occurs, but not always reliably. If you're using a VPN, you might need to turn it off temporarily.

 

Let's take this off line, in messages.


How do you know if the sound and video are synchronized? What can you do to fix it?

 

The best way (short of time code) to synchronize multiple video streams and a separate sound track is by comparing the sound tracks. This is a combination of visual and audio observations. Premiere Pro, for example, lets you inspect the audio waveform and listen as you step through one frame at a time. I look for a percussive sound near the beginning of the track, step through to find the onset of the sound, and place a marker on the track. This marker is also visible in the sequence, where it can be compared with all of the other tracks. You can click and drag a track, and the marker will snap to the marker in another track. There is a "Synchronize" command to automate this process, but I only use it if the time difference is large, and one marker is off-screen. The audio and video tracks for a file are locked and move together (unless you unlink them).

 

Occasionally the video and audio parts of a file will be out of sync. This can be caused by a setting in the camera or use of an external microphone. Digital tape would lose sync if there were a video dropout, and there were a lot of dropouts. Not all tape capture software locks the audio and video together. (I don't miss tape.) If I suspect a problem, I locate a visual cue which corresponds to an identifiable sound on the audio track. For a person speaking, the letters "b" and "p" work well. For musical instruments, I look for a key pressed or string plucked or bowed. Pianos are easy, violins are hard. A violin note starts soft and only reaches full volume after 4-6 frames. I watch for the bow to touch the string and move, and match this with the first trace of sound. Premiere Pro lets you set separate markers on both the video and audio portions.

 

Once you have marked the audio and video, you can "Unlink" them and slip one or the other until the markers "snap". Relink them, and you are good to go.

 

When live-streaming and inserting a separate audio stream, from a mixer for example, you don't have the luxury of post processing. You may have to add a delay to the audio (it's always the video which lags). The delay, in milliseconds or frames, must be determined empirically beforehand. I use a combination of Zoom and Sound Devices recorders, which allow a variable delay in the output. This presumes all of your video sources behave the same. I have problems with iOS devices, which may suffer excessive delays, probably due to processing. WiFi connections cause no significant delay, but BlueTooth is usually delayed 1/4 second or more.
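When two devices capture the same sound, the offset can also be estimated automatically by cross-correlating the tracks rather than eyeballing waveforms. A minimal brute-force sketch (real tools use FFT-based correlation on full sample-rate audio):

```python
def best_offset(ref, other, max_lag):
    """Return the lag (in samples) that best aligns `other` to `ref`,
    by brute-force cross-correlation. A positive lag means `other`
    starts late relative to `ref`."""
    def score(lag):
        # Sum of products of overlapping samples at this lag.
        return sum(ref[i] * other[i + lag]
                   for i in range(len(ref))
                   if 0 <= i + lag < len(other))
    return max(range(-max_lag, max_lag + 1), key=score)

# Toy example: a short "click" delayed by 3 samples.
ref   = [0, 0, 1, 4, 1, 0, 0, 0, 0, 0]
other = [0, 0, 0, 0, 0, 1, 4, 1, 0, 0]
print(best_offset(ref, other, 5))  # 3
```

Divide the sample lag by the sample rate to get the delay in seconds, then dial that into the mixer output.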


I had a Sony A7iii shut down due to overheating this weekend. This is the first time in two years, since I began using A7 cameras for video. We hear stories that it is a commonplace occurrence, but my experience is that most of the heating occurs with the battery and internal memory storage, usually within 30 minutes (the imposed time limit before requiring a restart). I use external power (via a dummy battery) and storage (an Atomos Ninja V via HDMI). It occurred after about one hour in a room without AC which reached 90 deg F. The temp limit was set to "Standard."

 

I increased the temperature limit to "Maximum", and hopefully the AC will be fixed before I record there again. The camera body was noticeably warm to the touch, but not scalding like the little Sony RX0.

 

The FS5 has a smaller sensor (Super 35), several times the internal space, and a cooling fan. I would use it more except fully caged and rigged, it weighs 17 pounds with the lens (PZ 18-115 f/4) and requires a heavy video tripod and head. The FS kit weighs just under 50 pounds with peripheral gear. Stripped down, it weighs about 7 pounds (sans battery), and might fit in my streaming travel kit with some rearrangement.

 

My "Travel Kit" is a 30" Thinktank "Video Production" roller with two cameras (A7iii and VX700), video monitor/recorders, switching and audio gear, batteries and accessories. At 75+ pounds it is a bit much to handle, but rolls on level surfaces and has stair climbing rails. It goes places I can't use my warehouse cart (e.g., private homes). It's a better choice, overall, than a half dozen smaller cases and several trips between the van and job. Best of all, every piece of gear and cable has its place. That fact alone cuts my setup and strike time in half.

 

This is my audio setup, used in this case with the customer's Slingstudio encoder, two cameras and two iOS devices. My preferred setup includes an 8-channel Black Magic switcher, multi-view monitor, program video recorder, and either a laptop running Wirecast, or a Teradek VidiU Go wireless encoder. I'll try to get some photos.

 



I've had the opportunity to use an interesting device the last couple of weeks - a Slingstudio Hub (www.myslingstudio.com). This is a compact device which connects to up to 4 cameras using WiFi camera adapters, one camera via HDMI, and up to 4 iOS devices by WiFi. It is operated from a software control program under macOS or iOS. I've only used it with Apple devices. There may be other choices, but the website is down for maintenance. The hub receives the inputs, up to 1080p30, and encodes them for a single broadcast channel by WiFi or ethernet. The control software has a speed test which analyzes the signal quality and guides you to set a compatible output resolution, up to 1080p30. For that, you would need an upload speed of at least 12 Mb/s, 18 Mb/s for safety. The hub can run 1-1/2 to 2 hours on a detachable battery, or on AC.

 

There is a USB-C port and an optional dongle with 3 USB-3 ports and a gigabit ethernet port. Given a choice, the ethernet connection is best. The Slingstudio will not bond WiFi and ethernet for load-sharing.

 

As you see, it is a very powerful and versatile system which is highly intuitive and requires a minimum amount of cabling. The camera adapters can be up to 300' away, line of sight, but 100' is probably more likely in an urban environment. The adapters have a 1-1/2 hour internal battery, or can run from a USB power source (5 VDC).

 

Although you can configure up to 10 cameras, only 4 can be used at one time. Other sources can be dragged from a list into the multi-view panel at any time, even when live. Sources can be switched on a touch screen (iOS) or with a mouse (macOS). You can also set up automatic switching based on time. This is a great feature for a solo operator, and adds variety to the broadcast.
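Time-based switching is simple to emulate in software if a switcher exposes any control interface; a hypothetical sketch, where `switch_to` stands in for whatever call your device actually provides:

```python
import itertools
import time

def auto_switch(cameras, interval_s, switch_to, rounds=1):
    """Cycle through `cameras`, invoking switch_to(cam) every
    interval_s seconds. `switch_to` is a stand-in (assumption) for a
    real switcher control call."""
    for cam in itertools.islice(itertools.cycle(cameras),
                                len(cameras) * rounds):
        switch_to(cam)
        time.sleep(interval_s)

# Dry run with print standing in for the switcher, and no real delay:
auto_switch(["wide", "left", "keyboard"], 0, print)
```

With a 15 to 30 second interval this gives a solo operator a varied program without touching the panel.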

 

There is an audio mixer window which lets you use audio from any or all cameras, direct or AFV (Audio Follows Video), or an analog input (-10 dBV) on a stereo (unbalanced) 3.5 mm jack. I feed that from a mixer, muting the camera feeds.

 

You have the option of recording any or all of the following channels: Multiview, Program (post switching), and individual channels (ISO's). Recording is done to an SD card or a USB device attached to the dongle. While a thumb drive may work, I recommend using an SSD. The SD card may be too slow, and thumb drives are slower yet. Any one of the video sources can be directed to a mini-HDMI output port. I think I can use that with a Teradek VidiU Go, for bonded internet and multi-destination broadcasting.

 

Video latency is very short (about 2 frames) via the WiFi camera adapters, HDMI, and an iPhone. Unfortunately, the video latency from the customer's iPad is about 6 seconds. I haven't found a solution yet, and SlingStudio has nothing to offer either. For everything else, I dial in about 70 ms of delay on the output of my mixing board for near-perfect sync.
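The ~70 ms figure falls straight out of the frame math, which is worth showing since it generalizes to any frame rate:

```python
# Why ~70 ms of audio delay lines things up: the video path adds about
# 2 frames of latency at 30 fps, so delay the board's output to match.

def video_latency_ms(frames: float, fps: float) -> float:
    """Latency contributed by a given number of video frames, in ms."""
    return frames / fps * 1000

print(round(video_latency_ms(2, 30), 1))  # -> 66.7 ms, close to the ~70 ms dialed in
```

At 60 fps the same 2-frame pipeline would only need about 33 ms, so re-measure whenever you change frame rates.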


It's been an interesting two days. I have a concert to stream in a couple of weeks involving a Klezmer band with 9-10 musicians and vocalists. I had to dig out my Midas rack mixer and 32/16 digital stage box. Their sound tech will use that box as a splitter, so she can mix FOH and I can record multitrack without affecting each other. Despite its small size, the mixer has all the capabilities of a studio sound board; there are almost limitless configurations, all of them wrong except one. I also need to make the live-streaming internet connection bulletproof, so I will use the Teradek VidiU Go and CORE cloud service, as mentioned above, which is fairly complex in its own right.

 

Not surprisingly, the best way to prepare is to set everything up, verify the configuration and operation, then check everything again. In other words, you must practice using your gear, much like a pianist memorizing a concerto. In the field, you must check every connection, both camera and audio, and make sure there is signal continuity and the levels are set. It's not hard to do, but it's easy to forget something, have a bad connection, etc. I'm tempted to make up a checklist and put it on a kneeboard like a Navy pilot.

 

In short, don't take your gear for granted, nor your knowledge of its intricacies. Looking for cables and adapters chews up precious time, and so does shaking knots out of long cables; show time doesn't change. Use the same care when it's time to pack up and go home. Put everything away in the right place and condition, as though you might need it in a storm. Believe me, every event has stormy moments.

 

Practice. Practice. Practice.


I forgot to mention a very important subject. Before you go on a job, make sure your software and firmware are up to date. Updating firmware often changes settings in the equipment, so you have to check every setting and menu item to make sure they are what you want.

 

I use iPads for remote control, and have been caught unawares when the software is no longer compatible with the hardware or DAW. A case in point is Avid EuControl, which allows DAW software like Pro Tools and Nuendo to be mirrored on an iPad. The app, Avid Control, updates automatically on the iPad, but you must manually update the host software in the DAW to maintain compatibility.

 

If you don't have time to check out upgrades thoroughly before a job, DON'T UPGRADE! That goes for macOS upgrades too. Windows is not the only thing prone to buggy updates.


I used the Teradek VidiU Go for the first time on a real project. It worked perfectly: despite an unreliable internet connection, I had no interruptions while streaming 1080p60 video. When Comcast went crazy, one of the cellular modems picked up the slack. I have two modems, but usually only one goes online if WiFi or Ethernet is available. There are other bonded encoders (e.g., LiveU), but in the same price range. Almost anything is better than the cellular modems you get at a phone store; phone-store data plans are expensive, and throttle you once you exceed 2 GB or so (about 1.5 hours of streaming).
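The "2 GB is about 1.5 hours" figure checks out if you assume a stream bitrate of roughly 3 Mb/s (my assumption for the math, not a quoted spec):

```python
# Sanity check on "2 GB is about 1.5 hours of streaming," assuming a
# stream bitrate of roughly 3 Mb/s (an assumption, not a quoted spec).

def streaming_hours(cap_gb: float, bitrate_mbps: float) -> float:
    """Hours of streaming a data cap allows at a given bitrate."""
    megabits = cap_gb * 8000           # 1 GB = 8000 Mb (decimal GB)
    return megabits / bitrate_mbps / 3600

print(round(streaming_hours(2, 3), 2))  # -> 1.48 hours
```

Run the same arithmetic against your own bitrate before a long event; at 6 Mb/s the same cap lasts only about 45 minutes.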

 

If you're freelancing, you never know what support you will get at the venue. Sometimes it boils down to paying the money up front for reliability vs making excuses on your way out the door.


I'm trying out a PTZOptics 30x SDI camera (Pan, Tilt, Zoom), which will give me more flexibility in a multi-camera shoot. While it's not especially light (3 lbs, 1.4 kg) or small (6.5" cubed), it is well within the load limit for a medium-duty light stand or tripod.

 

The most important attribute is, of course, remote control. I find myself in situations which make it hard to move about to operate or adjust a camera. A salon-type recital would be a typical example, often in a private home or small recital hall. When shooting a play, recital or large ensemble, it's best to shoot from a balcony, where you get a direct view of everyone on stage. About half the places I shoot don't have a balcony, but I have light stands up to 22' tall which achieve the same effect. At other times the balcony is full, especially in the first row. Then there is the opportunity to shoot from other angles while maintaining concert decorum (not being the guy dressed in black, hidden in the viola section).

 

Some of the positive features I'm exploring include...

  • Remote control via infrared (useless beyond about 15', or less in a large space)
  • Remote control via ethernet, including building WiFi or (more secure) a small WiFi transmitter. There is control software for mobile devices as well as a laptop.
  • Video via HDMI, ethernet (RTSP or RTMP/s), and SDI (preferred). I have an SDI/HDMI to WiFi remote with practically zero latency, if cable runs are impractical
  • PoE (Power over Ethernet) is possible (preferred). Alternatives include an AC power supply (last choice), or a video battery with D-tap connectors when cable runs are impractical or AC is not available.
  • 30x zoom lens, from 60 deg (~ 30 mm equivalent) to 2.3 deg (900 mm equivalent). Perfect for working from the back of a large auditorium.
  • Outstanding video quality and sharpness, even from a 1/2.7" sensor.
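Those zoom-range figures can be cross-checked with standard field-of-view math. Assuming the quoted angles are horizontal FOV against a 36 mm-wide full-frame reference (my assumption), the equivalents come out right:

```python
import math

# Checking the zoom-range claim: converting horizontal field of view to a
# 35 mm-equivalent focal length, assuming a 36 mm-wide reference frame.

def equiv_focal_mm(hfov_deg: float, frame_width_mm: float = 36.0) -> float:
    """35 mm-equivalent focal length for a given horizontal FOV."""
    return (frame_width_mm / 2) / math.tan(math.radians(hfov_deg) / 2)

print(round(equiv_focal_mm(60)))   # -> 31 mm, matching the quoted ~30 mm wide end
print(round(equiv_focal_mm(2.3)))  # -> 897 mm, matching the quoted ~900 mm long end
```

The ratio of those two focal lengths is just under 30, consistent with a 30x optical zoom.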

The "cons" of this camera include ...

  • No built-in microphone. Needs a line-level (-10 dBV) feed, which could be a battery-operated vlogging microphone.
  • Audio is embedded only in ethernet or HDMI, not SDI (stunning omission). Alternative: inject audio into the SDI stream prior to broadcast or recording.
  • Real-time internet preview works only in Windows, not macOS. Alternative: use an in-line SDI monitor.
  • No time code options. In order to sync video streams in post, you need time code, audio, or a recorder which handles all video streams at once.
  • Operating instructions are a "work in progress." In at least one instructional video, the presenters look things up while on camera.
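On the missing time code: the usual fallback is to align streams in post by cross-correlating their audio tracks. A sketch of the idea on synthetic signals (the 48 kHz rate and 20 ms offset are illustrative):

```python
import numpy as np

# With no time code, streams can be aligned in post by cross-correlating
# their audio tracks. A sketch on synthetic signals, not a full NLE workflow.

def audio_offset_samples(reference: np.ndarray, delayed: np.ndarray) -> int:
    """Return how many samples `delayed` lags behind `reference`."""
    corr = np.correlate(delayed, reference, mode="full")
    return int(np.argmax(corr)) - (len(reference) - 1)

rng = np.random.default_rng(0)
audio = rng.standard_normal(48000)                         # 1 s of "audio" at 48 kHz
shifted = np.concatenate([np.zeros(960), audio])[:48000]   # same audio, 20 ms late

print(audio_offset_samples(audio, shifted))  # -> 960 samples, i.e. 20 ms at 48 kHz
```

Most NLEs do exactly this under the hood when you ask them to sync clips by audio, which is why a scratch mic on every camera is cheap insurance.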

I've put in a lot of time filling in the blanks, and I hope to work with their support team in this endeavor. I'm sure I'll have more to say on this topic.


I forgot to mention an extremely useful feature. You can store up to 255 shots, including position (within 0.1 deg) and zoom level, for up to 255 cameras. Ten presets are available for each camera from a number pad. Once you have identified "targets," you can lock on in a couple of seconds. It takes about 15 seconds to set up a shot with a regular camera on a tripod, if you're good. Even so, rapid pans are distracting. You can hide the motion by switching to another camera, or by freezing the frame prior to the move. The former method is aesthetically better, but takes practice to coordinate.
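For the curious, preset recall is a one-packet affair over the camera's VISCA control channel. The command bytes below follow the standard VISCA preset-recall form; the UDP port and the IP address are assumptions, so check your camera's manual before relying on them:

```python
import socket

# Sketch of recalling a stored PTZ preset via VISCA over IP. The command
# bytes are the standard VISCA preset-recall form; the port (UDP 1259)
# and address below are assumptions to verify against your camera's manual.

def visca_recall_preset(preset: int) -> bytes:
    """Build the VISCA 'recall preset' command for camera address 1."""
    if not 0 <= preset <= 254:
        raise ValueError("preset out of range")
    return bytes([0x81, 0x01, 0x04, 0x3F, 0x02, preset, 0xFF])

def send_preset(camera_ip: str, preset: int, port: int = 1259) -> None:
    """Fire the recall command at the camera over UDP."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(visca_recall_preset(preset), (camera_ip, port))

# send_preset("192.168.1.50", 3)   # needs a real camera on the network
print(visca_recall_preset(3).hex())  # -> 8101043f0203ff
```

This is what the control software is doing for you when you tap a numbered preset, which is why the lock-on takes only a couple of seconds.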

 

For a church service, I might designate the pastor, reader, choir director and choir. For a symphony concert I might spot the conductor, principal violin, violin section, wind section, etc. - sections as well as frequent soloists. The B-roll would be a wide or medium shot.

