
Unified Communications: being able to connect enterprise devices isn't the real goal.

The continued evolution of unified communications (also known as unified communications and collaboration, or UCC) has drawn on key lessons from the videoconferencing, Voice over Internet Protocol (VoIP), and streaming industries. After all, if the goal is to unify all voice- and video-based enterprise communications--from live conferences to searchable on-demand streams, delivered to any type of end-point device--there needs to be collaboration among vendors in all three industries.

This Buyers' Guide attempts to provide insight into UCC trends and the impact that each may have on purchasing products and services that combine streaming with other forms of enterprise video. There's another Buyers' Guide in this year's Industry Sourcebook that deals with enterprise video platforms (page 162), and the two guides can be used hand-in-hand when shopping for enterprise streaming video solutions.

Let's look briefly at how videoconferencing solutions are merging more tightly with streaming solutions.

Just Another End Point

One newer approach to merging streaming media and videoconferencing has been tried numerous times, and in numerous ways, over the almost 20-year history of collaborative streaming.

The controller for multipoint videoconferences--the kind made by Cisco, Lifesize, Polycom, Tandberg, or Vidyo--is called a multipoint control unit (MCU). Multiple conference rooms connect to the MCU, with each of these rooms or "end points" able to see the other rooms, although not at the same time.

If only three or four end points join, the resulting screen image can accommodate all rooms in a tiled, Hollywood Squares format. Otherwise, switching between rooms is determined either by the host of the event (known as "chair control") or by an algorithm that tracks who is speaking and switches accordingly (known as "video follows audio," since someone at a particular end-point room typically needs to speak for 3-5 seconds before the video switches over to that room).
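
To make the switching logic concrete, here's a minimal sketch in Python of a video-follows-audio switcher. The hold time, audio threshold, and end-point names are illustrative assumptions, not taken from any vendor's MCU.

    import time

    class VideoFollowsAudio:
        """Switch the on-screen end point only after a new speaker has held
        the floor for `hold` seconds, avoiding flip-flops on short remarks."""

        def __init__(self, hold=3.0, threshold=0.2):
            self.hold = hold                # seconds of sustained speech required
            self.threshold = threshold      # audio level that counts as speaking
            self.on_screen = None           # end point currently shown
            self._candidate = None          # challenger waiting out the hold time
            self._candidate_since = None

        def update(self, levels, now=None):
            """levels: dict of end point -> normalized audio level (0.0-1.0).
            Returns the end point whose video should be shown."""
            now = time.monotonic() if now is None else now
            loudest = max(levels, key=levels.get)
            if levels[loudest] < self.threshold or loudest == self.on_screen:
                self._candidate = None               # no challenger; keep current view
            elif loudest != self._candidate:
                self._candidate = loudest            # new challenger; start the clock
                self._candidate_since = now
            elif now - self._candidate_since >= self.hold:
                self.on_screen = loudest             # spoke long enough; cut over
                self._candidate = None
            return self.on_screen

    # "boston" must stay the loudest for 3 seconds before the video switches.
    switcher = VideoFollowsAudio()
    switcher.update({"boston": 0.9, "tokyo": 0.1}, now=0.0)
    print(switcher.update({"boston": 0.9, "tokyo": 0.1}, now=3.5))  # -> boston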

The new-old idea is to add some form of streaming encoder or recorder to a multipoint video call; the encoder joins as an invisible end-point participant and records the call. Since H.323, the IP standard for videoconferencing, relies on a profile of the H.264 codec, it's then easier to package up the multipoint recording to publish to an online video platform or even a learning management system (LMS) for education or enterprise.
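
To give a sense of how light that packaging step can be, here's a quick sketch that calls out to ffmpeg to rewrap a recorded raw H.264 elementary stream into an MP4 container without re-encoding. It assumes ffmpeg is installed, and the filenames are invented for the example.

    import subprocess

    subprocess.run(
        ["ffmpeg",
         "-i", "call.h264",   # hypothetical raw H.264 recording of the call
         "-c", "copy",        # copy the codec bits; no transcode, no quality loss
         "call.mp4"],         # MP4 container most OVPs and LMSes will ingest
        check=True,           # raise if ffmpeg exits with an error
    )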

Sonic Foundry, a company with two decades of experience in rich-media recording and tight integrations with various LMS offerings, has recently launched a version of this end-point recording solution that moves a step closer toward a holistic unified communications approach.

Streaming UCC Graphics

Rather than just record the multipoint H.323 call, Sonic Foundry's Mediasite Join also records the H.239 synchronized data from the second screen.

Most videoconferencing setups today have one screen for viewing participants and another for the PowerPoint deck or webpages being shown by the call's moderator. This two-screen setup, based on a PictureTel idea from 2000 called People+Content, sent graphics to one screen and video to the other, replacing earlier solutions that used a video-based document camera or a VGA-to-composite video converter. Before that, those of us in videoconferencing had to rely on something called T.120, which was the basis for Microsoft's NetMeeting.

The two big issues with H.239 over the past 15-odd years have been how to capture the graphics in sync with the video without blowing the bandwidth budget, and how to decide which end point gets to deliver the graphics channel to all the other end points.

The first issue presents a practical challenge, since unified communications relies on guaranteeing that the talking-heads video and the simultaneous graphics stay in sync during on-demand playback. One approach was to record a second stream dedicated solely to the graphics channel, but that consumed a significant amount of bandwidth to record graphics that are mostly presentation slides--static images that might stay on screen for minutes on end.
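
One lighter-weight approach is to log each slide change as a timestamped event and simply look up the right slide for any playback position. Here's a minimal sketch of that idea; the timings and filenames are invented.

    import bisect

    slide_events = [              # (seconds into the recording, slide image)
        (0.0,   "slide01.png"),
        (95.0,  "slide02.png"),
        (340.0, "slide03.png"),
    ]
    times = [t for t, _ in slide_events]

    def slide_at(playback_seconds):
        """Return the slide that should be on screen at a given playback time."""
        i = bisect.bisect_right(times, playback_seconds) - 1
        return slide_events[max(i, 0)][1]

    print(slide_at(120.0))   # -> slide02.png, still on screen 25 seconds later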

For a number of years, then, the industry had to rely on two different quality encodes: one for talking heads and a higher-quality one for graphics.

The second issue has been addressed in H.239 with an inelegant solution: For multipoint conferencing, only one end point in the conference can send the additional graphics channel to all other end points. The upside is that this second channel is essentially a broadcast (unidirectional video) stream rather than a collaborative (bidirectional video) stream. That reduces the computational complexity at the end points, since they only need to decode this second channel, not encode one of their own.

The advent of high-definition (HD) cameras made this task a bit easier, since the resolutions of the graphics channel and an HD camera were roughly on par, but the impending arrival of 4K presents the daunting task of sending two simultaneous 4K streams.

Making UCC Content Searchable

Some solutions use a frame grabber to snag still images of the graphics in an attempt to keep the overall bandwidth down. This works as long as there is no full-motion video being delivered on the H.239 channel, so look for solutions that can differentiate between still images and full-motion video on the second video channel.
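
As a sketch of how that differentiation might work, the fragment below (using numpy) flags the second channel as full motion when consecutive frames keep changing, and as a still worth frame-grabbing when they settle. The threshold and window are invented for illustration.

    import numpy as np

    DIFF_THRESHOLD = 4.0   # mean absolute pixel difference that counts as motion

    def is_full_motion(frames, window=30):
        """frames: list of grayscale frames (2-D uint8 arrays).
        True if most recent inter-frame differences exceed the threshold."""
        recent = frames[-(window + 1):]
        diffs = [np.abs(a.astype(int) - b.astype(int)).mean()
                 for a, b in zip(recent, recent[1:])]
        moving = sum(d > DIFF_THRESHOLD for d in diffs)
        return moving > len(diffs) // 2    # majority of the window shows change

    # A static slide: 31 identical frames -> no motion, safe to grab a still.
    slide = [np.zeros((720, 1280), dtype=np.uint8)] * 31
    print(is_full_motion(slide))   # -> False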

A few videoconferencing-streaming hybrid solutions are even converting these still images into searchable text, which is what's genuinely new about the present crop of streaming-as-an-end-point solutions.

With recent advances in speech-to-text conversion, and the ability to time stamp text from the H.239 graphics capture, search now means putting the spoken word and the on-screen text into a single searchable context.

"Join allows viewers to use SmartSearch to find any spoken word, phrase or slide text," says Sean Brown, senior vice president at Sonic Foundry. "Not only can they search, but users can also quickly navigate video with Mediasite-generated slide previews and chapter headings."

Keep in mind that this search functionality takes a bit of time to generate, since the still images must be processed through optical character recognition (OCR), much the way a scanner converts a paper document into searchable text. But the power of cloud-based services means that many of these still-image grabs can be off-loaded and OCRed while the main videoconference is still being recorded, which limits the post-processing time needed to make the UCC content available on demand at an OVP, LMS, or enterprise portal.
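
To see how the two text sources come together, here's a minimal sketch of such an index: timestamped words from speech-to-text and OCRed slide text feed one lookup table, so a query can jump playback to the right second. The data here is invented; a real system would pull the transcript from an ASR engine and the slide text from an OCR library such as pytesseract.

    from collections import defaultdict

    transcript = [(12.5, "revenue"), (13.0, "grew"), (13.4, "eight"), (13.7, "percent")]
    slide_text = [(95.0, "Q3 revenue summary"), (340.0, "Roadmap 2017")]

    index = defaultdict(list)              # word -> [(seconds, source)]
    for t, word in transcript:
        index[word.lower()].append((t, "speech"))
    for t, text in slide_text:
        for word in text.split():
            index[word.lower()].append((t, "slide"))

    def search(query):
        """Every (timestamp, source) where the word was spoken or shown."""
        return sorted(index.get(query.lower(), []))

    print(search("revenue"))   # -> [(12.5, 'speech'), (95.0, 'slide')]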

Socializing Your UCC Solution

We covered quite a bit of this in a recent article, "Streaming Meets Unified Communications: Convergence Is on the Way" (go2sm.com/unifiedcomms), but a few points bear repeating.

First, social media platforms haven't yet figured out the enterprise and UCC, but they are a natural fit for presentation-centric video delivery. With the advent of Facebook Live, we're already seeing a number of how-to articles and tutorials around preparing for a live stream on Facebook. It's only a matter of time before we see synchronized graphics delivery--if the social media platforms realize the value of doing so.

Second, the lead-generation case for social media only gets stronger if there's a tight tie between UCC platforms and social media platforms. With business-focused social platforms like LinkedIn (now owned by Microsoft) beginning to offer on-demand video content, we think UCC buyers should ask these companies about their live video strategies. One question to ask is whether a traditional MCU-based videoconferencing system could easily be tied into a live social platform, allowing a much wider reach during live videoconferences.

Championing Interoperability

If there's one thing that unified communications systems attempt to do well--and generally succeed at--it is interoperability. From videoconferencing to VoIP solutions, the Session Initiation Protocol (SIP) and standardized codecs for both voice and video have been at the core of enterprise UCC solutions.

At the same time that the debate in streaming has raged on about H.264 versus H.265--or AVC versus HEVC, if you prefer the letters--the same discussions have been ongoing in UCC circles.

The UCI Forum, a group that worked to maximize the value of UCC via interoperability, assessed HEVC a few years ago, around the same time that it began looking at the interoperability of SIP on IPv6 networks (named, unsurprisingly, SIPv6).

In 2014, the UCI Forum merged with the International Multimedia Telecommunications Consortium (IMTC), and the IMTC has been busy attempting to find a balance for HEVC. Unlike the streaming world, though, which is more concerned with licensing HEVC at scale, the UCC world is interested in making sure that HEVC end points can interoperate with one another.

In the early days of H.264, years before AVC took hold in streaming, there were significant interoperability issues between H.263 and H.264 videoconferencing end points. These issues stemmed from codec incompatibilities as well as mismatches in processing power.

Even as videoconferencing moved toward a common H.264 codec, some end points weren't able to decode at the same quality as others. Maintaining compatibility meant settling for the lowest common denominator: whichever device had the weakest decoding and bandwidth capabilities set the de facto ceiling for every other end point, even those capable of double or triple the quality.
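
That negotiation is simple enough to sketch in a few lines; the end-point capabilities below are invented, but they show how one weak device sets the ceiling for the entire call.

    # Each end point advertises what it can decode; the call runs at the minimum.
    caps = {
        "boardroom": {"height": 1080, "kbps": 4096},
        "laptop":    {"height": 720,  "kbps": 1536},
        "old_codec": {"height": 480,  "kbps": 768},   # this one drags everyone down
    }

    call_settings = {
        "height": min(c["height"] for c in caps.values()),
        "kbps":   min(c["kbps"]   for c in caps.values()),
    }
    print(call_settings)   # -> {'height': 480, 'kbps': 768} for all three rooms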

Today, as we look at HEVC for unified communications, consortiums like the IMTC have established working groups to guarantee that we don't have the same level of interoperability issues that plagued UCC in H.264's early days.

The IMTC has set up the Scalable and Simulcast Video Activity Group (SSV AG) to address HEVC for UCC applications. The SSV AG has completed its H.265 HEVC Modes specification, which "defines a mechanism to enable devices with a lower display resolution, lower processing power and access to limited network bandwidth to participate in video conferences with high-end, multi-monitor telepresence systems while optimizing the bandwidth consumed by each leg of the conference."

In other words, scalable HEVC video. We saw scalable H.264 video, and the streaming industry almost embraced it; with the functionality baked into HEVC itself, there's a good chance the UCC folks will be able to teach us a trick or two about handling multiple simultaneous bandwidths.
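
Here's a minimal sketch of the layered alternative: the sender encodes once into cumulative layers, and the bridge forwards to each receiver only the layers it can afford, so the weakest end point no longer drags down the rest. The layer bitrates and receiver capacities are invented for illustration.

    layers = [                    # cumulative scalable layers, low to high
        {"name": "base", "height": 360,  "kbps": 512},
        {"name": "mid",  "height": 720,  "kbps": 1536},
        {"name": "top",  "height": 1080, "kbps": 4096},
    ]

    def layers_for(receiver_kbps, receiver_height):
        """Forward every layer this receiver can decode; drop only the rest."""
        picked, total = [], 0
        for layer in layers:
            total += layer["kbps"]
            if total > receiver_kbps or layer["height"] > receiver_height:
                break
            picked.append(layer["name"])
        return picked

    print(layers_for(2500, 1080))   # -> ['base', 'mid'] for a mid-tier end point
    print(layers_for(9000, 1080))   # -> ['base', 'mid', 'top'] for telepresence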

By Tim Siglin

Tim Siglin is a streaming industry veteran and longtime contributing editor to Streaming Media magazine.

Comments? Email us at letters@streamingmedia.com, or check the masthead for other ways to contact us.
