Building online video players.
Understanding the Choices Available
Before going too far into the exploration, there are certain basic business requirements that need to be understood. Do you need a streaming player, or is progressive download a better option?
Progressive download is one of the easier features to support. These days, all HTML5 browsers natively support progressive download as an option for any H.264-encoded content. When choosing this option, you should understand what progressive download actually entails.
Progressive download allows a single video file to be downloaded to the client's computer and played back as it downloads. Playback does not need to wait for the entire file to finish downloading, but the features you can build into the player are extremely limited.
Adaptive bitrate (ABR) isn't an option; progressive download only allows for one file to be played.
Seeking presents user experience problems on some platforms, as it pauses playback (buffering) until the file has been downloaded as far as the point to which the user is seeking. This isn't an issue on platforms such as HTML5, which can use byte range requests, allowing seeking to happen nearly instantaneously.
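A rough sketch of that byte-range idea in JavaScript: convert a seek time into an approximate byte offset and build the corresponding HTTP Range header. The bitrate figure and the simple time-to-bytes arithmetic are illustrative assumptions; real players consult the file's index data (for MP4, the moov box) to find exact offsets.

```javascript
// Translate a seek position into an HTTP Range header so only the bytes
// from that point onward are requested. avgBitrateBps is an assumed
// average encoding bitrate in bits per second.
function rangeHeaderForSeek(seekSeconds, avgBitrateBps) {
  const byteOffset = Math.floor(seekSeconds * avgBitrateBps / 8);
  return "bytes=" + byteOffset + "-"; // open-ended: from the offset to end of file
}

// Seeking 60 seconds into a 2 Mbps file lands roughly 15 MB in:
console.log(rangeHeaderForSeek(60, 2_000_000)); // "bytes=15000000-"
```

A browser making this request (for example via fetch with a Range header) receives only the tail of the file, which is why seeking can feel nearly instantaneous.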
Also, it becomes difficult to protect the content, since by the end of the video, the entire file is sitting in the user's cache.
Ultimately, progressive download is a viable option for short-form video (usually described as videos less than 10 minutes long, although the vast majority of short-form video is under 2 minutes) that does not need to be protected. A short video can often be downloaded quickly, which negates the seeking issues, and if the content does not require protection, the lack of content protection is moot. Likewise, for videos only a few minutes in length, most modern internet connections can download the single available bitrate fast enough, which mitigates the lack of ABR.
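A quick back-of-the-envelope check of that claim, with illustrative numbers (a 2-minute video encoded at 2 Mbps, downloaded over a 20 Mbps connection):

```javascript
// How long a single-bitrate file takes to download in full:
// duration (s) * encoding bitrate (bps) / connection speed (bps).
function downloadSeconds(durationSec, videoBps, linkBps) {
  return durationSec * videoBps / linkBps;
}

console.log(downloadSeconds(120, 2_000_000, 20_000_000)); // 12 (seconds)
```

The entire 2-minute clip finishes downloading in about 12 seconds, long before playback catches up, so the single-bitrate and seeking limitations rarely bite.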
If the content is not short-form, or requires protection, you should consider a streaming option.
The nature of streaming video is that individual pieces of the video are downloaded and played back, as opposed to progressive download, which downloads the entire file. The benefit of streaming is that before any segment of the video is downloaded, you can decide what bitrate the player should play, enabling ABR logic. Additionally, seeking becomes much easier with streamed content, as there is no requirement to download any skipped content, which allows for a much more responsive experience. If you start at the beginning, watch for 30 seconds, then seek an hour forward, the 59 minutes and 30 seconds you skipped do not need to be downloaded, so the player can continue nearly seamlessly from the new point the user requests.
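The per-segment decision that enables ABR can be sketched as a small function: before each segment request, pick the highest rendition whose bitrate fits within the measured throughput, with some headroom. The bitrate ladder and the 20% safety margin here are illustrative, not taken from any particular player.

```javascript
// Renditions the stream offers, in bits per second, sorted ascending.
const renditions = [400_000, 1_200_000, 2_500_000, 5_000_000];

// Choose the highest rendition that fits within 80% of measured throughput,
// falling back to the lowest rendition when the connection is very slow.
function chooseBitrate(measuredBps, ladder = renditions) {
  const budget = measuredBps * 0.8; // keep 20% headroom for throughput variance
  let choice = ladder[0];
  for (const rate of ladder) {
    if (rate <= budget) choice = rate; // ladder is ascending, so last fit wins
  }
  return choice;
}

console.log(chooseBitrate(3_000_000));  // 1200000 (2.4 Mbps budget)
console.log(chooseBitrate(10_000_000)); // 5000000
```

A real player would feed this from a moving average of recent segment download speeds rather than a single measurement.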
The downside to building a streaming player is that it is inherently more complex and therefore requires more time to build and more code to run. However, for long-form content and content that requires protection, streaming is a requirement in nearly all cases.
PLATFORMS TO SUPPORT
Another common question is which platforms the content needs to play on. If you only need to support a desktop browser, that implies a significantly different level of complexity than if you need to support Android or iOS, set-top boxes, connected TVs, or gaming consoles.
Some of these platforms provide HTML5-compliant browsers, which makes progressive download an option; however, that is not true for all, or even most, of them. Understand that if a streaming application is required across multiple platforms, it will most likely require writing (or implementing, if you are using an off-the-shelf OVP) several different codebases.
There are many worthwhile OVPs and players on the market today that you should consider. Among them are Brightcove's Video.js, JWPlayer, Kaltura's player, Ooyala's player, thePlatform's player, and Adobe's Primetime player.
The available players, their licensing models, and the features they support will likely have changed between the time this article was written and when it was published. As you begin the process of building a video application, you should review all of the available options and determine if any of them are a good fit for your needs. If so, using an OVP is highly recommended. However, there are times when you'll find that none of the off-the-shelf solutions are a good fit, either for feature or economic reasons. In those cases, building your own player becomes the best solution.
Choosing to Build
After doing your research, you might find that your specific requirements do not fit well with any of the existing OVP solutions. In that case, you will likely decide to build your own solution. First you will need to determine which platforms you want to support and which technologies you want to use.
If you need to have your videos available for consumers to watch on their personal computers, you will need a desktop player. While there is a long and varied history of technologies supported on the desktop, today there are three primary platforms: Adobe Flash, Microsoft Silverlight, and HTML5.
Increasingly, companies are choosing to deploy a hybrid solution, which prioritizes one of the platforms and fails over to another in case the first platform is unavailable. For example, some choose to build their player to first attempt to play in HTML5, but if that is not available (for older browsers, as an example), they try in Flash.
The hybrid solution most commonly combines Flash and HTML5, as Silverlight is currently available on less than 60% of desktops worldwide, while Adobe Flash is still installed on more than 90% of all desktops.
To play video in Flash, you need an instance of the flash.media.Video class. This class can natively handle progressively downloaded video. With the addition of a streaming server, such as Adobe Media Server or Wowza Streaming Engine, playback of streaming video is also possible.
To build a Flash video player, you will need a class similar to the one shown in Figure 1.
In the constructor of this class, an instance of the NetConnection class is created and event listeners are added to the instance. The connect method connects the Flash player to a media server. In this case, it is called and passed null as an argument, indicating the file is being served from the local file system or from a web server and not from a media server. The NetConnection will fire a NET_STATUS event when the connection is ready.
The netStatusHandler method creates a switch over the event.info.code property of the NetStatusEvent object. There are dozens of different events that could be handled in this method, but the two most common are NetConnection.Connect.Success and NetStream.Play.StreamNotFound. The success event indicates that the NetConnection is ready. The StreamNotFound event indicates that a 404 Not Found error was returned when trying to play the video. In this example, when the success event is received, the connectStream() method is called.
The connectStream method creates an instance of the NetStream class called stream and has the NetConnection instance passed to it. Next, the client property of the stream object is set to this, which tells the stream where its callback methods are located. In this example, the onMetaData and onPlayStatus methods are the specific methods needed by the client.
Next, an instance of the Video class is created. This is the surface on which the actual playback occurs. The stream is attached to the video with the attachNetStream() method. Finally, the play() method of the stream is invoked and the video is attached to the display list.
This particular example shows a progressive download scenario. If this were a file from a media server via one of the Real-Time Messaging Protocols (RTMP, RTMPE, etc.), the connect method of the NetConnection object would be passed a URL to the server.
HTTP streaming to Flash is more difficult; it involves downloading individual files and handing them to the appendBytes method of the NetStream class. While you can do this on your own, it is much simpler to use a framework such as OSMF (Open Source Media Framework) if you need HTTP Streaming in Flash.
In HTML5, unlike Flash, playing a file via progressive download is trivial. Here is a simple example:
<html>
  <head>
    <title>Sample HTML5 Progressive Download</title>
  </head>
  <body>
    <video src="http://techslides.com/demos/sample-videos/small.mp4" controls autoplay></video>
  </body>
</html>
Inside an HTML page, a video tag is created and pointed at a valid MP4 file. Optional attributes of the video tag can indicate that user interface elements should be included (controls) and that the video should start automatically (autoplay).
Streaming to HTML5 varies based on the browser. Safari on OS X (and iOS) allows an HLS manifest to be specified as the source of the video tag. The manifest file (.m3u8) can specify several different video sources; the browser itself determines which source to play and when to switch between them. Many modern browsers (including Chrome, IE11, Safari 8, and Opera) support Media Source Extensions (MSE), which allow individual segments of video to be downloaded and handed to the browser. Before each segment is downloaded, computations can determine which bitrate to use, allowing adaptive streaming logic to be built into the application. While no production version of Firefox currently supports MSE, some prerelease versions do.
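As a concrete illustration of such a manifest, a minimal hypothetical HLS master playlist might list three renditions (all paths, bitrates, and resolutions here are invented for the example):

```
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=400000,RESOLUTION=416x234
low/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=1200000,RESOLUTION=640x360
mid/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2500000,RESOLUTION=1280x720
high/index.m3u8
```

In Safari, pointing a video tag's src at a file like this is enough; the browser fetches the per-rendition playlists and handles switching between them on its own.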
A basic MSE player looks like this (Figure 2). In this example, variables hold mimetype and codec information about the video. An instance of the MediaSource class is created and other variables are declared that will hold data references to the sourceBuffer, current segment number, and maximum segment number. As the page loads (as handled in the window.onload function), the startup() method is called.
In startup(), event listeners are added to the MediaSource instance and a reference is created to the video object that is declared in the HTML. Lastly, the MediaSource is set to be the source of the Video object.
When the sourceopen event (or webkitsourceopen, depending on browser implementation) occurs, the opened() method is called.
In opened(), the mimetype and codecs are concatenated into a variable, which is used to create a source buffer. An XMLHttpRequest object is created, which will be used to load the individual segments. The first request with the XMLHttpRequest object loads the initialization file, which is needed to inform the video tag about the type of video being played. An event handler follows, which is called automatically when the file is done loading. The contents of the response are read into a Uint8Array, which is appended into the sourceBuffer. An event handler is added on the sourceBuffer so it can react to the updateend event.
Lastly, the send() method of the XMLHttpRequest object is called, instantiating the request.
When the sourceBuffer is done accepting the header, the loadSegment method is called.
In loadSegment(), we check whether the current segment number is still less than the maximum segment number. (In this simple example, the video has 65 segments. In a more realistic scenario, the number and location of the segments would be specified in a manifest file.) If the segment is still in range, the getSegment() method is called with the segment number. When that is done, the segment number is incremented.
The getSegment() method creates a new XMLHttpRequest object and specifies the URL of the next segment in the open method. A similar onload method is used to read and append the segment. As each segment is appended into the sourceBuffer, the updateend event fires, which continues the process until all the segments have loaded.
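The segment-sequencing described in the last few paragraphs can be sketched in plain JavaScript, decoupled from the browser APIs so the control flow is easy to follow. The sourceBuffer stub, the segmentUrl() template, and the five-segment count are all illustrative stand-ins; a real player uses XMLHttpRequest and a genuine SourceBuffer, whose updateend event fires asynchronously (which is why the counter here is advanced before appending rather than after).

```javascript
// Sketch of the updateend-driven segment loop, with a stub standing in
// for the browser's SourceBuffer. All names and values are illustrative.
const MAX_SEGMENT = 5;   // kept small here; the walkthrough's video has 65
let currentSegment = 1;
const appended = [];     // records the order in which segments were appended

// Hypothetical URL template; a real player reads segment URLs from a manifest.
function segmentUrl(n) {
  return "video/segment-" + n + ".m4s";
}

// Stand-in for the SourceBuffer: appending "completes" immediately and
// fires updateend, whereas a real browser fires it asynchronously.
const sourceBuffer = {
  appendBuffer(url) {
    appended.push(url);
    this.onupdateend();
  },
  onupdateend: null,
};

// Request the next segment while any remain, then advance the counter.
function loadSegment() {
  if (currentSegment <= MAX_SEGMENT) {
    const n = currentSegment;
    currentSegment++;               // advance before appending: the stub
    sourceBuffer.appendBuffer(segmentUrl(n)); // re-enters synchronously
  }
}

sourceBuffer.onupdateend = loadSegment; // each updateend triggers the next fetch
loadSegment();                          // kick off after the init segment

console.log(appended.length); // 5
console.log(appended[0]);     // "video/segment-1.m4s"
```

The essential shape matches the walkthrough: the updateend event, not a loop, drives the next request, so the player never appends a new segment while the buffer is still busy with the previous one.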
While this simple approach demonstrates the basics of working with Media Source Extensions, modern applications routinely need far more complexity (ABR logic, multichannel audio, seeking, DVR, DRM, advertising, analytics, etc.). Fortunately, open source projects such as dash.js provide a framework that includes these features in an extensible form, so they can be customized as needed for each project.
As you can see, there are many different options to consider when deciding to implement a video player. One of the main things to consider is which browsers you need to support. If you can handle supporting just the latest HTML5 browsers, an MSE-based player will probably suit you well. If you also need to support earlier browsers, you should consider a plugin-based approach, such as Flash, or perhaps a hybrid approach which can use Flash for those without MSE capabilities, and HTML5/MSE for those with supporting browsers.
Jeff Tapper (firstname.lastname@example.org) is a senior technologist at Digital Primates. He has been building internet applications since 1995 and has authored or coauthored more than a dozen books on internet technologies. He was the initial architect of the dash.js project and is a frequent speaker at conferences around the world, including IBC, NAB, Streaming Media East and West, and many more. Jeff is based in New York City.
Date: Mar 1, 2015