
A Method of Inserting DASH Segments into an MMTP Stream for Switching Contents under a Hybrid Broadcasting Environment

1. Introduction

Due to the development of broadcasting technology and the emergence of various new broadcasting media, the broadcasting environment has changed from unidirectional transmission of A/V data to bidirectional transmission of legacy A/V data together with additional data. Data broadcasting technology has been standardized by major standardization organizations such as the Digital Video Broadcasting (DVB) project and the European Telecommunications Standards Institute (ETSI) in Europe, and the Advanced Television Systems Committee (ATSC), the Society of Cable Telecommunications Engineers (SCTE) and CableLabs in North America [1]. However, when transmitting additional data along with A/V data, a data broadcasting system places the additional data in the terrestrial signal, which is disadvantageous because of the limited bandwidth.

In order to solve this problem, various standards have been established for Hybrid Broadcast Broadband (HBB), an interworking service between broadcasting and broadband networks. Hybrid Broadcast Broadband TV (HbbTV), first standardized in Europe, is a standards consortium for harmonizing existing broadcast content with broadband content on the Internet to provide various hybrid services to users through Internet-connected televisions or set-top boxes. Likewise, ATSC provided an NRT service using idle frequency bands in the ATSC 2.0 standard [2]-[3]. Along with these standardization efforts, various studies have been conducted on hybrid broadcast services which use MPEG-DASH as the broadband technology and MPEG-2 TS [4] as the broadcast technology [5]-[6].

Recently, users have come to expect multimedia services that are not constrained by the transport network. However, MPEG-2 TS packets have a fixed length of 188 bytes, which is not appropriate for IP environments and is too small for carrying high-resolution contents such as UHD video sequences [7]-[9]. Therefore, ATSC has been developing the ATSC 3.0 standard [10] as a next-generation broadcasting system since February 2012. The ATSC 3.0 standard includes methods of supporting a hybrid broadcasting system combined with the Internet, which can provide various services in the form of Over The Top (OTT) delivery by bypassing the conventional unidirectional broadcasting service [11].

There have been other studies which share interests with ours, such as "Cross-Layer Fairness-Driven Concurrent Multipath Video Delivery over Heterogeneous Wireless Networks" [12] and "Social-Aware Rate Based Content Sharing Mode Selection for D2D Content Sharing Scenarios" [13]. However, their points of view differ from ours. The former focuses on transmitting the same content over heterogeneous wireless networks, so it is not suitable for broadcast-broadband convergence. The latter addresses content sharing between devices (D2D), so it is not applicable to broadcast environments.

Under a hybrid broadcasting system combined with the Internet, we conducted research on providing seamless service through network switching from an RF network to an IP network, and designed service scenarios that can be used in a hybrid broadcasting environment. To fulfill the requirements of these service scenarios, frame-level synchronization based on UTC time is critical, which is hard to achieve with the ATSC 3.0 standard in a DASH/MMT-based hybrid environment. Thus, we propose a signaling method which notifies terminals of what contents are to be received through the broadband network for each service scenario, and a method of synchronizing the contents acquired through heterogeneous networks.

In this paper, Section 2 describes the overall contents of the ATSC 3.0 standard, and Section 3 describes the service scenarios and the developed system for hybrid broadcasting. On the basis of the proposed hybrid system, Section 4 analyzes the experimental results, and finally future work is described in Section 5.

2. ATSC 3.0 delivery system

The ATSC 3.0 system consists of technologies for both broadcast and broadband. This section explains broadcasting and broadband systems in Section 2.1 and Section 2.2, respectively.

2.1 Broadcasting system in ATSC 3.0

As the first step of receiving a broadcasting signal, a receiver should access a low level signaling stream with a pre-defined IP address and port number to receive Low Level Signaling (LLS) data. The receiver can obtain information on the desired broadcasting signal through the Service List Table (SLT) located in the received LLS data. Section 2.1.1 describes the LLS and SLT in detail, and Section 2.1.2 describes the overall MPEG Media Transport (MMT) [14] stream signaled through the SLT.

2.1.1 Low Level Signaling

Low Level Signaling (LLS) data are encapsulated in UDP packets with a pre-defined IP address and port number. When the receiver starts receiving the broadcasting signal from the RF network, it first finds the LLS data by accessing the IP address 224.0.23.60 and destination port 4937, which are the pre-defined IP address and port number for LLS, and locates the SLT among the LLS data together with information about the broadcasting services currently being transmitted. By scanning the SLT, which contains the necessary information, a receiver can construct a service map that enables a user to select a service.
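
To make this step concrete, the following is a minimal POSIX C++ sketch (error handling omitted; not the paper's implementation, which ran on Windows where the Winsock equivalents apply): join the pre-defined multicast group and read one UDP payload, whose first byte is the LLS_table_id.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>

int main() {
    // UDP socket bound to the pre-defined LLS destination port 4937.
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(4937);
    bind(sock, (sockaddr*)&addr, sizeof(addr));

    // Join the pre-defined LLS multicast group 224.0.23.60.
    ip_mreq mreq{};
    inet_pton(AF_INET, "224.0.23.60", &mreq.imr_multiaddr);
    mreq.imr_interface.s_addr = htonl(INADDR_ANY);
    setsockopt(sock, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof(mreq));

    // Each datagram received here carries one LLS table.
    unsigned char buf[65536];
    ssize_t n = recv(sock, buf, sizeof(buf), 0);
    if (n > 0)
        printf("LLS table: %zd bytes, LLS_table_id=0x%02x\n", n, buf[0]);
    close(sock);
    return 0;
}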

The ATSC 3.0 standard defines the syntax of the LLS table as shown in Table 1 [10]. This LLS table, which includes the LLS data, is formatted as a bit stream and transmitted in UDP packets. The first byte of the LLS table is a unique identifier for the LLS table, and the name of the field is 'LLS_table_id'.

When "LLS_table_id" sets to 0x01, this means that payload includes SLT. SLT contains information for each services transmitted via broadcasting network to enable quick scanning and acquisition of thes service.

In the ATSC 3.0 standard, the SLT is defined in the form of XML and includes the following elements. An SLT consists of one or more Service elements. Each Service is designed in a hierarchical structure and has elements such as Simulcast TSID, BroadcastSvcSignaling, and SvcInetUrl. The BroadcastSvcSignaling element provides channel information for each service, such as the Protocol, PlpId, IpAddress, and UdpPort information [10].

Section 2.1.1 described the LLS and SLT; Section 2.1.2 explains the MMT standard, whose streams are located through SLT analysis.

2.1.2 MPEG Media Transport (MMT)

MPEG Media Transport (MMT) is a standard for transporting multimedia data. The MMT standard includes signaling, a transport protocol and an encapsulation format, and supports both unidirectional and bidirectional network environments.

Fig. 1 shows the end-to-end structure of MMT as defined in the MMT standard. The MMT end-to-end structure consists of an MMT sending entity and an MMT receiving entity, and the two entities exchange data through the MMT Protocol (MMTP).

The MMT sending entity receives contents from Package providers and/or Asset providers and provides these contents to the MMT receiving entity through MMTP.

The sending entity delivers a service to the receiving entity as follows. First, the sending entity receives contents. Second, it converts the contents into Media Processing Units (MPUs) with MMT's own encapsulation format. Finally, it packetizes the MPUs with MMT's own transport protocol, MMTP, and sends those packets through various networks to the receiving entity. While doing this, it communicates with the receiving entity using signaling messages. The receiving entity performs a similar process in the opposite direction.

For communication between the sending and receiving entities, both entities exchange signaling data through MMTP, and this signaling enables control of the transport and/or consumption of contents.

Fig. 2 shows the layered structure of MMT for encapsulating and delivering contents, and for signaling to control their transport and consumption. As shown in Fig. 2, MMT defines three functions: the Encapsulation Function, the Delivery Function, and the Signaling Function [14].

The Encapsulation Function defines the Asset and the Media Processing Unit (MPU) format, an independently consumable media data format based on the ISO Base Media File Format (ISOBMFF) [15]. An Asset is a group of MPUs which share the same Asset ID. An Asset can contain timed media such as an MPEG-2 TS file or an MP4 file, and/or non-timed media such as web pages, text, and pictures. To distinguish Assets from one another, each Asset has its own Asset identifier, which can be represented with a URI, UUID [16], etc. An MPU is a self-decodable and self-renderable media unit which inherits from ISOBMFF. An MPU can be generated not only from timed media but also from non-timed media. Every MPU in an Asset is distinguished by its sequence number and does not overlap with other MPUs on the timeline. When MPUs are delivered through MMTP, they are separated into small segments named Media Fragment Units (MFUs), each of which usually consists of one access unit (AU) or part of an AU.
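
As a rough illustration of the MPU-to-MFU step, the following sketch splits an MPU's bytes into payload-sized MFU chunks. This is a simplification: a real packetizer fragments on AU boundaries and adds MMTP payload headers, and the 1400-byte payload size is an assumption.

#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

std::vector<std::vector<uint8_t>> fragmentMpuIntoMfus(const std::vector<uint8_t>& mpu,
                                                      size_t maxPayload = 1400) {
    std::vector<std::vector<uint8_t>> mfus;
    for (size_t off = 0; off < mpu.size(); off += maxPayload) {
        size_t len = std::min(maxPayload, mpu.size() - off);  // last chunk may be short
        mfus.emplace_back(mpu.begin() + off, mpu.begin() + off + len);
    }
    return mfus;
}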

The Delivery Function defines MMT's own transport protocol, MMTP. MMTP is a packet-based transport protocol which allows distribution and convergence of contents, has its own packet format, and supports both streaming and download services in a single stream using multiplexing. MMTP also supports functions such as calculating network jitter and QoS management. MMTP has three modes: MPU mode, Generic File Delivery (GFD) mode, and Signaling Message mode. MPU mode transports MPUs and can be used for real-time services such as broadcast and streaming; GFD mode provides a download service, the counterpart of the NRT service in MPEG-2 TS systems; and Signaling Message mode exchanges signaling messages through MMTP.

The Signaling Function defines signaling messages which can be used to control the delivery and consumption of media contents. There are five main messages defined in the MMT standard, and many other messages are defined in its amendments. The five main messages are the PA, MPI, MPT, DCI, and CRI messages. The PA message includes various information about a Package; the MPI message includes Composition Information (CI); the MPT message includes information on the Assets in a Package; the CRI message includes mapping information between the Network Time Protocol (NTP) timestamp and the System Time Clock (STC) of MPEG-2 TS; and the DCI message includes device capability information.
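
A minimal dispatch sketch for these messages; only the MPT message_id (0x0010) is pinned down in this paper (Section 3.3), so the remaining branch is a placeholder.

#include <cstdint>
#include <cstdio>

void handleSignaling(uint16_t message_id) {
    if (message_id == 0x0010) {
        // MPT message: read the MP table to learn the asset packet_ids and
        // the MPU_timestamp_descriptors (see Sections 3.2 and 3.3).
        printf("MPT message\n");
    } else {
        // PA, MPI, DCI, CRI, or amendment-defined messages.
        printf("signaling message 0x%04x\n", message_id);
    }
}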

Section 2.1 described the LLS and MMT standards used in the broadcasting system of ATSC 3.0. Section 2.2 describes the broadband system, which provides DASH content using HTTP.

2.2 Broadband system in ATSC 3.0

2.2.1 Dynamic Adaptive Streaming over HTTP (DASH)

Section 2.2.1 describes the structure of DASH, a technical standard for streaming services over broadband networks. DASH is designed for adaptive streaming services under variable IP network conditions over HTTP, with its request and response functionality. Under the DASH scheme, a client downloads and consumes media files stored on a server using a manifest file named the Media Presentation Description (MPD) [17].

As shown in Fig. 3, a DASH service environment consists of a server and one or more clients. The server encodes each video at a variety of qualities, splits them into segments, which are video sequences divided into smaller time units suitable for HTTP transmission, and describes them in an MPD. A client first receives the MPD, chooses media of a suitable quality from the MPD according to its bandwidth situation, and repeats downloading and playback for the streaming service. The 'DASH File Generator' consists of an MPD Generator and a Segment Generator. The Segment Generator generates segments by dividing the video by quality and time, and the MPD Generator generates an MPD file which describes each segment with information such as location, resolution, and relationships.
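
The quality-selection step can be sketched as below (hypothetical structures; a real client also tracks buffer levels): pick the highest-bandwidth Representation that does not exceed the measured throughput.

#include <cstdint>
#include <string>
#include <vector>

struct Representation {
    uint64_t bandwidth;          // @bandwidth from the MPD, in bits per second
    std::string segmentUrlBase;  // resolved from SegmentTemplate / SegmentList
};

const Representation* selectRepresentation(const std::vector<Representation>& reps,
                                           uint64_t measuredBps) {
    const Representation* best = nullptr;
    for (const auto& r : reps)
        if (r.bandwidth <= measuredBps && (!best || r.bandwidth > best->bandwidth))
            best = &r;                   // highest quality that still fits
    if (!best)                           // nothing fits: fall back to the lowest quality
        for (const auto& r : reps)
            if (!best || r.bandwidth < best->bandwidth) best = &r;
    return best;                         // nullptr only if reps is empty
}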

As shown in Fig. 4, the MPD provides the information necessary for initiating media playback, and HTTP URLs which enable downloading media segments according to characteristics such as bit rate, language, and resolution.

In an MPD, the elements which provide location information for each media file are composed in a hierarchical structure, as shown in Fig. 5; the functions and roles of each layer are as follows. The top-level elements of the MPD describe the profile, service type, service start/end times, buffer-related information, and so on. A Period element may provide the start time and duration of each period, and information about its segments. An AdaptationSet element describes the language, maximum/minimum bandwidth, screen information, frame rate, etc. of a content. A Representation element shows a quality of the content, its bandwidth, segment-related URLs, and attributes. Finally, the Segment elements that are sub-elements of Representation include SegmentList, SegmentTemplate, and SegmentBase information.

3. Proposed hybrid broadcasting configuration

Section 2 described the broadcasting and broadband technologies applied in the ATSC 3.0 standard. Section 3 describes the service scenarios we assumed for our proposed hybrid broadcasting system based on the relevant standards, and the system implemented to verify those scenarios. Section 3.1 describes the service scenarios, Section 3.2 describes the sender side of the system, and Section 3.3 describes the receiver side of the system.

3.1 Service Scenario

There can be various use cases and service scenarios in implementing hybrid broadcasting systems. In this paper, we propose the following three service scenarios for hybrid broadcast broadband environments.

-- When a broadcaster transmits a content nationwide or globally, intermediate advertisements are inserted in the middle of the content depending on the locations of users. In other words, local broadcasters can replace the original ads with other ads they want to send. In this case, local broadcasters run a server for their own ads and let clients replace the ads based on our proposed system.

-- 3D contents usually consist of two video sequences, a left one and a right one. Thus, when a broadcaster provides a 3D service and/or a client wants one, the broadcaster needs to transmit two video sequences under limited bandwidth. Due to this limitation, one of the video sequences is usually reduced in resolution or size. However, the resolution and size of videos keep growing as technology evolves, and it does not seem possible to transmit both left and right videos at the same time without reducing their resolution and/or size. This case can be resolved by transmitting one video through the broadcast (terrestrial) network and the other through the IP network; synchronization between those contents can be achieved with our proposed system.

-- As clients want richer, more immersive and personalized services, they may want additional media which provide extra viewpoints, information, etc. Also, individual broadcasters may provide additional media with extra viewpoints which clients want to consume together with the broadcaster's main media. These cases can be implemented by receiving the main service stream through the terrestrial network from the broadcaster and receiving the additional service stream from the IP network with our proposed system.

In Section 3.1, we proposed three service scenarios for hybrid broadcasting. To demonstrate a realization of the proposed scenarios, this paper conducted an experiment on the first of the three, replacing intermediate advertisements inserted in the content. The sender structure and the broadband URL signaling method for the experiment are described in Section 3.2, and the client structure and the synchronization method are described in Section 3.3.

3.2 Sender structure

The sender-side structure of the hybrid broadcasting system proposed in this paper is illustrated in Fig. 6. As shown in Fig. 6, the sender structure consists of a Low-Level Signaling part, a Broadcasting part and a Broadband part.

The LLS/SLT Generator in the Low-Level Signaling part generates LLS/SLT information from the MPD URL information, channel, broadcasting IP address, and port numbers [10]. In the SLT XML, described in Section 2.1.1, there is a URL for Internet access to the service through svcInetUrl, and a field, urlType, indicating the type of file that can be downloaded through svcInetUrl. The values 0-3 in Table 2 below are existing values, with urlTypes such as the SLS and ESG servers. The urlTypes for the hybrid broadcasting service scenarios described in Section 3.1 are newly defined in Table 2.

Also, the MPU Generator in the Broadcasting part of Fig. 6 receives ISOBMFF content and generates MPU files as defined in ISO/IEC 23008-1 [14]. The MPT Message Generator receives the mpu_sequence_number and media type information from the MPU files, and generates the MP table. The MPT message consists of all or part of the MP table, which plays a role similar to the PMT in MPEG-2 TS systems. The MP table provides the identifier, position information, descriptors, Package_ID, etc. of each Asset in the corresponding Package. The UDP/MMTP packetizer receives the MPU files and MPT messages, and converts them into an MMTP stream, which is transmitted over UDP [14].

Finally, the DASH Segmentator in the Broadband part takes ISOBMFF contents as inputs, and generates audio and video segments with a constant duration and various qualities. The MPD Generator generates an MPD file from the URL information of the media segments, the codec, the resolution information, etc. The final outputs of the Broadband part are the MPD and the media segments, which are delivered over HTTP. A client receives the MPD and requests the media segments to be played [17].

3.3 Client structure

Section 3.3 presents the client-side structure for hybrid broadcasting, shown in Fig. 7. The client-side structure consists of two subsystems, named the Broadcasting and Broadband subsystems, respectively. A detailed description of those subsystems follows.

First, a client receives a broadcasting signal, and the received signal is delivered to the UDP Receiver module. The UDP Receiver acquires information such as IP addresses and ports, finds the LLS packets whose IP address is 224.0.23.60 and destination port is 4937, and delivers them to the LLS/SLT Parser. The LLS/SLT Parser obtains information such as the MMTP packet parameters of a broadcasting service and the URL information for a broadband service. The SLT Parser delivers the values of "SLT.Service.BroadcastSvcSignaling@slsDestinationIpAddress" and "SLT.Service.BroadcastSvcSignaling@slsDestinationUdpPort" of the service whose "SLT.Service@serviceCategory" is "0x01 (Linear A/V Service)" and whose "SLT.Service.BroadcastSvcSignaling@slsProtocol" is "0x02 (MMTP)" to the UDP Receiver. The UDP Receiver then filters MMTP packets with the slsDestinationIpAddress and slsDestinationUdpPort values. The MMTP Packet Parser filters the MMTP packets which include MMT signaling messages using the type (= 0x02) and packet_id (= 0x00) fields. The Signaling Message Parser finds the MPT message, whose 'message_id' field is '0x0010', and finds the 'packet_id's of the audio/video assets using the 'MMT_general_location_info' indicated in the MPT message. The MMTP Packet Parser then filters the MMTP packets which include audio/video assets using the 'type (= 0x00)' field and the 'packet_id (= asset packet_id)' field, and delivers the MMTP payloads which include media data to the MPU Generator, which re-creates MPU files from them. The Frame/Time Data Generator receives the MPU files as inputs, generates time information for each frame, and buffers them. Finally, the Decoder & Renderer gets frame data from the Frame/Time Data Generator according to the related time information.
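
The two-stage filtering described above can be sketched as follows (hypothetical struct; real MMTP headers carry more fields): signaling packets are selected by type == 0x02 and packet_id == 0x00, and once the MPT message reveals the audio/video asset packet_ids, media packets are selected by type == 0x00.

#include <cstdint>
#include <set>

struct MmtpPacket {
    uint8_t  type;       // 0x00: MPU (media), 0x02: signaling message
    uint16_t packet_id;  // stream identifier within the MMTP flow
    // ... payload omitted
};

bool isSignalingPacket(const MmtpPacket& p) {
    return p.type == 0x02 && p.packet_id == 0x00;
}

bool isMediaPacket(const MmtpPacket& p, const std::set<uint16_t>& assetPacketIds) {
    return p.type == 0x00 && assetPacketIds.count(p.packet_id) > 0;
}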

For the Broadband subsystem, the SLT Parser sends the MPD URL to the MPD Manager, and the MPD Manager acquires the MPD from the HTTP server via a request/response to the corresponding URL. Through the MPD, the corresponding media segments are then obtained from the HTTP server through the same request and response procedure.

The MPU timestamp descriptor in the MMT Package (MP) table, located in the MMT Package Table (MPT) message, provides the absolute presentation time of MPUs as an NTP timestamp [18]. Table 3 shows the syntax of the MPU timestamp descriptor in the MMT standard [14]. With 'mpu_sequence_number' and 'mpu_presentation_time', the MPU timestamp descriptor provides the absolute playback time of the first sample of each MPU in an Asset.
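
A sketch of parsing the descriptor body laid out in Table 3 (big-endian fields: a 16-bit tag of 0x0001, an 8-bit length, then pairs of a 32-bit mpu_sequence_number and a 64-bit NTP mpu_presentation_time):

#include <cstddef>
#include <cstdint>
#include <vector>

struct MpuTimestamp {
    uint32_t mpu_sequence_number;
    uint64_t mpu_presentation_time;  // 64-bit NTP timestamp (RFC 5905)
};

static uint64_t readBE(const uint8_t* p, int bytes) {
    uint64_t v = 0;
    for (int i = 0; i < bytes; ++i) v = (v << 8) | p[i];
    return v;
}

std::vector<MpuTimestamp> parseMpuTimestampDescriptor(const uint8_t* d, size_t len) {
    std::vector<MpuTimestamp> out;
    if (len < 3 || readBE(d, 2) != 0x0001) return out;   // not this descriptor
    size_t bodyLen = d[2];                               // descriptor_length
    for (size_t off = 3; off + 12 <= 3 + bodyLen && off + 12 <= len; off += 12)
        out.push_back({(uint32_t)readBE(d + off, 4), readBE(d + off + 4, 8)});
    return out;
}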

Synchronization within a single MPU inherits from ISOBMFF. In an ISOBMFF system, synchronization is achieved using 'timescale' and 'sample_duration'. The 'timescale' field of ISOBMFF specifies how many units one second is divided into, and 'sample_duration' indicates, in timescale units, how long a sample is displayed on the screen. These fields are used to calculate when to play a particular sample relative to the playback time of the first sample.
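
Combining the descriptor with these ISOBMFF fields gives the UTC presentation time of an arbitrary frame, as used later in this section; a minimal sketch:

#include <cstdint>
#include <vector>

// UTC time (in seconds) of frame 'frameIndex' inside an MPU: the MPU's absolute
// presentation time plus the summed durations of all previous frames, converted
// to seconds via the track timescale.
double framePresentationTime(double mpuPresentationTimeUtc,
                             const std::vector<uint32_t>& sampleDurations,
                             uint32_t timescale, size_t frameIndex) {
    uint64_t elapsed = 0;
    for (size_t i = 0; i < frameIndex && i < sampleDurations.size(); ++i)
        elapsed += sampleDurations[i];
    return mpuPresentationTimeUtc + static_cast<double>(elapsed) / timescale;
}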

In order to synchronize the MPUs of the MMT system and the segments of the DASH system, we newly defined a value of @schemeIdUri and the parameters/description of @value, located in a SupplementalProperty descriptor. The newly defined value of @schemeIdUri is "urn:mpeg:dash:utc:ntp-segment:2017", and the parameters/description of @value are as shown in Table 4.
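
For illustration, the descriptor could appear in an MPD as in the following hypothetical fragment. Everything except the newly defined scheme is standard DASH; the element names and values are examples only, and the timestamp is the NTP-seconds equivalent of the experiment's start time (2017-04-10T08:06:41Z), assuming @value carries NTP seconds.

<AdaptationSet mimeType="video/mp4" contentType="video">
  <!-- Newly defined scheme: NTP time of the first frame of the first segment -->
  <SupplementalProperty schemeIdUri="urn:mpeg:dash:utc:ntp-segment:2017"
                        value="3700800401"/>
  <Representation id="ad" bandwidth="3000000" width="1920" height="1080">
    <SegmentTemplate media="ad_$Number$.m4s" initialization="ad_init.mp4"/>
  </Representation>
</AdaptationSet>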

Fig. 8 illustrates the synchronization method between MMT and DASH. Synchronization between MMT and DASH contents is accomplished using UTC time. First, the MMT system uses the MPU_timestamp_descriptor located in the MP table to indicate the absolute presentation time, in UTC, of the first frame of the MPU whose mpu_sequence_number is specified in the descriptor. Second, the DASH system can use the NTP timestamp in SupplementalProperty@value, as we defined it, to indicate the absolute presentation time, in UTC, of the first frame of the first segment of the Adaptation Set to which the segment belongs.

For seamless replacement of MPUs with DASH segments, we calculate the deadline time for requesting the first segment with inequation (1).

UTC_current < NTP_segment - (@minBufferTime x @bandwidth) / bitrate_download    (1)

A buffer in a DASH system should hold segments whose total duration is larger than @minBufferTime. The deadline time for requesting DASH segments is calculated with inequation (1) above as follows. First, calculate the amount of data needed in the buffer by multiplying @minBufferTime by the @bandwidth of the corresponding Representation. Second, divide this amount by the download bitrate (bitrate_download), which can be measured while downloading the MPD. Finally, subtract the result from the NTP time for the segment (NTP_segment).

In the MMT system, the UTC time of the currently presented frame (UTC_current) is calculated as follows. First, divide the duration of every previous frame by the timescale. Second, add all of them to the MPU presentation time.

Thus, a client should request the first DASH segment before the UTC time of the currently presented frame becomes larger than the deadline time.
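
Putting inequation (1) into code, under the assumption that all timestamps have already been converted to seconds on a common UTC timeline (the function and variable names are ours, not the paper's):

#include <cstdio>

// Deadline for requesting the first DASH segment, per inequation (1):
// the segment's NTP start time minus the time needed to fill the buffer
// (@minBufferTime x Representation@bandwidth bits) at the measured bitrate.
double segmentRequestDeadline(double ntpTimeForSegment, double minBufferTime,
                              double representationBps, double measuredBps) {
    double neededBits = minBufferTime * representationBps;
    return ntpTimeForSegment - neededBits / measuredBps;
}

int main() {
    // Hypothetical numbers: 2 s @minBufferTime, 3 Mbps content, 20 Mbps link.
    double deadline = segmentRequestDeadline(100.0, 2.0, 3e6, 20e6);
    printf("request the first segment before t = %.2f s\n", deadline);  // 99.70 s
    return 0;
}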

In Section 3, we presented the assumed service scenarios and the sender/client structures, and introduced a switching method for seamless playback and synchronization between the two heterogeneous systems. In Section 4, we analyze the test results of the system proposed in Section 3.

4. Test Result and Analysis

In Section 3, we described the system we designed to verify the proposed scheme. This section shows the results of the actual implementation and verification of the proposed system. We experimented on the Windows 7 operating system (OS) and implemented the system in C++ with Visual Studio 2013 as the integrated development environment (IDE). The HTTP server was the freeware 'hfs' (HTTP File Server). The Internet connection used in the experiment was 100 Mbps, the port number of the UDP server was 21002, and that of the HTTP server was arbitrarily set to 21004.

Fig. 9 and Fig. 10 below show the results of the hybrid broadcasting sender proposed in the previous section. The SLT information, with the MPD URL and urlType of the broadband DASH service produced by the Low-Level Signaling part, is shown in Fig. 9.

Fig. 10 shows an example of a generated MP table. Mark 1 in the MP table is the asset_id value, and the content of the MPU_timestamp_descriptor of the asset is the highlighted part. Since the tag value of the descriptor is 0x0001, it is an MPU_timestamp_descriptor; the mpu_sequence_number is "1", as shown at mark 2, and the value of mpu_presentation_time is shown at mark 3. Converting the value of "mpu_presentation_time" to UTC time gives "2017-04-10T08:06:41Z".

Fig. 11, Fig. 12 and Fig. 13 show screens reproduced sequentially in the execution window of the client, using the value of the MPU_timestamp_descriptor and the value of SupplementalProperty@value of the MPD.

Fig. 11 shows that MMTP packets are received via UDP, and that playback starts at "2017-04-10T08:06:41Z" in UTC time using the "mpu_presentation_time" indicated in the MPU_timestamp_descriptor.

The advertisement screen is shown in Fig. 12. To play this screen, the deadline time is calculated from SupplementalProperty@value of the MPD and the MPU_timestamp_descriptor through the above-mentioned inequation (1). The client starts downloading at the deadline time, and we confirmed that the content is played seamlessly.

It can be seen from Fig. 13 that playback of the advertisement received through broadband completes and the MMT content resumes seamlessly.

5. Conclusion

In this paper, we proposed service scenarios, and a verification system for them, which enable inserting DASH segments delivered over the Internet while using the MMT system over the broadcasting channel defined in ATSC 3.0. In this process, new urlType values were defined for the ATSC 3.0 standard for switching contents between the Internet and the broadcasting channel. In addition, we defined a new descriptor scheme and value for the MPD of the DASH standard. We verified one of the proposed service scenarios using HD video, but these service scenarios are also applicable to future broadcasting such as UHD services. Many follow-up studies can extend this work toward more immersive and interactive hybrid broadcasting services.

References

[1] R.J. Crinon, D. Bhat, D. Catapano, G. Thomas, J.T. Van Loo, and Gun Bang, "Data Broadcasting and Interactive Television," Proceedings of the IEEE, vol. 94, pp. 102-118, January 2006. Article (CrossRef Link).

[2] Hyun-Jeong Yim, Hee-Jin Lee, Soon-Bum Lim, Byungjun Bae, Heung Mook Kim, Namho Hur, "A study on 3D representation of declarative content for web platform based hybrid TV," in Proc. of the 2014 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB), June 25-27, 2014. Article (CrossRef Link).

[3] ISO/IEC 13818-1:2013, "Information technology--Generic coding of moving pictures and associated audio information: Systems--Part 1"

[4] ATSC Standard: A/107 - ATSC 2.0 Standard, June 15, 2015.

[5] Kugjin Yun, Won-Sik Cheong, Gwangsoon Lee, Xiaorui Li, and Kyuheon Kim, "Design of Synchronization and T-STD Model for 3DTV Service over Hybrid Networks," ETRI Journal, vol. 38, no. 5, October 2016. Article (CrossRef Link).

[6] J. Le Feuvre and Cyril Concolato, "Hybrid Broadcast Services using MPEG DASH," in Proc. of the Media Synchronization Workshop 2013, 2013.

[7] Youngkwon Lim, Shuichi Aoki, Imed Bouazizi, Jaeyeon Song, "New MPEG Transport Standard for Next Generation Hybrid Broadcasting System With IP," IEEE Transactions on Broadcasting, vol. 60, pp. 160-169, April 2014. Article (CrossRef Link).

[8] Kyungmo Park, Youngkwon Lim, Doug Young Suh, "Delivery of ATSC 3.0 Services With MPEG Media Transport Standard Considering Redistribution in MPEG-2 TS Format," IEEE Transactions on Broadcasting, vol. 62, pp.338-351, January 2016.

[9] MinKyu Park and Yong Han Kim, "An Overhead Comparison of MMT and MPEG-2 TS in Broadcast Services," JBE, vol. 21, no. 3, pp. 436-449, May 2016. Article (CrossRef Link).

[10] ATSC Candidate Standard: Signaling, Delivery, Synchronization, and Error Protection (A/331), September 21, 2016.

[11] Yiling Xu, Shaowei Xie, Hao Chen, Le Yang, and Jun Sun, "DASH and MMT and Their Applications in ATSC 3.0," http://www.cnki.net/kcms/detail/34.1294.TN.20160205.1525.004.html, published online February 2016. Article (CrossRef Link).

[12] Changqiao Xu, Zhuofeng Li, Jinglin Li, Hongke Zhang, and Gabriel-Miro Muntean, "Cross-Layer Fairness-Driven Concurrent Multipath Video Delivery Over Heterogeneous Wireless Networks," IEEE Transactions on Circuits and Systems for Video Technology, vol. 25, no. 7, July 2015. Article (CrossRef Link).

[13] Dan Wu, Liang Zhou, and Yueming Cai, "Social-Aware Rate Based Content Sharing Mode Selection for D2D Content Sharing Scenarios," IEEE Transactions on Multimedia, vol. PP, no. 99. Article (CrossRef Link).

[14] ISO/IEC 23008-1 (2nd edition draft, w15229), "Information technology--High efficiency coding and media delivery in heterogeneous environments--Part 1: MPEG media transport (MMT)".

[15] ISO/IEC 14496-12:2008, "Information technology--Coding of audiovisual objects--Part 12: ISO Base Media File Format".

[16] Information Technology--Procedures for the Operation of Object Identifier Registration Authorities: Generation of Universally Unique Identifiers and Their use in Object Identifiers, ITU-T Rec. X.667, October 2012.

[17] Information Technology--Dynamic Adaptive Streaming Over HTTP (DASH)--Part 1: Media Presentation Description and Segment Formats, document ISO/IEC 23009-1:2014, International Organization for Standardization (ISO), 2014.

[18] IETF RFC 5905, "Network Time Protocol Version 4: Protocol and Algorithm Specification," 2010.

Jeonho Kang (1), Dongjin Kang (1) and Kyuheon Kim (1)

(1) Kyunghee Univ., Korea

(E-mail: gaonam@khu.ac.kr, cpffh0729@khu.ac.kr, kyuheonkim@khu.ac.kr)

(*) Corresponding author: Kyuheon Kim

Received May 11, 2017; revised July 28, 2017; accepted August 17, 2017; published December 31, 2017

This work was supported by the Institute for Information & communications Technology Promotion (IITP) grant funded by the Korea government (MSIT) (No. 2015-0-00231, Development of generation and consumption of Jigsaw-liked Ultra-Wide Viewing Spacial Media).

https://doi.org/10.3837/tiis.2017.12.016

Jeonho Kang received the B.S. and M.S. degrees in electronics engineering from Kyung Hee University, Yongin, Korea, in 2010 and 2012, respectively. He is currently pursuing the Ph.D. degree in electronics engineering with Kyung Hee University, Korea. His current research interests include MPEG systems, digital broadcasting technologies and image processing.

Dongjin Kang received the B.S. degree in electronics engineering from Kyung Hee University, Yongin, Korea, in 2015. He is currently pursuing the Ph.D. degree in electronics engineering with Kyung Hee University, Korea. His current research interests include MPEG systems, digital broadcasting technologies and image processing.

Kyuheon Kim received the B.S. degree in electronic engineering from Hanyang University, Seoul, Korea, in 1989, and the M.Phil. and Ph.D. degrees in electrical and electronic engineering from the University of Newcastle upon Tyne, U.K., in 1996. From 1996 to 1997, he was with Sheffield University, U.K., as a Research Fellow. From 1997 to 2006, he was with the Electronics and Telecommunications Research Institute, Korea, as the Head of the Interactive Media Research Team, where he standardized and developed the T-DMB specification, and he served as the Head of the Korean delegation to the MPEG standard body from 2001 to 2005. Since 2006, he has conducted research at Kyung Hee University, Seoul, Korea. He has published numerous technical papers. His current research interests include interactive media processing, digital signal processing, and digital broadcasting technologies. Dr. Kim was a recipient of the Ministry Award from the Ministry of Information and Communication in 2003 and the Prime Minister Award in 2005.
Table 1. Common Bit Stream Syntax for LLS Tables

Syntax                    No. of Bits  Format

LLS_table() {
 LLS_table_id             8            uimsbf
 provider_id              8            uimsbf
 LLS_table_version        8            uimsbf
 switch (LLS_table_id) {
  case 0x01:
   SLT                    var
   break;
  case 0x02:
   RRT                    var
   break;
  ...
 }
}

Table 2. urlType values, including the newly defined types for hybrid
broadcasting services

urlType       Meaning

0             Reserved
1             URL of Service Layer Signaling Server
2             URL of ESG server
3             URL of Service Usage Data Gathering Report server
4             URL of individual advertisement
5             URL of additional media for 3D broadcast
6             URL of additional broadcasting
Other values  Reserved for future use

Table 3. MPU_timestamp_descriptor

Syntax                         No. of bits  Mnemonic

MPU_timestamp_descriptor() {
 descriptor_tag                16           uimsbf
 descriptor_length              8           uimsbf
 for (i=0; i<N; i++) {
  mpu_sequence_number          32           uimsbf
  mpu_presentation_time        64           uimsbf
 }
}

Table 4. SupplementalProperty@value attributes for the NTP time for
segment scheme

SupplementalProperty@value  Use  Description
parameter

NTP timestamp               M    The presentation time of the first
                                 frame of the first media segment,
                                 expressed as an NTP timestamp as
                                 defined in IETF RFC 5905.

Legend:
M=Mandatory, O=Optional