
An Experimental Platform for QoE Studies of WebRTC-based Multi-Party Video Communication

I. Introduction

The open source Web Real-Time Communication (WebRTC) project (1) has, since it was established by Google nearly 6 years ago, been used in numerous applications supporting e.g., video chat and file transfer. WebRTC is a royalty-free and open framework that enables high-quality real-time communication between different parties through a web browser supporting WebRTC (e.g., Google Chrome, Opera, Mozilla Firefox). From a user perspective, the strength and appeal of many WebRTC-based applications lie in their simplicity and in the fact that they can be accessed from any device connected to the internet, regardless of the type of connection and device used, as long as the browser supports WebRTC. Moreover, some of these browsers make it easy to collect application-level performance statistics, which provide insight into potential performance issues at different levels. Given the many affordances and possible experiences (e.g., different devices, different types of network access, many parties in one call) and the availability of the WebRTC performance statistics, WebRTC is a very interesting use case from a Quality of Experience (QoE) point of view. For example, in a multi-party scenario where the conditions are asymmetric, evaluating and predicting QoE is not straightforward. Even if only one party in a call suffers from reduced performance (e.g., a bad network connection or a device with constrained CPU), the other parties in the call may also experience severe quality impairments and interruptions.

It is currently not fully understood how such technical and non-technical factors play a role in this respect, and what this implies for users and their Quality of Experience (QoE), i.e., their degree of delight or annoyance when using the application [1]. This may largely be due to the fact that evaluating QoE in such complex and diverse settings is not straightforward. Yet, such insights are highly valuable, as they can guide the optimisation of QoE for WebRTC applications in different scenarios and circumstances. More concretely, more profound insights are needed into (i) the origins and occurrence of performance issues, (ii) the impairments which may hinder smooth communication and a pleasurable experience during a call from the user's point of view, and (iii) the ways in which users (may) react to these.

In this paper, we introduce an experimental platform that can help to gain deeper insights into the above issues. The platform can be used for conducting both controlled laboratory studies and longitudinal "living lab" empirical studies on QoE of WebRTC-based video communication, and for collecting different types of complementary data. It consists of a testbed that (i) enables real-time audiovisual conversations for up to eight parties, (ii) provides real-time logging of performance- and session-related statistics, and (iii) enables the collection of both explicit and implicit user feedback. It also contains a dashboard that enables easy data visualisation and that can be used to perform a number of in-depth data analytical approaches. In addition to introducing the experimental platform, we present results from a small-scale subjective study in which the testbed was used to conduct a controlled laboratory study on QoE in the context of WebRTC-based audiovisual communication. More specifically, we investigated whether and how packet loss influences experienced quality in a two-party call setting.

The paper is organized as follows: in Section II, we briefly point to relevant related work on WebRTC QoE and experimental testbeds for investigating QoE in the context of multi-party video communication. Next, we introduce the experimental platform, its features and different components in more detail. In Section IV, we present a use case (subjective user study, its set-up and obtained results) and demonstrate the affordances and applicability of the testbed. Finally, in Section V we summarize the paper and point to the limitations of the testbed as well as the opportunities for future research.

II. Related Work

An increasing number of studies have looked at QoE and factors bearing an influence on it in the context of multi-party video conferencing (also called telemeeting QoE). Broadly speaking, three classes of QoE influence factors have been defined in the literature [2]: human, system and context influence factors. It goes beyond the scope of this paper to comprehensively classify existing studies along these three categories; we therefore point to [3] for a useful overview and limit ourselves here to a number of examples. When it comes to human factors, the work presented in [4], for instance, indicated that the extent to which a participant is involved in a conversation matters. Similarly, in [5], participants were found to have a different sensitivity to delay depending on how actively they took part in a conversation. When it comes to contextual factors, the authors of [3] argue that e.g., the task and setting of a conversation (e.g., business vs. leisure) are likely to correspond with different quality thresholds and expectations. A useful example here is [6], which shares results from a series of studies. These results point, amongst others, to differences in QoE depending on the task and social context in which the conversational tests took place.

The primary focus in most studies published so far, however, has been on system-level influencing factors (e.g., different types of devices, video quality, network limitations and their implications) or on how to enable investigating to which extent they matter. A testbed for investigating QoE of video-mediated communication in controllable and reproducible lab settings was reported in [7]. This testbed allows modification and monitoring of network and media conditions in real time. Numerous studies have been conducted using the latter testbed. Schmitt et al. investigated in [5] the impact of asymmetric delay on QoE in video-mediated group discussions. The results indicated a negative impact of asymmetric delay on QoE. In [8], the relationship between QoE and different layouts, video qualities (low vs. high quality) and network limitations (packet loss) was studied. The results showed a slightly larger impact of packet loss distortions on users' QoE than a reduction of the video quality. In [9], the impact of different encoding bitrates was investigated. The authors claim that 1 Mb/s is sufficient as encoding bitrate for HD streams in video conferencing.

Some previous studies have also focused more specifically on QoE in the context of WebRTC-based video communication. In [10], one of the few studies comparing different types of devices in the context of mobile multi-party telemeetings via WebRTC, the authors pointed to the importance of device capabilities and processing power as key factors, especially in the context of WebRTC. In [11], Ammar et al. investigated the impact of performance-related potential video QoE killers for two-party video conferencing. The results showed that the performance indicators that could be used to identify severe video freezes are significant drops in data rate, non-zero packet loss ratios, non-zero Picture Loss Indication, and non-zero bucket delay.

III. Experimental Platform

In this section, we describe the main components of our experimental platform in more detail. More specifically, it consists of a testbed that (i) enables real-time audiovisual conversations in constrained conditions (e.g., by adding network impairments), (ii) provides real-time logging of objective and subjective data, and (iii) enables the easy capturing of explicit and implicit user feedback. Additionally, it encompasses a dashboard that makes it possible to visualise the different data sources in a comprehensive way and to perform a number of data analytical procedures.

A. WebRTC Testbed

1) Testbed Topology: Our WebRTC testbed consists of a WebRTC-based video communication application (2), a network emulator and a controller. The testbed enables video conferencing communication for up to eight clients and is designed to run in a controlled environment. Fig. 1 illustrates a simplified version of our testbed with a 3-party setup.

In the testbed, we use a research version of the application, which is nearly identical to its public version. The main differences are situated at the level of (i) the feedback form that is displayed when a caller leaves a session (a more fine-grained feedback form is used, and this form can be easily modified in the research version), and (ii) the link used to access the application (which is slightly different). Apart from these minor differences, the research version can be used in the same way and with the same look and feel as the public version of the application.

The network emulator controls the network conditions and provides network emulation functionality by emulating network impairments. It allows: (i) bandwidth throttling; (ii) adding delay and its variation (jitter); (iii) adding packet loss and emulating packet loss bursts, characterized hereafter by the Mean Loss Burst Size (mlbs). In our testbed, the network conditions are controlled by NetEm. Additionally, our testbed also allows CPU throttling, controlled on the client side.
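As an illustration of the impairments listed above, NetEm is typically driven through the Linux `tc` tool. The sketch below composes such a command line; it is not the authors' actual controller code, and the interface name "eth0" and the helper name are hypothetical.

```python
# Illustrative sketch (not the testbed's actual controller): composing a
# Linux tc/NetEm command for the impairment types the emulator supports.

def netem_command(iface, rate_kbit=None, delay_ms=None, jitter_ms=None,
                  loss_pct=None, loss_corr_pct=None):
    """Compose a `tc qdisc` command applying the given impairments."""
    parts = ["tc", "qdisc", "replace", "dev", iface, "root", "netem"]
    if rate_kbit is not None:            # (i) bandwidth throttling
        parts += ["rate", f"{rate_kbit}kbit"]
    if delay_ms is not None:             # (ii) fixed delay, optional jitter
        parts.append("delay")
        parts.append(f"{delay_ms}ms")
        if jitter_ms is not None:
            parts.append(f"{jitter_ms}ms")
    if loss_pct is not None:             # (iii) random loss, optional correlation
        parts.append("loss")
        parts.append(f"{loss_pct}%")
        if loss_corr_pct is not None:
            parts.append(f"{loss_corr_pct}%")
    return " ".join(parts)

# Example: 1 Mbit/s, 50 ms +/- 10 ms delay, 10% correlated loss
print(netem_command("eth0", rate_kbit=1000, delay_ms=50, jitter_ms=10,
                    loss_pct=10, loss_corr_pct=25))
```

In a deployment, the controller would execute such a command (e.g., over SSH) on the emulator node whenever a new condition starts.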

The controller remotely adjusts the network conditions of the running system in real time, which is required since modern networks typically exhibit fluctuating performance within a single session. It is also worth noting that the clocks at the clients' devices need to be properly synchronized via the Network Time Protocol (NTP) in order to take the temporal aspects of the collected data into account.

2) Data Acquisition: The developed testbed enables the collection of different types of data, including explicit (subjective) and implicit (behavioral) user feedback, and performance-related statistics. We now discuss these in more detail.

2.1) WebRTC Statistics in Google Chrome: The webrtc-internals (3) functionality enables observation of the performance of the WebRTC connections locally in the browser. The stats gathered here are based on the statistics specified in the WebRTC Application Programming Interfaces (APIs). The W3C WebRTC statistics define objects to observe Real-Time Transport Protocol (RTP) statistics for the different data channels and tracks. For more background on these statistics and how they can be classified, we refer to [11]. In webrtc-internals, a JSON file contains all PeerConnection objects defined in the W3C APIs as well as some Google-specific stats. The reports include several media and network statistics, such as (video) frame rate, packet loss, and bitrate. End users can view these statistics in real time, download them in a single file at any time during a session, or download them immediately after a session before the browser window is closed. The statistics are collected per browser, which means that in order to assess the performance of a multi-party session, the statistics from all browsers used in the session need to be recorded, downloaded, and (manually) combined and synchronized. Despite their inherent limitations, these statistics can be used to study QoE aspects of WebRTC services [12].
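To illustrate the combination step, the sketch below extracts one statistic from several webrtc-internals dumps. The dump layout used here is simplified and assumed: each stat entry is taken to store its samples as a JSON-encoded list under a "values" field.

```python
import json

# Sketch: merging one statistic across webrtc-internals dumps from several
# browsers. The dump structure is simplified/assumed for illustration.

def stat_series(dump, stat_suffix):
    """Return {(connection_id, stat_key): [samples]} for matching stats."""
    series = {}
    for conn_id, conn in dump.get("PeerConnections", {}).items():
        for key, entry in conn.get("stats", {}).items():
            if key.endswith(stat_suffix):
                series[(conn_id, key)] = json.loads(entry["values"])
    return series

def merge_sessions(dumps, stat_suffix):
    """Combine the matching series from all participants' dumps."""
    merged = {}
    for browser, dump in dumps.items():
        for (conn, key), vals in stat_series(dump, stat_suffix).items():
            merged[(browser, conn, key)] = vals
    return merged
```

Time alignment across browsers would additionally use the NTP-synchronized timestamps mentioned above.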

2.2) WebRTC Statistics from the Application Provider: The application provider recently launched a customizable WebRTC Analytics interface that provides real-time visualization of WebRTC service performance stats. This interface integrates the performance stats of all parties involved in a call (gathered with a sampling interval of 10 seconds). The stats files collected from Google Chrome and from this Analytics interface are both stored in JSON format, but with a different structure and, to some extent, different content.

2.3) Network performance: Network-oriented performance measurement tools, such as Wireshark, are complementary to WebRTC API stats and can be used for QoE troubleshooting and network diagnosis, etc. Compared to the data gathered from these network performance measurement tools, WebRTC API stats are application-oriented and more directly linked to how the user actually experiences the application and what is perceivable at the user side.

2.4) Explicit user feedback and questionnaires: Our testbed enables gathering of explicit QoE feedback immediately after a session ends (post-experience). The provided feedback can take different forms, e.g., a star rating or answering a short questionnaire. Questions can relate to different quality features and how they were perceived by the user, to the context of the call (e.g., professional vs. leisure context), to the affective state of the user, to the experienced impact on the interaction between the different conversation partners, etc. As mentioned, the requested feedback can be modified depending on the focus of the study. Both closed and open-ended questions are supported, as the questionnaires can be easily defined in an electronic format and displayed to the participants in an experiment. The feedback form at the end of a session can also be made an intrinsic part of an experiment, e.g., in the sense that the completion of a post-experience questionnaire can trigger the next step of the experiment. Additionally, further insights into the user's personal traits can be gained by integrating corresponding questions into the questionnaires before or after the experiments. Biases and attitudes, e.g., towards the application under investigation, as well as familiarity and criticality with online multi-party video communication in general and with the studied application in particular, should be considered.

2.5) Behavioral User Feedback: When users experience different types and gradations of quality impairments (e.g., video freezes, bad or no audio), they may react in different ways. Users' reactions may, for example, include: muting audio, disabling video, changing the video resolution, aborting the call, retrying to connect, etc. Our testbed offers the possibility of measuring and monitoring what is going on at the performance level and what is perceivable by users, and allows gaining insight into how users (actively or passively) react to a range of audiovisual impairments and into their related tolerance levels. Evaluating and understanding such active user reactions may help to improve WebRTC-based applications and to optimize QoE in constrained scenarios.
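Such reactions can be captured as a timestamped event log that is later aligned with the performance statistics. The following is a hypothetical sketch, not the testbed's actual logging code:

```python
import time

# Hypothetical sketch of behavioral-feedback capture: each user action is
# recorded with a timestamp (NTP-synchronized in the testbed) so that it
# can later be correlated with the performance statistics.

def make_event_logger(events, clock=time.time):
    """Return a log function that appends timestamped actions to `events`."""
    def log(action, detail=None):
        events.append({"t": clock(), "action": action, "detail": detail})
    return log

# Example usage during a session:
events = []
log = make_event_logger(events)
log("mute_audio")
log("change_resolution", "640x480")
```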

2.6) Audio and Video Recordings: The transmitted audio and video contents of each individual participant are recorded on the sending and on the receiving side. This allows us to analyze the conversation after the experiment from the perspective of each participant. We created a web interface tool for visualizing an experiment session and analyzing audio and video contents along with objective and subjective data. This tool is described in Section III-B. Furthermore, quality measurement tools for voice and audiovisual analysis based on human perception (e.g., PEXQ (4)) can also be used in evaluating the recorded audio and video contents. These tools provide a perceptual evaluation of audio / video quality for Mean Opinion Score (MOS) scoring of such telemeeting systems and applications.

2.7) Integration of additional methods and tools to capture user feedback: As these different data types are gathered in real-time, and can easily be visualised using the dashboard, the experimenter can easily have a look at the gathered data and ask for clarifications or additional information after the experiment, for instance in an individual debriefing interview for each participant or focus group discussion. Moreover, the testbed can easily be augmented with tools allowing e.g., the collection of physiological data, which can be gathered continuously and unobtrusively and which may provide more detailed insights into the user's affective state.

B. WebRTC Dashboard

The WebRTC dashboard consists of a web interface tool that makes it possible to visualize, in an easy and straightforward way, the most relevant factors that may influence the user's QoE. The procedure is as follows: a real-time audiovisual conversation using the research version of the application takes place, with Google Chrome as the browser. Then, real-time logging of performance- and session-related statistics of the session is performed. For this purpose, we use both Google Chrome's webrtc-internals functionality and the provider's WebRTC Analytics interface. Additionally, subjective user feedback (collected at the end of a session), implicit and behavioral user feedback (collected during a session), as well as audio and video recordings, are collected. Finally, the developed dashboard replays the collected data and exhibits them in a graphical and interactive way. See Fig. 2 for an illustration.

The WebRTC dashboard is composed of two main panels: Conversation Handler and WebRTC stats. The Conversation Handler is responsible for handling WebRTC-based conversations. It allows users to choose and manage the settings of a conversation, and to customize the presentation of the collected data in the WebRTC stats panel. The WebRTC stats panel provides performance results and offers both an objective and a subjective evaluation of the audio and video contents (5).

IV. Use Case

In this section, we report on the set-up and results of a small-scale subjective study on the impact of packet loss on perceived quality in the context of a two-party audiovisual conversation using the application. The study took place in a controlled lab setting and under different network conditions, enabled by the WebRTC testbed. The features and the practical applicability of our experimental platform are further illustrated here.

A. General set-up and methodology

The experiment consisted of two parts. First, the participants were welcomed by the experimenter, who briefed them about the scope of the study and explained the procedure of the experiment in detail. The participants received instructions about their task and about the post-session questionnaire to fill in. Next, they were asked to sign a written consent form and were invited to fill in a pre-questionnaire with some general information about themselves and their familiarity with multi-party video communication. Before moving to the test rooms, they were given the possibility to ask any remaining questions, and the experimenter provided clarifications if needed. For the second part (and the start of the actual experiment), they were escorted to different test rooms. They were seated at a viewing distance of around 70 cm from the monitor, as recommended by ITU-T P.910 [13]. Participants were then asked to begin the experimental task (see further), and they were exposed to the first of five experimental conditions (see Section IV-D for details).

The whole procedure had a total duration of one hour, with an active QoE testing part of about 30 min. The QoE testing part contained 5 conditions, with a 3 to 4 min break in-between the conditions. Each test condition consisted of a two-party video conferencing session using the application, with a duration of 4 min. After each session, participants were asked to rate the perceived quality using a 5-point absolute category rating (ACR) scale with the following values: (1) bad; (2) poor; (3) fair; (4) good; (5) excellent. More precisely, we asked the following questions, which were answered within 2 min after each session:

Q1: How would you rate the overall audiovisual quality of the session (the overall combined audio and video quality)? ACR scale: 5-Excellent; 4-Good; 3-Fair; 2-Poor; 1-Bad.

Q2: How would you rate the video quality of the session? ACR scale: 5-Excellent; 4-Good; 3-Fair; 2-Poor; 1-Bad.

Q3: How would you rate the audio quality of the session? ACR scale: 5-Excellent; 4-Good; 3-Fair; 2-Poor; 1-Bad.

Q4: Which quality-related issues have you experienced during the session? Several answers are possible. Answer: Audio problems: bad audio or no audio at all; Video problems: bad video or no video at all; Bad synchronization between audio and video; Not applicable (never experienced any problems); Other, please specify.

Q5: Have you considered quitting the session because of quality-related issues? Answer: Yes; No.

Q6: Did you perceive any reduction in your ability to interact with the other party during the session? Answer: Yes (If Yes, specify the problem if you could); No.

In total, 30 test conversations took place (6 groups and 5 conditions per group, see further). The subjective assessment followed the test procedure described in ITU-T Recommendation P.1301 [14] as closely as possible.
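The per-condition ACR scores from Q1-Q3 can be summarized as a Mean Opinion Score (MOS) with a confidence interval, as reported in Section IV-F. A minimal sketch, where the caller supplies the Student-t quantile matching the sample size:

```python
import math
import statistics

# Minimal sketch: Mean Opinion Score (MOS) and t-based 95% confidence
# half-interval from a list of ACR ratings (1 = bad ... 5 = excellent).
# The t-value must match the degrees of freedom (n - 1).

def mos_ci(ratings, t_value):
    n = len(ratings)
    mos = statistics.mean(ratings)
    half = t_value * statistics.stdev(ratings) / math.sqrt(n)
    return mos, half

# Example: 12 ratings -> 11 degrees of freedom, t(0.975, 11) ~= 2.201
ratings = [4, 5, 4, 3, 4, 4, 5, 3, 4, 4, 3, 5]
mos, half = mos_ci(ratings, t_value=2.201)
```

The ratings list here is synthetic, for illustration only; it is not data from the study.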

B. Test Environment

The experiment was conducted in an office environment. The physical parameters during the testing sessions were slightly different for different participants, as the conversation partners were located in separate rooms. However, the two rooms that were used were selected so that they had very similar dimensions, lighting and background conditions. Both rooms were equipped with identical desktop computers, displays, webcams and headsets (see Table I for details).

C. Test participants

In total, we collected data from 12 participants (2 female and 10 male), aged between 22 and 41 years (mean 27.67, median 25.5), recruited among the employees and students of the Norwegian University of Science and Technology (NTNU), Norway. All participants reported frequent use of online video communication: 4 out of 12 participants use it 2-3 times per month, and the remaining 8 use online video communication services on a weekly basis (3 of them even daily). Generally speaking, all participants indicated in the pre-questionnaire that good audio quality is important to very important to them. For video quality, the expectations were still high, yet slightly lower than for audio; 5 out of 12 participants indicated being rather indifferent to the video quality. Participants were divided into six groups of 2 members each to take part in the test sessions. Each participant was paired with someone they knew, so that unfamiliarity with the conversation partner would not be an additional influence factor to consider.

D. Experimental Conditions

To study the impact of packet loss on QoE ratings, we consider the following five conditions:

C1: No packet loss (plr = 0% and mlbs = 0 packets)

C2: plr = 10% and mlbs = 1.5 packets

C3: plr = 10% and mlbs = 3 packets

C4: plr = 20% and mlbs = 1.5 packets

C5: plr = 20% and mlbs = 3 packets

where plr = packet loss ratio and mlbs = mean loss burst size.
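A common way to realize a target plr and mlbs in NetEm is through its simple Gilbert ("gemodel") loss model. Assuming a two-state model in which the bad state always drops packets, the transition probabilities follow directly from the two targets. This mapping is our illustration of one possible configuration, not necessarily the authors' exact set-up:

```python
# Sketch: mapping target packet loss ratio (plr) and mean loss burst size
# (mlbs) to the transition probabilities of a simple two-state Gilbert loss
# model (bad state always drops), as offered e.g. by NetEm's "loss gemodel".

def gilbert_params(plr, mlbs):
    """Return (p, r): p = P(good -> bad), r = P(bad -> good).

    Mean loss burst size = 1/r; stationary loss ratio = p / (p + r).
    """
    r = 1.0 / mlbs
    p = plr * r / (1.0 - plr)
    return p, r

# Condition C5: plr = 20%, mlbs = 3 packets
p, r = gilbert_params(0.20, 3.0)
```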

The controlled variables (also called independent variables) are listed in Table II. Packet losses were remotely introduced (see Fig. 1) 30 seconds after the beginning of each session and lasted for three minutes, followed by 30 seconds of good quality (no packet losses), which was needed to be able to collect the performance statistics via the webrtc-internals functionality.

In this experiment, a symmetric call condition was considered, so that every participant would see similar distortions. Note that the five experimental conditions were randomly ordered for each of the six groups, so that potential confounding effects due to a specific order of the experimental conditions could be avoided.
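The per-group randomization of conditions can be made reproducible with a seeded shuffle; a minimal sketch (the seed value is arbitrary and illustrative):

```python
import random

# Minimal sketch: draw a random, reproducible order of the five
# experimental conditions for each of the six groups.

def condition_orders(conditions, n_groups, seed=2018):
    rng = random.Random(seed)
    # rng.sample returns a new shuffled list without modifying the input
    return [rng.sample(conditions, len(conditions)) for _ in range(n_groups)]

orders = condition_orders(["C1", "C2", "C3", "C4", "C5"], n_groups=6)
```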

E. Conversation task

To assess the conversational quality of multi-party video conferencing, it is necessary to involve conversation partners in an appropriate conversation task. Within our lab experiment, we employed the survival task from ITU-T P.1301 [14], adapted for two-party video conferencing. The goal of the survival task is to explore the performance characteristics of a decision-making group. Participants were invited to imagine themselves in a survival situation following an accident (space rocket, plane, etc.). They were given a list of twelve items (shown with photographs, to prevent participants from spending too much time looking at their sheet), which was divided into two different 6-item lists, one per participant. Participants had to share their items, discuss them together and come to an agreement on the six items to be selected (see ITU-T P.1301 [14] for details).

F. Experimental results

Fig. 3 depicts the average quality ratings with 95% confidence intervals for the three dependent variables (audiovisual, audio and video quality) in the five experimental conditions. It shows that both plr and mlbs seem to impact the overall audiovisual quality as evaluated by the participants. Lower audiovisual quality ratings were found when plr was set to a large value, i.e., plr = 20%. This pattern was also reflected in the answers to Q5 and Q6 (see Section IV-A): in experimental conditions 4 and 5 (both with plr = 20%), the percentages of participants that considered quitting the session were 17% and 33%, respectively (compared to 0% in conditions 1 and 3 and 8% in condition 2). Similarly, one out of four participants felt that it was more difficult to interact with the other party in condition 4. For condition 5, this percentage increased even further (42%). The perceived audio quality ratings, on the other hand, decreased slowly as either plr or mlbs increased. The only case with clearly poorer audio quality ratings (average MOS of 2.7) was plr = 20% and mlbs = 3. Finally, the perceived video quality ratings were strongly impacted by increasing values of plr and mlbs. It is also worth noting that smaller values of mlbs (a high frequency of loss bursts with a small number of consecutive packet losses per burst) led to lower video quality ratings. More specifically, for the same plr value, for instance plr = 10%, a high frequency of loss bursts degraded the video quality more than larger burst sizes (a large number of consecutive packet losses per burst) occurring at a lower frequency.

V. Conclusion

In this paper, we introduced an experimental platform for conducting research on Quality of Experience (QoE) and experienced quality in the context of WebRTC-based video communication. The platform consists of a testbed, which enables the set-up and running of QoE studies in a controlled environment with up to 8 parties in one call, and a dashboard, which enables easy data visualization and in-depth data analytics. The use of our experimental platform was demonstrated through a small-scale pilot test in a controlled lab setting. The focus was on studying the impact of packet loss on quality ratings in two-party WebRTC-based video communication. The results indicated that the experienced audio quality was worst in the test scenario with a packet loss ratio of 20% and a mean loss burst size of 3, whereas the experienced video quality was worst when participants were exposed to a packet loss ratio of 20% and a mean loss burst size of 1.5. We showed that the platform allows us to control and directly manipulate several system- and, in particular, network-related parameters. As the experimental platform also provides real-time logging of performance- and session-related statistics, and as it enables the easy collection of explicit and implicit user feedback, it is also a very promising tool for data gathering outside of the lab, in test users' natural environments. More specifically, it enables running longitudinal user studies, which could focus on how performance issues and short- and longer-term QoE ratings are related, and on how they may be interlinked with other technical and non-technical influence factors that are typically hard (or impossible) to investigate in a controlled lab environment alone.

Acknowledgments

The authors gratefully thank Eirik Fosser, Marianne Melhoos and Lars Nedberg for their Master's thesis work and contributions to the project, as well as the involved teams for their input and support.

References

[1] K. Brunnstrom, S. A. Beker, K. De Moor, A. Dooms, S. Egger, M.-N. Garcia, T. Hossfeld, S. Jumisko-Pyykko, C. Keimel, M.-C. Larabi, B. Lawlor, P. Le Callet, S. Moller, F. Pereira, M. Pereira, A. Perkis, J. Pibernik, A. Pinheiro, A. Raake, P. Reichl, U. Reiter, R. Schatz, P. Schelkens, L. Skorin-Kapov, D. Strohmeier, C. Timmerer, M. Varela, I. Wechsung, J. You, and A. Zgank, "Qualinet White Paper on Definitions of Quality of Experience," output from the fifth Qualinet meeting, Novi Sad, Mar. 2013.

[2] U. Reiter, K. Brunnstrom, K. De Moor, M.-C. Larabi, M. Pereira, A. Pinheiro, J. You, and A. Zgank, Factors Influencing Quality of Experience. Cham: Springer International Publishing, 2014, pp. 55-72.

[3] J. Skowronek, K. Schoenenberg, and G. Berndtsson, "Multimedia Conferencing and Telemeetings," in Quality of Experience: Advanced Concepts, Applications, and Methods. Springer, 2014.

[4] J. Skowronek, F. Schiffner, and A. Raake, "On the influence of involvement on the quality of multiparty conferencing," in 4th International Workshop on Perceptual Quality of Systems, Vienna, 2013, pp. 25-30.

[5] M. Schmitt, S. Gunkel, P. Cesar, and D. Bulterman, "Asymmetric Delay in Video-Mediated Group Discussions," in International Workshop on Quality of Multimedia Experience (QoMEX), Sep 2014, pp. 19-24.

[6] G. Berndtsson, M. Folkesson, and V. Kulyk, "Subjective quality assessment of video conferences and telemeetings," in Packet Video Workshop (PV), 2012 19th International, May 2012, pp. 25-30.

[7] M. Schmitt, S. Gunkel, P. Cesar, and P. Hughes, "A QoE Testbed for Socially-aware Video-mediated Group Communication," in Proceedings of the 2Nd International Workshop on Socially-aware Multimedia, 2013, pp. 37-42.

[8] S. Gunkel, M. Schmitt, and P. Cesar, "A QoE study of different stream and layout configurations in video conferencing under limited network conditions," in Quality of Multimedia Experience (QoMEX), 2015 Seventh International Workshop on, May 2015, pp. 1-6.

[9] M. Schmitt, J. Redi, P. Cesar, and D. Bulterman, "1mbps is enough: Video quality and individual idiosyncrasies in multiparty hd video-conferencing," in 2016 Eighth International Conference on Quality of Multimedia Experience (QoMEX), June 2016, pp. 1-6.

[10] D. Vucic and L. Skorin-Kapov, "The impact of mobile device factors on QoE for multi-party video conferencing via WebRTC," in Telecommunications (ConTEL), 2015 13th International Conference on, July 2015, pp. 1-8.

[11] D. Ammar, K. De Moor, M. Xie, M. Fiedler, and P. Heegaard, "Video QoE Killer and Performance Statistics in WebRTC-based Video Communication," in 2016 IEEE Sixth International Conference on Communications and Electronics (ICCE), July 2016, pp. 429-436.

[12] D. Ammar, P. Heegaard, M. Xie, K. De Moor, and M. Fiedler, "Revealing the Dark Side of WebRTC Statistics Collected by Google Chrome," in 2016 Eighth International Conference on Quality of Multimedia Experience (QoMEX), June 2016.

[13] "ITU-T P.910: Subjective video quality assessment methods for multimedia applications," April 2008.

[14] "ITU-T P.1301: Subjective Quality Evaluation of Audio and Audiovisual Multiparty Telemeetings," July 2012.

Doreid Ammar (*), Katrien De Moor ([dagger]), Poul Heegaard ([dagger])

(*) Emlyon Business School, Ecully, France

([dagger]) NTNU, Norwegian University of Science and Technology, Trondheim, Norway {katrien.demoor|poul.heegaard}



(3) Chrome: chrome://webrtc-internals, and in Opera: opera://webrtc-internals

(4) PEXQ:

(5) The dashboard is available at:
Table I: Technical specifications

Hardware  Description

Desktops  HP Compaq Elite 8100 SFF--Intel[R] Core[TM] i7
Displays  HP LP2465 24" 1920 x 1200 (60Hz)
Headsets  Koss SB45
Webcams   Microsoft LifeCam Studio

Table II: Controlled variables

Controlled  Description           Levels

plr         Packet loss ratio (%)           0 (none), 10, 20
mlbs        Mean Loss Burst Size (packets)  0 (none), 1.5, 3
COPYRIGHT 2018 The Society of Digital Information and Wireless Communications

Article Details
Title Annotation: quality of experience; web real-time communication
Author: Ammar, Doreid; De Moor, Katrien; Heegaard, Poul
Publication: International Journal of New Computer Architectures and Their Applications
Article Type: Report
Date: Apr 1, 2018
