The World Wide Web is still among the most prominent Internet applications. While the Web landscape has been in perpetual movement since the very beginning, these last few years have witnessed some noteworthy proposals such as SPDY, HTTP/2 and QUIC, which could disrupt the Web status quo and profoundly reshape the protocol family at the application layer. We are working toward the definition and assessment of objective metrics (such as SpeedIndex, Above-the-Fold and variants) related to the quality of user experience [PAM-18], [SIGCOMM-QoE-16], [INFOCOM-IC-16], [GLOBECOM-03], and on gathering and analyzing subjective user feedback [PAM-17], [PAM-18].
In all cases, we make our code and dataset available below!
- We are proud that our [PAM-18] work has won the Best Dataset award!
- We are proud that our [SIGCOMM-QoE-16] work has received the Best Paper award and was reprinted in ACM SIGCOMM Comput. Commun. Rev.!
- We were proud that our [INFOCOM-IC-16] work was a finalist at the IEEE INFOCOM 2016 Innovation Challenge!
- Our latest effort has been accepted at QoMEX 2018! More info coming soon.
- We have devised an approximation of the Above-the-Fold (ATF) metric, published in [PAM-18].
- We have released an implementation of ATF as a Chrome plugin on GitHub and on the Chrome Web Store (see Code section).
- We have released the roughly 9,000 MOS points used in [PAM-18] to verify the match with user QoE (see Dataset section).
- We are preparing a demo; for further details please see the Paper section, or have a look at the following video for a quick idea!
- Our work on studying HTTP/1.1 vs HTTP/2 with objective QoE and subjective MOS metrics appeared at [PAM-17].
- Our work on defining approximated SpeedIndex metrics appeared at [SIGCOMM-QoE-16].
SpeedIndex approximations: ByteIndex and ObjectIndex
In particular, we propose in [SIGCOMM-QoE-16] two replacement metrics for Google's SpeedIndex, namely the ObjectIndex and the ByteIndex, which are structurally similar to the SpeedIndex but tremendously simpler to compute. In a nutshell, we argue that, to some extent, the objects (or bytes) received by the browser (or the network card) can provide a first approximation of the visual completeness of the rendering process. We test the SpeedIndex, ObjectIndex and ByteIndex (along with other metrics) on the Alexa top-100 dataset, finding high levels of correlation among these metrics, as shown in the arc diagram.
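To make the idea concrete, here is a minimal sketch of a ByteIndex-style computation: the SpeedIndex integral of visual incompleteness over time, with visual completeness replaced by the fraction of page bytes received. The function name and the sampled (time, bytes) representation are our own illustration, not the paper's reference code.

```python
# Illustrative ByteIndex sketch: integrate (1 - bytes(t)/total_bytes)
# over the page load, using the trapezoidal rule on sampled progress.
# Names and data layout here are assumptions for illustration only.

def byte_index(samples):
    """samples: time-sorted list of (time_s, cumulative_bytes) pairs.
    Returns the area under the 'incompleteness' curve (lower = faster
    perceived progress), analogous to Google's SpeedIndex integral."""
    total = samples[-1][1]
    area = 0.0
    for (t0, b0), (t1, b1) in zip(samples, samples[1:]):
        f0 = 1.0 - b0 / total  # incompleteness at interval start
        f1 = 1.0 - b1 / total  # incompleteness at interval end
        area += (t1 - t0) * (f0 + f1) / 2.0
    return area

# Toy waterfall: half the page bytes arrive at t=1s, the rest at t=2s.
progress = [(0.0, 0), (1.0, 500), (2.0, 1000)]
print(byte_index(progress))  # -> 1.0
```

The ObjectIndex follows the same pattern, replacing cumulative bytes with the cumulative count of completed objects.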
- Here you can download the datasets of our [SIGCOMM-QoE-16] paper
Approximate Above the Fold (AATF)
Arguing that the SpeedIndex has limitations in faithfully representing user QoE, we more recently proposed [PAM-18] a simple way to approximate Google's Above-the-Fold (ATF) time. Coupling ATF knowledge with SpeedIndex- and ByteIndex-like metrics yields a closer match to user quality of experience as measured with subjective tests (see the MOS section below). As in the example, out of the 154 images in the page only 8 are directly visible "above the fold": the time it takes to download and render these images (5.37s) is much lower than the time it takes to download all the data in the page (16.11s), yet the user will at first notice only the above-the-fold content.
We have designed and implemented an approximation of Google’s Above-the-Fold (ATF) metric.
- You can find a description of the approximation in [PAM-18], along with a thorough testing against ITU-T and IQX models, and data-driven models built on the dataset using standard machine-learning techniques.
- The code, from which a screenshot is shown above, is available on GitHub and on the Chrome Web Store (see Code section).
User Mean Opinion Score (MOS)
In [PAM-17] we engineered a testbed and collected over 4,000 subjective MOS points to contrast HTTP/1.1 vs HTTP/2 from a user viewpoint. We have complemented this measurement campaign for our Above-the-Fold (ATF) work, collecting over 9,000 MOS points [PAM-18]. A very detailed description of the dataset is available in the (aptly named) "The Good, The Bad and The Ugly" technical report [DIRECTORSCUT-16] (however, should you use the dataset, please cite one of our [PAM-17], [PAM-18] papers, thanks!).
- We are preparing a Jupyter notebook with code to help you get started on this dataset. In the meantime, you can grab the datasets below!
- We previously released the dataset of our [PAM-17] paper, accounting for over 4,000 MOS points, but as we have been collecting more MOS points, we suggest grabbing the latest [PAM-18] version.
- [466KB compressed, 24MB raw] webmos-pam17.arff.gz (md5sum: ff286e98d3d9bc6524ef3929928044eb)
- [1.5MB compressed, 5MB raw] webmos-9k.arff.gz a dataset comprising over 9,000 MOS points
- [486KB compressed, 1.9MB raw] webmos-9k-sanitized.arff.gz the sanitized dataset comprising over 3,000 MOS points, that we describe and use in [PAM-18] and [QoMEX-18]
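To get started with the .arff.gz files, the stand-alone sketch below shows one way to pull attribute names and data rows out of ARFF text using only the standard library. It assumes simple comma-separated @data rows; for the real files a full-featured parser (e.g. scipy.io.arff or liac-arff) may be preferable.

```python
# Minimal, illustrative ARFF reader for the webmos-*.arff.gz datasets.
# The parsing below is a simplified assumption about the file layout
# (plain @attribute declarations, comma-separated @data rows).

import gzip

def read_arff(lines):
    """Collect attribute names and data rows from ARFF text lines."""
    attrs, rows, in_data = [], [], False
    for raw in lines:
        line = raw.strip()
        if not line or line.startswith('%'):   # skip blanks and comments
            continue
        low = line.lower()
        if low.startswith('@attribute'):
            attrs.append(line.split()[1])
        elif low.startswith('@data'):
            in_data = True
        elif in_data:
            rows.append(line.split(','))
    return attrs, rows

# On the real (gzipped) dataset, e.g.:
# with gzip.open("webmos-9k.arff.gz", "rt") as f:
#     attrs, rows = read_arff(f)

sample = """@relation webmos
@attribute page string
@attribute mos numeric
@data
example.com,4.5
example.org,3.0""".splitlines()
attrs, rows = read_arff(sample)
print(attrs, len(rows))  # -> ['page', 'mos'] 2
```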
IEEE International Conference on Computer Communications (INFOCOM'18), Honolulu, United States, 2018.
International Conference on Passive and Active Network Measurement (PAM), Berlin, Germany, 2018 (keywords: newnet; webqoe).
Passive and Active Measurements (PAM), 2017 (keywords: QoE; Quality of Experience; DOM; onLoad; TTFB; TTFP; Above-the-fold; SpeedIndex; ByteIndex; ObjectIndex; MOS).
ACM SIGCOMM Workshop on QoE-based Analysis and Management of Data Communication Networks (Internet-QoE 2016), 2016 (keywords: Quality of Experience; DOM; onLoad; TTFB; TTFP; Above-the-fold; SpeedIndex; ByteIndex; ObjectIndex; MOS. Note: selected as Best Paper of the workshop for reprint in ACM SIGCOMM Comput. Commun. Rev.).