This repository was archived by the owner on Sep 10, 2022. It is now read-only.

Production Web Apps Performance Study Q4/16 - Q1/17 #1

@addyosmani


Goals

  • Understand the cost of JS Parse/Compile/FunctionCall times on apps
  • Discover how quickly apps become interactive on average mobile hardware
  • Learn how much JavaScript apps are shipping down on desktop + mobile

Sample information
6000+ production sites using one of React, Angular 1.0, Ember, Polymer, jQuery or Vue.js. Site URLs were obtained from a combination of Libscore.io, BigQuery, BuiltWith-style sites and framework wikis. Roughly 10% of each sample set was eyeballed to verify the framework was actually in use, and sets that proved unreliable were discarded from the final study.

URLs: https://docs.google.com/a/google.com/spreadsheets/d/1_gqtaEwjoJGbekgeEaYLbUyR4kcp5E7uZuMHYgLJjGY/edit?usp=sharing

Trivia: All in all, 85,000 WebPageTest results were generated as part of this study. Yipes.

Tools used in study
WebPageTest.org (with enhancements such as JS cost, TTI and aggregated V8 statistics added thanks to Pat Meenan as the project progressed), Catapult (an internal Google tool) and Chrome Tracing.

Summary observations

[Charts: metrics comparison, breakdowns, mobile vs. desktop study]

This data may be useful to developers as it shows:

  • Real, production apps using their favorite stacks can be much more expensive on mobile than they might think.
  • If you're choosing something off the shelf, you likely need to pay closer attention to parse times and time-to-interactive.
  • Some, but not all, apps are shipping larger bundles. Where this is the case, invest in code-splitting and in reducing how much JavaScript is shipped (see the sketch after this list).
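
As a rough illustration of the code-splitting point above, here is a minimal sketch using dynamic import(). The module name "./charts", the renderChart export and the element IDs are hypothetical, and the exact chunking behaviour depends on your bundler:

```ts
// Minimal code-splitting sketch: load a heavy module only when it is needed
// instead of shipping it in the main bundle. Bundlers such as webpack or
// Rollup turn the dynamic import() below into a separate chunk automatically.
// "./charts", renderChart and the element IDs are hypothetical names.
async function showAnalytics(container: HTMLElement): Promise<void> {
  const { renderChart } = await import("./charts"); // downloaded on demand
  renderChart(container);
}

// Only fetch the chunk when the user actually opens the analytics tab.
document.getElementById("analytics-tab")?.addEventListener("click", () => {
  const panel = document.getElementById("analytics-panel");
  if (panel) void showAnalytics(panel);
});
```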

Where are the medians and aggregates?

The primary goal of this study was to highlight trends across the different data sets available to me as a whole. Initially, I focused on summarizing the data at a per-framework level (e.g. React apps in set 1 exhibited characteristic A). After reviewing this with the Chrome team, we decided that presenting per-framework breakdowns was more susceptible to the takeaway being "oh, so I should just use framework X over Y because it is 2% better" rather than the important takeaway that parse/compile cost is a problem we all face.

To that end, the charts below were generated locally by fetching each of the WebPageTest reports for the data sets, iterating over a particular dimension (e.g. time-to-interactive, JS parse time) and computing the medians for the different sets, which were then plumbed into either Google Sheets or Numbers for charting. If you wish to recreate that setup yourself, you can grab the CSVs from the reports below.

Raw WebPageTest runs - Round 2 (January, 2017)

Raw WebPageTest runs - Round 1 - Older study (December, 2016)
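
If you want to reproduce the median step once a report's CSV has been downloaded, a rough sketch (not the script used in the study) might look like the following; the file name "react-mobile.csv" and the column header "TTI (ms)" are assumptions that will differ per WebPageTest export:

```ts
// Sketch of the local aggregation step: read a WebPageTest CSV export and
// compute the median of one metric column. This naive parser assumes fields
// contain no embedded commas; the file name and column header are examples.
import { readFileSync } from "node:fs";

function median(values: number[]): number {
  const sorted = [...values].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
}

function columnMedian(csvPath: string, column: string): number {
  const [header, ...rows] = readFileSync(csvPath, "utf8").trim().split("\n");
  const index = header.split(",").indexOf(column);
  if (index === -1) throw new Error(`Column "${column}" not found`);
  const values = rows
    .map((row) => Number(row.split(",")[index]))
    .filter((n) => Number.isFinite(n));
  return median(values);
}

// Example: median time-to-interactive for one framework's sample set.
console.log(columnMedian("react-mobile.csv", "TTI (ms)"));
```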

I put together this graphic when sharing the first version of this study internally. I decided to redo it because, at minimum, the network-throttling setup wasn't the same across the 2-3 web perf tooling systems used. This meant that while overall weight, time in script (parse/eval), FMP and load time were fine, the TTI numbers could not be concretely confirmed as 100% accurate. Instead, I redid the study once support for TTI was added to WebPageTest, and I'd trust those numbers (Round 2) a lot more.

[Graphic from the first internal share of the study]

Other data sets generated (Dec, 2016)

Note: many of the below data sets were generated before we installed Moto G4s in WebPageTest, so the Moto G1 had to be used instead. Some of the data sets also use earlier versions of the time-to-interactive metric and, in most cases, should not be directly compared to the latest data from 2017. This is historical data that's interesting and may be worth re-exploring where particular data sets didn't end up making it into the final study results.
