The Process I Followed to Optimise the Performance of a React App

A case study of improving our React Progressive Web App (PWA)

In the last quarter, I started working on a new team with different business goals. It was a great team, really fun to work with, and we did a lot of amazing work that I’m proud of.

One of the projects we managed over the last four months was web-performance improvements for the application we were working on. This article intends to share the improvements we made and the things we learned throughout this process.

Context

Before we start a conversation about web performance, it’s important to show the context behind this work.

The first thing I want to mention is that the application we started working on has a two-year-old codebase. It’s a React PWA using webpack 3, Babel 6, React Redux 5, and so on, with a small number of hooks and mostly class components.

The project didn’t have real ownership by a team or a responsible engineer to take care of the codebase. Different teams needed to add features here and there in the PWA but didn’t actually own it. The effect: the codebase grew in features, bugs, and tech debt, but wasn’t improved or refactored.

Given this context, we already had a lot of room to improve the codebase. This project became our focus and our responsibility as well.

My colleague and I became service owners for this project. A service owner is someone (or two people, in this case) who acts as the focal point to clear up doubts and manage tech debt, issues, bugs, and so on. Basically, it’s someone who takes care of a project.

Our team was focused on providing the best experience for house owners (landlords) — to ease their understanding of the product, register new houses or apartments, and manage the rental and sale of their houses.

Together with the product manager and designer, we created a roadmap of features we wanted to ship that quarter. At the same time, performance is a critical piece of the puzzle to provide a good experience for users. We can start with the basic two metrics: page-load time and time to interactivity. There’s a correlation (and sometimes causality) between these metrics and user experience.

We also wanted to ship A/B tests to make sure performance wasn’t a variable that could affect the results of these tests. Basically, we wanted to prevent performance regressions from influencing the tests (but we needed metrics — we’ll talk about that soon).

Our team wasn’t a performance-expert team, but the company has a team called Core UX that’s mainly focused on web performance and had spent the first three quarters of 2020 working on frontend performance.

The Process

Our first idea was to understand which metrics we wanted to track and take care of, and to do discovery work to understand potential issues and how we could improve the user experience and app performance. Along with that, we also wanted an open channel with the web-performance team so we could learn from them, ask questions, and find performance issues to fix.

So we opened a new Slack channel to make this easier. We also had a biweekly meeting with them where we could show what we were working on, see what they were working on, discuss possibilities for improving performance, and have time for questions and open discussions.

With this open relationship, we were able to learn faster and prioritize low-hanging-fruit tasks to get faster results with little or no effort. We’ll discuss this in depth later in the Performance Improvements section.

The whole process was documented: the meetings, our learning, our discoveries, and the performance fixes.

Metrics and Measurements

We had the first discussion about the metrics we wanted to track, and my team started to learn more about them. For those of us who weren’t familiar with them, at first it felt like a bunch of acronyms we didn’t truly understand. FCP, LCP, FID? What’s that?

To understand these terms, I like to first understand the user experience metrics, because it’s all linked together.

So for user experience metrics, I like this user-centric performance metrics blog post by Google that defines these metrics:

Perceived load speed: how quickly a page can load and render all of its visual elements to the screen.

Load & runtime responsiveness: how quickly a page can load and execute any required JavaScript code in order for components to respond quickly to user interaction.

Visual stability: do elements on the page shift in ways that users don’t expect and potentially interfere with their interactions?

Smoothness: do transitions and animations render at a consistent frame rate and flow fluidly from one state to the next?

I like this because it’s very relatable. As website users, we can understand these metrics (and the possible frustration when using a poor-performance website).

This is also interesting because we can map the user-centric metrics to the performance metrics we commonly see.

First contentful paint (FCP): measures the time from when the page starts loading to when any part of the page’s content is rendered on the screen.

Largest contentful paint (LCP): measures the time from when the page starts loading to when the largest text block or image element is rendered on the screen.

First input delay (FID): measures the time from when a user first interacts with your site (i.e. when they click a link, tap a button, or use a custom, JavaScript-powered control) to the time when the browser is actually able to respond to that interaction.

Time to Interactive (TTI): measures the time from when the page starts loading to when it’s visually rendered, its initial scripts (if any) have loaded, and it’s capable of reliably responding to user input quickly.

Total blocking time (TBT): measures the total amount of time between FCP and TTI where the main thread was blocked for long enough to prevent input responsiveness.

Cumulative layout shift (CLS): measures the cumulative score of all unexpected layout shifts that occur between when the page starts loading and when its lifecycle state changes to hidden.

I built a simple table to map the performance metrics to the user-centric metrics so it’s easier to understand each acronym.

First Contentful Paint (FCP)    -> Perceived load speed
Largest Contentful Paint (LCP)  -> Perceived load speed
First Input Delay (FID)         -> Load & runtime responsiveness
Time to Interactive (TTI)       -> Load & runtime responsiveness
Total Blocking Time (TBT)       -> Load & runtime responsiveness
Cumulative Layout Shift (CLS)   -> Visual stability

As I said earlier, this relation is very interesting and makes us focus not only on bits and bytes but also on the user experience as a whole.

Tooling, Auditing, and Knowledge Sharing

After we got a better understanding of user experience and performance metrics, we wanted to start tracking them. There’s a difference between lab and field metrics. According to Google:

  • Lab metrics: Using tools to simulate a page load in a consistent, controlled environment.
  • Field metrics: On real users actually loading and interacting with the page.

Lab Metrics

For the lab metrics, we set up Lighthouse in our CI using Lighthouse CI. So for every pull request (PR) opened, we run Lighthouse to gather performance-related data and lock PRs until we fix the performance issue.

With this tool, we can validate various aspects of the PWA (accessibility, SEO, best practices, and performance) and also add assertions that break PRs when a metric surpasses the budget threshold we set.

For example, we can add assertions related to JavaScript and image sizes (in bytes):
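Here’s a minimal sketch of what those assertions can look like in a Lighthouse CI config file (the thresholds are illustrative, not the values we used):

    // lighthouserc.js: a minimal sketch of Lighthouse CI budget assertions.
    module.exports = {
      ci: {
        assert: {
          assertions: {
            // Fail the run if the total JavaScript payload exceeds ~170 KB.
            'resource-summary:script:size': ['error', { maxNumericValue: 170000 }],
            // Fail the run if the total image payload exceeds ~100 KB.
            'resource-summary:image:size': ['error', { maxNumericValue: 100000 }],
          },
        },
      },
    };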

This JavaScript object is part of the configuration we can use to gather different information about performance. To better understand the configuration for Lighthouse CI, you can read the documentation.

Another very cool tool we’re using for lab metrics is SpeedCurve. It’s super simple to set up and start gathering data. This tool works better for pages that don’t require login: we add the URL of the website and, based on the page load and interaction, it collects performance metrics.

The SpeedCurve dashboard is very flexible in its ability to show (or hide) the metrics we want to focus on. In our case, we wanted to see the evolution of the JavaScript total size, First Contentful Paint, Largest Contentful Paint, Cumulative Layout Shift, JS Total Blocking Time, Backend (TTFB) Time, and the Lighthouse Performance Score.

This is working very well for our landing and home pages.

The last tool we set up is an in-house tool the performance team built. This is a tool to analyze the app bundles, and it has three main features now:

  • Bundle analyze report: Collects and saves the bundle-analyzer HTML results.
  • Bundle budgets: Sets up a budget configuration to add a threshold for the bundle sizes. It breaks the PR if the size of a bundle surpasses the threshold (a hypothetical sketch follows below).
  • Bundle changes: Shows the bundle-size changes between the PR and the master (or main) branch. It helps us easily answer “did it increase/decrease the bundle size for x?”

This tool runs in our CI pipeline for every PR, and the result is shown in the GitHub PR. (It uses Danger under the hood.)
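Since the tool is in-house, I can’t share its real configuration, but the bundle-budget idea looks roughly like this purely hypothetical sketch (file name, keys, and thresholds are all made up for illustration):

    // bundle-budgets.js (hypothetical; the in-house tool's real format differs).
    module.exports = [
      { bundle: 'main', maxSize: '500 KB' },
      { bundle: 'vendor', maxSize: '1 MB' },
      { bundle: 'landing-page', maxSize: '100 KB' },
    ];

When a PR makes a bundle cross its threshold, the CI check fails and the PR stays blocked until the size comes back under budget.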

These tools are very interesting because

  • They help us prevent performance regressions
  • They also create awareness about web performance and its metrics, and provide a way to share knowledge

Field Metrics

For now, we’re using Instana to collect real user performance–related data.

The next step for real user monitoring (RUM) is to track more user behavior in our application to gather web-vitals metrics in the PWA flow.
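As a hedged sketch of that direction, the web-vitals library can report these metrics from real sessions (the '/rum' endpoint below is hypothetical; in practice this would feed Instana or another analytics backend):

    // Collecting Web Vitals in the field with the web-vitals library (v1/v2 API).
    import { getCLS, getFID, getLCP } from 'web-vitals';

    function sendToAnalytics(metric) {
      const body = JSON.stringify({ name: metric.name, value: metric.value });
      // sendBeacon keeps the report from blocking page unload.
      navigator.sendBeacon('/rum', body);
    }

    getCLS(sendToAnalytics);
    getFID(sendToAnalytics);
    getLCP(sendToAnalytics);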

Performance Improvements

In this section, I want to detail the process behind each discovery and the fixes we did to improve performance and user experience in our application.

Landing page

We started with our landing page. The first action was to analyze the JavaScript bundle size using Webpack Bundle Analyzer.

Note: Two years ago, the team responsible for the landing page decided to use a tool to develop the landing page with React, but in the build process React is supposed to be removed from the application to reduce the bundle size served on the landing page.

And this is what we got:

Analyzing the JavaScript bundle size using Webpack Bundle Analyzer.

We can analyze a lot of things here, but one thing that got our attention was the React library in our landing page bundle. As I wrote in the note above, React shouldn’t be used in production here, yet we were unintentionally serving it, making our users download the library without any need to do so.

We had a constant inside a React component file. And we were importing that constant in the landing page.

So by importing this constant, we were also importing React.

A simple fix was to move this constant out of the React component file and into its own file, and to import it from this new file.

Import the constant from the new file:
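Roughly, the change looked like this (a sketch; the constant, component, and file names are hypothetical):

    // Before: the constant lived inside a React component file (e.g. Banner.jsx),
    // so importing the constant from the landing page also pulled in React.
    //
    //   import React from 'react';
    //   export const BANNER_TITLE = 'Announce your house';
    //   export const Banner = () => <h1>{BANNER_TITLE}</h1>;

    // After: the constant lives in its own file with no React dependency.
    // constants.js
    export const BANNER_TITLE = 'Announce your house';

    // landingPage.js: import the constant from the new file, not the component file.
    import { BANNER_TITLE } from './constants';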

Let’s see the bundle size impact after this change:

Bundle size after our simple fix.

We reduced the bundle by 95 KB! It’s interesting to think we can have a huge impact with a small change after carefully analyzing our bundles. This will be the process behind each improvement we make in the rest of this article:

  1. Analyze the bundles.
  2. Fix the performance issue.
  3. Gather results, and keep track of the metrics.

Let’s run the bundle analyzer again. We get this:

Result from running the bundle analyzer again.

The first things that get our attention are the appboy.min.js and the transit.js libraries. appboy is Braze, a library we use for communication, and transit is a library to transform JSON data into our app state.

The Braze case was very similar to the React one. It was imported in a file that the landing page used, but the landing page didn’t really use Braze itself.

The file imported Braze and used the instance as the default value of a function parameter. The simple solution was to remove the import statement and make every caller of the aFunction function pass the Braze instance explicitly, so we no longer need to import Braze just to provide a default value for the parameter:
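A sketch of the change (the import path is an assumption; aFunction is the name used above):

    // Before: Braze is imported only to serve as a default parameter value,
    // which pulls appboy into every bundle that imports this file.
    import braze from 'appboy-web-sdk';

    export function aFunction(data, brazeInstance = braze) {
      // ...
    }

    // After: no Braze import; every caller passes the instance explicitly.
    export function aFunction(data, brazeInstance) {
      // ...
    }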

When we run the bundle analyzer again, we get an astonishing result.

The AnnounceYourHouse landing page was reduced to 90 KB. We were able to remove almost 50% of the main landing bundle.

We also improved the bundle size of the PriceSuggestion landing page a lot — lowering it from 115 KB to 4 KB was an amazing result.

For the transit library, we did a temporary workaround. We were importing the library to parse the JSON string saved in local storage just to read a single attribute of the resulting object.

The temporary solution was to check whether the string includes the info we want, removing the need to use the transit library.
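A minimal sketch of the idea (the storage key and attribute name are hypothetical):

    // Instead of importing transit to decode the whole stored state,
    // check the raw string for the attribute we care about.
    const rawState = localStorage.getItem('app-state');
    const hasSavedDraft = Boolean(rawState && rawState.includes('savedDraft'));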

Performance improvements: Before: 90 KB. After: 48 KB.

We could improve the bundle size of the main landing page a lot, removing almost 50% of the bundle.

In the metrics section, we had set up SpeedCurve to track the performance of some pages in this journey. So for every improvement we made in our application, we kept track of the metrics in these tools.

The total size of the landing page dropped drastically: -2.16 MB.

The Lighthouse Performance Score went up from 73 to 97.

The Largest Contentful Paint was improved by one second.

When running npm run bundle:analyzer, we also noticed a big dependency in our vendor chunk.

In the vendor chunk, we noticed all of the icons from Material UI. Every time a user enters the website, if the chunk is not cached in the browser, it’d need to download the whole chunk. If it’s a big chunk to download, it has an impact on the performance and, consequently, on the user experience.

This is a common problem when importing a Material UI icon in a React component.

One of our components was using an internal component library that imported the Material UI icon with a named import. Without a proper Babel plugin, this also pulls the rest of the unused icons into the vendor chunk.
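To illustrate the difference (a sketch; the icon name is arbitrary):

    // Named import: without a Babel transform (or effective tree shaking),
    // this can pull the entire @material-ui/icons package into the bundle.
    import { Close } from '@material-ui/icons';

    // Path import: only the single icon module ends up in the bundle.
    import Close from '@material-ui/icons/Close';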

We came up with two solutions:

  1. Fix the import in this internal component library so it stops using the named import.
  2. Add the Babel plugin and configure the app so unused modules aren’t bundled.

As this internal component library was the first, now-deprecated version of our design system, we didn’t want to keep maintaining it. The best approach was to stop using this library and migrate the codebase to the new design-system library (we’re working on it).

This performance project wasn’t our main project in the quarter, so we had less time to focus on it in the sprint. The Babel plugin was a more straightforward and simple solution for us at that moment.

We basically needed to add this new Babel plugin, babel-plugin-transform-imports, and configure the .babelrc:
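A sketch of the configuration, shown here for @material-ui/icons (the mapping for the internal component library follows the same pattern):

    // .babelrc
    {
      "plugins": [
        [
          "transform-imports",
          {
            "@material-ui/icons": {
              // Rewrite named imports to per-module path imports.
              "transform": "@material-ui/icons/${member}",
              // Fail the build on a full import of the package.
              "preventFullImport": true
            }
          }
        ]
      ]
    }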

And with it, we prevented the full import of the library in the vendor chunk.

The vendor chunk became way smaller. We also had some impact on the main chunk (we’ll talk about it soon).

Performance improvements: Vendor: ~2.83 MB. Main: ~530 KB.

With this simple analysis and configuration, we were able to reduce the vendor chunk by more than 50% and the main chunk by 28%. (The vendor chunk is still 2.83 MB and could be improved. We’ll see later!)

This was a huge improvement for the whole app, as these chunks were downloaded on each page, if not cached in the browser.

Main Chunk

The main chunk has some common modules among all parts of the application. But after running the bundle analyzer, we got this:

Running the bundle analyzer on the main chunk.

The main chunk is the bottom-left block in the bundle. One thing that got our attention was that some of the containers and components in it are specific to a single page. Why were we making our users download the whole main chunk just to get them?

The issue was simple: Our code splitting wasn’t working properly.

Our initial idea was to make sure all of the routes had dynamic imports for our components, so we would code split at each route entry point. And this was the problem: not all route entry points had loadable components, so they were joined into the main chunk instead of getting their own chunk for that specific route and page.

In this application, we were using, at that time, react-loadable, so the idea was to simply create these loadables and to use them for each route entry point.
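A sketch of a route-level loadable (the page and path names are hypothetical):

    // Using react-loadable so webpack emits a separate chunk for this page.
    import Loadable from 'react-loadable';

    const LoadablePriceSuggestionPage = Loadable({
      // The dynamic import is what triggers the code split.
      loader: () => import('./containers/PriceSuggestionPage'),
      loading: () => null, // or a spinner/skeleton component
    });

    // Used as the component for the route entry point:
    // <Route path="/price-suggestion" component={LoadablePriceSuggestionPage} />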

Running the bundle analyzer, we got this:

The main chunk is way smaller, and webpack created more page-specific chunks as well.

Performance Improvements: Vendor: ~690 KB, Main: ~700 KB.

The result was huge. The main chunk got more than 50% smaller, and the vendor chunk also decreased by 29%.

Caching the Biggest Dependencies

Reading this article, you probably saw some of the big dependencies in our bundle, like Firebase, Braze, Immutable, and so on.

Every time we do a new product release, our build system generates a new bundle with the chunks. If anything related to the vendor chunk changes, webpack will generate a new hash for the chunk. Thus, the browser won’t have a cached version for this chunk, and it’ll make the user download it again.

But sometimes, or most of the time, we don’t really change these larger dependencies (only when the dependency is upgraded), and we’re making our users pay for that huge chunk.

Our idea was to split these big dependencies into their own chunk to make sure the browser has a cached version of this chunk and the user doesn’t need to download it again until it’s needed.

As we were using webpack 3 at that time, we needed to use the CommonsChunkPlugin to split these dependencies into their own chunk.

We created a list of all of the biggest dependencies:

Biggest dependencies: firebase, @braze, transit, @material-ui, react-dom, amplitude-js, immutable-js

It was mapped as a list data structure in our webpack config as well:
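Something along these lines (the package names are based on the list above; the exact identifiers are assumptions):

    // The biggest dependencies we want to split into their own chunks.
    const largeDependencies = [
      'firebase',
      'appboy-web-sdk',
      'transit-js',
      '@material-ui',
      'react-dom',
      'amplitude-js',
      'immutable',
    ];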

Along with CommonsChunkPlugin, we just needed to iterate through this list to create each chunk.

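A sketch of that iteration with webpack 3’s CommonsChunkPlugin (our real config differs in the details):

    // webpack.config.js (webpack 3)
    const webpack = require('webpack');

    // largeDependencies is the list defined earlier in the config.
    module.exports = {
      // ...the rest of the config
      plugins: [
        // One CommonsChunkPlugin per big dependency: each one gets its own chunk,
        // so its hash only changes when that dependency itself changes.
        ...largeDependencies.map(
          (name) =>
            new webpack.optimize.CommonsChunkPlugin({
              name,
              minChunks: (module) =>
                module.context && module.context.includes(`node_modules/${name}`),
            })
        ),
      ],
    };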

We can see the vendor chunk got way smaller and some new chunks were created.

Running the application, we can also test the download of each separate chunk.

And we got a really cool result:

Performance improvements: Vendor: 1.2 MB

The user still needs to download the dependencies, but after the first download, the browser will cache them, and they won’t need to be downloaded again until we bump their versions. If we change the vendor chunk, webpack only generates a new hash for the vendor chunk; the hashes of the dependency chunks don’t change, so they stay cached.

We saw some nice improvements in the SpeedCurve dashboard:

As expected, we saw a huge improvement in the JavaScript size: -1.43 MB.

Decreasing the JavaScript size also had an impact on the total time the user is blocked from interacting with the page: -1.2 seconds.

The speed index is a metric to show how quickly the contents of a page are visibly populated. We improved the page’s loading speed by 2.2 seconds.

And the Largest Contentful Paint went from 6 seconds to 3.75 seconds.

Recap

To recap what we saw in this article, let’s look at the list of things we did during this journey:

  • Measure: Metrics as the foundation of performance improvements.
  • Lock: Prevent regressions and scale the performance knowledge.
  • Analyze: With data and metrics, analyze the possible problems.
  • Improve: Write and ship the code changes that fix the issues.
  • Impact: Measure the before and the after.

I’d also recommend talking to more experienced people in this performance domain if it’s possible.

Next Steps

We have more things to do, but we didn’t have time to focus on those things in the last quarter. This is a list of things that come to my mind now:

  • More metrics: Lab metrics for logged-in pages, UX metrics (engagement, bounce rate), business metrics (conversion).
  • Manage requests: Server requests caching.
  • More analysis: backend, chunks, prefetching, etc.
  • Removable dependencies: Analyze big dependencies that could be removed or replaced.
  • webpack upgrade: Bump to v5 — cache, optimization, code splitting, tree shaking, etc.
  • webpack optimization: We need the build to be faster.
  • Keep studying: Learn more to discover more opportunities.

Resources

I have some resources I used along the way while doing this project. I hope they can be helpful for you too: Web Performance Studies.

Originally published at https://leandrotk.github.io on Jan. 21, 2021.
