Speed Essentials: Key Techniques for Fast Websites (Chrome Dev Summit 2018)


[MUSIC PLAYING] KATIE HEMPENIUS:
Today, Houssein and I are going to talk
with you about how you can make your site fast. We’re going to focus on
three things: images, web fonts, and JavaScript. We’ve chosen to focus
on these three things because they are
the three largest components of most websites. In addition, they’re likely to
be the three largest components of your performance budget. We hope that after
this presentation, you’ll go home and make
changes to your website. Know that during
this process, you can lean on both
Lighthouse and web.dev for additional resources. Almost everything we cover today
can be audited by Lighthouse. In addition, at web.dev you can find
guides, code samples, and demos of everything we cover today. So let’s start by
talking about images. Images are taking over the web. On many sites, images
alone would consume the entire performance budget. On some sites, they would far exceed it. I think the reason why
these numbers are so bad lies in the fact that
performant images are the result of many
steps and optimizations. As a result, they’re not
going to happen accidentally. A performant image is
the appropriate format, is appropriately compressed,
is appropriate for the display, and is loaded only
when necessary. To be successful
with images, it’s imperative that you automate
and systematize these things. Not only is this going
to save you time, but it’s going to ensure that
these things actually get done. There’s much more to
images than meets the eye. At a bits and bytes
level, an image is as much a byproduct
of its image format and its compression as
its visual subject matter. You can think
about image formats as choosing the right
tool for the job. The image format
that you choose will determine what features an
image has, for instance, whether it supports
transparency or animation, as well as how it
can be compressed. The first image format that
I want to talk about today is the animated GIF. You should not be fooled by
their crappy image quality. They’re actually
huge in file size. This one-and-a-half-second clip is 6.8 MB as a GIF. As a video, however, it is 16 times smaller, at 420 kilobytes. This is not uncommon. Animated GIFs can be anywhere
from five to 20 times larger than the same
content served as a video. This is why if you’ve ever
inspected your Twitter feed, you may have noticed
that the content labeled as GIF is not actually a GIF. Twitter does not
serve animated GIFs. If you upload an animated
GIF, they will automatically convert it to video. The reason for the
drastic difference in file sizes between
videos and animated GIFs lies in the differences between
their compression algorithms. Video compression algorithms
are far more sophisticated. Not only do they compress
the contents of each frame, they also do what is known as inter-frame compression. And you can think of this as compression that looks at the diffs
between the different frames. The first step in switching
from animated GIFs to video is to convert your content. You can use the ffmpeg
command line tool for this. Next, you’ll need
to update your HTML, and replace image
tags with video tags. The code I have up on the
screen is technically correct, but it’s probably not
what you want to use. Instead, you want to make sure
to add the four attributes I’ve highlighted up on the screen. That’s going to give
your video that GIF look and feel even though
it’s not a GIF.
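A rough sketch of what that markup can look like (the file names here are hypothetical, and the conversion itself can be as simple as ffmpeg -i animation.gif animation.mp4). The GIF-like behavior typically comes from the autoplay, loop, muted, and playsinline attributes:

    <!-- Sketch only: file names are hypothetical. -->
    <video autoplay loop muted playsinline>
      <source src="animation.webm" type="video/webm">
      <source src="animation.mp4" type="video/mp4">
    </video>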
Now, we’ll switch gears and talk about a much more modern image format, and that, of course, is WebP. WebP is no longer a
Chrome only technology. Last month, Microsoft Edge
shipped support for WebP. In addition, Mozilla Firefox
announced their intent to ship WebP. Currently, 72% of global web
users have support for WebP. And given these
recent developments, you can expect this
number to only increase. This is a big deal
because WebP images are 25% to 35% smaller than
the equivalent JPEG or PNG. And this translates into some
really awesome improvements in page speed. When the Tribune added support for WebP, they found there was a 30% improvement in page load times in browsers that support WebP. By far the biggest hesitation
I see around adopting WebP is a fear that you can’t
both serve WebP and support non-WebP browsers. And this is not true. The picture and the
source tags make it possible to do precisely this. You can think of the picture tag
as a container for the source and image tags that it contains. The source tag is used
to specify multiple image formats of the same image. The browser will download the
first, and only the first, image that is in a
format that it supports. So in this example I
have up on the screen, the Chrome browser would
download the WebP version, a Safari browser would
download the JPEG version. The great thing about this is that even though all major browsers have supported the picture and source tags since 2015, if, say, a 2014 browser were to encounter this, it would still work, because those browsers would just download the image specified by the image tag.
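A rough sketch of that pattern (file names are hypothetical); the browser downloads the first source it supports and otherwise falls back to the img tag:

    <picture>
      <source srcset="photo.webp" type="image/webp">
      <img src="photo.jpg" alt="Description of the photo">
    </picture>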
If you haven’t noticed, I’ve been talking about image formats. But I want to kind
of go on a tangent and squeeze in a mention
of the AV1 video format. And the reason why I
wanted to squeeze it in is that it is the future
of video on the web. The reason why it’s the
future of video on the web is that it compresses video
45% to 50% better than what is currently typically
used on the web. It is still fairly new, so
it’s not really practical for you to be implementing
it on your site yet. However, I encourage you to
attend Francois and Angie’s talk at 3:30 today,
where they’re going to be diving into
AV1 in more detail. Image compression
is a topic that’s tightly coupled
to image formats. Image compression algorithms
are specific to the image format that they compress. However, all image
compression algorithms can be broken down into
lossless and lossy compression. Lossless compression
results in no loss of data. Lossy compression does result in loss of data, but it can achieve
greater file size savings. At a minimum, all sites should
be using lossless compression, no questions asked. However, for most
people, it’s going to make sense to be
slightly more aggressive and use lossy
compression instead. The trick with
lossy compression is finding that sweet spot
between file size savings and image quality for
your particular use case. Many lossy compression tools
use a scale of zero to 100 to represent the image quality
of the compressed image, with zero being the worst,
and 100 being the best. If you’re looking for a place
to start with lossy compression, we recommend trying out a
quality level of 80 to 85. This typically reduces
file size by 30% to 40%, while having a minimal
effect on image quality. By far, the most popular
tool for image compression is Imagemin, and it can be used
with just about everything. Imagemin is used in conjunction
with various Imagemin plugins. And you can think
of these plugins as implementations of different
image compression algorithms. Up on the screen, I’ve put the
most popular Imagemin plugins for various use cases. However, these are by no means
the only Imagemin plugins available.
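As a hedged sketch of how Imagemin is often wired into a Node build step (the paths and plugin choice are hypothetical, and option names vary a little between Imagemin versions), compressing JPEGs with the MozJPEG plugin at the quality level of 85 suggested above might look like this:

    // Compress every JPEG in images/ and write the results to build/images/.
    const imagemin = require('imagemin');
    const imageminMozjpeg = require('imagemin-mozjpeg');

    (async () => {
      await imagemin(['images/*.jpg'], {
        destination: 'build/images',
        plugins: [imageminMozjpeg({ quality: 85 })]
      });
    })();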
Image sizing is something I think many sites could be doing a much better job at. We have so many
types of devices, and specifically sizes
of devices that access the web these days. However, we insist on serving
them all the exact same size of image. Not only does this have
transmission costs, but it also creates
additional work for the CPU. A solution, of
course, is to serve multiple sizes of an image. Most sites find success
serving anywhere from three to five
sizes of an image. And in fact, this is
exactly what Instagram does. Instagram uses this technique
throughout their site. However, one use case where they
were able to measure its impact was with their Instagram embeds. For context, Instagram embeds
allow third-party sites to display Instagram
content on their own site. As a result of serving
multiple image sizes, Instagram was able to
reduce image transfer size by 20% for their
Instagram embeds. Two popular tools for image resizing are Sharp and Jimp. The biggest difference between the two is that Sharp is faster, and when I say faster, I mean faster at image processing. However, it requires compiling native C and C++ code to install it.
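A minimal sketch of generating several widths with Sharp (the file names and widths here are hypothetical):

    // Produce three sizes of one source image.
    const sharp = require('sharp');

    [480, 800, 1200].forEach((width) => {
      sharp('photo.jpg')
        .resize(width)                  // resize to this width, keeping the aspect ratio
        .toFile(`photo-${width}w.jpg`);
    });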
In addition to creating multiple sizes of your images, you’ll need to update your HTML. You’ll want to add the srcset and sizes attributes. The srcset
attribute allows you to list multiple versions
of the same image. In addition to
including the file path, you’ll also want to include
the width of the image. This saves the browser from
having to download the image to figure out how large it is. The sizes attribute tells
the browser the width that the image will
be displayed at. By using the information
contained in the srcset and sizes attributes, the browser can then figure out which image to download.
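A sketch of that markup, with hypothetical file names and display widths:

    <!-- The w descriptors give each file's intrinsic width;
         sizes tells the browser how wide the image will be displayed. -->
    <img
      srcset="flower-480.jpg 480w,
              flower-800.jpg 800w,
              flower-1200.jpg 1200w"
      sizes="(max-width: 600px) 480px, 800px"
      src="flower-1200.jpg"
      alt="A flower">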
Lazy loading is the last image technique that I’ll be talking about today. Lazy loading is a strategy
of waiting to download a resource until it is needed. In addition to images, it can
be applied to resource types like JavaScript. Image lazy loading
helps performance by easing that bottleneck that
occurs on initial page load. In addition, it saves user
data by not downloading images that may never be used. Spotify is an
example of a website that uses this technique
very effectively. On this particular
page that I pulled up, image lazy loading
was the difference between loading 1 MB of images on initial page load and 18 MB of images on initial page load. That’s a huge difference. Two tools to look into
for image lazy loading are lazysizes and lozad. And you implement them both
more or less the same way. Add the script to your site, and then indicate which images
should be lazy loaded. However, just because this is a
fairly simple-to-use technique does not mean that it’s not important. In fact, it is so important
that native lazy loading is coming to Chrome. [APPLAUSE] Native lazy loading
means that you’ll be able to take
advantage of lazy loading without having to add
third-party scripts on your site. It’ll be available
for both images and cross-origin iframes. And you can truly be lazy when
it comes to implementing it. If you make no
changes to your HTML, the browser will simply
decide which resources should be lazy loaded. If you do care, however, you
can use the lazyload attribute to specify which resources should or should not be lazy loaded.
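A sketch of the lazysizes approach (file names are hypothetical; lozad works along similar lines). For the native version described above, the attribute that eventually shipped in Chrome is loading:

    <!-- lazysizes: include the script, mark images with class="lazyload",
         and put the real URL in data-src so it is only fetched when needed. -->
    <script src="lazysizes.min.js" async></script>
    <img data-src="photo.jpg" class="lazyload" alt="A photo">

    <!-- Native lazy loading, as it later shipped: -->
    <img src="photo.jpg" loading="lazy" alt="A photo">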
Fonts can cause performance problems because they are typically large files that are downloaded from third-party sites. As a result, they can
take a while to load. This leads to the
phenomenon known as the flash of invisible text. And shockingly, this affects two
out of every five mobile sites. Flash of invisible
text looks like this. Instead of a user being
greeted with text on your site, they’re greeted with invisibleness. Not only is this frustrating,
but it also looks bad. What you want to
encourage instead is the flash of unstyled text. And this is when the
browser initially displays text using a system
font and then swaps it out for the custom
font once it has arrived. The good news here is that this
fix is literally a one-liner. Everywhere in your CSS where
you declare a @font-face, add the line font-display: swap. This tells the browser to use that swapping behavior that I just talked about in the previous slide.
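In CSS, that one-liner sits inside each @font-face rule (the font name and URL here are hypothetical):

    @font-face {
      font-family: "My Web Font";
      src: url("/fonts/my-web-font.woff2") format("woff2");
      font-display: swap; /* show fallback text right away, swap in the web font when it arrives */
    }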
Now I’m going to hand the mic over to Houssein, who’s going to talk with you about techniques you can use with your JavaScript. [MUSIC PLAYING] HOUSSEIN DJIRDEH:
So Katie showed a number of
techniques that could be quite useful for the images
and web fonts in your site, as well as a few exciting things
coming to the Chrome platform in the near future, like
native lazy loading. For the rest of this
talk, we’ll go over some other important
things you should be doing for the JavaScript
that makes up your application. Earlier in this
session, we saw how images can make up the
majority of a site with regards to number of bytes sent. However, we also send
a significant amount of JavaScript to browsers. If we take a look
at HTTP archive data once again, as of last
month, the median amount of JavaScript that we
shipped to mobile web pages was about 370 kilobytes. For desktop, the
number was about 420. Now JavaScript code still needs
to be uncompressed, parsed, and executed by the browser. So in reality, we’re
looking at about a megabyte of uncompressed code
that needs to be processed for an
application of this size. Users who try to access this
with low-end mobile devices will notice a much
poorer performance. But why are we, as
developers, shipping way more JavaScript code than
we’ve ever done before? There are a number of reasons. One of them being the
amount of dependencies that we pull into
our applications and how easy that
process has become. Front-end tooling has come a
long way in the past decade, but that has come at some cost. So what can we do to
continue to try and build robust and fully
fledged applications, but not at the expense
of user experience? The very first thing we can
and should consider doing is splitting our bundle. The idea behind
code splitting is that instead of sending all the JavaScript code to your users as soon as they load the very first page of your application, you only send them what they
need for their initial state. And then allow them to fetch
future chunks on demand. The easiest way to get
started with code splitting is by using dynamic imports. Now dynamic imports have been supported in Webpack for quite some time. They allow you to import a module asynchronously, where a promise gets returned. Once that promise
finishes resolving, you can do what you need to
do with that piece of code. The idea behind
dynamic imports is you want to make sure that
it fires on certain user interactions. And you want to do this to make
sure that you only fetch code when it’s actually needed.
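A minimal sketch of a dynamic import tied to a user interaction (the module path and function are hypothetical):

    // The chart code is only fetched when the user actually asks for it.
    button.addEventListener('click', async () => {
      const { renderChart } = await import('./chart.js');
      renderChart();
    });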
If you happen to be using another module bundler, like Parcel or Rollup, you can still use dynamic import to code
split where you see fit. Now, a number of JavaScript
libraries and frameworks have provided abstractions
on top of dynamic imports to make the process
of code splitting easier with your
current tooling. With Vue, for example, you can define async components. And they’re just functions that return a promise that resolves to the component. By using that with dynamic imports, you can attach async components to your routing configuration. So only when a certain route is reached will the code that lives in that component be fetched.
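A sketch of what that looks like in a Vue Router configuration (the route and component names are hypothetical):

    // The Profile chunk is only fetched when the /profile route is visited.
    const routes = [
      { path: '/profile', component: () => import('./Profile.vue') }
    ];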
Angular has a very similar pattern. In its router, you can use the loadChildren attribute, and you can use it to connect a feature module to a specific route. With loadChildren, you can define a dynamic import with Ivy. And Ivy is a new rendering engine that the Angular team is working on. When you take this approach, all
the code, all the components, all the services that
live in the feature module will only get loaded when
that route is reached. In the meantime, you can use loadChildren, but you just need to use a relative file path to the feature module.
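A sketch of both flavors in an Angular route configuration (module names and paths are hypothetical):

    import { Routes } from '@angular/router';

    // Today: loadChildren with a relative file path to the feature module.
    const routes: Routes = [
      { path: 'shop', loadChildren: './shop/shop.module#ShopModule' }
    ];

    // With Ivy, the same route can use a dynamic import instead:
    // { path: 'shop', loadChildren: () => import('./shop/shop.module').then(m => m.ShopModule) }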
With React, libraries like React Loadable and loadable-components have allowed us
the component level while taking care
of other things, like showing a loading
indicator, or an error state, where applicable. However, with React 16.6, the
lazy method was introduced. And this allows you to code
split while using suspense. Now suspense is a feature
that the react team has been working on for quite some time. And it allows you to suspend how
certain component trees update your state or update
the DOM, depending on how all of its
children components have fetched their data. Another very useful technique
that ties in well to code splitting your bundle
is by using preload. Preload allows us
to tell the browser that if we have a late
discovered resource or a resource that’s fetched
late in the request chain that we’d like to download it
sooner because it’s important. So by doing this, we’re telling
the browser to prioritize. To use preload, you only
need to add a link element to the head of
your HTML document and you need to
have a rel attribute with a value of preload. The as attribute is used
to define what type of file you’d like to load. Now as developers, it’s
also important to make sure that the code that
we write works well in all the browsers people
use to access our site. So if we happen to include ES
2015, 2016, or later syntax, we also want to include
backwards compatible formats so all the browsers can
still understand them. This usually involves adding
transforms for any newer syntax that we use and polyfills
for any newer features. Now because
transpiling means we’re adding code on
top of our bundle, our application ends
up being larger than it was originally written. One way to make sure that
we only transpile the code that’s actually needed is
by using @babel/preset-env. This preset takes
the hassle out of us trying to micromanage
which plugins and polyfills we need to add. And it does this by
allowing us to specify a target list of browsers and
letting babel handle the rest. You can add this preset
into your list of presets in your babel
configuration, and you can use the targets attribute
to define that set of browsers that you’d like to reach. Now this is a
browserslist query. So if you’ve used tools like Autoprefixer before, you may already be
familiar with it. Using a percentage, like
here, is one type of query you can use. And it allows you to
target browsers that cross a certain global market share. The useBuiltIns attribute allows us to tell Babel how to handle adding polyfills. The usage value means that Babel will only automatically include polyfills in files when they’re actually needed for features that need to be transpiled.
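A hedged sketch of such a Babel configuration (the browserslist query is illustrative, and on newer Babel releases useBuiltIns: 'usage' also expects a corejs option):

    // babel.config.js
    module.exports = {
      presets: [
        ['@babel/preset-env', {
          // a browserslist query: browsers above a certain global market share
          targets: '> 0.25%',
          // add polyfills per file, only for features the code actually uses
          useBuiltIns: 'usage'
        }]
      ]
    };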
Now this is the behavior we all want: to only transpile code when it’s required. So although @babel/preset-env means that we can limit the amount of transpiled code
that we have to make sure that we only include what’s necessary
for all the browsers we plan to target, what if there was a
way to differentially serve two different types of bundles? One, that’s largely
un-transpiled, for newer browsers that don’t
need nearly as many polyfills, and another legacy bundle,
that contains more polyfills, is a bit larger, but is
needed for older browsers. We can do this by using
JavaScript modules. Now JavaScript
modules or ES modules allow us to write blocks of
code that import and export from other modules. But the amazing thing
about using modules with @babel/preset-env is that we can use module support as a target, instead of a specific browser query.
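A sketch of that target in the same kind of configuration, using the esmodules option of @babel/preset-env:

    // Target browsers that support ES modules instead of a browserslist query.
    module.exports = {
      presets: [
        ['@babel/preset-env', {
          targets: { esmodules: true }
        }]
      ]
    };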
One site that’s actually using this module approach today is The New York Times. And they’re using it for one of their flagship articles of the year, polling in real time for the 2018 midterm elections. They’re using Sapper as their
client-side framework, which contains a number of progressive
enhancements baked in, like automatic code splitting. But they’re also using Rollup
to emit module chunks as well. They’re using a fairly
simple heuristic to make sure that users who
have older browsers download a larger, more polyfilled bundle, but users who are using newer browsers only download a smaller, slimmer module bundle. A very simple way to
make sure that users who access your app only
download one or the other is by using the module/nomodule technique. When you define a script
element with type="module", browsers that understand modules will download that normally. But they’ll know to ignore
any script element that has the no module attribute. Similarly, browsers that
don’t understand modules will ignore any script
elements that have type="module". But since they can’t identify what nomodule means, they’ll download
that bundle as well. So here, we can get the
best of both worlds: shipping the right
bundle to our users, depending on what
browser they use. If you happen to have
critical modules that you’d like to download sooner,
you could do that by also preloading them as well. And you just need to specify a modulepreload value for the rel attribute.
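Put together, a module/nomodule setup with a preloaded module might look roughly like this (file names are hypothetical):

    <head>
      <!-- Hint that the critical module should be fetched sooner. -->
      <link rel="modulepreload" href="/js/app.mjs">
    </head>

    <!-- Modern browsers fetch the module build and ignore the nomodule script;
         older browsers ignore type="module" and fetch the legacy bundle. -->
    <script type="module" src="/js/app.mjs"></script>
    <script nomodule src="/js/app-legacy.js"></script>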
So we’ve talked about a few things
that you ship to your users, but if you’re thinking of adding
any of these optimizations, it could be useful to try
and keep an eye on things. And there are tools out
there that could actually make this easier. The code coverage tab
within Chrome DevTools allows you to see the
size of all your bundles, as well as how much of it
is actually being used. You can access it by
opening the Command menu and just typing in coverage. If you’re using Webpack,
Webpack bundle analyzer can be a very handy tool. And it gives you a nice
heat map visualization of your entire bundle. You can zoom in, see which
parts of your bundle are larger and which parts of your
bundle are smaller. And if you’ve ever
wanted to find the cost of a specific library,
you can use bundle phobia. You could type the
name of a package and see how large it is, as
well as how much of an impact it can make to your application
in terms of download time. You can also scan
your package.json file to see how much of an impact
all your packages make. Now as useful as it is to
use tools to manually keep an eye on how things are
doing with your bundle size, it can be especially useful to
also include checks into your build workflow. One tool that could
actually help here, that can allow you to
set performance budgets, is the Lighthouse CI. So instead of only running Lighthouse in the Chrome Audits panel, or as a
Chrome extension, you can also run Lighthouse
in CI and have it included as a status
check into your workflow. You could specify certain
Lighthouse categories and set scores for them so that
merges and pull requests only get included if
those scores are met. Now a site that’s
actually taking steps to add a number of these
optimizations is UNIQLO. They’re a clothing retailer based out of Japan. And they’re taking steps
to improve their entire web architecture, beginning
with their Canadian site. They’ve identified a number
of critical resources and decided to try and
download them sooner, and they’re doing this
by preloading them. They’ve done this
with some images, some core fonts, as well as a
number of cross origin fetches. They then also
identified that they can code split and try to get
some wins that way as well. They took the correct first
step of code splitting at the route level. And just by doing
that alone, they noticed almost a 50%
reduction in their bundle size. They also moved on to code split
their localization package. And noticed that they can
get their bundle size down to 200 kilobytes. After this, they even
added more optimizations, such as using a
Preact compatibility layer for the React bindings,
to get their bundle size to about 170 kilobytes. While doing all
of this, they made sure to also set budgets
so their whole team can stay in sync. And they’re using another
open source tool to help here, called Bundle Size. They’ve set 80 kilobyte budgets
for each one of the chunks, which allows them to stay
under a 200 kilobyte total for all of their scripts. While adding these
optimizations, they noticed a two-second
time to interactive reduction for users that use
low-end mobile devices and have weak connections. Now you might think two
seconds is not that much. But it can make an impact
for your customers. After these
optimizations were added, they noticed a 14%
reduction in bounce rate, a 31% increase in
average session duration, and a 25% increase in
pages viewed per session. Now there were other
things also being added to the site
at the same time, but they know that performance
played a huge role here. So we’ve talked about
quite a few things that you can do today to
improve how your site performs. But what can Chrome do
as a browser as well? For users that opt in
to Data Saver mode, Chrome will try to show a
lightweight version of the page where possible. And it does this by
minimizing data used as well as showing cache
content whenever it can. Now as developers, you can
also tap into this as well. And you could do this by using
the network information API. If you look at the navigator.connection.saveData attribute, you can identify whether your users actually have Data Saver enabled. And you could try and serve a
slightly different experience to make sure things are
fast for them as well. You can also use the
effectiveType attribute, and use that to serve different assets conditionally depending on what kind of connection your user has.
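A sketch of what that could look like (the helper functions are hypothetical, and navigator.connection is not available in every browser, so feature-detect it first):

    if ('connection' in navigator) {
      // Serve a lighter experience for Data Saver users or very slow connections.
      if (navigator.connection.saveData || navigator.connection.effectiveType === '2g') {
        loadLowResImages();   // hypothetical helper
      } else {
        loadHighResImages();  // hypothetical helper
      }
    }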
The very last thing that I do want to mention is that although Katie and I have talked about a lot of the
things that you can do to improve your
site, every application is built differently. Every team is different. Every tool chain is different. So this isn’t something
you need to start doing wholesale and
including everything at once. By setting budgets and keeping
an eye on your bundle size from the very beginning,
you can include performance enhancements as a
step by step procedure and make sure your
site never regresses. Performance doesn’t need
to be an afterthought. Almost everything we’ve
talked about is in web.dev, so I highly suggest you take
a look if you haven’t already. We hope you enjoyed this talk
as much as we enjoyed giving it. Thank you. [MUSIC PLAYING]

11 thoughts on “Speed Essentials: Key Techniques for Fast Websites (Chrome Dev Summit 2018)”

  1. 1) The "image" part of this talk is extremely well executed. Simple and efficient. Every dev should see that.
    2) I tried rel=modulepreload and, for now, it seems less efficient than bundling.

  2. Thank you for sharing. This is really useful! Is there any way I can access the slides of the talks from the dev summit for reference?

  3. Making websites fast is one thing but the increase of Cookie banners, GDPR notice, and asking for notification permission (sometimes altogether at once) is ruining the UX no matter how fast the page loads.

  4. Thanks for the awesome content, definitely a step in the right direction on improving overall web performance experience.
    Brilliant talks coming from chrome dev summit so far… Keep it up Googlers!

  5. Ahh, yes, we should replace the "flash of invisible text" with the "flash of unstyled text". Anyone else remember when the "flash of unstyled text" was the browser default but every best practice guide discussed how to replace it with the "flash of invisible text"?
