Largest Contentful Paint (LCP) is one of the most important user experience and web performance metrics. LCP is part of Web Vitals, the set of metrics Google defines as essential for a healthy website, and it measures loading performance. In this guide, you will learn what LCP is, how to measure it, and how to improve its score. Keep reading!
What is LCP?
LCP, or Largest Contentful Paint, is a web performance and user experience metric. It aims to reflect the loading performance of a web page by reporting when the largest element visible in the viewport has been rendered.
On an e-commerce product page, the largest contentful element is likely the product image. On a news article page, it is more likely the introduction text block or a large hero image.
LCP is an important performance metric: if the largest element in the viewport loads fast, the user perceives a pleasing experience and is less likely to leave.
What is a good, average, or bad LCP score?
A good (fast) LCP is when the largest content element renders within 2.5 seconds. To make sure you hit this target for most of your users, measure the 75th percentile of page loads, segmented across mobile and desktop devices.
Google suggests a simple evaluation of the LCP score:
Good LCP | Needs improvement | Poor LCP |
Less than 2.5 seconds | 2.5 – 4.0 seconds | More than 4.0 seconds |
What are the eligible elements for LCP?
Only a limited set of HTML elements can be reported as the largest content. For the moment, only the element types covered by the Element Timing API are considered. Additional elements may be included in the future as more research is done.
LCP can be triggered by these elements:
- <img> HTML elements
- <image> elements embedded inside an <svg>
- The poster image of a <video> HTML element
- Elements with a background image loaded via the CSS url() function
- Block-level elements containing text nodes
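If you want to check which of these elements is picked as the LCP candidate on one of your pages, a quick way is the browser's PerformanceObserver API. Here is a minimal sketch you can paste into the Dev Tools console; it simply logs each LCP candidate and the DOM node behind it.

// Log every LCP candidate reported by the browser, together with
// the DOM element behind it. 'buffered: true' also returns candidates
// that fired before this observer was created.
new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries()) {
    console.log('LCP candidate at', entry.startTime, 'ms:', entry.element);
  }
}).observe({ type: 'largest-contentful-paint', buffered: true });

The browser stops reporting new LCP candidates as soon as the user scrolls or interacts with the page, so the last entry logged is the final LCP element.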
How to measure LCP?
There are several ways to measure Largest Contentful Paint on a given web page or site, with both synthetic and Real User Monitoring (field) tools.
Speetals tool
Tool type: Field (real user monitoring)
Speetals is the simplest and easiest tool to measure, monitor, and track the progress of Largest Contentful Paint. It supports domain-level and page-level data, as well as competitors’ performance.
Here is what domain-level tracking looks like:
Then you can easily find which page type performs worst on LCP:
And finally, you can track each page type’s LCP and detect wins or regressions (as we see in the following screenshot, good LCP moved from 77.1% to 53.5% overnight!).
Web Vitals Chrome extension
Tool type: Synthetic
It’s, in my opinion, the simplest way to track LCP and the other Web Vitals metrics (First Input Delay and Cumulative Layout Shift). Just install the extension, and on every page you load it adds an overlay showing the metrics.
Caveat: the score depends on many factors, such as your connection, memory usage, CPU, and installed Chrome extensions.
WebPagetest tool
Tool type: Synthetic
WebPageTest is one of the most useful and advanced synthetic web performance tools, and it now supports Web Vitals. Enter your website URL, choose the settings that best reflect your audience, and hit the “Start Test” button. On the results page, check the Web Vitals section in the table.
Google Lighthouse
Tool type: Synthetic
Another way to measure LCP synthetically is Google Lighthouse in Chrome Dev Tools. Go to the Lighthouse tab and hit the “Generate report” button.
When the test is complete, you can check the LCP score among the other metrics.
Lighthouse uses the same color code to evaluate LCP: red for poor, orange for ‘needs improvement’, and green for a good score.
Apart from the LCP score, Google Lighthouse gives a very useful hint: the HTML element that triggers LCP. Scroll down to the Diagnostics section of the report and click on “Largest Contentful Paint element”.
TIP: Dock the Dev Tools to the bottom; when you hover over the HTML element in the report, the corresponding element on the page is visually highlighted.
The same caveats as the Chrome extension apply. Lighthouse can be subject to variability depending on the audit context.
Chrome Dev Tools Performance Panel
Tool type: Synthetic
Another way to audit LCP exists in the Chrome Dev Tools Performance panel. Run a performance profile and check the Timings section. Click on the LCP marker to get interesting details at the bottom.
If you hover over the Related Node, it highlights the element in the viewport, the same way Lighthouse does.
Chrome User Experience Report (CrUX)
Tool type: Field
Synthetic tools are useful: they let us measure, debug, and monitor web performance in a controlled context. Unfortunately, they are not sufficient on their own and don’t reflect real-life user experience. Field tools come in handy to help us better understand how things perform for real users. The Chrome User Experience Report provides many performance metrics and is updated monthly.
You can take advantage of CrUX data in two ways:
- Running SQL queries in BigQuery
- Exploring the data with Data Studio CrUX connector
=> Create your Web Vitals dashboard here https://g.co/chromeuxdash
PageSpeed Insights
Tool type: Field and synthetic
Google PageSpeed Insights is another useful tool to measure and report web performance, with both synthetic and field data. Go to the tool, fill in a URL, and hit the “Analyze” button.
Let’s audit Amazon’s Baby Care listing page (link to the PageSpeed Insights report) and dive into the report to better understand how to read it.
The first important section is the field data for the tested URL (the Baby Care category):
For the trailing 28-day period, 94% of sessions have a good LCP (< 2.5 seconds), 4% an average LCP, and 2% a poor LCP. The 1.3 s value represents the 75th percentile of all sessions.
Note that this data comes from the CrUX dataset. The only difference is that PSI is updated daily and aggregates data for a 28-day period. However, CrUX is updated every second Tuesday of the month.
The second important section in PSI is the origin summary of field data. It gives an aggregation of the metrics for the whole domain www.amazon.com. Tick the “Show Origin Summary” checkbox to display this section.
We can read this section the same way as the URL-level one. Here, at the domain level, the distribution of good LCP is worse: only 84% compared to 94%. We could cautiously conclude that the Baby Care page is not the worst performing one.
The 75th percentile LCP value for the origin is also worse, at 1.8 s versus 1.3 s for the URL.
The third important section is the Lighthouse (lab) report generated after analyzing the URL on a mid-tier device (Moto G4) with these Network configurations:
- Latency: 150ms
- Throughput: 1.6Mbps down / 750 Kbps up.
- Packet loss: none.
Is it normal to have disparate values of LCP between field and lab data? YES.
Field data represents the real-life experience of real users with all kinds of configurations (hardware, CPU, memory, internet connection, latency, etc.). That’s why lab and field scores can differ or even contradict each other.
Measure LCP with JavaScript
The Chrome team shared the web-vitals JavaScript library on GitHub.
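As a minimal usage sketch (assuming you install the package from npm; recent versions of the library expose the LCP helper as onLCP, while older releases used getLCP):

// Report LCP for real users with the web-vitals library.
// Assumes a recent version of the package, where the helper is onLCP
// (older releases exposed it as getLCP instead).
import { onLCP } from 'web-vitals';

onLCP((metric) => {
  // metric.value is the LCP time in milliseconds
  console.log('LCP:', metric.value, metric);
  // Instead of logging, you could send it to your analytics endpoint:
  // navigator.sendBeacon('/analytics', JSON.stringify(metric));
});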
How to improve the LCP score?
My approach is that, before starting any optimization, you should first define what your LCP element should be for each page type. As an example, for an e-commerce website, we could have this table:
Page type | LCP candidates | Optimizations |
Home page | Slider images | |
Product listing page | h1, first product thumbnails, hero image | |
Product details page | Product image, h1 | |
Blog post article | h1, hero image, first text block | |
Optimize Time To First Byte:
If assets take time to arrive at the browser, their rendering will be delayed. Optimizing TTFB is the first thing to do in your LCP optimization journey. Some ways to have a fast server response:
- Optimize your application code to reduce computation time
- Optimize your database queries
- Size your servers properly (CPU, memory, etc.)
- Route users to the nearest CDN edge
- Cache, cache, and cache assets (see the caching sketch after this list)
- Use server-side rendering / pre-rendering
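To illustrate the caching point above, here is a minimal sketch assuming a Node/Express server and fingerprinted static assets in a public/ folder (both are assumptions, not something described in this article). Long Cache-Control lifetimes let browsers and CDNs reuse assets without re-downloading them, which helps TTFB and LCP on repeat visits.

// Minimal sketch, assuming a Node/Express server (an assumption).
// Fingerprinted static assets get a long cache lifetime so browsers
// and CDN edges can serve them without hitting the origin again.
const express = require('express');
const app = express();

app.use(express.static('public', {
  maxAge: '1y',     // Cache-Control: max-age of one year
  immutable: true,  // the content of a given URL never changes
}));

app.listen(3000);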
Reduce transfer size of critical elements:
By reducing the transfer size, assets arrive at the browser sooner and stand a better chance of being displayed to users quickly.
- Compress images and provide responsive versions
- Enable Gzip or Brotli compression (see the sketch after this list)
- Minify CSS and JavaScript
- Remove unused CSS
- Remove unused JavaScript
- Implement a service worker
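To illustrate the text compression point above, here is a minimal sketch, again assuming a Node/Express server with the compression middleware (an assumption; in many stacks, gzip or Brotli is enabled at the web server or CDN level instead).

// Minimal sketch, assuming a Node/Express server (an assumption).
// The compression middleware gzips text responses such as HTML, CSS, and JS.
const express = require('express');
const compression = require('compression');

const app = express();
app.use(compression()); // compress responses before they are sent

app.get('/', (req, res) => {
  res.send('<html><body>Hello, compressed world!</body></html>');
});

app.listen(3000);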
Prioritize critical elements loading
- Preload hero elements
<link rel="preload" as="image" href="my-hero-image.jpg">
If you are implementing responsive images, it is also possible to preload them (Chrome only):
<link
rel="preload"
as="image"
href="wolf.jpg"
imagesrcset="wolf_400px.jpg 400w, wolf_800px.jpg 800w, wolf_1600px.jpg 1600w"
imagesizes="50vw"
>
- Preload important resources (Fonts)
<link rel="preload" href="font.woff2" as="font" type="font/woff2" crossorigin>
- Defer non-critical CSS
- Defer non-critical JavaScript (a sketch covering both of these appears after this list)
- Inline critical CSS
- Preconnect to third party origins (especially when they are needed for the critical rendering)
<link rel="preconnect" href="https://mycdn.com">
What’s next?
Define your LCP element per page type, work on optimizing it, and iterate! Then check how to optimize the other Web Vitals metrics.