The library helps us get data into GA—it sends an event for each metric with a timestamp, so each metric is recorded as a unique hit—but GA doesn’t have any reports for making sense of the data. So we have to pull the data out via the API and assemble it into reports ourselves.
Luckily this isn’t too hard to do. Here’s what I’ve been doing.
I’m using a Python script with a couple of functions for outputting different report values to the console. For now I’m doing ad hoc reporting so I just copy the values into charting tools and spreadsheets.
The real benefit of having Web Vitals data in GA is the ability to segment with all of the other dimensions in GA. That’s where most of the action is. So when I’m running the script I’m usually trying out different dynamic segments and comparing them.
For each function/report you specify which metric to calculate—LCP, FID, CLS, FCP, or TTFB. There are three different reports:
Good - Needs Improvement - Poor Distribution
This function groups the hits into Good, Needs Improvement, and Poor buckets, using the thresholds defined by Google. It returns an array of counts in the order [good, needs improvement, poor].
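As a rough illustration, here's how such a bucketing function might look. This is a sketch, not the actual script: it assumes the hit values have already been fetched from the GA API as a plain list of numbers, and the function name is made up. The thresholds are Google's published "good"/"poor" boundaries for each metric (times in milliseconds; CLS is unitless).

```python
# Google's published thresholds: values at or below the first number are "good",
# values above the second are "poor", everything in between "needs improvement".
# Times are in milliseconds; CLS is a unitless score.
THRESHOLDS = {
    "LCP": (2500, 4000),
    "FID": (100, 300),
    "CLS": (0.1, 0.25),
    "FCP": (1800, 3000),
    "TTFB": (800, 1800),
}

def distribution(metric, values):
    """Return counts in the order [good, needs improvement, poor]."""
    good_max, poor_min = THRESHOLDS[metric]
    buckets = [0, 0, 0]
    for v in values:
        if v <= good_max:
            buckets[0] += 1
        elif v <= poor_min:
            buckets[1] += 1
        else:
            buckets[2] += 1
    return buckets
```

Dividing each count by the total gives you the percentage breakdown you typically see in Web Vitals dashboards.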
Percentile
This function returns a single percentile value for the date range.
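A percentile helper for this could be as simple as the following sketch. The name and the choice of the nearest-rank method are assumptions on my part, since the original script isn't shown here:

```python
import math

def percentile(values, pct):
    """Return the pct-th percentile of values (nearest-rank method):
    the smallest value with at least pct% of the data at or below it."""
    ordered = sorted(values)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]
```

For example, `percentile(lcp_values, 75)` gives the LCP experienced by the 75th-percentile user for the whole date range.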
Percentile Over Time
This function returns a time series with values for a given percentile. So you can, for example, look at the experience of the 75th percentile and see how it’s changing over time. You can modify the time delta to get results in daily, weekly, or monthly increments.
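A sketch of how that time series might be computed, assuming the hits come back as (date, value) pairs. Everything here (the function name, the pair format, the nearest-rank percentile) is an assumption for illustration:

```python
import math
from collections import defaultdict
from datetime import date, timedelta

def percentile_over_time(hits, pct, delta=timedelta(days=1)):
    """hits: list of (date, value) pairs. Returns a sorted list of
    (bucket_start_date, percentile_value) tuples, one per delta-sized bucket."""
    if not hits:
        return []
    start = min(d for d, _ in hits)
    buckets = defaultdict(list)
    for d, v in hits:
        # Snap each hit to the start of its bucket.
        bucket = start + delta * ((d - start) // delta)
        buckets[bucket].append(v)
    series = []
    for bucket in sorted(buckets):
        ordered = sorted(buckets[bucket])
        rank = max(1, math.ceil(pct / 100 * len(ordered)))  # nearest-rank
        series.append((bucket, ordered[rank - 1]))
    return series
```

Passing `delta=timedelta(days=7)` would give weekly increments instead of daily ones.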
Examples and Script
Here are some examples of reports you can generate using GA segments: