Improving performance at Zoosk – RUM Monitoring vs. Synthetic Monitoring

At Zoosk, one of the bedrocks of our engineering culture is a strong focus on continuous improvement in all facets of our work. For the client teams, part of that means continuously looking for ways to improve the performance and responsiveness of our client applications for our end users. Towards the beginning of this year, we set out to look for different ways to do that.

Let’s get started.

When thinking about performance, there are a number of ways to attack the problem. The golden rule, though, is: don't try to improve performance without measuring it first. With that in mind, we knew we needed a solid system to measure and track the impact of the different changes we were going to make.

One of the most common ways to benchmark front-end performance is to use synthetic monitoring. This refers to scripting and monitoring web transactions against key experiences throughout your site. There are a large number of excellent third parties to choose from, all at different price points, and we've tried a number of them with varying degrees of success at Zoosk. Our current synthetic monitoring service for our web clients, GTMetrix, is relatively inexpensive and simple. It's great for things like trending waterfall graphs of our web resources and page sizes. It also generates Google Page Speed and YSlow scores for all monitored pages, which makes it very easy to see at a granular level if we've introduced a change in how our pages load after a release.


GTMetrix page size trend
GTMetrix waterfall graph

Great, let’s start optimizing!

Not so fast. This ends up only being part of the story. Let's take a step back and think about what exactly we are trying to optimize. We care about performance because we want to create the quickest and most responsive experience possible for our users. That's an important distinction. If we started optimizing purely on the data from our synthetic monitoring, we'd be optimizing for a few computers hitting our website from around the world. While we obviously love computers at Zoosk, concentrating on optimizing for our users is probably a better use of our time. ;D Many of the changes driven by synthetic monitoring would probably help our users too, but some might not affect them at all. For example, we defer a large number of resources like tracking pixels and third party JS libraries to the bottom of the page load. Pulling those items completely off of our pages would definitely improve our synthetic monitoring scores but would have virtually no impact on our end users. Also, our monitoring machines are almost guaranteed to have stable connections. This isn't necessarily true for real world users, especially mobile users on high latency connections.
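To make the deferral point concrete, here's a minimal sketch of the general technique of loading non-critical third-party scripts only after the page has finished loading. The URLs and helper name are purely illustrative, not the actual tags we use.

```typescript
// Inject a non-critical third-party script without blocking the initial render.
function loadDeferredScript(src: string): void {
  const script = document.createElement("script");
  script.src = src;
  script.async = true;
  document.body.appendChild(script);
}

// Wait for the main page load to complete before pulling in tracking and
// analytics code, so these requests never compete with user-visible content.
window.addEventListener("load", () => {
  loadDeferredScript("https://example.com/tracking-pixel.js"); // hypothetical URL
  loadDeferredScript("https://example.com/third-party-lib.js"); // hypothetical URL
});
```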

This is where RUM, or Real User Monitoring, comes in. RUM refers to passively recording the interactions of real users with your application. In this context, we're using it to collect real user data about the performance timing of our web and native applications. RUM is fantastic because now we're getting data from the actual people we want to optimize for.
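To give a sense of what that raw data can look like, here's a minimal sketch of the kind of per-page-load timing a browser exposes through the Navigation Timing API. How the numbers get beaconed to a backend is left out, and the metric names are just illustrative groupings.

```typescript
// Collect a few coarse RUM metrics once the page has fully loaded.
// All performance.timing values are millisecond timestamps since the epoch.
window.addEventListener("load", () => {
  const t = performance.timing;
  const metrics = {
    dns: t.domainLookupEnd - t.domainLookupStart,
    tcpConnect: t.connectEnd - t.connectStart,
    timeToFirstByte: t.responseStart - t.navigationStart,
    domReady: t.domContentLoadedEventEnd - t.navigationStart,
    pageLoad: t.loadEventStart - t.navigationStart,
  };
  // In a real RUM setup these numbers would be sent to an analytics backend.
  console.log(metrics);
});
```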

Ok, now we’re really ready!

Well, the last thing we needed was something catchy and fun that our team could rally behind. I was inspired by what Twitter did a couple years back when they started to optimize around a metric called Time to Tweet. Being that we’re a dating site…

TTF: Time to Flirt

This metric does a great job of encapsulating perceived responsiveness for our end users. For web clients, it's the time from a URL going into the browser's address bar to when the user sees their first pretty face in our search results. Similarly, for native clients, it's the time from when a user launches the application to when their first search result is loaded.
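As a rough sketch (not our exact instrumentation), a web client could compute something like TTF by taking the difference between the browser's navigation start timestamp and the moment the first search result finishes rendering. The function names below are hypothetical.

```typescript
// Hypothetical hook called by the search results view once the first
// result (that first pretty face) has actually been rendered.
function onFirstSearchResultRendered(): void {
  // navigationStart is recorded by the browser when the URL is entered,
  // so this difference approximates the user-perceived Time to Flirt.
  const ttfMs = Date.now() - performance.timing.navigationStart;
  reportTiming("TTF", ttfMs);
}

// Placeholder reporting helper; see the GA User Timings sketch further below.
function reportTiming(name: string, valueMs: number): void {
  console.log(`${name}: ${valueMs}ms`);
}
```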

The next question we had to answer was: how do we want to aggregate all this user data? The main consideration was whether we wanted to build something internally, or whether there was a third party that could give us most of the functionality we needed. LinkedIn posted a great blog article about the system they built in-house to get real-time feedback on performance changes on their site. We also evaluated third parties like keen.io, Parse, and GA (Google Analytics) for the job.

After going through the trade-offs, we decided to go with GA, mainly because of the low cost of entry. All of our clients already send data to GA for page-level analytics, so we decided to try out its User Timings feature for tracking TTF. The main things we had to give up were the timeliness of our results and how much we could customize our visualizations. We ended up with dashboards for each of our client teams that look something like the ones below.

Web Dashboard (WWW Dashboard v2)

Android Dashboard

You can see GA does a pretty good job with its visualizations, and it gives you a lot of dimensions to pivot on out of the box. You can break down your audience by device type, operating system, browser, location, and much more. There are some definite drawbacks, which I'll go through in the next post, but overall it worked out pretty well.
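For concreteness, here's roughly what sending a TTF measurement through GA's User Timings feature looks like with analytics.js. The category and label strings are illustrative, not the exact ones behind our dashboards, and the ga function is assumed to be provided by the standard GA snippet already on the page.

```typescript
// Provided by the standard analytics.js snippet included on the page.
declare function ga(...args: unknown[]): void;

// Report a Time to Flirt measurement as a GA user timing.
// analytics.js signature: ga('send', 'timing', category, variable, value, label)
function reportTimeToFlirt(ttfMs: number): void {
  ga("send", "timing", "Performance", "TTF", Math.round(ttfMs), "Search Results");
}
```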

So which is better?

In closing, both systems have their pros and cons, and we actually decided to keep both. Our synthetic monitoring system is very useful for keeping a history of more granular details like web page sizes and page load waterfall graphs, while our RUM monitoring and Time to Flirt keep us very aware of why we constantly optimize for performance in the first place.

In the next post, I’ll go over the improvements that came out of the initiative and the lessons learned from the whole process.  Also, in the time since we started this initiative, there have been a slew of exciting improvements by third parties in RUM monitoring that I’ll talk about as well.
