There’s no one best way to do analytics, just as there is no single “god metric” for an editorial team. Good analytics combines a variety of approaches and data sources to support both day-to-day optimization and longer-term planning. In this article we won’t cover a comprehensive approach to editorial analytics, nor will we focus on a particular metric and its merits or lack thereof. This article is about setting up a proper foundation for measuring article performance on a day-to-day basis.
To better connect with our readers, we need to objectively compare the relative performance of our articles. We need to carefully determine what’s working and what isn’t, so that we can see where the potential lies and wisely decide what needs to be culled.
We cannot go on guesswork or gut feeling here; we can only rely on hard performance data. Nor can we judge the success of content in isolation. If we fail to take content’s relative performance into account, we could easily waste our efforts - letting potentially viral content sit and languish while investing limited resources in duds.
How can we properly compare content performance?
We need to compare content performance on an “apples to apples” basis. In order to compare the performance of one piece of content against another, we first need to establish a baseline over the article lifecycle.
Here’s the traffic for a typical article over its lifecycle. We publish the article and share it with our immediate network. Traffic peaks on the second day as the story makes its way through our network’s networks, and then it slowly dies out.
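This baseline can be computed by averaging daily traffic across past articles, aligned by day-since-publish rather than by calendar date. Here is a minimal sketch; the article names, numbers, and the `compute_baseline` helper are illustrative, not part of any particular analytics product.

```python
from collections import defaultdict

def compute_baseline(articles):
    """Average pageviews per day-since-publish across historical articles.

    `articles` is a list of lists; articles[i][d] holds pageviews for
    article i on day d of its lifecycle (day 0 = publish day).
    """
    totals = defaultdict(int)
    counts = defaultdict(int)
    for daily in articles:
        for day, views in enumerate(daily):
            totals[day] += views
            counts[day] += 1
    return [totals[d] / counts[d] for d in sorted(totals)]

# Hypothetical daily pageview series for three past articles.
history = [
    [500, 900, 400, 150, 60, 20],
    [300, 700, 350, 120, 40, 10],
    [400, 800, 450, 180, 80, 30],
]
baseline = compute_baseline(history)
# baseline[1] is the typical second-day peak; later entries form the tail.
```

Averaging by lifecycle day is what makes the curve meaningful: a story’s day 2 is compared with every other story’s day 2, regardless of when each was published.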
Now, let's take a look at a sample post a.
At first, it doesn’t seem to be performing as well as our typical story - at least for the first 3 days. In fact, if we had used a single average value to rate its performance, post a would have been identified as a below-average post. Looking at performance over its entire lifecycle, though, we can clearly see that it’s a special post that was able to sustain traffic for up to 6 days.
How can we achieve a true "apples to apples" comparison between our content efforts?
Once we establish the baseline and have access to our content performance over its lifecycle, we can easily compare posts. Take posts a and b below.
If we were to identify the better post on Thursday, we’d go with post b, as it brought in more people than post a. But when we look closer, it’s clear that post b is actually under-performing: it’s below the baseline for a typical story’s 2nd day. And post a is over-performing, because it’s bringing in more people than a typical story does on its 4th day.
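The comparison above reduces to a simple ratio: actual pageviews divided by the baseline value for the same lifecycle day. The baseline values and view counts below are assumed for illustration.

```python
def performance_index(views, lifecycle_day, baseline):
    """Ratio of actual pageviews to the baseline for that lifecycle day.

    A value above 1.0 means over-performing; below 1.0, under-performing.
    """
    return views / baseline[lifecycle_day]

# Assumed lifecycle baseline (day 0 = publish day).
baseline = [400.0, 800.0, 400.0, 150.0, 60.0, 20.0]

# On the same Thursday: post b is on its 2nd day, post a on its 4th.
post_b = performance_index(600, 1, baseline)  # 600 views vs. 800 expected
post_a = performance_index(210, 3, baseline)  # 210 views vs. 150 expected
```

Raw counts favor post b (600 views against 210), but relative to the baseline post b scores 0.75 while post a scores 1.4 - the day-aligned comparison reverses the verdict.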
Can we improve this model further?
Yes, we can! The next step for this model would be to incorporate the influence of promotion on an article’s performance and create a set of baselines that enable more meaningful comparisons across a wide range of content - by content type, section, author, publish date and time, word count, and so on. Such a benchmark would let us determine how well a given article performs compared to a similar article that received the same level of promotion. This way we can also measure the relationship between impact and effort, rather than measuring the former while ignoring the latter.
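Segmented baselines can be built the same way as the site-wide one, just grouped by content attributes first. A minimal sketch, assuming a simple `(content type, promotion level)` key; the attribute names and numbers are hypothetical placeholders for whatever taxonomy a publication actually uses.

```python
from collections import defaultdict

def build_segmented_baselines(records):
    """Group articles by (content_type, promotion_level) and average each
    group's daily traffic into its own lifecycle baseline.

    Each record: (content_type, promotion_level, [daily pageviews]).
    """
    groups = defaultdict(list)
    for ctype, promo, daily in records:
        groups[(ctype, promo)].append(daily)
    baselines = {}
    for key, series in groups.items():
        n_days = max(len(s) for s in series)
        baselines[key] = [
            sum(s[d] for s in series if d < len(s)) /
            sum(1 for s in series if d < len(s))
            for d in range(n_days)
        ]
    return baselines

# Hypothetical history: two heavily promoted features, one low-key news item.
records = [
    ("feature", "high", [1000, 1800, 900]),
    ("feature", "high", [1200, 2000, 1100]),
    ("news", "low", [300, 500, 200]),
]
baselines = build_segmented_baselines(records)
# A new heavily promoted feature is judged against
# baselines[("feature", "high")], not against the site-wide average.
```

Matching an article to its own segment’s baseline is what makes the impact-versus-effort comparison fair: a lightly promoted news brief is no longer penalized for trailing a front-page feature.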
While it takes time and effort to set up analytics to measure content performance properly, in the long run, we’ll have a much better picture of how well our content is performing and see if we’re moving towards our business goals. We’ll be able to put actual numbers against our objectives, measure performance over time, and optimize our content accordingly.
If you’re interested in conducting data studies for your publication, we would be happy to help - connect at firstname.lastname@example.org. You can also sign up for a trial to see your site’s “article model” and track content performance in real time - right out of the box.