If the Television Critics Association press tour of 2014, wrapping up Tuesday and Wednesday with presentations from PBS, has had a catchphrase, it's "audience measurement."
Critics heard an extended presentation from Nielsen on the very first day of the tour about the company's plans to begin measuring viewership on mobile devices and, more generally, about its scramble to keep up – catch up, really – with the way television works now. Many days later, a panel of research analysts from broadcast as well as premium and basic cable outlets made what was in part a pitch to reporters to stop relying on overnight ratings, given that it's not unusual for shows to increase their audiences by 40 or 50 percent (or more) once DVR viewing from even the next three days is included. FX, in fact, has committed to not releasing overnight ratings for its shows at all, arguing that they're simply too misleading to be taken seriously. It will make its ratings announcements a few days later.
The easy read on the FX decision, of course, is that they don't want bad overnight ratings to be reported, and that is surely part of it – they certainly aren't doing this to keep you from reporting in haste on their out-of-the-gate smashes. And networks have complained for a long time that this show or that one is misreported as a disaster because nobody waits for the reliable numbers; this is not a new phenomenon.
But it does require a bit of a shift in perspective when you realize that time-shifting has evolved to the point where watching a new episode of a show is only sort of something that happens at a particular time, and only sort of something you can meaningfully describe in terms of what happened at that time. (That being true, while again not a new phenomenon, is a newer phenomenon than the chatter about it being true.) If you're only talking about measuring shows relative to each other, it might seem unimportant provided that all shows grow by roughly the same percentage. If everybody gains the same advantage from DVR viewing, the relative numbers are still right.
Everybody doesn't, though. And while it is an insistent battle cry of fans of low-rated shows that whatever they love is secretly hugely popular but not being measured properly, what emerged from the data was that it's more a matter of there being kinds of shows that are vastly more time-shifted than others. Reality shows are time-shifted relatively little, presumably because people want to know what happened before they get spoiled. Comedies are time-shifted more, and then dramas are time-shifted the most, meaning that if you take a reality show and a drama that are neck and neck in overnight ratings, the odds are that a few days later, a lot more people will have watched the drama.
Of course, the looming question is: assuming those numbers are in fact wonky in exactly the same way everybody is telling us they are, what difference does it make?
Readers are curious about ratings for two reasons, in my experience. The first is that they want to know whether shows are going to be canceled or not. The second is that they're curious about cultural stuff: whether a weird thing is a hit, whether a terrible thing tanks, and, fundamentally, what other people are interested in. (Some are also interested in the ins and outs of industry successes and failures in terms of producers' and executives' fates, but not many.)
That's where you find the nut of the problem, really. It's not only a measurement problem. It's also a contextual problem. Even in a hypothetical world of perfect information in which everybody could instantly know how many people are watching what show, on what platform, at what time, with what fast-forwarding capability, what numbers are meaningful?
David Poltrack, the Chief Research Officer at CBS, made this pitch in talking about different kinds of numbers, including Live +3 ratings (which include live viewing and the next three days on DVR, but do not count VOD, apps, Hulu and so forth): "I would like to make the point that your responsibility as reporters, for most of you, is to report television as the social, cultural phenomenon that it is as well as the economic phenomenon it is. So Live +3 ratings are an economic phenomenon, but they don't reflect the cultural phenomenon of the medium, since they are only a limited part of the audience. If you're reporting on the economics of the business, Live +3 ratings are relevant. If you're reporting on the cultural phenomenon called television, Live +3 ratings are far less relevant."
In other words, he says, what numbers are meaningful depends on what you're using them for. If you're talking about what makes money and what might get canceled, maybe you only care about those Live +3 ratings (which are already much more inclusive than what they call "live plus same day" or "Live +SD," which is basically just everybody who time-shifts until later on that same evening). But if you're actually trying to figure out what the viewership of something is, and what its cultural penetration is, and how many people like it, that number isn't so helpful. It seems self-evident, but also unsatisfying. What is the cultural relevance of audience size, past "huge hit" stories?
Furthermore, there we were, listening to four network research folks talk about ratings when ratings do not, by any stretch of the imagination, mean the same thing at every network. Kim Lemon, the Executive Vice President of Program Planning, Scheduling and Research for Showtime, made the point himself that Showtime doesn't really care in the same way about ratings, since it's not ad-supported, and it certainly doesn't have the same issues with online versus on-television viewing. As long as you have to be a Showtime subscriber to do it, they don't care whether you watch on your TV or on your phone or on a plane. All enthusiasm, for them, is of equal monetary value, essentially. It just has to make you subscribe. (Incidentally, their numbers are even more slanted toward time-shifted viewing — in part, probably, because their top stuff airs on the crammed-full Sunday night schedule.)
At CBS, on the other hand, it's very squishy to get information about how much it matters to them if you watch a show online. You're a viewer from a cultural standpoint, but you're not worth money the same way you are if you're eyeballing ads and watching live. (You will watch ads online too, but different ones.)
The bottom line is this: it's not as simple as "measurement." It has to do with placing data in context, which is a lot harder than figuring out how to count up video views. I spoke to a showrunner who talked about the frustration of having a show that's good, that everybody thinks is good, that is never mentioned without the parenthetical that it is ratings-challenged or little-watched. And very often, those perceptions come from initial overnight ratings.
But what if they didn't? What if the ratings were perfectly accurate and complete, and your show was little-watched or ratings-challenged? When movies are discussed in terms of their quality, there is no expectation that you always mention their low box office, except perhaps in the case of high-budget intended blockbusters.
Creative and economic success have been uncoupled in critical discussions of film (and music and books) to a greater degree than in television. That's in part because shows are ongoing business concerns with futures to consider, but it's also in part because television's populist reputation and history have created an environment in which, if you are not being widely seen, there is a perception that you are failing in whatever your project is. Which, from an economic standpoint, if you are on a broadcast network in particular, you are. But which, from a creative standpoint, you are not, necessarily. (Consider the fact that in film, people who analyze box office and people who act as critics are mostly different people; in television, they're often the same hybrid critic/reporters, only some of whom have a strong background in ratings and scheduling and such.) It's not that people don't separate good from popular, but there's always that "struggling"/"cult" caveat, more than in other fields.
Nina Tassler of CBS talked in her executive session about what the network's job is, and this is what she said: "We are still broadcasters. We're still looking [at] No. 1, are we entertaining the greatest number of people, and are we making the most amount of money doing that? Those are the two boxes we have to check." She still talked about trying to make great content, and she still talked about how great some of their shows (like The Good Wife) are. But she was straightforward about it: the most people, the most money. So if your show is not making the most money by appealing to the most people, your show is not succeeding as a business concern. But I think writing about television is perhaps more likely than writing about other creative fields to surround projects with a stench of general failure (or irrelevance) based on instant popularity or the lack thereof, which doesn't follow logically.
All these folks phrased their objections to the way ratings are being reported as issues of accuracy: overnight ratings are terribly incomplete, they argued. The subtext, of course, feels completely self-interested: overnight ratings make things look like they're being watched by fewer people than they are. But the actual lesson felt a little more nuanced: it's not just what the numbers are, but what the numbers communicate, that's gotten progressively foggier. The economic value proposition has very much come unglued from sheer viewer counts on Showtime; it's even shaky on FX, where there is ad support to consider but also the degree to which your viewers consider your network an essential part of their cable package. Consider what's happened when networks face off with cable companies over carriage deals — the network needs you to not just like their shows, but love their shows.
It's not just that we don't know what the real audience size is (though we don't). It's also that it wouldn't entirely be clear yet, to anybody, what it would mean if we did.