
Signal to noise: what do approval numbers mean?

Let's take inventory. The FDA approved 37 drugs in 2012, which made everyone happy, since it was the highest total in years. But then in 2013, the agency approved only 27, which occasioned quite a few columns and blog posts wondering if 2012 was an anomaly. Did that year's total represent some compounds backed up in the queue that all came out in a bunch? Was 2013 an example of good old regression to the mean? Or, taking the optimistic view, was it the outlier in what was still going to be a new and better world? I'm tempted to declare all of that speculation to be wasted electrons, because here's the problem: our sample size is just too small.

You can definitely try to spot trends in a few decades' worth of approval data--although good luck with that, because the criteria for approval have certainly changed over the years, so you're sort of comparing apples to pomegranates. At least there are enough numbers to try to draw some conclusions, even if they don't mean as much as we'd like them to. But a single year's total? No chance. If we're lucky, in another 10 or 20 years we'll be able to put 2012 and 2013 into some sort of context, but we just don't have the ability yet. We'll know more about what these numbers mean about the time they cease to be able to do us any good. Imagine how much worse it is to try to draw conclusions from approval numbers inside a single company, where the data set is that much smaller.
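To put a rough number on that "too small" claim, here's a back-of-the-envelope sketch (my own illustration, not anything from the column): if you treat each year's approval count as a draw from a Poisson process, you can ask whether 37 versus 27 is even a statistically detectable difference.

```python
import math

# Illustrative assumption: each year's approval total is a Poisson count.
# Under that model, Var(a - b) = a + b, so a quick normal-approximation
# z-test on the difference between the two years looks like this:
a, b = 37, 27                        # 2012 and 2013 approval totals
z = abs(a - b) / math.sqrt(a + b)    # z = 10 / 8 = 1.25
p = math.erfc(z / math.sqrt(2))      # two-sided p-value, roughly 0.21

print(f"z = {z:.2f}, p = {p:.2f}")
```

By this crude test, the drop from 37 to 27 doesn't come close to the usual p < 0.05 threshold: two adjacent years simply can't tell you whether anything real changed, which is the column's point.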

This problem can be generalized, unfortunately. For an industry that lives and dies on reproducible, meaningful data, we end up making a lot of (non-clinical) decisions based on some pretty paltry numbers. Look at the situation inside any individual company, when someone comes along and reorganizes the place. The re-org is supposed to improve productivity--well, you'd hope that's what it's supposed to do, otherwise you get uncomfortable visions of the pilots in that old "Far Side" strip, announcing turbulence ahead while they jerk the airplane around and high-five each other.

How do you measure whether the new company structure has helped? It's for sure that the next year's worth of projects were already in the hopper, in some form, before the re-org was announced. The year after that will show effects of the old regime as well, and those will probably extend even a bit further. How long do you have to go before everything you see is a result of the new system?

If you have a figure in mind for that, then how does that number compare with the number of years the new system is actually going to be in place? Someone else will, at some point, have a bright idea of how to rearrange things, or someone higher up will evaluate the new system anyway, ready or not. My guess is that a significant number of these re-org ideas never really get completely underway, and that's only counting the ones where a real change took place (as opposed to the ones where a bunch of posters go up, training meetings are attended, but everything sort of ends up back the way it started).

There's an even larger problem to consider. Every organization is unique, given the history behind it, its institutional memory, and the people that are staffing it. So in that sense, every change that's made is going to be an N of 1, something that might have come out differently in a different time or place. Now, in the labs we'd avoid collecting data under these conditions, wouldn't we? The cell assays had better be within range of the last runs, or something is assumed to have gone wrong with the cells. The in vivo studies should repeat, or there's big trouble. If you do two Phase II trials on the same population, they should both work, and Phase III had better recapitulate the efficacy seen in Phase II.

But we can't mess with our own organizations this way--there's not enough time and money in the world to do it. Here's a thought experiment: imagine if a drug company announced that they were going to split their R&D organization into two parts, run on different lines with a different organizational structure, but with similar numbers of chemists, biologists, and other staff. And what's more, both of these new divisions would work on the same targets and the same drugs, just to find out if one setup worked better than another. It's a weird dream, but that's what you'd have to do to really figure any of this out. Don't look for it any time soon.

Maybe, though, this experiment actually has been run a few times under rather less controlled conditions. After all, companies large and small do end up working on the same targets, more or less simultaneously. And each of them has its own style, its own criteria, and its own culture. In medicinal chemistry, we often try to compare "matched molecular pairs" to see what the effects of single changes are. Has anyone tried to go back over the history of drug development to compare "matched targets"? The problem is that you'd have to know what the real workings of each company were at the time, which may well be impossible. Companies themselves don't always understand how they actually work, as opposed to what it says on the org chart. If we found that Company X actually outperformed Company Y on a given target, we'd still be faced with explaining why that happened. And we'd still be up against the same small-data-set problem that I mentioned before.

So this leaves us in a tough position when it comes to judging the big CEO-level decisions. There aren't enough data points; there probably never will be. There are lower-level things that can actually be measured, but they're subject to well-known observer effects. For example, let it be known in the med-chem labs that you're counting compounds, and lo, compounds will appear. You may not find them appealing, but by gosh they'll be there for you. The same thing will happen if you get hard-core about, say, the number of clinical candidates nominated. Those aren't necessarily the ones you're actually spending money to develop, mind you, and the wider the split between those numbers, the more worried you should be. But if you say that you're going to nominate X number of compounds per year, then yeah, you probably will, for all the good that will do you.

But the number of drugs you get on the market, that's a figure, small though it may be, that cannot be manure-ified. So let me advance a hypothesis: there is an inverse relationship between how easy it is to get drug discovery metrics and how important they are. I wish that weren't true, but I'm afraid that it is. CP
COPYRIGHT 2014 Rodman Publishing

Title Annotation:THE LOWE DOWN
Publication:Contract Pharma
Date:Mar 1, 2014