Tuesday, June 24, 2014

After the New Yorker piece, what of disruptive innovation?

I don't read a lot of books aimed at the MBA crowd, but one set I have liked, and sometimes cite here, is Clayton Christensen's on innovation and disruption.  As you may have heard, a recent article in the New Yorker by Jill Lepore took a gimlet-eyed view of the whole concept and raised serious questions about Christensen's methods.  This was then summarized by another author in Slate, and Christensen has since responded in part via a Business Week interview.  He's also scheduled to be interviewed on PBS this weekend, so there will likely be further developments.  Indeed, after sketching this out on the commute home I discovered a Financial Times article whose tone is very similar to what I have written below.

Lepore's piece is definitely worth reading, though I don't agree with Edward Tufte (another author I admire & often cite) that it was an "A+" article. There are also parts that can simply be skipped over: Lepore engages in a shooting-fish-in-a-barrel exercise, going after the army of business consultants and pseudo-journalists who mindlessly bleat "disruption!" and "innovation!".

Christensen has written a raft of books (I've read maybe four of them) as well as a pile of academic papers, so one New Yorker piece can't cover all the bases.  Indeed, one of Christensen's complaints in the Business Week interview is that Lepore did not appear to have read any of the academic work that fleshed out the theory or dealt with issues that arose after the books were published. An even bigger complaint of his is that Lepore didn't interview him and give him an opportunity to address the issues she raises.

The key issue Lepore raises is the very real possibility that Christensen has picked and trimmed his data.  In particular, she charges that he misclassified several companies in his analyses as failures when they actually succeeded.  Some of this rings true, but a lot comes down to definitions.  For example, if a company sidesteps from the hard disk drive market to floppies, has it been successfully disrupted (as Christensen claims) or has it survived (as Lepore claims)?

A gaping weakness in Lepore's piece is that she does not address one of the key observations in Christensen's books: that successful new entrants in a field can take on a powerful incumbent by gradually working their way up the food chain, with the incumbent willingly ceding lower-margin businesses.  In particular, Lepore blames U.S. Steel's loss of market share to mini-mills purely on labor issues, whereas Christensen, in his books and his rebuttal interview, shows that the mini-mills progressively took over product segments.  That's a useful pattern for modeling other industries, whereas "labor trouble" is quite generic. This occurred to me before the Business Week article appeared; too bad I didn't beat them to the punch.
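
To make the food-chain dynamic concrete, here is a toy simulation of my own (nothing from Christensen or Lepore; the segment names, quality bars, and margins are invented purely for illustration). An entrant whose quality improves each year captures successive segments, and the incumbent cedes each one because it is always the lowest-margin business it still holds:

    # Toy sketch of the "up the food chain" dynamic (illustrative only;
    # segment names, quality bars, and margins are invented numbers).
    segments = [
        ("rebar",      1, 0.05),  # (name, quality bar, profit margin)
        ("bar/rod",    2, 0.10),
        ("structural", 3, 0.20),
        ("sheet",      4, 0.35),
    ]

    entrant_quality = 1  # at launch, good enough only for the lowest segment
    holdings = {name: "incumbent" for name, _, _ in segments}

    for year in range(1, 6):
        for name, quality_bar, margin in segments:
            # The entrant takes any segment it can now serve; the incumbent
            # cedes it because it is the worst margin the incumbent holds.
            if holdings[name] == "incumbent" and entrant_quality >= quality_bar:
                holdings[name] = "entrant"
                print(f"Year {year}: entrant takes {name} (margin {margin:.0%})")
        entrant_quality += 1  # steady quality improvement is the crux of the model

The point of the toy model is that each retreat is locally rational for the incumbent, yet the sum of those retreats is total displacement.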

Another area of contention is whether large companies can successfully sustain small innovators.  Christensen cites many examples in his books of cases in which insufficient separation resulted in the main company squashing a promising subsidiary.  GM's Saturn division is a great example: it was originally intended to be very different from any other GM division, but ultimately lost all independence.  Delta's Song and United's Ted discount airline forays show a similar pattern; we flew Song once and it definitely had a different vibe than Delta, but I predicted it would be killed off because it wasn't truly separate.  On the other hand, as Lepore points out, sometimes this looks like convenient post-hoc labeling to make the theory come out right.

Lepore also piles on Christensen's public failures, in particular an investment fund that quickly folded and his prediction that the iPhone would flop.  Christensen says that rules at his academic institution prevented him from having any role in managing the fund.  I'd also point out that a fund's underlying concept may be correct, yet without sufficient capital and perseverance it may never see success (which is one reason I'm no fan of active management).  On the iPhone, Christensen admits failure but then tries to cram it into his theory: he had seen it as a sustaining innovation in cell phones, whereas it ended up disrupting computers.  I think that's a stretch; at most, the iPhone survived long enough to disrupt computers only because it first succeeded as a high-end, better cell phone.

Ultimately, the challenge for any theory of business practices is that there are many, many ways for companies to fail, ranging from poor execution to the untimely death of a key player to utterly unpredictable world events. It is a mistake to see disruption as a panacea, as the bleating business crowd sometimes seems to.  And by disruption, I mean something very specific: to fit Christensen's model, a technology must be new and it must be inferior to the existing technologies it is perceived to compete with. The appropriate strategy in this case is some combination of flanking and sniping: go for markets the incumbent has ignored (either because they don't yet exist or because they are too small to bother with).  The creation of new markets, often by accident, is an idea that Lepore seems to have missed entirely.

Lepore also falls into the naive trap of thinking it is the incumbents who decide whether a disruptive attack succeeds, rather than the customers. In particular, she claims that disruption theory is irrelevant to education or medicine, because teachers and doctors have strong dedication to their fields.  That doesn't matter, and history shows it.  As one of my advisers once pointed out, there used to be doctors who made careers performing ulcer surgery; Tagamet and the other H2 blockers essentially eliminated that specialty. CVS' MinuteClinics and similar concepts are significantly altering the primary care landscape, and platforms like Fitbit will almost certainly radically change how doctors interact with patients. Similarly, MOOCs and institutions like the University of Phoenix are changing the educational landscape.  I'm not claiming these are all positive effects, but they are real.  Ignoring their reality is a certain recipe for being blindsided by the changes they are bringing on.

As you might expect, while I might watch the companies I deal with for disruption, my strongest interest is in the genomics space. Here are a few disruptions that have happened or are worth watching.

"Next generation" sequencing in some ways resembles a disruptive attack on the earlier automated fluorescent Sanger market.  In particular, when the first NGS machines showed up, they really were useless for any of the previous applications of sequencing, and they didn't play well with most of the existing ecosystem.  Even on cost it's hard to claim the early 454 or Solexa instruments were cheaper, even if you figure in all the ancillary equipment for Sanger (colony pickers & prep robots) that they made obsolete.  It took a lot of method development, plus radical shifts in what sequencing was used for, before short-read technologies came to dominate.  I also suspect, though I haven't made a close study of it, that a progressive nibbling away of markets occurred; I would guess that only recently, long after Illumina, 454 and the rest had overtaken Sanger, were many applications really switched over from Sanger.  Debate on that point welcome!

Ion Torrent was intended as a disruptive attack on Illumina and 454, offering shorter, noisier reads at lower cost. Illumina, the incumbent, blunted that with the MiSeq, narrowing the cost gap between established player and upstart. It would be an interesting study to see how the old 454 market has split between Ion and Illumina, but I'm guessing Illumina has taken the bulk of it and Ion can't claim much credit for driving Roche out.  It's worth addressing a criticism found in some of the comments on these various pieces: even if a technology doesn't steal an existing market (one commenter claimed minicomputers didn't disrupt mainframes since mainframes maintained certain markets), having potential future markets denied by an upstart can be just as devastating.

On the other hand, Oxford Nanopore's MinION really does look like it fits Christensen's model.  With a price point far below the incumbents, and the early results oozing out of the MAP showing much lower read accuracy, this is a potential disruptor. But that will require either existing markets that can tolerate the low quality (and value the cost/portability advantages) or entirely new markets that don't need high quality but can't exist without lower cost (or portability).

What other technologies might follow Christensen's disruptive path to success? I'd look for things coming out of the DIY Bio crowd: cheap & crude PCR thermocyclers, perhaps -- though maybe those aren't really different enough.  Cheap digital PCR destroying the qPCR market? Perhaps a really cheap lab-on-a-chip technology replacing a lot of assays?  Or cellphone parts reconstituted as microscopes and cell counters?  The crystal ball is murky!

I'm glad the Lepore piece came out, because it has made me think carefully about Christensen's work, and while I don't believe I was an excessive cheerleader before, I'll be even more careful in the future.  On the other hand, it would be very unfortunate if many take away the idea that there is "no there there" in disruptive innovation theory and that they can simply ignore it.  The Lepore piece is a useful cool shower, but it shouldn't be mistaken for an ice-cold one.

8 comments:

AMac said...

KR, thanks. This is a difficult sort of essay to write: a review of a cloud of articles and books that few readers know. I learned a lot about Christensen's disruption theory through Lepore's criticisms. Regarding the future of sequencing, yours isn't the only Magic 8 Ball to come up "Answer Hazy, Try Again Later." But you've provided a useful way to think about it. Without bleating.

Unknown said...

Hi Keith, really nice analysis. I haven't read as many of Christensen's books as you have (I have only an 'N=2'), but I also agree that the New Yorker piece makes some valid points. It reads as a takedown based upon cherry-picked data (similar to what happened to Malcolm Gladwell some months ago), and there is a fair argument to be made that the truth lies somewhere in the middle between the two positions.

And you rightly point out that NGS is a showcase of this; however, the Sanger-to-NGS shift will take a lot longer than the classical 'disruptor' model would suggest, due to incumbent barriers (e.g. case law for forensics, or microbial testing in regulated markets). It's not like an 8" hard drive being replaced by a 5.25" one, where the functionality is the same and only the form factor differs; and that's before considering the much lower per-base accuracy of NGS (as you know, but the casual observer doesn't).

Where you can give credit to the likes of Ion Torrent (disclaimer: I'm an employee of Thermo Fisher Scientific) is that they provided the absolutely needed competition in a free market to drive Illumina to keep innovating. You can bet that there would be no NextSeq on the market today if it weren't for the Proton and the upcoming PII chip, and that Illumina would not be lowering prices nearly as aggressively (even back in the days of the Solexa 1G) if there weren't an alternative choice for customers, such as Applied Biosystems' SOLiD 2. (Okay, I'm dating myself with such an ancient example from late 2007.)

I've heard from many who tell me that Illumina behaves in the same way Applied Biosystems used to back in the HGP days. (Okay, I wasn't with AB back then, but I do hear a lot of stories.) One can make a reasonable argument that the NHGRI cost/Mb chart that NISC maintains has flattened out due to the lack of competitive pressure to lower costs further; on that point, time will tell.

On the usefulness of the model, it reminds me of the 'blue ocean / red ocean' book and model: good for explaining the past but poor at predicting the future.

Lastly, regarding what will come out of the DIYbio crowd: I'm happy there's education going on, but I'm not that optimistic about innovation. DIY thermal cyclers and people doing molecular biology in their apartment closets are things people will dabble in, but I wouldn't expect much to come of it. I'd expect disruption to come from the usual places: university laboratories or industry spin-outs, from the hands of experienced practitioners (who could be working in apartment labs; nothing wrong with that).

Finally, it was great to meet you in person a few weeks ago! I'll be sure to reach out when I'm in Boston again.

Keith Robison said...

Dale: All good points. Ion serving to goad Illumina is something a bit outside CC's model, but I would agree it was (and continues to be) important to the field.

There are indeed many barriers to new rivals taking on incumbent technologies.

On the DIY side, my thought is that they are developing some really cheap devices which may not have the performance of "the big boys", but perhaps offer other advantages.

Another thought: robotics could be ripe for disruption, as the existing manufacturers have done a dismal job of making their robots easy to program in a cross-robot way; for a lot of tasks, cheap, less accurate robots might work -- and be desirable if they were easier to gang together. Robots that are slower but use cell phone cameras to make sure they don't screw up might be another angle (all the robots I've dealt with TRUST that you've laid the deck out correctly).

I don't see DIY Bio as a huge force, but it is out there & some of those Arduino tinkerers could well prototype new generations of lab instruments.

Michael Rhodes said...

The underlying problem I see with the base theory is the claim that the disrupting technology is inferior to what is already out there. Either free market theory is baloney (another discussion), or there is some benefit of the new technology to those buying it (even if it's just cost), or it would not be purchased. I don't think NGS as a whole can be seen as coming out of nowhere (one claim for disruption), as the granting agencies actually told the companies what their targets were 10 years ahead of time (a very unusual situation).

Keith Robison said...

Michael:
Inferiority of the insurgent technology is at the heart of CC's thesis; if you haven't read any of the books, skim the first or second one.

Steel mini-mills are perhaps the clearest example of this. When first launched, they cut costs by using recycled steel as feedstock. The trade-off was far less control over the composition of the output, which made the steel unsuitable for anything other than very low-grade, low-margin markets such as rebar. But after conquering rebar, the mini-mills progressively improved the quality of their output so they could take on higher-value markets. Because the incumbents placed little value on each successive market (it was always their worst one), they easily ceded each in turn.

Sure, the granting agencies said a lot of things, but that doesn't lead to commercially successful technologies. Plenty of grants went out for microfluidic or 384-capillary Sanger, but those didn't go anywhere. But more importantly, when 454, Solexa, Helicos & SOLiD launched, they couldn't compete with Sanger in existing markets, nor did they fit well into the existing ecosystem. Microbial genome assembly was an early win, but it took a lot of development to really beat Sanger -- and to lead to tackling really large genomes.

Anonymous said...

Not quite sure if "Inferiority of the insurgent technology" really fits NGS.

For me, NGS is just different from Sanger. I do different things with NGS than I ever did with Sanger, and I still use Sanger because it makes no sense to use NGS for certain things. NGS allows for clonal (a great advantage when looking at low mutation rates in your sample) high-throughput, massively parallel sequencing. That's what it was designed for, and that's what it always did. That market barely existed in the Sanger-only era. Really, Celera was one of the first to get something similar running with Sanger, in order to show HUGO that you could sequence (most of) a eukaryotic genome by simple shotgun sequencing. But that kind of high-throughput sequencing was limited to very few institutions. Today, how many patients with rare diseases get an exome? A completely unthinkable application in the pre-NGS era. Even if the quality of the sequences is somewhat poorer, it is an application that is nearly impossible to do with Sanger on a routine basis. If, on the other hand, you want to resequence 400 bp in a single sample, Sanger will probably be the way to go for quite some time, because it's simpler and cheaper than trying to do that on NGS. That is quite a common situation in genetic diagnostics as well! I find it unlikely that NGS will replace Sanger in these settings any time soon.

Keith says: "But more importantly, when 454, Solexa, Helicos & SOLiD launched, they couldn't compete with Sanger in existing markets, nor did they fit well into the existing ecosystem. Microbial genome assembly was an early win, but it took a lot of development to really beat Sanger." Sure, but that is a totally different problem. NGS is best (or at least easiest) if you can throw a native DNA sample at it that has few enough bases to read them all in one run with sufficient coverage. For any other scenario, the ecosystem needed to be developed, enrichment in particular. But I don't consider that an issue of NGS per se.

So yes, the per-run throughput needed to grow, the overall quality needed to grow, and the ecosystem needed to develop. But all of that was basically so people could do things they were totally unable to do with Sanger alone.

Best wishes
Lars

Keith Robison said...

Lars,

The dichotomy Christensen attempts to define is sustaining innovation vs. disruptive innovation. In this scheme, what you describe is precisely why next-gen was a disruptive change vs. capillary Sanger: completely new markets/ecosystems were required for the new technology, and it could not compete with the earlier technology in established markets.

Sanger still has a foothold, but it keeps shrinking as next-gen keeps displacing it. It's easy to forget that the first human exome sequences were done by Sanger, but of course that was very short-lived. Now various techs are chewing away at clinical Sanger. With the massively parallel approach about a decade old, it is easy to see its dominance as inevitable, but the slew of failed next-gen companies testifies to how hard it was.

Anonymous said...

Comments as written on the quality of ONT data are, to be fair, very premature.