Since the dawn of computing, there has always been a tension between centralized computing and computing at the edge. Mainframes were centrally managed, but as processing became cheaper, they gave way to minicomputers and servers. One reason for this was the high cost of bandwidth: as Microsoft's Jim Gray once remarked, "compared to the cost of moving bits around, everything else is free." So companies ran those servers near end users.

This proximity came at a cost, however. Companies bought packaged software, tailored it to their own needs, and ran it themselves. The resulting fragmentation forced software vendors to support a wide range of versions and deployments at the same time, reducing their ability to innovate. To address this, a number of Application Service Providers (ASPs) emerged in the late nineties. Their offering was reminiscent of the managed IT services that IBM, EDS, CGI and others had offered in the past, but it included the operation, and sometimes the ownership, of the server and application stack. They ran software someone else made and tried to streamline operations. Unfortunately, client software remained tremendously varied. Caught between the twin horsemen of client sprawl and a stalled dot-com market, many ASPs failed. But they weren't wrong, just early.

Meanwhile, a new generation of software companies built with only the web in mind. Salesforce.com, Taleo, NetSuite and others drew a different dividing line between themselves and their customers, choosing to own the entire software stack rather than license it from others. Borrowing a page from Henry Ford, customers could have any version of the software they wanted, as long as it was the current one, delivered by the vendor and accessed through a web browser. And so the Software-as-a-Service market was born.