A Retrospective Analysis on the Road to Red Hat

I left Cisco and joined Qumranet on Dec 16th, 2005. My first day on the job was in New York, where we met with Morgan Stanley’s core IT infrastructure team. Five minutes into that first meeting, I knew that the Qumranet journey was going to be a wild ride.

1. You Will Fail, BUT, Fail Fast and Iterate Fast …

We initially started building a hardware solution – a converged IO switch for blade servers that would also offer a variety of services (high availability, fault tolerance etc.) to applications that were running on those blades – especially in a virtualized environment. It was similar in concept to the Cisco UCS but more generic and targeted towards IBM, HP and Dell blade servers.

However, after discussing the whole concept with some large prospective customers, it became clear that we needed to focus. The area where we saw the most interest was in fault tolerance using virtualization. So in Jan 2006, we shelved the hardware project, and doubled down on Xen-based fault tolerance. After working on that for a few months, we hit some really hard technical challenges and concluded that it would be too long and too hard a road to go down. In mid 2006, we started researching other areas like predictive fault tolerance, storage caching/replication, Java over VMM etc. In doing so, Avi Kivity, the lead engineer at Qumranet, hit upon an interesting and elegant approach to doing full virtualization on x86 (late 2006).

From the time Avi gave me my first demo of a 5,000-line loadable kernel module that converted Linux into a hypervisor and ran a full-fledged Windows guest, I knew that KVM was going to be big.

[Image: The KVM architecture - started out as 5,000 lines of code]

It was one of the toughest times, as we experimented with and explored a wide variety of technologies and potential products. However, we ended up in a good place, and 10 months in, I had learned my first startup lesson – be relentlessly customer-focused, fail fast, and iterate fast until you feel that you have hit something big.

2. Product Strategy and Target Markets: Target incumbents or go after adjacent markets?

You see this all the time: a startup may have great technology, but that’s a long way from a scalable business. How were we going to productize it? Which markets/segments were we going to go after? How were we going to sell it? And to whom? Keep it proprietary, or go the open source route?

Given that VMware was already out there taking the market by storm, and Xen, the only serious contender at that time, was already open source (but had some architectural limitations), we felt that the only way to disrupt the market was to go the open source route.

KVM was an instant success in the open source/upstream community and got accepted into the kernel (early 2007) in record time (Xen, to this day, is not fully accepted into the upstream Linux kernel). From its early days, it became clear that KVM was going to be the de facto virtualization standard in Linux.

This was fantastic progress, but the business model/monetization questions were getting critical. Should we build a solution targeted towards the proven server virtualization market, and attempt to give VMware a run for their money, or target VDI, a nascent market with high, but unproven potential, and no established incumbent? Citrix was still largely focused on terminal services, although they had started talking about XenDesktop, and VMware didn’t have anything formal in VDI at that time, other than running a WinXP VM on ESX and connecting to it via RDP.

Startups try to find a market where there isn’t an established incumbent, and then try to create a “category” for themselves – and that was the point in favor of VDI. But at the same time, the market had clearly understood the value of server virtualization. It was quantifiable. It was tangible. And it was, without doubt, going to be a huge opportunity. The benefits of VDI, on the other hand, were extremely hard to quantify. There was value in the solution for several use cases, but there were really no capex savings to speak of (especially given $300 PCs), and the case had to be made around centralized management, security and other “intangibles”. Furthermore, VDI had to be sold to desktop administrators, who, back in 2006, were not very familiar with the concept of virtualization, and that would likely make it a harder sell.

Despite all that, we decided on VDI, because watching VMware go from strength to strength in server virtualization was, frankly, a bit daunting. Microsoft hadn’t really started in server virtualization (but was going to), and the only other startup in the space, Virtual Iron, seemed to be struggling. Their value proposition of “80% of the features for 20% of the cost” was not working that well. The cost value proposition works in 2 cases: 1. It’s a relatively mature market, and customers realize that they need a strategic (preferably lower-cost) alternative, or 2. It’s a larger company, and the product with that value proposition is a natural extension of the company’s core business/product (e.g. operating systems and virtualization). Although server virtualization was a big market at that time, it was not a “mature market”. We had barely scratched the surface, so customers weren’t yet strongly demanding an alternative to VMware. Secondly, Qumranet was not a large company (like Microsoft or Red Hat), so it’s not as though we could sell our server virtualization solution as an add-on or extension. It was going to be extremely challenging to compete effectively in the server virtualization market with our limited resources. VDI *had* to be our market entry strategy, and we had to figure out how to make it work.

[Image: SolidICE architecture - KVM hypervisor, SolidICE management, SPICE remote rendering technology]

With that, we busied ourselves completing an end-to-end hosted VDI solution that encompassed KVM for the virtualization layer, SPICE (a remote rendering technology that Yaniv Kamay built from the ground up specifically for VDI), and a centralized management system. We called the whole thing SolidICE.

3. Sales model: In-house vs. Outsourced. The latter is OK to start with, but doesn’t work in the long term

While we were progressing technically, we still had to figure out how to sell it. If you build out even a small enterprise sales force – hire a VP of Sales who then brings on 3 people – you are looking at a $1M/year increase in your burn rate. So we went with an outsourced sales model with a firm out of the NY area. That worked exceedingly well for setting up meetings with customers, because they had all the contacts, but when it actually came to closing real deals, it wasn’t that effective. Part of it was where VDI was on the adoption curve, and how customers were interested in just piloting the technology, not really deploying it at scale. However, I think a bigger part of it was a structural problem with the outsourced sales model.

The issue is that the outsourced sales company looks at it from a portfolio-theory perspective. They may be working for 10 companies, so they are not 100% invested in any one’s success. In aggregate, a few will work out, and their business is structured around that critical assumption. They do not need your product to succeed. Conceptually, it’s similar to the VC model. At the end of the day, you need a sales force that’s hungry and that “eats or doesn’t eat” based on whether they sell your product, and only your product. This was another critical lesson of the Qumranet journey, and one that has implications even for larger firms trying to diversify. More on that later.

4. Ease of trials: Make it as easy as possible for prospective customers to try your product, and design that from the get go.

[Image: Qumranet VDI proof-of-concept appliance - 4 1RU x86 servers with Intel VT]

We had decided on the outsourced model, and that firm was busy setting up meetings with their 300+ contacts at various companies in the NY area. By mid-to-late 2007, we were pitching our solution to banks, pharmaceutical companies, insurance companies and hedge funds. They were all familiar with VMware server virtualization, and were keen to understand what we had to offer in the VDI space. However, doing PoCs was difficult. In those days, Intel VT was not a given on every server. Also, the desktop teams we were pitching VDI to did not necessarily have timely access to 3 servers and the storage needed to conduct a proper PoC. And unlike server virtualization, they did not have it as a “line item” in their 2007 budget. Hence, we explored the idea of shipping around a VDI PoC appliance that had all the SolidICE software pre-installed and configured on standard x86 servers, in order to speed up the evaluation process. The idea was that a prospective customer could simply plug it into their environment and instantly start a VDI PoC. Needless to say, that wasn’t a highly scalable model.

Ideally, we would have liked to do a downloadable trial model, but the software in its early days was fairly complex to install. Simply put, in mid-to-late 2007 we couldn’t guarantee that a customer would have a seamless downloadable trial experience. It was too early. Downloadable trials work well in 2 scenarios: 1. The market understands the value proposition, the product is mature and has a set of reference customers, and you want to scale deployments, or 2. You have a defined strategy to attach to an incumbent product. Companies like vKernel, Quest, Veeam and others have built entire business models around targeting VMware customers and providing them with add-on tools that add value to a customer’s VMware deployment. That’s ideal for a downloadable trial/freemium model. SolidICE was a different case. It was a complete end-to-end VDI infrastructure, and was not ready for that. In hindsight, we should have prioritized these ease-of-trial features much higher and gone that route.

Nevertheless, we continued by focusing on a few customers that demonstrated real interest.

5. Rapid prototyping and continuous innovation

While we were figuring out the sales model and process, we also started engaging with analysts in the space to generate broader awareness. After a few successful PoCs in early 2008, we publicly launched SolidICE on April 30th, 2008 at Networld/Interop in Las Vegas.

Shortly thereafter, in May 2008, Brian Madden visited our office, and posted this video of some of that early technology (on the VDI side). http://www.youtube.com/watch?v=S4DZwYqnyJM We went on to sponsor Briforum 2008 (mid June 2008) and talked not only about SolidICE but also about advanced technologies we were working on in our R&D labs including:

  1. ICEbox – KVM running on a laptop hosting a corporate Windows XP VM and a personal Windows XP VM. The KVM hypervisor was invisible to the end user. This was driven directly from requirements from a large bank.
  2. SolidICE CBC mode – KVM running invisibly on a physical desktop, running an individual’s Windows XP VM but the system was also able to schedule other data center server VMs on the individual’s desktop hardware when that was not in use, or was being lightly used
  3. SPLICE – an end-to-end system that allowed you to cache Windows desktops on servers in the branch office, giving users local performance, but also giving IT centralized management in very high latency WAN scenarios

     [Image: VDI over the WAN - caching desktops closer to the branch office with centralized management]

And this was all back in mid-2008! Almost 4 years ago. My Briforum 2008 presentation, with Itamar Heim driving the demo, is archived here. In fact, one of the main reasons that ICEbox and CBC mode (client hypervisors) were relatively easy for us was the inherent architectural advantage of KVM. Since KVM leverages stock Linux, and converts Linux itself into a hypervisor, we got instant hardware compatibility. That meant that our KVM hypervisor could run on pretty much any hardware that ran Linux (and had Intel VT or AMD-V extensions) – and that gave us the widest range of physical desktop/workstation and laptop support, without any driver issues. Interestingly, if you look at XenDesktop’s HCL even today, it’s still extremely limited, and VMware tried a bare-metal client hypervisor, but gave up.
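The hardware-compatibility point is easy to see in practice: on Linux, the Intel VT and AMD-V extensions that KVM requires are advertised as the `vmx` and `svm` CPU flags in `/proc/cpuinfo`. A minimal sketch of that check (the helper function and sample excerpts here are mine for illustration, not Qumranet tooling):

```python
def has_hw_virt(cpuinfo_text: str) -> bool:
    """Return True if any 'flags' line in /proc/cpuinfo-style text
    advertises Intel VT (vmx) or AMD-V (svm)."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = set(line.split(":", 1)[1].split())
            if flags & {"vmx", "svm"}:
                return True
    return False

# Hypothetical excerpts of /proc/cpuinfo for illustration:
intel_vt_cpu = "flags\t\t: fpu msr vmx sse2"
no_virt_cpu = "flags\t\t: fpu msr sse2"
print(has_hw_virt(intel_vt_cpu))  # → True
print(has_hw_virt(no_virt_cpu))   # → False
```

On a real host you would pass in the contents of `/proc/cpuinfo` itself; if the flag is present (and not disabled in the BIOS), the `kvm_intel` or `kvm_amd` module can load, which is why KVM inherited Linux's hardware support essentially for free.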

6. You need the stars to align. In our case, Red Hat needed an alternative to Xen.

While we were dealing with core business-building problems in the VDI space, Red Hat started realizing that Xen had some limitations from an architectural perspective, and that, in addition, the Xen community was getting fragmented. Citrix/XenSource, Red Hat, IBM, Novell, Sun, Canonical and others were pulling it in very different directions. It was getting clearer that Red Hat needed an alternative, and KVM was ideal since it was completely integrated with the Linux kernel, and had already been accepted upstream.

Red Hat acquired Qumranet in September 2008, and that’s how we got here. Paul Cormier (EVP of Products and Technologies at Red Hat) asked me to run their virtualization business, centered around the assets acquired from Qumranet – KVM, the SolidICE management system (back-end, UI and API), and SPICE.

Next up, the genesis of RHEV.
