Linux in Government: How Linux Reins in Server Sprawl
People write a lot about utility computing these days. The interest seems high. VMware gave a seminar in Dallas this past week and had 850 attendees. That followed a well-attended seminar by IBM's business development group on "On-Demand Business". Yet even with the high visibility in the media over the past year, many IT managers seem lost when I discuss utility computing with them.
I realize buzzwords come and go, and people find it easy to dismiss "utility computing" as another fad. Even after I point out its undeniable benefits, people's eyes glaze over when the topic comes up. I think many of my colleagues avoid the subject because some vendors have said they want to sell IT as an independent service, similar to water or telephone service.
I personally find that objectionable. One can see the benefit to the vendor but not to IT departments. Within the context of cost containment and efficient use of resources, utility computing doesn't mean installing a meter.
When I think of utility computing, I think of frugality. I want to get the most out of what I already have. In business, we often say, "If it ain't broke, don't fix it." In other words, don't rip and replace the technologies that work. Instead, acquire tools that pull resources together and allow us to manage and consolidate, become more productive and eliminate duplication of effort. Linux has addressed this area more than any other operating system.
In typical data centers, you find one application tied to one or more physical servers. Most applications require different amounts of computing power depending on use, yet in the past we always sized hardware based on peak usage. This habit has resulted in what analysts call server sprawl. You may reach peak usage only one day a year; the rest of the time, usage drops off. That concept works great for electric companies, but not for computing.
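To make the cost of peak sizing concrete, here is a small back-of-the-envelope sketch in Python. The hourly demand figures are invented for illustration; the point is simply that a box sized for its busiest hour spends most of the day nearly idle.

    # Back-of-the-envelope illustration of peak-based sizing.
    # The hourly demand figures below are invented for illustration.

    # Hypothetical demand for one application, in units of work per hour,
    # over a 24-hour day: one big spike, quiet the rest of the time.
    hourly_demand = [2, 2, 1, 1, 1, 2, 4, 8, 20, 60, 100, 50,
                     30, 20, 15, 10, 8, 6, 5, 4, 3, 3, 2, 2]

    # Traditional sizing: buy enough hardware to handle the peak hour.
    capacity = max(hourly_demand)

    # Average utilization over the whole day.
    average_utilization = sum(hourly_demand) / (len(hourly_demand) * float(capacity))

    print("capacity sized for peak: %d units/hour" % capacity)
    # With these made-up numbers, this prints roughly 15 percent.
    print("average utilization: %.0f%%" % (average_utilization * 100))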
Ultimately, dedicated servers create the "silo" effect we discussed in last week's article. Silos do not make efficient use of hardware resources. Server utilization rates often run around 10-15 percent across an organization as a whole. Obviously, the ROI on such environments becomes unacceptable, especially to stakeholders.
Blame process automation for the situation we have today. A decade ago, capturing and managing transactions and eliminating processes that did not add value brought enterprise resource planning (ERP) systems to prominence. As we collected transactional data, the number of ways to store it grew proportionately. That has given rise to products such as network attached storage (NAS) and storage area networks (SANs).
Ultimately, we used technology to create efficiencies, and those technologies became our next inefficiencies. Some business theorists used to say that the solution to the problem becomes the next problem. That has happened within the enterprise.
Numerous studies discuss server utilization rates. Companies such as IBM and HP tell us that Intel server utilization runs in the frighteningly low range of 10 to 15%. We easily can see how the application-silo syndrome results in these low rates and in high storage costs. We also can find numerous case studies that demonstrate how to raise utilization, consolidate hardware and integrate processes across numerous silos.
Linux virtualization has become the primary technology in use by major solution providers today. Linux and virtualization technology, including VMware, allow for:

- a consolidation ratio of four to five workloads per CPU or higher (see the sketch after this list)
- decreased capital and operational costs
- improvements in server management
- more robust infrastructures
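As a rough illustration of what that consolidation ratio means in practice, the sketch below runs the arithmetic for a hypothetical shop. The server count and the four-CPU host are assumptions chosen for illustration; the four-workloads-per-CPU figure is the conservative end of the range above.

    # Rough consolidation arithmetic.  All counts below are assumptions
    # chosen for illustration; only the workloads-per-CPU ratio comes
    # from the range cited in the list above.

    import math

    legacy_servers = 100       # one lightly loaded application per box (assumption)
    workloads_per_cpu = 4      # conservative end of the four-to-five range
    cpus_per_host = 4          # a hypothetical four-way virtualization host

    workloads_per_host = workloads_per_cpu * cpus_per_host
    hosts_needed = int(math.ceil(legacy_servers / float(workloads_per_host)))

    print("legacy servers to consolidate: %d" % legacy_servers)
    print("workloads per virtualization host: %d" % workloads_per_host)
    print("hosts needed after consolidation: %d" % hosts_needed)

Even this crude arithmetic shows why the capital and operational savings add up so quickly once utilization rises above the 10-15 percent range.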
We solved this problem before: on mainframes, VM/370 gave us dedicated resources that could grow and shrink. Linux on the IBM S/390 and zSeries mainframes rekindled the concept. Then, about three years ago, IBM and VMware got together and co-marketed a solution using IBM's xSeries 440 and VMware ESX Server.
Note: You can find a downloadable Redbook on the subject (note the date) here.
Little did we know that IBM and VMware were starting an industry. According to Dan Kusnetzky of IDC, "The switch to commodity-based servers has resulted in more companies pursuing a virtualization strategy." Referring to overall virtualization software revenue, he said, "It's growing three times faster than the revenue growth for operating system software."
Operating systems manage the hardware on which they run. Like any operating system, Linux schedules or arbitrates CPU cycles, allocates memory and handles input/output (I/O) devices. When we virtualize the CPU, memory and I/O, an operating system, whether UNIX or Windows, becomes divorced from the hardware. The operating system becomes a guest on the physical hardware but no longer manages that hardware directly.
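As an aside, a guest often can tell that it has been divorced from the hardware. The sketch below is a heuristic, not a definitive test; it assumes an x86 Linux guest that exposes the usual hints, namely the "hypervisor" CPU flag in /proc/cpuinfo or the /proc/xen and /sys/hypervisor entries Xen guests typically provide.

    # Heuristic check: is this Linux instance running as a guest?
    # This is a sketch, not a definitive test.  It assumes an x86 guest
    # that exposes the usual hints: the "hypervisor" CPU flag in
    # /proc/cpuinfo, or the /proc/xen and /sys/hypervisor entries that
    # Xen guests typically provide.

    import os

    def looks_like_a_guest():
        # Fully virtualized and hardware-assisted guests usually expose
        # the "hypervisor" CPUID flag.
        try:
            with open("/proc/cpuinfo") as f:
                for line in f:
                    if line.startswith("flags") and " hypervisor" in line:
                        return True
        except IOError:
            pass

        # Xen guests expose these pseudo-filesystem entries.
        for path in ("/proc/xen", "/sys/hypervisor/type"):
            if os.path.exists(path):
                return True

        return False

    if __name__ == "__main__":
        print("running as a virtual machine guest: %s" % looks_like_a_guest())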
Linux has many features that make it a better host for guest operating systems than other OSes, and some of IBM's contributions have made this possible. Linux has long run well on servers, but it never enjoyed advanced mainframe capabilities. With IBM's OpenPower initiative, features taken from mainframes have become available on Linux. IBM sees the most important of these as its Virtualization Engine, which is composed of many technologies. The engine enables systems to create dynamic execution partitions and to allocate I/O resources to them dynamically.
Linux also has become outstanding at simultaneous multithreading (SMT) and hyper-threading. These technologies let two threads execute at the same time on a single physical processor, a capability that becomes essential when Linux acts as a host for guest operating systems.
The 2.6 Linux kernel fits well with IBM's SMT technology. Prior to the 2.6 kernel, Linux thread scheduling was inefficient, and thread arbitration took a long time. The 2.6 kernel fixed this problem and greatly expanded the number of processors on which the kernel can run.
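If you want to see whether SMT or hyper-threading actually is in play on a given box, the sketch below compares the "siblings" and "cpu cores" fields in /proc/cpuinfo. It assumes the x86 layout of that file; other architectures report topology differently.

    # Rough check for SMT/hyper-threading on an x86 Linux box.
    # Assumes the usual /proc/cpuinfo fields ("siblings", "cpu cores");
    # other architectures lay this file out differently.

    def parse_cpuinfo(path="/proc/cpuinfo"):
        cpus, current = [], {}        # one dict per logical processor
        with open(path) as f:
            for line in f:
                line = line.strip()
                if not line:          # a blank line ends a processor block
                    if current:
                        cpus.append(current)
                        current = {}
                    continue
                key, _, value = line.partition(":")
                current[key.strip()] = value.strip()
        if current:
            cpus.append(current)
        return cpus

    def smt_active(cpus):
        for cpu in cpus:
            try:
                siblings = int(cpu["siblings"])  # logical CPUs in this package
                cores = int(cpu["cpu cores"])    # physical cores in this package
            except KeyError:
                continue                         # fields missing on this platform
            if siblings > cores:
                return True
        return False

    if __name__ == "__main__":
        cpus = parse_cpuinfo()
        print("logical processors: %d" % len(cpus))
        print("SMT/hyper-threading active: %s" % smt_active(cpus))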
Although a viable, low-cost solution to server sprawl existed three years ago, we're only now seeing buzz around it. Look around and you can see the IT industry gearing up to solve the problem. The scalability and development of Linux clusters and grid computing have not only led the way in this area, they currently provide the best solutions.
You can see several different approaches to Linux virtualization. We already have discussed VMware to some extent; it runs Windows and Linux on the same server and also benefits from the advances made in the Linux 2.6 kernel. In many cases, enterprises choose VMware because it runs Linux, Windows and Solaris.
Xen has created quite a stir in virtualization circles even though it does not run Windows. An open-source project, Xen uses paravirtualization. Novell bundled Xen with SUSE 9.3, and in February 2005, the Linux kernel team said the Xen modifications would become part of the standard Linux 2.6 kernel. Essentially, then, Linux will come with the ability to run virtual machines natively. Imagine the benefits of a computer system able to run multiple instances of Linux at the same time. I can think of several situations in the past when I wanted exactly that capability.
Xen modifies the guest kernel so that Linux knows it is running virtualized, which gives it performance advantages over VMware. Ultimately, many people expect Xen to run Windows as well.
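The following toy model, with entirely made-up class and method names, illustrates the idea behind that performance difference: a fully virtualized guest issues privileged operations that the hypervisor must trap and decode, while a paravirtualized guest calls the hypervisor directly through a hypercall.

    # Toy model (not real Xen or VMware code) of the difference between
    # full virtualization and paravirtualization.  All names are made up.

    class Hypervisor:
        def trap_and_emulate(self, instruction):
            # Full virtualization: the guest issues an ordinary privileged
            # instruction; the hypervisor must trap it, decode what the
            # guest wanted, then emulate it.  The decode step costs time.
            op = self._decode(instruction)
            return self._emulate(op)

        def hypercall(self, op):
            # Paravirtualization: the modified guest kernel states its
            # intent directly, so no trapping or decoding is needed.
            return self._emulate(op)

        def _decode(self, instruction):
            return instruction.replace("privileged:", "")

        def _emulate(self, op):
            return "hypervisor performed %s" % op

    class UnmodifiedGuest:
        # Thinks it owns the hardware; every privileged instruction traps.
        def __init__(self, hv):
            self.hv = hv

        def update_page_table(self):
            return self.hv.trap_and_emulate("privileged:update_page_table")

    class ParavirtualizedGuest:
        # Knows it runs on a hypervisor and calls it directly.
        def __init__(self, hv):
            self.hv = hv

        def update_page_table(self):
            return self.hv.hypercall("update_page_table")

    if __name__ == "__main__":
        hv = Hypervisor()
        print(UnmodifiedGuest(hv).update_page_table())
        print(ParavirtualizedGuest(hv).update_page_table())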
Another technology worth noting is Virtual Iron. Formerly Katana Technology, Virtual Iron has a product that allows a collection of x86 servers to allocate anywhere from a fraction of one CPU to 16 CPUs to run a single OS image. Where Xen and VMware chop up the resources of a single system, Virtual Iron pools them across machines; it makes kernel modifications and requires specialized interconnects between servers.
Some startups, including Virtual Iron, have formed and found funding from investment banks. As these startups begin to market their products, we can only wonder if IT managers will recognize the value proposition.
The usual suspects have started their campaigns to discredit Linux and the kernel team. One of the most vocal, Sun Microsystems, says Linux doesn't belong in the data center. If Microsoft were to say that, it would look pretty dumb.
Linux has come a long way since I began using it to learn UNIX. Today, Linux has a place in a world of devices, such as digital phones and PDAs, in the making of feature films, in running the most powerful computers in the world, in running sonar arrays on nuclear submarines and as a desktop platform. As a solution for on-demand business, it appears to be taking the lead because of its capabilities as a host for virtual guest operating systems.
Tom Adelstein is a Principal of Hiser + Adelstein, an open-source company headquartered in New York City. He's the co-author of the book Exploring the JDS Linux Desktop and author of an upcoming book on Linux system administration to be published by O'Reilly. Tom has been consulting and writing articles and books about Linux since early 1999.