How to Think About Virtual Machines

What's Really Going on Inside VMware and Hyper-V


Posts in this series: Part 1, Part 2, Part 3, Part 4, Part 5

They are part of our daily lives. Using VMware or Hyper-V under Windows, we run dozens, hundreds, and for large shops, even thousands of them. What are they? Virtual machines. Flocks, swarms, dare I say it, clouds of machines that are nothing but hardware imagined by software.

But what are they? Where did they come from? How do they work, in principle? How can you think about them without having to get a master's in Computer Science, or even a VCDX certification? In the next few blogs, I'll ponder these questions aloud.

Let me cover some stuff you already know. Virtual machines are "pretend" computers running guest operating systems (OSes) that, in turn, run your applications. The "hardware" on which a guest OS runs is really the imagination of a piece of software called the hypervisor.

We all run programs, processes, applications, whatever you want to call them, to get something done. I have a browser, an email reader, iTunes, and my Plan 9 Drawterm program, all running on my iMac. Each of these can be considered an application. They are the reasons I have the computer.

Going from the desktop to the machines in your rack, the ones without anyone sitting in front of them, those machines run fundamentally different programs from the ones on the desktop, but they are still just user programs. Simple Mail Transfer Protocol (SMTP), Internet Message Access Protocol (IMAP), and Hypertext Transfer Protocol (HTTP), aka your web server, all run as user programs. Databases like MySQL, Oracle, Access, and SAP are all user programs as well. They are the reason for the servers in the first place.

And all these programs have at least one thing in common: they think they are the only ones on the machine. How did all this, the hypervisor, kernel, and user, come about?

A Brief History of Operating Systems

To understand Virtual Machine technology a short stroll through the Museum of the Mind, computer operating system section, is in order. (In fact, only the Museum of the Mind can have an exhibit like this. Software has boring physical exhibits.) It’ll take a couple of blogs to cover it all. Bear with me.

The first computers ran the one program that was loaded into them. The EDSAC read the same paper tape used on the British Creed & Company Model 85. For a while, all scientific computers read paper tape, tape punched on a teleprinter typewriter like the Friden Flexowriter. IBM was jolted into action when rival Remington Rand bought out Eckert and Mauchly's company, maker of the Univac. The reason? Univac worked with cards, IBM's bread and butter. The IBM 701 scientific computer and 702 commercial computer followed.

Both were hooked to already existing punch card equipment. The binary of the program was loaded as a deck into the machine: press a button on the console (look at all those blinking lights!), and the first card zaps into the computer's memory. Then, after a very, very short pause, the rest of the program is read in. That first card contains the code to read in the rest of the deck. It is the boot loader.

How can a single 80-column card hold the boot loader? Easy. The machine was designed with instructions to read cards from the card hopper straight into memory. That's right. No OS needed. No device driver needed. Just issue the READ UNIT instruction and the card data appears, usually at a fixed location in memory.
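To make the two-stage trick concrete, here's a minimal sketch of the idea in C. The 80-column card is real; everything else, the file name, the pretend core memory, the read_card stand-in for the hardware instruction, is invented for illustration. The real thing was a handful of machine instructions punched into one card.

```c
#include <stdio.h>

#define CARD_SIZE 80                    /* one punched card: 80 columns */
#define CORE_SIZE 4096                  /* a tiny pretend core memory   */

/* Stand-in for the hardware "read one card into memory" instruction. */
static int read_card(FILE *deck, unsigned char *dest)
{
    return fread(dest, 1, CARD_SIZE, deck) == CARD_SIZE;
}

int main(void)
{
    unsigned char memory[CORE_SIZE];
    FILE *deck = fopen("deck.bin", "rb");   /* the card deck, as a file */
    if (!deck) return 1;

    /* Stage 1: the hardware reads exactly one card to a fixed address. */
    if (!read_card(deck, &memory[0])) return 1;

    /* Stage 2: on the real machine, control jumps to that first card's
     * code, which loops reading the rest of the deck. We simulate the
     * loop here instead of executing card images.                      */
    unsigned char *next = &memory[CARD_SIZE];
    while (next + CARD_SIZE <= memory + CORE_SIZE && read_card(deck, next))
        next += CARD_SIZE;

    printf("loaded %ld bytes; now jump to the program\n",
           (long)(next - memory));
    fclose(deck);
    return 0;
}
```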

Million-Dollar Office Machine

But using the machine for a single job at a time, in some cases with programmers signing up for time on the machine, was a terrible way to get a return on a million-dollar investment. So companies quickly started coming up with ways to get more jobs done. Usually this meant having operators collect the punched-card decks and run them one after another, returning any output to the users, to keep things moving quickly.

This was, in fact, the way the first operational computer, the Cambridge EDSAC, was used. You punched your program on paper tape, clipped handwritten instructions to it, and put it on a pegboard for an operator to run through the machine. She would feed in the paper tape, watch it run, and when it finished, tear the output off the teleprinter, clip it back onto the paper tape, and put the whole thing on the output pegboard. A lot of work for a few scraps of paper, but the answers to the calculation problems were like magic.

That was mostly for number-crunching applications. For data processing, magnetic tape, and a bit later the magnetic disk, held large amounts of data, and the punch card industry had long experience making fourteen-inch-wide green-bar reports. Interfacing the computer to those printers, while not trivial, wasn't a great leap of technology.

What was needed was a way to automatically run all the jobs in a batch and push as many jobs per hour through the machine as possible. That led to a set of programs called "executives," programs that remained in memory and loaded jobs from a deck of cards, with "job" cards delimiting them. Read one job, run it, print its output, repeat. Keep the card reader and printer going.
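If it helps to see the shape of such an executive, here's a toy sketch in C. The "$JOB" delimiter card and the stub routines are my own inventions for illustration; every real system had its own control-card conventions.

```c
#include <stdio.h>
#include <string.h>

/* Stubs standing in for the real work of an executive. */
static void load_card(const char *card) { (void)card; /* copy into job area */ }
static void run_job(void)               { /* transfer control to the job   */ }
static void print_output(void)          { /* drive the line printer        */ }

int main(void)
{
    char card[81];                        /* one 80-column card plus NUL */
    FILE *reader = fopen("deck.txt", "r");   /* the card reader, as a file */
    if (!reader) return 1;

    int in_job = 0;
    while (fgets(card, sizeof card, reader)) {
        if (strncmp(card, "$JOB", 4) == 0) {     /* a delimiter card     */
            if (in_job) { run_job(); print_output(); }
            in_job = 1;                   /* start collecting the next job */
            continue;
        }
        if (in_job)
            load_card(card);              /* a card of the current job     */
    }
    if (in_job) { run_job(); print_output(); }   /* the last job in deck  */
    fclose(reader);
    return 0;
}
```

Notice there's nothing here protecting the executive from the jobs it runs. Hold that thought.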

The next thing people noticed was that jobs spent most of their time waiting for the card reader to read a card or the line printer to print a line, a very slow business. If we could run more than one job at a time, we could keep the CPU busy while all those very slow mechanical processes were going on. The IBM folks called it multiprogramming: load more than a single job into memory at a time and let them share the CPU. One would run while the others were waiting for I/O.
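A bare-bones sketch of the multiprogramming idea, with invented job names and a fake round-robin standing in for real I/O interrupts, might look like this:

```c
#include <stdio.h>

enum state { READY, WAITING_IO, DONE };

struct job { const char *name; enum state st; int bursts_left; };

int main(void)
{
    struct job jobs[] = {                /* three jobs resident in memory */
        { "payroll",   READY, 3 },
        { "inventory", READY, 2 },
        { "billing",   READY, 4 },
    };
    int n = 3, done = 0, turn = 0;

    while (done < n) {                   /* the CPU is never left idle    */
        struct job *j = &jobs[turn++ % n];
        if (j->st == DONE) continue;
        if (j->st == WAITING_IO) {       /* its card or line has finished */
            j->st = READY;               /* it will run on a later turn   */
            continue;
        }
        printf("CPU runs %s\n", j->name);        /* a burst of computing  */
        if (--j->bursts_left == 0) { j->st = DONE; done++; }
        else j->st = WAITING_IO;         /* starts an I/O, gives up CPU   */
    }
    return 0;
}
```

On real hardware the switch was driven by interrupts, not polling, but the payoff is the same: while one job waits on the mechanical gear, another one computes.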

But there would be a problem. If one program had a bug, it might write all over one of the other running programs, and that other program would then crash through no fault of its own. What was needed was a way to protect the programs from one another. Come to think of it, this would help keep the "executive" protected as well.

IBM's System/360 was a big leap in a lot of ways, and this was one of its most revolutionary areas. It had two states, "privileged" and "problem." The "executive" ran in the privileged state and the user jobs ran in the problem state. New hardware kept the user jobs separate from one another. Each 2048-byte block of the limited core (a 32K-byte machine was large in those days) was marked with a key, and the Program Status Word (PSW) of the running user job carried a key of its own. When the job read or stored data, the key of the memory block and the key in the PSW were checked. If the keys matched, great. If not, bang, the executive would run the job out of the system with an ABEND 0C4, keeping the other programs safe.
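To make the key matching concrete, here's a toy model of the scheme in C. The 2048-byte blocks and the supervisor's master key of zero follow the System/360 design; the function names and return values are invented for illustration.

```c
#include <stdio.h>

#define BLOCK    2048
#define MEM_SIZE (16 * BLOCK)            /* a "large" 32K-byte machine */

static unsigned char memory[MEM_SIZE];
static unsigned char block_key[MEM_SIZE / BLOCK];

/* Store one byte, checking the job's PSW key against the block's key.
 * Key 0 acts as the supervisor's master key, as on the real machine. */
static int store(unsigned psw_key, unsigned addr, unsigned char val)
{
    if (addr >= MEM_SIZE) return -1;
    if (psw_key != 0 && psw_key != block_key[addr / BLOCK])
        return -1;                       /* keys differ: protection check */
    memory[addr] = val;
    return 0;
}

int main(void)
{
    block_key[0] = 1;                    /* block 0 belongs to job 1 */
    block_key[1] = 2;                    /* block 1 belongs to job 2 */

    /* Job 1 stores into its own block: the keys match, all is well.  */
    printf("job 1, own block:     %d\n", store(1, 100, 42));

    /* Job 1 scribbles on job 2's block: keys differ, bang, 0C4.      */
    printf("job 1, job 2's block: %d\n", store(1, BLOCK + 5, 42));
    return 0;
}
```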

Today, they call the "executive" the "kernel," and the user job a user process. Did you ever wonder why they call them "user" processes? Because back in the batch days, the users were the folks who put their precious decks of punched cards, clamped lovingly by healthy, natural-tan rubber bands, complete with diagonal magic-marker stripes on the top of the deck just in case the worst ever happened, into the box at the window of the glass wall that separated the computer operators from the unwashed masses of users. Their jobs were the user jobs. Their processes were the user processes. So today we call the problem state user mode.

So the situation looks like this: the user applications, processes, whatever you want to call them, all run in non-privileged mode, while the OS, conceptually underneath all those processes, does all the privileged work, handling the I/O and the memory protection, and thus keeping each process safe from the others.

What does this have to do with virtual machines? Stay tuned next week and find out. The story continues.

About the Author

Brantley Coile

Inventor, coder, and entrepreneur, Brantley Coile invented stateful packet inspection, network address translation, and the Web load balancing used in the Cisco LocalDirector. He went on to create the Coraid line of storage appliances, a product he continues to improve today.
