Virtualizing The CPU
In this lesson, you will be introduced to the concept of virtualizing the CPU with the help of simple examples.
Running cpu.c
You can see our first program below. It doesn’t do much. In fact, all it does is call Spin(), a helper function (defined in the common.h header shown below) that repeatedly checks the time and returns once it has run for a second. Then, it prints out the string that the user passed in on the command line, and repeats, forever.
#ifndef __common_h__
#define __common_h__

#include <sys/time.h>
#include <sys/stat.h>
#include <assert.h>

double GetTime() {
    struct timeval t;
    int rc = gettimeofday(&t, NULL);
    assert(rc == 0);
    return (double) t.tv_sec + (double) t.tv_usec / 1e6;
}

void Spin(int howlong) {
    double t = GetTime();
    while ((GetTime() - t) < (double) howlong)
        ; // do nothing in loop
}

#endif // __common_h__
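The listing above is the common.h helper header, which provides the GetTime() and Spin() routines. The cpu.c program itself is not reproduced in this lesson; based on the description above, a minimal sketch of what it might look like is the following (the exact listing in the original code may differ slightly):

#include <stdio.h>
#include "common.h"   // provides Spin()

int main(int argc, char *argv[]) {
    if (argc != 2) {
        fprintf(stderr, "usage: cpu <string>\n");
        return 1;
    }
    char *str = argv[1];
    while (1) {
        Spin(1);             // burn CPU for roughly one second
        printf("%s\n", str); // then print the string passed on the command line
    }
    return 0;
}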
Let’s say we save this file as cpu.c and decide to compile and run it on a system with a single processor (or CPU as we will sometimes call it). Here is what we will see:
prompt> gcc -o cpu cpu.c -Wall
prompt> ./cpu "A"
A
A
A
A
^C
prompt>
Try it out yourself!
Not too interesting of a run — the system begins running the program, which repeatedly checks the time until a second has elapsed. Once a second has passed, the code prints the input string passed in by the user (in this example, the letter “A”), and continues. Note the program will run forever; by pressing “Control-c” (which on UNIX-based systems will terminate the program running in the foreground), we can halt the program.
Running multiple instances of cpu.c
Now, let’s do the same thing, but this time, let’s run many different instances of this same program. The snippet below shows the results of this slightly more complicated example.
prompt> ./cpu A & ./cpu B & ./cpu C & ./cpu D &
[1] 7353
[2] 7354
[3] 7355
[4] 7356
A
B
D
C
A
B
D
C
A
...
Try it out yourself! Run the following command:
./cpu A & ./cpu B & ./cpu C & ./cpu D &
Even though we have only one processor, somehow all four of these programs seem to be running at the same time. It turns out that the operating system, with some help from the hardware, is in charge of this illusion, i.e., the illusion that the system has a very large number of virtual CPUs. Turning a single CPU (or a small set of them) into a seemingly infinite number of CPUs, and thus allowing many programs to seemingly run at once, is what we call virtualizing the CPU, the focus of the first major part of this course.
Of course, to run programs, and stop them, and otherwise tell the OS which programs to run, there need to be some interfaces (APIs) that you can use to communicate your desires to the OS. We’ll talk about these APIs throughout this course; indeed, they are the major ways in which most users interact with operating systems.
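As a small, concrete taste of such interfaces, here is a minimal sketch, assuming a UNIX-like system (this is an illustration, not the course’s own example), that uses the fork(), execvp(), and wait() system calls to launch the cpu program from within another program. These process-control calls are exactly the kind of API discussed throughout the course.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();                   // create a new process
    if (pid < 0) {
        perror("fork failed");
        exit(1);
    } else if (pid == 0) {
        // child: replace this process with ./cpu, passing "A" on its command line
        char *args[] = {"./cpu", "A", NULL};
        execvp(args[0], args);
        perror("execvp failed");          // reached only if exec fails
        exit(1);
    }
    // parent: block until the child exits
    // (cpu loops forever, so in practice you would stop it with Control-c)
    wait(NULL);
    return 0;
}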
You might also notice that the ability to run multiple programs at once raises all sorts of new questions. For example, if two programs want to run at a particular time, which should run? This question is answered by a policy of the OS; policies are used in many different places within an OS to answer these types of questions, and thus we will study them as we learn about the basic mechanisms that operating systems implement (such as the ability to run multiple programs at once). Hence, the role of the OS as a resource manager.