Changing Thread Path of Execution

Every thread has a context structure, which is maintained inside the thread’s kernel object. This context structure reflects the state of the thread’s CPU registers when the thread was last executing.

Every 20 milliseconds or so (as returned by the second parameter of the GetSystemTimeAdjustment function), Windows looks at all the thread kernel objects currently in existence. Of these objects, only some are considered schedulable. Windows selects one of the schedulable thread kernel objects and loads the CPU’s registers with the values that were last saved in the thread’s context. This action is called a context switch.

In the code below, the primary thread (the _tmain function) creates a new thread whose entry point is ThreadFunc1. While the secondary thread is running, the primary thread suspends it and changes its path of execution to the address of another function.


DWORD WINAPI ThreadFunc1(PVOID pvParam) {
    _tprintf_s(_T("I am ThreadFunc1\n"));

    // simulate some work
    Sleep(5000);

    _tprintf_s(_T("Exiting ThreadFunc1\n"));
    return 0;
}

DWORD WINAPI ThreadFunc2(PVOID pvParam) {
    _tprintf_s(_T("I am ThreadFunc2\n"));

    _tprintf_s(_T("Exiting ThreadFunc2\n"));
    return 0;
}

int _tmain(int argc, TCHAR* argv[]) {
    // create a new thread with ThreadFunc1 as its entry-point
    HANDLE hThread = chBEGINTHREADEX(NULL, 0, ThreadFunc1, NULL, 0, NULL);

    // let's give the thread some time to do some work
    Sleep(1000);

    // suspend the thread so its context can be read and written safely
    SuspendThread(hThread);

    CONTEXT cThread;
    // get control registers such as EIP (instruction pointer)
    cThread.ContextFlags = CONTEXT_CONTROL;
    GetThreadContext(hThread, &cThread);

    // change the target thread path of execution to ThreadFunc2
    cThread.Eip = (DWORD)ThreadFunc2;
    SetThreadContext(hThread, &cThread);

    // let the thread continue from its new instruction pointer
    ResumeThread(hThread);

    WaitForSingleObject(hThread, INFINITE);
    CloseHandle(hThread);

    return 0;
}


How a Process Can Escape from Its Job – A Taste of a Dirty Application

I am writing this post while still unable to believe whether what I did is normal or not. I will describe what I've done, then pose some questions that really need answers.

The Job Object is a very powerful concept in Windows: it is used to impose limitations on the processes assigned to it, and some of these limitations enhance process security. For more information about jobs, refer to the Jobs post.

An important fact you should know about jobs:

Once a process is part of a job, it cannot be moved to another job and it cannot become jobless (so to speak). Also note that when a process that is part of a job spawns another process, the new process is automatically made part of the parent’s job. However, this behavior can be altered.

Let’s consider this scenario:

You write a Host application that runs other applications and manages them. These applications may have unknown implementations (you are not their author), and you create each one as a child process (client process) of your Host process. Imagine that some of these client processes are infected by a virus that needs to cross the limitations imposed by your job (e.g., spawn other child processes, access USER objects outside the process, etc.). To ensure that your client processes run in a safe environment, you impose the necessary restrictions.

Imagine if one application (the infected one) was written to escape from the job assigned to it and run as a jobless process with no restrictions! How would you control such a thing? This is horrible, and this is exactly what I was able to do! I thought such a thing wouldn't be doable because it is very dangerous, but I could do it.

Dirty application code

#include <iostream>
#include <Windows.h>
#include <tchar.h>
#include <strsafe.h>
#include "XWinAssist.h"
using namespace std;

#define HostJobName _T("XHostJob")
#define AppName _T("cmd.exe")
#define AppCount 5

void CreateJoblessSelf() {
    const int cchSize = 128;
    DWORD dwSize = cchSize;
    TCHAR szProcessName[cchSize];
    QueryFullProcessImageName(GetCurrentProcess(), 0, szProcessName, &dwSize);

    STARTUPINFO si = { sizeof(si) };
    PROCESS_INFORMATION pi = { 0 };

    // Here I create myself again in the same console window
    // I specify that I want to create myself and skip the current job
    BOOL fCreate = CreateProcess(NULL, szProcessName, NULL, NULL, FALSE, CREATE_BREAKAWAY_FROM_JOB, NULL, NULL, &si, &pi);
    if (!fCreate)
        _tprintf_s(_T("Failed to create jobless self\n"));
}

void StartSomeProcess() {
    const int iSize = 128;
    TCHAR szCommandLine[iSize];
    _tcscpy_s(szCommandLine, iSize, AppName);
    STARTUPINFO si = { sizeof(si) };
    PROCESS_INFORMATION pi = { 0 };

    // Start some cmd.exe process, for sure this can be any process you imagine
    BOOL fCreate = CreateProcess(NULL, szCommandLine, NULL, NULL, FALSE, CREATE_NEW_CONSOLE, NULL, NULL, &si, &pi);
    if (!fCreate)
        _tprintf_s(_T("Failed to start some process\n"));
}

int _tmain(int argc, TCHAR* argv[], TCHAR* envp[]) {
    BOOL bInJob = FALSE;

    // find if we are already assigned to a job
    IsProcessInJob(GetCurrentProcess(), NULL, &bInJob);

    // I will try to escape from the job already assigned to me
    if (bInJob) {
        _tprintf_s(_T("\nProcess already in a job\n"));
        _tprintf_s(_T("Spawning JOBLESS SELF …\n"));

        // instantiating myself but with no restrictions
        CreateJoblessSelf();

        _tprintf_s(_T("\nRestarting …\n"));
        return 0;
    }

    // Reaching this line means that I am totally free and have no job restrictions
    _tprintf_s(_T("\nMUHAHAHAHA … I am JOBLESS Process (Evil Laugh)\n"));

    // Spawn some processes to mobilize system resources
    // this could be some other infectious job
    for (int i = 0; i < AppCount; ++i)
        StartSomeProcess();

    return 0;
}


Testing environment:

  • Microsoft Win7 Professional
  • Visual Studio 2008 Team Suite

Questions and Exclamations

I have some questions in my mind that I need answers for; I tried to search online but found no useful information.

  1. Are there any permissions or access rights required to allow a process to spawn other processes outside its job? I think there should be.
  2. Can this behavior really be dangerous and be used to wreak havoc in the system?
  3. Is this considered as a security flaw?


You often need to treat a group of processes as a single entity. For example, when you tell Microsoft Visual Studio to build a C++ project, it spawns Cl.exe, which might have to spawn additional processes (such as the individual passes of the compiler). But if the user wants to prematurely stop the build, Visual Studio must somehow be able to terminate Cl.exe and all its child processes. Solving this simple (and common) problem in Microsoft Windows has been notoriously difficult because Windows doesn’t maintain a parent/child relationship between processes. In particular, child processes continue to execute even after their parent process has been terminated.

When you design a server, you must also treat a set of processes as a single group. For instance, a client might request that a server execute an application (which might spawn children of its own) and return the results back to the client. Because many clients might connect to this server, it would be nice if the server could somehow restrict what a client can request to prevent any single client from monopolizing all of its resources. These restrictions might include maximum CPU time that can be allocated to the client’s request, minimum and maximum working set sizes, preventing the client’s application from shutting down the computer, and security restrictions.

Microsoft Windows offers a job kernel object that lets you group processes together and create a "sandbox" that restricts what the processes can do. It is best to think of a job object as a container of processes. However, it is also useful to create jobs that contain a single process because you can place restrictions on that process that you normally cannot.

If a process is already associated with a job, there is no way to move it away from it: this holds both for the current process and for any process it spawns. This is a security feature to ensure that a process can’t escape from the restrictions set for it.

By default, when you start an application through Windows Explorer, the process gets automatically associated with a dedicated job, whose name is prefixed by the "PCA" string.

Placing Restrictions on a Job’s Processes

After creating a job, you will typically want to set up the sandbox (set restrictions) on what processes within the job can do. You can place several different types of restrictions on a job:

  • The basic limit and extended basic limit prevent processes within a job from monopolizing the system’s resources.
  • Basic UI restrictions prevent processes within a job from altering the user interface.
  • Security limits prevent processes within a job from accessing secure resources (files, registry subkeys, and so on).

Once a process is part of a job, it cannot be moved to another job and it cannot become jobless (so to speak). Also note that when a process that is part of a job spawns another process, the new process is automatically made part of the parent’s job. However, you can alter this behavior in the following ways:

  • Turn on the JOB_OBJECT_LIMIT_BREAKAWAY_OK flag in JOBOBJECT_BASIC_LIMIT_INFORMATION's LimitFlags member to tell the system that a newly spawned process can execute outside the job. To make this happen, you must call CreateProcess with the new CREATE_BREAKAWAY_FROM_JOB flag. If you call CreateProcess with the CREATE_BREAKAWAY_FROM_JOB flag but the job does not have the JOB_OBJECT_LIMIT_BREAKAWAY_OK limit flag turned on, CreateProcess fails. This mechanism is useful if the newly spawned process also controls jobs.

  • Turn on the JOB_OBJECT_LIMIT_SILENT_BREAKAWAY_OK flag in the JOBOBJECT_BASIC_LIMIT_INFORMATION's LimitFlags member. This flag also tells the system that newly spawned processes should not be part of the job. However, there is no need to pass any additional flags to CreateProcess. In fact, this flag forces new processes to not be part of the job. This flag is useful for processes that were originally designed knowing nothing about job objects.
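The breakaway mechanism described above can be sketched as follows. This is a minimal, hypothetical example (no error handling; notepad.exe is just an arbitrary child process), not code from the book:

```cpp
#include <windows.h>
#include <tchar.h>

int _tmain() {
   // Create an anonymous job and allow its processes to break away.
   HANDLE hJob = CreateJobObject(NULL, NULL);

   JOBOBJECT_BASIC_LIMIT_INFORMATION jobli = { 0 };
   jobli.LimitFlags = JOB_OBJECT_LIMIT_BREAKAWAY_OK;
   SetInformationJobObject(hJob, JobObjectBasicLimitInformation,
      &jobli, sizeof(jobli));

   // Put ourselves in the job.
   AssignProcessToJobObject(hJob, GetCurrentProcess());

   // Because the job allows breakaway, a child created with
   // CREATE_BREAKAWAY_FROM_JOB starts OUTSIDE the job.
   TCHAR szCmdLine[] = _T("notepad.exe");
   STARTUPINFO si = { sizeof(si) };
   PROCESS_INFORMATION pi = { 0 };
   CreateProcess(NULL, szCmdLine, NULL, NULL, FALSE,
      CREATE_BREAKAWAY_FROM_JOB, NULL, NULL, &si, &pi);

   CloseHandle(pi.hThread);
   CloseHandle(pi.hProcess);
   CloseHandle(hJob);
   return 0;
}
```

Without the JOB_OBJECT_LIMIT_BREAKAWAY_OK limit flag, the CreateProcess call above would simply fail.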

Jobs Sample

My StartRestrictedProcess function places a process in a job that restricts the process’ ability to do certain things:

void StartRestrictedProcess() {
   // Check if we are not already associated with a job.
   // If this is the case, there is no way to switch to
   // another job.
   BOOL bInJob = FALSE;
   IsProcessInJob(GetCurrentProcess(), NULL, &bInJob);
   if (bInJob) {
      MessageBox(NULL, TEXT("Process already in a job"),
         TEXT(""), MB_ICONINFORMATION | MB_OK);
      return;
   }

   // Create a job kernel object.
   HANDLE hjob = CreateJobObject(NULL,
      TEXT("Wintellect_RestrictedProcessJob"));

   // Place some restrictions on processes in the job.

   // First, set some basic restrictions.
   JOBOBJECT_BASIC_LIMIT_INFORMATION jobli = { 0 };

   // The process always runs in the idle priority class.
   jobli.PriorityClass = IDLE_PRIORITY_CLASS;

   // The job cannot use more than 1 second of CPU time.
   jobli.PerJobUserTimeLimit.QuadPart = 10000000; // 1 sec in 100-ns intervals

   // These are the only 2 restrictions I want placed on the job (process).
   jobli.LimitFlags = JOB_OBJECT_LIMIT_PRIORITY_CLASS
      | JOB_OBJECT_LIMIT_JOB_TIME;
   SetInformationJobObject(hjob, JobObjectBasicLimitInformation, &jobli,
      sizeof(jobli));

   // Second, set some UI restrictions.
   JOBOBJECT_BASIC_UI_RESTRICTIONS jobuir;
   jobuir.UIRestrictionsClass = JOB_OBJECT_UILIMIT_NONE;     // A fancy zero

   // The process can't log off the system.
   jobuir.UIRestrictionsClass |= JOB_OBJECT_UILIMIT_EXITWINDOWS;

   // The process can't access USER objects (such as other windows)
   // in the system.
   jobuir.UIRestrictionsClass |= JOB_OBJECT_UILIMIT_HANDLES;

   SetInformationJobObject(hjob, JobObjectBasicUIRestrictions, &jobuir,
      sizeof(jobuir));

   // Spawn the process that is to be in the job.
   // Note: You must first spawn the process and then place the process in
   //      the job. This means that the process' thread must be initially
   //      suspended so that it can't execute any code outside of the job's
   //      restrictions.
   STARTUPINFO si = { sizeof(si) };
   PROCESS_INFORMATION pi;
   TCHAR szCmdLine[8];
   _tcscpy_s(szCmdLine, _countof(szCmdLine), TEXT("CMD"));
   BOOL bResult = CreateProcess(
      NULL, szCmdLine, NULL, NULL, FALSE,
      CREATE_SUSPENDED | CREATE_NEW_CONSOLE, NULL, NULL, &si, &pi);

   // Place the process in the job.
   // Note: If this process spawns any children, the children are
   //      automatically part of the same job.
   AssignProcessToJobObject(hjob, pi.hProcess);

   // Now we can allow the child process' thread to execute code.
   ResumeThread(pi.hThread);
   CloseHandle(pi.hThread);

   // Wait for the process to terminate or
   // for all the job's allotted CPU time to be used.
   HANDLE h[2];
   h[0] = pi.hProcess;
   h[1] = hjob;
   DWORD dw = WaitForMultipleObjects(2, h, FALSE, INFINITE);
   switch (dw - WAIT_OBJECT_0) {
      case 0:
         // The process has terminated...
         break;
      case 1:
         // All of the job's allotted CPU time was used...
         break;
   }

   FILETIME CreationTime;
   FILETIME ExitTime;
   FILETIME KernelTime;
   FILETIME UserTime;
   TCHAR szInfo[MAX_PATH];
   GetProcessTimes(pi.hProcess, &CreationTime, &ExitTime,
      &KernelTime, &UserTime);
   StringCchPrintf(szInfo, _countof(szInfo), TEXT("Kernel = %u  |  User = %u\n"),
      KernelTime.dwLowDateTime / 10000, UserTime.dwLowDateTime / 10000);
   MessageBox(GetActiveWindow(), szInfo, TEXT("Restricted Process times"),
      MB_ICONINFORMATION | MB_OK);

   // Clean up properly.
   CloseHandle(pi.hProcess);
   CloseHandle(hjob);
}

Jobs API Table



BOOL IsProcessInJob(
   HANDLE hProcess,
   HANDLE hJob,
   PBOOL pbInJob);

Checks whether or not a process is running under the control of an existing job; pass NULL as the second parameter to ask whether the process is in any job at all.

HANDLE CreateJobObject(
   PSECURITY_ATTRIBUTES psa,
   PCTSTR pszName);

Creates a new job kernel object.

HANDLE OpenJobObject(
   DWORD dwDesiredAccess,
   BOOL bInheritHandle,
   PCTSTR pszName);

Opens an existing named job kernel object.

BOOL SetInformationJobObject(
   HANDLE hJob,
   JOBOBJECTINFOCLASS JobObjectInformationClass,
   PVOID pJobObjectInformation,
   DWORD cbJobObjectInformationSize);

Places restrictions on a job.

BOOL UserHandleGrantAccess(
   HANDLE hUserObj,
   HANDLE hJob,
   BOOL bGrant);

Grants or denies access to a handle to a User object to a job that has a user-interface restriction. When access is granted, all processes associated with the job can subsequently recognize and use the handle. When access is denied, the processes can no longer use the handle.

BOOL QueryInformationJobObject(
   HANDLE hJob,
   JOBOBJECTINFOCLASS JobObjectInformationClass,
   PVOID pvJobObjectInformation,
   DWORD cbJobObjectInformationSize,
   PDWORD pdwReturnSize);

Once you have placed restrictions on a job, you might want to query those restrictions.


A process in a job can call QueryInformationJobObject to obtain information about the job to which it belongs by passing NULL for the job handle parameter. This can be very useful because it allows a process to see what restrictions have been placed on it. However, the SetInformationJobObject function fails if you pass NULL for the job handle parameter because this allows a process to remove restrictions placed on it.
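As a sketch of this self-query (assuming the calling process is already inside a job; the output format is illustrative only):

```cpp
#include <windows.h>
#include <tchar.h>
#include <stdio.h>

int _tmain() {
   // Query the basic limits of the job this process belongs to.
   // Passing NULL for the job handle means "the job I am in".
   JOBOBJECT_BASIC_LIMIT_INFORMATION jobli = { 0 };
   DWORD cb = 0;
   if (QueryInformationJobObject(NULL, JobObjectBasicLimitInformation,
         &jobli, sizeof(jobli), &cb)) {
      _tprintf_s(_T("LimitFlags = 0x%08X\n"), jobli.LimitFlags);
   } else {
      // The call fails if the process is not part of any job.
      _tprintf_s(_T("Not in a job (or query failed)\n"));
   }
   return 0;
}
```

Remember that the symmetric call, SetInformationJobObject with a NULL handle, is deliberately disallowed.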


BOOL AssignProcessToJobObject(
   HANDLE hJob,
   HANDLE hProcess);

This function tells the system to treat the process (identified by hProcess) as part of an existing job (identified by hJob). Note that this function allows only a process that is not assigned to any job to be assigned to a job, and you can check this by using the already presented IsProcessInJob function.

BOOL TerminateJobObject(
   HANDLE hJob,
   UINT uExitCode);

Kills all the processes within a job.



Windows® via C/C++, Fifth Edition

Remote Procedure Call (RPC)

A remote procedure call (RPC) is an inter-process communication that allows a computer program to cause a subroutine or procedure to execute in another address space (commonly on another computer on a shared network) without the programmer explicitly coding the details of this remote interaction. That is, the programmer writes essentially the same code whether the subroutine is local to the executing program or remote.

An RPC is initiated by the client, which sends a request message to a known remote server to execute a specified procedure with supplied parameters. The remote server sends a response to the client, and the application continues its process. While the server is processing the call, the client is blocked (it waits until the server has finished processing before resuming execution).

An important difference between remote procedure calls and local calls is that remote calls can fail because of unpredictable network problems. Also, callers generally must deal with such failures without knowing whether the remote procedure was actually invoked.

Sequence of events during an RPC

  1. The client calls the client stub. The call is a local procedure call, with parameters pushed onto the stack in the normal way.
  2. The client stub packs the parameters into a message and makes a system call to send the message. Packing the parameters is called marshalling.
  3. The kernel sends the message from the client machine to the server machine.
  4. The kernel passes the incoming packets to the server stub.
  5. Finally, the server stub calls the server procedure. The reply traces the same steps in the reverse direction.

In a more detailed sequence, the client application calls a local stub procedure instead of the actual code implementing the procedure. Stubs are compiled and linked with the client application. Instead of containing the actual code that implements the remote procedure, the client stub code:

  • Retrieves the required parameters from the client address space.
  • Translates the parameters as needed into a standard NDR format for transmission over the network.
  • Calls functions in the RPC client run-time library to send the request and its parameters to the server.

The server performs the following steps to call the remote procedure.

  1. The server RPC run-time library functions accept the request and call the server stub procedure.
  2. The server stub retrieves the parameters from the network buffer and converts them from the network transmission format to the format the server needs.
  3. The server stub calls the actual procedure on the server.

The remote procedure then runs, possibly generating output parameters and a return value. When the remote procedure is complete, a similar sequence of steps returns the data to the client.

  1. The remote procedure returns its data to the server stub.
  2. The server stub converts output parameters to the format required for transmission over the network and returns them to the RPC run-time library functions.
  3. The server RPC run-time library functions transmit the data on the network to the client computer.

The client completes the process by accepting the data over the network and returning it to the calling function.

  1. The client RPC run-time library receives the remote-procedure return values and returns them to the client stub.
  2. The client stub converts the data from its NDR to the format used by the client computer. The stub writes data into the client memory and returns the result to the calling program on the client.
  3. The calling procedure continues as if the procedure had been called on the same computer.


A stub is a piece of code used for converting parameters passed during a Remote Procedure Call (RPC).

The main idea of an RPC is to allow a local computer (client) to remotely call procedures on a remote computer (server). The client and server use different address spaces, so conversion of the parameters used in a function call has to be performed; otherwise the values of those parameters could not be used, because pointers into one machine’s memory point to different data on the other machine. The client and server may also use different data representations even for simple parameters (e.g., big-endian versus little-endian for integers). Stubs perform the conversion of the parameters, so a remote function call looks like a local function call on each computer.

Stub libraries must be installed on client and server side. A client stub is responsible for conversion of parameters used in a function call and deconversion of results passed from the server after execution of the function. A server stub is responsible for deconversion of parameters passed by the client and conversion of the results after the execution of the function.


Marshalling (similar to serialization) is the process of transforming the memory representation of an object to a data format suitable for storage or transmission. It is typically used when data must be moved between different parts of a computer program or from one program to another.

Marshalling is used to communicate with remote objects by passing, in this case, a serialized object. It simplifies complex communication by allowing custom/complex objects to be used for communication instead of just primitives.

The opposite, or reverse, of marshalling is called unmarshalling (or demarshalling, similar to deserialization).

Marshalling vs Serialization

To “marshal” an object means to record its state and codebase(s) in such a way that when the marshalled object is “unmarshalled”, a copy of the original object is obtained, possibly by automatically loading the class definitions of the object. Marshalling is like serialization, except marshalling also records codebases. Marshalling is different from serialization in that marshalling treats remote objects specially.

To “serialize” an object means to convert its state into a byte stream in such a way that the byte stream can be converted back into a copy of the object.


Symmetric Multiprocessing Part 1

    int num = 0;
    for (int i = 1; i <= 1000; i++)
        num += i;

This for loop is an example of what is not multiprocessing; it is how most of us write programs. We often specify our algorithms as a sequence of instructions and then let our processor execute those instructions in sequence, one at a time. Each instruction is executed in a sequence of operations (Fetch, Decode, Execute) called the instruction cycle. Taking a deeper look at this cycle, we see that each of the Fetch, Decode, and Execute phases is achieved by a set of micro-operations, which are no more than multiple control signals generated at the same time, in a concurrent manner.

Nowadays, computer hardware prices have dropped, and computer designers strive more and more for parallelism, usually to improve performance. We examine here symmetric multiprocessing (SMP), a very popular approach to parallelism that replicates processors.

Before we start our discussion of SMP, let's first see which type of parallel processor architecture SMP fits into. Here is a categorization of the most common parallel processor architectures:

  1. Single instruction single data (SISD) stream:
    • A single processor executes a single instruction stream to operate on data stored in a single memory; this is how most of us view our computers.
    • The above for loop is an example of a single instruction stream operating on a single data stream.
  2. Single instruction multiple data (SIMD) stream:
    • A single machine instruction controls the simultaneous execution of a number of processors, each with an associated data memory, so that each instruction is executed on a different set of data by the different processors. Vector and array processors are examples of SIMD.
    • SIMD instructions are widely used to process 3D graphics and cryptography, although modern graphics cards with embedded SIMD have largely taken over this task from the CPU; an example is CUDA, a parallel computing architecture developed by NVIDIA.
    • Intel's MMX and AMD's 3DNow! are examples of SIMD instruction sets.
    • An application that may take advantage of SIMD is one where the same value is added to (or subtracted from) a large number of data points, a common operation in many multimedia applications. One example is changing the brightness of an image: each pixel consists of three values for the brightness of the red, green, and blue portions of the color. To change the brightness, the R, G, and B values are read from memory, a value is added to (or subtracted from) them, and the resulting values are written back out to memory.
    • See also: SIMD programming on the PlayStation 3.
  3. Multiple instruction single data (MISD) stream:
    • A sequence of data is transmitted to a set of processors, each of which executes a different instruction sequence.
    • Pipeline architectures belong to this type.
  4. Multiple instruction multiple data (MIMD) stream:
    • A set of processors simultaneously executes different instruction sequences on different data sets.
    • Machines using MIMD have a number of processors that function asynchronously and independently.
    • MIMD architectures may be used in application areas such as computer-aided design/computer-aided manufacturing, simulation, and modeling.
    • MIMD organizations can be further divided by how the processors communicate with each other:
      • If each processor has a dedicated memory, then it is self-contained and acts independently of the other processors. Communication among such processors is either via fixed paths or over a network; such systems are called clusters.
      • If the processors share a common memory, then each processor accesses data and code in that shared memory, and they communicate with each other via the main memory.
    • Shared-memory multiprocessors can be further classified into 2 categories based on how processes are assigned to processors:
      1. Master/Slave architecture:
        • The OS kernel always runs on a particular processor. The other processors may only execute user processes and perhaps OS utilities.
        • The master processor is responsible for scheduling processes or threads. If a slave processor needs a service (e.g., an I/O request), it sends a request to the master and waits for the service to be performed.
      2. Symmetric Multiprocessor architecture:
        • The kernel can execute on any processor, and each processor does self-scheduling from the pool of available threads, allowing portions of the kernel to execute in parallel.
        • SMP complicates the OS, because it must ensure that two processors don't choose the same process, which requires the OS to apply synchronization techniques among processors.


To be continued …


Operating Systems: Internals and Design Principles (6th Edition), William Stallings

Operating Systems Overview

I first dealt with computers when I was in elementary school, around 1998, when I used to play Sky Roads on DOS computers. 2000 was the year I convinced my father to buy a computer for home. Some guy came over and installed Windows so that we could use the computer. That Windows was like a mysterious thing, a ghost that controlled my PC. Actually, I didn't pay attention to what Windows was all about beyond it being a kind of "Operating System"! But as an end-user I was very good at using it.

Now, as a computer science student, I have figured out what an Operating System (abbreviated OS) is all about; this article intends to give a brief overview of Operating Systems.

What is OS?

  • A program that controls the execution of application programs, and acts as an interface between applications and the computer hardware.
  • It has 3 objectives:
    • Convenience: makes the computer more convenient to use.
    • Efficiency: allows computer resources to be used in an efficient manner.
    • Ability to evolve: permits the development and introduction of new system functions without interfering with existing services.

OS as a user/computer interface

  • OS provides a variety of services in the following areas:
    1. Program development: editors and debuggers are tools supplied with the OS.
    2. Program execution: automate a number of steps to execute a program.
    3. Access to I/O devices: act as a façade to I/O devices.
    4. Controlled access to files: provide protection mechanism to control access to files.
    5. System access: protect system data and resources from unauthorized users.
    6. Error detection and response: detect program and hardware errors so as to clear the error condition with the least impact on the running applications.
    7. Accounting: monitor system resources and collect usage statistics, which help in judging whether to upgrade the resources or whether the system is efficient enough.

OS as a Resource Manager

  • Memory allocation is controlled by the OS and the MMU (Memory Management Unit).
  • The OS decides when I/O device can be used by a program in execution.
  • Control access to and use of files.
  • The processor operation itself is controlled by the OS, in that the OS decides how much time the processor can spend on a particular program.

What makes OS evolve ?

  1. Hardware upgrades and new types of hardware: new hardware requires that the OS be able to deal with it, so the OS should be updated to support that hardware.
  2. New services: the OS offers new services demanded by users or system managers.
  3. Bug fixes: bugs appear over time and are detected by users, so the OS must be fixed for these bugs; sometimes fixing one bug raises another.

OS Evolution

In the dark ages, when there was no OS, from the late 1940s to the mid-1950s (I call these years Before OS, abbreviated BOS, like BC), the programmer had to deal directly with the computer hardware. These computers were run from a console consisting of display lights, toggle switches, and some form of input device.

  • Serial Processing:

    • These systems presented two main problems:
      1. Scheduling: users had to sign up on a sheet to reserve computer time, and they couldn't know precisely how long their program would take to finish.
      2. Setup time: a single program, called a job, required considerable setup before use: loading the compiler and the source code, saving the object program, linking, and so on, just to run the program.
    • Users had access to the computer in series.
    • Simple Batch Systems
      • Monitor: a software program that handles executing the jobs provided by the user on tapes or disks.
  • Multiprogrammed Batch Systems

    • The I/O devices are much slower than the processor, leaving the processor idle most of the time waiting for the I/O devices to finish their operations.
    • Uniprogramming: the processor starts executing a program and, when it reaches an I/O instruction, must wait until that I/O instruction is fully executed before proceeding.
    • Multiprogramming: in contrast to uniprogramming, when a job needs to wait for an I/O instruction, the processor switches to another job and executes it; the processor continues to swap between jobs as each one reaches an I/O operation.
    • A multiprogramming batch system must rely on certain hardware capabilities, such as process switching, when swapping between program executions.
    • Interrupt-driven I/O or DMA helps a lot in multiprogramming environments, allowing the processor to issue an I/O command and proceed with executing another program.

  • Time-Sharing Systems

    • Just as multiprogramming allows the processor to handle multiple batch jobs at a time, it can also allow the processor to handle multiple interactive jobs at a time, through time sharing.
    • Time slicing: a system clock generates interrupts at a constant rate, allowing the OS to regain control and assign the processor to another process.

Time sharing and multiprogramming raise a host of new problems:

  • If multiple jobs are in memory they must be protected from interfering with each other.
  • File systems must be protected from access by unauthorized users.
  • The programs' contention for resources (mass storage, printers, …) must be handled by the OS.

Major Achievements in OS

  • The Process
    • Possible Definitions:
      • A program in execution.
      • An instance of a program running on a computer.
    • The interrupt helped programmers in developing early multiprogramming and multiuser interactive systems.
    • Errors caused by handling more than one process at a time include:
      1. Improper synchronization.
      2. Failed mutual exclusion: only one routine at a time may be allowed to perform an update against a shared file.
      3. Non-determinate program operation: programs may interfere with each other when they share memory and their execution is interleaved by the processor.
      4. Deadlocks: 2 programs hang, each waiting for the other to release a resource.
    • The execution context, or process state, is the internal data by which the OS is able to control the process.
    • The context contains the contents of the processor registers, along with information used by the OS, such as the priority of the process.
  • Memory Management
    • Process Isolation: OS should prevent independent processes from interfering with each other.
    • Automatic allocation and management: memory should be allocated to programs dynamically, and allocation should be transparent to the programmer.
    • Support of modular programming: programmers should be able to define program modules and create, destroy, and alter their size dynamically.
    • Long-term storage: saving information for extended periods of time.
    • Protection and access control.
  • Information Protection and Security
    • The use of time-sharing systems and computer networks has brought concern for the protection of information.
    • We are concerned with the problem of controlling access to the computer system.
    • Work in this area can be grouped in:
      • Availability: protect the system against interruption.
      • Confidentiality: users cannot read data for which access is unauthorized.
      • Data integrity: protection of data from unauthorized modification.
      • Authenticity: verification of the identity of users and the validity of messages.
  • Scheduling and Resource Management
    • Any resource allocation and scheduling policy must consider:
      1. Fairness: jobs of the same class competing for a resource are to be given equal and fair access to that resource.
      2. Differential responsiveness.
      3. Efficiency: OS should maximize processor utilization, minimize response time, and accommodate as many users as possible.
    • OS elements involved in the scheduling of process and the allocation of resources in a multiprogramming environment:
      • Short-term queue: contains processes in the main memory and are ready to run as soon as the processor is made available.
      • Short-term scheduler: decides which process in the short-term queue gets to use the processor; a common strategy is to give each process some time in turn, a round-robin technique.
      • Long-term queue: a list of all new jobs waiting to use the processor; the OS adds jobs to the system by transferring processes from the long-term queue to the short-term queue.
      • I/O queues: each device has a queue for the processes waiting to use that device; it is the OS that decide which process to assign to an available I/O device.


Operating Systems: Internals and Design Principles (6th Edition), William Stallings
