Blockchain Technology and Operating Systems (Overview and Examples)
An operating system is a program that acts as an intermediary between the users of a computer and the computer hardware.
Operating system goals
•Execute user programs and make solving user problems easier
•Make the computer system convenient to use
•Use the computer hardware in an efficient manner.
The operating system provides a basis for application programs and acts as an intermediary between the computer user and the computer hardware.
A computer system can be divided roughly into four components – the hardware, the operating system, the application programs and the users.
The hardware, which consists of the CPU, memory and I/O devices, provides the basic computing resources for the system.
The application programs define the ways in which these resources are used to solve users’ computing problems.
The operating system controls and co-ordinates the use of hardware among the various application programs for the various users.
Operating System Definition
OS is a resource allocator
- Manages all resources
- Decides between conflicting requests for efficient and fair resource use
- OS is a control program
- Controls execution of programs to prevent errors and improper use of the computer
- No universally accepted definition
- "Everything a vendor ships when you order an operating system" is a good approximation
- But this varies wildly
- “The one program running at all times on the computer” is the kernel. Everything else is either a system program (ships with the operating system) or an application program
Operating system from the user view- The user's view of the computer varies according to the interface being used. When designing a PC for a single user, the goal is to maximize the work that the user is performing; here the OS is designed mostly for ease of use. In another case, the user sits at a terminal connected to a mainframe or minicomputer, and other users can access the same computer through other terminals. Here the OS is designed to maximize resource utilization, to ensure that all available CPU time, memory and I/O are used efficiently. In other cases, users sit at workstations connected to networks of other workstations and servers. These users have dedicated resources, but they also share resources such as networking and servers. Here the OS is designed to compromise between individual usability and resource utilization.
Operating system from the system view- From the computer's point of view, the OS is the program most closely involved with the hardware. Hence the OS can be viewed as a resource allocator, where the resources are CPU time, memory space, file storage space, I/O devices and so on. The OS must decide how to allocate these resources to specific programs and users so that it can operate the computer system efficiently and fairly. The OS is also a control program. A control program manages the execution of user programs to prevent errors and improper use of the computer. It is especially concerned with the operation and control of I/O devices.
Description:-
- Computer-system operation
- One or more CPUs, device controllers connect through common bus providing access to shared memory
- Concurrent execution of CPUs and devices competing for memory cycles
Computer-System Operation
- I/O devices and the CPU can execute concurrently
- Each device controller is in charge of a particular device type
- Each device controller has a local buffer
- CPU moves data from/to main memory to/from local buffers
- I/O is from the device to local buffer of controller
- Device controller informs CPU that it has finished its operation by causing an interrupt
Common Functions of Interrupts
- Interrupt transfers control to the interrupt service routine generally, through the interrupt vector, which contains the addresses of all the service routines
- Interrupt architecture must save the address of the interrupted instruction
- Incoming interrupts are disabled while another interrupt is being processed, to prevent a lost interrupt
- A trap is a software-generated interrupt caused either by an error or a user request
- An operating system is interrupt driven
Interrupt Handling
- The operating system preserves the state of the CPU by storing registers and the program counter
- Determines which type of interrupt has occurred:
- polling
- vectored interrupt system
- Separate segments of code determine what action should be taken for each type of interrupt; a small dispatch sketch follows.
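Dispatch through an interrupt vector can be pictured as a table of handler addresses indexed by the interrupt number. The following is only an illustrative C sketch with hypothetical handler names; real vectors are installed by hardware and firmware, not by ordinary application code.

#include <stdio.h>

#define NUM_INTERRUPTS 4                 /* hypothetical size of the vector */

typedef void (*isr_t)(void);             /* type of an interrupt service routine */

static void timer_isr(void)    { puts("timer tick handled"); }
static void keyboard_isr(void) { puts("keyboard input handled"); }
static void disk_isr(void)     { puts("disk transfer complete"); }
static void spurious_isr(void) { puts("spurious interrupt ignored"); }

/* The interrupt vector: the addresses of all the service routines */
static isr_t interrupt_vector[NUM_INTERRUPTS] = {
    timer_isr, keyboard_isr, disk_isr, spurious_isr
};

/* Vectored dispatch: the interrupt number indexes the vector directly */
static void dispatch(int irq)
{
    if (irq >= 0 && irq < NUM_INTERRUPTS)
        interrupt_vector[irq]();         /* transfer control to the routine */
}

int main(void)
{
    dispatch(1);                         /* simulate a keyboard interrupt */
    dispatch(2);                         /* simulate a disk interrupt */
    return 0;
}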
What Is a Blockchain Operating System?
A blockchain operating system uses blockchain as a support system that runs in the background of a computer system or platform. For instance, your Android mobile or Windows PC needs a local installation of the respective operating system (OS) on the smartphone's memory or on the PC's hard disk, and all transactions and commands are executed locally. A blockchain-based OS, by contrast, captures the commands and transactions from a user's device, but authenticating, executing, and recording them occurs on the blockchain.
KEY TAKEAWAYS
- A blockchain operating system leverages blockchain ledger technology to run computer systems or networks in whole or in part.
- In order to function as an OS, blockchain protocols must allow for the execution of computer code and commands from users.
- Initially aimed at mobile phones and other connected smart devices, blockchain operating systems tout a high level of data security and user anonymity.
Understanding Blockchain Operating Systems
Beyond the standard payment processing system of the popular Bitcoin cryptocurrency, blockchain is finding extensive use all across the technology stack. One emerging trend in distributed ledger technology is the blockchain operating system.
A blockchain essentially works as a ledger turned transaction processing engine. Whether you need a payment processed, or you need to arm your cryptokitty with the latest gadget on the Ethereum platform, or if you want to track your high-cost wine shipment right from the vineyard to your doorstep on the VeChain blockchain, all such applications of blockchain are based on authenticating, recording, and processing transactions.
Any standard operating system, be it Microsoft Windows, Apple Mac, or mobile systems like Android or iOS, also executes transactions based on user commands issued through mouse-clicks or screen-taps, where all the tasks get completed locally on the device. The same concept is extended to using a blockchain as the device OS, an approach its proponents see as more efficient.
Examples of Blockchain OS Efforts
Early attempts to build a blockchain-based OS emerged for mobile and smartphone use in the form of cloud-based virtual systems. All the necessary transaction processing occurs on a cloud-hosted, blockchain-based data center, with the user only issuing the necessary commands through taps on the device touchscreen.
For instance, Hong Kong-based NYNJA Group Ltd. has a strategic collaboration with Amgoo smartphone makers for its blockchain-based NYNJA virtual operating system (vOS). The two companies will work with telecom operators in Latin America to provide NYNJA vOS users with an initial block of data upon activation. The vOS supports a communication layer offering text, voice, video conferencing, and project management tools, a secure payments layer for commercial transactions, and a multi-currency wallet that supports Bitcoin, Ethereum, and all ERC-20 compatible tokens.
The OS platform also supports a marketplace for commercial activities, such as matching skilled 'gig economy' workers to specific jobs requested by users, and a market for users to buy and sell goods. The vOS is supported by its native cryptocurrency, called NYNJAcoin or NYN.
Special Considerations
All benefits and advantages of blockchain are expected to be available to blockchain OS users.
Whatever a user does on an Android or iOS mobile, or on a Windows or Mac PC, can be captured by the respective apps, ISPs, and OS manufacturers, who may record all user activity in the OS logs. A blockchain-based OS offers the benefits of security and privacy, along with deregulated, decentralized use of the OS.
The concept is still evolving, and real-world use is limited. However, if it succeeds in offering smooth and clutter-free operation of the device OS, it may not be long before more and more devices run on such a blockchain OS.
Operating System Structure
=======================
An OS provides an environment within which programs are executed. One of the most important aspects of an OS is its ability to multiprogram. Multiprogramming increases CPU utilization by organizing jobs (code and data) so that the CPU always has one to execute.
The OS keeps several jobs in memory. This set of jobs can be a subset of the jobs kept in the job pool, which contains all jobs that enter the system. The OS picks and begins to execute one of the jobs in memory. The job may have to wait for some task, such as an I/O operation, to complete. In a non-multiprogrammed system, the CPU would sit idle; in a multiprogrammed system, the OS simply switches to and executes another job. When that job needs to wait, the CPU is switched to another job, and so on. As long as at least one job needs to execute, the CPU is never idle.
Multiprogrammed systems provide an environment in which the various system resources are utilized effectively, but they do not provide for user interaction with the computer system. Time sharing, or multitasking, is a logical extension of multiprogramming. In time-sharing systems, the CPU executes multiple jobs by switching among them, but the switches occur so frequently that the users can interact with each program while it is running.
Time sharing requires an interactive computer system, which provides direct communication between the user and the system. A time-shared operating system allows many users to share the computer simultaneously. It uses CPU scheduling and multiprogramming to provide each user with a small portion of a time-shared computer.
A program loaded into memory and executing is called a process.
Time sharing and multiprogramming require several jobs to be kept simultaneously in memory. Since main memory is too small to accommodate all jobs, the jobs are kept initially on the disk in the job pool.
This pool consists of all processes residing on disk awaiting allocation of main memory. If several jobs are ready to be brought into memory and there is not enough space, then the system must choose among them. Making this decision is job scheduling.
Having several programs in memory at the same time requires some form of memory management. If several jobs are ready to run at the same time, the system must choose among them. Making this decision is CPU scheduling.
In a time-sharing system, the operating system must ensure reasonable response time which is accomplished through swapping where processes are swapped in and out of main memory to the disk.
Virtual memory is a technique that allows the execution of a process that is not completely in memory. It enables users to run programs that are larger than actual physical memory.
Protection and security- If a computer system has multiple users and allows the concurrent execution of multiple processes, then access to data must be regulated. Hence, mechanisms ensure that files, memory segments, CPU and other resources can be operated on by only those processes that have gained proper authorization from the OS.
Protection is a mechanism for controlling the access of processes or users to the resources defined by a computer system. This mechanism must provide means for specification of the controls to be imposed and means for enforcement. Protection improves reliability by detecting latent errors at the interfaces between component sub systems.
It is the job of security to defend a system from external and internal attacks. Such attacks spread across a huge range and include viruses and worms, denial of service attacks, identity theft and theft of service.
Distributed system
================
A distributed system is a collection of physically separate, possibly heterogeneous computer systems that are networked to provide the users with access to the various resources that the system maintains. Access to a shared resource increases computation speed, functionality, data availability and reliability. The protocols that create a distributed system can greatly affect that system's utility and popularity.
It is an application that executes a collection of protocols to coordinate the actions of multiple processes on a network, such that all components cooperate to perform a single task or a small set of related tasks.
A distributed system has the following characteristics:
- Fault-Tolerant: It can recover from component failures without performing incorrect actions.
- Highly Available: It can restore operations, permitting it to resume providing services even when some components have failed.
- Recoverable: Failed components can restart themselves and rejoin the system, after the cause of failure has been repaired.
- Consistent: The system can coordinate actions by multiple components often in the presence of concurrency and failure. This underlies the ability of a distributed system to act like a non-distributed system.
- Scalable: It can operate correctly even as some aspect of the system is scaled to a larger size. For example, we might increase the size of the network on which the system is running. This increases the frequency of network outages and could degrade a "non-scalable" system. Similarly, we might increase the number of users or servers, or overall load on the system. In a scalable system, this should not have a significant effect.
- Predictable Performance: The ability to provide desired responsiveness in a timely manner.
- Secure: The system authenticates access to data and services.
A network is a communication path between two or more systems. Distributed systems depend on networking for their functionality.
Networks are characterized based on the distances between their nodes.
- A local area network (LAN) connects computers within a room, a floor or a building.
- A wide area network (WAN) links buildings, cities or countries.
- A metropolitan area network (MAN) could link buildings within a city.
Special Purpose Systems
These are classes of computers whose functions are limited and whose objective is to deal with limited computation domains:
Real-time embedded systems: Embedded computers are found in devices ranging from car engines and manufacturing robots to VCRs and microwave ovens. These have specific tasks to accomplish. Embedded systems almost always run real-time operating systems.
Multimedia systems: Most operating systems are designed to handle conventional data such as text files, programs, word-processing documents and spreadsheets. A recent trend is the incorporation of multimedia data into computer systems. Multimedia data consist of audio and video files as well as conventional files.
Handheld systems: Handheld systems include personal digital assistants (PDAs) and cellular telephones, many of which use special-purpose embedded operating systems.
Operating system services
=====================
An OS provides an environment for the execution of programs. It provides certain services to programs and to the users of those programs. OS services are provided for the convenience of the programmer, to make the programming task easier. One set of OS services provides functions that are helpful to the user:
a. User interface: Almost all operating systems have a user interface (UI). Interfaces are of three types:
- A command-line interface uses text commands and a method for entering them.
- In a batch interface, commands and directives to control those commands are entered into files, and those files are executed.
- A graphical user interface is a window system with a pointing device to direct I/O, choose from menus and make selections, and a keyboard to enter text.
b. Program execution: The system must be able to load a program into memory and run that program. The program must be able to end its execution, either normally or abnormally.
c. I/O operations: A running program may require I/O, which may involve a file or an I/O device. For efficiency and protection, users cannot control I/O devices directly.
d. File-system manipulation: Programs need to read and write files and directories. They also need to create and delete them by name, search for a given file, and list file information.
e. Communications: One process might need to exchange information with another process. Such communication may occur between processes that are executing on the same computer or between processes that are executing on different computer systems tied together by a computer network. Communications may be implemented via shared memory or through message passing.
f. Error detection: The OS needs to be constantly aware of possible errors. Errors may occur in the CPU and memory hardware, in I/O devices, and in the user program. For each type of error, the OS takes appropriate action to ensure correct and consistent computing.
Another set of OS functions exists for ensuring the efficient operation of the system itself. They are:
- Resource allocation: When there are multiple users or multiple jobs running at the same time, resources must be allocated to each of them. Different types of resources such as CPU cycles, main memory and file storage are managed by the operating system.
- Accounting: Keeping track of which users use how much and what kinds of computer resources.
- Protection and security: Controlling the use of information stored in a multiuser or networked computer system. Protection involves ensuring that all access to system resources is controlled. Security starts with requiring each user to authenticate himself or herself to the system, by means of a password, in order to gain access to system resources.
System call
==========
System calls provide an interface to the services made available by an operating system. A system call is the mechanism used by an application program to request a service from the operating system. System calls often use a special machine-code instruction which causes the processor to change mode (for example, to "supervisor mode" or "protected mode"). This allows the OS to perform restricted actions, such as accessing hardware devices or the memory-management unit.
An example illustrates how system calls are used: writing a simple program to read data from one file and copy it to another file.
a) The first input required is the names of the two files: the input file and the output file. The names can be specified in many ways:
One approach is for the program to ask the user for the names of two files. In an interactive system, this approach will require a sequence of system calls, to write a prompting message on screen and then read from the keyboard the characters that define the two files. On mouse based and icon-based systems, a menu of file names is displayed in a window where the user can use the mouse to select the source names and a window can be opened for the destination name to be specified.
b) Once the two file names are obtained, the program must open the input file and create the output file. Each of these operations requires another system call.
Possible error conditions: when the program tries to open the input file, it may find that no file of that name exists or that the file is protected against access. In that case the program prints a message on the console and terminates abnormally. If the input file exists, the program must create a new output file. If an output file with the same name already exists, this situation may cause the program to abort, or the program may delete the existing file and create a new one. Another option is to ask the user (via a sequence of system calls) whether to replace the existing file or to abort. When both files are set up, a loop reads from the input file and writes to the output file (a read system call and a write system call, respectively). Each read and write must return status information regarding various possible error conditions. After the entire file is copied, the program closes both files, writes a message to the console or window, and finally terminates normally.
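A minimal POSIX-style sketch of this copy sequence is shown below. It assumes the file names are passed on the command line rather than prompted for, and error handling is abbreviated; the opens, the read/write loop, the closes, and the normal or abnormal termination all map onto system calls.

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    char buf[4096];
    ssize_t n;

    if (argc != 3) {
        fprintf(stderr, "usage: %s input output\n", argv[0]);
        exit(1);                                       /* abnormal termination */
    }

    int in = open(argv[1], O_RDONLY);                  /* system call: open input file */
    if (in < 0) { perror("open input"); exit(1); }     /* e.g. file does not exist */

    int out = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644);  /* create output file */
    if (out < 0) { perror("create output"); exit(1); }

    while ((n = read(in, buf, sizeof buf)) > 0)        /* read/write loop */
        if (write(out, buf, n) != n) { perror("write"); exit(1); }

    close(in);                                         /* close both files */
    close(out);
    return 0;                                          /* terminate normally */
}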
Application developers design programs according to an application programming interface (API). The API specifies a set of functions that are available to an application programmer. Three of the most common APIs available to application programmers are the Win32 API for Windows systems, the POSIX API for POSIX-based systems (which include all versions of UNIX, Linux and Mac OS X), and the Java API for designing programs that run on the Java virtual machine.
The functions that make up the API typically invoke the actual system calls on behalf of the application programmer. Two reasons an application programmer prefers programming to an API rather than invoking actual system calls are:
1. Program portability – An application programmer designing a program using an API can expect program to compile and run on any system that supports the same API.
2. Actual system calls can be more detailed and difficult to work with than the API available to an application programmer.
The run-time support for most programming languages provides a system-call interface that serves as a link to the system calls made available by the OS. The system-call interface intercepts function calls in the API and invokes the necessary system call within the operating system. A number is associated with each system call, and the system-call interface maintains a table indexed according to these numbers. The system-call interface then invokes the intended system call in the OS kernel and returns the status of the system call and any return values. System calls occur in different ways, depending on the computer in use; often more information is required than simply the identity of the desired system call. The exact type and amount of information vary according to the particular OS and call. Three general methods are used to pass parameters to the OS (a short register-passing example follows the list):
I. Pass the parameters in registers
II. Store the parameters in blocks or tables in memory, and pass the address of the block as a parameter in a register
III. Push the parameters onto the stack (by the program) and have the OS pop them off the stack.
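As a concrete illustration of the register-passing convention, on Linux the generic syscall() wrapper loads the system-call number and its arguments into registers before trapping into the kernel. This is only a sketch and assumes a Linux/glibc environment; SYS_write and syscall() are the standard names there.

#define _GNU_SOURCE
#include <sys/syscall.h>        /* SYS_write: the system-call number */
#include <unistd.h>             /* syscall() */

int main(void)
{
    const char msg[] = "hello via a raw system call\n";

    /* The call number and the three parameters (file descriptor, buffer
       address, length) are placed in registers by the wrapper before the
       trap into the kernel. */
    syscall(SYS_write, 1, msg, sizeof msg - 1);
    return 0;
}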
System Program
==============
System programs provide a convenient environment for program development and execution. They can be divided into these categories-
- File management: These programs create, delete, copy, rename, print, dump, list and manipulate files and directories.
- Status information: Some programs ask the system for the date, the time, the amount of available memory or disk space, or the number of users.
- File modification: Text editors may be available to create and modify the content of files stored on disk or other storage devices.
- Programming language support: Compilers, assemblers, debuggers and interpreters for common programming languages are often provided to the user with the OS.
- Program loading and execution: Once a program is assembled or compiled, it must be loaded into memory to be executed. The system provides absolute loaders, relocatable loaders, linkage editors and overlay loaders.
- Communications: These programs provide the mechanism for creating virtual connections among processes, users and computer systems.
In addition to system programs, operating systems are supplied with programs that are useful in solving common problems or performing common operations. Such programs include web browsers, word processors and text formatters, spreadsheets, database systems and so on. These programs are known as system utilities or application programs.
Operating System Structure
Commercial operating systems started as small, simple and limited systems. An example is MS-DOS. It was written to provide the most functionality in the least space, so it was not divided into modules.
In MS-DOS, however, the interfaces and levels of functionality are not well separated, and the system was also limited by the hardware of its era. Layered approach: With proper hardware support, an OS can be broken into pieces that are smaller and more appropriate, and the OS can then retain much greater control over the computer and over the applications that make use of the computer. Under a top-down approach, the overall functionality and features are determined and separated into components. A system can be made modular in many ways; one method is the layered approach, in which the OS is broken up into a number of layers (levels). The bottom layer is the hardware and the highest layer is the user interface.
The main advantage of the layered approach is simplicity of construction and debugging. The layers are selected so that each uses functions and services of only lower-level layers. This approach simplifies debugging and system verification. The major difficulty with the layered approach involves defining the various layers. Layered implementations also tend to be less efficient than other types.
Operating system generation
=========================
Operating systems are designed to run on any of a class of machines at a variety of sites with a variety of peripheral configurations. The system must then be configured or generated for each specific computer site, a process known as system generation (SYSGEN). The SYSGEN program reads from a given file, asks the operator of the system for information concerning the specific configuration of the hardware system, or probes the hardware directly to determine what components are there. In this way it obtains the information about the hardware configuration that the generated system needs.
•Booting – starting a computer by loading the kernel
•Bootstrap program – code stored in ROM that is able to locate the kernel, load it into memory, and start its execution
System Boot
•Operating system must be made available to hardware so hardware can start it
•Small piece of code – bootstrap loader, locates the kernel, loads it into memory, and starts it
•Sometimes two-step process where boot block at fixed location loads bootstrap loader
•When power initialized on system, execution starts at a fixed memory location
Firmware is used to hold the initial boot code. To generate the system, the following information must be determined:
a) What CPU is to be used? What options are installed? For multiple-CPU systems, each CPU must be described.
b) How much memory is available?
c) What devices are available?
d) What operating system options are desired or what parameter values are to be used?
Once this information is determined, it can be used in several ways. It can be used by the system administrator to modify a copy of the source code of the OS, and the OS is then completely compiled. It is also possible to construct a system that is completely table driven: all the code is always part of the system, and selection occurs at execution time rather than at compile time or link time. The major differences among these approaches are the size and generality of the generated system and the ease of modification as the hardware configuration changes.
Operating system interface
=======================
There are two fundamental approaches for users to interface with the operating system. One technique is to provide a command-line interface or command interpreter that allows users to directly enter commands that are to be performed by the operating system. The second approach allows the user to interface with the operating system via a graphical user interface or GUI.
Command Interpreter:-Some operating systems include the command interpreter in the kernel. Others, such as Windows XP and UNIX, treat the command interpreter as a special program that is running when a job is initiated or when a user first logs on (on interactive systems). On systems with multiple command interpreters to choose from, the interpreters are known as shells. For example, on UNIX and Linux systems, there are several different shells a user may choose from including the Bourne shell, C shell, Bourne-Again shell, the Korn shell, etc. Most shells provide similar functionality with only minor differences; most users choose a shell based upon personal preference.
The main function of the command interpreter is to get and execute the next user-specified command. Many of the commands given at this level manipulate files: create, delete, list, print, copy, execute, and so on. The MS-DOS and UNIX shells operate in this way. There are two general ways in which these commands can be implemented.
In one approach, the command interpreter itself contains the code to execute the command. For example, a command to delete a file may cause the command interpreter to jump to a section of its code that sets up the parameters and makes the appropriate system call. In this case, the number of commands that can be given determines the size of the command interpreter, since each command requires its own implementing code.
An alternative approach, used by UNIX among other operating systems, implements most commands through system programs. In this case, the command interpreter does not understand the command in any way; it merely uses the command to identify a file to be loaded into memory and executed.
Thus, the UNIX command to delete a file
rm file.txt
would search for a file called rm, load the file into memory, and execute it with the parameter file.txt. The function associated with the rm command would be defined completely by the code in the file rm. In this way, programmers can add new commands to the system easily by creating new files with the proper names. The command-interpreter program, which can be small, does not have to be changed for new commands to be added.
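A minimal, illustrative sketch of this approach is given below (hypothetical, with error handling abbreviated): the interpreter reads a command line, then uses the fork() and execvp() system calls to load and run the named program, so a command such as rm is simply an executable found on the search path.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

#define MAX_ARGS 16

int main(void)
{
    char line[256];

    for (;;) {
        fputs("> ", stdout);               /* print a prompt */
        fflush(stdout);
        if (fgets(line, sizeof line, stdin) == NULL)
            break;                         /* end of input: exit the interpreter */

        /* Split the command line into whitespace-separated arguments */
        char *argv[MAX_ARGS];
        int argc = 0;
        for (char *tok = strtok(line, " \t\n");
             tok != NULL && argc < MAX_ARGS - 1;
             tok = strtok(NULL, " \t\n"))
            argv[argc++] = tok;
        argv[argc] = NULL;
        if (argc == 0)
            continue;

        pid_t pid = fork();                /* create a child process */
        if (pid == 0) {
            execvp(argv[0], argv);         /* load the named program, e.g. "rm" */
            perror(argv[0]);               /* reached only if exec fails */
            exit(1);
        }
        waitpid(pid, NULL, 0);             /* the interpreter waits for the command */
    }
    return 0;
}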
Graphical User Interfaces: - A second strategy for interfacing with the operating system is through a user-friendly graphical user interface, or GUI. Rather than having users directly enter commands via a command-line interface, a GUI provides a mouse-based window-and-menu system as an interface. A GUI offers a desktop metaphor where the mouse is moved to position its pointer on images, or icons, on the screen (the desktop) that represent programs, files, directories, and system functions. Depending on the mouse pointer's location, clicking a button on the mouse can invoke a program, select a file or directory (known as a folder), or pull down a menu that contains commands.
Graphical user interfaces first appeared due in part to research taking place in the early 1970s at the Xerox PARC research facility. The first GUI appeared on the Xerox Alto computer in 1973. However, graphical interfaces became more widespread with the advent of Apple Macintosh computers in the 1980s. The user interface to the Macintosh operating system (Mac OS) has undergone various changes over the years, the most significant being the adoption of the Aqua interface that appeared with Mac OS X. Microsoft's first version of Windows, version 1.0, was based upon a GUI interface to the MS-DOS operating system. The various versions of Windows systems following this initial version have made cosmetic changes to the appearance of the GUI and several enhancements to its functionality, including the Windows Explorer.
Traditionally, UNIX systems have been dominated by command-line interfaces, although there are various GUI interfaces available, including the Common Desktop Environment (CDE) and X-Windows systems that are common on commercial versions of UNIX such as Solaris and IBM's AIX system. However, there has been significant development in GUI designs from various open-source projects, such as the K Desktop Environment (KDE) and the GNOME desktop by the GNU project. Both the KDE and GNOME desktops run on Linux and various UNIX systems and are available under open-source licenses, which means their source code is readily available.
Process Control Block
===================
Each process is represented in the operating system by a process control block (PCB), also called a task control block. It contains many pieces of information associated with a specific process, including:
Process State: The state may be new, ready, running, waiting, halted etc.
Program Counter: The counter indicates the address of the next instruction to be executed for this process.
CPU registers: The registers vary in number and type depending on the computer architecture.
CPU scheduling information: This information includes a process priority, pointers to scheduling queues, and other scheduling parameters.
Memory management information: This information may include such information as the value of base and limit registers etc.
Accounting information: This information includes the amount of CPU and real time used, time limits etc.
I/O status information: This information includes the list of I/O devices allocated to the process, etc.
Threads:- A process is a program that performs a single thread of execution. A single thread of control allows the process to perform only one task at a time.
Process Scheduling:- The objective of multiprogramming is to have some process running at all times, to maximize CPU utilization. The objective of time sharing is to switch the CPU among processes so frequently that users can interact with each program while it is running. To meet these objectives, the process scheduler selects an available process for program execution on the CPU. For a single-processor system, there will never be more than one running process.
Scheduling Queues:- As processes enter the system, they are put into the job queue, which consists of all processes in the system. The processes that are residing in main memory and are ready and waiting to execute are kept on a list called the ready queue. This queue is generally stored as a linked list. A ready-queue header contains pointers to the first and final PCBs in the list, and each PCB includes a pointer field that points to the next PCB in the ready queue. When a process is allocated the CPU, it executes for a while and eventually quits, is interrupted, or waits for the occurrence of a particular event, such as the completion of an I/O request. The list of processes waiting for a particular I/O device is called a device queue. Each device has its own device queue.
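A simplified sketch of a PCB and a linked-list ready queue is shown below. The field names and sizes are purely illustrative; a real kernel's PCB (for example, Linux's task_struct) is far larger.

#include <stddef.h>

typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

/* Process control block: one per process */
typedef struct pcb {
    int            pid;              /* process identifier */
    proc_state_t   state;            /* new, ready, running, waiting, ... */
    unsigned long  program_counter;  /* address of the next instruction */
    unsigned long  registers[16];    /* saved CPU registers (count is illustrative) */
    int            priority;         /* CPU-scheduling information */
    void          *memory_info;      /* base/limit registers, page tables, ... */
    long           cpu_time_used;    /* accounting information */
    int            open_devices[8];  /* I/O status information (illustrative) */
    struct pcb    *next;             /* pointer to the next PCB in a queue */
} pcb_t;

/* Ready queue: the header holds pointers to the first and final PCBs */
typedef struct {
    pcb_t *head;
    pcb_t *tail;
} ready_queue_t;

/* Append a PCB to the tail of the ready queue */
void ready_enqueue(ready_queue_t *q, pcb_t *p)
{
    p->next = NULL;
    p->state = READY;
    if (q->tail == NULL)
        q->head = q->tail = p;
    else {
        q->tail->next = p;
        q->tail = p;
    }
}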
Operations on processes
===================
Note that a parent needs to know the identities of its children. Thus, when one process creates a new process, the identity of the newly created process is passed to the parent.
A parent may terminate the execution of one of its children for a variety of reasons, such as these:
• The child has exceeded its usage of some of the resources that it has been allocated. (To determine whether this has occurred, the parent must have a mechanism to inspect the state of its children.)
• The task assigned to the child is no longer required.
• The parent is exiting, and the operating system does not allow a child to continue if its parent terminates.
Some systems, including VMS, do not allow a child to exist if its parent has terminated. In such systems, if a process terminates (either normally or abnormally), then all its children must also be terminated. This phenomenon, referred to as cascading termination, is normally initiated by the operating system.
To illustrate process execution and termination, consider that, in UNIX, we can terminate a process by using the exit() system call; its parent process may wait for the termination of a child process by using the wait() system call. The wait() system call returns the process identifier of a terminated child, so that the parent can tell which of its possibly many children has terminated. If the parent terminates, however, all its children are assigned the init process as their new parent. Thus, the children still have a parent to collect their status and execution statistics.
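A short UNIX-style sketch of this parent-child relationship follows (it assumes a POSIX system; output interleaving may vary between runs).

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();                  /* create a child process */

    if (pid < 0) {
        perror("fork");
        exit(1);
    } else if (pid == 0) {
        /* Child: do some work, then terminate with exit() */
        printf("child %d running\n", (int)getpid());
        exit(0);
    } else {
        /* Parent: wait() returns the identifier of the terminated child */
        int status;
        pid_t done = wait(&status);
        printf("parent: child %d terminated with status %d\n",
               (int)done, WEXITSTATUS(status));
    }
    return 0;
}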
Interprocess communication
========================
Processes executing concurrently in the operating system may be either independent processes or cooperating processes. A process is independent if it cannot affect or be affected by the other processes executing in the system.
Any process that does not share data with any other process is independent. A process is cooperating if it can affect or be affected by the other processes executing in the system. Clearly, any process that shares data with other processes is a cooperating process.
There are several reasons for providing an environment that allows process cooperation:
• Information sharing. Since several users may be interested in the same piece of information (for instance, a shared file), we must provide an environment to allow concurrent access to such information.
• Computation speedup. If we want a particular task to run faster, we must break it into subtasks, each of which will be executing in parallel with the others. Notice that such a speedup can be achieved only if the computer has multiple processing elements (such as CPUs or I/O channels).
• Modularity. We may want to construct the system in a modular fashion, dividing the system functions into separate processes or threads.
• Convenience. Even an individual user may work on many tasks at the same time. For instance, a user may be editing, printing, and compiling in parallel.
Cooperating processes require an interprocess communication (IPC) mechanism that will allow them to exchange data and information. There are two fundamental models of interprocess communication:
(1) Shared memory
(2) Message passing.
In the shared-memory model, a region of memory that is shared by cooperating processes is established. Processes can then exchange information by reading and writing data to the shared region. In the message-passing model, communication takes place by means of messages exchanged between the cooperating processes. The two communication models are contrasted below.
Both of the models just discussed are common in operating systems, and many systems implement both. Message passing is useful for exchanging smaller amounts of data, because no conflicts need be avoided. Message passing is also easier to implement than is shared memory for intercomputer communication. Shared memory allows maximum speed and convenience of communication, as it can be done at memory speeds when within a computer.
Shared memory is faster than message passing, as message-passing systems are typically implemented using system calls and thus require the more time-consuming task of kernel intervention.
Shared-Memory Systems:- Interprocess communication using shared memory requires communicating processes to establish a region of shared memory. Typically, a shared-memory region resides in the address space of the process creating the shared-memory segment. Other processes that wish to communicate using this shared-memory segment must attach it to their address space. Recall that, normally, the operating system tries to prevent one process from accessing another process's memory. Shared memory requires that two or more processes agree to remove this restriction. They can then exchange information by reading and writing data in the shared areas. The form of the data and the location are determined by these processes and are not under the operating system's control. The processes are also responsible for ensuring that they are not writing to the same location simultaneously.
Let us consider the producer-consumer problem, which is a common paradigm for cooperating processes. A producer process produces information that is consumed by a consumer process. For example, a compiler may produce assembly code, which is consumed by an assembler. The assembler, in turn, may produce object modules, which are consumed by the loader. The producer-consumer problem also provides a useful metaphor for the client-server paradigm. We generally think of a server as a producer and a client as a consumer.
For example, a web server produces (that is, provides) HTML files and images, which are consumed (that is, read) by the client web browser requesting the resource. One solution to the producer-consumer problem uses shared memory. To allow producer and consumer processes to run concurrently, we must have available a buffer of items that can be filled by the producer and emptied by the consumer. This buffer will reside in a region of memory that is shared by the producer and consumer processes. A producer can produce one item while the consumer is consuming another item. The producer and consumer must be synchronized, so that the consumer does not try to consume an item that has not yet been produced.
Two types of buffers can be used. The unbounded buffer places no practical limit on the size of the buffer. The consumer may have to wait for new items, but the producer can always produce new items. The bounded buffer assumes a fixed buffer size. In this case, the consumer must wait if the buffer is empty, and the producer must wait if the buffer is full.
Let's look more closely at how the bounded buffer can be used to enable processes to share memory. The following variables reside in a region of memory shared by the producer and consumer processes:
#define BUFFER_SIZE 10           /* number of slots in the circular buffer */
typedef struct {
. . .                            /* the fields of an item are application-specific */
} item;
item buffer[BUFFER_SIZE];        /* the shared circular buffer */
int in = 0;                      /* next free position (written by the producer) */
int out = 0;                     /* first full position (read by the consumer) */
The shared buffer is implemented as a circular array with two logical pointers: in and out. The variable in points to the next free position in the buffer; out points to the first full position in the buffer. The buffer is empty when in == out; the buffer is full when ((in + 1) % BUFFER_SIZE) == out. The producer process has a local variable nextProduced in which the new item to be produced is stored. The consumer process has a local variable nextConsumed in which the item to be consumed is stored.
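A sketch of the producer and consumer loops over this shared buffer follows, building on the declarations above. Busy waiting is used purely for illustration; real code would use proper synchronization primitives.

/* Producer process */
void producer(void)
{
    item nextProduced;
    while (1) {
        /* produce an item in nextProduced */
        while (((in + 1) % BUFFER_SIZE) == out)
            ;                            /* do nothing: the buffer is full */
        buffer[in] = nextProduced;
        in = (in + 1) % BUFFER_SIZE;
    }
}

/* Consumer process */
void consumer(void)
{
    item nextConsumed;
    while (1) {
        while (in == out)
            ;                            /* do nothing: the buffer is empty */
        nextConsumed = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        /* consume the item in nextConsumed */
    }
}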
I/O system
==========
Management of I/O devices is a very important part of the operating system - so important and so varied that entire I/O subsystems are devoted to its operation. Consider the range of devices on a modern computer, from mice, keyboards, disk drives, display adapters, USB devices, network connections, audio I/O, printers, special devices for the handicapped, and many special-purpose peripherals.
I/O Subsystems must contend with two trends: (1) The gravitation towards standard interfaces for a wide range of devices, making it easier to add newly developed devices to existing systems, and (2) the development of entirely new types of devices, for which the existing standard interfaces are not always easy to apply.
Device drivers are modules that can be plugged into an OS to handle a particular device or category of similar devices.
I/O Hardware
- I/O devices can be roughly categorized as storage, communications, user-interface, and other
- Devices communicate with the computer via signals sent over wires or through the air.
- Devices connect with the computer via ports, e.g. a serial or parallel port.
- A common set of wires connecting multiple devices is termed a bus.
Buses include rigid protocols for the types of messages that can be sent across the bus and the procedures for resolving contention issues.
Four bus types are commonly found in a modern PC:
1. The PCI bus connects high-speed high-bandwidth devices to the memory subsystem and the CPU.
2. The expansion bus connects slower low-bandwidth devices, which typically deliver data one character at a time (with buffering).
3. The SCSI bus connects a number of SCSI devices to a common SCSI controller.
4. In a daisy-chain bus, a string of devices is connected to each other like beads on a chain, and only one of the devices is directly connected to the host.
One way of communicating with devices is through registers associated with each port. Registers may be one to four bytes in size, and may typically include (a subset of) the following four (a polled-I/O sketch follows the list):
1. The data-in register is read by the host to get input from the device.
2. The data-out register is written by the host to send output.
3. The status register has bits read by the host to ascertain the status of the device, such as idle, ready for input, busy, error, transaction complete, etc.
4. The control register has bits written by the host to issue commands or to change settings of the device such as parity checking, word length, or full- versus half-duplex operation.
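The following is a hedged sketch of how a driver might poll such registers, assuming hypothetical memory-mapped register addresses and bit layouts; real addresses and status bits are entirely device-specific.

#include <stdint.h>

/* Hypothetical memory-mapped register addresses for an imaginary device */
#define DEV_BASE        0x40001000u
#define DATA_OUT_REG    (*(volatile uint8_t *)(DEV_BASE + 0x0))
#define STATUS_REG      (*(volatile uint8_t *)(DEV_BASE + 0x4))
#define CONTROL_REG     (*(volatile uint8_t *)(DEV_BASE + 0x8))

#define STATUS_BUSY     0x01u            /* hypothetical: device is busy */
#define CONTROL_GO      0x01u            /* hypothetical: start the operation */

/* Programmed (polled) output of one byte to the device */
static void dev_write_byte(uint8_t byte)
{
    while (STATUS_REG & STATUS_BUSY)
        ;                                /* busy-wait until the device is idle */
    DATA_OUT_REG = byte;                 /* host writes the data-out register */
    CONTROL_REG |= CONTROL_GO;           /* command the device to start */
}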
- Interrupt handling can be relatively expensive (slow), which causes programmed I/O to be faster than interrupt-driven I/O when the time spent busy waiting is not excessive.
- Network traffic can also put a heavy load on the system. Consider, for example, the sequence of events that occurs when a single character is typed in a telnet session, and note that a similar set of events must happen in reverse to echo back the character that was typed. Sun uses in-kernel threads for the telnet daemon, increasing the supportable number of simultaneous telnet sessions from the hundreds to the thousands.
- Other systems use front-end processors to off-load some of the work of I/O processing from the CPU. For example, a terminal concentrator can multiplex the traffic from hundreds of terminals onto a single port of a large computer.
- Several principles can be employed to increase the overall efficiency of I/O processing:
- Reduce the number of context switches.
- Reduce the number of times data must be copied.
- Reduce interrupt frequency, using large transfers, buffering, and polling where appropriate.
- Increase concurrency by using DMA.
- Move processing primitives into hardware, allowing their operation to be concurrent with CPU and bus operations.
- Balance CPU, memory, bus, and I/O operations, so a bottleneck in one does not idle all the others.
- The development of new I/O algorithms often follows a progression from application-level code to on-board hardware implementation. Lower-level implementations are faster and more efficient, but higher-level ones are more flexible and easier to modify. Hardware-level functionality may also be harder for higher-level authorities (e.g. the kernel) to control.
As systems have developed, protection systems have become more powerful, and also more specific and specialized. To refine protection even further requires putting protection capabilities into the hands of individual programmers, so that protection policies can be implemented on the application level, i.e. to protect resources in ways that are known to the specific applications but not to the more general operating system.
Compiler-Based Enforcement
- In a compiler-based approach to protection enforcement, programmers directly specify the protection needed for different resources at the time the resources are declared.
- This approach has several advantages:
- Protection needs are simply declared, as opposed to a complex series of procedure calls.
- Protection requirements can be stated independently of the support provided by a particular OS.
- The means of enforcement need not be provided directly by the developer.
- Declarative notation is natural, because access privileges are closely related to the concept of data types.
- Regardless of the means of implementation, compiler-based protection relies on protection mechanisms provided by the underlying OS, such as the Cambridge CAP or Hydra systems.
- Even if the underlying OS does not provide advanced protection mechanisms, the compiler can still offer some protection, such as treating memory accesses differently in code versus data segments (see the small example after this list).
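As a small, hedged illustration of declaration-time protection that even an ordinary C compiler can enforce, a const qualifier states an access restriction at the point where the resource is declared, and violations are rejected at compile time rather than at run time.

/* The protection requirement is stated in the declaration itself. */
static const int config_table[3] = { 10, 20, 30 };

int read_config(int i)
{
    return config_table[i];              /* reading is permitted */
}

/* A function that tried to write the table, e.g.
 *     config_table[0] = 99;
 * would be rejected by the compiler ("assignment of read-only location"),
 * so the violation never reaches execution. */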
There are several areas in which compiler-based protection can be compared to kernel-enforced protection:
- Security: Security provided by the kernel offers better protection than that provided by a compiler. The security of compiler-based enforcement depends on the integrity of the compiler itself, as well as on files not being modified after they are compiled. The kernel is in a better position to protect itself from modification, as well as to protect access to specific files. Where hardware support of individual memory accesses is available, the protection is stronger still.
- Flexibility: A kernel-based protection system is often not flexible enough to provide the specific protection needed by an individual programmer, though it may provide support that the programmer can make use of. Compilers are more easily changed and updated when it is necessary to change the protection services offered or their implementation.
- Efficiency: The most efficient protection mechanism is one supported by hardware and microcode. Insofar as software-based protection is concerned, compiler-based systems have the advantage that many checks can be made off-line, at compile time, rather than during execution.
The concept of incorporating protection mechanisms into programming languages is in its infancy and still remains to be fully developed. However, the general goal is to provide mechanisms for three functions:
- Distributing capabilities safely and efficiently among customer processes. In particular, a user process should only be able to access resources for which it was issued capabilities.
- Specifying the type of operations a process may execute on a resource, such as reading or writing.
- Specifying the order in which operations are performed on the resource, such as opening before reading.
