BY:
Captain Jason D. Grose, USMC
Lieutenant Sam Chance, USN
Lieutenant Joachim Richter, German Navy
Lieutenant Errol Campbell, USN
Major Clyde Richards, USA
FOR
CS3030
17 September 2001
INTRODUCTION
The topic of device management, while initially appearing
rather simple, involves intricate details that affect the entire
computing system. It is so important that, to do justice to
the concept, it is necessary to study the aspects leading up to
the overall device management goal: to ensure the devices connected
to a computer work in harmony.
In this paper, we will give the reader the basic theory of
device management. In order to provide concrete examples, we
then compare two different operating systems to show
how each addresses device management in its own way.
Explaining the software portion of device management would
cover only half the equation. To complete our overview, we
will discuss how hardware is also vital to successful device
management.
Finally, we will introduce an exciting new technology that
we feel is the future of device management. This technology
promises to overcome some of the shortcomings of software and
hardware conflicts pertaining to device management. But before
we get to that, we will provide a quick overview of basic device
management theory.
THEORY OF DEVICE MANAGEMENT
To understand device management, it is necessary to begin
with an explanation of devices. Devices generally fall
into one of three categories: dedicated, shared and virtual.
Dedicated devices are assigned to only one job at a time.
Examples may include tape drives, printers, and plotters.
The disadvantage of dedicated devices is they must be allocated
to a single user for the duration of a job’s execution.
Devices in the next two categories are generally preferred.
Shared devices can be assigned to several processes. The
assigned processes are interleaved and carefully controlled
by the device manager, and any conflicts are resolved based
on predetermined policies. Virtual devices are a combination
of the other two; that is, they are dedicated devices that
have been transformed into shared devices. For example,
printers may use a spooling program that reroutes all print requests
to a disk.
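The spooling idea can be sketched as follows. This is a minimal, illustrative model (the class and job names are hypothetical, and an in-memory queue stands in for the disk spool area):

```python
import queue

class PrintSpooler:
    """Turns a dedicated printer into a virtual (shared) device by
    rerouting print requests to a spool and feeding them to the
    printer one job at a time."""

    def __init__(self):
        self.jobs = queue.Queue()   # spool area (a disk in a real system)
        self.printed = []

    def submit(self, user, document):
        # Any process may "print" at any time; the request is spooled.
        self.jobs.put((user, document))

    def run(self):
        # The dedicated printer drains the spool one job at a time.
        while not self.jobs.empty():
            user, document = self.jobs.get()
            self.printed.append(f"{user}: {document}")

spooler = PrintSpooler()
spooler.submit("alice", "report.txt")
spooler.submit("bob", "memo.txt")
spooler.run()
print(spooler.printed)   # jobs emerge in submission order
```

The key point is that submitting processes never wait for the physical printer; only the spooler's drain loop interacts with the dedicated device.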
Next, we must understand the different storage media we will
address later. Storage media are divided into two groups:
sequential access media and direct access media.
As the names imply, sequential access devices access data, or
records, one at a time, whereas direct access devices can store
or access records sequentially or directly. Speed and
sharability are the primary tradeoffs.
Sequential Access Media
As an alternative to paper storage media, magnetic tape
was developed for routine secondary storage. It is now
used mainly for archiving and storing backup data.
Data, or records, on magnetic tape are stored serially,
one after the other. Each record is physically located
at some position on the tape, so the tape must be
forwarded, or reversed, to the physical location of the record,
making access a very time-consuming task.
The tape consists of tracks; one track for each data bit,
plus an additional track for parity. The number of characters
that can be stored on a given length of tape is determined by
the density of the tape.
Records may be stored individually or in blocks. “Blocking”
records increases the speed at which records may be accessed,
known as transfer rate. Additionally, more data may be
stored on a given length of tape as less “overhead” is required
to manage the records. Still, magnetic tape is a poor
choice for routine secondary storage as average access time
is too great.
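A rough sketch of why blocking stores more records on a given length of tape: each physical block carries one inter-record gap of "overhead," so packing more logical records per block wastes less tape. All sizes below (record length, gap length, reel length) are assumed for illustration:

```python
RECORD_LEN = 1              # record length, in tenths of an inch (assumed)
GAP_LEN = 5                 # inter-record gap, in tenths of an inch (assumed)
TAPE_LEN = 2400 * 12 * 10   # a 2400-foot reel, in tenths of an inch

def records_on_tape(blocking_factor):
    # One gap per *physical* block, not per logical record.
    block_len = RECORD_LEN * blocking_factor + GAP_LEN
    blocks = TAPE_LEN // block_len
    return blocks * blocking_factor

print(records_on_tape(1))    # unblocked: 48000 records
print(records_on_tape(10))   # 10 records per block: 192000 records
```

With these assumed numbers, blocking by a factor of ten quadruples the tape's capacity, because far less length is spent on gaps.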
Direct Access Media
Direct access storage devices, also known as random access
storage devices, are devices that can read or write to a specific
place. Two general categories are fixed head and movable
head devices.
Fixed head drums represent an example of early types of
these devices. They were very fast, but very expensive
and held less data. Fixed head disks are a similar concept,
just on a different plane. Tracks are comprised of concentric
circles on each platter, or disk. While fixed head devices
are faster than movable head devices, they store less data and
are more expensive.
Movable head devices have one read/write head that floats
over the surface of the disk. If disks are “stacked” with
a read/write head for each platter, a virtual cylinder is formed.
Using the same track (i.e., track zero) on each platter to store
data results in faster write times, and illustrates the data
cylinder, or drum, concept.
Access Time
Access time can be affected by as many as three important
factors: seek time, search time, and transfer time. Seek
time (the slowest of the three) is the time it takes to position
the head over the desired track on the disk. Search
time, or rotational delay, is the time it takes to rotate the
disk until the desired record is under the head. Finally,
transfer time, the fastest of the three, is the time it takes
to move or copy the data from the disk to main memory. For
fixed head devices, access time equals search time plus transfer
time, whereas for movable head devices, access time is
seek time plus search time plus transfer time. Still,
movable head devices are more common as they store more data
and cost less.
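The two access-time formulas can be illustrated with a short sketch; the millisecond timings below are hypothetical example values, not figures from any particular drive:

```python
def access_time(seek, search, transfer, fixed_head=False):
    """Access time per the formulas above (all arguments in ms)."""
    # Fixed-head devices have no seek component: a head is
    # already positioned over every track.
    if fixed_head:
        return search + transfer
    return seek + search + transfer

# Assumed example values: 8 ms seek, 4 ms rotational delay, 1 ms transfer.
print(access_time(8, 4, 1, fixed_head=True))   # 5 ms
print(access_time(8, 4, 1, fixed_head=False))  # 13 ms
```

The example makes the trade-off concrete: eliminating the seek cuts access time by more than half here, which is why fixed-head devices were faster despite costing more.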
CD Technology
CD-ROMs are an example of optical disc storage. They
provide high density, reliable storage. The optical disc
drive functions similarly to the magnetic drive. Two of
the most important parameters of optical disc drives are sustained
data transfer rate and average access time. Both of these
have improved over time as the technology has advanced from
single-speed to hex-speed drives. Another important feature
is cache size which acts as a buffer by transferring anticipated
blocks of data to memory for readily available use.
I/O subsystem
The I/O subsystem consists of the I/O channels,
I/O control units, and I/O devices (e.g., disk drives, tape drives,
and printers).
The pieces of the I/O subsystem must work in harmony.
An analogy we will use is a mythical taxicab company dispatcher.
The dispatcher handles incoming calls as they arrive and finds
transportation. The dispatcher organizes the calls in
an order most efficient for his available resources. Once
the order is set the dispatcher communicates with the drivers
who (ideally) pick up and deliver the passengers. As you
might imagine, problems or “conflicts” inevitably occur.
The I/O subsystem’s components function similarly.
The channel keeps up with I/O requests from the CPU and passes
them down the line to the appropriate control unit (taxicab/driver).
The I/O devices act as the vehicles.
I/O Channels
I/O channels are placed between the CPU and the control
units and they synchronize the fast speed of the CPU with the
slower I/O devices. They make it possible for I/O operations
and processor operations to overlap so the CPU and I/O can process
concurrently. Channels use channel programs, which
specify the actions to be performed by the devices and control
data transmission between main memory and the control units.
The channel sends one signal for each function, which is interpreted
by the I/O control unit.
Channels are as fast as the CPU. Thus they are able
to direct several control units by interleaving commands (just
as several taxicab drivers can be controlled by one dispatcher).
Additionally, each control unit can control several devices.
Greater flexibility can be achieved by connecting more than
one channel to a control unit or by connecting more than one
control unit to a device. These multiple paths increase
reliability as redundancy is built into the system.
To keep the device manager running efficiently, three problems
must be resolved:
- The device manager must know which components are free
or busy
- It must be able to accommodate requests that enter during
heavy I/O traffic
- It must accommodate the disparity of speeds between the
CPU and the I/O devices
Channel Status Word
The I/O operation’s completion is signaled by a hardware
flag that is tested by the CPU. This flag is made up of
three bits and resides in the Channel Status Word, a pre-defined
location in main memory. Each component of the I/O subsystem,
channel, control unit, and device, is represented by one of
the three bits.
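Testing such a three-bit status word amounts to a bitwise check. The bit positions and the 0 = free convention below are illustrative assumptions, not the actual hardware encoding:

```python
# Assumed bit layout for the Channel Status Word (illustrative only).
CHANNEL_BIT = 0b100
CONTROL_UNIT_BIT = 0b010
DEVICE_BIT = 0b001

def path_free(csw):
    # The whole I/O path is free only when the channel, control
    # unit, and device bits are all clear (0 = free, 1 = busy here).
    return csw & (CHANNEL_BIT | CONTROL_UNIT_BIT | DEVICE_BIT) == 0

print(path_free(0b000))  # True: every component free
print(path_free(0b010))  # False: control unit busy
```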
Polling and Interrupts
There are two common methods used to test the status of
I/O paths: polling and interrupts.
Polling uses a special machine instruction to routinely
check the flag status. The disadvantage of polling is choosing
the frequency at which the flag is tested: if the flag
is tested too frequently, processor time is wasted; if
it is checked too seldom, the device may remain idle too long.
Interrupts are more efficient for testing the flag.
A hardware mechanism, instead of the CPU, tests the flag during
each machine cycle, so the status of I/O devices is continually
monitored. Devices are serviced according to a pre-defined priority
scheme.
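The cost of polling too often can be seen in a minimal simulation. The `Device` and `poll` names, the cycle counts, and the model itself are all illustrative assumptions:

```python
class Device:
    """Toy device that completes after a fixed number of cycles."""
    def __init__(self, cycles_needed):
        self.remaining = cycles_needed
        self.done = False            # the completion "flag"

    def tick(self):                  # device makes progress each cycle
        if self.remaining > 0:
            self.remaining -= 1
            self.done = self.remaining == 0

def poll(device, check_every):
    """Run until done, testing the flag every `check_every` cycles."""
    cycles = wasted_checks = 0
    while not device.done:
        device.tick()
        cycles += 1
        if cycles % check_every == 0 and not device.done:
            wasted_checks += 1       # tested the flag, still busy

    return cycles, wasted_checks

# Checking every cycle wastes many tests; checking rarely wastes few,
# at the risk (not modeled here) of leaving the device idle longer.
print(poll(Device(100), check_every=1))   # (100, 99)
print(poll(Device(100), check_every=50))  # (100, 1)
```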
Direct memory access (DMA) is a technique that allows a
control unit to access main memory directly. In this scheme
the majority of the required data for a given operation is transferred
to and from memory without CPU intervention. This frees
the CPU to execute other tasks.
Buffers are used to synchronize data movement between relatively
slow devices and the very fast CPU. Buffers are implemented
at various places in the system, and temporarily store data.
I/O Requests
The device manager divides I/O requests into three parts,
with each part handled by a specific software component of the
I/O subsystem (i.e., I/O traffic controller, I/O scheduler,
and I/O device handler).
The I/O traffic controller monitors the status of every
device. Its three main tasks include determining if at
least one path is available, which path to select if more than
one is available, and if all paths are busy, when one will be
open. To achieve these tasks it uses a database containing
the status and connections of each unit in the I/O subsystem.
The I/O scheduler allocates the devices, control units and
channels. Some systems allow the I/O scheduler to give
preferential treatment to I/O requests from “high-priority”
programs. The I/O scheduler synchronizes its work with
the traffic controller to satisfy I/O requests.
The I/O device handler processes I/O interrupts, handles
error conditions, and provides detailed scheduling algorithms,
which are extremely device dependent. Each I/O device
has its own algorithm.
Common Seek Strategies
A seek strategy for the I/O device handler is the predetermined
policy that the device handler uses to allocate access to a
device. It determines the order in which the processes
access the device. Minimal seek time is the goal.
Some of the most common seek strategies include First Come
First Served (FCFS), Shortest Seek Time First (SSTF), and
SCAN and its variations: LOOK, N-step SCAN, C-SCAN, and C-LOOK.
A seek strategy should minimize mechanical (arm) movement,
minimize average response time, and minimize variance in response
time.
FCFS is the simplest algorithm to implement; however, on
average, it does not achieve any of the three goals. FCFS
suffers from extreme arm movement (e.g., servicing tracks
0, 13, 2, … in arrival order).
SSTF is advantageous in that it minimizes overall seek time;
however, it is disadvantageous in that it favors easy-to-reach
requests and postpones traveling to those that are out of the way.
SCAN uses a directional bit to indicate whether arm movement
is toward or away from the center of the disk. The arm
moves methodically back and forth from the outer track to the
inner track servicing requests in its path.
LOOK, also known as the elevator algorithm, is a variation
of SCAN. In this model the arm does not necessarily go
all the way to either edge unless there are requests there.
It effectively “looks ahead” for requests.
N-step SCAN holds all requests until the arm starts on its
way back. Requests that arrive while the arm is in motion
are grouped together for the arm’s next sweep.
With C-SCAN (Circular SCAN), the arm picks up requests only on
its inward sweep. After reaching the innermost track, the arm moves
immediately back to the outermost track and begins servicing requests
as it moves toward the center again. C-SCAN is designed to provide
a more uniform wait time.
C-LOOK is an optimization of C-SCAN; that is, it looks ahead
to the highest track with a request and goes to it and not simply
to the outermost track.
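Three of these strategies can be compared directly by counting total arm movement on one request queue. The starting head position and track numbers below (including the scattered 0, 13, 2 style of queue mentioned earlier) are illustrative:

```python
def fcfs(start, requests):
    moves, pos = 0, start
    for track in requests:          # service strictly in arrival order
        moves += abs(track - pos)
        pos = track
    return moves

def sstf(start, requests):
    moves, pos, pending = 0, start, list(requests)
    while pending:                  # always pick the nearest request
        track = min(pending, key=lambda t: abs(t - pos))
        moves += abs(track - pos)
        pos = track
        pending.remove(track)
    return moves

def look(start, requests):
    # "Elevator": sweep toward higher tracks first, then reverse,
    # going only as far as the last request in each direction.
    moves, pos = 0, start
    up = sorted(t for t in requests if t >= start)
    down = sorted((t for t in requests if t < start), reverse=True)
    for track in up + down:
        moves += abs(track - pos)
        pos = track
    return moves

reqs = [0, 13, 2, 45, 7]   # the kind of scattered queue FCFS handles badly
print(fcfs(10, reqs))      # 115 tracks of arm movement
print(sstf(10, reqs))      # 61
print(look(10, reqs))      # 80
```

On this queue SSTF moves the arm least, but note that it achieves this by serving nearby tracks first; an unlucky far-away request could be postponed indefinitely, which is exactly the weakness SCAN-family strategies address.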
Which is best? It depends!
- FCFS is good for light loads
- SSTF is good for moderate loads
- SCAN is good for light to moderate loads and eliminates the
problem of postponement
- C-SCAN works well with moderate to heavy loads
The best algorithm may be a combination of more than one
scheme.
To successfully control hardware, software known as the
operating system (OS) is used to interface hardware and application
software. Just as several different types of hardware
exist, various OSs are employed on computers, and, of course,
innumerable software applications allow users to complete an
endless variety of tasks. We will now compare two of these
operating systems.
SOFTWARE APPLICATION
Computer peripheral units such as printers, plotters, tape
drives, disk drives, keyboards or terminals are very common
devices. Because of their various characteristics, they need
to be managed by an operating system in order to meet both the
users' and the devices' needs.
In the following portion, we point out the differences in how
two operating systems, Windows 2000 and UNIX,
fulfill these differing needs.
Windows 2000
Windows 2000 is a menu-driven operating system (OS) that
uses a graphical user interface (GUI) as its primary method of
communication with the user. The majority of Windows 2000 is
written in the “C” language, and the graphics component is
written in “C++.” Windows 2000 is a pre-emptive, multitasking,
multithreaded operating system, i.e., it allows a process to
break up into several threads of execution.
The Input/Output (I/O) system within Windows 2000 is packet
driven, which means that every I/O request is represented
by an I/O request packet (IRP) as it moves from one component
to another. The IRP is a data structure that controls the I/O
operation at each step.
What is the device manager within Windows 2000?
The path between the operating system and virtually all
the hardware not on the computer’s motherboard goes through
a special program called a driver. Each device has its own driver,
which acts as a translator between the electrical signals
of the hardware and the programming language of the OS and application
programs.
The I/O manager creates the IRP and passes it to the appropriate
driver, disposing of the packet when the operation is complete.
Conversely, when a driver receives an IRP from the I/O
manager, it performs the operation. Afterwards, it either passes the
IRP back to the I/O manager or passes the packet through the
I/O manager to a different driver for further processing.
In addition, the I/O manager manages buffers for I/O requests,
provides time-out support for drivers, and keeps track of which
file systems are loaded into the OS. One of the main managing
tasks for the I/O manager is to determine which driver is to
be called to process a request. For example, when a process
needs to open a file several times, the I/O manager must call
the appropriate driver each time. To locate the needed information
quickly the next time a process uses the same file, the I/O
manager creates a driver object and a device object. When the file is
opened, the I/O manager also creates a file object and returns a
file handle to the process. Thereafter, whenever the process
uses this file handle, the I/O manager can immediately find
the device object again.
The advantage of using objects to keep track of
information about drivers is that the I/O manager does not have
to know details about individual drivers. Instead, it
follows a pointer to locate the needed driver: the device object
that received an I/O request points back to its driver object.
Moreover, this makes it easy to assign drivers
to control additional or different devices.
UNIX
We explained that Windows 2000 controls each device through
a dedicated driver program. UNIX
treats devices differently, i.e., it treats them as a special
type of file. Stored in the device directory, these special
files within the UNIX OS are given descriptors that
identify the devices and contain information about them.
UNIX, like Windows 2000, is written in “C” and uses a GUI.
Its device drivers are part of the UNIX kernel, and when a UNIX
OS is purchased, it comes with device drivers to operate the
most common peripheral devices. Note, however, that there is
no single standardized version of the UNIX OS, although it is
able to run on all sizes of computers using a wide range of
microprocessors.
Device Classifications
UNIX divides the I/O system into two separate systems: the
block I/O system and the character I/O system. Each device
is identified by two numbers (the major and the minor device
number) and a class. Each class has a configuration table that
contains an array of entry points into the device drivers; this
table is the only connection between the system code and the device
drivers.
The block I/O system is used for devices that can be addressed
as a sequence of 512-byte blocks. This allows the device manager
to use buffering to reduce the I/O traffic. The Least-Recently-Used
(LRU) policy is used to empty a buffer to make room for
a new block. Every time a read command is issued, the I/O buffer
list is searched. If the requested data is already in a buffer,
it is made available to the process. If not, the data is physically
moved from secondary storage to an available buffer.
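The LRU buffer behavior described above can be sketched as follows; the buffer count, block numbers, and class name are assumed for illustration:

```python
from collections import OrderedDict

class BufferCache:
    """Toy LRU block-buffer cache: reads check the buffer list first
    and fall back to 'secondary storage' only on a miss."""

    def __init__(self, nbufs):
        self.nbufs = nbufs
        self.buffers = OrderedDict()   # block number -> 512-byte block
        self.hits = self.misses = 0

    def read(self, block_no):
        if block_no in self.buffers:
            self.hits += 1
            self.buffers.move_to_end(block_no)    # now most recently used
        else:
            self.misses += 1
            if len(self.buffers) >= self.nbufs:
                self.buffers.popitem(last=False)  # evict least recently used
            self.buffers[block_no] = b"\0" * 512  # "fetched" from disk
        return self.buffers[block_no]

cache = BufferCache(nbufs=2)
for block in [1, 2, 1, 3, 2]:
    cache.read(block)
print(cache.hits, cache.misses)   # 1 hit, 4 misses with only 2 buffers
```

Each hit avoids a physical disk transfer entirely, which is the point of the block I/O system's buffering.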
Within the character I/O system are devices that are handled
by drivers implementing character lists. Here is how it operates:
a subroutine puts a character on the list, or queue, and another
subroutine retrieves the character from the list. Some devices
can belong to both classes, e.g., disk drives and tape
drives.
As mentioned earlier, UNIX uses directory files to handle
devices. These special files are used to maintain the hierarchical
structure of the file system. Users are allowed to read information
in directory files but only the system is allowed to modify
them.
The UNIX file management system organizes the disk into
blocks of 512 bytes each and divides the disk into four different
regions:
1. the first region is reserved for booting,
2. the second region contains the size of the disk and the boundaries
of the other regions,
3. the third region includes a list of file definitions, called
the i-list, and
4. the fourth region holds the free blocks available for storage.
In effect, device management within the UNIX OS is file management.
The advantage of keeping device drivers as part of
the OS and not as part of the devices themselves is that UNIX
can be configured to run any device as long as the system administrator
is capable of changing the necessary code.
The choice between these two operating systems depends heavily
on individual needs and abilities. While the Windows family
of products is more widely used, partly due to its ease of use,
UNIX systems trade that simplicity for more control over the
system. Next we will discuss how device management is
handled in hardware rather than software.
HARDWARE APPLICATION
In understanding how a computer’s Input/Output subsystem
handles its devices, we must also understand how those devices
are interfaced with the computer’s Central Processing Unit (CPU).
In a typical personal computer (PC), there are several I/O buses,
which connect the CPU to its other components (except Random
Access Memory (RAM)). Such buses are the “highways” in which
data are moved from one component to another or from component
to CPU or RAM. Essentially, I/O buses are extensions to the
system bus (at a slower speed to accommodate slower devices).
On a modern PC Motherboard, the following I/O buses are typical:
The Peripheral Component Interconnect (PCI) bus is the high-speed
bus of the 1990s. It is used in today’s computers for
connecting adapters such as network controllers, graphics cards,
sound cards, etc. The PCI bus is 32 bits wide, normally runs
at 33 MHz, and supports a maximum throughput of 132 MBps. The
bus is processor independent and can be used with 32- or
64-bit processors.
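The quoted 132 MBps figure follows directly from the bus width and clock rate, assuming one transfer per clock cycle:

```python
bus_width_bytes = 32 // 8        # 32-bit bus = 4 bytes per transfer
clock_mhz = 33                   # nominal PCI clock
throughput_mbps = bus_width_bytes * clock_mhz   # one transfer per cycle
print(throughput_mbps)           # 132 MBps, matching the figure above
```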
The Industry Standard Architecture (ISA) bus is an old,
low-speed bus that is still a mainstay in even the newest computers,
despite the fact that it is largely unchanged since it was expanded
to 16 bits in 1984. It has a bus speed of 8 MHz.
The Accelerated Graphics Port (AGP) bus is now commonly used
solely for the graphics card. AGP is essentially a 66 MHz PCI
bus that has been enhanced with other technologies making
it suitable for the graphics system.
Performance Factors
There are several performance factors that make the PCI
bus an ideal choice for handling a system’s most demanding I/O
requirements. Four of these factors are burst mode, bus mastering,
high bandwidth, and expansion.
Burst Mode: Once an initial address is provided, the PCI
bus can transfer multiple sets of data in a row (a burst of
information).
Bus Mastering: The capability of devices on the PCI bus
to take control of the bus and perform transfers directly. The
PCI design supports full device bus mastering, in that it allows
bus mastering of multiple devices on the bus simultaneously.
It has arbitration circuitry that works to ensure no device
on the bus locks out any other device. At the same time
it allows any given device to use the full bus throughput if
no other device needs to transfer anything.
High Bandwidth: Current PCI specifications call for expandability
to 64 bits and 66 MHz speed which, if implemented, would quadruple
bandwidth over the current design. This design already exists
on non-PC platforms and servers; however, mainstream PCI
is still limited to 32 bits and 33 MHz.
Expansion: The PCI bus offers a great variety of expansion
cards compared to other system I/O buses. The most commonly found
cards are video cards, SCSI host adapters, and high-speed networking
cards. Hard disk drives are also on the PCI bus, but they are normally
connected directly to the motherboard on a PCI system.
All these factors make the PCI bus a good choice for
device integration. As a result, it has become the most
heavily used I/O bus on today’s systems. How are the devices
integrated?
Interface Types
The two most common types of interfaces between the CPU
and a PC’s peripheral devices are SCSI and IDE. SCSI stands
for “Small Computer Systems Interface” while IDE stands for
“Integrated Device Electronics.” Each interface requires the
use of a host adapter whose job is to act as the gateway between
the SCSI or IDE bus and the PC’s internal I/O bus. Normally
that is the PCI bus since it supports the fastest transfer rates
of the PC’s I/O buses. It sends and responds to commands and
transfers data to and from devices on the bus and inside the
computer itself.
SCSI Interface
In the SCSI interface, devices on the SCSI bus talk to the
computer through a single device on the SCSI bus – called the
controller or host adapter. The controller sends and responds
to commands from the CPU via its interface on the PCI bus. The
SCSI bus is able to manage several devices because each one
has the ability to release the bus after being requested to
do a time consuming job, therefore leaving the bus free for
other devices to use for data transfer or receiving commands.
ATA/IDE Interface
The most popular interface used in modern PCs is ATA/IDE,
short for Advanced Technology Attachment/Integrated Device
Electronics (the names are synonymous). It integrates the hard
disk and other IDE devices through an IDE controller (or host
adapter), which is normally built into the PC’s motherboard.
There are both physical limitations and technological issues
that make each interface ideal for certain situations.
SCSI Standards
Since 1986, several SCSI standards have been developed.
As it became increasingly difficult to continually define one
standard, SCSI-3 was established which defined different layers
and command subsets and allowed SCSI sub-standards to evolve
separately. Some of the key changes in SCSI standards have been:
- Increased clock speed, from 5 MHz to 40 MHz in the most recent
standard.
- Increased bus width, from 8 bits (Narrow) to 16 bits (Wide),
essentially doubling the transfer rate.
- Command set enhancements, which made it possible to connect devices
that previously required proprietary controllers (e.g., CD-ROMs).
- Command queuing, which allows multiple outstanding requests between
devices on the bus.
- Double transitioning, which allows two transfers per cycle,
increasing overall throughput.
IDE Standards
Like SCSI, there have been numerous IDE standards as well.
Most modern computers use ATA/ATAPI-4, -5, or -6, more commonly
known as UDMA/33, 66, or 100. The primary advantages over older
standards were:
- ATAPI (AT Attachment Packet Interface), which allowed other devices
such as CD-ROM drives, tape drives, and LS-120 drives to be attached
through a common interface.
- Direct Memory Access (DMA), which relieved the CPU and system bus
of the responsibility of handling memory access.
- Faster clock speeds (33/66/100 MHz), which yielded increased
throughput.
SCSI Performance
In considering performance, there are several key factors
that make SCSI an ideal choice for network servers or powerful
workstations. The regular SCSI 2 system can handle 8 devices
including the adapter itself while SCSI Wide handles 16 devices.
Each device has to be assigned a unique number going from ID
0 to ID 7. The SCSI devices can be internal (installed inside
the PC cabinet) or external. The host adapter is a device itself,
typically occupying ID 7.
SCSI performance is enhanced through its intelligent protocol,
which assures maximum utilization of the bus during data transfers.
The basis of SCSI is a set of commands. Each individual
device holds its own controller, which interprets these commands
through a device driver. The advantage is that all commands
within the SCSI system are handled internally, meaning the CPU
does not have to control the process. With enhancements to the
command sets, SCSI offers the flexibility to connect a multitude
of devices (both internal and external), including hard drives;
CD-ROM, CD-R, and CD-RW drives; Zip drives; tape drives; scanners;
and cameras. Conversely, users are not given as many options
with IDE.
IDE Performance
Older IDE hard drives used Programmed Input/Output (PIO).
This approach placed heavy demands on the CPU whenever the hard
drive needed to transfer data to or from memory. To alleviate
this, current drives use Direct Memory Access (DMA) where the
hard drive has direct access to the memory, freeing up the CPU
to accomplish other tasks.
Unlike SCSI, IDE doesn’t offer the flexibility of multiple
devices. A typical IDE setup consists of 2 IDE channels (normally
designated as primary and secondary) with the option of having
two devices per channel (designated as slave and master). Using
both channels allows for some multitasking, provided the devices
are connected properly. Only the two main controllers (primary
and secondary) are capable of multitasking. As such, the two
channels can process data simultaneously and independently.
Conversely, the two sub-channels (slave and master) do not multitask.
Only one operation is processed at a time, be it on the master
or on the slave channel. Until that operation is complete, the
channel is unavailable to process further commands, hence it
is limited to sequential access.
In the single device environment, the IDE device has a slight
edge over SCSI. An IDE utilizing DMA can quickly transfer data
to memory because there is less overhead involved. In the case
of the single SCSI device, the overhead involved in issuing
and moving commands acts as a slight hindrance.
Which Is Better?
Because the SCSI bus is managed more intelligently than
the IDE bus, SCSI has the clear advantage in the multi-device
environment. Because an IDE drive completes access instances
sequentially, the channel is unavailable for further commands
until the issued command is completed. Conversely, the SCSI
bus can queue numerous commands, allowing any of those commands
to be completed before the first issued command is completed.
The SCSI bus is also able to send commands to each of its devices
simultaneously, allowing for true multitasking.
Besides performance, there are several other factors to
consider in determining which of the two interfaces is best
for a given situation.
Hard drive bandwidth. Though today’s SCSI drives boast maximum
transfer rates of over 200 MB/sec and spindle speeds of 10,000 rpm,
the most advanced SCSI-3 standard (Fast-80DT) only supports a
throughput of 160 MB/sec. Coupled with the high overhead required
by SCSI interfaces, the high rate of throughput in a SCSI system
is not fully sustainable and never fully realized. In comparison,
a high-end IDE drive offers similar sustained transfer rates, since
less overhead is involved.
Price. IDE is far cheaper than SCSI. There is less overhead
involved, and cabling is rather inexpensive since it is shorter
and is not required to support bandwidths as high as SCSI’s.
IDE also enjoys broader support, since a controller is built into
most motherboards. SCSI devices must be interfaced with either a
SCSI host adapter or a SCSI motherboard, both of which can be
costly.
Ease of setup. SCSI is more difficult since its cabling varies
for different standards, and the host adapter and SCSI devices must
be configured and properly terminated. IDE is built into most
current motherboards, so configuration is done in the system
BIOS (firmware); no extra hardware is necessary for an
IDE setup.
Expansion. IDE is limited to at most 4 devices, 2 per IDE channel.
SCSI supports either 7 or 15 devices, depending on whether the bus
is narrow or wide. SCSI also provides an interface for external
devices as well as internal ones; IDE is internal only.
Just as the choice of software depends on individual needs,
so does the choice of hardware involve its own trade-offs.
But the difficulty of choosing hardware is changing: there is a
technology on the horizon that promises to simplify the hardware
aspect of device management.
EMERGING TECHNOLOGY
The Uniform Driver Interface, better known as UDI, is a
software architecture that enables a single driver source to
be used with any hardware platform and any operating system.
Project UDI, an open industry group composed of architects
and engineers from several different OS, system, and I/O providers,
is developing the architecture and the specifications that define
UDI.
Project UDI began in 1993 and has largely been driven as
a grass roots effort amongst engineers from companies such as
Adaptec, Compaq (originally Digital), Hewlett Packard, IBM,
Interphase, Lockheed Martin, NCR, SCO, Sun, and Intel.
Concept Overview
Every operating system has its own set of unique interfaces
to which driver writers have historically written their device
drivers. A UDI environment abstracts these by taking OS-specific
services and projecting OS-neutral, strongly typed procedural
calls for the driver writer to use instead. These interfaces
make up the bulk of the UDI Core Specification. To ensure
compatibility between environments and drivers, versioning of
these interfaces is strongly enforced.
The UDI core is extended through the use of metalanguages.
A UDI metalanguage is a set of interface calls that are specific
to a given technology or device model (e.g. SCSI, LAN or USB).
All UDI metalanguages share common properties and make use of
the generic UDI infrastructure, but are tailored to specific
technologies. Supporting a new technology, then, requires the
definition and implementation of a new metalanguage.
The environment includes interfaces for configuration, diagnostics,
error handling, interrupts, system services and hardware access.
UDI thus creates a completely specified and encapsulated environment
in which UDI-compliant drivers live. Therefore, UDI drivers
are not influenced by OS-specific factors; all those details
are hidden within each UDI implementation on each individual
OS. This is why UDI-compliant drivers are transparently portable:
they are truly OS-neutral.
Summary of Benefits
The UDI architecture provides interfaces and services for
fully portable device drivers. That is, at the source-code level,
any driver can be recompiled to operate on any system. The benefit
to those using UDI drivers is that a UDI driver written for
one OS and platform may be used on any other OS and platform
supporting a UDI environment.
There are many differences among current operating systems
that influence the environment for device drivers and other
kernel modules. Some support kernel threads; others do not.
Some support preemption; others do not. Some support dynamically
loadable kernel modules; others do not. Variations in memory
management and synchronization models also impinge upon the
device driver environment.
Operating system differences will likely increase in the
future, as vendors move to support distributed systems, fault
tolerance/isolation, and other advanced features, using technologies
such as “microkernels”, I/O processors, and user-mode drivers.
UDI is operating system neutral. It abstracts OS services
and execution environments through a set of interfaces that
are designed to hide differences like those listed above. All
OS-specific policy and mechanisms are kept out of the device
driver. This allows UDI to be supported on a wide range of systems
such as traditional OS kernels, client/server LAN OSs, microkernel-based
OSs, and distributed or networked OSs.
Variations in hardware platforms add additional challenges
such as:
• Devices may be connected via different I/O buses, some proprietary,
on different systems.
• Different systems have different types of caches and buffers
in I/O data paths.
• Bus bridges in the path to an I/O device may introduce additional
alignment constraints.
• The “endianness” (byte ordering) of an I/O card may differ
from the endianness of the CPU on which the driver is running.
• Some systems access card registers via special I/O instructions;
others use memory-mapped I/O.
• Interrupt notification and masking mechanisms differ greatly
from system to system.
UDI is platform neutral. It abstracts all Programmed I/O
(PIO), Direct Memory Access (DMA), and interrupt handling through
a set of interfaces that hide the variations listed above.
UDI drivers are written in ISO standard C and do not use
any compiler-specific extensions. Thus, a single driver source
works regardless of compiler, operating system, or hardware
platform.
UDI helps IHVs:
• A reduced number of driver variants means lower development
and maintenance costs.
• Implicit synchronization and other techniques reduce driver
complexity.
• High-performance design features such as resource recycling
and parallelism are easy to achieve with UDI.
UDI helps operating system vendors:
• OS vendors can utilize drivers not directly targeted for their
OS.
• OS vendors can more easily take advantage of IHV-provided
solutions.
• UDI allows a high degree of flexibility in OS implementation.
• UDI allows high-performance implementations (such as copy-avoidance
and resource recycling) while retaining support for a large
number of devices via standardized drivers.
UDI provides location independence for drivers. This allows
drivers to be written without consideration for where the code
must operate (e.g., kernel, application, intra-OS, interrupt
stack, I/O front end). Code regions may even be divided among
multiple nodes in a cluster, if desired.
UDI imposes restrictions on shared memory, which, by design,
prevent the driver from affecting other portions of the system.
This allows the system to isolate and effectively “firewall”
the driver code from the remainder of the OS, improving reliability
and debuggability.
UDI scales well across all target platforms, from the low-end
such as embedded systems and personal computers to high-end
servers and multi-user MP platforms.
UDI provides strict versioning that allows evolution of
the interfaces while preserving binary compatibility of existing
drivers.
UDI facilitates rapid deployment of new I/O technologies
across a broad range of systems and architectures.
UDI provides a portable, flexible, fully functional environment
for device driver implementation, through a uniform set of platform-
and operating system-neutral interfaces. These interfaces define
paths for operating system access to device drivers for configuration,
diagnostics, I/O requests and interrupt handling. They define
paths for device driver access to system services, related device
drivers, and underlying I/O hardware.
The UDI architecture allows developers to support a device
with a single driver, applicable across the family of systems
supporting the UDI environment. This will, in turn, greatly
reduce the engineering cost and accelerate the availability
of I/O solutions for those systems.
CONCLUSION
As you can see, device management overall is an intricate,
involved, and sometimes confusing topic. We have given you the
basics that cover this important task required of both computer
hardware and software. As we stated before, the most important
goal of device management is to ensure the devices connected
to a computer work in harmony.
To that end, we provided two examples of software applications
by comparing different software operating systems, Windows 2000
and Unix, that showed how each addresses device management.
Next, we discussed how hardware was a vital contributor
to successful device management by comparing SCSI and IDE controllers.
That comparison led us into an exciting new technology that
we feel is the future of device management. UDI promises a bright
future to simplify and empower the field of device management.
ENDNOTES
1 www.howstuffworks.com/operating-system5.htm
2 Flynn, Ida M., McHoes, Ann McIver. Understanding Operating
Systems. Second Edition. PWS Publishing Company, Boston,
MA, (c) 1997. p. 302
3 Ibid., p. 306
4 Ibid., p. 314
5 www.KarbosGuide.com
6 Module 6c2; Chapter: About Operating Systems, p.8
7 Flynn, Ida M., McHoes, Ann McIver. Understanding Operating
Systems. Second Edition. PWS Publishing Company, Boston,
MA, (c) 1997. pp. 316-317
8 Ibid., p. 348
9 Ibid., p. 333
10 www.PCGuide.com
11 Flynn, Ida M., McHoes, Ann McIver. Understanding Operating
Systems. Second Edition. PWS Publishing Company, Boston,
MA, (c) 1997. p. 351
12 Scott Mueller, Upgrading and Repairing PCs, 12th ed., Que,
2000
13 www.pcguide.com
14 www.karbosguide.com
15 http://www.pcguide.com/ref/mbsys/buses/types/older.htm
16 http://www.pcguide.com/ref/mbsys/buses/types/pci.htm
17 This is a brief overview and summary of the Uniform Driver
Interface (UDI) that was abstracted from various UDI papers posted
on the project UDI website (www.project-UDI.org). Credit
is attributed to the copyright holders for any original concepts
and ideas presented.
18 Introduction to UDI (Technical White Paper) Version 1.0, http://www.projectudi.org/Docs/pdf/UDI_tech_white_paper.pdf,
August 31, 1999
19 Uniform Driver Interface Management Overview, http://www.projectudi.org/Docs/pdf/UDI_management_overview.pdf,
February 4, 1999
20 One-page UDI Data Sheet, http://www.projectudi.org/Docs/pdf/UDI_data_sheet.pdf,
August 13, 1999
21 UDI FAQ (Frequently Asked Questions) [HTML], http://www.projectudi.org/faq.html,
August 13, 1999
22 Intel Corporation White Paper, “UDI and I2O: Complementary
Approaches to Portable, High-Performance I/O,” http://developer.intel.com/go/dev_guides,
1999
ACRONYM SHEET
AGP: Accelerated Graphics Port
API: Application Program Interface
ATA: Advanced Technology Attachment
ATAPI: AT Attachment Packet Interface
CD-ROM: Compact Disc Read Only Memory
CD-ROM R: Compact Disc Read Only Memory - Recordable
CD-ROM RW: Compact Disc Read Only Memory - ReWritable
CPU: Central Processing Unit
CSW: Channel Status Word
DMA: Direct Memory Access
FCFS: First Come, First Served
H/W: Hardware
IDE: Integrated Drive Electronics
IHV: Independent Hardware Vendor
I/O: Input/Output
IRP: I/O Request Packet
ISA: Industry Standard Architecture
LRU: Least Recently Used
MBPS: Megabits Per Second
OS: Operating System
OSV: Operating System Vendor
PC: Personal computer
PCI: Peripheral Component Interconnect
PIO: Programmed Input/Output
RAM: Random Access memory
SCO: Santa Cruz Operation
SCSI: Small Computer Systems Interface
SSTF: Shortest Seek Time First
S/W: Software
UDI: Uniform Driver Interface
UDMA: Ultra DMA
USB: Universal Serial Bus