2. What PnP Should Do: Allocate "Bus-Resources"
2.1 What is Plug-and-Play (PnP)?
If you don't understand this section, read the next section, Hardware Devices and Communication with them.
Oversimplified, Plug-and-Play tells the software (device drivers) where to find various pieces of hardware (devices) such as modems, network cards, sound cards, etc. Plug-and-Play's task is to match up physical devices with the software (device drivers) that operates them and to establish channels of communication between each physical device and its driver. In order to achieve this, PnP allocates and sets the following "bus-resources" in hardware: I/O addresses, memory regions, IRQs, and DMA channels (LPC and ISA buses only). These 4 things are sometimes called "1st order resources" or just "resources". PnP maintains a record of what it's done and allows device drivers to get this information. If you don't understand what these 4 bus-resources are, read the following subsections of this HOWTO: I/O Addresses, IRQs, DMA Channels, Memory Regions. An article in Linux Gazette regarding 3 of these bus-resources is Introduction to IRQs, DMAs and Base Addresses. Once these bus-resources have been assigned (and if the correct driver is installed), the actual driver and the "files" for it in the /dev directory are ready to use.
This PnP assignment of bus-resources is sometimes called "configuring" but it is only a low-level type of configuring. The /etc directory has many configuration files but almost all of them are not for PnP configuring. So most of the configuring of hardware devices has nothing to do with PnP or bus-resources. For example, initializing a modem with an "init string" or setting its speed is not PnP. Thus when talking about PnP, "configuring" means only a certain type of configuring. While other documentation (such as for MS Windows) simply calls bus-resources "resources", I sometimes use the term "bus-resources" instead of just "resources" so as to distinguish it from the multitude of other kinds of resources.
PnP is a process which is done by various software and hardware. If there were just one program that handled PnP in Linux, it would be simple. But with Linux each device driver does its own PnP, using software supplied by the kernel. The BIOS hardware of the PC does PnP when the PC is first powered up. And there's a lot more to it than this.
2.2 Hardware Devices and Communication with them
A computer consists of a CPU/processor to do the computing and RAM memory to store programs and data (for fast access). In addition, there are a number of devices such as various kinds of disk-drives, a video card, a keyboard, network devices, modem cards, sound devices, the USB bus, serial and parallel ports, etc. In olden days most devices were on cards inserted into slots in the PC. Today, many devices that were formerly cards are now on-board since they are contained in chips on the motherboard. There is also a power supply to provide electric energy, various buses on a motherboard to connect the devices to the CPU, and a case to put all this into.
Cards which plug into the motherboard may contain more than one device. Memory chips are also sometimes considered to be devices but are not plug-and-play in the sense used in this HOWTO.
For the computer system to work right, each device must be under the control of its "device driver". This is software which is a part of the operating system (perhaps loaded as a module) and runs on the CPU. Device drivers are associated with "special files" in the /dev directory although they are not really files. They have names such as hda3 (third partition on hard drive a), ttyS1 (the second serial port), eth0 (the first ethernet card), etc.
The eth0 device is for an ethernet card (NIC). Formerly it was /dev/eth0 but it's now just a virtual device in the kernel. What eth0 refers to depends on the type of ethernet card you have. If the driver is a module, this assignment is likely in an internal kernel table but might be found in /etc/modules.conf (as an "alias"). For example, if you have an ethernet card that uses the "tulip" chip you could put "alias eth0 tulip" into /etc/modules.conf so that when your computer asks for eth0 it finds the tulip driver. However, modern kernels can usually find the right driver module so you seldom need to specify it yourself.
To control a device, the CPU (under the control of the device driver) sends commands and data to, and reads status and data from the various devices. In order to do this each device driver must know the address of the device it controls. Knowing such an address is equivalent to setting up a communication channel, even though the physical "channel" is actually the data bus inside the PC which is shared with many other devices.
This communication channel is actually a little more complex than described above. An "address" is actually a range of addresses so that sometimes the word "range" is used instead of "address". There could even be more than one range (with no overlapping) for a single device. Also, there is a reverse part of the channel (known as interrupts) which allows devices to send an urgent "help" request to their device driver.
2.3 Addresses
The PCI bus has 3 address spaces: I/O, main memory (IO memory), and configuration. The old ISA bus lacks a genuine "configuration" address space. Only the I/O and IO memory spaces are used for device IO. Configuration addresses are fixed and can't be changed so they don't need to be allocated. For more details see PCI Configuration Address Space.
When the CPU wants to access a device, it puts the device's address on a major bus of the computer (for PCI: the address/data bus). All types of addresses (such as both I/O and main memory) share the same bus inside the PC. But the presence or absence of voltage on certain dedicated wires in the PC's bus tells which "space" an address is in: I/O, main memory, (see Memory Ranges), or configuration (PCI only). This is a little oversimplified since telling a PCI device that it's a configuration space access is actually more complex than described above. See PCI Configuration Address Space for details. See Address Details for more details on addressing in general.
The addresses of a device are stored in its registers in the physical device. They can be changed by software and they can be disabled so that the device has no address at all. The exception is the PCI configuration address, which can't be changed or disabled.
2.4 I/O Addresses (principles relevant to other resources too)
Devices were originally located in I/O address space but today they may use space in main memory. An I/O address is sometimes just called "I/O", "IO", "i/o" or "io". The terms "I/O port" or "I/O range" are also used. Don't confuse these IO ports with "IO memory" located in main memory. There are two main steps to allocate the I/O addresses (or some other bus-resources such as interrupts on the ISA bus):
- Set the I/O address, etc. in the hardware (in one of its registers)
- Let its device driver know what this I/O address, etc. is
Often, the device driver does both of these (sort of). The device driver doesn't actually need to set an I/O address if it finds out that the address has been previously set (perhaps by the BIOS) and is willing to accept that address. Once the driver has either found out what address was previously set or has set the address itself, it obviously knows what the address is, so there is no need to tell it --it already knows.
The two step process above (1. Set the address in the hardware. 2. Let the driver know it.) is something like the two part problem of finding someone's house number on a street. Someone must install a number on the front of the house so that it may be found and then people who might want to go to this address must obtain (and write down) this house number so that they can find the house. For computers, the device hardware must first get its address put into a special register in its hardware (put up the house number) and then the device driver must obtain this address (write the house number in its address book). Both of these must be done, either automatically by software or by entering the data manually into configuration files. Problems may occur when only one of them gets done right.
For manual PnP configuration some people make the mistake of doing only one of these two steps and then wonder why the computer can't find the device. For example, they may use "setserial" to assign an address to a serial port without realizing that this only tells the driver an address. It doesn't set the address in the serial port hardware itself. If you tell the driver the wrong address, you're in trouble. Another way to tell the driver is to give the address as an option to a kernel module (device driver), as sketched below. If what you tell it is wrong, there could be problems. A smart driver may detect how the hardware is actually set and reject the incorrect information supplied by the option (or at least issue an error message).
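Here's a rough sketch, in kernel C, of how a driver might accept such options (the driver name "example", the parameter names, and the default values 0x3e8 and 5 are all made up for illustration). Note that the options only tell the driver; nothing here changes the address actually set in the card's registers:

    #include <linux/init.h>
    #include <linux/module.h>
    #include <linux/moduleparam.h>

    static int io = 0x3e8;   /* hypothetical default I/O address */
    static int irq = 5;      /* hypothetical default IRQ */
    module_param(io, int, 0444);
    module_param(irq, int, 0444);
    MODULE_PARM_DESC(io, "I/O base address of the card");
    MODULE_PARM_DESC(irq, "IRQ number of the card");

    static int __init example_init(void)
    {
            /* The driver only learns what the user told it; the card's
             * own registers are untouched by this. */
            pr_info("example: told to use io=0x%x irq=%d\n", io, irq);
            return 0;
    }
    static void __exit example_exit(void) { }

    module_init(example_init);
    module_exit(example_exit);
    MODULE_LICENSE("GPL");

Such a (hypothetical) module would be loaded with something like "modprobe example io=0x3e8 irq=5".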
An obvious requirement is that before the device driver can use an address it must first be set in the physical device (such as a card). Since device drivers often start up soon after you start the computer, they sometimes try to access a card (to see if it's there, etc.) before the address has been set in the card by a PnP configuration program. Then you see an error message that they can't find the card even though it's there (it just doesn't have an address yet).
What was said in the last few paragraphs regarding I/O addresses applies with equal force to most other bus-resources: Memory Ranges, IRQs --Overview and DMA Channels. What these are will be explained in the next 3 sections. The exception is that interrupts on the PCI bus are not set by card registers but are instead routed (mapped) to IRQs by a chip on the motherboard. Then the IRQ a PCI card is routed to is written into the card's register for information purposes only.
To see what IO addresses are used on your PC, look at the /proc/ioports file.
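To illustrate the driver's side of the two steps above, here is a minimal kernel-C sketch of a driver claiming an I/O range (so it shows up in /proc/ioports) and then reading a register from the card. The kernel functions (request_region, inb) are real; the "mycard" name, base address and register offset are made up:

    #include <linux/errno.h>
    #include <linux/ioport.h>
    #include <linux/types.h>
    #include <asm/io.h>

    #define MYCARD_BASE 0x300   /* hypothetical I/O base address */
    #define MYCARD_LEN  8       /* hypothetical size of the I/O range */

    static int mycard_claim_ports(void)
    {
            u8 status;

            /* Reserve the range; it then appears in /proc/ioports.  This
             * fails if another driver already claimed it (a conflict). */
            if (!request_region(MYCARD_BASE, MYCARD_LEN, "mycard"))
                    return -EBUSY;

            /* Read a (hypothetical) status register at base+1 */
            status = inb(MYCARD_BASE + 1);
            return status;
    }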
2.5 Memory Ranges
Many devices are assigned address space in main memory. It's sometimes called "shared memory" or "memory-mapped IO" or "IO memory". This memory is physically located inside the physical device but the computer accesses it just like it would access memory on memory chips. When discussing bus-resources it's often just called "memory", "mem", or "iomem". In addition to using such "memory", such a device might also use conventional IO address space. To see what mem is in use on your computer, look at /proc/iomem. This "file" includes the memory used by your ordinary RAM memory chips so it shows memory allocation in general and not just iomem allocation. If you see a strange number instead of a name, it's likely the number of a PCI device which you can verify by typing "lspci".
When you insert a card that uses iomem, you are in effect also inserting a memory module for main memory. A high address is selected for it by PnP so that it doesn't conflict with the main memory modules (chips). This memory can either be ROM (Read Only Memory) or shared memory. Shared memory is shared between the device and the CPU (running the device driver) just as IO address space is shared between the device and the CPU. This shared memory serves as a means of data "transfer" between the device and main memory. It's Input-Output (IO) but it's not done in IO space. Both the card and the device driver need to know the memory range.
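Here is a minimal kernel-C sketch of a driver using such iomem: it reserves the range (so it appears in /proc/iomem), maps it, and then reads and writes it like ordinary memory. The physical address, the 1 MB size, and the name "mycard" are made up; the kernel functions are real:

    #include <linux/io.h>
    #include <linux/ioport.h>

    #define MYCARD_MEM  0xe9000000UL   /* hypothetical address assigned by PnP */
    #define MYCARD_SIZE 0x100000UL     /* 1 MB of shared memory on the card */

    static void __iomem *mycard_map(void)
    {
            void __iomem *base;

            /* Reserve the range so it shows in /proc/iomem and can't be
             * claimed by another driver */
            if (!request_mem_region(MYCARD_MEM, MYCARD_SIZE, "mycard"))
                    return NULL;

            /* Map the card's memory into kernel virtual address space */
            base = ioremap(MYCARD_MEM, MYCARD_SIZE);
            if (base)
                    writel(0x1, base);   /* write the first word of the card's memory */
            return base;
    }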
ROM (Read Only Memory) on cards is a different kind of iomem. It is likely a program (perhaps a device driver) which will be used with the device. It could be initialization code so that a device driver is still required. Hopefully, it will work with Linux and not just MS Windows. It may need to be shadowed which means that it is copied to your main memory chips in order to run faster. Once it's shadowed it's no longer "read only".
2.6 IRQs --Overview
After reading this you may want to read Interrupts --Details for many more details. The following is intentionally oversimplified: Besides the address, there is also an interrupt number to deal with (such as IRQ 5). It's called an IRQ (Interrupt ReQuest) number or just an "irq" for short. We already mentioned above that the device driver must know the address of a card in order to be able to communicate with it.
But what about communication in the opposite direction? Suppose the device needs to tell its device driver something immediately. For example, the device may be receiving a lot of bytes destined for main memory and its buffer used to store these bytes is almost full. Thus the device needs to tell its driver to fetch these bytes at once before the buffer overflows from the incoming flow of bytes. Another example is to signal the driver that the device has finished sending out a bunch of bytes and is now waiting for some more bytes from the driver so that it can send them too.
How should the device rapidly signal its driver? It may not be able to use the main data bus since it's likely already in use. Instead it puts a voltage on a dedicated interrupt wire (also called line or trace) which is often reserved for that device alone. This voltage signal is called an Interrupt ReQuest (IRQ) or just an "interrupt" for short. There are the equivalent of 16 (or 24, etc.) such wires in a PC and each wire leads (indirectly) to a certain device driver. Each wire has a unique IRQ (Interrupt ReQuest) number. The device must put its interrupt on the correct wire and the device driver must listen for the interrupt on the correct wire. Which wire the device sends such "help requests" on is determined by the IRQ number stored in the device. This same IRQ number must be known to the device driver so that the device driver knows which IRQ line to listen on.
Once the device driver gets the interrupt from the device it must find out why the interrupt was issued and take appropriate action to service the interrupt. On the ISA bus, each device usually needs its own unique IRQ number. For the PCI bus and other special cases, the sharing of IRQs is allowed (two or more PCI devices may have the same IRQ number). Also, for PCI, each PCI device has a fixed "PCI Interrupt" wire. But a programmable routing chip maps the PCI wires to ISA-type interrupts. See Interrupts --Details for details on how all the above works.
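As an illustration, here's roughly what "listening on an IRQ" looks like in a kernel-C driver. The IRQ number and the name "mycard" are made up, and the handler prototype has changed slightly between kernel versions (this is the newer two-argument form):

    #include <linux/interrupt.h>

    #define MYCARD_IRQ 5   /* hypothetical IRQ; must match what the hardware uses */

    /* Called by the kernel when an interrupt arrives on our IRQ line */
    static irqreturn_t mycard_interrupt(int irq, void *dev_id)
    {
            /* Read the card's status register to find out why it interrupted,
             * service it (e.g. empty its buffer), then acknowledge it. */
            return IRQ_HANDLED;
    }

    static int mycard_setup_irq(void *mycard)
    {
            /* IRQF_SHARED permits sharing the line (normal on the PCI bus);
             * ISA interrupts usually cannot be shared. */
            return request_irq(MYCARD_IRQ, mycard_interrupt, IRQF_SHARED,
                               "mycard", mycard);
    }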
2.7 DMA (Direct Memory Access) or Bus Mastering
For the PCI bus, DMA and Bus Mastering mean the same thing. Prior to the PCI bus, Bus Mastering was rare and DMA worked differently and was slow. Direct Memory Access (DMA) is where a device is allowed to take over the main computer bus from the CPU and transfer bytes directly to main memory or to some other device. Normally the CPU would make a transfer from a device to main memory in a two step process:
- reading a chunk of bytes from the I/O memory space of the device and putting these bytes into the CPU itself
- writing these bytes from the CPU to main memory
With DMA it's a one step process of sending the bytes directly from the device to memory. The device must have DMA capabilities built into its hardware and thus not all devices can do DMA. While DMA is going on, the CPU can't do too much since the main bus is being used by the DMA transfer.
The old ISA bus can do slow DMA while the PCI bus does "DMA" by Bus Mastering. The LPC bus has both the old DMA and the new DMA (bus mastering). On the PCI bus, what more precisely should be called "bus mastering" is often called "Ultra DMA", "BM-DMA", "udma", or just "DMA". Bus mastering allows devices to temporarily become bus masters and to transfer bytes almost as if the bus master were the CPU. It doesn't use any channel numbers since the organization of the PCI bus is such that the PCI hardware knows which device is currently the bus master and which device is requesting to become a bus master. Thus there is no resource allocation of DMA channels for the PCI bus and no DMA channel resources exist for this bus. The LPC (Low Pin Count) bus is supposed to be configured by the BIOS so users shouldn't need to concern themselves with its DMA channels.
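As a rough kernel-C sketch (the device and buffer size are hypothetical, and real drivers vary), enabling bus mastering on a PCI device looks something like the following. Note that no DMA channel number appears anywhere:

    #include <linux/dma-mapping.h>
    #include <linux/pci.h>

    /* Allocate a buffer in main memory that the (hypothetical) card will
     * read/write directly by mastering the bus */
    static void *mycard_dma_setup(struct pci_dev *pdev, dma_addr_t *bus_addr)
    {
            pci_set_master(pdev);   /* allow the card to become a bus master */

            /* Tell the kernel the card can address 32 bits of main memory */
            if (dma_set_mask(&pdev->dev, DMA_BIT_MASK(32)))
                    return NULL;

            /* The buffer the card transfers into, with no DMA channel involved */
            return dma_alloc_coherent(&pdev->dev, 4096, bus_addr, GFP_KERNEL);
    }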
2.8 DMA Channels (not for PCI bus)
This is only for the LPC bus and the old ISA bus. When a device wants to do DMA it issues a DMA-request using dedicated DMA request wires much like an interrupt request. DMA actually could have been handled by using interrupts but this would introduce some delays so it's faster to do it by having a special type of interrupt known as a DMA-request. Like interrupts, DMA-requests are numbered so as to identify which device is making the request. This number is called a DMA-channel. Since DMA transfers all use the main bus (and only one can run at a time) they all actually use the same channel for data flow but the "DMA channel" number serves to identify who is using the "channel". Hardware registers exist on the motherboard which store the current status of each "channel". Thus in order to issue a DMA-request, the device must know its DMA-channel number which must be stored in a special register on the physical device.
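Here's a kernel-C sketch of claiming and programming an ISA/LPC DMA channel, in the style used by old ISA drivers. The channel number 3 and the name "mycard" are made up; the same channel number would also have to be set in the card's own registers:

    #include <linux/errno.h>
    #include <asm/dma.h>

    #define MYCARD_DMA 3   /* hypothetical DMA channel; must also be set in the card */

    static int mycard_start_dma(unsigned long buf_bus_addr, unsigned int count)
    {
            unsigned long flags;

            /* Claim the channel; fails if another driver already owns it */
            if (request_dma(MYCARD_DMA, "mycard"))
                    return -EBUSY;

            /* Program the motherboard's DMA controller for one transfer */
            flags = claim_dma_lock();
            disable_dma(MYCARD_DMA);
            clear_dma_ff(MYCARD_DMA);
            set_dma_mode(MYCARD_DMA, DMA_MODE_READ);   /* device -> main memory */
            set_dma_addr(MYCARD_DMA, buf_bus_addr);
            set_dma_count(MYCARD_DMA, count);
            enable_dma(MYCARD_DMA);
            release_dma_lock(flags);
            return 0;
    }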
2.9 "Resources" for both Device and Driver
Thus device drivers must be "attached" in some way to the hardware they control. This is done by allocating bus-resources (I/O, Memory, IRQs, DMAs) to the physical device and letting the device driver find out about it. For example, a serial port uses only 2 resources: an IRQ and an I/O address. Both of these values must be supplied to the device driver and the physical device. The driver (and its device) is also given a name in the /dev directory (such as ttyS1). The address and IRQ number are stored by the physical device in configuration registers on its card (or in a chip on the motherboard). Old hardware (in the mid 1990's) used switches (or jumpers) to physically set the IRQ and address in the hardware. This setting remained fixed until someone removed the computer's cover and moved the jumpers.
But for the case of PnP (no jumpers), the configuration register data is usually lost when the PC is powered down (turned off) so that the bus-resource data must be supplied to each device anew each time the PC is powered on.
2.10 Resources are Limited
Ideal Computers
The architecture of the PC provides only a limited number of resources: IRQs, DMA channels, I/O addresses, and memory regions. If there were only a limited number of devices and they all used standardized bus-resource values (such as unique I/O addresses and IRQ numbers) there would be no problem of attaching device drivers to devices. Each device would have fixed resources which would not conflict with any other device on your computer. No two devices would have the same addresses, there would be no IRQ conflicts on the ISA bus, etc. Each driver would be programmed with the unique addresses, IRQ, etc. hard-coded into the program. Life would be simple.
Another way to prevent address conflicts would be to have each card's slot number included as part of the address. Thus there could be no address conflict between two different cards (since they are in different slots). Card design would not allow address conflicts between different functions of the card. It turns out that the configuration address space (used for resource inquiry and assignment) actually does this. But it's not done for I/O addresses nor memory regions. Sharing IRQs as on the PCI bus also avoids conflicts but may cause other problems.
Real Computers
But PC architecture has conflict problems. The increase in the number of devices (including multiple devices of the same type) has tended to increase potential conflicts. At the same time, the introduction of the PCI bus, where two or more devices can share the same interrupt, and the introduction of more interrupts, has tended to reduce conflicts. The overall result, due to going to PCI, has been a reduction in conflicts since the scarcest resource is IRQs. However, even on the PCI bus it's more efficient to avoid IRQ sharing. In some cases where interrupts happen in rapid succession and must be acted on fast (like audio), sharing can cause degradation in performance. So it's not good to assign all PCI devices the same IRQ; the assignment needs to be balanced. Yet some people find that all their PCI devices are on the same IRQ.
So devices need to have some flexibility so that they can be set to whatever address, IRQ, etc. is needed to avoid conflicts and achieve balancing. But some IRQs and addresses are pretty standard, such as the ones for the clock and keyboard. These don't need such flexibility.
Besides the problem of conflicting allocation of bus-resources, there is a problem of making a mistake in telling the device driver what the bus-resources are. This is more likely to happen for the case of old-fashioned manual configuration where the user types in the resources used into a configuration file stored on the harddrive. This often worked OK when resources were set by jumpers on the cards (provided the user knew how they were set and made no mistakes in typing this data to configuration files). But with resources being set by PnP software, they may not always get set the same and this may mean trouble for any manual configuration where the user types in the values of bus-resources that were set by PnP.
The allocation of bus-resources, if done correctly, establishes non-conflicting channels of communication between physical hardware and their device drivers. For example, if a certain I/O address range (resource) is allocated to both a device driver and a piece of hardware, then this has established a one-way communication channel between them. The driver may send commands and other info to the device. It's actually more than one-way communications since the driver may get information from the device by reading its registers. But the device can't initiate any communication this way. To initiate communication the device needs an IRQ so it can send interrupts to its driver. This creates a two-way communication channel where both the driver and the physical device can initiate communication.
2.11 Second Introduction to PnP
The term Plug-and-Play (PnP) has various meanings. In the broad sense it is just auto-configuration where one just plugs in a device and it configures itself. In the sense used in this HOWTO, PnP means configuring PnP bus-resources (setting them in the physical devices) and letting the device drivers know about it. For the case of Linux, it is often just a driver determining how the BIOS has set bus-resources and, if necessary, the driver giving a command to change (reset) the bus-resources. "PnP" often just means PnP on the ISA bus, so the message from isapnp: "No Plug and Play device found" just means that no ISA PnP devices were found. The standard PCI specifications (which were invented before the term "PnP" was coined) provide the equivalent of PnP for the PCI bus.
PnP matches up devices with their device drivers and specifies their communication channels (by allocating bus-resources). It electronically communicates with configuration registers located inside the physical devices using a standardized protocol. On the ISA bus before Plug-and-Play, the bus-resources were formerly set in hardware devices by jumpers or switches. Sometimes the bus-resources could be set into the hardware electronically by a driver (usually written only for an MS OS but in rare cases supported by a Linux driver). This was something like PnP but there was no standardized protocol, so it wasn't really PnP. Some cards had jumper settings which could be overridden by such software. For Linux before PnP, most software drivers were assigned bus-resources by configuration files (or the like) or by probing for the device at addresses where it was expected to reside. These methods are still in use today to allow Linux to use old non-PnP hardware. And sometimes these old methods are still used today on PnP hardware (after, say, the BIOS has assigned resources to the hardware by PnP methods).
The PCI bus was PnP-like from the beginning, but it's not usually called PnP or "plug and play", with the result that PnP often means PnP on the ISA bus. But PnP in this document usually means PnP on either the ISA or PCI bus.
2.12 How PnP Works (simplified)
Here's how PnP should work in theory. The hypothetical PnP configuration program finds all PnP devices and asks each what bus-resources it needs. Then it checks what bus-resources (IRQs, etc.) it has to give away. Of course, if it has reserved bus-resources used by non-PnP (legacy) devices (if it knows about them) it doesn't give these away. Then it uses some criteria (not specified by PnP specifications) to give out the bus-resources so that there are no conflicts and so that all devices get what they need (if possible). It then indirectly tells each physical device what bus-resources are assigned to it and the devices set themselves up to use only the assigned bus-resources. Then the device drivers somehow find out what bus-resources their devices use and are thus able to communicate effectively with the devices they control.
For example, suppose a card needs one interrupt (IRQ number) and 1 MB of shared memory. The PnP program reads this request from the configuration registers on the card. It then assigns the card IRQ5 and 1 MB of memory address space, starting at address 0xe9000000. The PnP program also reads identifying information from the card telling what type of device it is, its ID number, etc. Then it directly or indirectly tells the appropriate device driver what it's done. If it's the driver itself that is doing the PnP, then there's no need to find a driver for the device (since its driver is already running). Otherwise a suitable device driver needs to be found and sooner or later told how its device is configured.
It's not always this simple since the card (or routing table for PCI) may specify that it can only use certain IRQ numbers or that the 1 MB of memory must lie within a certain range of addresses. The details are different for the PCI and ISA buses with more complexity on the ISA bus.
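To make the card's "configuration registers" a bit more concrete, here is a small kernel-C sketch that reads a PCI card's identity, its first address range (BAR0), and the IRQ routed to it, straight from standard configuration-space registers. The function and register names are real kernel/PCI names; the code is only illustrative:

    #include <linux/kernel.h>
    #include <linux/pci.h>

    /* Print a device's identity, first address range and routed IRQ,
     * read from its configuration registers */
    static void show_pci_config(struct pci_dev *dev)
    {
            u16 vendor, device;
            u32 bar0;
            u8 irq;

            pci_read_config_word(dev, PCI_VENDOR_ID, &vendor);
            pci_read_config_word(dev, PCI_DEVICE_ID, &device);
            pci_read_config_dword(dev, PCI_BASE_ADDRESS_0, &bar0);
            pci_read_config_byte(dev, PCI_INTERRUPT_LINE, &irq);

            printk(KERN_INFO "PCI %04x:%04x  BAR0=0x%08x  IRQ=%u\n",
                   vendor, device, bar0, irq);
    }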
One way commonly used to allocate resources is to start with one device and allocate it bus-resources. Then do the same for the next device, etc. If all devices end up with resources allocated without conflicts, then all is OK. But if allocating a needed resource would create a conflict, then it's necessary to go back and try to make some changes in previous allocations so as to obtain the needed bus-resource. This is called rebalancing. Linux doesn't do rebalancing but MS Windows does in some cases. For Linux, all this is done by the BIOS and/or kernel and/or device drivers. In Linux, the device driver doesn't get its final allocation of resources until the driver starts up, so one way to avoid conflicts is just not to start any device that might cause a conflict. However, the BIOS often allocates resources to the physical device before Linux is even booted, and the kernel checks PCI devices for address conflicts at boot-time.
There are some shortcuts that PnP software may use. One is to keep track of how it assigned bus-resources at the last configuration (when the computer was last used) and reuse this. BIOSes do this, as does MS Windows, but standard Linux doesn't. But in a way it does, since it often uses what the BIOS has done. Windows stores this info in its "Registry" on the hard disk and a PnP/PCI BIOS stores it in non-volatile memory in your PC (known as the ESCD; see The BIOS's ESCD Database). Some say that not having a registry (like Linux) is better since with Windows, the registry may get corrupted and is difficult to edit. But PnP in Linux has problems too.
While MS Windows (except for Windows 3.x and NT4) was PnP, Linux was not originally a PnP OS but has been gradually becoming one. PnP originally worked for Linux because a PnP BIOS would configure the bus-resources and the device drivers would find out (using programs supplied by the Linux kernel) what the BIOS had done. Today, most drivers can issue commands to do their own bus-resource configuring and don't need to always rely on the BIOS. Unfortunately a driver could grab a bus-resource which another device will need later on. Some device drivers may store the last configuration they used in a configuration file and use it the next time the computer is powered on.
If the device hardware remembered its previous configuration, then there wouldn't be any hardware to PnP-configure at the next boot-time. But hardware seems to forget its configuration when the power is turned off. Some devices contain a default configuration (but not necessarily the last one used). Thus a PnP device needs to be re-configured each time the PC is powered on. Also, if a new device has been added, then it too needs to be configured. Allocating bus-resources to this new device might involve taking some bus-resources away from an existing device and assigning the existing device alternative bus-resources that it can use instead. At present, Linux can't allocate with this sophistication (and MS Windows XP may not be able to do it either).
2.13 Starting Up the PC
When the PC is first turned on the BIOS chip runs its program to get the computer started (the first step is to check out the motherboard hardware). If the operating system is stored on the hard-drive (as it normally is) then the BIOS must know about the hard-drive. If the hard-drive is PnP then the BIOS may use PnP methods to find it. Also, in order to permit the user to manually configure the BIOS's CMOS and respond to error messages when the computer starts up, a screen (video card) and keyboard are also required. Thus the BIOS must always PnP-configure devices needed to load the operating system from the hard-drive.
Once the BIOS has identified the hard-drive, the video card, and the keyboard it is ready to start booting (loading the operating system into memory from the hard-disk). If you've told the BIOS that you have a PnP operating system (PnP OS), it should start booting the PC as above and let the operating system finish the PnP configuring. Otherwise, a PnP BIOS will (prior to booting) likely try to do the rest of the PnP configuring of devices (but not inform the device drivers of what it did). But the drivers can still find this out by utilizing functions available in the Linux kernel.
2.14 Buses
To see what's on the PCI bus, type lspci or lspci -vv. Or type scanpci -v for the same information in numeric code format, where a device is shown by number (such as "device 0x122d") instead of by name. In rare cases, scanpci will find a device that lspci can't find.
The boot-time messages on your display show devices which have been found on various buses (use shift-PageUp to back up through them). See Boot-time Messages.
ISA is the old bus of the old IBM-compatible PCs while PCI is a newer and faster bus from Intel. The PCI bus was designed for what is today called PnP. This makes it easy (as compared to the ISA bus) to find out how PnP bus-resources have been assigned to hardware devices.
For the ISA bus there was a real problem with implementing PnP since no one had PnP in mind when the ISA bus was designed and there are almost no I/O addresses available for PnP to use for sending configuration info to a physical device. As a result, the way PnP was shoehorned onto the ISA bus is very complicated. Whole books have been written about it. See PnP Book. Among other things, it requires that each PnP device be assigned a temporary "handle" by the PnP program so that one may address it for PnP configuring. Assigning these "handles" is called "isolation". See ISA Isolation for the complex details.
As the ISA bus becomes extinct, PnP will be a little easier. It will then not only be easier to find out how the BIOS has configured the hardware, but there will be fewer conflicts since PCI can share interrupts. There will still be the need to match up device drivers with devices and also a need to configure devices that are added while the PC is up and running. The serious problem of some devices not being supported by Linux will remain.
2.15 How Linux Does PnP
Linux has had serious problems in the past in dealing with PnP but most of those problems have now been solved (as of mid 2004). Linux has gone from a non-PnP system originally to one that can be PnP if certain options are selected when compiling the kernel. The BIOS may assign IRQs but Linux may also assign some of them or even reassign what the BIOS did. The configuration part of ACPI (Advanced Configuration and Power Interface) is designed to make it easy for operating systems to do their own configuring. Linux can use ACPI if it's selected when the kernel is compiled.
In Linux, it's traditional for each device driver to do its own low-level configuring. This was difficult until Linux supplied software in the kernel that the drivers could use to make it easier on them. Today (2005) it has reached the point where the driver simply calls the kernel function pci_enable_device() and the device gets configured by being enabled and having both an IRQ (if needed) and addresses assigned to it. This assignment could be what was previously assigned by the BIOS or what the kernel had previously reserved for it when the PCI or ISA PnP device was detected by the kernel. There's even an ACPI option for Linux to assign all devices IRQs at boot-time.
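A skeleton of this modern pattern, in kernel C, might look like the following. The kernel functions are real; the "mycard" driver and its vendor/device IDs are made up, and a real driver would of course do much more:

    #include <linux/module.h>
    #include <linux/pci.h>

    static int mycard_probe(struct pci_dev *pdev, const struct pci_device_id *id)
    {
            int err = pci_enable_device(pdev);    /* the kernel does the configuring */
            if (err)
                    return err;

            if (pci_request_regions(pdev, "mycard"))   /* reserve its I/O and iomem */
                    return -EBUSY;

            /* The driver just reads back what it was given */
            pr_info("mycard: BAR0 at 0x%llx, using IRQ %d\n",
                    (unsigned long long)pci_resource_start(pdev, 0), pdev->irq);
            return 0;
    }

    static const struct pci_device_id mycard_ids[] = {
            { PCI_DEVICE(0x1234, 0x5678) },   /* hypothetical vendor and device IDs */
            { 0, }
    };

    static struct pci_driver mycard_driver = {
            .name     = "mycard",
            .id_table = mycard_ids,
            .probe    = mycard_probe,
    };
    module_pci_driver(mycard_driver);
    MODULE_LICENSE("GPL");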
So today, in a sense, the drivers are still doing the configuring but they can do it by just telling Linux to do it (and Linux may not need to do much since it sometimes is able to use what has already been set by the BIOS or Linux). So it's really the non-device-driver part of the Linux kernel that is doing most of the configuring. Thus, it may be correct to call Linux a PnP operating system, at least for common computer architectures.
Then when a device driver finds its device, it asks to see what addresses and IRQ have been assigned (by the BIOS and/or Linux) and normally just accepts them. But if the driver wants to do so, it can try to change the addresses, using functions supplied by the kernel. But the kernel will not accept addresses that conflict with other devices or ones that the hardware can't support. When the PC starts up, you may note messages on the screen showing that some Linux device drivers have found their hardware devices and what the IRQ and address ranges are.
Thus, the kernel provides the drivers with functions (program code) that the drivers may use to find out if their device exists, how it's been configured, and functions to modify the configuration if needed. Kernel 2.2 could do this only for the PCI bus but kernel 2.4 had this feature for both the ISA and PCI buses (provided that the appropriate PNP and PCI options have been selected when compiling the kernel). Kernel 2.6 came out with better utilization of ACPI. This by no means guarantees that all drivers will fully and correctly use these features. And legacy devices that the BIOS doesn't know about may not get configured until you (or some configuration utility) put their address, IRQ, etc. into a configuration file.
In addition, the kernel helps avoid resource conflicts by not allowing two devices that it knows about to use the same bus-resources at the same time. Originally this was only for IRQs and DMAs but now it's for address resources as well.
If you have an old ISA bus, the program isapnp should run at boot-time to find and configure PnP devices on the ISA bus. Look at the messages with "dmesg".
To see what help the kernel may provide to device drivers, see the directory /usr/.../.../Documentation where one of the ... contains the word "kernel-doc" or the like. Warning: documentation here tends to be out-of-date, so to get the latest info you would need to read messages on mailing lists sent by kernel developers and possibly the computer code that they write, including comments. In this kernel documentation directory see pci.txt ("How to Write Linux PCI Drivers") and the file /usr/include/linux/pci.h. Unless you are a driver guru and know C programming, these files are written so tersely that they will not actually enable you to write a driver. But they will give you some idea of what PnP-type functions are available for drivers to use.
For kernel 2.4 see isapnp.txt. For kernel 2.6, isapnp.txt is replaced by pnp.txt, which is totally different from isapnp.txt and also deals with the PCI bus. Also see the O'Reilly book: Linux Device Drivers, 3rd ed., 2005. The full text is on the Internet.
2.16 Problems with Linux PnP
But there are a number of things that a real PnP operating system could handle better:
- Allocate bus-resources when they are in short supply by reallocation of resources if necessary
- Deal with choosing a driver when there is more than one driver for a physical device
Since it's each driver for itself, a driver could grab bus-resources that are needed by other devices (but not yet allocated to them by the kernel). Thus a more sophisticated PnP Linux kernel would be better, where the kernel did the allocation after all requests were in. Another alternative would be to try to reallocate resources already assigned if a device couldn't get the resources it requested.
The "shortage of bus-resources" problem is becoming less of a problem for two reasons: One reason is that the PCI bus is replacing the ISA bus. Under PCI there is no shortage of IRQs since IRQs may be shared (even though sharing is a little less efficient). Also, PCI doesn't use DMA resources (although it does the equivalent of DMA without needing such resources).
The second reason is that more address space is available for device I/O. While the conventional I/O address space of the ISA bus was limited to 64KB, the PCI bus has 4GB of it. Since more physical devices are using main memory addresses instead of IO address space, there is still more space available, even on the ISA bus. On 32-bit PCs there is 4GB of main memory address space and much of this bus-resource is available for device IO (unless you have 4GB of main memory installed).
There was at least one early attempt to make Linux a truly PnP operating system. See http://www.astarte.free-online.co.uk. While developed around 1998, it was never put into the kernel (but probably should have been).