BE DEVELOPER CLASSIFIEDS

Be Developer Classifieds are an opportunity for Be developers
to tell each other about projects they're working on, are
thinking about working on, or would like to work on. Here's
your chance to request assistance on a project, or make your
talents known to other projects. For more information, check
out the Classified Ads section in the Registered Be Developer
area.

BeOS Job Opening: BeOS FreeMWare Port Positions

To all people who are or would like to be...
If you're interested, please email Nick Behnken
(nbehnken@htc.net).

BE ENGINEERING INSIGHTS: Kernel Programming on the BeOS: Part 1
By Mani Varadarajan
mani@be.com
The kernel team is planning to document several aspects of
kernel-level programming in a series of Newsletter articles.
This will be useful to anyone who intends to write or directly
use a driver, and will also be of general interest to those
who simply want to know how the pieces of the kernel fit
together. This article provides an overview of the entire
series. Future articles will focus on the various components a
developer can contribute -- user- and kernel-level drivers,
modules, bus managers, busses, and so on -- and will contain
helpful suggestions as to how to properly and safely integrate
these pieces into the system.

Introduction
------------

As most of you know, the BeOS kernel itself consists only of
core functionality, sufficient to start the boot process and
manage memory and threads. Also built into the kernel are the
ISA bus manager, the PCI bus manager, the device file system
(devfs), which manages /dev, the root file system (rootfs),
which handles things in /, and a few other odds and ends.
Since this is nowhere near enough to do anything useful, as
early as the boot process the kernel uses add-ons to extend
its functionality. File systems and device and bus drivers,
for example, are all add-ons loaded by the kernel.

These kernel add-ons can be broadly classified into three
categories: device drivers, file systems, and modules.
Device drivers and file systems, while they add functionality
to the kernel, are accessible from user space, in the sense
that you can open them and address them via file descriptors.
Modules, however, are kernel-only extensions, since they
provide functionality for drivers and other modules. The
raison d'etre of modules will become clearer below.

Note: some of this has been covered before, in some detail, in
an article by Arve Hjonnevag. See
<http://www.be.com/aboutbe/benewsletter/volume_II/Issue20.html>.

What exactly is a device driver?
--------------------------------

A device driver is something that actually talks to a specific
device or class of devices. The communication usually involves
some device-specific protocol -- for example, code that
specifically addresses a graphics card, an Ethernet card, or a
serial port is a device driver. Similarly, each piece of code
that speaks to a class of devices such as SCSI disks, ATAPI
devices, ATA devices, etc., is also a device driver. This code
actually understands the device itself and manages it.

What is a module?
-----------------

As explained above, modules present a uniform API for use by
other modules or drivers. This is a useful way of separating
commonly used functionality out of a driver.

Take the example of a SCSI device driver talking to a SCSI
device. The device hangs off a SCSI bus, which in turn may be
one of many busses on the system. All SCSI devices speak a
common command set that is independent of the controller used
to send the commands. Rather than have each SCSI driver (SCSI
disk, SCSI CD, SCSI scanner, etc.) know how to deal with every
possible type of SCSI card, it would be nice if there were a
generic interface to a SCSI card for each driver to use. This
is accomplished by having one module implement each SCSI card,
each presenting the same generic API. These modules are in
turn managed by a SCSI "bus manager" module, which knows how
to deal with multiple busses and present them in an
encapsulated form to each driver. The bus manager's API is all
the driver has to deal with, which reduces its complexity a
great deal. We also have USB, IDE, and PCMCIA bus managers.

Another example of the use of the module architecture is a
sound driver that publishes a MIDI device. MIDI functionality
can be encapsulated in a module so that all sound drivers can
access it, avoiding duplication in each driver.
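To make this concrete, here is a minimal sketch of how a
driver might attach to a bus manager module by name and call
through its function table. The module name and the ops
structure below are hypothetical; only get_module(),
put_module(), and module_info come from the kernel's module
API.

#include <KernelExport.h>
#include <module.h>

/* Hypothetical interface a bus manager module might export. */
typedef struct my_bus_manager_info {
    module_info  info;  /* standard header: name, flags, std_ops */
    status_t   (*send_command)(int32 unit, const void *cmd,
                               size_t length);
} my_bus_manager_info;

#define MY_BUS_MANAGER_NAME "bus_managers/my_bus/v1" /* hypothetical */

static my_bus_manager_info *sBusManager;

status_t
attach_to_bus(void)
{
    /* Ask the kernel to load (or add a reference to) the module. */
    status_t err = get_module(MY_BUS_MANAGER_NAME,
        (module_info **)&sBusManager);
    if (err != B_OK)
        return err;

    /* From here on, the driver talks only to the bus
     * manager's uniform API. */
    return B_OK;
}

void
detach_from_bus(void)
{
    /* Release our reference so the kernel can unload the module. */
    put_module(MY_BUS_MANAGER_NAME);
}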
How do these relate to the Kernel?
----------------------------------

The kernel provides basic services so that drivers and modules
can function: typically, low-level facilities such as
installing interrupt handlers, locking memory and mapping
physical memory, creating semaphores and spinlocks, and
spawning kernel threads.

The kernel also provides access to devices at user level
through a "Posixy" API. Devices can be opened by user programs
through Posix calls such as open(), read(), write(), and
ioctl(), which address the devices through file descriptors.
These calls turn into system calls in the kernel, and are
passed by devfs to the appropriate device driver, which then
performs the specified operation. (A short sketch of this
user-level path appears at the end of this article.)

User- or Kernel-level Driver?
-----------------------------

If you want to write a driver for a device, one of the first
decisions you need to make is whether it should live at user
level or kernel level. A kernel-level driver was described
above; a user-level driver does the same task by using an
existing "raw" driver that knows how to handle that class of
device. For example, if you wanted to write a driver for a
SCSI scanner, you could write an add-on at user level that
opens the scsi_raw device and sends commands through it to the
scanner. The alternative would be to write a conventional
kernel-level driver that speaks directly to the scanner.

The main advantage of operating at kernel level is speed: one
less context switch is required for command completion, so
latency is lower than at user level. In addition, if your
driver needs access to some peculiar hardware, it's rather
difficult to do this through a raw driver from user space. The
trade-off is that it's much easier to debug at user level --
you can use conventional debugging techniques, and there's
less chance of taking down the entire system in the process.

The above is merely a brief introduction to the topic. Stay
tuned for more in-depth pieces from various members of the
kernel team.
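As promised, here is a small sketch of the user-level path.
The device path is only an example; any driver published under
/dev is reached the same way, and a user-level driver would
typically send its device commands through a raw driver's
ioctl() interface rather than read().

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int
main(void)
{
    /* open() on an entry under /dev is routed by devfs to the
     * driver's open hook; the path below is just an example. */
    int fd = open("/dev/ports/serial1", O_RDWR);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* read() lands in the driver's read hook. */
    char buffer[64];
    ssize_t got = read(fd, buffer, sizeof(buffer));
    if (got >= 0)
        printf("read %ld bytes\n", (long)got);

    close(fd);   /* the driver's close and free hooks run */
    return 0;
}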
DEVELOPERS' WORKSHOP: The High Cost of Memory: Using rtm_alloc
By Owen Smith
orpheus@be.com

"Developers' Workshop" is a weekly feature that provides
answers to our developers' questions, or topic requests.
To submit a question, visit
http://www.be.com/developers/suggestion_box.html.
The following transcript comes from a recent interview
between myself and Morgan le Be, the editor of SQUONK!
Magazine. [NOTE: any resemblance to other magazines, either
real or imagined, is entirely coincidental.]
Q: What is this rtm_alloc, anyway?
Q: Who uses rtm_alloc?
Q: But why wouldn't you just use malloc like everybody else?
For media nodes that must run in real time, however,
malloc is not good enough. Why? Because the virtual
memory management system costs you time, not only from
pushing blocks of memory around, but also from switching
between your thread and the kernel. And if this isn't
heinous enough, there's always the chance that your
memory will be swapped out to disk later on, resulting in
possible additional overhead whenever you try to access
the memory. For time-sensitive media nodes, this overhead
can be crippling, causing audio glitches or other
undesirable performance artifacts.
Q: How does rtm_alloc solve this problem?
Q: What are the drawbacks to using rtm_alloc?
Although memory is cheap, we realize that not everybody
may want to devote a significant chunk of their RAM
exclusively to media applications at the expense of the
rest of their system. For this reason, the media
real-time allocator will only lock its memory pools into
RAM if either Real-Time Audio or Real-Time Video is
enabled. These two options are set by the user via check
boxes in the Media Preferences panel.
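For the curious, the current settings can also be queried from
code. The sketch below assumes the BMediaRoster
GetRealtimeFlags() call and the B_MEDIA_REALTIME_AUDIO /
B_MEDIA_REALTIME_VIDEO constants from the Media Kit headers;
if your headers spell these differently, treat it as
pseudocode.

#include <MediaDefs.h>
#include <MediaRoster.h>
#include <stdio.h>

void
ReportRealtimeSettings()
{
    /* Make sure we're connected to the media server first. */
    BMediaRoster *roster = BMediaRoster::Roster();

    uint32 flags = 0;
    if (roster != NULL && roster->GetRealtimeFlags(&flags) == B_OK) {
        printf("real-time audio: %s\n",
            (flags & B_MEDIA_REALTIME_AUDIO) ? "on" : "off");
        printf("real-time video: %s\n",
            (flags & B_MEDIA_REALTIME_VIDEO) ? "on" : "off");
    }
}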
Q: So, how do I use rtm_alloc?
First, use rtm_create_pool to create the memory pool you
wish to use. Because this creates an area, which involves
the VM system, you should do this before the real-time
madness starts. You give it the size of the pool and the
name (which should be B_OS_NAME_LENGTH bytes or less).
You get back an opaque pointer, rtm_pool*, that uniquely
identifies your pool.
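Here's a small sketch of pool creation; check RealtimeAlloc.h
for the exact prototype, and note that the pool size and name
below are arbitrary.

#include <RealtimeAlloc.h>
#include <stdio.h>
#include <string.h>

rtm_pool *sPool = NULL;

status_t
SetUpPool()
{
    /* Create a 256 KB pool before the real-time madness
     * starts. The name must fit in B_OS_NAME_LENGTH bytes. */
    status_t err = rtm_create_pool(&sPool, 256 * 1024, "my_node_pool");
    if (err != B_OK)
        fprintf(stderr, "rtm_create_pool: %s\n", strerror(err));
    return err;
}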
Keep in mind that, when creating pools, you should try to
specify the minimum amount of memory that you'll need.
Also note that there is some overhead for every pool that
you create, and the number of pools you can have is
limited, so it's better to create one pool for your
purposes rather than several smaller ones.
Another important note about pools: there is always one
pool available for use called the "default pool." This is
the pool used by the Media Kit for its own real-time
allocation needs. I will forego subtlety and say: don't
use the default pool! Create your own pool instead, so
that you don't starve the Media Kit and associated
classes.
Now, once you have a pool, use rtm_alloc to get the
memory you need. You identify the pool and the amount of
memory to reserve. If the pool doesn't have enough memory
left, not all is lost: before failing, rtm_alloc will try
to "grow" the pool by creating a new area big enough to
hold the request. (Doing this could involve the VM,
though, so it will cost you.)
Once you're done with the memory, simply free it by using
rtm_free.
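A quick sketch of the allocate/free pair follows; see
RealtimeAlloc.h for the exact prototypes, and note that the
request size here is arbitrary.

#include <RealtimeAlloc.h>

void
UseThePool(rtm_pool *pool)
{
    /* Grab 4 KB from the already-created pool. */
    void *buffer = rtm_alloc(pool, 4096);
    if (buffer == NULL)
        return;   /* the pool was full and could not be grown */

    /* ... fill and consume the buffer in the real-time path ... */

    /* Blocks remember which pool they came from, so only the
     * pointer is needed here. */
    rtm_free(buffer);
}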
As soon as you're done using the memory pool, be sure to
delete it, so that the system can reclaim your RAM for
others to use.
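Deleting the pool is a single call; here's a one-line sketch,
assuming rtm_delete_pool() from the same header:

#include <RealtimeAlloc.h>

void
TearDownPool(rtm_pool *pool)
{
    /* Returns the pool's area(s) to the system. */
    rtm_delete_pool(pool);
}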
For an example of the real-time allocator in action, and how
you can redefine the C++ operators new and delete to use
real-time allocation, check out the sample code:
<ftp://ftp.be.com/pub/samples/media_kit/rtm_test.zip>
This provides a couple of super-simple tests you can run on
either regularly allocated objects or rtm_alloc'ed objects.
Hopefully, this code will demonstrate just how significant
the performance savings can be, though of course, your
actual mileage will vary.
Q: Can I use rtm_alloc in situations other than media nodes?
Doing memory allocation from this pool (perhaps as a C++
allocator template?) is left as an exercise for the
reader.
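To give a flavor of what the sample code does, here's a sketch
(not the sample code itself) of routing a class's operators
new and delete through the real-time allocator. The class is
made up, and sPool is assumed to be a pool you created earlier
with rtm_create_pool().

#include <new>
#include <stddef.h>
#include <RealtimeAlloc.h>

extern rtm_pool *sPool;   /* created with rtm_create_pool() */

class AudioBlock {
public:
    void *operator new(size_t size)
    {
        /* Allocate the object from the locked pool. */
        void *p = rtm_alloc(sPool, size);
        if (p == NULL)
            throw std::bad_alloc();
        return p;
    }

    void operator delete(void *p)
    {
        /* Hand the block back to the pool. */
        if (p != NULL)
            rtm_free(p);
    }

private:
    float fSamples[1024];
};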
A popular topic. Like the flu, it vanishes for a while, then
returns as a newer, more virulent strain. But far be it from me to
complain -- I like it when Business Week and Newsweek stir
up the pot. When Newsweek describes a brave new Ether in
which everyday objects swim in an ocean of IP packets, I'm
happy. That's because I look forward to using wireless PDAs
and to programming my VCR with a mouse click from a
NetPositive page somewhere in the house -- or in the world.
I'm happy because I see more and more opportunities for the
BeOS as the multimedia engine in these connected
applications.
Unsurprisingly, Bill Gates has views to share on the matter.
He argues in the same issue of Newsweek that the PC is with
us forever, the center and nexus of this brave new connected
world (I paraphrase here, with no warranties expressed or
implied). Apparently, his statements are in reaction to
predictions that the PC will disappear, giving way to
specialized, task-specific devices that do a better job for
less money and less aggravation. Part of Chairman Bill's
argument is that PCs will become easier to understand, with
a simpler user interface. They'll provide the simplicity of
a task-specific device, combined with the protean
permutations we've come to love in our computers.
Yes, I like personal computers. They're useful, fun, and if
you really invest enough time into making them work just
right, you'll grow hair on your chest. If I understand
correctly, The Chairman is promising chest alopecia some
time in our future. Other people agree with him and believe
that a collaboration between Intel and Microsoft will
deliver us to the Promised Land of The Simple PC. From
there, with a careful application of Moore's Law, The Simple
PC will become The Free PC, with the marginal cost of the
physical device absorbed by a service provider. Just like
free cell phones.
This is a seductive strategy. But last weekend, while
cleaning the shelves in my home office, I came across a Bob
CD-ROM. It promised in the early '90s what The Simple PC
promises now -- a simple, friendly interface. The fact that
it didn't work then doesn't necessarily predict that it
won't work in the future. Actually, I find Windows offers a
well-designed UI. But it does little to hide the growing
complexity underneath. The idea that you can put an
interface layer above the gas refinery and that the
foul-smelling problems beneath it will never percolate to
the surface is just that -- an idea -- a seductive one
without foundation in observed behavior. This is an instance
of a well-known sophism: It'll work because it would be cool
if it did. That's what we thought of handwriting
recognition.
I like PCs for what they do well; I agree with Microsoft's
Chairman that they're irreplaceable. But they can be
complemented. This isn't an either/or proposition. The
Post-PC world is a misnomer -- it makes good headlines, but
it masks a more complex reality. Just as automobiles
differentiated into all sorts of vehicles -- some of which
General Motors refused to acknowledge for years -- everyday
computing devices will continue to differentiate. In the
twilight of the PC-centric era, we are moving toward a
Web-centric stage and our dear PC will be one of many
devices happily swimming in the ocean of IP packets.