New Resource Editor for the BeOS
Thunder Munchkin Software, LLC would like to announce the release of its professional resource editor for the BeOS, BeResourceful. BeResourceful is Thunder Munchkin Software's contribution to the development of amazing applications for the BeOS. A full featured resource editor, even in its first release, BeResourceful is rapidly winning acclaim from the people who use it. Designed from the ground up to serve all BeOS developers -- from the novice to the hard core professional -- BeResourceful is being described by some as a developer's dream come true. Developers may expect to find clear, concise documentation, fully commented source code with a template project for creating their own royalty-free add-ons, and a fully expandable, powerful resource editor unlike any other available for the BeOS today. Thunder Munchkin Software is thoroughly committed to working with the developer community to make BeResourceful the #1 resource editor for the BeOS! More detailed information, screen shots, and access to our secure online store may be found at <http://beoscentral.com/home/TMS>. Please click on in!
BE ENGINEERING INSIGHTS: High-Resolution Timing, Revisited By Jeff Bush
In an earlier newsletter, Ficus wrote an interesting article about how to get high-resolution timing in the BeOS using a clever hack and a device driver ("Outsmarting the Scheduler," <http://www.be.com/aboutbe/benewsletter/volume_II/Issue27.html>). Thanks to a number of changes in Genki, this sort of black magic is no longer necessary. The kernel now offers several ways to get fine-grained, precise timing, both from device drivers and from user space.

The sample code

The PC speaker is a primitive piece of hardware. It basically has two states: on and off. You can drive it from a programmable rate generator, or you can directly modify the state by setting a bit. The classic way to mimic analog signals with this type of speaker is to pulse it very quickly -- faster than the speaker cone can physically move. The amount of time that current is applied to the output vs. the amount of time that it is not applied is controlled by the amplitude of the input signal, causing the speaker position to approximate the actual waveform. The sample driver will use this technique if it is compiled with the PLAY_IN_BACKGROUND macro set to 0.

The downside to this approach is that it requires constant attention from the CPU. To get any kind of sound quality, you have to shut off interrupts, rendering the machine unusable. This is clearly undesirable. The sample code takes a different approach to this problem: it programs a timer to wake it up periodically so it can move the speaker cone. Other threads continue to run normally, but at defined intervals an interrupt occurs and the speaker driver code is executed (at interrupt level) to control the speaker. To get any quality, the interrupts need to occur frequently -- around every 30-60us. Note that, although the machine is still usable, interrupting this frequently cuts down performance quite a bit. This is a rather extreme case, but it does illustrate how fine-grained you can get with kernel timers.
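The pulse-width technique described above can be sketched in a few lines. This is not the sample driver's code -- just a minimal, self-contained illustration of the arithmetic, with hypothetical names (slots_on, pulse_train): the fraction of each sample period that the output is held "on" tracks the sample's amplitude.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Given an 8-bit unsigned sample (0..255) and the number of timer
// "slots" per sample period, return how many slots the speaker output
// should be driven on. duty cycle = sample / 255, rounded to the
// nearest slot.
int slots_on(uint8_t sample, int slots_per_period)
{
    return (sample * slots_per_period + 127) / 255;
}

// Expand one sample into its on/off pulse train for one period.
// Pulsing faster than the cone can move makes its position
// approximate the average of this train, i.e. the sample value.
std::vector<bool> pulse_train(uint8_t sample, int slots_per_period)
{
    std::vector<bool> slots(slots_per_period, false);
    int on = slots_on(sample, slots_per_period);
    for (int i = 0; i < on; i++)
        slots[i] = true;   // hold the output on for the first 'on' slots
    return slots;
}
```

A silent sample (0) yields an all-off train, a full-scale sample (255) an all-on train, and everything in between a proportional duty cycle.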
You should also note that the standard user-space synchronization primitives use this underlying mechanism, so you can now get very accurate delays using system calls such as snooze(), read_port_etc(), or acquire_sem_etc(). You don't need to write a device driver to get accurate timing.

From the driver level, timers are programmed using a new API added for Genki. The declarations for these functions are in KernelExport.h. Programming a timer in a device driver basically involves calling the add_timer() function, like so:

    ret = add_timer((timer*) &my_timer, handler_function, time,
        B_ONE_SHOT_ABSOLUTE_TIMER);

The first parameter is a pointer to a timer structure. You can add your own parameters to this by defining your own structure and making the kernel's timer struct the first element, for example:

    struct my_timer {
        timer kernel_timer;
        long my_variable;
        . . .
    };

The second parameter to add_timer() is a hook function that is called when the timer expires. It has the form:

    int32 handler_function(timer *t);

Note that the timer struct that you originally passed to add_timer() is also passed into this function, so you can access elements that you've added to that struct from the interrupt handler. The third parameter is the time at which the interrupt should occur; how it is interpreted is determined by the fourth parameter. There are three basic modes that a timer understands: B_ONE_SHOT_ABSOLUTE_TIMER, B_ONE_SHOT_RELATIVE_TIMER, and B_PERIODIC_TIMER.
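The struct-embedding trick described above is worth seeing in isolation. The following is a standalone mock, not code that compiles against KernelExport.h: the stand-in timer struct and handler_function are ours. It shows why making the kernel's struct the *first* member of your own struct lets the hook cast the timer* it receives back to your type.

```cpp
#include <cassert>
#include <cstdint>

// Stand-in for the kernel's timer struct (the real one lives in
// KernelExport.h); only its role as a first member matters here.
struct timer {
    int32_t (*hook)(timer *);  // hook signature: int32 (*)(timer *)
    int64_t when;
};

struct my_timer {
    timer   kernel_timer;  // MUST be first: &mt == &mt.kernel_timer
    long    my_variable;   // your own per-timer state
};

// The hook receives the timer* it was registered with; because the
// kernel timer is the first member, that pointer is also a my_timer*,
// so the "interrupt handler" can reach the private state.
int32_t handler_function(timer *t)
{
    my_timer *mt = reinterpret_cast<my_timer *>(t);
    mt->my_variable++;
    return 0;
}
```

In a real driver you would register this with add_timer((timer*) &mt, handler_function, time, mode); the cast back works for exactly the same reason.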
When playing in background mode, the speaker driver determines the next time the speaker cone has to move and programs a timer for that time. If it's under 20us, it doesn't actually program the timer, because there is some overhead to handling an interrupt: if you program a timer that's too short, you may end up wasting time and being late for the next event you want to handle. Likewise, when it sets up the interrupt, it sets it a little early to compensate for interrupt latency. A macro called SPIN determines whether the code will spin in a loop when it is early to handle an event, or just move the speaker cone early. In the case of the speaker driver, which is obviously not a high-fidelity component, this isn't really necessary; in fact, since these interrupts occur so frequently, machine performance is degraded significantly when it behaves like this. In drivers where timing is critical, though, spinning like this is a way to be accurate.

A quick note about the implementation of timers. You may be familiar with the way timing is implemented on many operating systems (including BeOS before Genki). Normally, an operating system sets up a periodic interrupt that occurs every couple of milliseconds. At that time, the timeout queue is checked for expired timeouts. As a consequence, you can't get a dependable resolution finer than the interrupt period. In Genki, the kernel dynamically programs a hardware timer for the next thread to wake or sleep, allowing much finer resolutions. This, coupled with the preemptive nature of the kernel, is what allows the driver to accurately set timers for 30-60us.

When it finishes playing a chunk of data, the interrupt handler releases a semaphore that the calling thread was waiting for. Normally, when a semaphore is released, the scheduler is invoked to wake up threads that may have been waiting for it. As you're probably aware, rescheduling from within an interrupt handler is very bad.
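The scheduling policy described above -- skip timers that are too close to be worth the interrupt overhead, and program the rest deliberately early to absorb interrupt latency -- can be sketched as a small decision function. All names here are hypothetical; the 20us threshold mirrors the driver's cutoff.

```cpp
#include <cassert>
#include <cstdint>

typedef int64_t bigtime_t;  // microseconds, as in the BeOS API

// Given the time of the next speaker event, decide whether to program
// a timer at all, and if so, for when.
//
// Returns false if the event is so close (under min_program, e.g.
// 20us) that taking an interrupt would cost more than it saves -- the
// driver should handle the event directly (or spin until the
// deadline, if SPIN-style accuracy is wanted). Otherwise returns true
// and sets *wake_at a little early, to compensate for the measured
// interrupt latency.
bool plan_timer(bigtime_t now, bigtime_t event_time,
                bigtime_t interrupt_latency,
                bigtime_t min_program,
                bigtime_t *wake_at)
{
    bigtime_t delta = event_time - now;
    if (delta < min_program)
        return false;                          // too close: no timer
    *wake_at = event_time - interrupt_latency; // wake early on purpose
    return true;
}
```

Waking early and then spinning the last few microseconds trades CPU time for accuracy, which is why the driver makes it a compile-time choice.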
Driver writers must use the flag B_DO_NOT_RESCHEDULE when releasing a semaphore from an interrupt handler. This was itself a limitation, however, because it meant that a thread waiting on some device driver call had to wait until the scheduler was invoked again (for some unrelated reason) before it would run. This could be several milliseconds later; generally, you want it to run as soon as possible. In Genki, an interrupt handler can now return B_INVOKE_SCHEDULER, which causes the scheduler to be invoked as soon as the interrupt handler returns.
As you can see, there are a number of very useful facilities for low-latency device access and high-precision timing. In addition, many of the generic user-level synchronization primitives have become much more accurate.

BE ENGINEERING INSIGHTS: BEOS: My Faithful Companion By Baron Arnold
I am here to spread propaganda. --+-@ At this typing there are 3407 open bugs against BeOS. All of them will be fixed in the next release.

BEOS: HOW IT WORKS

I have this machine at home without a sound card. I only have one machine at home. I dial in at 19.2. I'm getting pretty good at blind typing after two years of practice.

R5 CODENAME: DRAGULA

It was a long weird life before I got to Be. Sometimes I sit in this chat room full of wannabe dj's, raving-is-a-way-of-life-style-misfits, linux wonks and " and nick/nick's. I'd guess 60% of the time I spend there I spend going blah blah blah blah blah about BeOS, bEos, BEos. I am SO biased that I HAVE to be honest, about what we can do, and what we cannot do. I'll PAY YOU $20 TO EAT A TWINKIE WIENER SANDWICH. I'll tell you what I cannot do, I cannot play tnet113.zip (Tetrinet) on beos. Want to know the only reason I boot into windows? No Tetrinet on bEOS. This is the developer newsletter right? Write or port it and I will send you a VERY limited edition 5038 CD single valued at $150 featuring the HIT single 5038, the smash oldie virtual (void), ALL OTHER known versions of virtual (void) AND as a VERY SPECIAL bonus, a "CANDLELIGHT" edition of the record's title track.

TOOLS AND AUDIO

Go get http://www.catastropherecords.com/5038.mp3 and the first one of you to file a feature request against MediaPlayer that politely requests it STREAM you the data gets a genki beta 5 t-shirt, "on me." (Be folk cannot participate in this contest.) You see, here in Kalifornia every second person I know has taken a day off work to be "Waiting for the DSL guy to show up." So I called and ordered it for Cat Rec Redwood City. Soon fresh bEos will be flowing down that pipe. I hear there's a big race to build the first real multitrack audio editor for "the Be" but IKMultimedia is out in front solving my problems. T-Racks is a MUST HAVE if you even THINK you're serious about putting any kind of audio anywhere for any reason.
It's a rearranger/smoosher/sparkler that worked wonders on 5038. When it ships I will BUY NOW.

SOMEBODY BUMP MY HEAD

You are a BEOS developer. You are a rock star to the max. You are a good friend in the mix. You can really whip a camel's ass. DTS comments have been opened up to the engineers for some feedback on anomalies you have filed. Your favorite operating system is maturing, in form and process. Please continue to file good bugs. We will continue to fix them. There are rumors of a BEOS Permanent Beta program where you will be able to taste fresh bEOs made daily at Be Farms. Stay tuned for details.

BREAKDOWNS COME AND BREAKDOWNS GO

(What are you going to do about it, that's what I'd like to know.) If you're working on a big project, say a photoshop clone or any of the aforementioned audio tools, please consider contacting me about maintaining your bug database. Nothing will set your priorities straighter than a good thrashing of the "work so-far." We have a nice clean simple system for reporting and a crack team of reserve testers ready to beat down your application until you have corrected every flaw. We can provide UI input and technical knowhow to help shape your dream. We want you to grow with us, and we want to be inspired by you, by your imagination. We want to feed off your success, be it programmatic or financial. I want to help you bring stuff I can use to my desktop. What do you want to explode today?

TOMORROW AND TOMORROW AND TOMORROW

And so after all that has happened, to BeOS, to Be, have we learned any more about the world? The market? The consumer? The technology? Can we see into the future at all? I say, from screen1.tga to the screen actors guild... BEOS. From the kernel debugger to the top of K2, we will count the bodies and climb on. We cull concepts from known solutions and hack hack hack hack hack away at our art, our passion. Together we are sculpting an experience, a feeling, and not simply a set of features.
We are writing a way to walk away "Satisfied." US from the "product." THEM from the "computer." We aim to provide an experience so seamless and simple that the "OS" becomes the "Appliance." The way you get driving directions, hear new music, find books, watch the weather and gather the information you need, the way you need it. The way you play games, make music, talk to friends. From the board room to the chat room we are working to make it so that one day... There is only BEOS. --+-@ Rock over London. Rock over Menlo Park.

DEVELOPERS' WORKSHOP: Media Nodes Made Easy (Well, Easier...) By Christopher Tate

"Developers' Workshop" is a weekly feature that provides
answers to our developers' questions, or topic requests.
To submit a question, visit
http://www.be.com/developers/suggestion_box.html.
As anyone who sat through the Node Track of the recent Be
Developers Conference can attest, writing Media Kit nodes
is a complex task. Be's engineering team has been hard at
work extending the Media Kit APIs to simplify node
construction; similarly, DTS has been developing tools
developers can use to debug their own BMediaNodes. This
week's article introduces a tool for analyzing the behavior
of any BBufferProducer node or node chain: the
LoggingConsumer. Designed to work with the Genki/6 beta
release (I told you the APIs were under construction!),
this node tracks the behavior of its input stream, logging
a trace of all activity to a file for post-mortem analysis.
Before I discuss what exactly the LoggingConsumer does,
here's the URL to download it, so that you can follow along
at home:
<ftp://ftp.be.com/pub/samples/media_kit/LoggingConsumer.zip>
So, What Is This "LoggingConsumer" Anyway?
As BBufferConsumers go, LoggingConsumer is pretty simple: it
doesn't manipulate its input buffers in any way, it doesn't
talk to hardware, and it doesn't pass buffers downstream --
it sits at the end of the node chain. It has a single input,
which can accept any kind of data. You, the developer, connect
to a node or node chain that you're interested in, point it at
an output file entry ref, and voila! Useful information about
buffer flow and internode handshaking is recorded for later
interpretation.
The LoggingConsumer node serves two purposes: it produces a
trace of node activity, for the purpose of debugging
producers and producer/filter node chains; and it serves as
a clean example of BBufferConsumer structure and behavior.
The node tries to do everything "right," and is commented to
help you understand what's going on at all points in the
code. The node uses the latest and greatest
BMediaEventLooper class. It publishes a set of controllable
parameters via a BParameterWeb, and handles the
B_MEDIA_PARAMETERS buffer type for changing those
parameters. It reports late buffers to the producer, and
reports latency changes as well. In short, it demonstrates
pretty much all of the major functionality that a
BBufferConsumer has to worry about.
So, How's It Work?
In order to preserve the realtime behavior of a proper Media
Kit node, the LoggingConsumer doesn't do any disk access
from within its main BMediaNode control thread. Instead, it
spawns a separate thread to write logged messages to disk,
and passes messages to that thread via a kernel port. The
LogWriter class encapsulates this pattern, managing the
non-realtime thread and message port transparently to the
LoggingConsumer node implementation.
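The handoff that makes this pattern work can be sketched in plain C++. This is a stripped-down, single-threaded stand-in, not the sample's LogWriter: a std::queue plays the role of the kernel port, and the names (LogQueue, log_message) are ours. The point is the division of labor -- the time-critical thread does an O(1) post and nothing else, while a separate, non-realtime thread drains the queue and does the disk I/O.

```cpp
#include <cassert>
#include <cstdint>
#include <queue>

// Message codes for the events being logged (illustrative subset).
enum log_what { LOG_BUFFER_RECEIVED, LOG_BUFFER_HANDLED, LOG_BUFFER_LATE };

struct log_message {
    log_what what;
    int64_t  realtime;   // when the event happened, in microseconds
};

class LogQueue {
public:
    // Called from the time-critical control thread: constant time,
    // no disk access, no blocking work.
    void post(const log_message &msg) { fQueue.push(msg); }

    // Called from the writer thread: take the next message, if any.
    // The real LogWriter would format it and append it to the log
    // file here, safely outside the realtime path.
    bool drain_one(log_message *out)
    {
        if (fQueue.empty())
            return false;
        *out = fQueue.front();
        fQueue.pop();
        return true;
    }

private:
    std::queue<log_message> fQueue;
};
```

In the real node the port read blocks, so the writer thread sleeps until a message arrives; the FIFO ordering shown here is what guarantees the log reflects the actual event sequence.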
The LoggingConsumer itself is another example of using the
new BMediaEventLooper class to handle most of the
nitty-gritty details of being a node. Because it does very
little actual media-related processing, it's a pretty clear
illustration of the organization we recommend that nodes
use. The example application, which hooks the
LoggingConsumer up to an audio file reader, also uses a
simple "Connection" structure to illustrate the necessary
bookkeeping for setting up and tearing down the connection
between two nodes.
What's It Give Me?
Lots. Every virtual entry point a media node has generates
an entry in the log file (with the minor exception of
GetNextInput() and DisposeInputCookie() -- and you can add
support for these easily). Log entries are marked with the
current real time (i.e., system_time()) when they are
generated, as well as the current time according to the
LoggingConsumer's time source. Some operations log
additional information, as well. For example, when a buffer
is received, the logged message will indicate whether it is
already late. When a buffer is handled (i.e., popped off of
the BMediaEventLooper's event queue) the buffer's
performance time is logged, as well as how early the buffer
was handled. That "offset" needs to lie within the node's
scheduling latency; if it doesn't, the buffer is late. The
node also maintains a count of late buffers, so your testing
application can follow what's happening.
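One reading of that lateness rule can be written down directly; the names here are hypothetical, not the sample's. A buffer must be handled no later than its performance time minus the node's scheduling latency, so the "offset" (performance time minus handling time) has to be at least the latency; anything less and the buffer is late.

```cpp
#include <cassert>
#include <cstdint>

typedef int64_t bigtime_t;  // microseconds

// How early the buffer was handled, relative to its performance time.
bigtime_t handling_offset(bigtime_t performance_time,
                          bigtime_t handled_at)
{
    return performance_time - handled_at;
}

// The node needs its full scheduling latency between handling a
// buffer and the buffer's performance time; if it got less than
// that, the buffer counts as late.
bool buffer_is_late(bigtime_t performance_time,
                    bigtime_t scheduling_latency,
                    bigtime_t handled_at)
{
    return handling_offset(performance_time, handled_at)
        < scheduling_latency;
}
```

A testing application can feed these numbers from the log and keep its own late-buffer count to cross-check the node's.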
LoggingConsumer is a BControllable, too, and you can
manipulate certain aspects of its behavior while it's
running. In particular, you can adjust its latency on the
fly. Reacting to latency changes is one of the trickier
aspects of BBufferProducer nodes, so having this facility in
the buffer consumer lets you test a producer in a reliable,
repeatable fashion. Future versions of the LoggingConsumer
will have other controllable features as well, such as the
ability to change media formats on the fly.
Here's an example of what you get in the log file:
The "realtime" field is the current system_time() at the
moment the message was logged, and "perftime" is the
LoggingConsumer's idea of the current time according to its
time source (i.e., the current performance time). As you can
see, the node is registered, then the format negotiation
with the producer occurs, the node is Preroll()ed, then it's
Start()ed. When the producer node was started it sent a
ProducerDataStatus() message, then began sending buffers.
Note that there is a distinction between the buffer's
receipt in BufferReceived() and its eventual handling in
HandleEvent(). Also note that given our stated scheduling
latency of 5000 microseconds, the first buffer was sent too
late for the LoggingConsumer to handle in a timely manner --
information to be communicated to whoever wrote this
particular BBufferProducer node!
The LogWriter class can easily be adapted to log other sorts
of messages. Just add your own custom message codes to the
log_what enum in LogWriter.h, string translations for them
to the log_what_to_string() function, and appropriate
handling in LogWriter::HandleMessage(). If you need to pass
custom information in the log message, add a new variant to
the union declared in the log_message struct.
If you're developing BBufferProducer nodes, this class will
help you debug them. If you're developing BBufferConsumers,
this node will show you how to structure your code. And if
you're just writing Media Kit applications, this node gives
you an easy way to tell whether you've set up the rest of
the node chain correctly. Any way you slice it,
LoggingConsumer is a must-have component in any Media Kit
development suite!
Good question. As the range of devices connectable to the
Web keeps growing, I'd like to offer Be's perspective on
this increasingly hot topic. Every week brings new market
research showing how enormous this category is anticipated
to be -- and such large numbers clearly assume the bundling
of all sorts of devices. Even if IP-enabled refrigerators
don't make a huge contribution to this new genre, the broad
definition has to cover many disparate devices: PDAs,
cellular phones, game consoles, VCRs, set-top boxes, WebTVs
and similar devices, and the multimedia Web appliances
announced by companies such as MicroWorkz in the US and
contemplated by others in the US, Europe, and Japan.
Today I'll look at the multimedia Web appliance subcategory,
and approach it by turning the now well-understood WebTV
experience inside out. By this I mean that WebTV expects me
to read my e-mail on the TV screen in the privacy of the
family room. Another view of the world is reading my e-mail,
or browsing eBay with two video windows on the screen, one
with CNN, muted, the other one with a view of my front door,
while I listen to MP3 music from the Web. Or any other such
combination of unadulterated Web content -- as opposed to
content remanufactured for the need of a nonstandard
rendering device such as a TV screen or the display on a Web
phone.
Not that "remanufacturing" Web content is such a bad thing;
WebTV has gained a loyal following, and Web-enabled phones
and PDAs will be very successful. In the case of portable
devices, the trade-offs, the additional complications of
adapting Web content to "wearable" devices (as Motorola
likes to call them) are gladly accepted as the price to pay
for mobility.
The next question is whether or not the kind of Web
appliance I just described is a replacement for a PC. The
answer is a clear "no." In my mind they coexist, because
they address different users and uses. The PC is a protean
device -- its seemingly infinite variety of application
software enables it to assume an endless succession of
shapes, from an entertainment center to a software
development workstation to an office productivity toolkit.
This is great as long as it's what the user is looking for,
although it poses equally infinite complications in the
layers of software silt inside the legacy operating systems
and their applications.
But if all you want is the infinity of one space -- the Web
-- a multimedia Web appliance might be your cup of
computing. Why wait for a PC to boot? With broadband
connectivity, DSL or cable modems, you can have the Web at
your beck and call all the time, instantly, in the kitchen,
the family room, or on your night stand.
Lastly, multimedia. Just as the print medium embraced color
-- even the venerable New York Times -- the Web wants to
create a multimedia experience, whether to charm you into
buying something or entertain you, or to educate or inform
you. I realize the word "hypermedia" has been so abused that
its coin has lost some relief, but the fact remains that the
BeOS offers a unique combination of smaller footprint,
greater robustness, and multiple media streams for the
creation of a compelling Web hypermedia experience. The very
type of experience that defines our view of an extremely
promising segment of the Web appliance market.
Copyright © 1999 by Be, Inc. All rights reserved.