Proceedings of the May Be Developer Conference: Writing Video Drivers




SCOTT BRONSON: My name is Scott Bronson. I want this to be more of a discussion than a presentation. I'll start out by saying I will do a quick broad picture of the architecture on the advanced access CD that was handed out today.

This is the same architecture that was in BeOS DR8 and even down through DR7. It's essentially a small and simple graphics driver interface. There is certainly room for improvement, but for now it does the job, easily and well. Right now, the graphics driver is not really what you would think of if you had seen graphics drivers in other operating systems. It is more of a plug-in; it's called an app server add-on. The app server, I'm sure you know, is the server side of the client-server model: it is the server that does all the drawing on the screen, and the applications are essentially its clients.

Now, to actually do the drawing and set up a frame buffer, the app server employs plug-ins which are really plug-ins in the truest sense of the word. They run in the app server's context, and they are at the user level. And the interface is very simple and well defined.

First of all, you must open it. Basically the load sequence goes like this: when the kernel starts, it runs the user boot script, and about the first thing the boot script does is fire up the app server. The first thing the app server does is go out and interrogate the PCI space, looking for PCI display cards. For each card that it finds, it cycles through all the drivers it can find. The drivers typically reside in /boot/system. Inside your system directory there is an add-ons directory, which contains all the add-ons for the net server, the app server, the print server and all that. Inside add-ons is the app_server directory; that is where you find these graphics drivers.

There is one other place you can find them. Like Trey was just mentioning, you can set up a /system/add-ons/app_server directory on a floppy disk, and when that floppy disk is inserted into the computer, the app server will try to load any drivers on it. So if you end up blowing away the driver on the hard drive, you can put a known good one on a floppy, insert it, and undo the damage that you did.

So for each PCI card, the app server cycles through all the drivers that it finds, and the first driver that says, yes, I can drive that card, the app server uses for the display. Essentially, this interrogation is done using the B_OPEN_GRAPHICS_CARD message. This message is sent when the app server asks, can you drive this? The driver is passed the PCI address -- the address contained in register zero of the six PCI base registers. It is also passed the I/O base, if you should be so uncouth as to use I/O cycles to talk to your card.

Usually, just checking the vendor and device IDs is sufficient to tell whether you can drive that card or not. If you return no error from B_OPEN_GRAPHICS_CARD, the card is now yours. Well, in the header files there is a B_CLOSE_GRAPHICS_CARD, but it is not called by the app server. The reasons for this are kind of historical.
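As a sketch of that check: the accept/reject decision can be as small as comparing IDs. The struct and ID values below are made up for illustration; the real open message carries the full card info described above.

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical stand-in for the card info the app server hands the
// driver on open; field names and ID values are illustrative only.
struct CardIds {
    uint16_t vendor_id;
    uint16_t device_id;
};

const uint16_t kMyVendor = 0x1234;   // made-up vendor ID
const uint16_t kMyDevice = 0x5678;   // made-up device ID

// "Can you drive this card?" -- accept only hardware we recognize.
bool can_drive(const CardIds& ids) {
    return ids.vendor_id == kMyVendor && ids.device_id == kMyDevice;
}
```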

The old app server was never shut down properly by the kernel. When the kernel went away, it shut down the user applications, but it ran into race conditions between the different servers. There was no way to shut them down one by one, so we didn't shut any of them down; we just blew away the machine, including the graphics driver. On the advance access CD that you guys got today, we've revised that, and the servers are now shut down one by one. Conceivably, we could now implement the close graphics card call. However, none of the drivers we have now require it, so this change will probably appear when some more significant changes come into play.

This is probably the first message your newly opened graphics card will receive. Version and ID, pretty uninteresting. You -- sorry. This slide is out of order. This is -- oh, well. Once you've configured your graphics card --

A SPEAKER: Before that one?

SCOTT BRONSON: It's not in here. That is quite all right.

Anyhow, all of this is pretty straightforward. Flags -- I'll touch briefly on the flags. CRT control: many graphics cards, not all of them, allow you to do fine-tuning of the timing, so that you can bump the image a little bit to the left or up, or widen it, and do all that in the graphics card rather than on the monitor. This allows you to sync up to, say, those projection devices that are notoriously difficult to sync up to.

Gamma control. Currently unimplemented. We've got the gamma table, we've got gamma calls. It is beyond me why nobody seems to use them. I'm pretty sure the reason is that it worked at one time and then never got released. I'm sure it will work again once it starts being used.

Finally, frame buffer control. We will touch on this later when we get to the game kit. About the only other interesting field is the RGBA order. Trey, this is for you. Right now, RGBA order, despite what Benoit said, unless he made changes last night --

Does it work now? No.

BENOIT SCHILLINGS: I did not --

SCOTT BRONSON: Okay. So I'm pretty sure -- most positive -- the RGBA order is currently ignored. This is because, when it comes to endianness, it doesn't matter which way you do it as long as you do it consistently. The BeBox is little endian; on the Macintosh, because of the frame buffer architecture that Apple chose to use, it is going to be big endian. We can make exceptions to that, but that is going to have to be very carefully controlled and well thought out. For now, no exceptions. BeBox, little endian; Macintosh, big endian. What I mean by that is, starting at byte zero in the frame buffer on the BeBox, it goes BGRA. On the Macintosh it goes ARGB.
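The rule can be written out as plain shifts, independent of the host machine. This is just an illustration of the byte ordering described above, not any actual driver code:

```cpp
#include <cassert>
#include <cstdint>

// Unpack one 32-bit ARGB pixel value into the in-memory byte orders
// described above: BGRA on the BeBox (little endian), ARGB on the
// Macintosh (big endian). Plain shifts, so this runs the same anywhere.
void pixel_to_bgra(uint32_t argb, uint8_t out[4]) {
    out[0] = argb         & 0xFF;  // blue at byte zero (BeBox)
    out[1] = (argb >> 8)  & 0xFF;  // green
    out[2] = (argb >> 16) & 0xFF;  // red
    out[3] = (argb >> 24) & 0xFF;  // alpha last
}

void pixel_to_argb(uint32_t argb, uint8_t out[4]) {
    out[0] = (argb >> 24) & 0xFF;  // alpha at byte zero (Macintosh)
    out[1] = (argb >> 16) & 0xFF;  // red
    out[2] = (argb >> 8)  & 0xFF;  // green
    out[3] = argb         & 0xFF;  // blue last
}
```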

A SPEAKER: Is there a different copy of the app server based on the platform you're running? The app server has a sort of native internal byte order for pixels in a BBitmap, I assume.

SCOTT BRONSON: Yeah, it does. It used to be that way -- it used to be pound defines. I think that is still the case. But that will not be the wave of the future. You know, the one thing that is a little misleading about this field is that it seems to indicate you can put your components in any order you want. GRAB or something like that. Anything you want.

That is not the case, and we will not support it, because it would be one of two things: incredibly slow or incredibly huge. We want neither. Because graphics cards do ARGB or BGRA or both, it makes sense to stick with the standard.

Now, the next thing your card needs to tell the app server, in order for the app server to make an intelligent request of it, is what sort of screen resolutions it supports. This is extremely straightforward: you return a long in which bits are set depending on your card. You go out and interrogate the amount of RAM it has on it, and the DAC, and you set the corresponding bits.
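As a sketch of how a driver might build that long, here is a toy version that sets bits from the amount of video RAM. The flag names and values are invented; the real constants live in the BeOS headers:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical space flags -- the real header constants follow the
// B_8_BIT_640x480 naming pattern; these bit values are made up.
enum : uint32_t {
    kSpace8Bit640x480  = 1u << 0,
    kSpace8Bit800x600  = 1u << 1,
    kSpace8Bit1024x768 = 1u << 2,
    kSpace32Bit640x480 = 1u << 3,
};

// Decide which spaces to report from the amount of video RAM:
// an 8-bit mode needs width*height bytes, a 32-bit mode four times that.
uint32_t supported_spaces(uint32_t vram_bytes) {
    uint32_t spaces = 0;
    if (vram_bytes >= 640u * 480u)       spaces |= kSpace8Bit640x480;
    if (vram_bytes >= 800u * 600u)       spaces |= kSpace8Bit800x600;
    if (vram_bytes >= 1024u * 768u)      spaces |= kSpace8Bit1024x768;
    if (vram_bytes >= 640u * 480u * 4u)  spaces |= kSpace32Bit640x480;
    return spaces;
}
```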

A SPEAKER: Why is it limited to these few resolutions?

SCOTT BRONSON: This is an architecture that was in place a long time ago, and we're still living with it. It will be improved, but that's all I can say on that. We absolutely need to add support for user-defined resolutions, so that you can have a card that can take 1024 by something, or something we've never thought of.

A SPEAKER: There is a need for that with monitors used for medical imaging, which are much bigger.

SCOTT BRONSON: Absolutely. We are definitely going to, you know -- if we hope to stay successful, we absolutely have to add user-settable resolutions, and we will. For now, this is what we have.

And then also in the header file, you see down there, next to B_FAKE_DEVICE, the 8-bit 640 by 400 mode. That is historical garbage for the old, old VGA modes that hopefully we never have to live with again.

And then B_FAKE_DEVICE is used internally by the app server to allow headless operation. It fakes out a tiny monitor -- something like eight by two -- so you don't crash if something draws a couple of bits. This is evidence that the app server is a very old application. It started out being hard coded for one specific graphics card, the Number 9 card. Actually, before that, it was hard coded for our own graphics processor that sat next to the Hobbit.

Then we added support for the Number 9 card, and that was hard coded into the app server. None of this seemed very important at the time -- and we're talking multiple years ago. Then we needed to add support for graphics drivers if we wanted to be taken seriously, and we did that. Unfortunately, the scheme we had did not grow very well, and we have run into its limitations right here. Repairing this is very nontrivial; there are a lot of dependencies. Actually, after I get done with the slides I will tell you about some of the changes we're making that are on the CD. One of them is BScreen, and it gives an example of just how widespread these dependencies are.

Finally -- pretty uninteresting -- minimum and maximum refresh rates, and so on. It is in the documentation. Pretty straightforward. We chose to use floats. These don't print very well, but they do allow sliders, so you can have any refresh rate in between. This is academically interesting because no one ever uses the slider.

Index color. This is how you set the color map. Extremely straightforward. The one problem is it gets called 256 times, so if you are logging every call -- I'm sure you all have seen that.
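A minimal software model of that hook, with an array standing in for the DAC registers; the name and signature here are illustrative, not the real entry point:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical software palette standing in for the DAC registers.
static uint8_t g_palette[256][3];

void set_indexed_color(int32_t index, uint8_t r, uint8_t g, uint8_t b) {
    if (index < 0 || index > 255) return;  // ignore out-of-range indices
    g_palette[index][0] = r;
    g_palette[index][1] = g;
    g_palette[index][2] = b;
    // Note: the app server calls this once per entry -- 256 times for a
    // full color map -- so per-call logging here will flood the output.
}
```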

All right. This is a call that I like. One call sets it all up. There are really only two fields here that you need to worry about: the screen space -- I showed you that earlier; you pass it with a single bit set -- and the refresh rate, 75.0 or whatever you want. The app server will pass that request on to the graphics card, the graphics card will set it, and we are happy.

The other fields here that are notable, and perhaps a little hard to understand, are h_position, v_position, h_size and v_size. These range from zero to 100, with 50 meaning right in the middle. These are what I was talking about before: CRT control, or CRTC control.

If your driver supports CRTC control, h_position and v_position will move the image left or right and up or down, and h_size and v_size will widen or shrink it. It is extremely handy if you need to use it.
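The 0-to-100 convention maps naturally onto a signed adjustment. A toy sketch, with a made-up maximum offset in place of whatever the CRTC timing actually allows:

```cpp
#include <cassert>

// Map a 0..100 position knob to a signed offset: 50 is centered,
// 0 is the full shift one way, 100 the full shift the other.
// max_offset is hypothetical; real limits come from the CRTC timing.
int knob_to_offset(int knob, int max_offset) {
    return (knob - 50) * max_offset / 50;
}
```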

Screen gamma. Currently unimplemented. It is in the header files, and we will implement it soon. Because most chips use the palette for gamma, it is extremely easy to implement. I will say it is even worth putting into your driver now; as soon as the BeOS passes the table, you can be pretty sure it will be correct.

I like acceleration. And this is by far the most important call -- the yellow makes it kind of hard to read. This is blit, and those are the arguments for the blit operation. Actually, that is not as bad as it was last night. Hey, it worked out. You can see it is extremely straightforward. Every value is in screen coordinates, and basically blit performs a screen-to-screen blit.

If you write a graphics driver, please, please, please implement blit. If you do nothing else, blit is really what makes the machine responsive. If you ever try a graphics card that does not support blit and you try dragging a window around, you will notice it just feels terrible. Especially in 24-bit.

The two rect calls are almost as important as blit, but not quite. The nice thing is that these extremely important calls are so easy to implement. Invert is interesting, but hardly ever called.
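For reference, here is what the blit contract amounts to in software: every value in screen coordinates, copying a rectangle from one place in the frame buffer to another. A real driver programs the accelerator instead of looping; this toy version also shows why copy direction matters when source and destination overlap, as they do when dragging a window:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Minimal 8-bit frame buffer model for the demonstration.
struct FrameBuffer {
    int width, height;
    std::vector<uint8_t> pixels;
    FrameBuffer(int w, int h) : width(w), height(h), pixels(w * h, 0) {}
    uint8_t& at(int x, int y) { return pixels[y * width + x]; }
};

// Software reference for the screen-to-screen blit hook.
void blit(FrameBuffer& fb, int src_x, int src_y,
          int dst_x, int dst_y, int w, int h) {
    // Copy backward when the destination is past the source, so an
    // overlapping region isn't overwritten before it is read.
    if (dst_y > src_y || (dst_y == src_y && dst_x > src_x)) {
        for (int y = h - 1; y >= 0; --y)
            for (int x = w - 1; x >= 0; --x)
                fb.at(dst_x + x, dst_y + y) = fb.at(src_x + x, src_y + y);
    } else {
        for (int y = 0; y < h; ++y)
            for (int x = 0; x < w; ++x)
                fb.at(dst_x + x, dst_y + y) = fb.at(src_x + x, src_y + y);
    }
}
```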

A SPEAKER: 32-bit?

SCOTT BRONSON: Well, it's called to invert text, to highlight text or something like that. Typically the regions you are inverting are so small that there is no perceptible speed difference.

I'm not sure -- this machine. There is something wrong with this machine beyond Benoit's crashing it. All right. Here we go.

Draw lines. That is straightforward. I will touch on the line array. Lines typically involve very few pixels, and they are drawn so fast that the bottleneck is feeding the data into the chip, not actually drawing the lines.

So we use a line array. Typically when you draw lines, you are going to draw more than one. You can punch them all in at once if your graphics controller supports that. If not, it is still faster to pass all the data wholesale to the graphics driver and say, here you go, here is everything at once, and have the graphics driver do it all in one call.

Finally, hardware cursors. I love hardware cursors. This is about as simple as we could make setting it up. You will notice it uses the Macintosh style cursor and so you can invert pixels, which is nice. Move. Show. Straightforward. And finally sync.

Now, this is, for me -- I inherited the necessity for sync, but I still find it an extreme embarrassment. Every accelerated operation in the current graphics architecture means a sync, even if your driver can operate totally asynchronously. One card I had recent experience with is the IMS Twin Turbo. It has an eight-deep FIFO, so you can have eight graphics operations queued at the same time, all completely asynchronous, and it will handle the ordering -- it's got a little semaphore that you can poll. It is a phenomenal chip. We are currently not allowing you to take advantage of that, because we say that we expect the graphics operation to be done before you return back to the app server. Again, for historical reasons. And, again, changing that will mean changing a significant portion of the rendering architecture.
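A toy model makes the cost concrete. Assume, hypothetically, a chip with an eight-deep command FIFO like the one described: syncing after every operation waits once per operation, while a FIFO-aware driver would only wait when the queue fills (or when the CPU is about to touch the frame buffer). Everything here is illustrative.

```cpp
#include <cassert>

// Toy accelerator with an eight-deep command FIFO.
struct ToyChip {
    int fifo_depth = 8;
    int pending = 0;   // commands queued on the chip
    int syncs = 0;     // how many times we had to wait
    void sync() {      // wait for the chip to drain its queue
        pending = 0;
        ++syncs;
    }
    void submit() {
        if (pending == fifo_depth) sync();  // only wait when full
        ++pending;
    }
};

// Current architecture: a sync after every single operation.
int syncs_per_op_sync(int n) {
    ToyChip c;
    for (int i = 0; i < n; ++i) { c.submit(); c.sync(); }
    return c.syncs;
}

// Asynchronous style: submit everything, sync only when forced.
int syncs_batched(int n) {
    ToyChip c;
    for (int i = 0; i < n; ++i) c.submit();
    c.sync();  // final drain before the CPU touches the frame buffer
    return c.syncs;
}
```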

A SPEAKER: So it's significant?

SCOTT BRONSON: Absolutely. At least 30 percent. When you are doing huge blits in 24-bit -- you are dragging a window or something like that -- I wouldn't be surprised if you could save 80 percent.

A SPEAKER: It depends on the chip?

SCOTT BRONSON: Absolutely. It depends on whether it uses VRAM or DRAM and how hard it is to program. But it is a huge win. We are not totally oblivious. There are some things I'm sure we have not thought of, and we need to be told about them. That is why I want this session to be more of a discussion than a presentation. We did miss this one; we absolutely need to do it.

Finally, one of the neat things about the BeOS is the game kit, and it requires driver support. Basically -- you've seen it -- the game kit allows full-screen, low-level control of the frame buffer, so the application can change the base address, set its own color table, and take advantage of the accelerated routines directly.

Now, to do this -- because this driver is an app server add-on -- we need to get it out of the app server and into the client application. This is cloning. Basically, the graphics driver in the app server does not go away, but it is told, hey, create a clone of yourself and quit controlling the frame buffer for now. Then the driver is loaded into the client application as well, and the two copies arbitrate amongst themselves. The nice thing about this is that it makes switching control between the app server and the client application pretty seamless. You don't have to do it by opening and closing drivers and reinitializing hardware. As long as the arbitration works, it falls right into place.

If you want to pursue this further, BWindowScreen shows a mechanism that takes really good advantage of cloning. A little bit further on the game kit: you will notice that you can set up pretty much any frame buffer you want. The graphics driver is not required to honor it, but you will notice here you can specify height, width and bytes per row. You can specify a screen that is extremely narrow but has bytes per row in the millions. I'm not sure why you would want to do that, except down here further --

A SPEAKER: Display width.

SCOTT BRONSON: Exactly. You can change the display width and height to make what is displayed smaller than the actual frame buffer. And then you see display_x and display_y; those can move the display around within the frame buffer. Essentially, this is pan and zoom. Drivers are not required to support this. That is why it is a proposed frame buffer: we are not saying accept this, we are saying, could you possibly do this, please.
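The pan arithmetic implied by those fields is simple. A sketch with an illustrative struct -- the field names mirror the discussion, not the exact header:

```cpp
#include <cassert>
#include <cstdint>

// Illustrative proposed-frame-buffer fields: bytes_per_row can be wider
// than the displayed width, so display_x/display_y pan a viewport
// around a larger virtual buffer.
struct ProposedFrameBuffer {
    uint32_t bytes_per_row;
    uint32_t bytes_per_pixel;
    uint32_t display_x, display_y;
};

// Byte offset, from the frame buffer base, of the first displayed pixel.
uint32_t display_offset(const ProposedFrameBuffer& fb) {
    return fb.display_y * fb.bytes_per_row + fb.display_x * fb.bytes_per_pixel;
}
```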

A SPEAKER: Does the DR9 app server respect the frame buffer control flag? Because in DR8, if you set the flag to zero, the game kit would happily open you up and call those methods anyway, even though you said you couldn't handle them --

SCOTT BRONSON: That's true. That's true. That flag should not be necessary, because you should respond with an error to any request that you do not support. We may add support for that flag, but absolutely return an error. Unfortunately, right now -- and I've done a little investigating into this; it is kind of a complex interaction -- those error codes are partially ignored as well. So sometimes when you say, "No, I will not support the game kit," it says, "OK, now set up a window screen." Then you say no again. And then, if you get lucky, you can quit the client application and everything returns to normal.

That's another minor embarrassment that we are cleaning up as fast as we can. But, you know, that's why you're here. I can tell you about them and save you time.

Finally, debugging is really straightforward, and it is pretty much the same as everywhere else in the BeOS. We are all big fans of serial debugging because it is so unobtrusive and quite powerful. Basically, you need to set the dprintf enable at the beginning of your driver, or debugging messages will be squelched to save time -- unless, at boot, you hold down one of the modifier keys, which turns the debug messages on. This allows you to send a driver to a friend, and he can dynamically turn the debugging messages on or off. When they are off, these messages take very little time; but if something goes completely wrong and he wants to see the debugging messages, all he has to do is hold down the shift key on boot and they come out.
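The squelch-unless-enabled pattern is easy to model. This mock collects output into a string so it can be checked; in the real system the enable comes from set_dprintf_enabled() or the boot-time modifier key, and output goes to the serial port.

```cpp
#include <cassert>
#include <cstdarg>
#include <cstdio>
#include <string>

// Mock of the serial-debug pattern: messages are squelched unless
// debug output has been enabled. Output is captured for testing.
static bool g_dprintf_enabled = false;
static std::string g_log;

void set_dprintf_enabled(bool on) { g_dprintf_enabled = on; }

void dprintf(const char* fmt, ...) {
    if (!g_dprintf_enabled) return;  // squelched: near-zero cost
    char buf[256];
    va_list ap;
    va_start(ap, fmt);
    vsnprintf(buf, sizeof(buf), fmt, ap);
    va_end(ap);
    g_log += buf;                    // real system: write to serial port
}
```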

Now, if you always want to print out your debugging messages, call set_dprintf_enabled(true). dprintf works just like printf; it is not a crippled version -- it will even do floats. I'm extremely happy about that. And that is basically the architecture we have now.

I did mention that I was going to talk about BScreen. This is one example of how, in the thing you have in your hand, we are preparing for the future and a nicer architecture. Basically, the problem is the multiple monitor environment. The BeOS will support multiple monitors, and these monitors can come and go. For instance, say you have a card in your machine that has monitor sense capability -- it can sense the load on the DAC pins that a monitor creates, so it can tell whether a monitor is plugged in or not. Your machine could be running headless with that graphics card, and you can come over and plug in a monitor.

You come in, plug in a monitor, the machine notices, sets up a display, puts up windows and allows complete interaction. And when you unplug it, it can all go away. This is extremely interesting, say, for laptop users who, without rebooting the machine, may want to use a large 21-inch display at work and then unplug it and take the laptop with you and now you're on this little tiny completely different 640 by 480 onboard active matrix or something like that. This will happen fairly seamlessly.

From the user's point of view, it makes perfect sense. From a programmer's point of view, this is a nightmare, because it pretty much implies the screen can disappear at any time. Let's say I have a program running in 8-bit that is using the color table, and the user unplugs the monitor. In another thread, the app server removes the monitor, as would make sense, and deletes all the data structures it took up and all that. All of a sudden there is contention for those data structures in this application. There needs to be some way for the application to say, okay, wait a second -- don't allow any monitors to disappear on me until I'm done. This is what BScreen does.

Typically you will pass it a window, and it will use the screen that that window is on. Right now it obviously uses the main screen; there is only one screen. In the future, it will use the screen you specify -- either the main screen, or the screen of a window you pass it, or any other mechanism that might make sense. It will then allow you to interrogate that monitor to find out what it is set to: bits per pixel, screen depth, width, height. It will even give you row bytes and the base address, even though that is basically academic -- you can't just write to it, because it might not be mapped into your application space. You may have to map it yourself; you may have to call get area or something like that. When this BScreen object is created, it pretty much tells the screen, hey, somebody cares about you, don't change for now. And then it is used as briefly as possible -- let's say you want to look up a color index or the inversion table or something like that; you do as little as possible. When that BScreen object is destructed, the screen is basically told, okay, you're free again; you can disappear.
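That lifetime rule -- pinned while the object exists, free afterwards -- is essentially reference counting. A toy sketch of the idea; nothing here is the real BScreen implementation:

```cpp
#include <cassert>

// Toy screen: it may only disappear while nobody has it pinned.
struct Screen {
    int pins = 0;
    bool can_disappear() const { return pins == 0; }
};

// Analogous to a BScreen object's lifetime: constructing pins the
// screen ("somebody cares about you"), destructing releases it.
struct ScreenLock {
    Screen& screen;
    explicit ScreenLock(Screen& s) : screen(s) { ++screen.pins; }
    ~ScreenLock() { --screen.pins; }
};
```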

Without BScreen, there is really no way to support multiple monitors appearing or disappearing. So you can see that even though we have not arrived at our final architecture yet -- and time certainly is running out -- we are applying the foundation, the groundwork, to end up there one day.

With that, it is basically the end of the information I want to present. Now, I'll take questions. And basically you guys can ask pretty much whatever you want as long as it is graphics related.

A SPEAKER: Is there any way to expose card-specific features out to something outside the app server, so that like you can put custom features into a library of your own?

SCOTT BRONSON: You can do it with the game kit. If you want to take over the entire screen yourself, there is almost nothing you can't do.

A SPEAKER: If you don't -- if you want to do video overlay windows with color conversion with like every PCI video --

SCOTT BRONSON: Yes. And that will require specific support for that. I'm not sure if you are interested in a generic way of doing it.

A SPEAKER: I really prefer if there is a way to expose card-specific features.

SCOTT BRONSON: Okay.

A SPEAKER: Although that is a case that might benefit from having a unified architecture later on.

SCOTT BRONSON: Right. Exposing card-specific features without making the card entirely yours. Right now there is no clean way of doing that --

A SPEAKER: Is there a not clean way?

A SPEAKER: I've been talking to Scott about opening up the possibility of extending the existing hook functions in the window screen, and maybe between us we can work it out. I thought I would take hook 48 -- or 47 -- first and use it as, what, you know, an extended set of hook functions, supported with a mask, to use the 32 bits. That would be easy enough, and as a group we could decide what other things we want. Like, the graphics card has horizontal and vertical size and horizontal and vertical position. Those four numbers ain't enough. Can we get that corrected?

SCOTT BRONSON: Again, the real solution here is to provide you with full timing, which our architecture should do and will do. But I can definitely see the need for custom hooks. Basically that would probably --

A SPEAKER: It is going to be short-lived, I guess.

SCOTT BRONSON: Yeah. But useful now. We can do that fairly easily. I can put a call into the interface kit -- probably you pass it a name, the name of the function you want. Actually, there is one other thing -- I've been thinking about this for a week -- I want to add support for non-RGB pixels and things like that.

A SPEAKER: A lot of what people would do with video playback could be made a lot better if you could use color scaling and conversion.

SCOTT BRONSON: Scaling. That is something our acceleration architecture absolutely should support. It should support scaling, including interpolation, noninteger scaling, and essentially user-definable interpolations, so that you can plug in whatever card you have and use the features it provides. So, again, adding custom hooks wouldn't be usable by everybody, but it would be usable now.

A SPEAKER: What is the timetable for doing that? BeOS is starting to go prime time and all this needs to be taken care of, or at least laid out, the plan of what is going to be happening here as soon as possible.

SCOTT BRONSON: Yes. The timetable originally was that we'd be showing you the fruits of our efforts around now. Mostly because we're a fairly small company attempting a fairly large task -- and because video works well enough now -- we're doing the more important tasks first. Basically the answer to your question is, we don't have a specific timetable right now, but things will become clear; things will start solidifying, and we will be applying incremental fixes.

A few of the simple things I was talking about today, you will see in the next release. I can promise that. But the more difficult stuff will be, you know -- we're talking months. We all feel the time pressure.

A SPEAKER: One problem I have is that a lot of times I need to write to the graphics card from another PCI card -- bus mastering images.

SCOTT BRONSON: Yes.

A SPEAKER: It would be nice if there were a way to say this part of the screen can't be touched by the app server, so we don't have menus come down and get destroyed.

SCOTT BRONSON: Yes. That is an incredibly difficult problem that no operating system has solved adequately. There are some machines, like high-end SGIs, that spend a huge amount of time arbitrating this because they have processing time to burn.

A SPEAKER: Apple had a way to write saying this is a protected area. One of the earlier systems.

SCOTT BRONSON: Right. I don't think it works any more.

A SPEAKER: No, it doesn't. If this was something I could write.

SCOTT BRONSON: Apple pushed that functionality off to QuickTime and QuickTime doesn't even do it well. The funny thing about it is that Apple's operating system is probably the easiest one to do that on because it is not parallel.

A SPEAKER: Right.

SCOTT BRONSON: We absolutely will see that. We've got a couple of other things -- some rudimentary video streaming going on as well -- that will require this.

A SPEAKER: A lot of the problem is we're not going through the processor --

SCOTT BRONSON: Absolutely. It is DMA. It would be a different problem if you were going through the processor. I can tell you what needs to be done -- assuming I'm interpreting you correctly -- but not how to do it right now. Basically, you need to be notified when a certain region of the screen is going to change. Then you can stop whatever you're doing, come back and tell us, okay, you can go ahead and use that part of the screen. And when we're done, it is yours again.

Or the driver needs to be able to say, sorry, you just don't get to use this part of the screen -- this is a realtime television broadcast or something like that, and I don't want you getting in the way.

A SPEAKER: For most cases like that, you can do it pretty easily with overlays as long as they're there.

SCOTT BRONSON: You can do it easily. Well, you can do it partly easily.

A SPEAKER: At least you don't have to notify the DMA provider.

SCOTT BRONSON: You don't have to stop the DMA, but how are your menus getting behind --

A SPEAKER: Now with the chromakey --

SCOTT BRONSON: That's true. That is only possible if your controller and DAC support chroma-key.

A SPEAKER: That is what I'm saying. Almost all the chips made today support streaming video. They do video overlays, chroma key, color key -- take your pick.

SCOTT BRONSON: That is true, except for when you've got -- not really incompatible parts, but let's say I've got, I don't know, an S3 graphics controller and then a Phillips Pantera digitizer. You would pretty much have to write something specific to get them to interact together. Getting those two hardware devices --

A SPEAKER: No, you don't see what I am saying. If you had some device that previously was going to DMA pixels onto the screen, it would be no different to tell the device to DMA off screen into reserved memory. That memory would never move. You would use the video overlay to put it on the screen, and the app server can move the overlay --

SCOTT BRONSON: That is absolutely true. But it is not as common today. I'm not even sure --

A SPEAKER: On any chip being made today, that is there. Be's whole philosophy -- you should really look at what the current chips can do and not try to have all these past technologies.

SCOTT BRONSON: Unfortunately, right now, the majority of our users are using onboard video that has no support for that whatsoever.

A SPEAKER: Don't screw future users for the past. Don't go too far out of your way to leave those capabilities undone.

SCOTT BRONSON: Yeah. Okay. I can -- I can definitely hear what you are saying.

A SPEAKER: Who do we go sit on, you know -- like the four of us or something -- to make sure this gets moved up in the queue? It is going to kill us. Changing the graphics API at the driver level is a C thing; we can change it and it won't affect anything else. But for that functionality to be promulgated up through the app server and to the applications is going to involve changes to the Be library, all the C++ stuff. If we don't get some of it in there now, it will really hurt developers --

SCOTT BRONSON: We do need to get it in now, but we're not going to change what is in there now; we can add to it, and we will be adding to it. For instance, this all leads to a video architecture, which I'm not going to say anything about. All of this, to truly be a usable solution, needs a video architecture which handles video formats, streaming, and arbitrary destinations, and does it all fairly generically, so that you can have pretty much any type of hardware interacting with any type of software, and it all happens predictably. This is no easy task, but we are undertaking it right now.

A SPEAKER: All these machines -- I'm not familiar with the Mac hardware -- can the onboard video be overridden by a PCI card? There are cards under $100 that do all this stuff now.

SCOTT BRONSON: Most machines, yes. Sometimes you need to get a controller or something like that. Yeah, you can. But we can't rely on that. Basically, I don't want to force through a sort of hackish solution for what needs to be a general solution. That is my own philosophy -- I'm certainly not speaking for Be here. But some of the things we're having trouble fixing in the graphics driver architecture right now exist because, back then, people said, heck, just push it through, we'll get this little bit working now and fix it later. Once something is working, it becomes extremely hard to justify spending time repairing it.

So, yes, this is all -- I think this is becoming kind of circular, but we will have a media architecture, including streams, destinations and all that. You will be seeing it piece by piece and it will work well.

A SPEAKER: I don't know if this is -- I don't think this is in the circle still, although it does address the deficiencies in the API. I notice a lack of system-memory-source blits, or off-screen source --

SCOTT BRONSON: You do. Also things like color-expanding blits, pattern-expanding blits, stuff like that. There are a lot more hooks to be used. And these are going to be added in a short time frame.

A SPEAKER: Will that be the existing API -- are you guys going to use some of the hook numbers for pattern-expanding blits -- or a different API?

SCOTT BRONSON: We will be keeping the primitives we have right now, and on top of that we'll be adding more functionality, like the pattern-expanding blits. Especially being able to bus-master data out of host memory into your frame buffer, so you can do even the extremely slow blits asynchronously. That is extremely important to us and definitely in our future.

A SPEAKER: Does a -- when the OS draws to a frame buffer, does it have to be a graphics display?

SCOTT BRONSON: No.

A SPEAKER: A buffer up on a PCI card.

SCOTT BRONSON: In this architecture right here, you do not need to be a PCI device. When it passes you the first PCI device, you can say, hey, I can handle that, and then just ignore the fact that it is passing --

A SPEAKER: Or a PCI device. I can draw to it and have it be a true video display.

SCOTT BRONSON: Oh, I see what you are asking. Can you disconnect it from the app server? Is -- I thought I understood.

A SPEAKER: We want to use draw lines, that sort of stuff, too, to a frame buffer which isn't a display card. For other systems you have to have a display card; you can't draw directly.

SCOTT BRONSON: Yeah, I do understand what you are saying. You are not able to use the app server routines to draw to this other display. But, yeah, you can set it up and use it like any other driver, as long as your application knows that it is there.

A SPEAKER: Using the Be drawing primitives or not?

SCOTT BRONSON: No.

A SPEAKER: I would like to have some 3D drawing or something in place.

SCOTT BRONSON: Sure. Sure. One solution there might be BBitmaps in memory and --

A SPEAKER: It sucks.

SCOTT BRONSON: If you really want to do your drawing to a card that the app server doesn't know about, right now you have to write your own drawing routines. If there is interest in making the routines generic, you know, e-mail me. But right now this is not a direction we're heading in.

A SPEAKER: Taking that point, I'm turning to the user perspective. Does the ATI XClaim GA card work?

SCOTT BRONSON: Yes. The XClaim GA card is supported. You will notice on Macintoshes in this advance access release that 24-bit is not supported. Unfortunately -- I've been interacting with the guys at ATI. They don't want us writing their drivers; they want to do it themselves, which makes me extremely happy. However, they're not on the same time schedule as we are, and they didn't have their 24-bit support ready in time to make this CD.

As soon as that driver is ready, which I expect will be sometime this week, we'll put it up on our web site and you can download it, and it will then support 24-bit.

A SPEAKER: Any glitches if I plug that card into a BeBox?

SCOTT BRONSON: Yeah, one glitch. It is not supported in the boot ROM, so you won't get the boot screen or any of that. The screen will remain dark until the app server starts up.

A SPEAKER: Which isn't long under DR9. It comes up in a hurry.

SCOTT BRONSON: However, that boot screen is extremely useful when you have disk problems or something like that. So I would say it is academically interesting, but if you are going to use the card every day, you probably want a better solution than that.

A SPEAKER: Is there any way to turn on debugging in the boot ROM? The boot ROM has a place to store a bunch of nonvolatile stuff. Is there a way, regardless of whether a key is held down or not, where it always comes up in the debugger?

SCOTT BRONSON: To set your machine?

A SPEAKER: Right, for the BeBox.

SCOTT BRONSON: The only way you can do that is to reflash. Basically, that is real easy, and we use it internally like that all the time. If you find it would make your life easier, you can e-mail me and I will send you a boot image that you can flash into your BeBox boot ROM. That way it will always come up that way. I'll be happy to do that. My e-mail address again is Bronson@Be.com.

A SPEAKER: Is there an easy way to debug on the screen? I've found in the past that you drop into the debugger and it has to switch back.

SCOTT BRONSON: Yes. Part of the way BWindowScreen works makes it extremely difficult to interact with the debugger, because the debugger -- it is not a user-level application, but it does take advantage of the screen. And essentially it uses fairly high-level semaphores on the screen; the debugger is not extremely low level. The nice thing about a preemptive multitasking operating system is that there is no real difference -- there is no real low-level and high-level task. You can pretty much do anything at any time. Unfortunately, this is one instance where it gets difficult, because there is contention for the screen, and it has not worked out very well. I don't think this will ever be adequately solved. On the Macintosh, using a source-level debugger -- I don't know if you've tried to debug the update path in your drawing code. Every time the debugger comes forward, it blows away your region.

A SPEAKER: With multiple monitor support, does it go away?

SCOTT BRONSON: Absolutely.

A SPEAKER: You can always run the debugger on monitor two and --

SCOTT BRONSON: Except that this problem has a slightly more insidious manifestation, which is that if a program hangs while it has captured the screen semaphore -- because you can't draw at the same time -- then the debugger can't draw no matter what, unless they're completely separate. And you will have a semaphore per screen, but there are critical semaphores in there, semaphores and stuff like that, that can cause the debugger to be unable to draw. Perhaps mistakenly. There are a couple of things you can do with watchdogs, stuff like that. But I think an easier and better solution is just to use dprintf.

A SPEAKER: I talked to Dominic about that not being synchronous. When you are writing a driver and you do a dprintf, and the next thing you do hoses the driver, a lot of times the stuff in the dprintf doesn't show up. So you don't know that that is the place you died.

SCOTT BRONSON: Dprintf is synchronous.

A SPEAKER: I thought he said it wasn't. I suggested that it wasn't. He said, well, I don't know. The snooze is -- after a dprintf is --

SCOTT BRONSON: Sometimes you do need to do that. Dprintf itself is synchronous. However, the hardware does have internal buffers -- it's been so long since I've done that, I don't remember what chips are on there. I think there is a 16-byte buffer in there. Your bytes might be swallowed up in there when the machine hangs.

A SPEAKER: It is much longer than 16 bytes.

SCOTT BRONSON: You are sure it is hanging after the dprintf?

A SPEAKER: Occasionally I get that line printed out. It depends how the app server is feeling or something. It is not clear.

SCOTT BRONSON: It sounds like a more insidious timing issue. Dprintf currently is synchronous, so the problem is further up --

A SPEAKER: Regarding debugging: could there be a way of opening the BWindowScreen in a different workspace, so you can see the original workspace in which it was run?

SCOTT BRONSON: Oh.

A SPEAKER: When it crashes I cannot see --

SCOTT BRONSON: You set up your workspaces. I guess you have two workspaces, one with the BWindowScreen and one with your printouts in it?

A SPEAKER: I'm not able to run from the application -- a work space --

SCOTT BRONSON: Sorry. You're not able to run what?

A SPEAKER: The BWindowScreen in a different workspace. There is no parameter.

SCOTT BRONSON: When you launch it from the terminal, it comes up over the terminal. It sure does.

A SPEAKER: If you are writing a game, isn't there a way to select the workspace from a --

SCOTT BRONSON: Yeah.

A SPEAKER: So if you launch from the terminal window, select a different workspace than the one you're in, start it in another workspace, then create your BWindowScreen, it will create it there. You can Alt/F-key back to the first workspace.

SCOTT BRONSON: That call is not documented, but I think it is in InterfaceDefs.h. There is a call to switch workspaces. So you can use that. E-mail tech support or Bronson@Be.com.

A SPEAKER: I will check the includes.

SCOTT BRONSON: Cool. That's it?

A SPEAKER: DPMS.

SCOTT BRONSON: Yeah, yeah. I like most parts of DPMS. This is the VESA display power management services. I think that is what it stands for. Basically puts your monitor to sleep. This is actually fairly easy to support. You will see it soon.

A SPEAKER: What about getting the supported refresh rates for the monitor?

SCOTT BRONSON: DDC. DDC is -- I was hoping no one would bring it up. DDC is incredibly complex. Not incredibly complex, but to really be useful, it requires a lot of services from the operating system. You need a list of monitors and capabilities. And then once you get these parameters from the monitor, they're in standard VESA timing format -- either EDID or VDIF, depending on what you can get. Once you've got these timings, in this current architecture there is no way you can then translate them into a config-screen call, because there are just --

A SPEAKER: Too few parameters.

SCOTT BRONSON: Right. Our driver architecture is simpler than VESA's solution.

A SPEAKER: I tried it on the Mac and it works.

SCOTT BRONSON: Yeah, but on the Macintosh you have your own timings. You can --

A SPEAKER: Yes, but you can switch the timings.

SCOTT BRONSON: On the BeBox you can only control the horizontal and vertical size -- and only the ones we give you -- and the refresh rate. So given the EDID, you can go in and calculate the refresh rate and try to find a horizontal and vertical size. That's not real useful if, say, you want to do 832 by 624 or some other resolution that the monitor wants to do, especially if it is fixed frequency. Right now the foundation is not in place to make really useful use of DDC. But this is another thing that we're looking at.

We are VESA members. We've got the specifications. And that is actually starting to show in the architecture that we -- that is coming into existence.

One more thing you might be interested in. Very soon -- if you will send me an e-mail -- I will have YUV apertures set up, so that the card can tell you whether it supports YUV apertures. This is probably the first extension that will happen. If you are interested in this or any of these other things, e-mail me and I can put you on, essentially, an e-mail list set up to keep you posted on this.

Thank you very much.



Transcription provided by:
Pulone & Stromberg
160 West Santa Clara St.
San Jose, California
408.280.1252



Copyright ©1997 Be, Inc. Be is a registered trademark, and BeOS, BeBox, BeWare, GeekPort, the Be logo and the BeOS logo are trademarks of Be, Inc.
All other trademarks mentioned are the property of their respective owners.