March '98 Be Developers' Conference

Approaching a Cross-Platform OS
Stephen Beaulieu

 

Stephen Beaulieu: Can you hear me? Okay. As the sign says, this is the Approaching a Cross-Platform OS session. If you attended the Working With Intel session, the first session from this afternoon, this material is nearly identical, so I would suggest you go see the BDirectWindow presentation in the next room.

So for those of you who want to stay, what we have here is Approaching a Cross-Platform OS. My name is Stephen Beaulieu, I'm one of the Developer Technical Support engineers here at Be.

What this session is going to cover is basically... well, I'll try not to clobber that. We'll go ahead and cover a bunch of things.

Initial thoughts. The BeOS is a multi-platform OS. We currently run on PowerPC, on BeBoxes and Power Macs, and we have an x86 release for Intel, which you all got CDs of today.

As a company, we have a philosophy of being platform and processor agnostic. We don't particularly care what we run on; our goal is to run on a large number of desktop machines with essentially the least amount of engineering effort on our part. And with PowerPC and x86, we now cover something like 96 percent of the desktop market. In the future, we might very well support other platforms. As they become popular and as more people are using them it might make sense to use other processors and platform architectures. As a company we are willing to look at that. There is no guarantee that we won't do something else.

So a lot of what this session will be about is issues for writing applications for the operating system, not writing applications for a given flavor of the operating system. We want the BeOS applications to truly be BeOS applications and run on all the platforms we have available.

A lot of what we will cover in this session are the differences between the two platforms and some general issues that you will need to think about as you develop for the platform, and these will stand you in good stead if and when we show up on Alpha or something like that. Though we are not working on that currently, that processor just came to mind.

So we will first deal with basic compatibility issues, I'm going to go through some specific coding issues having to do with the actual code you write. Some development environment issues and some final thoughts.

Let's step straight into compatibility. First and foremost we have taken a lot of time to make sure that your code will be compatible. If you write code for the PowerPC version, it's a simple recompile to get on the x86. If you write for x86, it's a simple recompile. You should not have to make code changes, especially if you follow the guidelines I give you. That's the goal. Write it once, compile it, move it to the opposite machine, compile it, and you have identical applications up and running on both machines. Along with that, file contents are compatible, so if you have a text file, for example, on the PowerPC version, that same text file can be moved over to the Intel version and the contents are the same.

Of course binaries themselves will be incompatible. That's for two different reasons. On PowerPC we use the PEF format. On Intel we use the PE-COFF format. But more importantly on PowerPC we have PowerPC code inside of the binary and on Intel we have x86 code. We do not have a FAT image format. For those of you from the Mac world, there is a FAT format that has both PowerPC and 68K. We do not do something like that, it's too much trouble for what it's worth. So we have two different formats that we want to support.

Currently with release 3, file systems are incompatible. What I mean is if you take a floppy, for example, formatted on x86 and take it over to a Power Mac running the BeOS, it will not understand it. We had a lot of things we needed to do to get the operating system up and running on x86, and we had endian issues in the file structure itself. So the structures internal to the file system, the ones that describe what information is there, how large it is, and where it is, are endian dependent upon the host platform. We made a decision to keep it that way so we didn't have a lot of expensive swapping every time you went into the file system. One of the nice things about the BeOS is how fast the file system is.

The unfortunate drawback is we didn't have the time to write the tool necessary so a big-endian PowerPC BeOS machine can understand the little-endian BFS format. That will be coming, but I don't have a date for it. It might make it into R4, it might make it into R5, we are not sure. We are dealing with that. It is important to us.

Along the same lines, resource files are incompatible. If you use a resource inside of your PowerPC application and you link against it, you cannot use that same resource file for compiling on x86. The binary data in there is endian dependent. So we do have a couple resource conversion tools I will talk about a little bit later.

Our advice at this point is to create new resource files from scratch for each platform and use them. Okay.

So those are some incompatibilities. But again, the rest of the file contents are compatible, as long as you are not dealing with a resource file and you are talking about the actual contents of the disk, not the structure. So if you want to get files from one machine to another, what you want to use is network transfers; that works the best right now.

The BeOS ships with an FTP client and ships with an FTP server, so you can move the contents back and forth. And if you use Zip, Zip understands the Be file system and Zip also is endian aware. It does the right byte swapping so you can take a file, Zip it up, preserve all its attributes, and transfer it over to the opposite architecture and unpack it. Everything will be unpacked the right way. So FTP and Zip are your friends; network transfers work really well. Those are some general compatibility issues.

Now I'm going to go through some coding issues. As I said, we made a big effort to make sure code simply compiles. It just works.

But sometimes it doesn't. And when it doesn't you are generally running into two different issues: either byte order issues or you are running into problems with importing and exporting symbols. So we will go ahead and deal with those in order.

Starting with byte order, the biggest difference between the two platforms as we stand right now is that one is big-endian and one is little-endian. I'm going to go through and just explain the differences between big-endian and little-endian, talk about byte swapping, and what functions we have available for that. I'm going to talk about methods we suggest that you follow to make sure that everything is swapped correctly. And finally, after you know how to do everything, I'm going to let you know when you actually have to worry about it.

So as most of you hopefully know, the difference between big-endian and little-endian is which byte in multi-byte data contains the most significant information. So if we go ahead and take a look at the decimal number 4660, that's a multi-byte number; on big-endian it's represented as 0x1234, with the most significant byte, 0x12, stored first, and little-endian has it swapped the other way, with the 0x34 byte stored first. Multi-byte data is in opposite order.

If you were to read that big-endian data on a little-endian machine without swapping, you would get 0x3412, which is 13,330. So you can see the big differences you can get for a value you want to work with if you don't have things swapped correctly.
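A minimal sketch of this, not from the slides, that you can compile on either platform to see which order your machine actually stores the bytes in:

    #include <stdio.h>

    int main()
    {
        unsigned short value = 0x1234;  /* decimal 4660 */
        unsigned char *bytes = (unsigned char *) &value;

        /* A big-endian machine (PowerPC) prints "12 34";
           a little-endian machine (x86) prints "34 12". */
        printf("%02x %02x\n", bytes[0], bytes[1]);
        return 0;
    }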

So, what do you need to do with byte swapping? You need to swap multi-byte data like floats, doubles and various flavors of integers and types like BRect for example, that have multi-byte data as part of their members. There is a series of different actions you can do. You can swap from native host order to big-endian, from native host to little-endian, from little-endian to host, from big-endian to the host, or always swap. In the case of host swapping if you are a little-endian machine and you want to swap to little-endian, nothing happens, there is no overhead for having used the macros under those circumstances.

All of the type definitions, the swap actions you can take, and the macros I will go over in a moment are found in the <support/ByteOrder.h> header file.

Let's switch to the byte swapping macros we have available. There are a lot more than I have in the examples here; these swap single units. If you have a single integer or float and you want to make sure it's in the right order, you use the macro for it. That's the best way to do it. For each of the swapping actions I listed, there is one macro available for each of the various types: double, float, int64, int32, int16. I've just shown one example of each type and one example of each swapping action, but again, there are many more listed in the header file. That's only useful, again, if you have a single instance, say an integer.
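A rough sketch of the single-unit pattern; the macro names are the ones from <support/ByteOrder.h>, while the helper functions are just illustration:

    #include <support/ByteOrder.h>

    int32 ToWire(int32 host_value)
    {
        /* Host order to a fixed big-endian order before the value
           leaves the machine; on a big-endian host this is free. */
        return B_HOST_TO_BENDIAN_INT32(host_value);
    }

    int32 FromWire(int32 wire_value)
    {
        /* Fixed big-endian order back to whatever the host uses. */
        return B_BENDIAN_TO_HOST_INT32(wire_value);
    }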

What if you have a type like a BRect, or if you have an array of integers or floats and doubles and the like? For those we have a function called swap_data(), also listed there, that will handle them. Basically what you do is pass the type, so it knows what sort of information it's going to be swapping; a pointer to the data; a length of the data, which could mean an array of, say, 50 floats; and what sort of action you want to take on it, and it will batch process everything. You can also pass in well-defined BeOS types. As you get more experienced in Be programming you will see we have a bunch of constants for message types, float types, time types, character types, string types, and we use those constants to define what sort of class or structure we are dealing with.

And for example, something like BRect or BMessenger, which have multi-byte data internal to them, you can pass to the swap_data() function and it will handle them for you. There is also a function called is_type_swapped(). It will tell you whether a given type is swapped by swap_data(). So that's kind of what swapping is about, and how you can access the information to actually do the swapping.
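A sketch of the batch case, assuming an array of floats that arrived from a big-endian source:

    #include <support/ByteOrder.h>

    void SwapSamplesToHost(float *samples, int32 count)
    {
        /* is_type_swapped() confirms swap_data() handles floats;
           on a big-endian host the action below is a no-op. */
        if (is_type_swapped(B_FLOAT_TYPE)) {
            swap_data(B_FLOAT_TYPE, samples, count * sizeof(float),
                      B_SWAP_BENDIAN_TO_HOST);
        }
    }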

Now for some philosophies for managing the swapping. There are pretty much two ways you can do it: natural or canonical swapping. Natural pretty much says you have a native host byte order inside your machine (if you are x86, that's little-endian), and you are always going to write in that order. Then you set a bit somewhere in whatever you have written that describes which order the file is written in, and when you read it, the first thing you do is check that bit; if that bit says, oh, I'm a little-endian machine and this is big-endian data, you need to make sure you swap it. That's one way of doing it. The only time you have to swap then is on reading.

The other way to do it is canonical, which basically says I'm always going to write my data in a set format, so I always write big-endian or always write little-endian, and I swap on writing and reading if necessary. So if the canonical order is little-endian, a big-endian machine will always swap when writing that data, and will always swap on reading. So, before the question comes out later: which is better?

It really depends on the sort of application you are working on and what exactly it is you are doing. In general, natural has a lot less swapping involved. You only have to swap upon reading, and then only if the data is from a different format. That's good for something like a word processing program where you are writing a file to a disk: if most of the time a person is going to be using it on just their one machine, and it's not going to be flying around various other places, there is a lot less swapping that goes on if you write in the same format and then read it back.

But you could have another type of program where canonical might make more sense. Let's say a driver or a preference program for hardware that is more often than not run on one platform or the other. Say a card exists that currently only runs on Intel, for example an AGP card, and a PowerPC machine came out later that supported AGP cards. For the time being, as you write your data you might say, I'm just always going to write little-endian, because there are no PowerPC machines that will ever read it. Then on the off chance that someone does need that hardware and runs your program on a PowerPC machine later in the future, that program will end up having to swap. But in the meantime, in 99 percent of the cases, people will be reading little-endian data natively, so for something like that canonical makes sense.

So it really depends upon your situation and exactly what your program does, and you just might want to experiment.
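To make the natural approach concrete, here is a minimal sketch assuming a hypothetical Settings struct; the marker byte and field names are invented for the example:

    #include <support/ByteOrder.h>

    /* A hypothetical settings record for the "natural" approach. */
    struct Settings {
        uint8 big_endian;    /* 1 if written big-endian, 0 if little */
        int32 window_count;
        float zoom;
    };

    void PrepareForWrite(Settings *prefs)
    {
        /* Stamp the record with the host's own byte order. */
        prefs->big_endian = B_HOST_IS_BENDIAN;
    }

    void FixAfterRead(Settings *prefs)
    {
        /* Swap only when the file's order differs from ours. */
        if (prefs->big_endian != B_HOST_IS_BENDIAN) {
            prefs->window_count = B_SWAP_INT32(prefs->window_count);
            prefs->zoom = B_SWAP_FLOAT(prefs->zoom);
        }
    }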

So when do you need to swap? When do you care about all this stuff? The only time you ever care about it is when data is possibly entering or leaving the system: when you are writing to a file that could be transferred over the net or by floppy disk (or, once we get the file system stuff sorted out, a disk you drop into another machine), or data that could be read by an opposite system or sent over the network. The only time you care is if it's persistent data or you don't know where the data is going to be read on the other end. You also don't have to worry about it if it is a known BeOS type. The BeOS handles swapping of its known types.

For example, the BMessage, it's a great thing. It's got functions to flatten the BMessage to a stream of bytes that can be written to disk or shoved out on the network. When you then unflatten that stream of bytes into a BMessage, the BeOS handles all the swapping that goes on with that. You don't have to care about it. So any of the known types within the BMessage are swapped; it's got ints, floats and the like. Strings don't need swapping because they are read a single byte at a time. But it handles all that.

Same with file system attributes on unzipping, it takes care of that for you, you don't have to worry about it.

However, you are responsible as a developer for swapping your own developer-defined types. So if you have a proprietary binary settings file, for example, where you write a couple floats and a couple integers because you don't want to do it in text, you just write the information straight to the file because it's a lot faster to read that way. Well, then if that file goes somewhere and you have an application needing to read those preferences back in, you need to handle the swapping.

Also if you create proprietary structs and tuck them into a BMessage, that is something that you have to swap. The BMessage doesn't know what the struct is, it's raw data, so it doesn't swap it. In general that's the philosophy: if the BeOS doesn't know what it is, it's not going to be able to swap it, and you are responsible for making sure it gets swapped.
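A sketch of that situation, with a hypothetical ScoreEntry struct; AddData() and FindData() with B_RAW_TYPE are the standard way a BMessage carries opaque bytes:

    #include <app/Message.h>
    #include <support/ByteOrder.h>

    /* A hypothetical struct carried through a BMessage as raw data. */
    struct ScoreEntry {
        int32 points;
        int32 level;
    };

    void AddScore(BMessage *msg, int32 points, int32 level)
    {
        /* Store in a fixed big-endian order; BMessage treats
           B_RAW_TYPE as opaque bytes and will not swap it. */
        ScoreEntry entry;
        entry.points = B_HOST_TO_BENDIAN_INT32(points);
        entry.level = B_HOST_TO_BENDIAN_INT32(level);
        msg->AddData("score", B_RAW_TYPE, &entry, sizeof(entry));
    }

    status_t GetScore(const BMessage *msg, ScoreEntry *out)
    {
        const ScoreEntry *raw;
        ssize_t size;
        status_t err = msg->FindData("score", B_RAW_TYPE,
                                     (const void **) &raw, &size);
        if (err != B_OK)
            return err;
        /* Back to host order on the receiving side. */
        out->points = B_BENDIAN_TO_HOST_INT32(raw->points);
        out->level = B_BENDIAN_TO_HOST_INT32(raw->level);
        return B_OK;
    }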

But it's only persistent data. If you use an integer internal to your program while it's running to keep track of how many hits you have gotten or something, or where you are in a loop somewhere, and that information doesn't have a chance to go anywhere else, it never goes out to another machine on the network, it never gets saved to a file, you don't have to worry about swapping. Only persistent data that could be sent somewhere else is subject to swapping. In most cases people don't have to worry about it.

Just as a quick suggestion for preferences: one thing you might want to do is actually use a flattened BMessage. That way you can put all your preference information into a BMessage, flatten it out to disk, put it in a file, and when you read it back in you don't have to worry about swapping, as long as you are not putting custom structs in there. That's one way you can do it without worrying what platform you are on.
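A minimal sketch of that suggestion, using plain C file I/O; the function name and the caller-supplied path are illustrative:

    #include <stdio.h>
    #include <app/Message.h>

    /* Save preferences as a flattened BMessage; the flattened form
       is endian-safe for every type the BMessage knows about. */
    status_t SavePrefs(const BMessage &prefs, const char *path)
    {
        ssize_t size = prefs.FlattenedSize();
        char *buffer = new char[size];
        prefs.Flatten(buffer, size);

        FILE *file = fopen(path, "wb");
        if (file == NULL) {
            delete [] buffer;
            return B_ERROR;
        }
        fwrite(buffer, 1, size, file);
        fclose(file);
        delete [] buffer;
        return B_OK;
    }

Reading the preferences back on either platform is the mirror image: read the bytes from the file and hand them to BMessage::Unflatten(), which does any swapping that's needed.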

We talked about byte order issues. The other main thing is importing and exporting symbols from a shared library, for example if you are writing an add-on or if you need to link against a system library. There are several ways that you can do this. We have a suggested method for handling this on both x86 and PowerPC, and that's with the Windows-style __declspec format.

You use __declspec(dllimport) to import a symbol into your application and __declspec(dllexport) to export it. It works on both x86 and PowerPC. It works on the Metrowerks compilers we have now. It's also likely to be supported on other compilers that become available in the future. For example, if Borland decided to port their C++ compiler to the BeOS, declspec would work, because they already have it on Windows, and declspec is the Windows format for doing this. It's a good common way of doing it.

On Intel, I mentioned forward declarations; one of the nice things about having forward declaration files is that you don't need to change your code. On Intel, when the compiler goes through your code, the first instance of a symbol it finds, the first global data, function, or class name, that's when it determines whether that symbol is going to be imported or exported from the binary.

So a forward declaration file is essentially a header file that lists every single symbol you are going to import or export, and decides right there which it is. You include it first, before all your other header files, and you are guaranteed that on both the PowerPC and Intel versions of the BeOS the right thing will be done and your symbols will actually be correctly exported.

I've given you an example at the bottom of what these header files might look like. It's impexp.h; it exports MyClass and imports my_global_function. At Be we have a header called BeBuild.h; it has a macro at the top and it lists all of the classes and functions we export. That's a good way to do it. And that's available on the CD.
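A sketch of what such an impexp.h could look like; MyClass and my_global_function are the names from the example above, and the arrangement is one plausible way to lay it out:

    /* impexp.h -- a forward declaration file. Include it before any
       other header, so the compiler's first sight of each symbol
       carries the right import/export declaration. */
    #ifndef IMPEXP_H
    #define IMPEXP_H

    class __declspec(dllexport) MyClass;              /* we export this */
    __declspec(dllimport) void my_global_function();  /* we import this */

    #endif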

There are also some more in-depth discussions of this in the release notes, actually on the CD as well.

Right. More development issues. We talked about coding issues; at this point I want to talk about things that are not specific to code but to the environment you work with. Currently the BeOS (R3 for PPC will ship in April, and R3 for Intel is available now) ships with Metrowerks compilers. Their names are, oddly enough, mwccppc and mwldppc on the PPC side, and mwccx86 and mwldx86 on the Intel side.

That's a convention you will see throughout coding on the BeOS. Basically we identify if something is different for PowerPC or Intel, generally we have ppc or x86 somewhere in the folder name or in the binary name, so you know which one you are supposed to be using.

In both cases the header files path is the same, it's boot/develop/headers.

Link libraries are a little different. What I mean are the standard shared libraries and the static libraries you can link your applications against, either ones that you create yourself or the BeOS-provided ones you need to link against. The path, again, is fairly obvious: in both cases it's boot/develop/lib. So the develop folder has most of the information you need: the header files, and a lib folder that has inside of it either a ppc or an x86 folder containing the libraries you link against. So if you are on x86, or building for x86, you want to link against the files inside the x86 folder there.

There are some differences between PowerPC and x86 for whether you link against the link library or runtime library. I will go ahead and list the various standard libraries you need to link against.

Link libraries versus runtime libraries.

PowerPC links against runtime libraries. That means you could link against the files in boot/develop/lib/ppc, or you could link against that same library actually in boot/BeOS/system/lib. We recommend you link against the ones in the develop folder, but they are actually the same libraries.

x86 is different; it links against link libraries that are separate from the runtime ones. Shared libraries get a .LIB extension, for example. The reason is the structure of the object format; it comes down to defaults. On PowerPC, when you build an application, the default is that it will automatically import symbols from anything that it links against. So when it finds a symbol that isn't defined inside the binary, it assumes it will find it in one of the things it links against.

On the x86 side it's a little bit different. The default is that symbols you define are internal symbols, and you have to specifically say at compile time whether a symbol is internal (the default), whether you will be exporting it to the world, or whether you will be importing it. And it has to be done at compile time.

So generally what happens is that when a shared library is created on x86, alongside the runtime version a .LIB version is created that you can link against.

So some examples of this. For PowerPC, two of the files you need to link against are start_dyn.o and libbe.so. Nothing special there.

On x86, start_dyn.o is a static library and doesn't get the .LIB extension, but libbe.so does.

So, standard link libraries; these are the libraries you need to link against. We have three static ones: glue-noinit.a, init_term_dyn.o, and start_dyn.o. These static libraries give you access to the dynamic linking capabilities of the BeOS. They allow you to link against other shared libraries. The two shared libraries that in general you need to link against are libbe.so and libroot.so, and again they need the .LIB extension on x86.

Now, libbe.so contains all of the basic Be classes: windows, views, messages. libroot.so contains the memory management functions and some of the standard C stuff.

So, that's kind of a lot of things. Again, as I mentioned earlier, resource files are incompatible, which means, you know, you need to have different versions for both sides. We do have a couple conversion tools that were supposed to be included on the CD but actually weren't.

In the final documentation there are two programs called rscvt_ppc and rscvt_intel. I've got a newsletter article coming out this week and we will be posting those up on the web; we just realized they didn't make it onto the CD. These resource conversion tools are extremely limited. They will convert application information and some BMessages, nothing else. If you have anything else that you put in resources, they won't do anything with it. So what we honestly suggest you do is create your resources from scratch on PowerPC, and create them again from scratch on x86. That's just a lot easier and is guaranteed to work.

This is another issue, like the file system issue, that is very important to us. We are working on making the resource file formats compatible so that you can use the same one on both sides. Again, I'm not sure exactly when that will be in; I think we are trying to get it in for release 4, which, if I'm correct, comes out in September. Right.

Cross-compilation and testing. One thing you see with the new Metrowerks tools, which I guess you can get now from us, is they do provide a cross-compiler, so on the PowerPC side when you get the full Metrowerks tools, you have an mwccppc, but you also have the linker and compiler for x86. They are PowerPC binaries that will output x86 binary applications on the far end, and we have the same thing on Intel. At this point in time we at Be have not tested these very much. The Metrowerks people have tested them and they seem to be in good shape.

But at this point we are not recommending that people use them, partially because they are untested internally, but also because it's very important, if you are going to be writing applications that work on both platforms, that you have both platforms available to test on. We do not want people who just have PowerPC machines building x86 binaries without having access to x86 machines to run them on, to see if they work, to see if they have the sort of byte order problems that they might. So at this point we are suggesting that you have both platforms for testing, and go ahead and compile on each as well.

Again, a question from the earlier session, Working With Intel, which covered pretty much the same information: someone asked, what is Be going to do to make sure that cross-platform applications are actually done? What's to make sure that people with just PowerPC machines actually make Intel versions, and more importantly, that people who are Intel-only and may not have access to a PowerPC make PowerPC versions of their applications? There is almost no way for us to enforce behavior. What we want to instill in people is the idea that it is very important to think of it as a BeOS application, not a flavored BeOS application.

And we want to encourage developers, if they don't have access to a PowerPC machine, to go ahead and use the cross-compilation tools to build the x86 version, post it somewhere, and team up with some other developers who can actually do the testing for them. We want developers to work together, if necessary, to make sure that applications are available in both flavors. And in the future, if we add another platform, we want the same sort of behavior. We want to make sure everything works.

So as far as I know, we are not going to have a requirement on BeWare that you submit both versions. But we strongly recommend that you try to have both machines and do the work yourself. And if not, team up with some other developers who do have access to the machines you don't, and work with them to make sure your application is correct. Post it on your web site, post it to BeWare, and basically say, I want feedback; here is an xMAP file, and if it crashes, tell me where it crashed.

Finally, project files. There are some changes you have to make to BeIDE project files. Currently you can take a project file from PowerPC and move it over to x86, and remove the files you don't need. You can do that. But also, in my role as a dev support engineer I have written some cross-platform Makefiles that were mentioned in a newsletter article a couple weeks ago. And here are the links to two of them that are available.

The first one will produce R3 x86 binaries and PR2 PowerPC binaries. That's available right now from that link. What I do not have available at this point in time, though I will have it by Monday, is a Makefile that will build release 3 x86 and PPC binaries.

So what do these look like? Basically you use them to define the name of your application; what it is, whether an application, shared library, or static library; what resource files need to go along with it; what source files go into it; what libraries you need to link against; and what additional include paths you need to worry about. You write all that up, and then the Makefile handles all of the cross-platform, platform-specific things.

For example, what the Makefile does is look at what you are running on. If you are running on PowerPC, it assumes you want a PowerPC binary, takes the libraries that you have added, puts them in the appropriate paths, and uses the right compiler. Same for x86: it will add the .LIB extensions and everything else you need. So it's fairly easy and straightforward. Again, I'll go ahead and go through some final thoughts.

As we are a cross-platform OS, we want people to write applications and have them be available everywhere we are. It's very important. It's also very important, if you do have multiple versions, to make sure when you are writing data out somewhere that you pay attention to byte order. Byte order is crucial; it's the one thing that will tend to trip you up when dealing with applications available on multiple platforms. Be careful about it. You do not know what version people are going to be running.

And I think that's pretty much it. I am here to answer questions you might have about these issues, or let you guys go.

A Speaker: Can you say a little bit about why G3 isn't supported?

Stephen Beaulieu: Yes. That's a separate issue. The person here was wondering why G3 machines are not supported. There are a couple different issues involved with that. The first issue is that the G3 itself, as a PowerPC processor, has a standard instruction set, and we support the standard instruction set. Our PR2 version had a bug in it that did something wrong in regards to the G3, and that might continue in R3. It's very likely that by R4 for PowerPC we will support the G3 processor without a problem. What that means is that in the current PowerPC machines we support, you could add a G3 daughter card to replace the old processor and we would run on that.

Now, the other issue is Apple has come out with a bunch of new G3 machines that have a brand new motherboard configuration, a daughter card for I/O, things like that. We want to support those, but we can't, because Apple will not give us the technical specifications for those machines. We are a small company; we don't have the engineering resources to essentially reverse engineer the motherboards, and that would probably open us up to a lawsuit. It's not worth our time and trouble to do that. We want to support it. But going and doing it ourselves is just really not an option at this point in time. So we are working with Apple, we are trying to get them to give us the specifications. If you want to make sure that we can run on these G3 machines, the people to talk to are not us, it's Apple; get them to give us the specifications.

I will get to you in just a minute.

As a company, again, we are agnostic about this. We want to support these things. If Steve Jobs walked in right now and handed me the specifications, chances are that in R4 we would actually have support for these. It's just motherboard stuff we don't know about; the chip is already supported. Is that likely to happen? I don't know. We have committed to supporting the BeBoxes we have put out for two or three years, but the problem we are going to run into is that the Macs and the BeBoxes that we support now are already at least a year old; a year from now they will be two years old, and for a very, very inexpensive amount of money you will get much better performance on Intel.

Will we eventually become an Intel-only architecture? In two years we may, if we don't get the specifications. But that isn't something we are actively wanting to do; that's just where it is right now.

Do you have a question?

A Speaker: You say Apple is the people to talk to. Do you know who to talk to at Apple?

Stephen Beaulieu: I honestly don't know off the top of my head who we have talked to at Apple, who we have asked. Perhaps the best person to get that answer from would be our VP of marketing and sales, or Steve Sakoman, our VP of engineering. They will know who we have gotten in touch with at Apple and who might be the best people to write to. And if you can't corner them, you can go ahead and send me mail at devsupport and I will try to get an answer back to you directly.

A Speaker: Just a follow-up. I thought I remembered there was some kind of technical problem with caching on the G3 card that makes it less desirable in multiprocessor configurations, is that...

Stephen Beaulieu: No, not really. Just very quickly, the current G3 chips are basically newer versions of the 603 chips, which we supported in our BeBoxes and which we support now. They do not have instructions built into the processor for managing cache between multiple processors, so we do that in software. So yes, someone could make a multiprocessor G3 machine with 750 chips, and we could run on and support that if we supported the motherboard. But you will get tons better performance from... eventually there will be generation 3 chips based on the 604 design, which do have the cache control instructions, but that, you know, is neither here nor there for our reasons for support.

A Speaker: I was under the impression that the PowerPC could run little-endian or big-endian modes. Is that the case? Has Be considered doing a swap of the PowerPC version?

Stephen Beaulieu: The PowerPC chip can run in little-endian mode. However, what it does is internally do all the swapping in the chip. Swapping on a PowerPC is a single instruction. It is naturally a big-endian chip. If you are storing little-endian data, it swaps it every time, so the performance isn't nearly as good as running in big-endian.

So no, there is really no reason for us to degrade performance on PowerPC by doing that. What we need to do in terms of the file system problem that was mentioned earlier, which might be why you were asking, we just need to write a tool for each version that just understands how to read the other version. It's just going to be a matter of time before we get to that. I just don't know when it's going to happen.

Yes, sir.

A Speaker: I am using a Mac C500LT, and Be doesn't want to boot on that. It's my understanding that you guys know about it and will fix that in the next two weeks.

Stephen Beaulieu: Actually we did fix it. We found out what the problem is. I don't remember...

Doug, do you remember what it was off the top of your head?

Doug Wright: There were a couple weird things.

Stephen Beaulieu: If you want the full details on that, you could talk to Bob Herold, our director of OS design, but my understanding is we do support those machines; we have a fix currently, and they will be supported in the R3 PowerPC release.

A Speaker: Which will come out in April?

Stephen Beaulieu: Yes, available in April.

Yes, sir?

A Speaker: What's the performance like for swapping large amounts of data, particularly relating to distributed computing issues, if you have several machines of different architectures running Be?

Stephen Beaulieu: It really depends on the data and how much you are doing it. I believe on both x86 and PowerPC, swapping an int is a single instruction. But if you are passing an array of 10,000 integers, yes, it's going to take a while to swap it, if you need to swap it on one side or the other. So you will take a performance hit. That type of performance hit is one of the reasons why we didn't just keep BFS big-endian, for example, and swap on Intel, because swapping for all of that is time-consuming. But yes, that is something you will need to worry about.

Again, that might be a situation where writing in canonical order helps. Say you always write little-endian: if you are expecting most of the machines to be x86 machines, most of them won't have to do any swapping, but the PowerPC machines will then have to swap twice, on reading and writing. Or you could have everyone swap as necessary on reading. But yes, there is no doubt about it, there is going to be a performance hit if you do that.

Yes, sir?

A Speaker: You were mentioning if you are developing on PowerPC and you want an Intel binary you need to test on Intel; does Be consider something like Virtual PC to be a sufficient test environment for that?

Stephen Beaulieu: Two answers for that. One, it doesn't work; and two, even if it did, no. Simply because the BeOS itself is multi-threaded, and the Mac OS, for example, at its hard core, isn't. So you are not going to get the proper threading behavior to be able to test against. The performance is going to be so different, and there are all sorts of things it has to do to emulate the hardware, that it could just pretty much make you miss something.

Now, one thing that is available: people have come into the office in the past to finalize some work. It is possible that if people were in the area, we at Be could let them come in and have access to Intel and PowerPC machines to do the compiling and the testing. Again, nine times out of ten, especially if you are careful, it's going to work and there won't be any problem. But if you are on PowerPC and you are worried about Intel testing, Intel boxes are cheap. You might not want to do that, but there are going to be a ton of people out there with Intel boxes who can help you with testing.

Now, that might mean that one or two of those people you are working with end up seeing your code, and you may or may not want that. If you want to keep your code entirely inside the company and do it all yourself, the best bet is to buy a machine with an Intel chip.

Yes, sir?

A Speaker: The multi-threading issues you were mentioning that wouldn't show up on Virtual PC, those aren't the sort of things that make any difference; they could show up as bugs on PPC as much as on an Intel box. Obviously running Virtual PC with BeOS on top is totally missing the point, but as far as seeing whether the app comes up and runs, it's all there; that would be kind of helpful. At the moment I tried it and it can't find the boot disk and I assume it's because the CD-ROM...

Stephen Beaulieu: Yes. Just to repeat the question: on the multi-threading issues, for example, that you wouldn't be able to work on in Virtual PC if you could get it up and running, the point was that you would likely have found those multi-threading issues on the PowerPC side already, and those wouldn't be the sort of thing you are testing for on the Intel side. In general, yes, I would agree with you. Those aren't the sort of things one would expect to have problems with if you are just dealing with binary issues. However, you don't know. It really is going to depend on what you are doing and how you are referring to things. Probably that's not likely to be the case. However, adequate full testing of an application in Virtual PC, regardless of whether you are just looking for binary issues or not, I would not, and Be in general would not, consider that adequately tested.

You are free to do that and put it out and nine times out of ten, hopefully it won't be a problem. But in general that's not what we would recommend.

As to the problems with getting Virtual PC up and running and getting the BeOS for Intel up and running on it: Virtual PC basically emulates hardware, it gives the appearance of a certain set of hardware. The fact of the matter is that the particular set of hardware it emulates isn't supported by Be at this time. That's the reason you are not getting it to boot.

A Speaker: Does the file system incompatibility apply to CDs?

Stephen Beaulieu: Does the file system incompatibility apply to CDs? Absolutely. Yes.

Anyone else? Yes, sir?

A Speaker: A follow-up to the former question. What about Orange Micro? It actually puts an x86 chip in your PowerPC machine.

Stephen Beaulieu: For testing? The question was about the Orange Micro Intel processor sitting inside your machine. That won't work, because that's only the processor. The software that you use with Orange Micro basically maps the requests from the Windows architecture over to the actual Mac hardware. And we take over the whole machine; we would have no idea what to do with an Intel chip sitting inside a Mac box. It just wouldn't work, and it would be a big technical mess to try to get it to work. It's just not worth the time, unfortunately.

A Speaker: Is there any reason it wouldn't be possible to have a CD which could boot and run apps for... like basically partition for x86 and/or PowerPC on the same CD?

Stephen Beaulieu: Yes. In fact you could have a CD that has both partitions on it. That's not a problem whatsoever. The point would be that on the PowerPC side you wouldn't be able to see the x86 partition, and on the x86 side you wouldn't be able to see the PowerPC partition, currently. We will fix this in the future.

A Speaker: You will fix this by R4? Pretty soon you will get the endian issue fixed, it's not an issue for very long, is it?

Stephen Beaulieu: We are going to fix it. The resource files are pretty definitely slated to be fixed come R4. Our goal right now is to finish getting the PowerPC version of R3 done and tested, and then the engineers go back and figure out what's going on. Our priorities are kind of determined by engineering and what other things we need to get done in time. If the file system incompatibility is a big issue that developers are concerned about and want to get fixed by R4, send mail to us. Bring it up on BeDevTalk. We will hear that. It is important to us, but at this point I don't know if it is definitely scheduled for R4. It may not be; it might be scheduled for R5. If you want it scheduled for R4, tell us. I'm sure that's what you are saying, but send us e-mail, because it's easier for someone like me at devsupport to go to engineering and say, look, a bunch of people are insisting this gets done in time.

Yes, sir?

A Speaker: Does the x86 version support SCSI drives and does it support multiple IDE drives?

Stephen Beaulieu: Does the x86 version support SCSI and multiple IDE drives? No and yes. In R4 we will have Adaptec SCSI card support on x86. We do not support it currently; in fact R3 will not support Adaptec SCSI cards, because we do not feel comfortable with how adequately the support has been tested. It's very important to us, and we know it's going to be there. It's possible that SCSI support will show up in an interim release, and it is definitely slated to show up come R4; it really depends on when interim releases come out and whether we feel it's been tested enough. In terms of multiple IDE drives, yes, we currently support that. Most Intel machines have two IDE buses, which allow you to get up to three or four different drives in there. Another thing we don't support in R3 for Intel is removable hardware; you can't use a Zip drive. That's something we are working on. We just couldn't get it implemented in time, and it was very important to get the release out, since lots of people were wanting to see something on Intel.

Yes, sir?

A Speaker: You can, you just can't remove it, correct?

Stephen Beaulieu: No. At some point in time you could put one in, and it used to show up as a CD, which meant it showed up as unwriteable. So you can't format it under R3, and we don't understand the old format. So essentially you just can't use them.

I'm going to go ahead and stop now. Thank you all for coming. If you have any more questions, feel free to follow up in person. Thanks a lot. (Applause.)