Owen Smith: First of all, I would like to welcome every one of you here. I want to especially welcome each and every one of you to the media team. You are one of us now.
As Stephen said, my name is Owen Smith. I'm a DTS engineer here at Be, which means I'm one of Stephen's evil minions.
As an evil minion, I feel it is my duty to convert you all to the dark side of the Media Kit. But Stephen told me I'm not really supposed to say "dark side" because you might get the wrong idea. So I'll just say that the choice is yours, how you want to harness the awesome power of the Media Kit, for good or for evil.
We're going to talk a little bit about the building blocks now, the actual pieces that we're going to be using later on in the day. And you'll be seeing these concepts over and over again as we return to them. So let's take out the hammer.
First of all, I'm going to talk a little bit about nodes. You've already had a brief overview of what they are. I'm going to give you a little bit more information, you'll get still more later on, and by the end of the day you should have a pretty good idea what they are.
Next I'll talk about the connections, how you're actually hooking things together. And finally I'm going to talk a little about the buffers we're using and how they get sent around in the system because that is kind of interesting.
Okay. So let's start off with the basic question: "What is a node?"
Well, a media node is an object that processes buffers of media data. For example, the system mixer takes audio as input: it gets buffers of input, mixes them together, and sends out buffers of output.
Well, there are many kinds of nodes that we'll be talking about. Then we'll talk a little bit about how we actually do the underlying communication between the nodes.
And, finally, I'm going to introduce the two structures you'll be dealing with: The class BMediaNode and the structure media_node.
First of all, what kinds of nodes are there? Being able to answer this question is really, really important. So we have the concept, formal concept, of a node kind. This is the first step towards answering the question: "what kind of node do I have?"
So first of all, let's categorize nodes by how they can handle buffers. What they can do with buffers.
Two kinds here. First of all, you have buffer producers. Buffer producers are nodes that produce buffers of data. For example, the audio input will read from the driver and spit out buffers of audio data.
Secondly, we have buffer consumers which are just the opposite. They receive buffers and process them in some way. For example, the video window will take buffers of video and display them on the screen.
So every node you deal with is going to be one of those kinds. Some of them are both. We call those filters, like a video node.
In addition to this, we can also describe other features that nodes have. Nodes will support a variety of interfaces when it makes sense for them to do so. We include those in the node kind concept as well.
First of all, we have time source. As Stephen described, a time source is a node that is able to provide timing information for other nodes. Hardware nodes often get to do this; a sound card node, for example, will often have a sense of time associated with that sound card.
Next we have controllable nodes. Controllable nodes are nodes that support a user interface. Take the mixer: if you go into what is now called the media preferences panel -- you will all get to see this pretty soon -- and go down to the system mixer, you get this astonishing array of channels and sliders and mute buttons, all provided by the controllable interface. Nodes that do that are called controllable.
And, finally, file interface. Nodes that can read and write files are called file interface nodes.
And, finally, for node kinds we have a couple of special nodes that perform special functions in the system, so we have kinds to describe them as well. If you need to get at those nodes, there is a very quick way you can do it.
Physical inputs, like the sound card input and the video input -- the physical inputs to your system. Just the opposite, we have physical outputs from your system. And then finally we have the system audio mixer, which you can get at by looking for the system mixer node kind.
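To make that concrete, here is a rough sketch of how an application-side check might look, assuming the node_kind constants from MediaDefs.h and the kind bitmask carried in a media_node handle (neither is spelled out in the talk):

    // Sketch: identifying a node by its kind bits. 'kind' is a bitmask, so
    // one node can be several kinds at once (a filter is both producer and
    // consumer, for example).
    #include <stdio.h>
    #include <MediaDefs.h>

    void DescribeNode(const media_node &node)
    {
        if (node.kind & B_BUFFER_PRODUCER)
            printf("produces buffers\n");
        if (node.kind & B_BUFFER_CONSUMER)
            printf("consumes buffers\n");
        if (node.kind & B_TIME_SOURCE)
            printf("can provide timing for other nodes\n");
        if (node.kind & B_CONTROLLABLE)
            printf("publishes a user interface\n");
        if (node.kind & B_FILE_INTERFACE)
            printf("can read or write files\n");
        if (node.kind & B_PHYSICAL_INPUT)
            printf("is a physical input\n");
        if (node.kind & B_PHYSICAL_OUTPUT)
            printf("is a physical output\n");
        if (node.kind & B_SYSTEM_MIXER)
            printf("is the system audio mixer\n");
    }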
Okay. So I've talked about the various kinds of nodes that you will be dealing with. Let's go into a little more detail on how these nodes actually communicate with each other. And it is all done through smoke and mirrors, I'm very happy to report.
Actually, we use ports as the underlying communication mechanism. For those of you who may be new to BeOS programming, or may have been dealing with communication at a higher level, ports are low-level kernel primitives that are message queues.
So you put messages in using write_port, you take messages out using read_port. Each media node maintains one port called the control port, and that is where all the messages that get routed to that node get sent.
So if you want to tell the node to do something, you write some data to its port, it picks it up later on, probably in a different thread because we're super multi-threaded, and acts on it accordingly in its own time.
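For the curious, the kernel primitive itself looks roughly like the following sketch. This is not Media Kit code you would normally write -- the kit manages each node's control port for you -- and the message struct here is purely hypothetical:

    #include <OS.h>

    struct my_message { int32 what; };   // hypothetical payload

    void port_sketch()
    {
        // A node's control port is just a kernel message queue like this one.
        port_id port = create_port(64, "control port");

        my_message out = { 1 };
        write_port(port, 0x1234, &out, sizeof(out));   // sender drops a message in

        int32 code;
        my_message in;
        read_port(port, &code, &in, sizeof(in));       // node's thread picks it up later

        delete_port(port);
    }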
Finally, I'm going to tell you about BMediaNode and media_node. You will see those popping up and I don't want you to get confused.
BMediaNode is the base class that nodes derive from to implement their nodeship. Applications don't usually call BMediaNode functions; of course, if you have a local instance of a node, you can talk to it directly. But when you want to tell a node to run or stop, you don't call BMediaNode functions directly -- that goes through the Media Server. Instead you will be going through the Media Roster as an application, and we'll talk about that later on.
On the other hand, you have media_node. It is a structure provided by the Media Kit which is the actual handle that applications use to talk to media nodes.
So applications are generally dealing with the media_node side of things. Node writers are going to care about BMediaNode and how to work with it.
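A minimal sketch of the application side, assuming the BMediaRoster calls for fetching well-known nodes (GetAudioMixer, ReleaseNode); the application only ever holds the media_node handle:

    #include <MediaRoster.h>

    void find_mixer()
    {
        BMediaRoster *roster = BMediaRoster::Roster();

        media_node mixer;
        if (roster->GetAudioMixer(&mixer) == B_OK) {
            // ... hand the handle to other roster calls (connect, start, ...) ...
            roster->ReleaseNode(mixer);   // tell the system we're done with it
        }
    }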
There is some information about nodes. Of course we'll be going into a great deal more detail during the day, especially if you are interested in having enough fun to sign up for the node track.
Now I'm going to talk about connections and how connections are managed between nodes.
First of all, what do I mean by a connection? I'm talking about the links between nodes that you will be sending buffers of data through. And they're one way.
Buffers flow down a stream like silt flowing downstream, so we have a concept of nodes that are upstream passing buffers down to nodes that are downstream.
Here's what a connection looks like. We have a couple of nodes. There is a connection between them, with a source and a destination, and there is a format of data being passed through it.
What are all these words? Let's describe them in more detail.
The first thing I'm going to talk about is the pieces that actually build a connection. First of all, source and destination.
What are they? Well, source and destination, as you can see, are the end points of the connection. They contain just enough information to identify a particular terminal that you can send data to or send data out of or receive data from.
Buffers travel from source to destination as you would expect, hence the names. And there are two structures that we use called media_source and media_destination that represent these. You can treat these as fundamental objects if you're in an application. You can assign them, test for equality to see if you are talking about the same source, and stuff like that.
Finally, there are two structures that you should be aware of: The null structures, media_source::null and media_destination::null, which represent uninitialized end points. These become extremely useful when you are starting to build your connection. I'll talk about that in a minute. So that is what source and destination are.
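As a small sketch of how those end points behave as values, assuming the comparison operators and null members declared in MediaDefs.h:

    #include <MediaDefs.h>

    void endpoint_sketch(const media_source &a, const media_source &b)
    {
        if (a == b) {
            // both refer to the very same output terminal on the same node
        }

        media_destination where = media_destination::null;
        if (where == media_destination::null) {
            // not hooked up yet -- handy while you're still building the connection
        }
    }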
Let's talk a little bit about the format that we're using to send buffers. The format describes the type of media being sent through the system as you would expect. It is negotiated by the app and the nodes.
So I'm an application, I have two nodes, I want to connect them together, and maybe I have some idea what I want. I want some sort of audio that's playing at 44.1 kilohertz.
The nodes have a much better idea what formats they would like, so they then go back and forth, each one refining the description of the format, and finally they figure out the best format for them to use. There is a negotiation process that we'll talk about later.
Then there are the contents of the actual media format. It consists of a basic media type. And then there is a union of additional information based on the type of media you have.
So what are the basic media types we have? We have a very extensive list of media formats that we either support or would like to support, so we've included constants for them. As technology progresses, and as we make the OS better, we'll be adding to that list.
Basically the ones you'll see are, first of all, raw audio, which is just frames of audio samples being sent, and raw video, which is pixel data that you're throwing downstream.
You have encoded audio, which is stuff like MP3, where the audio is compressed or encoded in some format. You have encoded video, like Cinepak, where the same thing applies to video. And, finally, there is a multistream format you can use if you have audio and video, and perhaps other kinds of data, combined into a buffer in some format, perhaps interleaved.
Well, raw audio is pretty important. We see it a lot. We're going to go into a little more information about the kinds of things you can get from the raw audio format.
First of all, we're talking about frames. Each buffer holds a series of frames, and a frame contains one sample for each channel you have in your format. So a mono frame is one sample, a stereo frame is two samples, and it goes up from there for multi-track audio.
You can also get the format and the channel count, as you would expect. The format I'm talking about here is the data type of the sample, whether we're talking about unsigned characters or floating point audio.
And finally there are the byte order and the buffer size. You can get at those or set them as well.
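A sketch of the arithmetic those fields give you, assuming the media_raw_audio_format field names and the convention that the low nibble of the format constant is the sample size in bytes:

    #include <MediaDefs.h>

    void audio_math(const media_raw_audio_format &fmt)
    {
        // B_AUDIO_UCHAR, B_AUDIO_SHORT, B_AUDIO_INT and B_AUDIO_FLOAT all
        // encode their sample size in the low four bits.
        size_t bytes_per_sample  = fmt.format & 0xf;
        size_t bytes_per_frame   = bytes_per_sample * fmt.channel_count;
        size_t frames_per_buffer = fmt.buffer_size / bytes_per_frame;

        // How long one buffer lasts, in microseconds:
        bigtime_t buffer_duration =
            (bigtime_t)(frames_per_buffer * 1000000.0 / fmt.frame_rate);
        (void)buffer_duration;
    }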
Okay. Raw video -- there is some useful information there too. In video, each buffer generally represents one field, and fields -- if you are familiar with video -- are combined, generally interlaced, to create one frame of video.
So you have a field rate which says how fast I'm actually sending the buffers, and then you have interlace saying how many buffers comprise one frame. Then there is a variety of display information that you can send. There are hundreds of these -- no, not hundreds, but a lot of variables you can set to specify exactly what kind of video you're talking about.
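As a sketch of the sort of thing that lives in a raw video format, using the field names from media_raw_video_format and its embedded media_video_display_info:

    #include <MediaDefs.h>

    void video_info(const media_raw_video_format &fmt)
    {
        float  fields_per_second = fmt.field_rate;   // how fast buffers are sent
        uint32 fields_per_frame  = fmt.interlace;    // 1 = progressive, 2 = interlaced

        // Display information describing the actual pixels:
        color_space pixel_format = fmt.display.format;   // e.g. B_RGB32, B_YCbCr422
        uint32 width  = fmt.display.line_width;
        uint32 height = fmt.display.line_count;
        uint32 stride = fmt.display.bytes_per_row;

        (void)fields_per_second; (void)fields_per_frame;
        (void)pixel_format; (void)width; (void)height; (void)stride;
    }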
And another important thing when we're talking about formats is the concept of wildcards. Very often when you are specifying a format, you don't know all the information that you need to specify the format completely. There are parts you simply don't care about or parts you're willing to be flexible about.
So we have these objects called format wildcards. There is one wildcard for each kind of format: there's a raw audio wildcard, there's a raw video wildcard, and so forth, and these indicate unspecified or flexible parts of the format.
Probably the best way to illustrate this is with a little bit of code here. Don't want to get you too much code too early in the morning.
Let's say you're an application. You have this media format structure; this is the structure you are using to describe your format. And then we have a wildcard, wc, which is where we actually get the wildcard object.
When we want to specify, let's say, we have -- we want raw audio and we don't care what the frame rate is. So we set the type to B_MEDIA_RAW_AUDIO. Then in the union structure which gives the additional information about the raw audio, we set the frame rate to the wildcard's frame rate.
We can assign these one field at a time depending on the fields that we're willing to be flexible about. You can also do it all at once if you want raw audio and you just don't care what kind of raw audio it is.
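The slide's code isn't in the transcript, but it would have looked roughly like this sketch, assuming the media_format union and the media_raw_audio_format::wildcard object from the headers of that era:

    #include <MediaDefs.h>

    void build_format()
    {
        media_format format;
        media_raw_audio_format wc = media_raw_audio_format::wildcard;

        // "I want raw audio, but I don't care what the frame rate is."
        format.type = B_MEDIA_RAW_AUDIO;
        format.u.raw_audio.frame_rate = wc.frame_rate;

        // Or, all at once: raw audio, and I don't care what kind of raw audio:
        // format.u.raw_audio = media_raw_audio_format::wildcard;
    }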
So I've talked about the pieces of the actual connection: the source, the destination, and the format of the data passing between them.
Let's take a look at some of the things we use to describe the connection as a whole. I'm talking about inputs and outputs. Now, people do get confused when they see input and output and source and destination. "I don't understand this." So let's make it clear.
Inputs and outputs represent the entire connection as seen by one of the nodes involved in the connection. So a node's output is a connection where the node actually contains the source end point. A node's input is a connection where the node contains the destination, and I'll show these. I have some nifty pictures for you.
Here's the output. We have a node and it contains the source, and then it is sending off to some destination somewhere else.
So the output information that you can get contains, first of all, the node that we're talking about, the source, the destination, the format that is being sent, and finally a name for this connection. A node is able to name its connections whatever makes sense for that particular node.
And on the other side we have inputs. Now we're seeing things from the destination side. An input contains, again, a source, a format, a destination -- the destination is the end our node is involved with -- and the node itself. And now we give the input a name, which can be different: the guy upstream calls it output 1, the guy downstream calls it input 5, or whatever.
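A sketch of what the application sees, assuming the media_output fields named above (node, source, destination, format, name) and the roster's GetConnectedOutputsFor call; media_input is the mirror image seen from the destination node:

    #include <stdio.h>
    #include <MediaRoster.h>

    void list_outputs(const media_node &node)
    {
        BMediaRoster *roster = BMediaRoster::Roster();

        media_output outputs[16];
        int32 count = 0;
        if (roster->GetConnectedOutputsFor(node, outputs, 16, &count) == B_OK) {
            for (int32 i = 0; i < count && i < 16; i++) {
                // Each entry describes a whole connection as this node sees it.
                printf("output \"%s\": source port %d -> destination port %d\n",
                    outputs[i].name,
                    (int)outputs[i].source.port,
                    (int)outputs[i].destination.port);
            }
        }
    }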
So that's pretty much all I want to say about connections at this point.
Now, let's talk a little bit about buffers and what buffers consist of and how they are moved through the system because I think this is one of the coolest things about the Media Kit.
Buffers are represented by a class called BBuffer. Nodes will see this, applications will not. Applications generally don't care what kind of buffers are being passed. But it is useful information.
A BBuffer represents, of course, a packet of media data that is being passed between two nodes. It contains two things: A header and the actual data. So let's talk a little bit about the header.
The header is supposed to answer the question: "what the heck is in this buffer?"
Now, we've answered most of that question already because we've specified a format for the connection, so we already know the information that applies to all the buffers being sent down the connection. So the header is really pretty small. The header just contains the information that is specific to one particular buffer.
First of all, we have the start time. This is the time at which the buffer should start being performed, and what "performed" means depends on how you hook up the nodes. For instance, if you have audio buffers that you are sending down to an audio output card, the start time would be the time you expect to see that buffer being played at your final output.
You have the size used, which you can use to specify -- if you haven't filled up the entire buffer -- exactly how much of the buffer is actually used.
And then, finally, for some media types like video, you have additional information like "what frame am I in?" and "what field in that frame am I at?"
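A sketch of the node's-eye view, assuming the BBuffer accessors (Header(), Data(), Recycle()) and the media_header fields just mentioned:

    #include <Buffer.h>
    #include <MediaDefs.h>

    void handle_buffer(BBuffer *buffer)
    {
        media_header *header = buffer->Header();

        bigtime_t when = header->start_time;   // when this buffer should be performed
        size_t    used = header->size_used;    // how much of the buffer is filled

        void *data = buffer->Data();           // the payload, living in a shared area

        // ... process 'used' bytes of 'data', scheduled for 'when' ...
        (void)when; (void)used; (void)data;

        buffer->Recycle();                     // hand the buffer back when we're done
    }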
Now we have the buffer data. The buffer data is generally manipulated by different applications. Why? Because different applications in the media system can collaborate to create a giant node chain. So you may have an audio application and a video application trying to work together to produce some cohesive result.
Different applications need to be able to get at the same buffer data. The way we do this is we store the data in shared memory areas so different applications can get to it.
Now I can almost hear you asking the question, "Okay, what exactly does he mean by shared memory area?" No? Well, you probably already know, but I have some pretty pictures, so I'm going to tell you anyway.
BeOS has protected memory, which means that if we have two applications, each of them sees its own virtual 32-bit address space. The memory that an application is using is divided up into chunks called areas, and these areas map to some portion of actual physical memory somewhere.
In normal circumstances, protected memory means each address space -- the areas that an application uses -- map to pieces of memory that are for their use only. If App 1 goes totally berserk and starts writing to random locations in memory, it will not trample over App 2's beautiful memory garden.
So we have two applications. We have areas that point to some chunk of memory somewhere, and they do not meet. But in the BeOS it is possible to set things up so that two areas or multiple areas in different applications can actually point to the same chunk of memory.
So when you write to that virtual memory address, you're actually manipulating that same chunk of memory. And that's the mechanism we'll be using to actually store the data.
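For illustration only -- the Media Kit's buffer machinery handles this for you -- here is a sketch of the underlying Kernel Kit area calls that make one chunk of memory visible to two applications:

    #include <OS.h>
    #include <string.h>

    void share_sketch(area_id source_area)
    {
        // App 2 clones an area that App 1 created; both mappings now refer
        // to the same physical memory.
        void *address = NULL;
        area_id clone = clone_area("buffer clone", &address, B_ANY_ADDRESS,
            B_READ_AREA | B_WRITE_AREA, source_area);

        if (clone >= 0) {
            // Writing here is immediately visible to the other application.
            memset(address, 0, B_PAGE_SIZE);
            delete_area(clone);
        }
    }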
So what does it mean? Let's say we have three nodes and we're trying to pass a buffer down from Node 1 to Node 2 to Node 3. It would be extremely inconvenient, as you would expect, if you had to copy the entire buffer of data downstream at each step.
So instead what we do, we take a small piece which describes the buffer, some header information, and has a pointer to the data which is living somewhere off in shared memory. When we are passing it downstream, we're passing the small piece of information from node to node.
That keeps latency very low, and that's why we do it this way.
So that's all the building blocks I want to talk about. We talked a little bit about nodes: what kinds of nodes you have, the underlying mechanism nodes use to communicate, and the classes and structures you use to deal with them. We've talked about connections -- you know, I hate when people say "we've talked about connections." I've talked about the source and destination, the format, and also what inputs and outputs are. And, finally, I talked a little bit about buffers, so you have some idea what buffers are and how they get passed around the system.
And that's pretty much it. I'll turn it over to Stephen.
Stephen Beaulieu: As is always the case, we're going to take an unscheduled break for questions as people have them. If you remember the original schedule that we had set up -- we're 40 minutes ahead of it now -- that either means we're cruising through this too fast, in which case you might have questions, or the schedule doesn't match up with what we're going to do. Let's take an opportunity to make sure you understand everything we've covered. It is all basics right now. If anyone has questions, let's cover some of them. If you have questions, step up to the mike so that everyone can actually hear you, including our stenographer.
Audience Member: I take it this presentation is being made on a Be machine which might have code on it. Would it be possible to actually pull up some code and look at class definitions and get a little down to it?
Stephen Beaulieu: Yes. That is what the afternoon sessions are going to be about. We can do that. We do have code for both the application side and the node side.
In terms of class definitions, aside from the Media Roster on the app side, the only things you're really working with are on the node side, and we can go into that in quite a good deal of detail. We can talk about the different classes. What is actually going to come up in the next section, which we'll get to in a moment, is the media roster class and what you can do with it, how you can manipulate it, the sorts of things you can do.
Again, the point of this morning is to make sure everyone has the fundamental concepts of what we're going to be doing. This afternoon we'll go into the nitty gritty of how it works.
Audience Member: Is it possible for a single node to have multiple sources and/or destinations?
Stephen Beaulieu: Yes, you can do that. Okay. Let's talk a little bit in general about the way the nodes work on the system.
A node, like we said, is essentially something that knows how to process data. For a mixer, for example: our system mixer takes multiple connections in. It has to. It mixes. That is the entire point.
Currently it has one output to the sound card. We're working on getting it set up so that as you have multiple sound cards and sound cards that support multiple channels of output, you will have multiple outputs from the mixer.
Then you have the concept of, let's say, inputs one through five go to outputs one, two and three, and inputs six through 10 go to outputs one through eight. So, yes, you can actually have nodes that take data from many sources and pump them out to many destinations. And they could be independent, for example.
You could have a node that basically knows how to apply reverb, and it has an input that has some data in one format. Let's say it is audio and 44K shorts. It has one output. One input goes to that output and you apply a filter to it, a delay.
You could also have another stream of data that goes through the same node that is completely independent that is in a different format like 44 float or, you know, it is a .WAV file or something like that. 22K.
So, yes, you can have multiple inputs, outputs, multiple streams going through. You can merge the streams, split the streams. The nodes pretty much can do whatever they want, whatever they're designed for.
Audience Member: Is MIDI a media type or is it still off in its own classes?
Stephen Beaulieu: It is still off in its own classes right now. We're looking into combining those. I don't believe that is scheduled for Genki, the next release. However, I'm not positive; that's something we can get back to you on. I do know that MIDI interfaces are planned to be used in the Media Kit. Whether it will be solely through the Media Kit or not, I don't know. That is up to engineering to decide.
Audience Member: I wonder if you can sketch an example where multiple applications would set up a media stream. Owen talked about that. That hasn't sunk in.
Stephen Beaulieu: Okay. We can do it. I wish we had a white board. We can also do it in the afternoon.
Basically, an application can instantiate a node. Let's say you have an application that -- again, let's deal with a filter.
So you have an application that totally focuses on putting effects onto some form of audio or video. That application's nodes aren't add-ons; they're just part of the application.
It creates a node inside the application that can only be accessed when that application is up and running. But the Media Roster and the Media Server know about the node that it created.
Let's say you have another application that you use to actually edit your audio. You want to combine four or five different tracks, but that application itself just says what tracks should be played when and it doesn't necessarily handle effects.
That audio application can find out about all the nodes that are available -- either dormant ones, just add-ons that can be instantiated, or ones that are live in other applications. And either application could hook up to the other's nodes just by looking for them and going through that.
So you would have the two applications collaborating, one application that just focuses on applying effects, another application that can actually produce things. And they can work together that way.
The other way is that the audio filtering application could just put its nodes inside an add-on, and those add-ons would be available without actually having that application wrapped around them.
Audience Member: To follow up. Would the applications actually have to collaborate, or could one application snip into the node stream and say, hey, I want to sit here -- say you have a filter application -- and do it transparently? Can it set itself up transparently to the player application, so the player application would never know about it?
Stephen Beaulieu: Yes, you could do that. There could be problems. Generally you want one application setting up your node chain.
Yes. What you could do, one application could basically say I've got these four audio files and I want to send them out to the system mixer and send them downstream. It connects each of those files up to the system mixer node which will then send it out to the speakers.
Another application could come along, see that a given audio file is hooked up that way, and it could unhook the two connections, put a filter in place, and hook them up again. The problem with that is if the first application then decides to quit, it just tears things down, and it is not necessarily going to know about those other connections.
So, yes, you can do that. There is an additional layer of complexity for interapplication communication that is not part of the Media Kit in general; those applications would have to handle that themselves to know how it works.
The first app could make sure as it is disconnecting things to follow the chain and just say what are you connected to, what are you connected to, and make sure everything gets disconnected. That would be good behavior, but the apps wouldn't have to do that. You could get complications.
What might be better -- I mean it would work. You would just -- you could just run into problems if one of the apps went away.
So, yes. You had a question up here.
Audience Member: This is totally picky. But in the code example that you had up there, you had a media format and you assigned a type to it, and then you used the wildcard to set up the frame rate. I was assuming, when I was working with the Media Kit, that the media_format constructor sets it all up as wildcards, so setting the frame rate from wc.frame_rate is kind of pointless.
Stephen Beaulieu: This in and of itself would be somewhat meaningless. However, if this is just the first thing you were filling out, it might make sense because you might then go and say, oh, I don't care about frame rates, but I want my buffers to be 2,048 bytes.
Audience Member: But that has already been set by the constructor by the media format. Oh, but then you have to set the media format type.
Stephen Beaulieu: Right, right.
Audience Member: Okay. I would like to ask this as a follow-on. Can an application chain nodes? Some transformations are best applied at the data source and some are best applied at the very end -- for instance, a volume control, if it were a filter, would obviously go at the destination. Can an application chain them?
Stephen Beaulieu: Change them?
Audience Member: Chain them. Chain them together. Link them together.
Stephen Beaulieu: That's the entire point of the Media Kit.
Audience Member: Is there support for a default chain? Let's suppose there is some particular device that requires a chain of nodes to support it and not a single node. Is there support --
Stephen Beaulieu: Basically, if you had a particular device, that device would have a node, and that would be the only way you would be able to talk to that device. Say it's a video capture card: when you buy it and get drivers for the BeOS, you will get a driver that knows how to talk to it at the lowest level, and you will get a node. That node is the thing that knows how to talk to that card, and you won't be able to talk to that card without its particular node, because it is the only thing that knows how to read from the hardware.
Audience Member: You were talking about kinds of nodes. It is a little bit difficult to figure out what role a certain node should play.
For instance, an equalizer processes audio, has an audio input and audio output. A compressor has an audio input and audio output. And if I don't know what role it plays -- that it is a compressor -- I have to figure it out from the defaults, which is very difficult reverse engineering of information that was there when someone designed the node. Do you have provisions for that?
Owen Smith: Yes. Like I said, the node kind is only the first step to answering the question what kind of node do I have. There are several different ways you can specify in a lot more detail what your node actually does, and it is flexible. You can add your own stuff to it.
We'll be talking a little later in the day about node flavors, which is a real key piece of information to identify the nodes you're working with. The node flavor will give you some information, not only descriptions but what kind of formats it can take.
And if you have additional information, special equalizer stuff that you want to provide, like what technology I'm using, or whatever, there are additional reserved spaces inside that flavor structure for you to access. So there is a lot of flexibility there in how you describe these things.
Audience Member: I had a question dealing with terminology and code. The code -- when you are writing the code, when you say node destination, you're talking about the source of the data for that node?
Stephen Beaulieu: Node destination.
Audience Member: The terminology is the destination of the connection?
Stephen Beaulieu: Yes.
Audience Member: When you say node destination, it is not the destination of the node, the data from the node, but it is the destination of the data. When you say node destination, it is the input into the node?
Owen Smith: Yes.
Audience Member: I find it very confusing and I thought I would mention it.
Jeff Bush: It is always --
Audience Member: It is not related.
Owen Smith: There is an interesting dichotomy there. When we're talking about the connection's point of view, destination is the terminus of that connection. When we're talking about the node's point of view, data coming in, we're talking input.
Audience Member: That's right.
Owen Smith: That's why we have that distinction between the two. Generally when you are dealing with applications, you're talking about inputs and outputs. You don't deal with the source and destination unless you're the node and you want to specify exactly what you are connecting to what. When you are creating connections, you use those. When you are describing what you are connected to, inputs and outputs are definitely the way to go.
Audience Member: So an application should never be writing node destination?
Stephen Beaulieu: Yes. No.
Jeff Bush: You will if you need to be that specific.
Audience Member: When I'm writing the code, it is a little confusing.
Jeff Bush: It seems a little daunting at first but it makes sense later because, bear in mind, nodes can have multiple connections. So the destination is actually just a part of that which describes how you talk to the node.
Audience Member: But the destination of the connection, not the destination of the node?
Jeff Bush: Yes. The node itself just has connections, and the destination belongs to the connection, not to the node.
Audience Member: Thanks.
Stephen Beaulieu: One more question.
Audience Member: So it actually isn't a question, it's feedback. You said you're not sure what to do with MIDI. Will MIDI appear in this release as part of the Media Kit?
I strongly argue for putting it in, because we -- our company -- are only here because of low latency with audio. For mixing, 50 milliseconds is sufficient; no problem with that. But you want lower latency if you want to play a virtual instrument, which is actually a piece of software that runs on your computer. Right now you don't put MIDI into the Media Kit.
There is no standard way of giving MIDI information to a node which is a virtual synthesizer. If you don't do that, we have to use our own technology.
Jeff Bush: There is still the Media Kit. I mean, the software synth does go through the mixer.
Audience Member: Can you use the sync function to sync audio and MIDI?
Stephen Beaulieu: One way that MIDI could be put into the Media Kit is for someone to write a MIDI node, where the buffers that are passed downstream are MIDI buffers. That's doable. That's not a problem.
What we're looking at doing is fully integrating the MIDI APIs with the Media Kit and its nodes. I believe we're looking at doing that so it is transparent to you, so if you need to manipulate MIDI you can do that.
Unfortunately, I'm not the best person to answer those questions. We're going to have a Q and A with the entire Be team, and the engineers who are making the decision how that is going to work can answer that question a lot better than I.
Take these two questions up in front and then we'll take five minutes for a break and then we'll hit the rest of this morning's sessions.
Audience Member: Just as a followup to the MIDI questions. One of the concerns that I would have is that, say, someone implements a sequencer application. They're going to look at the current Media Kit to see what the available current MIDI output ports are.
As a software synth we need to expose a MIDI output. Is there currently a way to do that with the tools in place, or do we have to wait for the MIDI stuff to be integrated?
Stephen Beaulieu: I don't know. Part of that is that I'm not an audio person. And so let's revisit that this afternoon. Let's -- I'll make sure we find some sort of answer and present it this afternoon.
In general, we know that MIDI is real important to our entire media system and we're going to integrate it. How integrated it is going to be and the timeline is something I can't say because I don't know. But there are people who might be able to.
Audience Member: Could someone comment on the timeline? We really do need to know.
Stephen Beaulieu: We can't comment --
Audience Member: Anybody else who can comment on the timeline for getting the MIDI input into the Media Kit?
Stephen Beaulieu: Someone can. Is that something we can deal with at the end of the day, please? I don't know -- I know who it is. I don't know if that person is here. I think one of the problems is that the engineer that we would like to answer that question isn't here yet this morning largely because the team has been working their tushies off preparing this stuff for the last four days. So I think he is enjoying a small amount of sleep.
And there is a general Q and A. And at the end of the day certainly that question will be covered, and we'll make sure of that. And I expect, in fact, when we have another Q and A later this morning, that engineer will probably be in the building and we can prod him again if he's awake.
And we can go into it. It is something we clearly recognize as critical and, yes, we will absolutely get you an answer. I just can't do it right now; we can't get an answer because the engineer is enjoying some well-earned rest.
Stephen Beaulieu: One last question up front and we'll take five minutes.
Audience Member: Okay. I hear a lot about nodes that produce information and then it splits -- not splits, it joins in with like mixers to one destination.
Is there a splitter type setup for a connection? Say I have a sequencer that provides information to six different nodes, is there a way to create a connection from that one node to five or six or an arbitrary number of nodes?
Stephen Beaulieu: What you would want to do is send the same information to all of them?
Audience Member: Yes.
Stephen Beaulieu: Yes. It is a splitter node. So that is a node that -- I don't have a splitter node for you. I can't show it right here today.
As we go throughout the day, we've got pieces in place. It is on my list of things to make sure are in the sample code we will get to you in the next week or two weeks to show you how to put all this stuff together.
A splitter node is pretty easy once you get the rest of it done. So, yes, that is how you do it; you can do it that way. It involves copying the data: if you are splitting and sending it, it is going to involve taking the data in and putting it into new buffers, because all the buffer streams are independent of each other.
Audience Member: It will have to happen because all the different nodes are doing different things.
Stephen Beaulieu: Yes, yes. Okay. Let's take five minutes.
Audience Member: I have a question for you just quickly. You mentioned parameters. Are you going to cover what a parameter is?
Stephen Beaulieu: Yes. We will do a basic coverage of that in part of the next session.
Audience Member: Jolly good.
Stephen Beaulieu: Great. Take five minutes and come on back.