[CLAM] Re: AudioPorts Usage

Stéphane Letz letz at grame.fr
Thu Sep 15 01:16:17 PDT 2005


>
> Message: 6
> Date: Wed, 14 Sep 2005 20:04:05 +0200
> From: Pau Arumi <parumi at iua.upf.es>
> To:  clam at iua.upf.es
> CC:  mblaauw at iua.upf.es
> Subject: Re: [CLAM] AudioPorts Usage
>
> <offtopic>
> Stéphane, I hope you are a Lyon supporter and so had an excellent
> time with the 3-0 against Real Madrid. Well done!
> </offtopic>

<offtopic>
Not really...
Ah, I was wondering why people were screaming in the street...
</offtopic>


>
> I'm glad we have this thread about callback adaptation at this time,
> because Xavier Oliver and I are currently working on that.
>
> First let me give my five cents on the facilities that CLAM offers
> to deal with processing networks.
>
> CLAM ports are typed. For example, we have InPort<Spectrum> and
> InAudioPort (which is a subclass of InPort<Sample> with an extended
> interface). Out and in ports can be connected if they are
> type-compatible (same type or a hierarchical class relation).
>
> A port, both in and out, has two important attributes: size and
> hop-size. Size is the number of tokens (think audio samples) that
> the port sees, and hop-size is the number of tokens it advances in
> each production or consumption. Having a size different from the
> hop-size is very useful for things like overlap-and-add.
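
A rough sketch of the size/hop-size idea, just to fix intuitions; the
names below are illustrative, not the real CLAM classes:

    #include <cstddef>
    #include <vector>

    // A sliding-window reader over a token stream: it "sees" `size`
    // tokens at a time but only advances by `hop` on each consumption.
    template <class Token>
    class SlidingInPort {
        const std::vector<Token>* mStream;
        std::size_t mPos, mSize, mHop;
    public:
        SlidingInPort(std::size_t size, std::size_t hop)
            : mStream(0), mPos(0), mSize(size), mHop(hop) {}
        void ConnectTo(const std::vector<Token>& stream) { mStream = &stream; }
        bool CanConsume() const
            { return mStream && mPos + mSize <= mStream->size(); }
        const Token* Window() const { return &(*mStream)[mPos]; }
        void Consume() { mPos += mHop; }   // consecutive windows overlap
                                           // by (size - hop) tokens
    };

With, say, size = 1024 and hop = 256, each window overlaps the previous
one by 768 samples, which is the overlap-and-add situation mentioned
above.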
>
> How is port memory managed? An out-port owns a circular buffer.
> Multiple in-ports can be connected to a single out-port (but not the
> inverse). The out-port and the connected in-ports have sliding
> windows over this circular buffer. An important feature is that this
> circular buffer always guarantees that any port window maps to
> contiguous memory (the phantom-buffer trick), so the processings can
> run buffer-based algorithms. As you can guess, a lot of restrictions
> have to be managed (e.g. when an out-port cannot produce and when an
> in-port cannot consume).
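
The phantom-buffer trick, as I understand it: the circular buffer keeps
a copy of its first (maxWindow - 1) tokens right after its end, so a
window that wraps around still lives in contiguous memory. A minimal
sketch, not CLAM's actual implementation:

    #include <cstddef>
    #include <vector>

    class PhantomBuffer {
        std::vector<float> mData;
        std::size_t mLogicalSize, mPhantom;
    public:
        PhantomBuffer(std::size_t logicalSize, std::size_t maxWindow)
            : mData(logicalSize + maxWindow - 1)
            , mLogicalSize(logicalSize)
            , mPhantom(maxWindow - 1) {}

        void Write(std::size_t pos, float value) {
            pos %= mLogicalSize;
            mData[pos] = value;
            if (pos < mPhantom)                  // keep the mirror in sync
                mData[pos + mLogicalSize] = value;
        }
        // Any window of up to (mPhantom + 1) tokens starting anywhere in
        // the logical region is contiguous, even when it wraps around.
        const float* Window(std::size_t start) const {
            return &mData[start % mLogicalSize];
        }
    };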
>
> A CLAM network object is a processing container. It offers a
> high-level interface for inserting new processings, connecting them,
> and executing all of them in a row (that is, calling each one's Do()
> method).
>
> Because of the mentioned port flexibility, you can connect
> processings that have different port sizes and hop-sizes (if they
> are type-compatible). For example, this NetworkEditor screenshot [1]
> shows a network with SMSAnalysis and SMSSynthesis. Their audio ports
> differ in size from the ports they connect to (AudioFileReader and
> AudioOut).
>
> To execute the whole network properly, the network object delegates
> to a flow-control object in charge of the firing scheduling (in a
> Strategy-pattern way, so we can change the scheduling strategy).
> Currently, the default strategy is a push-based one.
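
In Strategy-pattern terms I picture it roughly like this (all names
made up, only to illustrate the delegation):

    #include <cstddef>
    #include <vector>

    class Processing {
    public:
        virtual ~Processing() {}
        virtual bool CanDo() const = 0;   // enough tokens to fire?
        virtual void Do() = 0;            // fire once
    };

    class FlowControl {                   // the Strategy interface
    public:
        virtual ~FlowControl() {}
        virtual void DoProcessings(std::vector<Processing*>& ps) = 0;
    };

    class PushFlowControl : public FlowControl {
    public:
        // Push-based pass: fire every processing that currently has
        // data to consume (generators always can).
        void DoProcessings(std::vector<Processing*>& ps) {
            for (std::size_t i = 0; i < ps.size(); ++i)
                if (ps[i]->CanDo())
                    ps[i]->Do();
        }
    };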
>
> The network class has a DoProcessings() method that tries to make a
> single execution of all its inner processings. But given the dynamic
> nature of port sizes, it can happen that you call DoProcessings() on
> the previous [1] network and not all processings can execute,
> because they lack data to consume. The only thing DoProcessings()
> guarantees is that the generators will be executed. But this is
> usually not a big deal: typically the processing thread will call
> DoProcessings() repeatedly, so the data will just "flow" through the
> network.
>
> Then we have the NetworkPlayer classes, which ease the task of
> loading a network from XML and putting it in execution.
>
> The BlockingNetworkPlayer loads a network from an XML definition and
> starts a new thread that will call DoProcessings() in a loop until
> an exit condition. It is assumed that at least one of the network
> processings will be an AudioIn/Out, which will read from or write to
> the sound card in a blocking way.
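
So the blocking player presumably boils down to something like this
(a hypothetical sketch; the blocking audio I/O inside the network is
what paces the loop):

    #include <atomic>
    #include <thread>

    // NetworkLike is anything exposing the DoProcessings() described
    // above.
    template <class NetworkLike>
    class BlockingPlayerSketch {
        NetworkLike& mNetwork;
        std::atomic<bool> mStop;
        std::thread mThread;
    public:
        explicit BlockingPlayerSketch(NetworkLike& net)
            : mNetwork(net), mStop(false) {}
        void Start() {
            mThread = std::thread([this] {
                // The blocking audio I/O inside the network throttles
                // this loop to the sound card, so no explicit sleep is
                // needed.
                while (!mStop)
                    mNetwork.DoProcessings();
            });
        }
        void Stop() {
            mStop = true;
            if (mThread.joinable()) mThread.join();
        }
    };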
>
> Then (at last!) we have the CallbackNetworkPlayers (currently we
> only have a JackNetworkPlayer, but the class family is about to grow
> with VST, PortAudio, LADSPA...).
> To use a network with a callback network player, it must be free of
> blocking I/O processings. Instead, you add a special kind of
> processings: ExternalSources and ExternalSinks. See this screenshot
> [2]. At the moment these external sources and sinks are only for
> audio.
>
> The CallbackNetworkPlayer identifies these special processings
> within the network and changes their port sizes to match the
> callback buffer size. Then it registers a function with the callback
> host (whoever calls the callback). This function does the following:
> 1. Copies the callback in-buffers to the out-port of the
> ExternalSource (which has been initialized with exactly the same
> size as the callback buffer).
> 2. Executes the network's DoProcessings() as many times as necessary
> to fill the ExternalSink in-port.
> 3. Copies the content of the ExternalSink in-port to the output
> buffer of the callback.
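
Sketched in code, the registered callback would then look roughly like
this (everything here is made-up pseudo-API wrapped around the three
steps above):

    #include <cstddef>

    // Hypothetical stand-ins for the real classes described above.
    struct ExternalSource { void PutAudio(const float* buf, std::size_t n); };
    struct ExternalSink   { bool HasEnoughData() const;
                            void GetAudio(float* buf, std::size_t n); };
    struct NetworkRef     { void DoProcessings(); };

    void AudioCallback(const float* hostIn, float* hostOut,
                       std::size_t frames, ExternalSource& source,
                       ExternalSink& sink, NetworkRef& network)
    {
        // 1. Host input buffer -> ExternalSource out-port (whose size
        //    was set to `frames` beforehand).
        source.PutAudio(hostIn, frames);

        // 2. Run the network until the sink's in-port is full.
        while (!sink.HasEnoughData())
            network.DoProcessings();

        // 3. ExternalSink in-port -> host output buffer.
        sink.GetAudio(hostOut, frames);
    }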
>
> Of course, in the JackNetworkPlayer subclass you can have as many
> ExternalSources/Sinks as you wish. In other cases the number is
> limited by the host (e.g. stereo signals for VST).
>
> Unfortunately, all this CallbackNetworkPlayer stuff didn't make it
> into the last release, but I think tomorrow I'll upload a CVS
> snapshot to the web, with some simple examples, so people won't have
> to wait so long. Getting the new build system ready for Windows is
> not very easy, ask Miguel!
>
> Pau
>
>
> [1] http://www.iua.upf.es/mtg/clam/screenshots/NetworkEditor_SMSAndMonitors.png
> [2] http://mtg66.upf.es/clam-network-externals.png
>

Thanks for the explanations.

In what kind of network do you need buffer size adaptation? (I guess
mostly when spectral processing is done, which requires power-of-2
buffer sizes? Or also in networks that do only temporal processing?)

But then you have the following issue: imagine a network which is
driven (in a thread + blocking I/O or callback-based model) by a
buffer size of N, but where some internal nodes in the network use 2N
buffers. Then the 2N nodes get called every 2 callbacks, but they are
supposed to handle 2N "tokens" (frames) within the duration of N to
meet the real-time deadline. Thus an algorithm that would use more
than 50% of the CPU time would not run in this configuration if
everything is computed in the same RT thread.

This is a typical "problem" that the Jamin software (which does
FFT-based processing) also has, and the way they solve it is to use
another, lower-priority thread that runs alongside the RT thread, with
ring-buffer-based data exchange between the 2 threads.
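
Roughly the arrangement I mean, sketched with JACK's lock-free ring
buffer (only the jack_ringbuffer_* calls are a real API here, the rest
is illustrative):

    #include <jack/ringbuffer.h>
    #include <cstddef>
    #include <cstring>
    #include <vector>

    jack_ringbuffer_t* gToWorker;    // RT callback -> worker thread
    jack_ringbuffer_t* gFromWorker;  // worker thread -> RT callback

    void Init(std::size_t capacityBytes)
    {
        gToWorker   = jack_ringbuffer_create(capacityBytes);
        gFromWorker = jack_ringbuffer_create(capacityBytes);
    }

    // RT callback: never blocks and never runs the expensive part.
    void RtCallback(const float* in, float* out, std::size_t frames)
    {
        const std::size_t bytes = frames * sizeof(float);
        jack_ringbuffer_write(gToWorker,
                              reinterpret_cast<const char*>(in), bytes);
        if (jack_ringbuffer_read_space(gFromWorker) >= bytes)
            jack_ringbuffer_read(gFromWorker,
                                 reinterpret_cast<char*>(out), bytes);
        else
            std::memset(out, 0, bytes);  // no processed data yet: the
                                         // latency you pay for decoupling
    }

    // Lower-priority worker: may take longer than one audio period for
    // a given block, as long as it keeps up on average.
    void WorkerLoop(std::size_t blockFrames)
    {
        std::vector<float> block(blockFrames);
        const std::size_t bytes = blockFrames * sizeof(float);
        for (;;) {
            if (jack_ringbuffer_read_space(gToWorker) < bytes)
                continue;                // or sleep/wait on a condition
            jack_ringbuffer_read(gToWorker,
                                 reinterpret_cast<char*>(&block[0]), bytes);
            // ... heavy FFT-based processing on `block` goes here ...
            jack_ringbuffer_write(gFromWorker,
                                  reinterpret_cast<const char*>(&block[0]),
                                  bytes);
        }
    }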

I was always wondering if it would make sense to "abstract" this kind
of design to make it transparent for the user, maybe something to
have in CLAM? (since CLAM is supposed to provide all sorts of
powerful abstractions.... (-: )

Stephane






