[CLAM] AudioPorts Usage

Pau Arumi parumi at iua.upf.es
Wed Sep 14 11:04:05 PDT 2005


<offtopic>
Stéphane, I hope you are a Lyon supporter and so had an excellent
time with the 3-0 against Real Madrid. Well done!
</offtopic>

I'm glad we have this thread about callback adaptation right now,
because Xavier Oliver and I are currently working on that.

First, let me briefly explain the facilities that CLAM offers for
dealing with processing networks.

CLAM ports are typed. For example, we have InPort<Spectrum> and
InAudioPort (which is a subclass of InPort<Sample> with an extended
interface). An out-port and an in-port can be connected if they are
type-compatible (same type, or types related by class hierarchy).
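
Just to give the flavour (an illustrative sketch I'm improvising here,
not the real CLAM port classes), the typed-connection idea boils down
to something like:

    // Illustrative sketch only -- not the actual CLAM port classes.
    // The point: connection compatibility is checked on the token type,
    // so a Spectrum out-port can never be wired to an Audio in-port.
    #include <vector>

    template <class Token>
    class InPortSketch
    {
    public:
        void Put(const Token & token) { mCurrent = token; }
    private:
        Token mCurrent;
    };

    template <class Token>
    class OutPortSketch
    {
    public:
        // Only in-ports carrying the same token type can be attached.
        void ConnectTo(InPortSketch<Token> & in) { mReceivers.push_back(&in); }
    private:
        std::vector< InPortSketch<Token>* > mReceivers;
    };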

A port, both in and out, has two important attributes: size and
hop-size. Size is the number of tokens (think audio samples) that the
port sees, and hop-size is the number of tokens it advances on each
production or consumption. Having a size different from the hop-size is
very useful for things like overlap-and-add.
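
For instance (again a sketch, not the actual port interface): with
size 1024 and hop-size 256, consecutive windows overlap by 768 samples,
which is exactly what a windowed analysis with 75% overlap needs.

    // Sketch of the size/hop-size idea (not the real CLAM port API).
    // Each iteration "sees" `size` tokens but only `hop` of them are
    // consumed for good, so consecutive windows overlap by size - hop.
    #include <cstddef>
    #include <vector>

    void SlidingWindows(const std::vector<float> & samples,
                        std::size_t size, std::size_t hop)
    {
        for (std::size_t pos = 0; pos + size <= samples.size(); pos += hop)
        {
            const float * window = &samples[pos];
            // process(window, size);  e.g. windowing, FFT, overlap-and-add...
            (void) window;
        }
    }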

How is port memory managed? An out-port owns a circular buffer.
Multiple in-ports can be connected to a single out-port (but not the
other way around). The out-port and the connected in-ports each have a
sliding window over this circular buffer. An important feature is that
the circular buffer guarantees that any port window maps contiguous
memory (the phantom-buffer trick), so processings can run buffer-based
algorithms. As you can guess, a lot of restrictions have to be managed
(e.g. when an out-port cannot produce and when an in-port cannot
consume).
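
The phantom-buffer trick itself can be sketched like this (a toy
version I'm improvising, not the actual CLAM implementation): the
buffer mirrors its first region at the end, so a window that logically
wraps around is still physically contiguous.

    // Toy phantom buffer: not the real CLAM code, just the idea.
    // The buffer allocates size + phantom floats and mirrors the first
    // `phantom` samples at the end, so any window of up to `phantom`
    // samples starting anywhere in [0, size) maps to contiguous memory.
    #include <cstddef>
    #include <vector>

    class PhantomBufferSketch
    {
    public:
        PhantomBufferSketch(std::size_t size, std::size_t maxWindow)
            : mSize(size), mPhantom(maxWindow), mData(size + maxWindow, 0.f) {}

        void Write(std::size_t pos, float sample)
        {
            pos %= mSize;
            mData[pos] = sample;
            if (pos < mPhantom)               // keep the mirrored zone in sync
                mData[pos + mSize] = sample;
        }

        // A window starting at any position is a plain contiguous pointer,
        // even when it logically wraps around the end of the circular buffer.
        const float * Window(std::size_t pos) const
        {
            return &mData[pos % mSize];
        }

    private:
        std::size_t mSize;
        std::size_t mPhantom;
        std::vector<float> mData;
    };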

A CLAM network object is a processing container. It offers a high-level
interface for inserting new processings, connecting them, and executing
all of them in a row (that is, calling each one's Do() method).
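
Typical usage looks more or less like the following (I'm writing this
from memory, so take the exact method, processing and port names as
hypothetical and check the actual headers):

    // Rough usage sketch; method names and port names are from memory /
    // hypothetical -- check the actual CLAM::Network interface.
    CLAM::Network network;
    network.AddProcessing("Reader", "AudioFileReader");
    network.AddProcessing("Player", "AudioOut");
    network.ConnectPorts("Reader.Output", "Player.Input");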

Because of the port flexibility mentioned above, you can connect
processings that have different port sizes and hop-sizes (as long as
they are type-compatible). For example, this NetworkEditor
screenshot [1] shows a network with SMSAnalysis and SMSSynthesis, whose
audio ports have sizes different from those of the ports they connect
to (AudioFileReader and AudioOut).

To execute the whole network properly, the network object delegates to
a flow-control object in charge of the firing scheduling (in a
Strategy-pattern way, so we can change the scheduling strategy).
Currently, the default strategy is a push-based one.
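
In Strategy-pattern terms the idea is roughly this (illustrative names,
not the real CLAM classes):

    // Illustrative sketch of the Strategy idea (not the actual classes).
    class NetworkSketch;   // stand-in for the network container

    class FlowControlSketch
    {
    public:
        virtual ~FlowControlSketch() {}
        // One "round" of scheduling: try to fire the network's
        // processings in whatever order the concrete strategy decides.
        virtual void DoProcessings(NetworkSketch & net) = 0;
    };

    class PushFlowControlSketch : public FlowControlSketch
    {
    public:
        void DoProcessings(NetworkSketch & net)
        {
            // Push-based: start from the generators and fire downstream
            // processings whenever their in-ports have enough tokens.
            (void) net;   // details omitted in this sketch
        }
    };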

The network class has a DoProcessings() method that tries to make a
single execution of all its inner processings. But given the dynamic
nature of port sizes, it can happen that you call DoProcessings() on
the previous [1] network and not all processings execute, because they
lack data to consume. The only thing DoProcessings() guarantees is that
the generators will be executed. But this is usually not a big deal:
typically the processing thread will call DoProcessings() repeatedly,
so the data will just "flow" through the network.

Then we have the NetworkPlayer classes, which ease the task of loading
a network from XML and putting it into execution.

The BlockingNetworkPlayer loads a network from an XML definition and
sets up a new thread that calls DoProcessings() in a loop until an exit
condition is met. It is assumed that at least one of the network
processings will be an AudioIn/AudioOut, which reads from or writes to
the sound card in a blocking way.
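
So the thread body of the BlockingNetworkPlayer boils down to something
like this sketch (assuming a simple stop flag; the real class of course
handles thread setup and exit conditions more carefully):

    // Sketch of the blocking player's processing thread (not the real code).
    // The blocking AudioIn/AudioOut inside the network provides the pacing:
    // its Do() blocks on the sound card, so this loop does not busy-spin.
    void ProcessingThreadBody(CLAM::Network & network, volatile bool & stop)
    {
        while (!stop)
            network.DoProcessings();   // fire whatever has data to consume
    }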

Then (at last!) we have the CallbackNetworkPlayers (currently we only
have a JackNetworkPlayer, but the class family is about to grow with
VST, PortAudio, LADSPA...).
To use a network with a callback network player, it must be free of
blocking I/O processings. Instead, you add a special kind of
processing: ExternalSource and ExternalSink. See this screenshot [2].
At the moment these external sources and sinks exist only for Audio.

The CallbackNetworkPlayer identifies these special processings within
the network and changes their port size to match the callback buffer
size. Then, it registers a function with the callback host (whoever
calls the callback). This function does the following (see the sketch
below):
1. Copies the callback in-buffers to the out-port of the ExternalSource
(which has been initialized with exactly the same size as the callback
buffer).
2. Executes the network's DoProcessings() as many times as necessary to
fill the ExternalSink in-port.
3. Copies the contents of the ExternalSink in-port to the output buffer
of the callback.
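
Putting those three steps in (simplified, hypothetical) code, the
registered callback looks roughly like this; the ExternalSource and
ExternalSink stand-ins and their helper methods are made-up names, only
the structure matters:

    // Simplified sketch of the registered callback. The helper methods on
    // the source/sink stand-ins are hypothetical, not the real API.
    struct ExternalSourceSketch
    {
        void SetExternalBuffer(const float * data, unsigned frames);
    };
    struct ExternalSinkSketch
    {
        bool HasEnoughData(unsigned frames) const;
        void GetExternalBuffer(float * data, unsigned frames);
    };  // declarations only in this sketch

    void AudioCallbackSketch(const float * hostIn, float * hostOut,
                             unsigned frames, CLAM::Network & network,
                             ExternalSourceSketch & source,
                             ExternalSinkSketch & sink)
    {
        // 1. Copy the host input buffer into the ExternalSource out-port,
        //    whose size was set to `frames` when the player was started.
        source.SetExternalBuffer(hostIn, frames);

        // 2. Run the network until the ExternalSink has gathered `frames`
        //    samples. Several DoProcessings() calls may be needed when the
        //    inner processings use port sizes larger than the callback size.
        while (!sink.HasEnoughData(frames))
            network.DoProcessings();

        // 3. Copy the ExternalSink in-port content to the host output buffer.
        sink.GetExternalBuffer(hostOut, frames);
    }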

Of course, in the JackNetworkPlayer subclass you can have as many
ExternalSources/Sinks as you wish. In other cases the number would be
limited by the host (e.g. stereo signals for VST).

Unfortunately, all this CallbackNetworkPlayer stuff didn't get into the
last release, but I think tomorrow I'll upload a CVS snapshot to the
web, with some simple examples, so people won't have to wait so long.
Having the new build system ready for Windows is not very easy; ask
Miguel!

Pau


[1] 
http://www.iua.upf.es/mtg/clam/screenshots/NetworkEditor_SMSAndMonitors.png
[2] http://mtg66.upf.es/clam-network-externals.png





Stéphane Letz wrote:

>
>>
>> Date: Tue, 13 Sep 2005 11:00:57 -0700
>> From: Xavier Amatriain <xavier at create.ucsb.edu>
>> To: Thomas Andrea <thomas.an at hotmail.com>
>> CC: clam at iua.upf.es
>> Subject: Re: [CLAM] AudioPorts Usage
>>
>> I guess you are implementing a vst plugin or something similar, right?
>> If so, you can either write the samples to a CLAM::Audio object on  each
>> call to the Process function or (better) write them to an independent
>> Outport that you can connect to the SpectralAnalysis Inport. I  
>> recommend
>> you read this other thread on the CLAM mailing list:
>>
>> http://iua-mail.upf.es/mailman/public-archives/clam/msg00435.html
>>
>> BTW, in any case if you are writing a vst plugin, the real problem is
>> how to adapt the size of the incoming buffer to the one configured at
>> the Spectral Analysis. We have a couple of working implementations but
>> none of them are clean enough so as to be in the main distribution.  All
>> of them are loosely based on Stephan Letz's very interesting article
>> "Callback Adaptation Techniques" [1]
>>
>>
>> [1]
>> kmt.hku.nl/~pieter/SOFT/CMP/src/portaudio/pa_asio/Callback_adaptation_.pdf
>>
>
>
> I'm surprised to see this old technical report being of use for  
> someone...
>
> The main disadvantage of this "buffer size adaptation" technique is  
> that it has bad consequences in the way CPU is used.
>
> For example, if a consumer using a bigger buffer size is fed by a
> producer using a smaller buffer size, then the consumer is called at
> a lower rate but has to deal with the bigger buffer size each time
> its callback is called... yet within the maximum duration of the
> smaller buffer size. Thus CPU use is not "homogeneously" distributed
> if all computations are done in the same thread. Or more complex
> multi-threaded techniques would have to be used.
>
> I'm interested to see how it is used in CLAM. You are speaking about
> spectral data. Could you explain more?
>
> Thanks
>
> Stephane Letz






