score2sig: How to Combine tscore and tsig for "Score Controlled Synthesis"
1 Fundamentals
2 Synthesis and Musical Score Data
2.1 A Simple Synthesis Model
"Score-controlled synthesis" means creating a "fixed" recording of a synthetic sound, the structure of which is controlled by a given musical score.
This discipline is one of the oldest in computer music, and it is the basic paradigm of early programs from "MUSIC I" through "MUSIC V" and of the resulting "Csound".
In the context of bandm software, it is accomplished by combining tscore for generating the control information with tsig for performing the sound synthesis.
(This dichotomy could appear as a continuation of outdated concepts, but we already support a much higher degree of modularity, aiming at full compositionality in the future.)
The theoretical and technical possibilities for score-controlled synthesis are unlimited.
They may seem overwhelming to the musician, who has to impose some order and structure.
For this purpose, several very different approaches have been developed during the history of analog and digital sound synthesis, each of them constructing a symbolic algebra, a mental model.
Only the presence of such an abstracting model allows decisions to be taken and notated, variants to be explored systematically, increasing complexity to be managed, and results to be fixed.
One of these settings is basically an abstraction from traditional instrumental performance and from sound synthesis with voltage-controlled synthesizers.
This model is simple enough to remain manageable in very different production contexts, and to serve as a basis for further complications in subsequent definition steps, e.g. by concatenation:
an overall "reverb" device (or some other post-processing device) can be fed with the output of a whole voice ensemble, while, in parallel, its control parameter settings are treated and controlled by one dedicated "voice" of its own.
Or a circuit which realizes a certain voice takes as its basic sound material the output of some other voice, which runs much faster and plays a multitude of events in order to form one single event of the former ("granular synthesis").
So the processing networks which correspond to these voices can be combined along very different axes. These complexities are manageable only because the basic model on which the user mentally operates is a simple one.
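As an illustration only (plain Python functions over blocks of samples, not the bandm API; all names here are made up): the two combination axes just described are feeding a whole voice ensemble into a common post-processing device (parallel), and chaining processors one after the other (serial).

```python
# Illustration only, NOT the bandm/tsig API: processing networks
# sketched as functions over blocks of samples.  All names are made up.

def voice_a(n):                       # a "voice" producing n samples
    return [0.1] * n

def voice_b(n):                       # another voice
    return [0.2] * n

def mix(*signals):                    # parallel axis: sum a voice ensemble
    return [sum(xs) for xs in zip(*signals)]

def reverb(signal, wet=0.5):          # serial axis: a post-processing stage
    # trivial one-sample "echo" as a stand-in for a real reverb
    out = list(signal)
    for i in range(1, len(out)):
        out[i] += wet * signal[i - 1]
    return out

# combine along both axes: (voice_a parallel voice_b) -> reverb
ensemble = mix(voice_a(4), voice_b(4))
result = reverb(ensemble)
```

The point of the sketch is only that serial and parallel combination compose freely, exactly because each building block obeys the same simple "samples in, samples out" model.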
Let's keep in mind this possibility of arbitrary further complexity when developing such a simple example for score-controlled synthesis in the next sections.
In classical "Csound" synthesis, the parameters of the score are the columns of a table which (a) in each column and each line contains exactly one numeric floating point datum, and which (b) in the "orchestra" setting, i.e. in the synthesizing circuit, controls some totally arbitrary "electric" parameter. The semantics and effects of each column of the score are determined by the inlet of the circuit into which it is fed.
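A minimal sketch of such a purely numeric score table (hypothetical data, not actual Csound syntax handling): each line is one event, each column exactly one floating point datum, and the meaning of a column is decided solely by the inlet it is wired to.

```python
# Hypothetical sketch of a Csound-style numeric score table.
score_text = """
0.0   1.0   440.0   0.8
1.0   0.5   660.0   0.4
1.5   2.0   220.0   0.6
"""

# Parse into rows of plain floats -- the score itself carries no semantics.
events = [[float(cell) for cell in line.split()]
          for line in score_text.strip().splitlines()]

# The *wiring* assigns meaning: here we decide that column 0 is the start
# time, column 1 the duration, column 2 feeds the frequency inlet and
# column 3 the amplitude inlet of the circuit.
for start, dur, freq, amp in events:
    print(f"t={start}s, {dur}s long: freq inlet <- {freq}, amp inlet <- {amp}")
```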
In contrast, when looking at musical scores in "classic western notation" / "CWN", we have parameters which are defined by a long tradition of instrumental performance. But of course with this type of score, too, these "electric" parameters are ultimately required to control the synthesizing algorithm.
This gap must be bridged.
From the viewpoint of creative aesthetics, this can be done by any arbitrarily defined translation process.
From the viewpoint of economic desires (i.e. the wish to replace expensive musicians by cheap computers), this translation must try to follow the traditional meaning of the parameters as closely as possible.
From the viewpoint of empirical research, there is e.g. the "Rubato" project, which tries to generate performance parameters out of data resulting from automated music analysis.
Roughly speaking, a score is the source of parameters from four different categories: constitutive, contextual, analysed, and dedicated parameters.
Please note that these categories are only roughly defined and quite dubious from a scientific standpoint. In many cases the borders cannot be drawn cleanly!
E.g., a "legato" parameter is a constitutive articulation parameter, but it is not realizable without contextual knowledge.
And the different metric weights of notes: are they simply contextual, or already analysed?
And content mark-up like "H_ _H" or "CH_ _CH" (see Schönberg and Berg): when fed directly into some synthesizer input as an electric gate-like signal, can we call it dedicated?
So this categorization can only serve as a rough orientation. But it is useful for describing our overall strategy when combining the score level and the sound synthesis circuits.
For synthesis, let's take a comparatively simple example, "wave table synthesis":
 +-----------------------------------------------------+
 | sequencer: play score for one (1) voice             |
 |                                                     |
 |    f            gate            a-d-s-r             |
 +----|--------------|----------------|----------------+
      |              |                |
      V              +--------+-------+
   +-----+                    |     (gate and a-d-s-r feed
   | phi |----+------+        |      each of the three ADSRs)
   +-----+    |      |        |
              V      V        V
  +---------------+  +---------------+  +-----------------+
  | table look up |  | table look up |  | noise generator |
  +---------------+  +---------------+  +-----------------+
          |                  |                    |
          V                  V                    V
      +------+           +------+             +------+
      |  AM  |<--ADSR    |  AM  |<--ADSR      |  AM  |<--ADSR
      +------+           +------+             +------+
          |                  |                    |
          +------------------+--------------------+
                             |
                             V
                          +-----+
                          | SUM |----->
                          +-----+
This synthesis works as follows: the frequency value f drives the phase generator "phi". With this phase, the two table look-ups each read one stored wave form. Every branch, including the noise generator, is amplitude-modulated ("AM") by its own ADSR envelope, which is started by the gate signal and shaped by the a-d-s-r parameters. Finally the outputs of all branches are summed.
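This circuit can be sketched in a few lines of code. The following is an illustration only, with freely chosen sample rate, table size, and envelope shapes; it is not the tsig API.

```python
import math
import random

SR = 8000                      # sample rate, arbitrarily small for the sketch
TABLE = 256                    # wave table length, also freely chosen

def make_table(f):
    """Sample one period of the function f into a wave table."""
    return [f(2 * math.pi * i / TABLE) for i in range(TABLE)]

sine = make_table(math.sin)                      # wave form 1
soft = make_table(lambda x: math.sin(x) ** 3)    # wave form 2

def adsr(n, a, d, s, r):
    """Piecewise linear a-d-s-r envelope over n samples; a, d, r are
    fractions of the total duration, s is the sustain level."""
    env = []
    for i in range(n):
        t = i / n
        if t < a:
            env.append(t / a)                          # attack: 0 -> 1
        elif t < a + d:
            env.append(1 - (1 - s) * (t - a) / d)      # decay: 1 -> s
        elif t < 1 - r:
            env.append(s)                              # sustain
        else:
            env.append(s * (1 - t) / r)                # release: s -> 0
    return env

def play_event(freq, dur):
    """One event: phase generator 'phi', two table look-ups and a noise
    generator, each amplitude-modulated by its own ADSR, then summed."""
    n = int(dur * SR)
    e1 = adsr(n, 0.10, 0.20, 0.6, 0.2)
    e2 = adsr(n, 0.30, 0.10, 0.5, 0.3)
    e3 = adsr(n, 0.01, 0.10, 0.1, 0.5)   # noise branch: percussive attack
    out, phase = [], 0.0
    for i in range(n):
        idx = int(phase * TABLE) % TABLE             # "phi" + table look up
        out.append(sine[idx] * e1[i]                 # branch 1: AM
                   + soft[idx] * e2[i]               # branch 2: AM
                   + random.uniform(-1, 1) * e3[i])  # branch 3: noise, AM
        phase = (phase + freq / SR) % 1.0
    return out

samples = play_event(440.0, 0.05)      # 400 samples of one short event
```

In a real setting, the per-event values for freq, the gate, and the a-d-s-r parameters would come from the sequencer reading the score, as in the diagram's top box.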
Please note that often the following variant is more appropriate:
Control paths:

                 +--------+                 +--------+
                 | ADSR 1 |                 | ADSR 2 |
                 +--------+                 +--------+
                   |    |                       |
                   |    V                       V
                   |  +--------+          controls AM 3
                   |  | 1.0-in |
                   |  +--------+
                   V       |
            controls AM 1  V
                    controls AM 2

Signal paths:

 +---------------+    +------+    +-----+    +------+
 | table look up |--->| AM 1 |--->|     |    |      |
 +---------------+    +------+    | SUM |--->| AM 3 |----->
 +---------------+    +------+    |     |    |      |
 | table look up |--->| AM 2 |--->+-----+    +------+
 +---------------+    +------+
Here the first ADSR/AM combination controls the MIXTURE of the two wave forms
explicitly, and the second ADSR/AM controls the overall loudness of the result.
"Mathematically" both variants seem identical, but "ergonomically",
w.r.t. the way the parameters are calculated, there is a big difference!
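This claim can be checked numerically on a few samples: if the per-branch envelopes of the first variant are chosen as the products of the mixture envelope and the loudness envelope of the second variant, both variants deliver the same samples (up to rounding). The envelope values below are freely invented for the check.

```python
# Variant 2 uses a mixture envelope m and an overall loudness envelope g;
# variant 1 reaches the same result with per-branch envelopes
# e1 = m*g and e2 = (1-m)*g.  Sample data freely invented.
w1 = [1.0, 0.5, -0.5, -1.0]     # output of table look up 1
w2 = [0.0, 1.0,  0.0, -1.0]     # output of table look up 2
m  = [1.0, 0.75, 0.5, 0.25]     # mixture envelope (variant 2, first ADSR)
g  = [0.2, 0.8,  0.8, 0.2]      # loudness envelope (variant 2, second ADSR)

# variant 2: explicit mixture via AM and "1.0-in", then overall AM
v2 = [(w1[i] * m[i] + w2[i] * (1 - m[i])) * g[i] for i in range(4)]

# variant 1: two independent per-branch ADSR/AM combinations
e1 = [m[i] * g[i] for i in range(4)]
e2 = [(1 - m[i]) * g[i] for i in range(4)]
v1 = [w1[i] * e1[i] + w2[i] * e2[i] for i in range(4)]

same = all(abs(a - b) < 1e-12 for a, b in zip(v1, v2))
```

Ergonomically, however, the user of variant 2 writes two independently meaningful curves ("how much of wave form 1?" and "how loud?"), whereas the user of variant 1 must compute their products by hand.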
1 Of course square curves etc. must be filtered at the Nyquist frequency in order not to cause distortions.
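One common way to realize this band-limiting (a sketch of the general technique, not necessarily the method used by tsig; sample rate and table size are assumed) is to construct the wave table additively, keeping only the partials that lie below the Nyquist frequency:

```python
import math

SR = 44100       # assumed sample rate
SIZE = 1024      # assumed wave table length

def bandlimited_square(f0, sr=SR, size=SIZE):
    """Build a square-like wave table additively from the odd sine
    partials 1, 3, 5, ..., keeping only partials whose frequency
    k * f0 lies below the Nyquist frequency sr/2."""
    table = [0.0] * size
    k = 1
    while k * f0 < sr / 2:
        for i in range(size):
            table[i] += math.sin(2 * math.pi * k * i / size) / k
        k += 2                         # square wave: odd harmonics only
    return table, (k - 1) // 2         # table, number of partials used

tbl, n_partials = bandlimited_square(440.0)
```

Playing this table at 440 Hz cannot alias, because every partial it contains was placed below the Nyquist frequency at construction time; for lower pitches a table with more partials would be built.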
made 2016-07-01_17h16 by lepper on linux-q699.site, produced with eu.bandm.metatools.d2d and XSLT