There are several alternatives
for handling sample rates
(which can be equipped with physical dimensions).
- Stick to simple lists as data and
pass additional information directly to the functions.
E.g. mixing several signals is easy
since only one sample rate is given
which applies to all signals.
But it leads to the problem
that subsequent function calls must receive the same value.
This cannot be guaranteed and is thus a source of error.
E.g. the mistake
play (44100*hertz) (osciSine (22050*hertz) (440*hertz))
can't be detected.
In this approach the signal data structure is very simple,
the values may be passed to multiple functions,
combinations are done simply by function application,
and a supervisor is not necessary,
but consistency checks can hardly be performed.
This approach is certainly the most basic one,
on which other, safer ones can be built.
It is implemented in Synthesizer.Plain.Signal with numbers without units.
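A minimal sketch of this interface, using unitless numbers (the names follow the text above, but this is a toy model, not the actual Synthesizer.Plain.Signal API):

```haskell
-- Toy model of the plain-list approach: the sample rate is an ordinary
-- number that every caller must pass consistently by hand.

type SampleRate = Double   -- in hertz, without a physical dimension
type Signal = [Double]

-- sine oscillator: needs the rate to convert frequency to a phase increment
osciSine :: SampleRate -> Double -> Signal
osciSine rate freq =
   map (\n -> sin (2 * pi * freq * fromIntegral n / rate)) [0 :: Integer ..]

-- mixing needs no rate at all, which is what makes this approach so simple
mix :: Signal -> Signal -> Signal
mix = zipWith (+)

-- the mistake from the text: computed at 22050 Hz, later rendered at
-- 44100 Hz; nothing in the types can detect this
inconsistent :: Signal
inconsistent = osciSine 22050 440
```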
- Equip signals with sample rate and amplitude.
Processors without input need the sample rate as an explicit parameter.
If there is more than one signal as input,
then there must be additional checks.
The error in
mix (osciSine (22050*hertz) (440*hertz))
(osciSine (44100*hertz) (330*hertz))
can be detected at runtime.
However, the sample rate has to be specified for both input signals,
although it is obvious that both signals have to share it.
In this approach the data structure is more complex,
the values may be passed to multiple functions
but consistency checks can be performed
and a supervisor is still not necessary.
This strategy is implemented in the Synthesizer.Physical.Signal modules.
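A minimal sketch of a rate-tagged signal (representation and names are hypothetical; the real Synthesizer.Physical.Signal modules also carry an amplitude):

```haskell
type SampleRate = Double

-- each signal carries its own sample rate
data Signal = Signal { rate :: SampleRate, samples :: [Double] }

osciSine :: SampleRate -> Double -> Signal
osciSine r freq =
   Signal r
      (map (\n -> sin (2 * pi * freq * fromIntegral n / r)) [0 :: Integer ..])

-- mixing checks consistency at runtime; note that both oscillators
-- nevertheless had to be told the rate explicitly
mix :: Signal -> Signal -> Signal
mix (Signal ra xs) (Signal rb ys)
   | ra == rb  = Signal ra (zipWith (+) xs ys)
   | otherwise = error "mix: sample rates differ"
```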
We would still like to hide the sample rate where possible.
All processors should work as well as possible at any rate.
Here we provide the sample rate to each processor.
The result of a processor is not just a list of samples,
but a function that computes the list of samples
depending on the sample rate.
The sample rate is not fixed until a sound is rendered,
e.g. played or written to a file.
play (44100*hertz) (osciSine (440*hertz))
Returning a function instead of computed data
has the disadvantage that data used in multiple places cannot be shared.
For these situations we need a share function.
Combinator functions similar to ($) are used
to plug sample rate dependent output from one processor
into plain signal parameters.
With this approach, the type signature tells
which signals share the sample rate.
Infinitely many signals can be handled.
Types for time and volume can be chosen quite freely.
Supervision is not necessary.
This strategy is implemented in the Synthesizer.Inference.Reader.Signal modules,
where we hide the sample rate in a Control.Monad.Trans.Reader.
There is also Synthesizer.SampleRateContext.Signal
which exposes the sample rate.
It is more convenient to implement and to call,
but I think it is less safe,
because you can accidentally mix sample rates from different sources.
The same is available for numbers with dimension terms in types.
In most cases this will be the method of choice!
Maybe I'm going to wrap this in a Reader monad/applicative functor.
It also requires that Haddock supports comments in parameters of type constructors.
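The idea can be sketched with the Reader written out as a plain function (a toy model; the actual Synthesizer.Inference.Reader.Signal modules wrap this in Control.Monad.Trans.Reader):

```haskell
type SampleRate = Double

-- a signal is not a list of samples but a function from the
-- sample rate to the list of samples
type Signal = SampleRate -> [Double]

osciSine :: Double -> Signal
osciSine freq rate =
   map (\n -> sin (2 * pi * freq * fromIntegral n / rate)) [0 :: Integer ..]

-- a combinator in the spirit of ($): both operands automatically
-- receive the same rate, so inconsistencies cannot arise
mix :: Signal -> Signal -> Signal
mix x y rate = zipWith (+) (x rate) (y rate)

-- the rate is fixed only at rendering time, as in
--    play (44100*hertz) (osciSine (440*hertz))
play :: SampleRate -> Signal -> [Double]
play rate sig = sig rate
```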
I have tried more sophisticated approaches
in order to handle not only the sample rates but also the amplitudes.
However, I feel that I wanted more than I actually needed.
I no longer maintain these approaches but explain them for completeness.
The most convenient solution for handling sample rates and amplitudes
is certainly an inference system like Haskell's type system.
If some input and output signals of a processor
must have the same sampling rate,
then the concrete rate only needs to be known for one of these signals.
If no participating signal has a fixed rate, this is an error.
With this system, the dependencies between sampling rates become very extensive.
The direction can be from inputs to outputs and vice versa,
not to mention loops.
This approach needs a lot of management,
e.g. a supervisor which runs the network,
but it is very convenient and safe.
However, sometimes you have to fiddle with monads.
Unfortunately it is restricted to finitely many signals,
and the types for time and volume are restricted.
Thus this concept does not scale to physical units expressed in types.
This strategy is implemented in the modules under Synthesizer.Inference.Monad.Signal.
We tried to work around these restrictions
using a function-based approach.
Since the parameters are functions,
sharing cannot take place.
There is no way to propagate the sample rate from one consumer to another.
E.g. if there is
let y = f x;
    z = g x
and it is known that f and g maintain the sample rate,
and the sample rate of z is known, how can we infer the sample rate of y?
This approach was quickly dropped and
exists only for historical reasons in Synthesizer.Inference.Func.Signal.
- There is a very cool approach,
which implements the equation solver of the monadic approach
by lazy evaluation and Peano numbers.
This poses no restriction on types
and works for infinitely many equations as well.
The drawbacks are cumbersome application
(you cannot simply apply a function to a signal,
but must compose functions in an arrow-like way)
and slow solution of the equation system
(quadratic time, although in principle
roughly linear run time would suffice;
it is similar to topological sort).
However, it is about as slow as the explicit solver using monads in Synthesizer.Inference.Monad.Signal.
This strategy is tested in the modules under Synthesizer.Inference.Fix.
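One ingredient of this trick can be illustrated in miniature (a toy illustration only, not the actual solver in Synthesizer.Inference.Fix): if an unconstrained sample rate is represented as an infinite Peano number, a lazy minimum over constraints can emit its result incrementally, so a rate equation becomes solvable as soon as one side is known:

```haskell
data Peano = Zero | Succ Peano

-- an unconstrained sample rate: no finite constraint at all
infinity :: Peano
infinity = Succ infinity

-- lazy minimum: produces a Succ as soon as both arguments offer one,
-- so it terminates even if one argument is infinite
pMin :: Peano -> Peano -> Peano
pMin Zero _ = Zero
pMin (Succ x) y =
   case y of
      Zero    -> Zero
      Succ y' -> Succ (pMin x y')

fromInt :: Integer -> Peano
fromInt 0 = Zero
fromInt n = Succ (fromInt (n - 1))

toInt :: Peano -> Integer
toInt Zero = 0
toInt (Succ n) = 1 + toInt n
```

Here pMin (fromInt 44100) infinity yields the known rate 44100, while a system where every rate stays unconstrained never terminates, which corresponds to the error case that no participating signal has a fixed rate.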