{- |
This module gives an overview of the library.

The library is a collection of modules for synthesizing and processing audio signals.
It allows generation of effects, instruments and
even music using the Haskore package.
It can write raw audio data to files,
convert them to common audio formats or
play them using external commands from the Sox package.
If used properly, it can run in real-time.

A signal is modeled by a sequence of sample values.
E.g. @[Double]@ represents a mono signal
and @[(Double, Double)]@ a stereo signal.
Since lists are lazy, a signal can be infinitely long,
and it can even refer to itself, which allows feedback.
(The drawback is that lists are very slow.
For real-time processing you have to use the other signal representations of this library.)
We are using the NumericPrelude type class hierarchy
which is cleaner than the one of Haskell 98
and provides us with a type class for vector spaces and other structures.
This allows us to formulate many algorithms for mono, stereo and multi-channel signals at once.
The drawback is that the vector space type class has multiple type parameters.
This language extension is available in GHC and Hugs and maybe other compilers.
It may hurt you, because type inference sometimes fails,
resulting in strange type errors.
(To be precise: GHC suggests type constraints that are intended to fix the problem,
but if you copy them into your program, they will not fix it,
because the constraints refer to local variables
that are not in scope at the signature.
In this case you have to use 'asTypeOf' or similar self-written helpers.)
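
As a minimal sketch of the lazy-list representation
(plain lists only, not the library's actual signal types;
all names here are made up for illustration):

> -- an infinite sine wave; the frequency is given in cycles per sample
> sine :: Double -> [Double]
> sine freq = map (\n -> sin (2 * pi * freq * fromIntegral n)) [0 :: Integer ..]
>
> -- feedback: the echoed signal refers to itself, which laziness permits;
> -- the delay must be at least one sample
> echo :: Int -> Double -> [Double] -> [Double]
> echo delay gain xs =
>    let ys = zipWith (+) xs (replicate delay 0 ++ map (gain *) ys)
>    in  ys
>
> -- a mono signal turned into a stereo signal
> toStereo :: [Double] -> [(Double, Double)]
> toStereo = map (\x -> (x, x))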

There must also be information about how fast sample values are emitted.
This is specified by the sample rate.
A sample rate of 44100 Hz means that 44100 sample values are emitted per second.
This information must be stored along with the sample values.
This is where things become complicated.

In the very basic modules around "Synthesizer.Plain.Signal",
there is no notion of a sample rate.
You have to base all computations on the number of samples.
This is unintuitive and prevents easy adaptation to different audio devices
(CD, DAT, ...).
But it is very simple and can be reused in the higher level modules.
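
For instance, when using these modules you have to translate durations
and frequencies into sample counts yourself.
A hypothetical helper (not part of the library) could look like this:

> -- convert a duration in seconds into a number of samples,
> -- given the sample rate in Hertz (44100 for CD quality)
> toSampleCount :: Double -> Double -> Int
> toSampleCount sampleRate duration = round (sampleRate * duration)
>
> -- e.g. one second of a 440 Hz tone at 44100 Hz:
> --    take (toSampleCount 44100 1) (sine (440 / 44100))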

Let's continue with the sample rate issue.
Sounds from different sources may differ in their sampling rate
(and also in their amplitude and the unit of the values).
Sampled sounds have 44100 Hz on a compact disc,
and 48000 Hz or 32000 Hz on DAT recorders.
We want to respect different sampling rates and volumes,
we want to let signals in different formats coexist nicely,
and we want to let the user choose when to do which conversion
(called /resampling/)
in order to bring them together.

In fact this view generalizes the concept of note, control, and audio rates,
which is found in some software synthesizers,
like CSound and SuperCollider.
If signals of different rates are fed to a signal processor
in such a software synthesizer,
all signals are converted to the highest rate among the inputs.
Then the processor runs at this rate.
The conversion is usually done by \"constant\" interpolation,
in order to minimize recomputation of internal parameters.
However, the handling of different signal rates must be built into every processor,
and may even reduce computation speed.
Consider an exponential envelope which is computed at control rate
and an amplifier which applies this envelope to an audio signal.
The amplifier has to upsample the exponential envelope before applying it to the signal.
But the generation of the exponential is very simple,
one multiplication per sample,
and the amplifier is very simple, too,
again only one multiplication per sample.
So, is the trouble of resampling really warranted?
Does it actually accelerate computation?
Many other envelope generators, like straight lines, sines, and oscillators,
are comparably simple.
However there are some processors like filters,
which need some recomputation when a control parameter changes.
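
In terms of the plain list representation,
both the exponential envelope and the amplifier
cost one multiplication per sample anyway,
so computing the envelope at the full audio rate is cheap.
A minimal sketch with made-up names (not the library's own functions):

> -- exponential decay: one multiplication per sample
> exponentialEnvelope :: Double -> Double -> [Double]
> exponentialEnvelope decayFactor amplitude = iterate (decayFactor *) amplitude
>
> -- amplifier: again one multiplication per sample
> amplify :: [Double] -> [Double] -> [Double]
> amplify = zipWith (*)
>
> -- e.g.: amplify (exponentialEnvelope 0.9999 1) (sine (440 / 44100))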

Our approach is this:
We try to avoid resampling and compute all signals at the same rate,
as long as no speed loss is to be expected.
If a speed loss is to be expected,
we can interpolate the internal parameters of the processor explicitly.
This way we can also specify an interpolation method.
Alternatively we can move the interpolation into the processor
but let the user specify an interpolation method.
(Currently this can only be done manually for the low-level routines in "Synthesizer.Plain.Signal";
for the high level modules there is "Synthesizer.Dimensional.ControlledProcess".)
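
For illustration, interpolating a control signal that was computed at a lower rate
up to the audio rate could look like this
(hypothetical helpers, working on plain lists):

> -- repeat every control value k times (constant interpolation)
> upsampleConstant :: Int -> [a] -> [a]
> upsampleConstant k = concatMap (replicate k)
>
> -- linear interpolation between successive control values
> upsampleLinear :: Int -> [Double] -> [Double]
> upsampleLinear k xs =
>    concat (zipWith segment xs (drop 1 xs))
>    where
>      segment x0 x1 =
>         [x0 + (x1 - x0) * fromIntegral i / fromIntegral k | i <- [0 .. k - 1]]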

In addition to the treatment of sampling rates,
we also want to separate the amplitude information from the signal.
The separated amplitude serves two purposes (a sketch follows the list):

(1) The amplitude can be equipped with a physical unit,
    whereas this information is omitted for the samples.
    Since I can hardly imagine that it is sensible to mix samples
    with different physical units,
    it would only waste time to check again and again
    whether all values of a sequence carry the same unit.

(2) The amplitude can be a floating point number,
    but the samples can be fixed point numbers.
    This is interesting for hardware digital signal processors
    or other low-level applications.
    With this method we can separate the overall dynamics from the samples.
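
A rough sketch of this separation
(with a hypothetical record type, not the library's actual representation):

> -- a signal stored as an overall amplitude plus normalized samples;
> -- the sample type y may be a fixed point type
> data Amplified y = Amplified {amplitude :: Double, samples :: [y]}
>
> -- amplification only touches the amplitude, the samples stay untouched
> amplifyBy :: Double -> Amplified y -> Amplified y
> amplifyBy k (Amplified a ys) = Amplified (k * a) ys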


Let's elaborate on the physical units now.
With them we can work with values from the real world directly,
and we gain additional safety through unit checks.
I have not fixed the physical dimensions of the signal processors;
e.g. an oscillator may well generate a signal
over the length dimension that maps to forces.
This is useful for intermediate results in physically motivated signal generation,
but it can also be useful on its own for non-audio signal processing.
The processors only check whether the dimensions match,
e.g. an oscillator generating a time-to-voltage signal
must have a frequency in Hertz
and a length-to-force oscillator must have @1/meter@ as frequency.
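
To give an idea of such a dimension check
(with made-up phantom types, not the library's actual machinery):
the frequency parameter must have the reciprocal dimension
of the dimension the signal is defined over.

> data Time = Time
> data Length = Length
>
> newtype Frequency dim = Frequency Double   -- dimension: 1/dim
> newtype Signal dim y = Signal [y]          -- signal over dimension dim
>
> -- the oscillator accepts only a frequency whose dimension
> -- matches the dimension its output signal runs over
> -- (for simplicity the frequency is taken in cycles per sample here)
> oscillator :: Frequency dim -> Signal dim Double
> oscillator (Frequency f) =
>    Signal (map (\n -> sin (2 * pi * f * fromIntegral n)) [0 :: Integer ..])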

Of course I prefer static safety.
E.g. I want to avoid
accidentally calling a function with conflicting parameters.
However, I see no way to both apply the unit checks statically
and check physical quantities that are provided by an application user via I\/O.
Since there seems to be no single solution for all problems,
we have two distinct ones:

(1) Store units in a data structure and check them dynamically.
    This is imported from NumericPrelude's "Number.Physical".
    Units can be fetched from the user.
    The API of signal processing functions is generic enough
    to cover both values without units and values with units.
    Debugging of unit errors is cumbersome.

(2) Store physical dimensions in types
    either using Buckwalter's dimensional package
    or using NumericPrelude's "Number.DimensionTerm".
    Here we use the latter.
    This is most useful when no user interaction is needed.
    If data is fetched from an audio file,
    the dimensions are statically fixed.
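
To contrast the two approaches, a crude model of a dynamically checked quantity
(not NumericPrelude's actual types) might be:

> -- approach (1): the unit travels with the value and is checked at run time
> data Quantity = Quantity {unit :: String, value :: Double}
>
> addQuantity :: Quantity -> Quantity -> Maybe Quantity
> addQuantity (Quantity u x) (Quantity v y) =
>    if u == v
>      then Just (Quantity u (x + y))
>      else Nothing
>
> -- approach (2) encodes the dimension in the type instead,
> -- as in the phantom type sketch above,
> -- so that mismatches are already rejected by the compiler.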

* The various signal storage types are described in "Synthesizer.Storage".

* The various attributes that can be attached to plain signal storage
  are described in "Synthesizer.Dimensional.Overview".

* Various abstractions are described in "Synthesizer.Dimensional.Abstraction.Overview".

* For historical reasons there is a survey on various approaches
  of sample rate abstraction in "Synthesizer.Inference.Overview".

* Some introductory examples are described
  in "Synthesizer.Tutorial".


Packages based on this one:

* @dafx@ package:
  The module "Presentation" contains functions
  for demonstrating synthesizer functions in GHCi
  and "DAFx" contains some examples based on them.
  Just run @make dafx@ in a shell in order to compile the modules
  and start an interactive GHC session with all modules loaded.

* An interface to the music composition library Haskore
  together with various examples
  can be found in the @haskore-synthesizer@ package.

* @synthesizer-alsa@ allows you to receive MIDI events via ALSA
  and convert them to control signals.
  This way you can do interactive signal processing via MIDI input devices.
-}
module Synthesizer.Overview where