








                             PREFACE





     Realizing music by digital computer involves synthesizing
audio signals from discrete points, or samples, that represent
continuous waveforms.  There are several ways of doing this, each
affording a different manner of control.  Direct synthesis
generates waveforms by sampling a stored function representing a
single cycle; additive synthesis generates the many partials of a
complex tone, each with its own loudness envelope; subtractive
synthesis begins with a complex tone and filters it.  Non-linear
synthesis uses frequency modulation and wave shaping to give
simple signals complex characteristics, while sampling and
storage of natural sound allows it to be used at will.

     Since comprehensive moment-by-moment specification of  sound
can  be  tedious,  control  is  gained  in  two ways: 1) from the
instruments in an orchestra, and 2)  from  the  events  within  a
score.  An orchestra is really a computer program that can
produce sound, while a score is a body of data to which that
program can react.  Whether a rise-time characteristic is a fixed
constant in an instrument, or a variable of each note in the
score, depends on how the user wants to control it.
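
     This choice can be sketched in Csound orchestra syntax (a
minimal illustration, not drawn from the text above: the opcodes
linen and oscil, the amplitude and timing values, and the
assumption that function table 1 holds one cycle of a sine wave
are all supplied for the example):

```
        instr 1                    ; rise time fixed in the instrument
k1      linen  10000, .05, p3, .1  ; .05 sec attack is a constant here
a1      oscil  k1, 440, 1
        out    a1
        endin

        instr 2                    ; rise time controlled by the score
k1      linen  10000, p4, p3, .1   ; attack time comes from p-field 4
a1      oscil  k1, 440, 1
        out    a1
        endin
```

A note for instr 1 needs only a start time and duration, while
each note for instr 2 carries its own rise time as a fourth
parameter.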

     The instruments in a Csound orchestra are defined in a  sim-
ple  syntax  that  invokes  complex audio processing routines.  A
score passed to this orchestra contains numerically  coded  pitch
and  control  information,  in  standard  numeric  score  format.
Although many users are content with this  format,  higher  level
score  processing  languages  are  often  convenient.   The  Scot
language uses simple alphanumeric encoding of pitch and time,  in
a fashion that parallels traditional music notation; its transla-
tor produces a standard numeric score.  The  Cscore  program  can
expand  an  existing  numeric  score,  according to user-supplied
algorithms written in the C language.  One powerful  score  stra-
tegy,  then, is to define a kernel score in Scot, translate it to
numeric form, then expand and modify the data using Cscore before
sending it to a Csound orchestra for performance.
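
     The end of such a pipeline might look like the following
minimal sketch (illustrative only; the particular opcode,
amplitude, frequency, and table values are assumptions for the
example):

```
; orchestra: one instrument invoking an oscillator
        instr 1
a1      oscil  10000, 440, 1    ; amplitude, frequency in Hz, table no.
        out    a1
        endin

; score, in standard numeric format (normally a separate file)
f1 0 4096 10 1                  ; f-statement: 4096-point sine via GEN10
i1 0 2                          ; i-statement: instr 1, start 0, dur 2
e                               ; end of score
```

Whether such a score is typed by hand, translated from Scot, or
expanded by Cscore, the orchestra reads the same numeric
statements.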

     The programs making up the Csound system have a long history
of  development,  beginning  with  the Music 4 program written at
Bell Telephone Laboratories in the early 1960s by Max Mathews.
That initiated the stored table concept and much of the terminol-
ogy that has since enabled computer music researchers to communi-
cate.  Valuable additions were made at Princeton by the late God-
frey Winham in Music 4B;   my  own  Music  360  (1968)  was  very
indebted  to  his  work.  With Music 11 (1973) I took a different
tack: the two distinct networks of control and audio signal
processing stemmed from my intensive involvement in the preceding
years in hardware synthesizer concepts and design.  This division
has been retained in Csound.

     Because it is written entirely in C, Csound is easily
installed on any machine running Unix or providing a C compiler.
At MIT it runs on
VAX/DECstations under Ultrix 4.0, on SUNs under SunOS 4.1, and on
the  Macintosh  under  ThinkC 4.0.  With this single language for
audio signal  processing,  users  move  easily  from  machine  to
machine.

     The 1991  version  has  many  new  features.   First,  I  am
indebted  to others for the contribution of the phase vocoder and
FOF synthesis modules.  This release also charts a new  direction
with  the  addition of a spectral data type, holding much promise
for future development.  Most importantly, with the advent of a
new generation of RISC processors an order of magnitude faster
than those on which computer music was born, researchers and
composers now have access to workstations on which realtime
software synthesis with sensing and control is a reality.
This  is perhaps the single most important development for people
working in the field.  This new Csound is designed to  take  max-
imum  advantage  of  realtime  audio processing, and to encourage
interactive experiments in this exciting new domain.
                                                            B.V.

                        February 20, 1991


