Previous Section | Next Section | Table of Contents | Index | Title Page

Nyquist Libraries

Nyquist is always growing with new functions. Functions that are most fundamental are added to the core language. These functions are automatically loaded when you start Nyquist, and they are documented in the preceding chapters. Other functions seem less central and are implemented as lisp files that you can load. Many of these functions and files are not included in the main distribution of Nyquist. Instead, they are extensions (see Section Extension Manager). Extensions are just like any other library except you must first install them with the Extension Manager in the NyquistIDE, and each extension is in a separate folder within the lib folder.

For example, to get statistics functions, load "statistics", but to get the phase vocoder test functions, first install the pvoc extension, and then load "pvoc/phasevocoder.sal". In many cases, functions within an extension will “autoload,” e.g. after you install the labels extension, the first time you call read-labels or write-labels, labels.sal will be loaded automatically to define these functions, as if the function had been loaded all along. Extensions also update the function call quick reference and completion feature in the NyquistIDE.

Many libraries are documented in the following sections, but check the Extension Manager for possible new additions. Each extension also has its own documentation, which you can find through the Extension Manager.


Statistics

The file statistics.lsp defines a class and functions to compute simple statistics, histograms, correlation, and some other tests. See the source code for complete details.


Plotting

The NyquistIDE has a simple facility to plot signals. For more advanced plotting, you can use gnuplot.sal (load "gnuplot.sal") to generate plots for gnuplot, a separate (and free) program. See the source for details.

Labeling Audio Events, Marking Audio Times, Displaying Marked Audio Times

The labels.sal program can convert lists to label files and label files to lists. Label files can be loaded along with audio in Audacity to show metadata. See the source for details.

Linear Regression

See the regression extension and its regression.sal for simple linear regression functions.

Vector Math, Linear Algebra

See vectors.lsp and load "vectors" for a simple implementation of vector arithmetic and other vector functions. These “vectors” are implemented as lists, but there are functions to convert to and from arrays.

JSON Input and Output

JSON is widely used in web technology, scientific computing and many other areas. See json.sal and load "json" for a simple implementation of JSON input and output. On input, JSON dictionaries are represented as association lists and JSON arrays are represented as XLisp arrays. Data can be retrieved from dictionaries using json-field. If you are writing in Lisp, note that you can call (sal-load "json") to load and compile json.sal so that you can call its functions from Lisp.

For output, you can either write a structure consisting of nested arrays and dictionaries (association lists), or you can call a sequence of functions to, for example, begin writing an array, write each element of the array and finish writing the array. This way, you can write JSON as the data is computed rather than building a monolithic structure in Lisp and then writing it.

Note that Lisp is already able to read and write structured data, and if everyone used Lisp and SAL, there might not be any need for JSON. Nyquist scores are an example of data you can save to files in a text format and read back in. So JSON is not recommended except for interoperation with other non-Lisp systems.

Piano Synthesizer

The piano synthesizer (library name is pianosyn.lsp) generates realistic piano tones using a multiple wavetable implementation by Zheng (Geoffrey) Hua and Jim Beauchamp, University of Illinois. Detailed acknowledgements print when you load the file. Further information and example code can be found in nyquist/lib/piano/piano.htm. There are several useful functions in this library. These functions auto-load the pianosyn.lsp file if it is not already loaded:

piano-note(duration, step, dynamic) [SAL]
(piano-note duration step dynamic) [LISP]
Synthesizes a piano tone. Duration is the duration to the point of key release, after which there is a rapid decay. Step is the pitch in half steps, and dynamic is approximately equivalent to a MIDI key velocity parameter. Use a value near 100 for a loud sound and near 10 for a soft sound.
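For example, because these functions auto-load the library, you can play notes directly (the durations and dynamics here are just illustrative; c4 and g4 are Nyquist's built-in pitch constants):

    ; a two-second middle C at a loud dynamic,
    ; then a soft G above it
    play piano-note(2.0, c4, 100)
    play piano-note(2.0, g4, 20)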

piano-note-2(step, dynamic) [SAL]
(piano-note-2 step dynamic) [LISP]
Similar to piano-note except the duration is nominally 1.0.

piano-midi(midi-file-name) [SAL]
(piano-midi midi-file-name) [LISP]
Use the piano synthesizer to play a MIDI file. The file name (a string) is given by midi-file-name.

piano-midi2file(midi-file-name, sound-file-name) [SAL]
(piano-midi2file midi-file-name sound-file-name) [LISP]
Use the piano synthesizer to play a MIDI file. The MIDI file is given by midi-file-name and the (monophonic) result is written to the file named sound-file-name.

Dynamics Compression

These functions in the compress extension implement a compressor originally intended for noisy speech audio, but usable in a variety of situations. There are actually two compressors that can be used in series. The first, compress, is a fairly standard one: it detects signal level with an RMS detector and uses table-lookup to determine how much gain to place on the original signal at that point. One bit of cleverness here is that the RMS envelope is “followed” or enveloped using snd-follow, which does look-ahead to anticipate peaks before they happen.

The other interesting feature is compress-map, which builds a map in terms of compression and expansion. For speech, the recommended procedure is to figure out the noise floor on the signal you are compressing (for example, look at the signal where the speaker is not talking). Use a compression map that leaves the noise alone and boosts signals that are well above the noise floor. Alas, the compress-map function is not written in these terms, so some head-scratching is involved, but the results are quite good.

The second compressor is called agc, and it implements automatic gain control that keeps peaks at or below 1.0. By combining compress and agc, you can process poorly recorded speech for playback on low-quality speakers in noisy environments. The compress function modulates the short-term gain to minimize the total dynamic range, keeping the speech at a generally loud level, and the agc function rides the long-term gain to set the overall level without clipping.

compress-map(compress-ratio, compress-threshold, expand-ratio, expand-threshold, limit: limit, transition: transition, verbose: verbose) [SAL]
(compress-map compress-ratio compress-threshold expand-ratio expand-threshold :limit limit :transition transition :verbose verbose) [LISP]
Construct a map for the compress function. The map consists of two parts: a compression part and an expansion part. The intended use is to compress everything above compress-threshold by compress-ratio, and to downward expand everything below expand-threshold by expand-ratio. Thresholds are in dB and ratios are dB-per-dB. 0dB corresponds to a peak amplitude of 1.0 or rms amplitude of 0.7. If the input goes above 0dB, the output can optionally be limited by setting limit: (a keyword parameter) to T. This effectively changes the compression ratio to infinity at 0dB. If limit: is nil (the default), then the compression-ratio continues to apply above 0dB.

Another keyword parameter, transition:, sets the amount below the thresholds (in dB) that a smooth transition starts. The default is 0, meaning that there is no smooth transition. The smooth transition is a 2nd-order polynomial that matches the slopes of the straight-line compression curve and interpolates between them.

If verbose is true (the default), the map is printed, showing, for each input dB value below zero, the corresponding gain (in dB) of the output. Only regions where the map is changing are printed; at lower values, the dB gain is constant.

It is assumed that expand-threshold <= compress-threshold <= 0. The gain is unity at 0dB, so if compression-ratio > 1, the gain will be greater than unity below 0dB.

The result returned by this function is a sound for use in the shape function. The sound maps input dB to gain. Time 1.0 corresponds to 0dB, time 0.0 corresponds to -100 dB, and time 2.0 corresponds to +100 dB, so the sound effectively has a 100 Hz “sample rate.” The sound gives gain in dB.
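As a minimal sketch (the ratios and thresholds below are illustrative, not recommendations):

    ; 2:1 compression above -12 dB, 2:1 downward expansion below
    ; -40 dB, limiting above 0 dB; verbose: t prints the map
    set map = compress-map(2.0, -12.0, 2.0, -40.0,
                           limit: t, verbose: t)
    exec s-plot(map)  ; the map is a SOUND, so it can be plotted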

db-average(input, min: mindb) [SAL]
(db-average input :min mindb) [LISP]
Compute the average amplitude of input in dB. The result is a sound at a rate of about 40Hz based on RMS of input such that 0 (dB) corresponds to a sinusoid with a peak amplitude of 1. This is the same dB estimate that is used in the compress function. If mindb is specified and non-nil, the result samples below this value will be replaced by mindb. For example, if you plot the db-average curve of an instrument with a range from -3 dB down to -30 dB, but silences exist at -90 dB, then most of the interesting values will be squeezed into the top 1/3 of a plot. Instead, if you specify min to be -30, then the interesting range will span the entire plot (and silences will show up at -30 dB, the value of min).

compress(input, map, rise-time, fall-time [, lookahead]) [SAL]
(compress input map rise-time fall-time [lookahead]) [LISP]
Compress input using map, a compression curve probably generated by compress-map (see above). Adjustments in gain have the given rise-time and fall-time. Lookahead tells how far ahead to look at the signal, and is rise-time by default.

agc(input, range, rise-time, fall-time [, lookahead]) [SAL]
(agc input range rise-time fall-time [lookahead]) [LISP]
An automatic gain control applied to input. The maximum gain in dB is range. Peaks are attenuated to 1.0, and gain is controlled with the given rise-time and fall-time. The look-ahead time default is rise-time.
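Putting the two stages together for speech might look like this (speech stands for a monophonic SOUND you have read or computed; all parameter values are illustrative):

    set map = compress-map(1.5, -15.0, 2.0, -50.0, limit: t)
    ; even out the short-term level, then ride the long-term
    ; gain with up to 20 dB of boost, avoiding clipping
    set comp = compress(speech, map, 0.02, 0.1)
    play agc(comp, 20.0, 0.02, 0.1)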

Clipping Softener

The clipsoften extension was written to improve the quality of poorly recorded speech. In recordings of speech, extreme clipping generates harsh high-frequency noise. This can sound particularly bad on small speakers that emphasize high frequencies. The problem can be ameliorated by low-pass filtering regions where clipping occurs. The effect is to dull the harsh clipping. Intelligibility is not affected by much, and the result can be much more pleasant on the ears. Clipping is detected simply by looking for large signal values. Assuming 8-bit recording, this level is set to 126/127.

The function works by cross-fading between the normal signal and a filtered signal as opposed to changing filter coefficients.

soften-clipping(snd, cutoff) [SAL]
(soften-clipping snd cutoff) [LISP]
Filter the loud regions of a signal where clipping is likely to have generated additional high frequencies. The input signal is snd and cutoff is the filter cutoff frequency (4 kHz is recommended for speech).

Graphical Equalizer

The library (load "grapheq.lsp") works with the NyquistIDE's Equalizer window (see Section Equalizer Editor), but this library can be used directly in Nyquist programs for multi-band equalizers. This implementation uses Nyquist's eq-band function to split the incoming signal into different frequency bands. Bands are spaced geometrically, e.g. each band could be one octave, meaning that each successive band has twice the bandwidth. An interesting possibility is using computed control functions to make the equalization change over time.

nband-range(input, gains, lowf, highf) [SAL]
(nband-range input gains lowf highf) [LISP]
A graphical equalizer applied to input (a SOUND). The gain controls and number of bands is given by gains, an ARRAY of SOUNDs (in other words, a Nyquist multichannel SOUND). Any sound in the array may be replaced by a FLONUM. The bands are geometrically equally spaced from the lowest frequency lowf to the highest frequency highf (both are FLONUMs).

nband(input, gains) [SAL]
(nband input gains) [LISP]
A graphical equalizer, identical to nband-range with a range of 20 to 20,000 Hz.
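A sketch of a fixed four-band equalizer (s stands for a monophonic SOUND; whether the gains are linear scale factors or dB values is not specified in this section, so treat the numbers below as an assumption and check grapheq.lsp):

    ; boost the lowest band, cut the highest;
    ; four bands spanning 100 Hz to 8 kHz
    play nband-range(s, vector(2.0, 1.0, 1.0, 0.5), 100.0, 8000.0)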

Sound Reversal

The reverse extension implements functions to play sounds in reverse.

s-reverse(snd) [SAL]
(s-reverse snd) [LISP]
Reverses snd (a SOUND). Sound must be shorter than *max-reverse-samples*, which is currently initialized to 25 million samples. Reversal allocates about 4 bytes per sample. This function uses XLISP in the inner sample loop, so do not be surprised if it calls the garbage collector a lot and runs slowly. The result starts at the starting time given by the current environment (not necessarily the starting time of snd). If snd has multiple channels, a multiple channel, reversed sound is returned.

s-read-reverse(filename, time-offset: offset, srate: sr, dur: dur, nchans: chans, format: format, mode: mode, bits: n, swap: flag) [SAL]
(s-read-reverse filename :time-offset offset :srate sr :dur dur :nchans chans :format format :mode mode :bits n :swap flag) [LISP]
This function is identical to s-read (see Section Sound File Input and Output), except it reads the indicated samples in reverse. Like s-reverse (see above), it uses XLISP in the inner loop, so it is slow. Unlike s-reverse, s-read-reverse uses a fixed amount of memory that is independent of how many samples are computed. Multiple channels are handled.

Time Delay Functions

The time-delay-fns.lsp library implements chorus, phaser, and flange effects.

phaser(snd) [SAL]
(phaser snd) [LISP]
A phaser effect applied to snd (a SOUND). There are no parameters, but feel free to modify the source code of this one-liner.

flange(snd) [SAL]
(flange snd) [LISP]
A flange effect applied to snd. To vary the rate and other parameters, see the source code.

stereo-chorus(snd, delay: delay, depth: depth, rate1: rate1, rate2: rate2 saturation: saturation) [SAL]
(stereo-chorus snd :delay delay :depth depth :rate1 rate1 :rate2 rate2 :saturation saturation) [LISP]
A chorus effect applied to snd, a SOUND (monophonic). The output is a stereo sound with out-of-phase chorus effects applied separately for the left and right channels. See the chorus function below for a description of the optional parameters. The rate1 and rate2 parameters are rate parameters for the left and right channels.

chorus(snd, delay: delay, depth: depth, rate: rate, saturation: saturation, phase: phase) [SAL]
(chorus snd :delay delay :depth depth :rate rate :saturation saturation :phase phase) [LISP]
A chorus effect applied to snd. All parameters may be arrays as usual. The chorus is implemented as a variable delay modulated by a sinusoid shifted by phase degrees (a FLONUM) oscillating at rate Hz (a FLONUM). The sinusoid is scaled by depth (a FLONUM). The delayed signal is mixed with the original, and saturation (a FLONUM) gives the fraction of the delayed signal (from 0 to 1) in the mix. Default values are delay 0.03, depth 0.003, rate 0.3, saturation 1.0, and phase 0.0 (degrees).
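For example (s stands for a monophonic SOUND; the parameter values are illustrative):

    play stereo-chorus(s)                       ; stereo chorus with defaults
    play chorus(s, rate: 0.2, saturation: 0.5)  ; slower, subtler mono chorus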

Multiple Band Effects

The bandfx extension implements several effects based on multiple frequency bands. The idea is to separate a signal into different frequency bands, apply a slightly different effect to each band, and sum the effected bands back together to form the result. This file includes its own set of examples. After loading the file, try f2(), f3(), f4(), and f5() to hear them. Further discussion and examples can be found in nyquist/lib/bandfx/bandfx.lsp.

There is much room for expansion and experimentation with this library. Other effects might include distortion in certain bands (for example, there are commercial effects that add distortion to low frequencies to enhance the sound of the bass), separating bands into different channels for stereo or multichannel effects, adding frequency-dependent reverb, and performing dynamic compression, limiting, or noise gate functions on each band. There are also opportunities for cross-synthesis: using the content of bands extracted from one signal to modify the bands of another. The simplest of these would be to apply amplitude envelopes of one sound to another. Please contact us if you are interested in working on this library.

apply-banded-delay(s, lowp, highp, num-bands, lowd, highd, fb, wet) [SAL]
(apply-banded-delay s lowp highp num-bands lowd highd fb wet) [LISP]
Separates input SOUND s into FIXNUM num-bands bands from a low frequency of lowp to a high frequency of highp (these are FLONUMS that specify steps, not Hz), and applies a delay to each band. The delay for the lowest band is given by the FLONUM lowd (in seconds) and the delay for the highest band is given by the FLONUM highd. The delays for other bands are linearly interpolated between these values. Each delay has feedback gain controlled by FLONUM fb. The delayed bands are scaled by FLONUM wet, and the original sound is scaled by 1 - wet. All are summed to form the result, a SOUND.
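For example (s is a monophonic SOUND; note that the band edges are given in steps, so hz-to-step converts from Hz; the other values are illustrative):

    ; 8 bands from about 200 Hz to 4 kHz, band delays interpolated
    ; from 0.1 s to 0.4 s, 30% feedback, half wet / half dry
    play apply-banded-delay(s, hz-to-step(200.0), hz-to-step(4000.0), 8,
                            0.1, 0.4, 0.3, 0.5)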

apply-banded-bass-boost(s, lowp, highp, num-bands, num-boost, gain) [SAL]
(apply-banded-bass-boost s lowp highp num-bands num-boost gain) [LISP]
Applies a boost to low frequencies. Separates input SOUND s into FIXNUM num-bands bands from a low frequency of lowp to a high frequency of highp (these are FLONUMS that specify steps, not Hz), and scales the lowest num-boost (a FIXNUM) bands by gain, a FLONUM. The bands are summed to form the result, a SOUND.

apply-banded-treble-boost(s, lowp, highp, num-bands, num-boost, gain) [SAL]
(apply-banded-treble-boost s lowp highp num-bands num-boost gain) [LISP]
Applies a boost to high frequencies. Separates input SOUND s into FIXNUM num-bands bands from a low frequency of lowp to a high frequency of highp (these are FLONUMS that specify steps, not Hz), and scales the highest num-boost (a FIXNUM) bands by gain, a FLONUM. The bands are summed to form the result, a SOUND.

Granular Synthesis

Some granular synthesis functions are implemented in the gran extension. There are many variations and control schemes one could adopt for granular synthesis, so it is impossible to create a single universal granular synthesis function. One of the advantages of Nyquist is the integration of control and synthesis functions, and users are encouraged to build their own granular synthesis functions incorporating their own control schemes. The gran.lsp file includes many comments and is intended to be a useful starting point. Another possibility is to construct a score with an event for each grain. Estimate a few hundred bytes per score event (obviously, size depends on the number of parameters) and avoid using all of your computer's memory.

sf-granulate(filename, grain-dur, grain-dev, ioi, ioi-dev, pitch-dev, [file-start, file-end]) [SAL]
(sf-granulate filename grain-dur grain-dev ioi ioi-dev pitch-dev [file-start file-end]) [LISP]
Granular synthesis using a sound file named filename as the source for grains. Grains are extracted from a sound file named by filename by stepping through the file in equal increments. Each grain duration is the sum of grain-dur and a random number from 0 to grain-dev. Grains are then multiplied by a raised cosine smoothing window and resampled at a ratio between 1.0 and pitch-dev. If pitch-dev is greater than one, grains are stretched and the pitch (if any) goes down. If pitch-dev is less than one, grains are shortened and the pitch goes up. Grains are then output with an inter-onset interval between successive grains (which may overlap) determined by the sum of ioi and a random number from 0 to ioi-dev. The duration of the resulting sound is determined by the stretch factor (not by the sound file). The number of grains is the total sound duration (determined by the stretch factor) divided by the mean inter-onset interval, which is ioi + ioi-dev * 0.5. The grains are taken from equally-spaced starting points in filename, and depending on grain size and number, the grains may or may not overlap. The output duration will simply be the sum of the inter-onset intervals and the duration of the last grain. If ioi-dev is non-zero, the output duration will vary, but the expected value of the duration is the stretch factor. To achieve a rich granular synthesis effect, it is often a good idea to sum four or more copies of sf-granulate together. (See the gran-test function in gran.lsp.)
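Following the suggestion above, a sketch that sums four granulation streams (the file name is hypothetical and the parameter values are illustrative):

    define function gran4(file)
      return sim(sf-granulate(file, 0.05, 0.02, 0.03, 0.01, 1.1),
                 sf-granulate(file, 0.05, 0.02, 0.03, 0.01, 1.1),
                 sf-granulate(file, 0.05, 0.02, 0.03, 0.01, 1.1),
                 sf-granulate(file, 0.05, 0.02, 0.03, 0.01, 1.1))

    ; stretch by 10 to produce about ten seconds of texture
    play gran4("gong.wav") ~ 10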

Chowning FM Voices

John Chowning developed voice synthesis methods using FM to simulate resonances for his 1981 composition "Phone." He later recreated the synthesis algorithms in Max, and Jorge Sastre ported these to SAL. See fm-voices-chowning extension and documentation for more details.

Atonal Melody Composition

Jorge Sastre contributed the atonal-melodies extension that generates atonal melodies. You can find links to an example score and audio file in the code.

MIDI Utilities

The midishow.lsp library has functions that can print the contents of MIDI files. It is intended as a debugging aid.

midi-show-file(file-name) [SAL]
(midi-show-file file-name) [LISP]
Print the contents of a MIDI file to the console.

midi-show(the-seq [, out-file]) [SAL]
(midi-show the-seq [out-file]) [LISP]
Print the contents of the sequence the-seq to the file out-file (the default is the console).


Reverberation

The reverb.lsp library implements artificial reverberation.

reverb(snd, time) [SAL]
(reverb snd time) [LISP]
Artificial reverberation applied to snd with a decay time of time.

DTMF Encoding

The dtmf extension implements DTMF encoding. DTMF is the “touch tone” code used by telephones.

dtmf-tone(key, len, space) [SAL]
(dtmf-tone key len space) [LISP]
Generate a single DTMF tone. The key parameter is either a digit (a FIXNUM from 0 through 9) or the atom STAR or POUND. The duration of the tone is given by len (a FLONUM) and the tone is followed by silence of duration space (a FLONUM).

speed-dial(thelist) [SAL]
(speed-dial thelist) [LISP]
Generates a sequence of DTMF tones using the keys in thelist (a LIST of keys as described above under dtmf-tone). The duration of each tone is 0.2 seconds, and the space between tones is 0.1 second. Use stretch to change the “dialing” speed.
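For example:

    play dtmf-tone(5, 0.2, 0.1)             ; a single "5" key
    play speed-dial({5 5 5 0 1 2 3})        ; a sequence of keys
    play speed-dial({5 5 5 0 1 2 3}) ~ 0.5  ; the same sequence, twice as fast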

Dolby Surround(R), Stereo and Spatialization Effects

The spatial extension implements various functions for stereo manipulation and spatialization. It also includes some functions for Dolby Pro-Logic panning, which encodes left, right, center, and surround channels into stereo. The stereo signal can then be played through a Dolby decoder to drive a surround speaker array. This library has a somewhat simplified encoder, so you should certainly test the output. Consider using a high-end encoder for critical work. There are a number of functions in spatial.lsp for testing. See the source code and extension documentation for more information.

stereoize(snd) [SAL]
(stereoize snd) [LISP]
Convert a mono sound, snd, to stereo. Four bands of equalization and some delay are used to create a stereo effect.

widen(snd, amt) [SAL]
(widen snd amt) [LISP]
Artificially widen the stereo field in snd, a two-channel sound. The amount of widening is amt, which varies from 0 (snd is unchanged) to 1 (maximum widening). The amt can be a SOUND or a number.

span(snd, amt) [SAL]
(span snd amt) [LISP]
Pan the virtual center channel of a stereo sound, snd, by amt, where 0 pans all the way to the left, while 1 pans all the way to the right. The amt can be a SOUND or a number.

swapchannels(snd) [SAL]
(swapchannels snd) [LISP]
Swap left and right channels in snd, a stereo sound.

prologic(l, c, r, s) [SAL]
(prologic l c r s) [LISP]
Encode four monaural SOUNDs representing the front-left, front-center, front-right, and rear channels, respectively. The return value is a stereo sound, which is a Dolby-encoded mix of the four input sounds.

pl-left(snd) [SAL]
(pl-left snd) [LISP]
Produce a Dolby-encoded (stereo) signal with snd, a SOUND, encoded as the front left channel.

pl-center(snd) [SAL]
(pl-center snd) [LISP]
Produce a Dolby-encoded (stereo) signal with snd, a SOUND, encoded as the front center channel.

pl-right(snd) [SAL]
(pl-right snd) [LISP]
Produce a Dolby-encoded (stereo) signal with snd, a SOUND, encoded as the front right channel.

pl-rear(snd) [SAL]
(pl-rear snd) [LISP]
Produce a Dolby-encoded (stereo) signal with snd, a SOUND, encoded as the rear, or surround, channel.

pl-pan2d(snd, x, y) [SAL]
(pl-pan2d snd x y) [LISP]
Comparable to Nyquist's existing pan function, pl-pan2d provides not only left-to-right panning, but front-to-back panning as well. The function accepts three parameters: snd is the (monophonic) input SOUND, x is a left-to-right position, and y is a front-to-back position. Both position parameters may be numbers or SOUNDs. An x value of 0 means left, and 1 means right. Intermediate values map linearly between these extremes. Similarly, a y value of 0 causes the sound to play entirely through the front speakers(s), while 1 causes it to play entirely through the rear. Intermediate values map linearly. Note that, although there are usually two rear speakers in Pro-Logic systems, they are both driven by the same signal. Therefore any sound that is panned totally to the rear will be played over both rear speakers. For example, it is not possible to play a sound exclusively through the rear left speaker.
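For example (s is a monophonic SOUND):

    play pl-pan2d(s, 0.0, 0.0)  ; front left
    play pl-pan2d(s, 0.5, 1.0)  ; entirely through the rear speakers
    ; sweep left to right across the front; ramp() rises from 0 to 1
    ; over the default duration, and positions may be SOUNDs
    play pl-pan2d(s, ramp(), 0.0)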

pl-position(snd, x, y, config) [SAL]
(pl-position snd x y config) [LISP]
The position function builds upon speaker panning to allow more abstract placement of sounds. Like pl-pan2d, it accepts a (monaural) input sound as well as left-to-right (x) and front-to-back (y) coordinates, which may be FLONUMs or SOUNDs. A fourth parameter config specifies the distance from listeners to the speakers (in meters). Current settings assume this to be constant for all speakers, but this assumption can be changed easily (see comments in the code for more detail). There are several important differences between pl-position and pl-pan2d. First, pl-position uses a Cartesian coordinate system that allows x and y coordinates outside of the range (0, 1). This model assumes a listener position of (0,0). Each speaker has a predefined position as well. The input sound's position, relative to the listener, is given by the vector (x,y).

pl-doppler(snd, r) [SAL]
(pl-doppler snd r) [LISP]
Pitch-shift moving sounds according to the equation: fr = f0((c+vr)/c), where fr is the output frequency, f0 is the emitted (source) frequency, c is the speed of sound (assumed to be 344.31 m/s), and vr is the speed at which the emitter approaches the receiver. (vr is the first derivative of parameter r, the distance from the listener in meters.)

Drum Machine

The drum machine software in the plight extension deserves further explanation. There is documentation associated with the extension and available in the NyquistIDE Extension Manager. To use the software, install the extension and load the code by evaluating:

load "plight/drum.lsp"
exec load-props-file(strcat(*plight-drum-path*, "beats.props"))
exec create-drum-patches()
exec create-patterns()

Drum sounds and patterns are specified in the beats.props file (or whatever name you give to load-props-file). This file contains two types of specifications. First, there are sound file specifications. Sound files are located by a line of the form:

set sound-directory = "kit/"

This gives the name of the sound file directory, relative to the beats.props file. Then, for each sound file, there should be a line of the form:

track.2.5 = big-tom-5.wav

This says that on track 2, a velocity value of 5 means to play the sound file big-tom-5.wav. (Tracks and velocity values are described below.) The beats.props file contains specifications for all the sound files in nyquist/lib/plight/plight/kit using 8 tracks. If you make your own specifications file, tracks should be numbered consecutively from 1, and velocities should be in the range of 1 to 9.

The second set of specifications is of beat patterns. A beat pattern is given by a line in the following form:

beats.5 = 2--32--43-4-5---

The number after beats is just a pattern number. Each pattern is given a unique number. After the equal sign, the digits and dashes are velocity values where a dash means “no sound.” Beat patterns should be numbered consecutively from 1.

Once data is loaded, there are several functions to access drum patterns and create drum sounds (described below). The nyquist/lib/plight/plight/drums.lsp file contains an example function plight-drum-example to play some drums. There is also the file nyquist/lib/plight/beats.props to serve as an example of how to specify sound files and beat patterns.

drum(tracknum, patternnum, bpm) [SAL]
(drum tracknum patternnum bpm) [LISP]
Create a sound by playing drums sounds associated with track tracknum (a FIXNUM) using pattern patternnum. The tempo is given by bpm in beats per minute. Normally patterns are a sequence of sixteenth notes, so the tempo is in sixteenth notes per minute. For example, if patternnum is 10, then use the pattern specified for beats.10. If the third character of this pattern is 3 and tracknum is 5, then on the third beat, play the soundfile assigned to track.5.3. This function returns a SOUND.

drum-loop(snd, duration, numtimes) [SAL]
(drum-loop snd duration numtimes) [LISP]
Repeat the sound given by snd numtimes times. The repetitions occur at a time offset of duration, regardless of the actual duration of snd. A SOUND is returned.

length-of-beat(bpm) [SAL]
(length-of-beat bpm) [LISP]
Given a tempo of bpm, return the duration of the beat in seconds. Note that this software has no real notion of beat. A “beat” is just the duration of each character in the beat pattern strings. This function returns a FLONUM.
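Putting these functions together (this sketch assumes pattern beats.5 exists and is 16 characters long, as in the sixteenth-note patterns described above):

    set bpm = 240.0
    set one-bar = drum(2, 5, bpm)  ; track 2, pattern beats.5
    ; loop the 16-beat pattern four times
    play drum-loop(one-bar, 16 * length-of-beat(bpm), 4)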
