Hok Pwah

for voice, percussion and electronics





"Hok Pwah" is a piece intended for live performance by one or two soloists (voice and percussion) with live electronics. The two main ideas behind the piece are: 1) to extend the role of the solo or duet, giving the soloist(s) an extremely large instrumental and timbral range based on (or controlled by) their instrumental technique, and 2) to explore the possibilities of working with electronically (live) processed text.


Expanding the timbral range involves combining the instruments' acoustic sounds with similarly behaved electronic sounds, which tend to fuse with the former. The computer runs software which coordinates the following: 1) real-time audio signal analysis, 2) signal processing of the soloist(s), 3) "complementary" synthesis, which is meant to mix with the instruments' natural timbres, and 4) real-time sampling (recording and playback). Specialized interfaces incorporating envelope/pitch and spectrum followers are linked to audio signal processors, samplers and highly controllable sound generators, thus providing the players with direct control over the electronics based on their "natural" playing technique. In the case of the singer, spoken and sung text or articulations such as trills, staccato, accents, slurs, etc. are analyzed and recognized by the computer. From this analysis, various control signals are derived, which drive the synthesizers, samplers and signal processors. Beyond their normal musical role, these articulations, sung by the soloist, make up the interface through which the singer may control the electronics. Thus the singer, through what and how she sings, can have subtle (expressive) control of the electronics based on her instrumental technique. The electronics include sound generation and processing gear which is "patched" or programmed to be extremely sensitive to continuous control. These patches are built and tuned around the particular kinds of control signals coming from the players. This approach compares in certain ways to instrument building, and is a vital part of the piece.
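The envelope-follower idea mentioned above can be sketched as follows. This is a minimal illustration, not the piece's actual MAX/MSP patch: a fast-attack, slow-release amplitude tracker turns the player's raw signal into a smooth continuous control signal of the kind that could drive a synthesizer or processor parameter. The attack/release constants are illustrative assumptions.

```python
def envelope_follower(samples, attack=0.9, release=0.999):
    """Track the amplitude envelope of an audio signal.

    A fast attack lets the envelope rise quickly with the input;
    a slow release lets it decay gradually, yielding a smooth
    control signal suitable for driving synthesis parameters.
    """
    env = 0.0
    out = []
    for s in samples:
        level = abs(s)
        if level > env:
            env = attack * env + (1.0 - attack) * level    # rise quickly
        else:
            env = release * env + (1.0 - release) * level  # fall slowly
        out.append(env)
    return out

# A burst followed by silence: the envelope rises with the burst,
# then decays slowly rather than dropping to zero at once.
signal = [0.0] * 10 + [1.0] * 100 + [0.0] * 100
env = envelope_follower(signal)
```

The slow release is what makes such a follower musically useful: the derived control keeps "singing" briefly after the player stops, so the electronics respond to phrasing rather than to every instantaneous fluctuation.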


The text in the piece (the singer's voice) is modified with the aid of special analysis and signal treatment software written in MAX/MSP, a music programming language. Using articulation recognition and rich signal processing/synthesis configurations, various elements of the text (syllables, inflection, etc.) can be treated or "colored" in specific ways. In this way, the text serves not only as a text in the usual sense; in addition, the text serves to control the electronic treatment of itself (for example, the first syllable "Vic" of the word "Victor" could be used to trigger the addition of some reverb to the singer's voice-text, while the second syllable "tor" could trigger the attenuation of the reverb, etc.). The texts for the piece are chosen based on their structural (phonetic) properties and onomatopoetic tendencies, both of which can be accentuated or brought out by the singer and electronics. Embedded in the text are many elements which are brought to the surface during performance.
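The "Victor" example above amounts to a mapping from recognized syllables to control actions. A hypothetical sketch of that mapping is shown below; the syllable names come from the example in the text, while the send levels and the class itself are assumptions for illustration (the actual recognition and routing happen inside the MAX/MSP patch).

```python
class ReverbControl:
    """Map recognized syllables to changes in a reverb send level."""

    def __init__(self):
        self.send_level = 0.0  # 0.0 = dry voice, 1.0 = full reverb send

    def on_syllable(self, syllable):
        # Each recognized syllable triggers a control action;
        # unrecognized syllables leave the level unchanged.
        actions = {
            "Vic": 0.8,  # first syllable: add reverb
            "tor": 0.1,  # second syllable: attenuate it
        }
        if syllable in actions:
            self.send_level = actions[syllable]
        return self.send_level

ctl = ReverbControl()
ctl.on_syllable("Vic")  # reverb comes up on the first syllable
ctl.on_syllable("tor")  # and is attenuated on the second
```

The point of the design is that the score itself becomes the control surface: singing the text in the normal way is also what steers its electronic treatment.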


Many thanks to Miller Puckette, developer of MAX, and David Zicarelli, creator of MSP (based on Puckette's audio extensions to MAX), for their support and enthusiasm in my work.


Z. Settel 1993