Hi There
Well, yes: the underlying idea is to pass sequences of numbers to the operating system; from there they are passed to the sound card driver and finally across the system bus to the sound card/sound chipset, which converts the numbers into voltages. Programs that make sound generate this stream of samples themselves using various algorithms and pass it to the OS.
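To make that concrete, here is a minimal sketch (in Python, purely for illustration) of how such a program typically works: an oscillator fills fixed-size buffers of samples, one block at a time. The sample rate, buffer size, and function names are my own assumptions, not any particular API; in a real program each block would be handed to the OS audio interface rather than collected in a list.

```python
import math

SAMPLE_RATE = 44100   # samples per second (CD rate, assumed for the example)
BUFFER_SIZE = 512     # frames per block, a typical callback size

def fill_buffer(phase, freq=440.0, amp=0.5):
    """Fill one block of float samples with a sine tone.

    Returns the block and the updated phase so the next block
    continues the waveform without a click."""
    buf = []
    step = 2 * math.pi * freq / SAMPLE_RATE
    for _ in range(BUFFER_SIZE):
        buf.append(amp * math.sin(phase))
        phase += step
    return buf, phase % (2 * math.pi)

# Drive the generator: in a real program each block would be
# passed to the sound API; here we just collect a few blocks.
phase = 0.0
blocks = []
for _ in range(4):
    block, phase = fill_buffer(phase)
    blocks.append(block)
```

That block-at-a-time structure is the core of what ChucK, Pd, and the rest are doing under the hood, whatever synthesis algorithm fills the buffer.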
Operating systems provide various interfaces (APIs) for passing buffers of samples between the userspace program and the sound card driver -- on Linux this is usually ALSA or OSS. JACK sits on top of these. It's more or less the same situation on other platforms. For cross-platform work there are libraries that wrap multiple APIs behind a single programming interface, such as RtAudio or PortAudio (www.portaudio.com)...
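And to answer the shell question directly: yes, you can get at this from the command line by generating raw samples and piping them into ALSA's command-line player. A rough sketch (assuming 16-bit signed little-endian mono at 44100 Hz; the frequency and amplitude are arbitrary choices):

```python
import math
import struct
import sys

RATE = 44100   # sample rate in Hz (assumed)
FREQ = 440.0   # tone frequency in Hz
SECS = 1.0     # duration of the tone

# One second of a sine wave as 16-bit signed little-endian samples.
data = b"".join(
    struct.pack("<h", int(32000 * math.sin(2 * math.pi * FREQ * n / RATE)))
    for n in range(int(RATE * SECS))
)
sys.stdout.buffer.write(data)
```

Saved as, say, sine.py, you could then do something like `python sine.py | aplay -f S16_LE -r 44100 -c 1` -- aplay takes raw samples on stdin and pushes them through ALSA to the driver, which is exactly the pipeline described above.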
HTH
Ross.
----- Original Message -----
From: AlgoMantra
To: ChucK Users Mailing List ; livecode_at_toplap.org
Sent: Saturday, May 03, 2008 3:43 PM
Subject: [livecode] audio source in linux
Apologies if I sound bloody ignorant in this:
I gather that most linux-based audio programming systems would be
playing with some basic interface provided by the system. I wonder
for instance if some basic sounds can be produced using the shell
or some kind of very rudimentary program that instructs the sound
card to, say, produce a square or sine wave. As it is a stream of numbers,
I wonder what the underlying process is that converts it to sound?
If I'm not wrong, both Pure Data and ChucK would be using this
same underlying system in Linux. Can anyone elucidate with
an example? I'm just trying to look beneath the surface here...
------- -.-
1/f ))) --.
------- ...
http://www.algomantra.com
Received on Sat May 03 2008 - 06:36:20 BST