Actually, getting it to sound 'right' is 90% of the
problem. Take the bass lines for instance: you can have
a 100% technically correct line, but . . .
- It's hard to make a synthetic bass sound that
sounds like the real thing, especially a doghouse
(the big, upright bass you see a lot in jazz).
- If you are dead on the beat, you sound flat.
To swing, you have to play around with the time a
bit. (I can't seem to get my current model to swing.)
I think it has something to do with anticipating
beats 2 and 4 by a smidgeon, but probably more to
do with articulation (also hard to do electronically)
- Ad libbing forever is possible--there's a book
by Jerry Coker called 'Elements of the Jazz Language
for the Developing Improvisor', where he analyzes
thousands of jazz tunes and comes up with 14 to 20
'elements' that make up the vocabulary of jazz. BUT,
if you program a tune like this, you get . . . the sound
of a computer pretending to play jazz. Again,
rhythm, articulation, and just plain horse sense
are hard to program . . . even in Perl ;-)
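For what it's worth, the "anticipate beats 2 and 4" idea is easy enough to prototype, even if making it actually swing is the hard part. A minimal sketch in Perl; the tick resolution (480 per quarter note) and the size of the "smidgeon" are arbitrary choices, not tuned values:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical sketch: nudge beats 2 and 4 of a bar of straight
# quarter notes slightly ahead of the grid. Times are MIDI ticks
# at 480 ticks per quarter note.
sub swing_bar {
    my ($anticipate) = @_;                  # ticks to pull beats 2 and 4 ahead
    my @straight = map { $_ * 480 } 0 .. 3; # beats 1..4, dead on the grid
    my @swung;
    for my $i (0 .. $#straight) {
        my $t = $straight[$i];
        $t -= $anticipate if $i == 1 or $i == 3;  # beats 2 and 4 (0-based)
        push @swung, $t;
    }
    return @swung;
}

my @bar = swing_bar(15);   # a "smidgeon" = 15 ticks, purely a guess
print "@bar\n";            # prints: 0 465 960 1425
```

A real version would also want per-note random jitter and velocity shaping, since a constant offset on its own still sounds mechanical.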
$jPxu=q?@jPxu?;$jPxu^=q?Whats?^q?UpDoc?;print$jPxu;
Re^4: Reciprocating to the perl community
by andyf (Pilgrim) on Jun 03, 2004 at 05:51 UTC
I've noticed that sound is a vastly underdeveloped area of Perl. The latency requirements
for synthesis and audio editing make Perl a rather poor choice, but as I've said before, that's changing.
What you are talking about, Ambidangerous, is a sequencer. The various formats you discuss (MusicXML, MIDI, and ABC)
are representations of multi-channel, time-variant signals: the performance parameters and events of the music.
The most essential element of a sequencer is a clock, and as a Jazz musician I'm sure you'll agree that timing is
most important. Serious music software designers like to work in the sub-millisecond range at least, so it doesn't look
good for real-time generative Perl programs. However, don't be disheartened. A large part of what you are talking about
is music theory: that is, combining the relatively simple data structures used to make music in a combinatorial way
to make larger data structures (notes to chords to progressions), and this can be done a priori, well ahead of the
buffering that would connect to a real sequencing engine in a real product. You describe
"..simple, non-proprietary way for folks to share leadsheets
In addition, I'd like the program to be able to autogenerate some elementary accompaniment
(bass, drums, and chords) so that folks who are learning jazz can play along and 'get' the tune."
The sounds you are describing (drums, bass, etc.) are produced by a sampling or synthesis engine elsewhere. All you
want to do, I think, is make patterns. Search the literature around Algorithmic Composition.
A word about formats. MIDI is the Daddy and all lesser formats are insignificant. Not because MIDI is better; MIDI is an
obsolete POS that should have died gracefully 10 years ago. The reason MIDI became, and remains, the One music format to rule them all
is that it was the first, and to date most complete, development in music protocols, and because a lot of expensive hardware
has been built and sold that uses it. It is both a time-variant sequence representation, as a file format, and a transport and physical protocol
for transmitting those events over cables (to specialised synthesis hardware and from input devices).
Having said that, there are multitudes of far superior static representations/formats for music, but even when you
have theoretically infinite timing resolution within them (by variable-length arithmetic, etc.) you end up shoving it
all through MIDI at the end of the day. Sad, but if you're going to be practical, write it at the timing resolution of
MIDI in the first place.
Flyingmoose, there is indeed already Markov-based Algorithmic composition. I've heard some of the very best of it
(you may guess this is close to my area of expertise), and my sincere advice is to avoid it as much as possible. Academic
music is proof of concept; no one actually listens to it, and algorithmic 'music' sucks by definition.
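For anyone curious how little code Markov-based note picking takes (which may be part of why the results sound the way they do), here's a toy first-order sketch. The transition table is invented for illustration; a real one would be trained on a corpus:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Toy first-order Markov melody generator. Each pitch maps to the
# pitches allowed to follow it; this table is made up, not learned.
my %next_of = (
    C => [qw(D E G)],
    D => [qw(C E)],
    E => [qw(D G C)],
    G => [qw(C E)],
);

sub markov_melody {
    my ($start, $length) = @_;
    my @line = ($start);
    while (@line < $length) {
        my $choices = $next_of{ $line[-1] };
        push @line, $choices->[ int rand @$choices ];  # pick a legal successor
    }
    return @line;
}

print join(' ', markov_melody('C', 8)), "\n";
```

Training the table from real tunes is just counting bigrams, but as noted above, the output still tends to sound like a computer pretending to play jazz.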
"I really don't like the CSound syntax, but it's about as powerful as any Debian package I can find for making instruments,
so I may play with that..."
Barry Vercoe is a Giant to me. I must have churned out tens of thousands of lines of Csound (don't be impressed; DSP is cut-and-paste programming for the most part); it's
an old friend. Completely impractical as a composition or performance tool, but the most deadly sound design tool. It's also the best
conceptual introduction for programmers who want to understand DSP frameworks with multiple synchronous levels of execution. If you jump into writing
audio applications without a hint of this, you will end up with an asynchronous and erratic mess.
"It's hard to make a synthetic bass sound that sounds like the real thing."
As a rule it's very hard to make a synthetic anything that sounds like the real anything. The art is all about shortcuts
and mathematical tricks. Brute-force synthesis (just throwing cycles at physical modelling) is considered inelegant. For your bass
sound, Karplus-Strong is the kiddie. You need only a white-noise source and a delay, and you can make VERY realistic string bass sounds.
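A bare-bones Karplus-Strong pluck really is just a noise burst recirculating through a delay line with a little averaging. A sketch in Perl; the sample rate and the 0.996 damping factor are typical defaults, not values tuned for realism:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Minimal Karplus-Strong pluck. The delay-line length sets the pitch;
# the two-point average is a crude lowpass that decays the high
# partials faster, as on a real plucked string.
sub pluck {
    my ($freq, $nsamples, $rate) = @_;
    $rate ||= 44_100;
    my $n = int($rate / $freq);                 # delay length in samples
    my @delay = map { rand(2) - 1 } 1 .. $n;    # initial white-noise burst
    my @out;
    for (1 .. $nsamples) {
        my $s = shift @delay;
        my $next = 0.996 * 0.5 * ($s + $delay[0]);  # average + damping
        push @delay, $next;                          # recirculate
        push @out, $s;
    }
    return @out;
}

my @samples = pluck(110, 1000);   # 110 Hz, roughly a bass's A string
```

To hear it you'd write @samples out as raw PCM or WAV; the point here is only that the whole algorithm fits in a dozen lines.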
"If you are dead on the beat, you sound flat. To swing, you have to play around with the time a bit."
That's what sequencer software missed for years. If you look at something modern like Logic (the sequencer application) you will
see all manner of amazing things like groove templates, swing overlays, extracting tempo and timing from audio and other signals.
The logical editor is to _serious_ musicians what regular expressions are to Perl and programming generally. If you want to study
a sequencer look at Logic (by Emagic).
Open Source music and audio is really coming of age now. Lots of development happening. I noticed recently Audacity nicely
integrating with an old friend of mine, Nyquist (basically Csound in Lisp), which is an awesome combination for intelligent editing, but I digress..
..Perl.. Interesting possible new avenues in music sequencing and AI are things like analysis and recomposition/hybridisation of music at the
large-scale structural level, and Perl would be just perfect for that.
Andy
Actually, I want my program to be rather small. I'm
following the Linux model of small programs that can
interface with each other, share data, and provide services
for one another--as opposed to the Windows idea of the
single program megalith (that crashes all the time and
is packed with bugs).
To get an idea of the type of program I'm
talking about, have a look at PG Music's "Band in
a Box." That, unfortunately, is a $50 to $250
megalith that does everything (music input, sequencing,
printing, etc). Also, it's proprietary, and only
works well on Windows (one of my band members has
a Mac and BIAB has *never* worked for him).
I don't want my program to *produce* sound. I want
my program to be able to write files that someone can
put into their favorite sequencer, midi program, etc.
So there's no need to worry about real time output. I
want to be able to dump files so someone could print
them in whatever program they already have (Finale,
Lilypond, etc.) As for organizing your tune
collection, you should be able to dump it into CSV or
some convenient data format for use in a db or
spreadsheet.
Think of it as a compiler for sound files. The
input is a human readable language, the output is
machine readable sound instructions.
The problem is, you have to write the machine
instructions so that the human articulation, feel,
and timing come through. As you said, I'm essentially
trying to build an AI, so that practically
demands something capable of deep pattern analysis
(e.g., Perl).
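The "compiler" front end could start as small as a chord-symbol parser. A sketch, using a tiny invented subset of chord syntax (not any real leadsheet format such as ABC):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Parse a one-line leadsheet of chord symbols into machine-usable
# (root, quality) pairs. The grammar below is a made-up subset:
# root A-G, optional b/#, quality m7/maj7/7/m or plain major.
my %semitone = (C => 0, D => 2, E => 4, F => 5, G => 7, A => 9, B => 11);

sub parse_leadsheet {
    my ($line) = @_;
    my @chords;
    for my $sym (split ' ', $line) {
        my ($root, $accidental, $quality) =
            $sym =~ /^([A-G])([b#]?)(m7|maj7|7|m|)$/
            or die "can't parse chord symbol '$sym'";
        my $pitch = $semitone{$root};
        $pitch += 1 if $accidental eq '#';
        $pitch -= 1 if $accidental eq 'b';
        push @chords, { root => $pitch % 12, quality => $quality || 'maj' };
    }
    return @chords;
}

my @tune = parse_leadsheet('Dm7 G7 Cmaj7');
printf "%d:%s\n", $_->{root}, $_->{quality} for @tune;
# prints 2:m7, 7:7, 0:maj7 (one per line)
```

The back end would then walk these pairs and emit notes for the accompaniment; the articulation and timing problems above live entirely on that side.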
About synth sound: thanks for the tip on Logic.
Also, strangely, one of the most complex musical
sounds out there, a grand piano, can be modeled
fairly effectively now--I just wonder why upright
bass is lagging so far behind.
A book you might want to take a look at is Virtual Music: Computer Synthesis of Musical Style by David Cope. (I have not read this yet, but he was my 2nd year music theory teacher in college and was very good at it.)