Back in the old days (well, I actually like the fact that I have some :-) when I used an Amstrad CPC 6128 with a Z80A CPU and wrote tiny programs in BASIC, there were also some folks trying to get computers to make autonomous decisions.
The task was a mini chess variant on a 4×4 board. Each "player" got 4 pawns, and the goal was to be the last player standing by capturing all of the opponent's pawns. The program was written in Locomotive BASIC. Combined with the tremendous speed of the CPU (4 MHz), that made for a long task, and so my computer fought night after night against a small program called "little brother", which never got the opening move, because the learning program might have given up on an opening that it remembered as unsuccessful from earlier games. However, the little BASIC program on the Amstrad did "learn" to make decisions, and that's what AI stands (vaguely) for.
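The "remember unsuccessful openings" trick can be sketched in a few lines of Perl (not the original Locomotive BASIC, which I no longer have). Everything here, `%lost_with`, `choose_opening`, and the move names, is my own illustrative reconstruction, not the original program:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Openings that led to a loss in earlier games (the "memory").
my %lost_with;

# Pick any opening that has not yet been ruled out; return undef
# ("give up") once every known opening has failed before.
sub choose_opening {
    my @candidates = grep { !$lost_with{$_} } @_;
    return undef unless @candidates;
    return $candidates[ int rand @candidates ];
}

# "Learn" from a lost game by ruling that opening out.
sub record_loss { $lost_with{ $_[0] } = 1 }

my @openings = ('a2-a3', 'b2-b3', 'c2-c3', 'd2-d3');
record_loss('a2-a3');
my $move = choose_opening(@openings);   # never 'a2-a3' again
```

Not intelligence by any stretch, but it is enough to change the program's behaviour between sessions, which is what made the nightly matches interesting.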
All the samples that have come up so far enable a "machine" to make a decision based upon predefined conditions and mostly predefined reactions. BUT that's not what AI is meant to become. It should rather enable an "automaton" (software, in this case) to make autonomous decisions and to choose autonomous reactions, even "first timers", in response to conditions that must include ones that are not known and could not have been foreseen, whether by its developers or by the software itself.
As stated above: "There is no authoritative answer for this question, as it really depends on what languages you like programming in." I would agree with this.
Of course you can program such software in Perl, since speed is currently not the biggest concern in AI development: we still do not know of any algorithms that could enable software (or even logic pressed into silicon) to make autonomous decisions. I think it's a bad idea to assume that replicating the human brain by building supernode computers could suffice, unless one could ensure that their way of interacting is "correct", i.e. a faithful copy of nature's.
To get to the point: when we write software, our aim is to enable it to gather information, to process that information, and to choose a reaction from a pool of predefined possibilities. That's the same as what we do ourselves, and we can do it well in Perl. And since we can do this, we can surely also enable the logic to redefine its own pool of possible reactions. The question is whether one would want to trust one's very own life to such a logic processor.
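In Perl the "pool of predefined possibilities" can simply be a hash of code references, which makes the last step, letting the program redefine its own pool, almost trivial. A minimal sketch; the names (`%reactions`, `react`, `add_reaction`) are illustrative assumptions, not an established API:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# The pool of predefined reactions: condition => code reference.
my %reactions = (
    greeting => sub { return 'hello' },
    unknown  => sub { return 'pardon?' },
);

# Gather a condition and choose a reaction from the pool,
# falling back to 'unknown' for anything unforeseen.
sub react {
    my ($condition) = @_;
    my $handler = $reactions{$condition} || $reactions{unknown};
    return $handler->();
}

# Because the pool is ordinary data, the program can extend
# or redefine it at run time, even for itself.
sub add_reaction {
    my ($condition, $code) = @_;
    $reactions{$condition} = $code;
}

add_reaction( farewell => sub { return 'goodbye' } );
```

Of course, *who* decides which new reactions get added is exactly the part we don't know how to automate, and that gap is the whole point.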
TIMTOWTDI, but I prefer intelligence that is by itself able and willing to take responsibility for its actions.
Have a nice day
The decision is left to your taste