PerlMonks
(jcwren) RE: RE: why i may have to leave perl...

by jcwren (Prior)
on Aug 12, 2000 at 07:06 UTC


in reply to RE: why i may have to leave perl...
in thread why i may have to leave perl...

It's hard to imagine any role that Perl couldn't fill, with the proper adjustment.

(Okay, I'm aware I'm taking this one statement a little out of context. But this is a pet peeve of mine for people who see Perl as a hammer, and everything else as nails.)

Actually, it's easy to imagine dozens of roles that Perl is completely unsuitable for. It will never be found in automotive embedded systems (engine/transmission computers), life support applications, microwave ovens, cellular phones, or any other application where the hardware cost is driven by volume, or where certain compliance requirements must be met.

There are a couple of factors here: One is that the lowest hardware cost is almost always achieved by using a minimal hardware configuration. Unless you're willing to use a 386 or better processor, and a corresponding environment that will support a real O/S (read 4+ MB memory, 2+ MB ROM), this is rarely coincident with lowest cost. Perl is an application that requires an underlying operating system to be effective. People may port it to the Palm Pilot, but one has to ask oneself "Is this *really* useful?". Most of the time, such exercises are demonstrations of "coolness", not practicality. This is evidenced by the fact that Linux and its cousins still aren't practical on the hardware used by most PDAs. And this is also why MS has a market for WinCE (or whatever they call it now), and Palms are running Palm O/S.

Considering that minimum hardware cost for a 386, 4MB of RAM, 2MB ROM, supporting circuitry, yada yada yada is still about $50 in large volumes, are you going to pay that surcharge so your microwave oven can have a "Linux/Perl Inside" sticker? I sure won't. And neither will the normal consumer.

Perl will never be suitable for writing kernel mode device drivers. I'm sure someone will do it, just to show that it can be done, but it's not the correct model for driver development. Remember that Perl is still written in 'C'. There's a good reason for this. It's because 'C' is still the right tool for writing as close to assembly as possible, and maintaining portability. Possibly this might change when Perl-to-EXE actually works, but I seriously doubt it.

Another factor is that Perl will (in all probability) never be certified for military and life support programming, as Ada is. There are a number of reasons for this: the requirements for strong type checking of variables and arguments, the (current) lack of ANSI certification, the absence of compile-time evaluation (how many times have you not realized you misspelled a subroutine name until it coughs a hairball at runtime?), and the lack of practical automated CASE tools.
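To make the compile-time point concrete, here's a minimal sketch (the subroutine names are invented for illustration): a misspelled call compiles cleanly under `perl -c`, `strict`, and `warnings`, and only blows up when execution actually reaches it.

```perl
#!/usr/bin/perl
# Sketch: Perl defers subroutine resolution to run time, so a typo in a
# call survives the compile phase. All names here are invented.
use strict;
use warnings;

sub compute_total {
    my $total = 0;
    $total += $_ for @_;
    return $total;
}

print compute_total(1, 2, 3), "\n";   # prints 6

# This typo'd call compiles fine; it dies only if this branch runs, with
# "Undefined subroutine &main::compute_totla called".
if (@ARGV && $ARGV[0] eq 'trigger-typo') {
    print compute_totla(1, 2, 3), "\n";
}
```

An Ada or C compiler rejects the equivalent typo at compile time; in Perl you need tests (or something along the lines of B::Lint) to catch it before it ships.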

I've said this before, I'll say it again: Perl is a hell of a language. It's very full featured, it has features that few other languages have, it's well accepted, etc. But, remember this: One of the things that makes Perl so useful is CPAN. Without the people that have written these modules and made them publicly available (GPL, Copyleft, whatever), Perl would be seriously lacking. How'd you like to lay out $100 for CGI.pm, under the typical licensing schemes for most "DOS/Windows" libraries? I'm pretty sure it wouldn't be anywhere near as popular as it is now. One of the major reasons (I believe) that Perl is so widely accepted is because of the underlying distribution model.

On a slightly different note, one little thing that bothers me about Perl is the lack of structures such as 'C' supports. I know that structures can be emulated, but the ability that 'C' has of mapping a block of memory to a structure, accessing the individual elements with ease, then stepping a pointer to the next block is missing. This makes it much more difficult to implement access to memory mapped devices, and other objects that can't be conveniently remapped into Perl idioms. I'm sure someone will immediately correct me on this (probably tilly...), but that's my take on it as a long time 'C'/assembly/Forth programmer.
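For what it's worth, `pack`/`unpack` get part of the way there. This is a rough sketch, not a rebuttal: the "register block" layout and field names below are invented, and it operates on a copied buffer, not on live memory-mapped hardware.

```perl
#!/usr/bin/perl
# Sketch: emulating a C struct layout with pack/unpack. Hypothetical
# 8-byte record: a 16-bit status, a 16-bit count, and a 32-bit address,
# all little-endian. The layout is made up for illustration.
use strict;
use warnings;

my $TEMPLATE = 'v v V';   # uint16, uint16, uint32 (little-endian)

# Build a raw memory image, as a device might present it.
my $raw = pack($TEMPLATE, 0x8001, 42, 0xDEADBEEF);

# Map the block back onto named fields, struct-style.
my ($status, $count, $addr) = unpack($TEMPLATE, $raw);
printf "status=%04X count=%d addr=%08X\n", $status, $count, $addr;

# "Stepping a pointer": walk an array of such records with substr.
my $blocks = $raw x 3;          # three consecutive records
my $size   = length $raw;       # 8 bytes each
for my $i (0 .. 2) {
    my ($s) = unpack('v', substr($blocks, $i * $size, $size));
    # each record's status field, read in place
}
```

The `substr` loop is about as close as Perl's idioms come to "stepping a pointer to the next block"; for live device registers you'd still drop down to C or XS.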

Good programmers know when to use the right tools for the right job. These decisions are, of course, mediated by management (always run away when management endorses a methodology. Nothing like an edict "all future code will be object-oriented" from people who don't even know how to reboot their own computers), development time & time to market, maintainability, and half a dozen other factors. Know when to hack Perl, when to sling 'C', and when to hand-craft assembly. Just because you know a way to make it *work* in Perl doesn't mean that Perl is the answer.

--Chris


RE (3): why i may have to leave perl...
by tilly (Archbishop) on Aug 12, 2000 at 11:16 UTC
    My home node says my beliefs about criticism, and this rant just went into the list of links I keep there. Perl is not and never should be all things to all people. Indeed anything that sincerely tries will fail. And some of your points I have said before and will again, for instance in Re: FS OS sysprog I pointed out that Perl is bad for kernel programming.

    I do disagree on some things though. Linux is not actually a bad fit for a lot of the embedded market. Sure, at the very bottom end it doesn't make sense. But betting against it is IMO betting against Moore, and Moore has a few years yet to run. Plus as soon as you want an embedded device that is able to network using standard protocols, you won't wind up with requirements much below Linux' and guess which is less work?

    As for memory mapped devices, it is possible in theory but the same hiding of details that makes Perl so easy to develop in makes it hard. OTOH take a look at IPC::Shareable and you see that Perl can be taught to be careful with what it does after all.

    There is just one point I would like to wind up with. And that is that there is a very good reason that Perl doesn't work like a lot of other languages. Most people who think about maintainability start with the idea that they are going to keep a large project under control. In saying that they have already lost sight of the fact that the act of keeping something short and sweet makes it easier to maintain. Perl is a master of that school of design.

    In order to be that, Perl goes out of its way to be expressive, emulates a way of thinking that people find natural, concentrates on simplicity of interfaces to a relatively small number of important concepts, and in general finds reason to break most of the classical CS rules. But in the end it works! The classical CS people are right that you wouldn't want to maintain many Perl programs with 30K lines. OTOH what takes 30K in C will take an order of magnitude less in Perl - with better debugging information yet!

    Not to mention fewer buffer overflows...

      I was just going back through a few old nodes, re-reading the good stuff, and came across something here. (This is from the node that this node is a reply to, by tilly.)
      I do disagree on some things though. Linux is not actually a bad fit for a lot of the embedded market. Sure, at the very bottom end it doesn't make sense. But betting against it is IMO betting against Moore, and Moore has a few years yet to run. Plus as soon as you want an embedded device that is able to network using standard protocols, you won't wind up with requirements much below Linux' and guess which is less work?
      While Moore does say we get faster systems on a fairly regular basis, there is more to an embedded system than just the CPU speed and architecture.

      One of the issues in an embedded system is the number of connections. PCBs (Printed Circuit Boards) are costed by several factors, among them the material used (typically FR-4), the number of square inches per board (board area), the number of vias and pads (pads are for integrated circuit pins; vias are used when a trace has to make a transition from one side of the board to the other), and the number of components.

      While Moore may double the horsepower, if the number of connections and interconnects aren't reduced, the product price can't go down past a certain point. A typical rule of thumb for board costing is about $0.06USD per pad/via in medium quantity runs (5K to 50K). After you've added up all the pad and via costs, board space, packaging requirements, and cost all the required ICs, you may find that no matter how much you want to run Linux on a board, you can't get the underlying fixed cost down low enough to make a product viable.
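      To put rough numbers on that rule of thumb, here's a back-of-the-envelope costing sketch. Every part and pin count below is invented for illustration; only the $0.06 per pad/via figure comes from the discussion above.

```perl
#!/usr/bin/perl
# Back-of-the-envelope board costing using the $0.06/pad-or-via rule of
# thumb (medium quantity runs). All part and pin counts are invented.
use strict;
use warnings;

my $PAD_VIA_COST = 0.06;     # USD each, 5K to 50K quantities

# Hypothetical minimal "Linux-capable" board: CPU, DRAM, FLASH, glue.
my %pins = (
    cpu_386ex  => 132,
    dram_4mb   => 2 * 50,    # two DRAM parts
    flash_2mb  => 48,
    glue_logic => 4 * 20,    # assorted logic and drivers
);
my $vias = 250;              # layer transitions, guessed

my $pads = 0;
$pads += $_ for values %pins;
my $cost = ($pads + $vias) * $PAD_VIA_COST;
printf "%d pads + %d vias -> \$%.2f in pad/via cost alone\n",
       $pads, $vias, $cost;
```

      Even before the ICs themselves, board area, or assembly are counted, the connection count alone puts a floor under the unit price.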

      I am, in fact, in the middle of such a quandary. I'm looking at doing the software side of a project involving vehicle location and performance dynamics, and my hardware friend (and I) want to wind up with (as a result of our development for the customer) a platform that we can use on other projects. We can go buy an off-the-shelf PC/104 system, but they're way overpriced (I'll address that at the end). If we develop our own hardware, we own all the technology (schematics, layout, etc.) for it, and can produce additional boards at a pure manufacturing cost (as opposed to an aggregated design/mfg/support cost).

      Sadly, embedded Linux is not looking viable. The two major players, BlueCat and etLinux, still have memory footprints that make them impractical for a system with 2MB of DRAM and 1MB of FLASH (actually, it'll fit in the FLASH fine, but the uncompressed image kills us).

      Whereas I can buy (for a lot more money) LynxOS, OS/9, vxWorks, etc., which will easily run within those memory constraints. These companies have different licensing terms (vxWorks is $50K and isn't in the running; OS/9 is a lot more reasonable) that affect the total project pricing.

      Now some of you will say "But 2MB or 6MB more RAM is cheaper than the per-license requirement!". How true. It would be far cheaper to put 8MB on the board and run embedded Linux than it would be to use vxWorks. But this means we're now raising the fixed cost of the hardware to support 8MB on each and every board, to run a free O/S. Yes, it's open source, but if I buy OS/9, I get technical support (and very good technical support, I might add. Most of these OS vendors have good support departments). I may be able to fix problems I find in any OS component, but then I eat into my time to market by chasing down demons that I may or may not be able to fix. Whereas if I pay them, it gets fixed. If I have a wide time-to-market window, then this may become acceptable. Also, the documentation for embedded Linux is still far behind the commercial embedded OS products.

      This is not a rant against Linux, embedded Linux, or open software. It's a matter of market economics, where sometimes it's better to pay someone else to take care of certain problems. There are plenty of projects that need heavier-duty CPU power than a lot of projects I work on, and embedded Linux is a natural fit (TiVo being an excellent example). But given the price window we need to hit, and the fact that we don't have a large development time window, we're probably going to wind up going with an embedded OS vendor, and probably OS/9. Maybe next time...

      On the topic of why some things cost more than you think they should: Consider a company that makes amateur radios. A typical mobile (one for the car) amateur radio transceiver runs about $500USD, average. That's a lot of bucks. However, Yaesu, Icom, Kenwood, et al. have to pay for the cost of development, manufacturing, distribution, advertising, etc., amortized across a comparatively small number of units (say, compared to selling a particular model of a VCR). The technology in a PlayStation 2 is more sophisticated, but you're selling a helluva lot more units. The same applies to software. Company X sells package Y for $Z, and $Z has to pay for the janitor that cleans their bathrooms, the marketing exec, the development cost, shrinkwrap, lawyers, etc. There's a lot of "hidden" cost in that package you pay for, and if they don't sell the software, they still have to feed the people that work for them. Open Source and all that aside, keep that in mind next time you wonder why something costs so much for something that seems so "simple". Consider all those hidden costs behind it, and what volume of product they have to ship to be an economically viable company.

      --Chris

        Choose the right tool for the job.

        Linux works in a lot of the embedded market, but certainly not all. It sounds to me like you have done your research and found it didn't fit for you. However over time I think you will first of all find that the minimum requirements for embedded Linux will fall a little more, and the fixed cost of meeting those requirements will fall a lot more.

        Therefore even though you are not choosing it now, you have every reason to believe that in a couple of years you would be likely to make the opposite decision. And that raises a couple of very interesting issues.

        First of all Linux support. Anecdotal evidence is that technical support for Linux is very good. There is reason to believe that it will get better since as the embedded market matures that will be all that really differentiates the vendors.

        Second of all what is the future of vxWorks et al? You are depending upon support for them. But Linux is looking to eat away their current revenue base, and you have to ask questions about what new markets they have. History says that when companies run into financial crunches, they tend to start trying to cut back and quality suffers. I don't mean to spread FUD here, but think about what timeline you expect to need support over and whether you think that the vendor you are dealing with will be able to give that support. This is a sad decision that you need to make quite often in software and in business in general.

        Thirdly there is a lot of misunderstanding about the entire, "You can fix problems yourself" facet of Open Source. Yes, you can theoretically fix problems yourself. That doesn't mean that you should. As Bob Young likes to comment, buying proprietary software is like buying a car with the hood welded shut. We buy cars with hoods that are not welded shut. There are a lot of good reasons to do so. One of the best is that we then get a competitive market in auto-mechanics. And in fact companies like LinuxCare are willing to take contracts to fix problems in open-source software. (Specifically in the Linux kernel.)

        Now I don't say that after all of this you will decide on Linux. In fact in your case you may well not. But long-term, for a fixed need, betting against Linux in the embedded space is IMNSHO stupid. However short-term, for a fixed use, it may well be insane to go with Linux.

        History shows that in computers, the commodity wins. History also shows that in computers, the commodity is often not the best choice to make at a given point in time. :-)
