PerlMonks  

Re^3: Projects where people can die

by BrowserUk (Patriarch)
on Sep 07, 2006 at 23:46 UTC ( [id://571830] )


in reply to Re^2: Projects where people can die
in thread Projects where people can die

Are you going to guarantee the absence of the effects of cosmic rays / radiation / jam on your processor?

Well, there are such things as space-rated cpus, and in any environment/application where radiation is a hazard, they would be used along with secondary protection (lead or gold shielding)--but hardware has moving parts; it is subject to wear and tear and tolerances. Hardware fails. Disks fail. Even high-quality, brand-new disks fail. Of course, you can run extensive tests to reduce the likelihood of some failure modes, but in doing so you run the risk of increasing the likelihood of others--through wear and tear.

In any case, there are no guarantees.

  • Maybe the computer will be hit by a crashing airliner, so you bury it underground encased in steel and concrete.
  • But then you might have an earthquake that vibrates something loose--so you suspend the computer inside its concrete coffin to isolate it from that.
  • But the power supply might get severed--so you put a generator inside the coffin.
  • But that might fail--so you add two.

It's all about likelihood, and the most vulnerable component in most computer systems is the hard disk. That's why solid-state secondary storage is such a holy grail. Removing that from the equation just makes sense.

With no guarantees, it's all about minimising risk. And that's about spending your money to achieve the biggest bang for your buck. Of the millions of computer users around the world, it's probable that 5 or 10% have experienced some form of disk failure. I have.

How many have experienced cpu failure--of any kind? Of those that have, how many could be attributed to some form of radiation degeneration of the cpu (or memory)? Much harder to assess, as without extreme analysis there is simply no way to know.

The point is that it is possible to test Perl code as thoroughly as any other code, but the additional step of repetitive runtime compilation is one further possibility of failure. For non-life-critical systems, the additional risk is (in most cases) not worth the cost of elimination. But for life-critical systems, it is not worth the risk not to.

The safer way to provide security is to have multiple redundant, different (many people miss this distinction) systems checking each others' results.

I'm cognisant of the technique.

Applied to a Perl program, this would entail producing a completely separate implementation of perl. Since there are no specs--the existing sources are the spec--there is nothing against which to build such a system, let alone verify it.

I have a memory of reading an article--possibly related to the fly-by-wire systems on Airbus aircraft--that suggested that using a single set of sources compiled by different compilers and targeted at different cpus was better than producing two sets of sources in different languages. I can't find references. From memory, the rationale went that by starting with a single set of sources, it reduced the complexity by removing the need to try to prove that two language implementations were equivalent. That somewhat unintuitive conclusion actually makes economic sense. Every reduction in complexity comes with an increase in the possibility of proof. Maybe :)
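For illustration only: the "different systems checking each others' results" idea boils down to a voter that only accepts a value when a strict majority of independent channels agree. Here is a minimal Perl sketch of that voting step; the channel coderefs and names are invented for this example and stand in for genuinely independent implementations (different hardware, compilers, or sources), not for any real avionics code.

    #!/usr/bin/perl
    # Hypothetical sketch: 2-out-of-3 majority vote across independent channels.
    use strict;
    use warnings;

    sub vote {
        # Each argument is a coderef standing in for one independent channel
        # computing the same quantity.
        my @results = map { $_->() } @_;

        # Count how often each result value occurs.
        my %tally;
        $tally{$_}++ for @results;

        # Accept a value only if a strict majority of the channels agree.
        my ($winner) = sort { $tally{$b} <=> $tally{$a} } keys %tally;
        return $winner if $tally{$winner} > @results / 2;

        # Fail loudly rather than silently returning a disputed value.
        die "No majority agreement: [@results]\n";
    }

    # Usage: three channels computing the same quantity in different ways.
    my $value = vote(
        sub { 2 + 2 },    # channel A
        sub { 4 },        # channel B
        sub { 2 * 2 },    # channel C
    );
    print "Agreed value: $value\n";

The voter itself is trivial; the expensive part--and the point being argued above--is keeping the channels genuinely independent so they don't share a common failure mode.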


Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
Lingua non convalesco, consenesco et abolesco. -- Rule 1 has a caveat! -- Who broke the cabal?
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.

Replies are listed 'Best First'.
Re^4: Projects where people can die
by hv (Prior) on Sep 08, 2006 at 11:46 UTC

    How many have experienced cpu failure--of any kind? Of those that have, how many could be attributed to some form of radiation degeneration of the cpu (or memory)? Much harder to assess, as without extreme analysis there is simply no way to know.

    I have experienced failure which, after extreme analysis, we could only attribute to radiation degeneration of a byte of memory - see Re: What do you know, and how do you know that you know it? for some details.

    Later in the same job I was writing interrupt handlers for a new ARM-based chipset, and found a problem in that chipset - a bug in certain (characterisable) situations when a multiple-register store instruction crossed a 64k page boundary. Somehow that did less to shake my confidence in the perfection of hardware than the memory corruption problem.

    Hugo

      Somehow that did less to shake my confidence in the perfection of hardware than the memory corruption problem.

      One was random, one was not. Very understandable.

      --MidLifeXis

      ... after extreme analysis, we could only attribute to radiation degeneration ...

      A somewhat less than definitive causality. But assuming that you are correct, that's 1 in n*100e6 computer users.

      I'm still inclined to believe that disk failure is probably more common :)


      Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
      Lingua non convalesco, consenesco et abolesco. -- Rule 1 has a caveat! -- Who broke the cabal?
      "Science is about questioning the status quo. Questioning authority".
      In the absence of evidence, opinion is indistinguishable from prejudice.
