in reply to Re^2: Reasons for Using Perl 6 in thread Reasons for Using Perl 6
Floating-point arithmetic is flawed and sucks, and has sucked for probably 40+ years by now (which is a very long period when speaking about modern technologies). It is actually outrageous that this hasn't been fixed over such a long period. Experienced programmers know it and have learned to live with it. Does that mean that we should accept this flaw? I don't think so. IMHO, we should get rid of that flaw; we need a flexit.
Consider this in Python:
>>> .3 - .2 - .1
-2.7755575615628914e-17
The result is very close to zero, but it should really be exactly 0.
It's not better in Perl 5:
$ perl -E 'say .3 - .2 - .1;'
-2.77555756156289e-17
Of course, neither Python nor Perl 5 is responsible for that; it is the FP arithmetic of the underlying architecture that sucks.
Does this mean we can't do anything about it? No: we can, and Perl 6 gets this right:
> say .3 - .2 - .1;
0
> printf "%.50f\n", .3 - .2 - .1;
0.00000000000000000000000000000000000000000000000000
Getting an accurate result for such a simple calculation is not a minor detail; it is a real and major improvement. You can compare non-integers for equality and get the correct result. Don't try that in most other programming languages.
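To make the equality point concrete, here is a minimal Python sketch (Python, since the example above already uses it) contrasting float comparison with exact rationals via the standard fractions module, which plays a role analogous to Perl 6's Rat:

```python
from fractions import Fraction

# With binary floats, arithmetic that should give 0 leaves a tiny residue,
# so the equality test fails.
print(0.3 - 0.2 - 0.1 == 0.0)   # False

# With exact rationals, the same computation compares equal to zero.
result = Fraction(3, 10) - Fraction(2, 10) - Fraction(1, 10)
print(result == 0)               # True
```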
Yes, this improvement comes at a cost. As long as we have hardware that is inaccurate, we'll need to work around it in software, and this is bound to be somewhat slower. Personally, I think that accuracy is more important than speed.
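The cost is easy to demonstrate with a rough Python sketch (not a rigorous benchmark; the slowdown factor will vary by machine and workload), again using the fractions module as a stand-in for software rationals:

```python
from fractions import Fraction
import timeit

# Summing 0.1 ten thousand times: the float total drifts away from 1000,
# while the exact rational total is 1000 on the nose.
float_total = sum(0.1 for _ in range(10_000))
exact_total = sum(Fraction(1, 10) for _ in range(10_000))
print(float_total == 1000)   # False: accumulated rounding error
print(exact_total == 1000)   # True

# The price of exactness: rational arithmetic is noticeably slower.
t_float = timeit.timeit("sum(0.1 for _ in range(1000))", number=100)
t_exact = timeit.timeit("sum(Fraction(1, 10) for _ in range(1000))",
                        setup="from fractions import Fraction", number=100)
print(f"rationals were roughly {t_exact / t_float:.0f}x slower here")
```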
If speed were the only criterion, none of us would use Perl; we would all write our programs in C, nay, probably in assembly. But we don't. So, most of the time, speed is not the most important factor for us.
Well, sometimes speed matters. And I probably would not argue for accuracy if it meant that computation was really "insanely slow," as you claim. But it is not insanely slow; it is just somewhat slower, as shown in my earlier post above, and this speed issue can also be addressed with a faster CPU. And, as I said earlier, there are performance enhancements almost every week (and some very big ones should be there very soon).
Re^4: Reasons for Using Perl 6
by dsheroh (Prior) on Dec 24, 2017 at 09:13 UTC

And, as I said earlier, there are performance enhancements almost every week (and some very big ones should be there very soon).
Given the history of Perl 6 over the last couple of decades, I'm not sure that anything which even remotely hints at "it'll be there Real Soon Now" is going to be effective at changing the minds of its non-fans.

Yeah, fair enough, I get your point. ;)
To be a bit more specific, let me just outline the work done on performance. I'll split this work into two tracks.
Just about every week, someone from the core development team improves the speed of this or that specific feature (often, actually, 3 or 4 features per week). Of course, such a change does not improve the speed of programs that don't use that specific feature. But, over time, many features are improved, and it becomes increasingly likely that your specific program will benefit from one of these feature performance enhancements.
Then, there is also more in-depth work underway on the Rakudo compiler / MoarVM optimizer (including JIT optimization). This can significantly improve the performance of almost any program that runs long enough for the run-time optimizer to kick in. Needless to say, this type of work is pretty complicated and needs heavy testing. As far as I know, very good results have already been achieved, but they haven't yet found their way into packaged production releases. So, at this point, you would probably need to download the development versions and build Rakudo / MoarVM and the related environment to be able to test these improvements. I don't have any specific information, but I hope these enhancements will make their way into a production release relatively soon (though I don't know when).
You can get more detailed information on these subjects here: https://perl6advent.wordpress.com/2017/12/16/.

it becomes increasingly likely that your specific program will benefit from one of these feature performance enhancements
We've all heard that a lot! Assuming a linear progression of improvement is unrealistic. If performance isn't better than Perl overall now, with allegedly a better internal data model, a better VM, and a language that's easier to optimize, where's that speed going to come from?
I'm sure a few people here remember the Parrot benchmarks of a decade ago that showed raw Parrot performance (PASM, PBC, and I believe PIR) was generally better than Perl performance, and that was without the sort of optimizations that could have been possible (escape analysis, unboxing, JIT).
Then, there is also more in-depth work underway on the Rakudo compiler / MoarVM optimizer (including JIT optimization).
The last time I looked at Moar, it didn't look like it was designed for the sort of optimizations that people think of when they think of JITs such as those for JavaScript, Lua, or the JVM. When we were designing the optimized version of Parrot called Lorito, we looked at Squeak/Slang and JavaScript for examples, focusing on optimization possibilities such as unboxing, using primitive types where possible, avoiding memory allocations where possible, and (above all) not crossing ABI/calling-convention boundaries you can't optimize across.
I could be wrong about all of this (I haven't looked at any of this code in any sort of detail in seven years), but as long as the optimization strategy of Moar/NQP/Rakudo is "write more stuff in C because C is fast", it'll struggle to reach performance parity with Perl, let alone surpass it. The fact that it's been years and the Rakudo stack is still four or five times slower than Perl does not give me much confidence that Rakudo will ever reach JavaScript levels of performance (let's be conservative and say it needs to be 20x faster for that) without yet another rewrite.
Re^4: Reasons for Using Perl 6
by syphilis (Chancellor) on Jan 02, 2018 at 13:10 UTC

Floating-point arithmetic is flawed and sucks
I disagree with that on both counts.
To me, it's incredibly naïve to complain about a base 2 approximation of 0.3 minus a base 2 approximation of 0.2 minus a base 2 approximation of 0.1 resulting in a minute nonzero value.
I've nothing against rational arithmetic, but note that when you've done your rational computations and you end up with a result of 132511/43, the first thing you're going to do (in order to gauge the magnitude of that value) is to convert it to an approximate floating-point value.
And I would think (untested) that perl5's Math::GMPq module provides better rational arithmetic than perl6 ever will.
I can't see perl6's arithmetic model ever being a reason for me to use perl6. (I'd rather stay with perl5, and do the arithmetic in XS space if accuracy is important.)
Cheers, Rob

To me, it's incredibly naïve to complain about a base 2 approximation of 0.3 minus a base 2 approximation of 0.2 minus a base 2 approximation of 0.1 resulting in a minute nonzero value.
I don't complain about that, and I am not naive enough to ignore that base 2 approximations of decimal non-integer numbers are not going to be accurate. I am complaining about the fact that we still have to rely on base 2 approximations. It really should no longer be the case 18 years into the 21st century.
Yes, I will probably convert 132511/43 into an approximate FP value if I need, as a human, to estimate its magnitude, but not if my aim is to store the value in a computer and I am given the technical means to store it as a rational. This FP approximation has plagued us for almost half a century. I know we won't get rid of it overnight and that it will continue to plague us for quite a while, but I just hope it won't be for another half century. And for that to happen, we need to start somewhere. Perl 6's arithmetic model is a start.
And I would think (untested) that perl5's Math::GMPq module provides better rational arithmetic than perl6 ever will.
Maybe. Or maybe not. I just don't know.
I was not saying that Perl 6's arithmetic model should be in itself a reason for you or for me to use Perl 6, I was only answering another monk who picked on that topic.

It will continue to "plague" us as soon as we venture outside mere division and multiplication. What's the rational value of sqrt(2)? What's the rational value of sin(25)?
Sure, you can use rational approximations instead of base 2 ones. At a fairly large expense.
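That expense is real and easy to see. A small Python sketch: Newton's method for sqrt(2), carried out in exact rationals with the fractions module, converges quickly, but the fractions roughly double in size at every step, which is why nobody stores sqrt(2) as a rational:

```python
from fractions import Fraction

# Newton iteration x -> (x + 2/x) / 2 converges to sqrt(2).
x = Fraction(1)
for step in range(1, 7):
    x = (x + 2 / x) / 2
    print(f"step {step}: denominator has {len(str(x.denominator))} digits")

# The approximation is superb, but the representation keeps growing.
print(abs(x * x - 2) < Fraction(1, 10**20))   # True
```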
Jenda
Enoch was right!
Enjoy the last years of Rome.
Re^4: Reasons for Using Perl 6
by Anonymous Monk on Dec 23, 2017 at 19:46 UTC

Real-world programming tends to involve functions. For instance, trigonometric functions are very common. How exactly does Perl 6 do it right? What is the domain of accurate calculations? Say, can it do simple interest calculations accurately?
Perl and speed aren't mutually exclusive. You can have a low-level module or Inline::C doing the heavy lifting and still benefit from rapid prototyping or other comforts that Perl allows.

Perl 6 uses the Rat type to represent rational numbers such as 1/3, 23/7, .564, etc. With such numbers, calculations made with the four basic operators are generally exact, because the Rat type represents rationals with two integers: one for the numerator and one for the denominator.
With irrational numbers, such as the square root of 2, or with trigonometric functions, Perl 6 is forced to use floating-point arithmetic and suffers from the same drawbacks as other programming languages.
So Perl 6 can do simple interest calculations accurately, but for compound interest calculations it would fall back on floating-point arithmetic (and its flaws) in most common cases (although it is often possible to work around this if needed).
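As a concrete illustration of the interest point, here is a Python sketch with Fraction standing in for Perl 6's Rat (the principal, rate, and term are made-up example inputs): simple interest, and even compounding over a whole number of periods, stays exact, while anything involving an irrational function such as exp() for continuous compounding forces a float:

```python
from fractions import Fraction
import math

principal = Fraction(1000)
rate = Fraction(5, 100)      # 5% per year
years = 3

# Simple interest: only + and *, so the result is exact.
simple = principal * (1 + rate * years)
print(simple)                # 1150

# Compounding over a whole number of periods is still exact rational math.
compound = principal * (1 + rate) ** years
print(compound)              # 9261/8, i.e. exactly 1157.625

# Continuous compounding needs exp(), an irrational function: there is no
# rational answer, so we have to drop to floating point.
continuous = float(principal) * math.exp(float(rate) * years)
print(continuous)            # an approximation only
```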
I agree with your point about Perl and speed not being necessarily mutually exclusive. Even though I am dealing with huge volumes of data (so speed is important to me), I have never needed to actually use things such as Inline::C at $work.

