Passing my_perl on the stack/register is a lot faster than the GLR and SLR and TlsGetValue calls.
Ignoring that I don't know what "GLR and SLR" are -- and you do not bother to explain them -- I'd be interested to see proof of that "much faster". Faster I have no doubts, but much faster?
See "TlsGetValue was implemented with speed as the primary goal."
And I counter that assertion with: speed isn't everything.
Burdening every function, and every programmer, with the need to accommodate a 'pass-through variable' and relying upon the optimiser to make it disappear when not required -- all to save what effectively becomes something like mov rax, GS:[8*rcx+0x2c] -- is short-sighted in the extreme.
And wrapping it all in a bunch of "trick" macros makes the programmer's burden -- via cognitive disconnection -- even worse.
I took the OP to mean...
How would you handle ...
I wouldn't. Just because I took the OP to mean that doesn't mean that I think it is a good idea, or even possible.
I only attempted to answer -- at perhaps a superficial level -- the OP's question: "I'm curious why non-threaded perl can do what thread perl can't do". Naught more.
Which is why I think your post would have been better directed at the alternative you suggested.
GObject and Perl's GC systems have many similarities. ... I am saying that would be a bad choice.
Then why mention it? No one else did.
Aren't you just as guilty of misdirection by bringing it up and leaving it hanging as the guy that suggested: "you'll be fine so long as you don't use threads"? Which seems to be the focus of your posts.
With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.
RIP Neil Armstrong
Ignoring that I don't know what "GLR and SLR" are -- and you do not bother to explain them -- I'd be interested to see proof of that "much faster". Faster I have no doubts, but much faster?
See "TlsGetValue was implemented with speed as the primary goal."
And I counter that assertion with: speed isn't everything.
My new motto: death by a thousand cuts.
Win XP's TlsGetValue has 3 branches in its asm, only 1 of which is on the found-value path. On that path, apart from mandatory stack-frame maintenance, it dereferences the FS register, dereferences the C stack index value, compares the index value to a constant, conditionally jumps, stores a constant 0 through the TEB in a regular register plus offset (SetLastError(0)), moves the TEB in a regular register + index register + SIB-encoded constant into eax, and returns. A total of 11 machine opcodes executed, stack-frame maintenance included. That is also ignoring the SetLastError and GetLastError done by Perl before and after the TlsGetValue. Now time for some real-world numbers.
void
CxtSpeed()
    PREINIT:
        LARGE_INTEGER start;
        LARGE_INTEGER end;
        int i;
    PPCODE:
        QueryPerformanceCounter(&start);
        for(i = 0; i < 1000000000; i++) {
            no_cxt();
        }
        QueryPerformanceCounter(&end);
        printf("no cxt %I64u\n", end.QuadPart - start.QuadPart);
        QueryPerformanceCounter(&start);
        for(i = 0; i < 1000000000; i++) {
            cxt(aTHX);
        }
        QueryPerformanceCounter(&end);
        printf("cxt %I64u\n", end.QuadPart - start.QuadPart);
/* separate compilation unit / obj file (noopt.obj) */
#define PERL_NO_GET_CONTEXT
#include <EXTERN.h>
#include <perl.h>
#include <XSUB.h>

__declspec(dllexport) int no_cxt() {
    dTHX;    /* fetches the context via Perl_get_context() */
    return ((int) my_perl) >> 1;
}

__declspec(dllexport) int cxt(pTHX) {
    return ((int) my_perl) >> 1;
}
# in Makefile.PL, in the hash passed to WriteMakefile
FUNCLIST    => [ 'no_cxt', 'cxt' ],
dynamic_lib => {
    OTHERLDFLAGS     => ' noopt.obj ',
    INST_DYNAMIC_DEP => 'noopt.obj',
},
Check the disassembly to make sure it wasn't all optimised away by inlining. The cxt() loop was completely removed on my first try.
C:\Documents and Settings\Owner\Desktop\cpan libs\lxs>perl -MLocal::XS -e "Local::XS::CxtSpeed();"
no cxt 48160819
cxt 11096124

C:\Documents and Settings\Owner\Desktop\cpan libs\lxs>
Whole script took about 5-8 seconds. One Perl_get_context call took 4.3 times more time than passing my_perl on the C stack, and of course everything tested fit in L1 cache the whole time. I would much rather have my_perl on the C stack or in a register (compiler's choice) than call Perl_get_context half a dozen or more times in every Perl C function. If you want to know why TlsGetValue is never optimized away to inline assembly, ask MS. I didn't write Kernel32.
I'm surprised it only took 4 times longer. GetLastError is 3 opcodes, stack-frame maintenance included. SetLastError is 8 opcodes, stack-frame maintenance included. TlsGetValue was 11 opcodes, stack-frame maintenance included. Perl_get_context is 13 opcodes, no branches, stack-frame maintenance included. 3 opcodes for the no_cxt body, stack-frame maintenance included. A total of 38 opcodes for no_cxt, versus a total of 3 opcodes for cxt(), stack-frame maintenance included. So no_cxt executed 12.6 times more opcodes than cxt, yet took only 4.3 times more time. Did my superscalar Core 2 collapse all those function calls into one with IPO when it recompiled the x86 asm to micro-op asm, or did the branch predictor plus cache dirty-flag checking remove the code? I don't know, but they are interesting numbers anyway. In any case, my_perl in a register/on the C stack wins.
Which is why I think your post would have been better directed at the alternative you suggested.
Should I delete my post and post it to the other post?
GObject and Perl's GC systems have many similarities. ... I am saying that would be a bad choice.
Then why mention it? No one else did.
The OP was vague and didn't concisely explain anything, so I had to consult the crystal ball I got at the pound shop -- made from lead wiring, chipboard and bitumen -- to read the OP's mind. My crystal ball said he is using Perl in a DLL that doesn't link with Perl but includes Perl's headers for Perl's GC. Should I use my O'Reilly brand tarot cards in the future?
Aren't you just as guilty of misdirection by bringing it up and leaving it hanging as the guy that suggested: "you'll be fine so long as you don't use threads"? Which seems to be the focus of your posts.
I gave an answer
Someone who didn't read the manual might otherwise think they can use Perl's C data structures, instead of GObject, without a "useless" Perl interpreter around.
If you read the manual (perlapi/perlguts/illguts/perlembed/perlxs), you will know that using Perl without an initialized interp is not supported by Perl.
Separating the useful from the non-useful: I'll reply to the first part of your post separately.
Should I delete my post and post it to the other post?
No. But had you aimed more carefully, you might be righting the wrong you perceived by discussing it with the guy that perpetrated it, rather than having this (pointless part of this) discussion with me.
The OP was vague and didn't concisely explain anything, ...
So, to correct the wrong of bad information -- that someone else posted -- you supplied some equally bad information, in reply to me?
Let's call this part of the discussion a misunderstanding and close it.
First, thank you for arguing with numbers. It is a rare event and most welcomed.
But -- you knew that was coming, right? -- your benchmark:
- Has to use a huge multiplier -- 1 billion iterations -- to make a mountain out of a molehill.
Let's say the total runtime was the upper end of your vague estimate: 8 seconds.
Which means:
- 'cxt' took 1.5 seconds for 1e9 operations = 0.0000000015 s/iteration.
- 'noctx' took 6.5 seconds for 1e9 operations = 0.0000000065 s/iteration.
By anybody's standards, a whole 5 billionths of a second difference is hardly "huge". (Which was your assertion).
And if the body of the loop did anything useful -- like call one or two of the huge macros or long twisty functions that are the reason for having the context within the sub in the first place -- then those 5 nanoseconds would just disappear into the noise.
- In the noctx, you are using the equally flawed Perl_get_context()
Which, as you point out, entirely swamps the call to TlsGetValue() by bracketing it with (useless*) calls to GetLastError() and SetLastError().
As we discussed before, what last error are they preserving that is important enough to be preserved, but not important enough to be reported straight away?
And if there is justification for preserving some system errors whilst ignoring others, why preserve them in OS memory, thus requiring every unimportant system call to be bracketed with GLE/SLE? Why not get the error just after the important system call that caused it and put it somewhere local?
That way, you do one GetLastError() call after each (significant) system call whose error you want to preserve, rather than bracketing every insignificant system call with two other system calls.
My prime suspect for why TlsGetValue() doesn't get inlined is the fact that it is bracketed by those other two calls. I'd love to see you add a 3rd test to your benchmark that calls TlsGetValue() directly. I'm not saying it will be inlined, but even if it isn't, it would reduce the (already nanoscopic) difference quite considerably.
Most significantly -- you've tested something quite different from what I was trying to describe.
The reason functions need to have visibility of the context is that some of the functions they call require it to be passed to them.
This requirement is often hidden by wrapping the functions that need it in macros. You know better than I do how grossly unwieldy many of the wrapper macros get.
There is a common pattern to many of the worst ones, that goes something like this:

#define SOMETHING1 STMT_START { assert( something ); if (some_complex_condition) wrapped_function1( aTHX_ ... ); assert( something_else ); } STMT_END

#define SOMETHING2 STMT_START { assert( something ); if (some_complex_condition) wrapped_function2( aTHX_ ... ); assert( something_else ); } STMT_END

#define SOMETHING3 STMT_START { assert( something ); if (some_complex_condition) wrapped_function3( aTHX_ ... ); assert( something_else ); } STMT_END
int someFunction( pTHX_ ... ) {    /* context forced into the signature */
    ...;
    SOMETHING1( ... );
    ...;
    SOMETHING2( ... );
    ...;
    SOMETHING3( ... );
    RETURN;
}
The logic being (I assume) that by testing the conditions inline, you avoid the call overhead for the cases where the condition(s) fail.
But a simple test shows that it isn't the case:
With x1(), 50% of calls are avoided by an inline conditional test.
With x2(), that test is moved into the body of the function, which returns immediately if the test fails.
Compile & run:

C:\test>cl /Ox calloverhead.c
Microsoft (R) C/C++ Optimizing Compiler Version 15.00.21022.08 for x64
Copyright (C) Microsoft Corporation. All rights reserved.
calloverhead.c
Microsoft (R) Incremental Linker Version 9.00.21022.08
Copyright (C) Microsoft Corporation. All rights reserved.
/out:calloverhead.exe
calloverhead.obj
C:\test>calloverhead 10000000
Inline condition: 60,068,106
Inbody condition: 45,064,458
C:\test>calloverhead 10000000
Inline condition: 60,037,515
Inbody condition: 45,084,879
C:\test>calloverhead 10000000
Inline condition: 60,048,828
Inbody condition: 45,057,681
C:\test>calloverhead 10000000
Inline condition: 60,032,691
Inbody condition: 45,032,724
The inline condition takes a third more cycles than putting the test inside the body of the called function!
And if the conditional tests are inside the body of the functions, you no longer need the macro wrappers -- which makes things a lot clearer for the programmer.
And you also don't need access to the context in all the callers of the wrapped functions, so then the called function can obtain the context internally, thus removing it from visibility at the caller's level.
And the code size shrinks because the conditional test appears once inside the function rather than at every call site.
That's a 3 way win, with no downsides.
The point is that you cannot take one single aspect of the overall vision, mock it up into a highly artificial benchmark and draw conclusions. You have to consider the entire picture.
Of course, it is never going to happen, so there is little point in arguing about it; but if you did effect this kind of change throughout the code base, along with all the other stuff we discussed elsewhere, the effects could be significant.
The hope for using LLVM to compile the Perl runtime is that by re-writing the macro-infested C sources to IR, and combining them with the current compilation unit of the Perl code that uses them -- also suitably compiled to IR -- it can see through both the macros and the disjoint runloop, and find optimisations on a case-by-case basis that cannot be made universally.
That is to say (by way of example), a piece of code that uses no magic, and only IVs or UVs, may qualify for optimisations that could not be made statically by a C compiler, because -- given the current structure of the pp_* opcode functions -- it could never possibly see them, as it always has to allow for the possibility of magic, and NVs, and PVs, et al.