Re: The Boy Scout Rule
by hippo (Bishop) on Jan 25, 2015 at 14:04 UTC
To answer the last question, I have to say that
my $value = [ $x => $y ] -> [ $y <= $x ];
would not pass my code review. It is clever, but apparently pointlessly so. The fat comma in particular appears to be present only to cause confusion - a normal comma would add (a little) clarity. This code, even if commented, would cause many programmers to pause while they worked out quite what it was doing. That may only take a couple of seconds for an expert but pity the poor Perl newcomer who stumbles upon this.
For my money the ternary conditional version is perfectly clear and without overhead and would be the way I would choose to code this operation.
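For anyone weighing the two forms side by side, a minimal sketch (the values are illustrative):

```perl
use strict;
use warnings;

my ( $x, $y ) = ( 3, 7 );

# Ternary: reads directly as "the smaller of $x and $y".
my $min_ternary = $x < $y ? $x : $y;

# The "clever" form: build a two-element anonymous array, then index
# it with the boolean result (0 or 1) of the comparison.
my $min_clever = [ $x, $y ]->[ $y <= $x ];

print "$min_ternary $min_clever\n";   # both are 3
```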
Regarding code reviews in general and workspace policies - I am essentially freelance and therefore am exposed to a wide range of different restrictions and policies. Generally speaking there are coding standards, and a lot of the time these are enforced for the most part automatically (i.e. on commit or pre-release). I don't do any pair-programming, but there are code reviews of varying nature, most of which would fit into your "lightweight" category. They don't tend to dwell on the minutiae; it is more a case of establishing clarity of purpose, eliminating flaws in security and robustness, and promoting efficiency.
It is my personal belief (opinion alert!) that it will benefit any programmer to be exposed to code written in a wide variety of styles. That is partly why I am here in these hallowed halls. Here I see idioms, layouts, compound operators, data structures and algorithms which I would not generally have considered myself, to say nothing of being introduced to many useful modules which would otherwise have escaped my attention. With that in mind, communication between programmers, whether on online fora, within (or between) development teams, or even at RLMs such as Perl Mongers, is to be encouraged.
Thanks for this interesting meditation.
Hippo
my $value = [ $x => $y ] -> [ $y <= $x ];
would not pass my code review.
I agree it’s confusing the first time and the => should be a skinny comma :P instead. That said, the Schwartzian transform is even more confusing the first time you see it. No one in a post 5.6 Perl world would suggest rewriting it with a bunch of temp arrays and for blocks. So, I advocate simple little idioms like the above when they offer something more than clever/pretty code.
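For newcomers, the Schwartzian transform mentioned above looks like this (a generic sketch; length() stands in for a genuinely expensive key function):

```perl
use strict;
use warnings;

# Schwartzian transform: decorate each item with its sort key,
# sort on the cached key, then strip the decoration.
my @words = ( 'bb', 'aaaa', 'c' );

my @by_length =
    map  { $_->[1] }                      # undecorate
    sort { $a->[0] <=> $b->[0] }          # sort on the cached key
    map  { [ length $_, $_ ] } @words;    # decorate

# @by_length is ( 'c', 'bb', 'aaaa' )
```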
So, thinking I might defend the clever/pretty one, I tried a Benchmark (which I'm not necessarily doing right, so someone please jump in if it's badly formed). The one that might be the most semantically clear, and that I assumed would be the slowest, is in fact the fastest by a good measure. I forgot that List::Util is XS.
use strictures;
use Benchmark "cmpthese";
use List::Util "min"; # This is XS.

my @xy = ( [ 1, 0 ], [ 0, 1 ], [ 0, 0 ], [ 1, 1 ],
           [ 1_000_000, 999_999 ], [ 999_999, 1_000_000 ] );

my $m; # Avoid void in comparisons.

cmpthese(10_000_000, {
    list_util => sub { $m = min(@$_) for @xy },
    ternary   => sub { $m = $_->[0] < $_->[1] ? $_->[0] : $_->[1] for @xy },
    clever    => sub { $m = [ $_->[0], $_->[1] ]->[ $_->[0] <= $_->[1] ] for @xy },
});
Rate clever ternary list_util
clever 347584/s -- -56% -68%
ternary 792393/s 128% -- -28%
list_util 1096491/s 215% 38% --
use strictures;
use Benchmark "cmpthese";
use List::Util "min"; # This is XS.
cmpthese -1, {
    list_util_nb => q[ my( $x, $y ) = ( 0, 1 ); my $m = min( $x, $y ) for 1 .. 1000; ],
    ternary_nb   => q[ my( $x, $y ) = ( 0, 1 ); my $m = $x < $y ? $x : $y for 1 .. 1000; ],
    clever_nb    => q[ my( $x, $y ) = ( 0, 1 ); my $m = [ $x, $y ]->[ $x <= $y ] for 1 .. 1000; ],
    list_util_b  => q[ my( $x, $y ) = ( 1, 0 ); my $m = min( $x, $y ) for 1 .. 1000; ],
    ternary_b    => q[ my( $x, $y ) = ( 1, 0 ); my $m = $x < $y ? $x : $y for 1 .. 1000; ],
    clever_b     => q[ my( $x, $y ) = ( 1, 0 ); my $m = [ $x, $y ]->[ $x <= $y ] for 1 .. 1000; ],
};
__END__
C:\test>junk30
                Rate  clever_b  clever_nb  list_util_nb  list_util_b  ternary_b  ternary_nb
clever_b      1210/s        --      -11%          -67%         -70%       -79%        -80%
clever_nb     1356/s       12%        --          -63%         -67%       -76%        -78%
list_util_nb  3694/s      205%      172%            --          -9%       -34%        -40%
list_util_b   4062/s      236%      200%           10%           --       -28%        -33%
ternary_b     5630/s      365%      315%           52%          39%         --         -8%
ternary_nb    6107/s      405%      351%           65%          50%         8%          --
C:\test>junk30
                Rate  clever_nb  clever_b  list_util_b  list_util_nb  ternary_b  ternary_nb
clever_nb     1297/s        --       -5%         -68%          -69%       -75%        -77%
clever_b      1372/s        6%        --         -66%          -67%       -74%        -75%
list_util_b   4078/s      214%      197%           --           -3%       -22%        -27%
list_util_nb  4190/s      223%      205%           3%            --       -20%        -25%
ternary_b     5228/s      303%      281%          28%           25%         --         -6%
ternary_nb    5556/s      328%      305%          36%           33%         6%          --
List::Util::min() will obviously win in both speed and clarity for the min( @array ) case.
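For that general case it's simply:

```perl
use strict;
use warnings;
use List::Util 'min';   # XS, so fast as well as clear

my @nums   = ( 42, 7, 19 );
my $lowest = min(@nums);   # 7
```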
With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
Re: The Boy Scout Rule
by BrowserUk (Patriarch) on Jan 25, 2015 at 23:19 UTC
Booking.com
As the major contributor to their parent company's (Priceline) $4.8 billion annual revenue, $1.1 billion profit and $29 billion market cap., they appear to be doing something right. And given that their core business model has remained essentially the same since they were taken over in 2005, perhaps (part of) the secret that makes this Perl-based company stand out from its (original) peers, is that it hasn't succumbed to any of the fads that place the programming process and programmers above the business model.
Their basic business model hasn't changed; thus the required processes haven't changed much. When new code is required, it is very likely to need to do something very similar to stuff that already exists, and is proven to work.
You don't re-write (or refactor) code unless there is an identifiable, demonstrable reason that benefits the business revenue stream. Entertaining programmers is not such a reason.
Sure, there are times when it is possible to make the case that rewriting a piece of working code will benefit the business -- by improving performance; or simplifying (an existing, bad history of difficult) maintenance; or perhaps reducing runtime memory requirements by combining two or more similar pieces of code into one. But the case needs to be made and demonstrated. First.
Opportunistic Refactoring
My take on opportunistic refactoring is different from the interpretation I read here. Rather than: I've got some time on my hands so let's go looking for something to change; I interpret it to mean: I am in this piece of code anyway -- due to a bug to fix or functionality to add or change -- and if I see something else here that can be (demonstrably) improved whilst I'm here, I then make a case for doing so.
Example 1.
First, I think your changes obfuscated rather than clarified that code:
- You introduce two extra variables.
- You didn't remove the repetition:
just substituted 3 occurrences of a meaningless variable name for 3 occurrences of a self-describing text constant; and four occurrences of another variable name, plus 4 integer constants (index numbers), that have to be visually cross-referenced with the actual, meaningful integers.
The original code is instantly clear and readable to its purpose; the refactor involves 3 levels of mental indirection to undo what you did.
The training element of introducing the programmer to map is barely justification for such changes.
And finally, if you have the time to faff around refactoring test code, you are under-employed.
Cleverness
I'm not averse to clever code; but there is nothing clever about that. It's not clearer. It's not simpler. It's not more efficient. It's not even less typing.
Just obfuscated.
What would I have done?
Depends. If it was in test code, I'd probably have insisted that the programmer who wrote it describe what it does and how it does it, in an adjacent comment; and then I'd pick holes in that description until it was fully explained in excruciating detail. Something like:
- It creates a list from the two scalars;
- Constructs a two element anonymous array;
- Compares the two scalars;
- Converts the boolean result into an index;
- Dereferences the anonymous array;
- Applies the index to it;
- Extracts the selected scalar from the anonymous array and assigns it to the result;
- Discards the anonymous array it constructed.
And I would nit-pick that description until it was precisely, & exhaustively accurate.
I'm not suggesting the above is totally accurate; but the point is that teaching programmers to understand the consequences of their choices, is far more effective than laying down thou shalt/shalt not edicts.
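That description maps onto the expression roughly like this (an illustrative annotation, not a claim about the interpreter's exact order of operations):

```perl
use strict;
use warnings;

my ( $x, $y ) = ( 5, 2 );

my $value = [ $x, $y ]      # construct a two-element anonymous array
          ->[ $y <= $x ];   # compare the scalars; the boolean result
                            # (0 or 1) becomes the index; dereference
                            # the array, select that element, and
                            # assign it; the anonymous array is then
                            # discarded

# Here $y <= $x is true (1), so index 1 selects $y: the minimum, 2.
```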
If it was production code, I'd do the same; and then require that it be changed to the ternary form.
Re: The Boy Scout Rule
by blindluke (Hermit) on Jan 25, 2015 at 11:06 UTC
Thank you for this meditation. I work on the operations side of things, and the code we do here is mostly automation and monitoring tools. Due to their focused scope, they are usually created by a single author, and maintained by a single person, usually the author of the solution himself. The approach to refactoring was once described by one of my colleagues as:
Each time I notice a nice trick, a better way of doing things, or a good module, I do a quick scan of my existing code base, to check if it can be improved by the "new thing".
That's the way it looks - the refactoring is not triggered by the passage of time, but is strictly event-based. There is no weekly code review, no monthly refactoring phase. Just noticing new, better ways of doing stuff.
This leads me to two observations: first, in an environment like this, communication is crucial. If I notice a new module, I spread the news, since it might trigger an improvement. If someone tells me about a simple data structure he used in his script, it might lead to improvements in my code. Talk about the new, better things as often as possible.
The second observation is: don't try to refactor code that you don't want to be responsible for. If you see something that seems 'wrong' to you in someone else's code, either introduce the change and take the responsibility for maintaining the script afterwards, or just make the suggestion to the person currently maintaining the script. When working in a place that has people, not teams, maintaining the scripts, it's possible that something that would be more clear and maintainable to you, will not seem that way to the owner / maintainer of the script. Convince him, or let him convince you, either way, engage in communication.
Re: The Boy Scout Rule
by flexvault (Monsignor) on Jan 26, 2015 at 10:25 UTC
Hello eyepopslikeamosquito,
I enjoyed your post and thanks for the research and references.
I would add as a reference the book "The Lean Startup" by Eric Ries.
Not because it's Perl-related (it isn't), but because it discusses in detail the conflict between "business" and "programming" value(s) to a company. Your discussion about 'bookings.com' brought this book to mind.
As a programmer, I have always wanted to get a "perfect" finished product before announcing/shipping it. The book was written by a programmer and that was how he was taught. But he discovered that the best way was to build an MVP, or Minimum Viable Product, and then test the waters, and retest again and again. He also found that because of how he was trained, he was a major stumbling block for building a successful business.
This book changed how I look at programming and business. I don't try to perfect something that nobody wants, and I suspect (IMHO) that successful software (or software-dependent) companies prefer an MVP to a programmer's perfect product. (YMMV)
Regards...Ed
"Well done is better than well said." - Benjamin Franklin
Re: The Boy Scout Rule
by choroba (Cardinal) on Jan 26, 2015 at 21:22 UTC
Interestingly, it's the middle management in this company that forces us to use the "Boy Scout Rule" (under an even crazier name). The reasons? The low management is content with "getting the job done", as they've been for the last ten years. As a result, it's almost impossible to hire a new programmer who wouldn't flee in a couple of months. The code is ugly to touch, untested, uncommented, copypasted, cargoculted, etc. The "technical debt" is so huge they're able to measure it in cash. So, our team was hired to make things move, to improve the situation, bring in new technologies, show new tricks to the old dogs (read: rewrite everything in Java). We teach them why testing is needed, what advantages git has over CVS, how code review helps all the participants. I'm still unsure we can make it; and so are my colleagues: three of my five closest coworkers have already left for greener pastures.
You can find some interesting parts in my questions and meditations in the last year. However, most of the code is, unfortunately, agonisingly boring.
This is usually the status quo that I initially walk into. The first, and perhaps the most difficult, step is to persuade management to treat a software project exactly as they would treat “the building of an automatic machine.”
Computer software is, in fact, an automaton. Acting only and completely on the yes/no instructions of which it consists, the machine is expected to correctly perform (or, correctly and meaningfully decline to perform) a real-world business process for a business consisting of humans. If the software code-base here really is as you describe it to be, the root cause of the problem lies in [the lack of] software project management. The code was “untested,” yet it was released and is in service. There is no such thing as “technical debt,” but the business cost of software failure – or even, inadequacy – is more than “measured in cash.” If the organization does not fundamentally change its approach to software building, then any “rewrite everything in” successor will merely suffer the same fate.
Usually, the root cause of the problems does not lie in the day-to-day activities of the code writers. The problem is upstream of this, in the business itself. But this is partly a social consequence of the very attitude that Joel’s article (Joel knows his audience ...) speaks to: that the software developer’s job is to take “business requirements” and “write code” for them, and that those requirements ... a wish-list, really ... can, in fact, be changed arbitrarily without harm or consequence. Strangely, no one thinks that way when designing buildings or physical machinery. Yet computer software is a machine with more degrees-of-freedom and loose-motion than any physical artifice could ever have.
If you find yourself speaking to “CVS vs. git,” then this is probably a symptom of “lack of version control and/or of release discipline.” If you are even discussing the importance of code-review and testing, it’s a symptom that these things are not burned into the organization’s process culture. Basically, that there is no process culture. A dire situation like this one must be simultaneously addressed at multiple levels: (Okay, a taste of what I do for a living ... tough love.)
- Triage: Stop the bleeding. Get control of business blood-pressure, even if you must amputate a limb of the existing app/web-site (temporarily ... or not ...) in order to stabilize what’s left. Stop all “future development,” because it will not matter in the end if a corpse [failed business ...] has yet-another half-grown arm in its [defunct ...] web site.
-
Eliminate “self-serving software excuses for” actual project management: Out with the Scrum, the Agile, the euphemisms like “technical debt.” “Everyone, please sit down.” No amount of quibbling about exactly how a group of software-writers spends their work day will, in the end, make the slightest bit of difference, as long as the teams are being asked to perform a series of tasks that are not rigorously defined before being presented to them, having first gone through an analysis & planning stage which translates business requirements into modifications to a now-moving machine otherwise known as “the application.” Yellow sticky-notes don’t solve anything, and focusing on such things is merely indicative of the root problem. Carpenters and masons and electricians do not have discretion.
- Get to “Big-D Done,” then “Move From Done to Done”:
1 = It Works, completely, perfectly, and in all cases. 0 = It Doesn’t.
“Yeah, it sucks to be binary,” but a digital computer is. The software machine consists of millions of freely-interacting moving parts, all of which are “either Yes or No.” “Either Done or Not-Done.” “Proved to be Correct, and to stay Correct in all cases. Or, not.” You can’t talk about “technical debt” because functionality is either in the product or it’s not, and the cost of any change is the same in terms of its risk to product stability. You didn’t “incur a debt to be paid-back later.” You didn’t do it. And even if you did, it’s most likely not Done.™
I could continue, but I’d have to charge you. ;-) Basically, software development fails consistently because the work actually consists of building an automated, moving, piece of machinery but nobody approaches the task in that way. Programmers focus on how they arrange their tool-boxes, what they wear to work, and where they stand at 10:00. Business owners stand at a distance, staring at metrics but without knowledge of the process. Incomplete requirements are handed down because those that supply them don’t know what is required. Changes are handed down ... but without a change-order process ... because neither party understands the cost and risk of “any change at all.” And the software machine chugs along, full of broken parts, incomplete behavior, and badly-dented covers (emitting foul smoke) which haven’t been opened in years.
The business failure, though, is not a failure of computer technology, nor of the language(s) that are used. The business failure is ... well ... a business process failure. But it is also a failure to recognize that the singular ruling constraint of this kind of project – altogether different from any other type of project – is the software machine. At the end of the day, no one is there but the machine and its user. The programmers, the managers, the testers, no one has any direct influence on what the machine does. No other type of project that has ever been “managed” has that characteristic, and “that characteristic” trumps all other concerns. It is the Nature of The Beast.
(You can find it on Kindle (Amazon) now; soon to be on Apple platforms too: Managing the Mechanism, by Vincent P. North.)
Software does get treated differently. The perception being that it is "soft", and therefore malleable.
Our requirements group prepares requirements for 3 groups: Mechanical, Electrical and Software. They are familiar with and use the processes required for mechanical and electrical specifications. But when we (the software group) ask them to follow the same processes they follow for mechanical and electrical, they claim that those processes are too slow so they would not be able to deliver specifications to us in time. So they would need to issue preliminary specifications to us - but that would be extra work, so better to keep using the current processes.
Then, the upper level managers state that the business case for using software at all is the flexibility software allows and the speed it can be developed. Therefore, if we follow processes oriented for creating hardware, we negate the business case for using software.
Of course, when we get incomplete/ambiguous/self-contradictory specifications, we still get blamed for not delivering what was wanted. And when we do ask for clarifications, we get blamed for the delays introduced by the need to respond to our questions.
So why do software developers keep developing software?
At least for my team, most of the time we have fun making our software make electro-mechanical "contraptions" do things.
Re: The Boy Scout Rule
by karlgoethebier (Abbot) on Jan 26, 2015 at 10:04 UTC
"Would this statement pass your code review?..."
I would use List::Util by all means.
See also Don't be clever.
Best regards, Karl
«The Crux of the Biscuit is the Apostrophe»
Yeah, using max() for two values is going overboard :) it's just like using bitshifting on codethinkied ... which is just like the anon-array-dereference eyepopslikeamosquito posted ... just use the ternary or if/else
Re: The Boy Scout Rule
by Anonymous Monk on Jan 25, 2015 at 19:56 UTC
What would you have done?
I wouldn't write this: $liststr ( $ports[1], $ports[2] )
I'd write this instead find => [ map { "port = $_ " } $config =~ m{(\d+)}g ],
Why? I don't see any benefit in introducing two vars and a set of "magic numbers" (yes, misusing the term, I know)
Also, I'd never let desc => "# Test 1", remain. Testing modules number tests; humans should name them, so "find the ports" or "find four ports" or "find four farts"
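A runnable sketch of that suggestion; $config and its port format are assumptions, since the original snippet isn't shown in full:

```perl
use strict;
use warnings;

# Hypothetical config text standing in for the real input.
my $config = "port=8080 port=8081 port=8082 port=8083";

# Build the find list directly from the captured digits: no
# intermediate variables, no hard-coded index numbers.
my @find = map { "port = $_" } $config =~ m{(\d+)}g;

# @find is ( 'port = 8080', 'port = 8081', 'port = 8082', 'port = 8083' )
```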
FWiW, I've heard of "Always leave the campground cleaner than you found it" but AFAIK it's not a Boy Scouts rule
Re: The Boy Scout Rule
by Anonymous Monk on Jan 25, 2015 at 15:34 UTC
Well, Joel is an experienced writer who knows how to address his audience. You are, of course, sailing on a yacht, not a dinghy, and you are intended to identify most-specifically with his “old salt.” But the points of view of the rest of the yacht’s crew, and of the millionaire owner, and of every customer that the yacht exists to serve, must also be taken into account. The fact that you are placed into a well-supported bubble also means that your point-of-view is not the only one that must be counted. And this is where a lot of the friction arises.
These days, I am mostly a consultant, mostly dealing with existing projects that were written in a variety of languages, including but not limited to Perl. These projects now have “gray-hair problems,” yet for the most part they are also still earning revenue from still-satisfied clients. The developers (who are still left), however, always want to “refactor” the code ... to make it, somehow, “–er.” They insist that it must be done; that their careers are eroding before their eyes without it. But that’s not the business’s proper point-of-view, and this they do not see. They count the business owners as being both uninformed and clueless, and often leave perfectly-good jobs for what is no good reason.
bookings.com, for example, exists for two purposes: to help travelers make bookings, and to help travel professionals receive the benefit of those bookings. The company has been financially successful, but not because of Perl and not in spite of it. Every day, it sails into waters surrounded by hungry sharks and enemy submarines. If Bookings makes the slightest mis-step, or shows the slightest sign of weakness, they will pounce. There will be no second chance.
So, a primary testing-concern for Bookings is to be able to ensure that the software does not degrade, as seen by either of its two sets of paying customers. The number-one concern is not whether the crusty-old code remains crusty (it will ...), but that it continues to earn revenue without incurring returns or loss of goodwill. “Refactoring” is merely a euphemism for “[partial or total] rewriting.” The business risk of doing any such thing is enormous, but any change whatever to software that is in service carries similarly disastrous risks. The one and only way to counter that risk is through effective present-state and future-state Testing. Testing which may or may not exist, and which, if it does exist, might not be adequate to avoid ... regression. (And it is not being hyperbolic to say that, “well, the Titanic ‘regressed.’”) There is no room for error, because the potential business risk is infinite. Those sharks and submarines won’t leave any flotsam behind.
Therefore, it is most-important to be certain that each change which is introduced into the (now-legacy) code base is clearly understood, correctly installed and then deployed, and that it is known in advance (by objective testing) that regression will not occur ... so that it never does ... so that the torpedoes always miss and the sharks remain hungry. These are procedural things, and IMHO “software testing” is especially about that procedure. Testing is the minesweepers and anti-submarine craft which always sail in front of the fleet, and you can be quite sure they’re not just sitting on the foredeck, looking out at the surface of the water and saying self-confidently, “I don’t see anything.”
... and to anyone who says that PerlMonks does not log you out and allow your post to go as Anonymous Monk, leaving you with no ownership and no recourse ... well, it just did. Again. :-[ The post to which this is a reply, used to be my post. So, if you like, here’s your substitute downvote-target.
Someone else suggested this already, I repeat: a good software dev would be able to write up a bug report for this showing how to repeat it with exact steps and maybe Network/HTML trace from the debug console of any modern browser. It would be easy enough to turn that on persistently until the bug happened again. Then submit the relevant portion of the log (with passwords scrubbed if present) to pmdev.
I have no idea if this is a real bug or just user/user-env error and neither do you. Since it’s never happened to me and I’ve never seen anyone else mention it, I lean toward the latter.
So where are the technical details, sundialsvc4? What good is it to say "it happened again, I'm special, downvote away" if you're really interested in correcting the problem?
Re: The Boy Scout Rule
by sundialsvc4 (Abbot) on Jan 26, 2015 at 14:12 UTC
My take on opportunistic refactoring is different from the interpretation I read here. Rather than: I've got some time on my hands so let's go looking for something to change; I interpret it to mean: I am in this piece of code anyway -- due to a bug to fix or functionality to add or change -- and if I see something else here that can be (demonstrably) improved whilst I'm here, I then make a case for doing so.
Hear, hear!
My point-of-view is admittedly altered by being the consultant who is called-in to (re-)evaluate present state and to (re-)plan future state on projects which are presently “on fire,” or, as the case may be, “smoking [ruins].” One of the things that the client asks is ... “can we simply get back to the stable-state where we used to be, and proceed forward (older but wiser) from there?” In order to give a meaningful answer to the question, I look at [look for ...] the change-order log and the associated [or try somehow to associate ...] git or svn commit and branch history.
What I find, way-y-y-y too often, is that there really is no correspondence between the two. “A single commit” does not correspond to “the remedy to that service-order, no more and no less.” Far too often, the developer found something that smelled bad [to him ...] and “simply fixed it,” and didn’t tell anyone. Didn’t structure the change so that it could be backed out. And didn’t update the validation test-suite (which should have detected any regression), because there wasn’t one. The (now mostly-departed) team gave only lip-service to testing because it took time away from the secret Ruby re-write making Kewel New Fee-Churs. In any case, no management was guarding the hen-house. Management simply decided that the programmers were un-manageable anyway, trusted the programmers to know what they were doing, and did not realize that those programmers were flying a 747 by the seat of their pants ... making it up as they went along ... and didn’t know what they were doing, either.
Testing, to me, is simply one of several expressions of discipline. Way, way too many programmers out there have no discipline at all. They were taught “how to write source-code,” not how to build robust, maintainable, software machines that must safely carry passengers without a pilot or co-pilot on board. To their training and experience, “source code” is “the end,” not “the means to a different end.”
And, if you asked me where the “fabled disconnect” comes, between programmers and management, that would be my reply. To a classic software developer, everything is software.
|
Downvoted!
Not because I disagree with you; but because of your inane, facile, puerile, snide, underhand and utterly deliberate practice of posting a reply to a particular node; as a response to {some other} randomly chosen node.
What the .... do you think it achieves? (Paraphrase: Why are you such a deliberate, willful moron?)
With the rise and rise of 'Social' network sites: 'Computers are making people easier to use everyday'
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
It achieves the goal of making you expend energy against the troll, while they sit back and admire their handiwork :)
The post is more visible because replies below a given depth are hidden. Being everywhere is the best you can do to sell your work, when the content of those posts should close every door.