http://www.perlmonks.org?node_id=927683


in reply to An exploration of higher-level-language approaches to the "First all-zero row in an NxN matrix" problem (goto Considered Harmful)

So I walked away from my meditation for a few minutes and thought of another solution. Let's assume that the matrix has a fairly random distribution of 0's and 1's, and consequently that an all-zero row is not a frequent occurrence. The solutions I posted above attempt to reject a row as early as possible to take advantage of this.
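For reference, here is a minimal sketch of that row-by-row shape (a hypothetical find_first_rowwise, not the exact code from the nodes above, just the general idea): bail out of a row at the first non-zero, and return the first row that survives a full scan.

use strict;
use warnings;
use v5.12;

# Row-by-row search: give up on a row at the first non-zero element,
# and return the index of the first row that survives a full scan.
sub find_first_rowwise {
    my $matrix = shift;
    ROW: for my $row ( 0 .. $#{ $matrix } ) {
        for my $element ( @{ $matrix->[$row] } ) {
            next ROW if $element != 0;    # reject this row early
        }
        return $row;                      # every element was zero
    }
    return undef;                         # no all-zero row found
}

my @matrix = ( [ 1, 0, 1 ], [ 0, 0, 0 ], [ 0, 1, 0 ] );
say find_first_rowwise( \@matrix ) // 'none';    # prints 1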

There's another option, and that's to test each row's column 0 on the first iteration. On the second iteration, only test column 1 for those rows that passed the first test. Then for column 2, only test the remaining rows that are still with us. ...and so on.

I do think that such a solution would require a second array of row numbers to test on the next go-around. That one-dimensional array would start out N elements long, and after each pass it would hold only the row numbers from the primary matrix that are still in the running.
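To make that concrete: with the 5x5 sample matrix used in the update below, the candidate list would shrink from (0, 1, 2, 3, 4) to (1, 2, 3, 4) after scanning column 0, to (2, 3, 4) after column 1, to (3, 4) after column 2, stay at (3, 4) after column 3, and finally shrink to (4) after column 4, leaving row 4 as the first (and only) all-zero row.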

Computationally, I don't think such an algorithm would gain anything in the average case. If the original algorithms already eliminate each row as quickly as possible, turning the test on its side while still rejecting rows as quickly as possible would only add programming complexity and an additional memory requirement, without gaining anything in either the average or the worst case.

When I have a moment I'll implement it and add it as an update to this node. At the moment my wife and kids are wondering why I'm holding them up from their outing. ;)

Update: As promised, here is a version of the search for the first all-zero row that walks the matrix column by column rather than row by row, dropping a row from all subsequent passes as soon as a non-zero is found in it during the current column scan. The worst case is still O(n) in the size of the data set, and I can't think of a situation where this version would yield a performance gain. We also have the misfortune of keeping track of the surviving row numbers in a separate array. So it's really not a win in any regard, but I thought the concept was sort of cool anyway, so here it is:

use strict;
use warnings;
use v5.12;

my @array = (
    [ qw/ 1 0 1 0 1 / ],
    [ qw/ 0 1 0 1 0 / ],
    [ qw/ 0 0 1 1 0 / ],
    [ qw/ 0 0 0 0 1 / ],
    [ qw/ 0 0 0 0 0 / ],
);

if( defined( my $result = find_first_sideways( \@array ) ) ) {
    say "First row containing all zeros is $result.";
}
else {
    say "No rows found to contain all zeros.";
}

sub find_first_sideways {
    my $matrix = shift;

    # @scan holds the row numbers still in the running; $depth is the
    # column currently being examined.
    my @scan  = ( 0 .. $#{ $matrix } );
    my $depth = 0;

    # On each pass, keep only the rows whose element in the current
    # column is zero, then move on to the next column.
    while( @scan > 0 and $depth < @{ $matrix->[0] } ) {
        @scan = grep { $matrix->[$_][$depth] == 0 } @scan;
        $depth++;
    }

    # Any row that survived every column scan is all zeros; the first
    # such row (if any) is the answer.
    if( defined( $scan[0] ) ) {
        return $scan[0];
    }
    else {
        return undef;
    }
}
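Running this against the sample matrix above should print "First row containing all zeros is 4.", since the last row is the only one that is all zeros.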

Dave