Re: An exploration of higher-level-language approaches to the "First all-zero row in an NxN matrix" problem (goto Considered Harmful)
by davido (Archbishop)
on Sep 24, 2011 at 20:44 UTC
So I walked away from my meditation for a few minutes and thought of another solution. Let's assume that the matrix has a pretty random distribution of 0's and 1's, and consequently, that an all-zero row is not a frequent occurrence. The solutions I posted above attempt to reject a row as early as possible to take advantage of this possibility.
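For reference, a minimal sketch of that early-rejection idea (my original code is above; this is a simplified stand-in, and the sub name is my own invention here):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Row-by-row scan: abandon each row the moment a non-zero is seen.
# Returns the index of the first all-zero row, or undef if none exists.
sub first_zero_row {
    my ($matrix) = @_;
    ROW: for my $r ( 0 .. $#$matrix ) {
        for my $val ( @{ $matrix->[$r] } ) {
            next ROW if $val;    # non-zero: reject this row immediately
        }
        return $r;               # every element in the row was zero
    }
    return undef;
}

my @matrix = (
    [ 1, 0, 1 ],
    [ 0, 0, 0 ],
    [ 0, 1, 0 ],
);
print first_zero_row( \@matrix ), "\n";    # prints 1
```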
There's another option, and that's to test each row's column 0 on the first iteration. On the second iteration, only test column 1 for those rows that passed the first test. Then for column 2, only test the remaining rows that are still with us. ...and so on.
I do think that such a solution would require a second array of row numbers to test on the next go-around. That single-dimensional array would start out N rows long, and after each pass would contain fewer elements, each containing a row number from the primary matrix that is still in the running to be tested.
Computationally, I don't think such an algorithm would gain anything in the average case. If the original algorithms already eliminate each row as quickly as possible, then turning the test on its side while still rejecting rows as quickly as possible only adds programming complexity and an additional memory requirement, without gaining anything in either the average or the worst case.
When I have a moment I'll implement it and add it as an update to this node. At the moment my wife and kids are wondering why I'm holding them up from their outing. ;)
Update: As promised, here is a version of the search for the first all-zero row in a matrix which walks the matrix column by column rather than row by row, eliminating from subsequent passes every row where a non-zero was found in the current column scan. The worst case is still O(n) in the size of the data set (linear in the number of elements). In fact, I can't think of a situation where there would be a performance gain, and we have the added misfortune of keeping track of row numbers in a separate array. So it's really not a win in any regard. But I thought the concept was sort of cool anyway, so here it is:
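(Since the listing may not render here, the following is a compact sketch of the same column-by-column elimination; the sub and variable names are my own for illustration:)

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Column-by-column scan: @candidates holds the row numbers still in the
# running. On each column pass, any candidate row with a non-zero in
# that column is dropped from further consideration.
sub first_zero_row_by_col {
    my ($matrix) = @_;
    my @candidates = 0 .. $#$matrix;
    for my $col ( 0 .. $#{ $matrix->[0] } ) {
        @candidates = grep { !$matrix->[$_][$col] } @candidates;
        return undef unless @candidates;    # every row has been rejected
    }
    return $candidates[0];    # first (lowest-numbered) surviving row
}

my @matrix = (
    [ 1, 0, 1 ],
    [ 0, 0, 0 ],
    [ 0, 0, 0 ],
);
print first_zero_row_by_col( \@matrix ), "\n";    # prints 1
```

Note the trade-off described above: the grep over @candidates is the extra single-dimensional array of row numbers, shrinking after each pass.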