There's more than one way to do things PerlMonks

### Re^2: Speeding up ... (O(N) determination of point in polygon regardless of complexity)

by punkish (Priest)
on Aug 29, 2006 at 03:16 UTC ( #570083=note )

I have been thinking about BrowserUk's proposed solution, and have concluded this to be one of the most innovative strategies I have seen in a long time. Using a PNG essentially as a lookup table... even though I haven't tried the solution yet, I have to say "wow! Bravo!"

That said, I have a few quibbles, and possible mods --

As noted in the solution, a single image to cover the entire country would be simply too big, and would have to be chopped up. That creates the problem of juggling images in memory, as there is really no easy way to sort the points and process them in image-based batches.

The pixels in the image can be mapped to the smallest unit of the coordinate, but fractional units for a point like 1.24435, 455.22452 get all fudged up.

If I could figure out how to generate all the pixel coords for a polygon, I wouldn't even need GD. I could simply store each pixel (something akin to an INTEGER UNIQUE NOT NULL would work just fine) and its associated attribute in a Berkeley DB hash. The advantage of BDB would be that I wouldn't have to worry about image file sizes and chunking up images as in the GD approach.
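A sketch of that Berkeley DB idea, with a plain hash standing in for the tied DB (the zip code value, the pixel coords, and the `key` helper are all invented for illustration):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# A plain hash keyed by a packed (x, y) integer pair. To persist it as a
# Berkeley DB file you could tie it instead, e.g.:
#   use DB_File;
#   tie my %zip, 'DB_File', 'pixels.db';
my %zip;

sub key { pack 'NN', @_ }    # (x, y) -> fixed 8-byte key

# These pixels and the zip code are invented; real entries would come
# from rasterising each polygon and storing its attribute per pixel.
$zip{ key( 10, 20 ) } = '57350';
$zip{ key( 11, 20 ) } = '57350';

# O(1) lookup once a point has been truncated to pixel units
print $zip{ key( 10, 20 ) } // 'unknown', "\n";    # 57350
```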

So, does anyone know of a way to generate all the coord pairs for a given polygon and a resolution?
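For what it's worth, generating every integer coord pair inside a polygon doesn't strictly need a graphics library. Here's a minimal pure-Perl even-odd scanline fill -- a sketch only, assuming the coordinates have already been scaled to integer pixel units, and `polygon_pixels` is a made-up name:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Even-odd scanline fill: return every integer (x, y) pair inside a
# polygon. $poly is an arrayref of [x, y] vertices, already scaled to
# integer pixel units (e.g. decimal degrees * 10_000, truncated).
sub polygon_pixels {
    my ($poly) = @_;
    my @ys = map { $_->[1] } @$poly;
    my ( $ymin, $ymax ) = ( sort { $a <=> $b } @ys )[ 0, -1 ];

    my @pixels;
    for my $y ( $ymin .. $ymax ) {
        my @xs;
        for my $i ( 0 .. $#$poly ) {
            my ( $x1, $y1 ) = @{ $poly->[$i] };
            my ( $x2, $y2 ) = @{ $poly->[ ( $i + 1 ) % @$poly ] };

            # half-open crossing test: each edge crosses a scanline at
            # most once, so pixels on a shared border land in exactly
            # one polygon (and horizontal edges are skipped safely)
            next unless ( $y1 <= $y && $y2 > $y )
                     || ( $y2 <= $y && $y1 > $y );
            push @xs, $x1 + ( $y - $y1 ) / ( $y2 - $y1 ) * ( $x2 - $x1 );
        }
        @xs = sort { $a <=> $b } @xs;
        while ( @xs >= 2 ) {
            my ( $xa, $xb ) = splice @xs, 0, 2;
            push @pixels, map { [ $_, $y ] } int($xa) .. int($xb);
        }
    }
    return \@pixels;
}

my $px = polygon_pixels( [ [ 0, 0 ], [ 4, 0 ], [ 4, 4 ], [ 0, 4 ] ] );
print scalar @$px, " pixels\n";    # 20 (the top edge falls outside, by design)
```

The half-open convention matters here: adjacent zip polygons share edges, and you want each border pixel assigned to one polygon, not two.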

By the way, I tested my approach posted in the OP that started this thread. Not including the data prep, it took me about 3.25 hours to update 5.25 million points against 210k polys. Nevertheless, compliments to BrowserUk for a truly innovative solution.

--

when small people start casting long shadows, it is time to go to bed

Replies are listed 'Best First'.
Re^3: Speeding up ... (O(N) determination of point in polygon regardless of complexity)
by BrowserUk (Pope) on Aug 29, 2006 at 04:14 UTC
as there is really no easy way to sort the points and process them in image-based batches.

Assuming your zipcode polygons are obtained from somewhere like here, each of the .e00 files contains the polygons for the zipcodes in each state. By building one image per state you will have 48/50 images and 48/50 bounding boxes. Pre-selecting or ordering the points by those bounding boxes (much as you currently do using the bounding box of the individual polys) is simple, and as there are many fewer states than zips, much quicker.

If the points originate in a DB, then something like

```
select x, y from points
 where x >= state.minx and x <= state.maxx
   and y >= state.miny and y <= state.maxy;
```

would allow you to load the state images one at a time and select only the points (roughly) within state for lookup.
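If the points live in memory instead, the same pre-selection is a simple bounding-box filter. A sketch in Perl -- the state boxes and points below are rough, made-up illustrative values:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Batch points by state bounding box before loading each state image.
# Boxes and points are rough, invented values (decimal degrees).
my %bbox = (
    TX => { minx => -106.65, maxx => -93.51, miny => 25.84, maxy => 36.50 },
    OK => { minx => -103.00, maxx => -94.43, miny => 33.62, maxy => 37.00 },
);
my @points = ( [ -97.74, 30.27 ], [ -97.51, 35.47 ] );

my %batch;
for my $p (@points) {
    my ( $x, $y ) = @$p;
    for my $st ( sort keys %bbox ) {
        my $b = $bbox{$st};
        push @{ $batch{$st} }, $p
            if $x >= $b->{minx} && $x <= $b->{maxx}
            && $y >= $b->{miny} && $y <= $b->{maxy};
    }
}

# The second point lands in both boxes -- the bounding-box overlap case
print scalar @{ $batch{TX} }, " point(s) in the TX box\n";    # 2
```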

The pixels in the image can be mapped to the smallest unit of the coordinate, but fractional units for a point like 1.24435, 455.22452 get all fudged up.

Taking Texas as an example (guessing that it is the largest state from looking at a map), its width covers 13 degrees of longitude and 773 miles. That means each arc minute represents ~1 mile and each arc second ~88 feet. In decimal degrees, each 0.0001 of a degree represents ~32 feet. (Check my math, it's getting late!)

So, if you represent the biggest state by an image 13000x11000 pixels, and multiply all the coordinates in your polys by 10,000 and truncate, each pixel will represent ~32 x 32 feet. The image takes under 600MB when loaded into memory. If you have fully populated hardware, you could perhaps increase your resolution by a factor of 10 if need be. Remember that the majority of states are considerably smaller.
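The multiply-and-truncate step might look like this -- a sketch only, where the south-west corner values are rough Texas-ish numbers, not real data:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Map a (lon, lat) point to integer pixel coords in a per-state image.
# MINX/MINY are an invented south-west corner, roughly Texas.
my ( $MINX, $MINY ) = ( -106.65, 25.84 );
my $SCALE = 10_000;    # 0.0001 degree per pixel, ~32 ft at this latitude

sub to_pixel {
    my ( $lon, $lat ) = @_;
    return ( int( ( $lon - $MINX ) * $SCALE ),
             int( ( $lat - $MINY ) * $SCALE ) );
}

my ( $px, $py ) = to_pixel( -97.7431, 30.2672 );    # a point near Austin
print "pixel ($px, $py)\n";
```

Note that truncation (rather than rounding) must be applied consistently to both the polygon vertices and the lookup points, or points near pixel boundaries will land one cell off.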

There will obviously be some overlap between bounding boxes, which means some points will be looked up in 2 or 3 maps, but it is convenient that, in large part, most state boundaries follow lines of longitude and latitude.

There is even a module Geo::E00 for processing these files :)

Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
Lingua non convalesco, consenesco et abolesco. -- Rule 1 has a caveat! -- Who broke the cabal?
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.
Re^3: Speeding up ... (O(N) determination of point in polygon regardless of complexity) (Bresenham)
by tye (Sage) on Aug 29, 2006 at 03:44 UTC

Converting a polygon to pixels is something that graphics libraries are highly optimized at. So I'd go with the graphics library approach.

It may be as simple as drawing a border around each polygon in white. If your lookup finds a white pixel, then look at nearby pixels to find the colors of polygons to test the 'long' way. Then you don't have to have a really high-resolution image.

If you've got very thin strips, then those may require special handling. Assigning one extra color to each polygon that contains such strips should cover it, though (that color means: try that polygon, but also try the polygons represented by the colors of nearby pixels).

An alternate approach would simply be to draw the polygons without borders (being sure to draw any really thin strips last) and always look at the 9 closest pixels instead of just 1 pixel.
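A sketch of that 9-pixel lookup, with a plain hash standing in for the GD image (the colour indexes and coordinates here are invented):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# "$img{x,y} = colour index" stands in for a GD image's getPixel.
my %img = ( '5,5' => 1, '6,5' => 2 );    # two polygons meeting near (5,5)

# Collect the distinct colours in the 3x3 neighbourhood of a pixel;
# each colour names a polygon to test the 'long' way.
sub candidate_polys {
    my ( $x, $y ) = @_;
    my %seen;
    for my $dx ( -1 .. 1 ) {
        for my $dy ( -1 .. 1 ) {
            my $c = $img{ ( $x + $dx ) . ',' . ( $y + $dy ) };
            $seen{$c} = 1 if defined $c;
        }
    }
    return sort keys %seen;
}

print join( ' ', candidate_polys( 5, 5 ) ), "\n";    # 1 2
```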

- tye
