If this is a task you're going to do repeatedly, and your polygon list is relatively static, then you should consider redefining the problem. I suspect that the loop and indexing operations may be consuming a considerable amount of your time, so maybe you can change the problem to unroll the loops.
For example, instead of storing the polygons as they're defined, break the polygons into horizontal strips with P1 = lower-left, P2 = upper-left, P3 = upper-right, and P4 = lower-right. In the case that the top or bottom edge is a point, you'll have the same thing, where P2 == P3 or P1 == P4, respectively. Then you'll insert these simpler polygons (quadrilaterals, nominally) into the database, so you can use a simpler test. Thus, a polygon is broken into a handful of these strips, one per horizontal band between vertex heights.
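As a sketch of that decomposition in Python (assuming convex polygons, so each horizontal band is bounded by a single left and a single right edge; concave polygons need more care):

```python
def split_into_strips(vertices):
    """Split a convex polygon (list of (x, y) vertices in order) into
    horizontal strips, each returned as (p1, p2, p3, p4) =
    (lower-left, upper-left, upper-right, lower-right)."""
    def x_range_at(y):
        # Collect x where edges cross height y; for a convex polygon
        # the extremes bound the cross-section at that height.
        xs = []
        n = len(vertices)
        for i in range(n):
            (x0, y0), (x1, y1) = vertices[i], vertices[(i + 1) % n]
            if min(y0, y1) <= y <= max(y0, y1):
                if y0 == y1:                       # horizontal edge
                    xs.extend([x0, x1])
                else:
                    t = (y - y0) / (y1 - y0)
                    xs.append(x0 + t * (x1 - x0))
        return min(xs), max(xs)

    strips = []
    levels = sorted({y for _, y in vertices})      # distinct vertex heights
    for y_lo, y_hi in zip(levels, levels[1:]):
        lo_left, lo_right = x_range_at(y_lo)
        hi_left, hi_right = x_range_at(y_hi)
        strips.append(((lo_left, y_lo), (hi_left, y_hi),
                       (hi_right, y_hi), (lo_right, y_lo)))
    return strips
```

When the top of a band narrows to a point (e.g. a triangle's apex), the returned strip naturally has P2 == P3.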
Since we have quadrilaterals (and triangles, sort-of), we can simplify your subroutine into something resembling:
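A minimal sketch in Python, exploiting the horizontal top and bottom edges; the name and signature here are illustrative, not necessarily what your actual subroutine looks like:

```python
def point_is_in_quadrilateral(px, py, p1, p2, p3, p4):
    """Test whether (px, py) lies inside a horizontal strip with
    p1 = lower-left, p2 = upper-left, p3 = upper-right, p4 = lower-right.
    The bottom edge (p1-p4) and top edge (p2-p3) are horizontal."""
    y_bottom, y_top = p1[1], p2[1]
    if not (y_bottom <= py <= y_top):
        return False                  # quick reject on the y range
    if y_top == y_bottom:             # degenerate: strip collapsed to a line
        return min(p1[0], p4[0]) <= px <= max(p1[0], p4[0])
    # Interpolate the left and right edge x-coordinates at height py.
    t = (py - y_bottom) / (y_top - y_bottom)
    x_left = p1[0] + t * (p2[0] - p1[0])
    x_right = p4[0] + t * (p3[0] - p4[0])
    return x_left <= px <= x_right
```

The triangle cases (P2 == P3 or P1 == P4) fall out of the same interpolation with no special handling.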
As you can see, by redefining the problem and using quadrilaterals with parallel top and bottom edges, we get the following benefits:

- the point-in-shape test reduces to one range check and two interpolations;
- each strip has a smaller bounding box, so the prefilter produces fewer false hits;
- every shape has a fixed number of vertices, which simplifies storage.
So we're simplifying the code by increasing our polygon count. Even with the increased number of shapes in your table, I'm guessing the select statement will run at about the same speed: yes, there'll be more polygons, but they'll have smaller bounding boxes, so there'll be fewer false hits. I also think (without testing!) that _pointIsInQuadrilateral should be considerably faster than a general point-in-polygon test.
This is only one way you could redefine the problem. After studying it, you might find a different redefinition that speeds things up even more. If you try this approach, please let me know how it turns out!
Note: One other advantage is that with a fixed number of points per polygon, you can store the point coordinates in the table directly as numbers, rather than as a list of coordinates in text. Omitting the text-to-number parsing may provide another speedup....
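For instance, with numeric columns the whole lookup might look something like this (a Python/SQLite sketch; the `strips` table and its column names are hypothetical, not your actual schema):

```python
import sqlite3

# One row per quadrilateral strip, with its bounding box for the prefilter
# and the four corner points stored as plain numeric columns.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE strips (
        polygon_id INTEGER,
        min_x REAL, min_y REAL, max_x REAL, max_y REAL,  -- bounding box
        x1 REAL, y1 REAL,  -- P1 = lower-left
        x2 REAL, y2 REAL,  -- P2 = upper-left
        x3 REAL, y3 REAL,  -- P3 = upper-right
        x4 REAL, y4 REAL   -- P4 = lower-right
    )""")
# One rectangular strip of polygon 7 spanning (0,0)-(3,2).
conn.execute("INSERT INTO strips VALUES (7, 0,0,3,2, 0,0, 0,2, 3,2, 3,0)")

def polygons_containing(px, py):
    """Cheap bounding-box prefilter in SQL, exact strip test in code."""
    rows = conn.execute(
        "SELECT polygon_id, x1,y1, x2,y2, x3,y3, x4,y4 FROM strips "
        "WHERE ? BETWEEN min_x AND max_x AND ? BETWEEN min_y AND max_y",
        (px, py))
    hits = set()
    for pid, x1, y1, x2, y2, x3, y3, x4, y4 in rows:
        if y1 <= py <= y2:
            t = (py - y1) / (y2 - y1) if y2 != y1 else 0.0
            # Interpolate left/right edges at height py; no text parsing needed.
            if x1 + t * (x2 - x1) <= px <= x4 + t * (x3 - x4):
                hits.add(pid)
    return hits
```

The coordinates come back from the database as numbers, ready for arithmetic, so the per-row cost is just a few comparisons and multiplications.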