|P is for Practical|
I think that this may actually be a live sighting of the mythical "PM XY problem".
Is it fair to sum up your question as:
You don't want to use the straightforward linear calculation because it will be too slow, but you don't want to build the obvious tree structure because it takes too much memory. Is there some magical third way?
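Just to make sure we're talking about the same trade-off, here's a hedged sketch of the two approaches as I understand them (data and function names are illustrative, not from your actual problem): a linear scan costs O(m−n) per query with no extra memory, while a sparse table answers any range-minimum query in O(1) after O(n log n) preprocessing time and memory — which is presumably the memory blow-up you're worried about at 300GB scale.

```python
# Linear scan: O(m - n) per query, no extra memory beyond the data.
def rmq_linear(data, n, m):
    """Index of the minimum element in data[n:m] by direct scan."""
    best = n
    for i in range(n + 1, m):
        if data[i] < data[best]:
            best = i
    return best

# Sparse table: O(n log n) memory, O(1) per query after preprocessing.
def build_sparse_table(data):
    """table[k][i] = index of the minimum of data[i : i + 2**k]."""
    length = len(data)
    table = [list(range(length))]          # level 0: windows of width 1
    k = 1
    while (1 << k) <= length:
        prev = table[k - 1]
        half = 1 << (k - 1)
        row = []
        for i in range(length - (1 << k) + 1):
            a, b = prev[i], prev[i + half]
            row.append(a if data[a] <= data[b] else b)
        table.append(row)
        k += 1
    return table

def rmq_sparse(data, table, n, m):
    """Index of the minimum element in data[n:m], via two
    overlapping power-of-two windows that cover [n, m)."""
    k = (m - n).bit_length() - 1
    a, b = table[k][n], table[k][m - (1 << k)]
    return a if data[a] <= data[b] else b
```

The point being: whether either extreme is acceptable depends entirely on the query mix, hence the questions below.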
You've described the query you need to satisfy as: given a range of positions (n,m), where along it does the minimum lie? How many of those queries do you have to satisfy?
Are they effectively random queries, i.e. random start position and random length?
Or are you calculating one (or a few) length(s) for all start positions?
Or all lengths for all start positions?
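The reason the distinction matters: if it turns out to be the second case (one fixed window length over all start positions), there's a well-known O(n) answer that needs neither a big precomputed tree nor repeated scans — the monotonic-deque sliding-window minimum. A sketch, assuming plain numeric values rather than whatever your actual bit-vector encoding is:

```python
from collections import deque

def sliding_window_min(data, width):
    """Minimum of every window data[i : i + width], in O(len(data)) total.

    The deque holds indices whose values are strictly increasing;
    the front is always the index of the current window's minimum.
    """
    dq = deque()
    out = []
    for i, v in enumerate(data):
        while dq and data[dq[-1]] >= v:   # drop candidates dominated by v
            dq.pop()
        dq.append(i)
        if dq[0] <= i - width:            # front has fallen out of the window
            dq.popleft()
        if i >= width - 1:
            out.append(data[dq[0]])
    return out
```

Each index is pushed and popped at most once, so the whole pass is linear regardless of window width — which is why the query pattern, not just the data size, determines the right structure.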
You mentioned 300GB. Is that a single huge bit-vector, or many short vectors?
Basically, what I'm getting at here is that a clearer description of what this data is, how big it is, the nature of the required processing, etc., rather than just your current approach to this very specific problem, might trigger a different or more innovative approach to the overall problem.
Examine what is said, not who speaks -- Silence betokens consent -- Love the truth but pardon error.
"Science is about questioning the status quo. Questioning authority".
In the absence of evidence, opinion is indistinguishable from prejudice.