|The stupid question is the question not asked|
There are some data structures that map naturally onto arrays, where operations on the data structure involve arithmetic on the index. In many such cases the index arithmetic is simpler and less error-prone with zero-based indexes. (I can't think of a particular example right now, sorry, but I remember coming across several.)
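One classic case of this kind (my illustration, not part of the original answer) is flattening a 2D grid into a 1D array: with 0-based indexes the mapping is a clean multiply-and-add, while 1-based indexes would need extra `-1`/`+1` correction terms.

```python
# Sketch: row-major storage of a ROWS x COLS grid in a flat list.
# With 0-based indexes: flat_index = row * COLS + col, no corrections.
ROWS, COLS = 3, 4
grid = [row * COLS + col for row in range(ROWS) for col in range(COLS)]

def at(flat, row, col, cols=COLS):
    """0-based access into the flattened grid."""
    return flat[row * cols + col]

print(at(grid, 2, 3))  # last cell -> 2 * 4 + 3 = 11
```

With 1-based indexes the same mapping would read `(row - 1) * cols + (col - 1) + 1`, which is exactly the kind of hassle meant above.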
And if you then find an algorithm where it's the other way round, you can still leave the first item empty and work as if you had 1-based indexes, with minimal overhead. Doing it the other way round (i.e. emulating 0-based indexes with 1-based indexes) would require an arithmetic operation on every array access.
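To sketch that trick (my example, under the usual textbook conventions): a binary heap is often presented with 1-based index math, where the children of node `i` sit at `2*i` and `2*i + 1`. In a 0-based language you can keep those formulas unchanged by simply wasting slot 0.

```python
# Slot 0 is left unused; real items occupy indexes 1..5,
# so the textbook 1-based heap formulas apply as-is.
heap = [None, 10, 20, 30, 40, 50]

def children(i):
    """1-based heap arithmetic, unchanged from the textbook form."""
    return 2 * i, 2 * i + 1

left, right = children(2)
print(heap[left], heap[right])  # children of 20 -> 40 50
```

The only cost is one unused array slot, versus an extra subtraction on every access if you adjusted the formulas instead.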
But in the end it's a topic about which you can have very strong opinions, and no amount of arguing will change your mind. Like coding style.
Update: I thought about it a bit more, and came to the conclusion that more of the integer operations performed by the CPU stay within range if your numbers start from 0 rather than from 1:
| Operation | Resulting range if numbers start from 0 | Resulting range if numbers start from 1 | Comment |
|-----------|------------------------------------------|------------------------------------------|---------|
| `*`       | 0..Inf                                   | 1..Inf                                   | No difference |
| `/`       | 0..Inf (*)                               | 0..Inf                                   | 0 better |
| `+`       | 0..Inf                                   | 2..Inf                                   | 0 better |
| `-`       | -Inf..Inf                                | -Inf..Inf                                | No difference |
| `%`       | 0..Inf (*)                               | 0..Inf                                   | 0 better |
| `**`      | 0..Inf (*)                               | 1..Inf                                   | No difference |

(*): some operations disallowed
So you can see that if your numbers start from 0, more operations have identical domain (for the left operand) and codomain, at the cost of a few operations being disallowed when 0 is the right operand (like 1/0 and 0**0). IMHO that's a plus for choosing 0 as the start.
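A quick brute-force sanity check of one row of the table (my sketch, small operand values only): addition of numbers starting from 0 produces results back in 0..Inf, while starting from 1 shifts the codomain up to 2..Inf.

```python
# Enumerate sums of small operands drawn from each starting range.
vals0 = range(0, 6)  # numbers starting from 0
vals1 = range(1, 6)  # numbers starting from 1

sums0 = {a + b for a in vals0 for b in vals0}
sums1 = {a + b for a in vals1 for b in vals1}

print(min(sums0), min(sums1))  # -> 0 2
```

That is the "0 better" entry for `+`: with 0-based numbers, the result of an addition is again a valid 0-based number, with no gap at the bottom of the range.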
It also feels nice to have the neutral element of addition inside the range.