The stupid question is the question not asked
In general, it's a good idea to put your error handling as low as possible.
The higher, more abstract layers of your code should be able to trust the lower layers. You shouldn't have to check for buffer overflows, data consistency, or other such problems in the "okay, now I want to get something done" parts of your code. Unfortunately, most programmers are used to writing incomplete functions, so they don't know how to make their low-level code trustworthy.
'Completeness' is a mathematical idea (mathematicians call it closure): the output of a function belongs to the same set as its inputs. Addition is complete across the positive integers: the sum of any two positive integers is also a positive integer. Subtraction, on the other hand, is not complete across the positive integers: "2 - 5" is not a positive integer.
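To see the idea in code: plain subtraction escapes the positive integers, but a "saturating" variant stays inside the non-negative integers, making the operation complete over that set. A minimal sketch; the name `sat_sub` is hypothetical:

```python
def sat_sub(a: int, b: int) -> int:
    """Subtraction made complete over the non-negative integers:
    instead of going negative, the result saturates at zero."""
    return a - b if a >= b else 0

# Plain subtraction leaves the set: 2 - 5 is not a non-negative integer.
# The saturating version never does:
print(sat_sub(2, 5))  # 0
print(sat_sub(7, 3))  # 4
```

Callers of `sat_sub` never need to guard against a negative result, because the function's output is guaranteed to land back in the set it started from.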
Broadly speaking, incomplete functions will bite you in the ass. Every time you leave something out, you have to write nanny-code somewhere else to make sure the function is actually returning a good value. That's difficult, often redundant, wasteful, and confusing.
It's a much better idea to design your code with complete functions at every level. Instead of allowing foo() to return any integer and then checking the result to make sure it falls between 0 and N, you break the problem into two parts: the low-level function guarantees that what it returns falls between 0 and N (substituting a safe default when it can't), and it reports whether that value can be trusted; the higher-level code then checks only that report.
Yeah, you're still writing some output-testing code in the higher-level functions, but the code itself is much simpler. Instead of having to range-check the value, you just have to check 'is-valid' as a boolean. The low-level code does all the heavy lifting on deciding whether the output can be trusted. And in many cases, you can find default error values that work just fine in the main-line higher-level code.
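One way that split might look in code, as a sketch with hypothetical names (`foo_raw` for the untrusted computation, `foo` for the complete wrapper, `N` for the upper bound):

```python
N = 100

def foo_raw(x: int) -> int:
    # The raw, untrusted computation: may produce anything.
    return x * x - 7

def foo(x: int) -> tuple[int, bool]:
    """Complete wrapper: always returns (value, is_valid).
    The value is guaranteed to lie in [0, N]; is_valid says whether
    it came from the real computation or is a default error value."""
    v = foo_raw(x)
    if 0 <= v <= N:
        return v, True
    return 0, False  # default error value that's still in range

# The higher-level code does no range-checking at all --
# it only looks at a boolean:
value, ok = foo(5)
if not ok:
    print("using default value")
print(value)
```

The range logic lives in exactly one place, inside `foo`, and every caller gets a value that is safe to use either way, which is what lets the default-error-value trick work in the main-line code.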
When you write code that way, you end up with each level carrying only the error-handling that makes logical sense at that level, and just enough error-handling to pass usable output along to the next layer up.