I think we've fallen into a slightly different mode where our project managers look at the past data and apply their own factor to each developer's estimates.
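For concreteness, here's a minimal sketch of what that kind of correction factor might look like. Everything here is hypothetical (the function names, the sample data, the choice of median): the idea is just that a PM scales a developer's new estimate by how far off that developer's past estimates have been.

```python
# Hypothetical sketch: scale a new estimate by the median of past
# actual/estimate ratios for that developer. All names and numbers
# are invented for illustration.
from statistics import median

def correction_factor(history):
    """history: list of (estimated_hours, actual_hours) pairs."""
    return median(actual / est for est, actual in history)

def adjusted_estimate(raw_estimate, history):
    return raw_estimate * correction_factor(history)

past = [(8, 12), (5, 6), (10, 18)]  # (estimate, actual) in hours
print(round(adjusted_estimate(10, past), 1))  # ratios 1.5, 1.2, 1.8 -> median 1.5 -> 15.0
```

Note that the factor lives with the PM, not the developer, which is exactly the problem described below: the developer never sees the ratio, so they never get the feedback they'd need to improve.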
Two red flags here. The first you touched on: developers never review their actuals against their estimates, so they never improve their estimating. If management actually prefers it this way, that says something pretty cynical (to me, at least) about how management views programmers.
The other danger is that this hands management a pile of out-of-context data about individual "performance". Data out of context can be used stupidly (e.g., for stack-ranking developers) and can drive stupid management decisions.
When you're in the position of estimating without knowing who (else) will do the work, you're in a bad spot. When a team has data on their own past performance, and has been working to improve their estimates, I'll wager that they can give you better estimates than you can pull out of the air yourself, regardless of who on the team will actually do the work. That's been our experience. We do team estimation (or some subset of the team does for simpler tasks), and our estimate vs. actual history is a lot better than it would be if someone up the chain guessed at task difficulty.