Also, seriously consider your “philosophical approach” here. If this is a shared data structure that all of the threads must constantly access, then it could have the effect of serializing their execution in a way that effectively destroys the value of running multiple threads in the first place.
So, I shall graciously presume, instead, that what you have posted here is just an edited example ... that the threads actually spend most of their time doing something else, say, something I/O-related. "Something else" that, in fact, consumes most of their respective wall-time, such that the time spent managing the shared data structure (once you get the example to compile and run, as BrowserUK correctly insisted) is an insignificant step, and each lock attempt will most likely proceed without delay.
Multithreading is a fine strategy for overlapping I/O operations. That's essentially what it's for. But it has very limited use in CPU-intensive activities, and the necessary locking of shared data structures can utterly wipe out any benefit, resulting in tasks that run slower than a single-threaded version might. Your example appears to be one of those cases.
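To make the point concrete, here is a minimal sketch (my own illustrative example, not your code) of the worst case: every thread must take the same lock on every iteration, so the threads end up running essentially one at a time, plus locking overhead:

```perl
use strict;
use warnings;
use threads;
use threads::shared;

# A shared counter that every thread must lock on each pass through
# the loop. The lock() is the bottleneck: the threads queue up on it,
# so their execution is effectively serialized.
my $counter : shared = 0;

sub worker {
    for ( 1 .. 100_000 ) {
        lock($counter);    # every thread contends here, every iteration
        $counter++;        # the "work" is trivial next to the locking
    }
}

my @threads = map { threads->create( \&worker ) } 1 .. 4;
$_->join for @threads;

print "counter = $counter\n";
```

Four threads produce the correct total, but because the lock is held on every iteration and the protected work is tiny, you pay all the thread-management and locking costs while getting roughly single-threaded throughput. If, instead, each thread mostly waited on I/O and only touched the shared structure occasionally, the lock would almost always be uncontended and the threads would genuinely overlap.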