Humans learn language as a social tool within a social environment. I won't say never, but it will be a long time before computers can learn language the way humans do. Some people argue that you can give an AI a corpus and have it learn from that, but the conditions still aren't the same: the AI is not participating, only observing. Children learn language by forming intermediate (defective) grammars, which are then corrected by others in their environment, usually adults, typically their parents. A computer program is never going to have that kind of exposure unless we have humans correct it, which brings us back to a knowledge-based approach.
Wittgenstein pointed out that people learn not by being told what things are, but by being exposed to examples. (People are always talking about "food" and "the fridge" in the same context, so maybe there's a relation...) Given that, and given that language is so bound up with the way humans are built and live (how do we learn what "mother" or "cousin" means?), we'd pretty much have to emulate a human before teaching our emulation how to speak in this way. So knowledge is still important: it gives our linguistic AIs a frame of reference they would otherwise simply not have.
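The co-occurrence idea in the parenthetical above can be sketched very crudely. This is a toy illustration, not a real learner: the corpus is invented, and real distributional methods use far larger data and proper association measures. It just shows the kind of regularity a purely statistical observer can extract.

```python
from collections import Counter
from itertools import combinations

# Hypothetical toy corpus, invented for illustration.
sentences = [
    "put the food in the fridge",
    "the fridge keeps food cold",
    "we bought food at the store",
]

# Count how often each pair of distinct words appears in the same sentence.
pair_counts = Counter()
for sentence in sentences:
    words = set(sentence.split())
    for a, b in combinations(sorted(words), 2):
        pair_counts[(a, b)] += 1

# "food" and "fridge" co-occur in 2 of 3 sentences, a regularity the
# learner picks up without ever being told what a fridge is for.
print(pair_counts[("food", "fridge")])  # prints 2
```

Note what this does and does not give you: the counts reveal that the words are related, but nothing about *how*, which is exactly the gap a knowledge-based component is meant to fill.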
I guess the point I am trying to make is that statistics just produce results; they reflect nothing more than data regularities in a given context. Knowledge's major fault is its static nature. And fuzzy logic is nice, but, at least for this application, it needs some knowledge to start from. A hybrid approach using all three might be possible: give a knowledge-driven AI the capability to create its own knowledge from statistical snapshots. Who knows?