Part 2
In this complex machine, each gear wheel represents one decimal digit.
Each time the bottom gear completes a full rotation (counting to 10), it causes the gear wheel above it to step forward by 1/10 of a rotation. And when the bottom gear completes 10 rotations, it has driven the second wheel through one complete rotation, which in turn causes the third gear wheel to step forward by 1/10 of a rotation; thus 100 is recorded.
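The same carry-cascade idea is trivial to express in software. Here's a minimal sketch (my own illustration, not anything from the engine's actual design) where each element of a list plays the part of one gear wheel:

```python
# A software analogue of the gear-wheel counter: each element of `wheels` is
# one decimal digit, and a full rotation of one wheel (passing 9) steps the
# next wheel forward by one, i.e. a carry cascade.
def add_one(wheels):
    """Advance the counter by one on the lowest wheel, propagating carries."""
    i = 0
    while i < len(wheels):
        wheels[i] += 1
        if wheels[i] < 10:       # no full rotation, so no carry
            return wheels
        wheels[i] = 0            # full rotation: reset and carry to next wheel
        i += 1
    raise OverflowError("ran out of wheels")  # the machine's digit limit

counter = [0, 0, 0]              # three wheels = three decimal digits
for _ in range(100):
    add_one(counter)
print(counter)                   # [0, 0, 1], read lowest digit first: 100
```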
So, to add digits of precision, you simply add more gear wheels; but there is a problem with that. As you add more gear wheels, the physical force required to click over the highest digit grows. At first you can make the bearings more accurate and better lubricated; then you can add a reduction mechanism to the (human-powered) input shaft, so that it takes more turns of the handle to drive a single rotation of the bottom gear. It takes longer to run through a given calculation, but the force multiplier of the reduction gear overcomes the limit on the force the operator can supply.
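To put rough numbers on that trade-off, here's an illustrative sketch; the torque model and figures are made up for the example, the point is just that a reduction ratio divides the handle force needed while multiplying the number of handle turns:

```python
# Illustrative only: how a reduction gear trades input force for input turns.
# A made-up linear model: the torque needed at the bottom gear grows with the
# number of wheels; a reduction ratio R divides the torque the operator must
# supply but multiplies the number of handle turns per step by the same R.
def handle_requirements(wheels, torque_per_wheel=0.5, reduction_ratio=1, turns_per_step=1):
    drive_torque = wheels * torque_per_wheel          # torque at the bottom gear
    handle_torque = drive_torque / reduction_ratio    # what the operator must supply
    handle_turns = turns_per_step * reduction_ratio   # what the operator pays in time
    return handle_torque, handle_turns

for r in (1, 10, 100):
    torque, turns = handle_requirements(wheels=31, reduction_ratio=r)
    print(f"ratio {r:>3}: {torque:6.2f} torque units per turn, {turns} handle turns per step")
```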
That difference engine uses 31-digit numbers. With modern manufacturing techniques and materials, I could see that being extended to 100 or even 200 digits. The cost would be huge, and the reduction ratio of the input drive would be very high, meaning the calculations would be very slow; so today we'd just add an electric motor to do the donkey work.
So then you add more digits, and increase the reduction gear ratio further to compensate; but eventually physics wins out. The force required to turn all the gears in concert is so high that the gear teeth, whatever metal you use, simply cannot transmit it. You've hit the fundamental physical limitations of the system.
In software, that physical limitation doesn't exist.
On my commodity hardware machine, using arbitrary precision software, I can multiply 10,000-digit numbers together with ease. Slowly, but the greatest effort is inputting the numbers. And distributed computing projects like the Great Internet Mersenne Prime Search routinely work with numbers of millions of digits. Whatever physical limits the hardware of the day has, they can be overcome by "simply" using more hardware. Well designed and properly written for the purpose, the software doesn't need to change at all.
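For instance, Python's built-in integers are already arbitrary precision, so the whole exercise is a couple of lines (the 10,000-digit figure here just mirrors the example above):

```python
import random

# Python ints are arbitrary precision, so multiplying two 10,000-digit numbers
# needs no special library; the software never sees a "digit limit".
a = random.randrange(10**9_999, 10**10_000)   # a random 10,000-digit number
b = random.randrange(10**9_999, 10**10_000)
product = a * b
print(len(str(product)))                      # 19,999 or 20,000 digits
```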
Suggesting that the difference is simply a matter of scale is like saying the distance from here to Alpha Centauri is just a matter of scale. The truism that every journey starts with a single step doesn't help when there are 41314127522800000 steps to take.
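For what it's worth, that step count is roughly the distance to Alpha Centauri in metres; a quick sanity check, assuming about 4.37 light-years and one step per metre (both approximations of mine, not from anything above):

```python
# Rough sanity check of the step count. Assumptions: ~4.37 light-years to
# Alpha Centauri, one step per metre.
METRES_PER_LIGHT_YEAR = 9.4607e15
distance_m = 4.37 * METRES_PER_LIGHT_YEAR
print(f"{distance_m:.3e}")   # ~4.13e16, the same order as 41314127522800000
```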
The difference between software and hardware is not one of scale or orders of magnitude; it is fundamental physical law versus the human brain's ability to conceive of and coordinate the logical complexity of the problem.
As yet, software is still in its infancy. The software (DNA) that controls the hardware (wetware) of the human brain is millions of times more complex than the software we currently write. Our ingenuity has allowed us to construct hardware that can run our, relatively speaking, simple software very fast. Much, much faster than the human brain.
However, we don't yet have good algorithms for solving problems that even extremely small and primitive biological computers (e.g. the brains of ants, bumblebees, octopuses and jellyfish) take in their stride. We currently compensate for the crudity of our algorithms with brute force: speed, and the historic growth of that speed.
That growth in speed is rapidly running out, so we are now moving to concurrency, and massively distributed concurrency, to compensate. In doing so we hit another fundamental physical limit: energy demand and cost. The largest HPC systems draw tens and hundreds of megawatts of power, but mostly what they run are quite simple algorithms; they just run them billions and billions of times to produce the results we are after.
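As a toy illustration of that pattern (a simple operation repeated an enormous number of times, split across workers; the details here are mine, not any particular HPC code):

```python
from multiprocessing import Pool

# A trivially simple computation, run many times: each worker sums squares
# over its own slice of the range, and the partial results are combined.
def partial_sum(bounds):
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

if __name__ == "__main__":
    n, workers = 10_000_000, 8
    step = n // workers            # n is chosen to divide evenly by workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    with Pool(workers) as pool:
        total = sum(pool.map(partial_sum, chunks))
    print(total)
```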
But it still requires the human brain to intuit the next steps in the evolution of our knowledge.