in reply to Re: Re: about Coolness, Impatience and Complexity
in thread about Coolness, Impatience and Complexity
Having been a computer scientist, I strongly disagree with
the statement that a Turing machine should be involved.
Not at all! We use elementary operations in a certain
computational model. That could be a Turing machine, but it
could also mean a pointer machine, a Von Neumann architecture,
or something more abstract, such as arithmetic or geometry.
Complexity analysis for calculations on a modern computer
certainly doesn't assume a Turing machine as the underlying
model. You wouldn't be able to do binary search in O(log N)
time on a Turing machine: indexing into an array cannot be
done in constant time there, so it's going to take time at
least linear in the amount of tape used.
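As an illustration (mine, not part of the original post): the O(log N) bound for binary search leans entirely on the assumption that `a[mid]` is a single constant-time operation, which holds on a RAM or pointer machine but not on a Turing machine, where the head has to walk to position `mid` on the tape.

```python
def binary_search(a, target):
    """Standard binary search: O(log N) comparisons, but only
    because a[mid] is assumed to cost O(1). On a Turing machine,
    reaching cell `mid` costs time proportional to the distance
    the head travels, so the bound doesn't carry over."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == target:
            return mid
        elif a[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1
```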
-- Abigail
Re: Re: about Coolness, Impatience and Complexity by no_slogan (Deacon) on Jul 13, 2001 at 00:06 UTC 
Good point. A Turing machine can carry out any polynomial-time computation in polynomial time, but it might take a polynomial of higher degree than some other type of machine would. I was trying to compose a snappy reply, and got sloppy.
Still, when we talk about "polynomial time", what we really mean is "polynomial in the length of the problem using a reasonable encoding scheme", not "polynomial in the number of entities in the input set", though we sometimes ignore the distinction. If the point and the line in your geometrical computation are specified with 512-bit numbers, that's a larger problem than if they were only 16 bits. You can't reduce that; it's information theory.
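A quick sketch of why the encoding matters (my example, not from the thread): trial-division primality testing takes about sqrt(n) steps, which looks polynomial in the *value* n but is exponential in the *bit length* of the input.

```python
import math

def is_prime_trial(n):
    """Trial division: roughly sqrt(n) iterations. Polynomial in
    the value n, but n is written in only log2(n) bits, so this is
    exponential in the length of the input."""
    if n < 2:
        return False
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return False
    return True

# 104729 is the 10000th prime: ~323 divisions to verify,
# yet the input itself is only 17 bits long.
```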
Any physical machine you build is going to have a finite number of bits of processing power available to it. We can't make computers out of infinitesimally thin Euclidean line segments. A quantum computer might be able to consider a virtually unlimited number of possibilities at the same time, but it has a limited number of qubits, and they have to collapse down into one answer before it can be printed out on the screen.
If we start talking about machines that are impossible to build, are we still doing computer science? I don't say this to be facetious, but just to ask what computer science really is.

Still, when we talk about "polynomial time", what we really mean is
"polynomial in the length of the problem using a reasonable encoding
scheme", not "polynomial in the number of entities in the input set",
though we sometimes ignore the distinction. If the point and the line in
your geometrical computation are specified with 512-bit numbers, that's
a larger problem than if they were only 16 bits. You can't reduce that;
it's information theory.
No, what we mean is "polynomial in the size of the input". That could be
bits, if your computational model asks for that. If your computational
model is geometry, then your input might consist of points and lines.
Not bits. Bits don't play a part in geometry. They might play a part in
an implementation of the model, but not in the computational model
itself. Yes, this requires a level of abstraction, but that's why we
call it "science". ;)
Any physical machine you build is going to have a finite number of bits
of processing power available to it. We can't make computers out of
infinitesimally thin Euclidean line segments. A quantum computer might
be able to consider a virtually unlimited number of possibilities at
the same time, but it has a limited number of qubits, and they have to
collapse down into one answer before it can be printed out on the screen.
All true, but not really relevant. If we have an O(f(N))
geometric algorithm (I graduated in the closely related fields of
data structures and computational geometry, hence my bringing up
geometry over and over again) and we want to implement it on a real
machine, we might have to multiply or divide by a factor of log N (or
something else) somewhere to get a running time expressed in the
number of bits of the input. [1] But we'll have the same
conversion for any other algorithm. Hence, the relative running time
(and space) complexities in one computational model stay the same in
the other, assuming you use the same implementation. Of course, in a
typical implementation numbers are limited to 32, 64,
128 or some other number of bits, which is a constant that disappears
into the big-Oh anyway.
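To make the conversion factor concrete (my sketch, not Abigail's): in the geometric, real-RAM model, a primitive like the orientation test below counts as one elementary operation, while a bit-level model would charge it roughly O(M^2) bit operations for M-bit coordinates under schoolbook multiplication. Since the same factor applies to every algorithm built on the primitive, the algorithms' relative complexities are preserved.

```python
def orientation(p, q, r):
    """Sign of the cross product (q - p) x (r - p): +1 for a left
    turn, -1 for a right turn, 0 for collinear points. One
    'elementary operation' in the geometric model, whatever the
    bit length of the coordinates."""
    det = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    return (det > 0) - (det < 0)
```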
If we start talking about machines that are impossible to build, are we
still doing computer science? I don't say this to be facetious, but just
to ask what computer science really is.
Well, it was you who brought up Turing Machines. ;) Now, have
you ever seen a Turing Machine? Not only are they impossible to build;
even if they existed, they would be quite impractical to do real
computation on. But much of the theory of Computer Science is founded
on Turing Machines. Take away Turing Machines, and Computer Science is
no longer the Computer Science we all know.
Do not forget the words of one of the most famous Computer Scientists
of all time, Edsger Dijkstra: computer science is as much about
computers as astronomy is about telescopes. "Computer Science" is
a big misnomer; the French word is much better: Informatique
(I hope I spelled that right), which is described by the French
Academy of Science as the science of the processing, possibly by
automata, of information, where information is knowledge and data.
I'm sure it sounds a lot better in French. ;) There are a few
European departments that use the term "Computing Science" when
translating their department's name into English. I prefer Computing
Science over Computer Science as well.
[1] I'm using a lot of handwaving here. An algorithm
on N numbers, each consisting of M bits (and hence having size
N*M), doesn't need to have the same running time when it's run
on an input of M numbers, each consisting of N bits (which also
has size N*M).
-- Abigail

Now, have you ever seen a Turing Machine? Not only are they
impossible to build; even if they existed, they would be quite
impractical to do real computation on.
Bosh. The only reason we can't make a perfect Turing machine
is that we can't actually give it an infinite length of
tape. We have a well-developed theory that tells us what
computations can and can't be done by a Turing machine with
a finite length of tape. In many cases, it's easier to talk
about one with an infinite length, so we do that instead,
but at least we know what we're getting ourselves into.
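Indeed, a finite-tape Turing machine is perfectly simulable. Here's a toy sketch of my own (with a made-up rule format, not anyone's standard notation) that runs a machine until it halts or walks off the tape.

```python
def run_tm(rules, tape, state="start", head=0, max_steps=10_000):
    """Simulate a Turing machine on a finite tape.
    rules maps (state, symbol) -> (new_state, new_symbol, move),
    where move is -1 (left) or +1 (right). The machine stops in
    state 'halt' or when the head leaves the finite tape."""
    tape = list(tape)
    for _ in range(max_steps):
        if state == "halt" or not 0 <= head < len(tape):
            break
        state, tape[head], move = rules[(state, tape[head])]
        head += move
    return "".join(tape)

# A made-up example machine: flip every bit, sweeping right.
flip = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
}
```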
Sorry, but I don't agree with Dijkstra. It's true that
astronomy is not about telescopes. Astronomy is about stars
and nebulae and planets and things, and telescopes are just
the tools that let us study them. But computer science is
about computing, and things that compute are computers, by
definition. So computer science is all about computers: not
about PCs and Macs and Sparc workstations in particular,
but about computers in general.
And what is computing? Pushing information around. The
flow of information is governed by information theory, which
tells us that information is very nearly a physical thing.
We usually deal with information in the form of bits, but
it doesn't have to be stored that way. A given quantity of
information is going to take up a finite minimum amount of
time and space, no matter what you do with it. Getting as
close as possible to the minimum is an engineering problem
that we'll be working on forever. But it's important to
keep in mind that we're dealing with stuff that's not
infinitely compressible. Information is solid.
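One concrete reading of "information is solid" (my illustration, not part of the post): Shannon's source coding theorem puts a hard floor under how few bits a message can be squeezed into. A sketch of the order-0 bound:

```python
import math
from collections import Counter

def entropy_bits(data):
    """Total Shannon entropy of `data` in bits: a lower bound on
    the size of any lossless encoding produced by a memoryless
    (order-0) model, no matter how clever the compressor."""
    n = len(data)
    counts = Counter(data)
    per_symbol = -sum(c / n * math.log2(c / n) for c in counts.values())
    return per_symbol * n
```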
Of course, in a typical implementation numbers are limited
to 32, 64, 128 or some other number of bits, which is a
constant that disappears into the big-Oh anyway.
In many cases, you can make the number of bits disappear
into the big O, because the numbers don't need to be all
that large. However, the reason this conversation started
is that I was talking about implementing arbitrary
precision arithmetic, which is not one of those cases.
If I'm planning on using a raid array to hold the bits of a
number, I'd better not count on the number of bits
disappearing into a constant factor. Maybe I want to
multiply two 10-terabyte numbers together for some reason.
For you to claim that you have a geometrical computing
model that ignores the limits of information theory and can
do it with the same amount of effort as 2*2 is, quite
frankly, silly.
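To put a number on that (my sketch, not no_slogan's): schoolbook multiplication of two n-digit numbers performs n*n digit products, so the cost of multiplying huge numbers grows quadratically with their length rather than vanishing into a constant. (Real bignum libraries use faster algorithms such as Karatsuba or FFT-based methods, but the cost still grows with the bit length.)

```python
def schoolbook_mul(x, y, base=2):
    """Multiply two numbers given as little-endian digit lists.
    This does len(x) * len(y) digit products: the work a
    fixed-width model hides inside its 'constant', which dominates
    once the operands stop fitting in a machine word."""
    out = [0] * (len(x) + len(y))
    for i, xd in enumerate(x):
        carry = 0
        for j, yd in enumerate(y):
            t = out[i + j] + xd * yd + carry
            out[i + j] = t % base
            carry = t // base
        out[i + len(y)] += carry
    return out

# 3 * 3 = 9: binary 11 * 11 = 1001, digits little-endian.
```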
I don't think there's as much distinction as you think
between "computer science" and "computing science". Before,
I said, "why talk about computers that can't possibly be
built?" Now, I say, "why talk about computations that can't
possibly be performed?"