We say that we build applications. But is that definition changing before our eyes? If the only thing you need to do anything is a web browser . . .
Well, speaking as someone who writes things that happen in a web browser for a living, the definition sure seems about the same. I have a pre-written cross-platform UI toolkit with a weak but usually sufficient set of control primitives... but the hard part of most applications isn't assembling the UI anyway.
I tend to believe that if the set of pre-written inter-pluggable primitives ever actually becomes rich enough to do all the stuff we "program" to achieve, all we'll have really done is make a new programming language; it's not qualitatively different, and it's still going to require people with the same skill set as "paleoprogramming".