Post-PC World?

Google recently announced a parting of ways with Apple over WebKit, an open source project controlled by Apple. WebKit is the rendering engine of a web browser, the piece that reads all of the code behind a web page and draws it on your screen. As you can imagine, this is a pretty massive piece of the browser. A massive piece that was shared by Apple’s Safari and Google’s Chrome (and Chromium), as well as other browsers such as Opera and, perhaps most significantly, the browsers found in devices from the Kindle to BlackBerry.

Google will now fork WebKit, spawning a version they control called Blink.

“Having multiple rendering engines will no doubt lead to more innovation,” says Adrian Kingsley-Hughes at ZDNet. But “The reason Google wants Blink is down to one thing — the post-PC era.” (source)

One thing Google has said about this project is that it will remove millions of lines of code from WebKit. Blink will be smaller and ostensibly more efficient than WebKit. Google’s goal is to make it run faster and with a smaller footprint on tablets and other devices.

This concerns me for two reasons. First, Google is making a heavy investment in the idea that the PC is going away. Second, Google does not expect the computing power of tablets to approach that of PCs in the near future.

In other words, we will be sacrificing the computing power of a PC for the convenience of handheld devices. Our devices will no longer augment our PCs but replace them.

It’s a prediction that has been around for a long while, and no doubt many people would say “no kidding.” But to me it’s a sad day. PCs are vastly different from tablets in the openness and power they provide to the user. Where PCs strive to be general and useful, to be a tool in our exploration of the world about us, devices are about convenience, attempting to solve our problems for us before we even know we have them. PCs can be ripped apart, upgraded, replaced piece by piece. Devices now seal in the battery. The battery. Devices are of a world in which we have to throw out the lamp when the bulb burns out.

My first operating system was DOS. I used to write batch files to get things to work the way I wanted them to. Networking was fickle, and as a kid I had to jigger and hack software to play the games I wanted to play. Working with the file system meant typing commands (dir, tree, mkdir, rmdir, erase, format; ok, my memory is failing me here, and some of those may only be Linux commands). When a hard drive went bad, or a video card, I replaced it. It was a valuable learning experience without my even intending to learn at the time; I just wanted to play. The things I learned as a kid shaped the way I understood and approached computers through the years. By the time I was taking Computer Science courses, I already had a very solid understanding of the inner workings of computers. I believe most of my peers did as well. I have a hard time seeing how a child today would build a foundation like that on these devices, given their closed nature.

But more selfishly and practically, I worry about the trade-off between portability and power. I sometimes sit in awe at all of the things I do at the computer. Not at myself and my work, but at this one machine in front of me, which gives me the power to do all of these things. To edit photos with more power than an entire darkroom once gave a photographer. To edit video and music. To have a movie or TV show streaming on the other monitor as I work on these things.

Or am I just an old man who doesn’t understand the new world and dislikes the new thing? The future will tell.