Posted by: Serena | 6th Jul, 2007

HAL 9000, upgraded.

I keep going back to Engelbart’s assertion that computers shouldn’t be easy to use. Do I agree with it? I know I don’t believe that computers should be difficult to use just for the sake of being difficult, but they shouldn’t be easy either. I think the debate is slightly oversimplified, because the real question isn’t whether computers should be easy or hard to use. The real question is whether the things we do with computers should be simple or complex. It is possible that if computers are too easy to use, we won’t think of doing (or be able to do) more complicated things with them. But do you think that’s really true? Is “easy” synonymous with “simple”?

I think computers need to be complex. And yes, they need to be difficult to use, to a certain extent. It is only when we are constantly challenged that we reach peak innovation. I want my computer to make me think and create. This should be a positive feedback loop: increased difficulty and complexity lead to further mastery and inspiration, which in turn lead to the need for more complexity. We thrive on challenges; why would we want to eliminate that element when it’s an essential component of creation?

As for the concept of computers being more like people, what if computers were able to build off of us just as we’re able to build off of them? Everything we do with them and everything we create lets the computer add new components, new challenges. Things that will give us new ideas. A chain of human-computer innovation, of sorts. There has always been an urge to make computers more like us. Why is this? Why do we push so hard to make computers easier to use but also dream of giving computers human characteristics and abilities? (Isn’t a computer, after all, supposed to be a replacement for human activity that is difficult or time-consuming? A substitution that allows us to go further by eliminating things we would have to do ourselves otherwise?) I think we want computers to be more like us because we actually want this exchange of inspiration. We inspire each other, and since a computer is a replacement, doesn’t it follow that at some level we want (expect?) the computer to inspire us? To perform that last function that allows for further breakthroughs?

[Image: HAL 9000]

Are computers one day going to be more intelligent than humans? Dr. Campbell doesn’t think so, but I’m not so sure. If we continue in this direction of expecting computers to be easy to use and nothing else, then perhaps not. Is a computer simply the sum of what we put into it? Does it, and can it, only do what we’ve programmed it to do? What if we program it (as discussed in class) to feel emotion? Or to reason? Would it still have limits? What if we program it to learn? Would it still be functioning on the same basic set of processes and simply building an information base? Or could it be capable of true learning and creation? Clearly such programming wouldn’t be immediately successful, and any possibility of “true learning” would come only from several of these processes running and evolving together over time. But we shouldn’t dismiss the possibility, because dismissing it is exactly what stops it from happening. We have an excellent track record of accomplishing things that were thought to be impossible. Anything imagined can be created.

Responses

The reason I enjoy “2001: A Space Odyssey” so much is that it asks the same questions this post asks. Can something that is “not real” experience something that is? You are standing on an edge, dangerously close to falling beyond the infinite. Man does not often transcend the philosophical, staring blankly into the horizon of metaphysics.

2001 is an effective way to begin this conversation, because it revolves around the human obsession with technology. The arrival of the enigmatic monolith (technology) breaks mankind’s stagnation. Mankind, peering indifferently at cavern walls in an endless cycle of boredom, has been awakened.

Hitting someone upside the head with a fractured forearm is not a complex process, but Kubrick would tell you that understanding the process was clearly a monumental obstacle.

Absurdly simple, yet absurdly difficult. I have just described man’s experience with technology in one sentence that uses the same word twice. That means it cancels out, like algebra or something.

When we think of technology today, we usually think of the technology associated with microprocessors, or computers, for the non-nerd. The punch card system is often cited as marking the beginning of a new age of information. Truly, a link to primitive mankind. Swing the club (input), kill a man (output). Punch the card (input), the computer spells out naughty words in 8 languages (output). Both tasks are annoying, menial, labor-intensive.

The positive feedback loop, however, has begun its perpetual cycle.

Fascinatingly, technology is always impossible. Primitive man never thought about HAL 9000. George Washington probably never thought about HAL 9000. The Memex? Computers in homes? Are you out of your mind?

We’ll never get beyond 2 megs of RAM. This is the best it’s going to get. We’ll never have visual interfaces through which we can interact. This is the best it’s going to get. Computers cannot think for themselves; this is the best it’s ever going to get. But the cycle grows, and grows, and grows.

Hook up a game system. Ignore, for a second, that game controllers have gone from 4 buttons to an average of 10. The AI is not static. Shoot him, he moves. He might even go around you to try to get the edge on you. Drive past her, she might ram you next time. Send out Articuno, they’ll hit you with Moltres.
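A tangent for the nerds: here is a toy Python sketch of the simplest thing “not static” could mean. Everything in it is hypothetical, invented for illustration rather than taken from any real game. An opponent that only remembers the player’s previous move and looks up a counter to it already feels adaptive to play against.

# Toy sketch (hypothetical names, not any real game's code): an opponent
# that counters whatever the player did on the previous turn.
COUNTERS = {"shoot": "take_cover", "drive_past": "ram", "articuno": "moltres"}

class ReactiveOpponent:
    def __init__(self):
        self.last_player_move = None  # one slot of "memory"

    def respond(self, player_move):
        # Look up a counter to the player's PREVIOUS move, then remember this one.
        reaction = COUNTERS.get(self.last_player_move, "idle")
        self.last_player_move = player_move
        return reaction

ai = ReactiveOpponent()
print([ai.respond(m) for m in ["shoot", "shoot", "articuno"]])
# -> ['idle', 'take_cover', 'take_cover']

It reacts, it even seems to anticipate, yet it is nothing more than a lookup table plus one remembered move, which is exactly why the next question matters.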

Has the AI learned? Even HAL, the veritable God of human design, is unaware. The crew are unaware. Does HAL have true emotions? “It’s not something anyone can really answer.” HAL attributes his error to humanity, not to himself. I realize, however, that this is ironically human.

Where is the Monolith of AI? Sentience seems forever just within our grasp, yet unattainable. Some may argue that true sentience is impossible, that true learning is possible only in human minds.

I think, though, that these are the same people who said that the visual interface would never make it. Asimov knew it. Kubrick knew it.

Anything imagined can be created, and we are standing dangerously close to falling beyond the infinite.
