Jason J. Gullickson

Recent Advances in Machine Intelligence

There’s a lot of discussion recently about the increasing number of advances in Artificial Intelligence and Machine Learning. Based solely on the number of products and projects yielding results, it would appear that progress in these fields is accelerating. The pursuit of Artificial Intelligence has been a staple of computer science since the beginning, so I wondered: why are we suddenly seeing so much progress? I’ve studied AI since childhood, and when I look at what is considered cutting-edge today, I don’t see a lot of new ideas. There are refinements, but for the most part today’s Machine Learning systems are incremental improvements on classical AI techniques. So innovation, in the sense of a radical new approach to machine learning, doesn’t seem to explain it. Another explanation is that the computing hardware available now is finally powerful enough to make the existing techniques practical. While the amount of computing power available to the average person has grown, I would argue that the peak power available in the world has changed much less. Even though your personal computer is many orders of magnitude faster than the personal computers of the 1980s and 1990s, big corporations and government agencies had access to computers at that time whose performance, when applied to specialized tasks such as AI, could rival (and in some ways exceed) the computing power we apply to these problems today. So how do we explain what appears to be a sudden increase in progress toward computers that think like humans? I have a theory:

Machines are not suddenly becoming more human; humans are suddenly becoming more like machines.

Consider this: when you are in a room with another person, you are communicating with them even before you begin to speak. Your senses gather data about the other person while they are across the room, and your mind begins to retrieve memories you may have of this person (or, alternatively, begins to form an internal model of who this person is). When you finally begin to exchange words, you are able to respond verbally based on this model even before you have been formally introduced. It’s very challenging to provide this amount of sensory input to a machine. The most acute example might be the sense of smell. Smell has a huge impact on how we respond to our environment, and it has traditionally been one of the hardest senses to create machine interfaces for. If you were interacting with a machine in this environment, it would be immediately obvious; there would be almost zero chance that you would mistake it for another human, no matter how clever its code or how powerful its processor.

However, since the creation of the written word we have been reducing the “fidelity” of human interactions, and in the case of writing, almost all senses are removed from the communication process. While this goes back millennia, only in the last few decades has it become the dominant form of communication for most humans. In these reduced-fidelity interactions, it becomes easier for machines to appear more like us. By reducing the majority of our interactions to these environments, we level the playing field between people and machines: by framing valuable human interaction within the limited fidelity of written communication, we make it possible for machines to generate the same value. This is similar to how independent media producers can now create content on par with mass media companies; the mass media has reduced the quality of its content (as a cost-reduction measure) to the point where it has no quality advantage over the less-well-funded indies.

Once there is a value proposition, the commercial world becomes interested, and this explains the sudden interest from capitalists in machine intelligence. The influx of capital results in incremental advances in the technology that appear to be leaps, if only because the applications are no longer abstract (the quest for thinking machines) but tangible (the extraction of value from humans). As long as people continue to interact with one another in the diverse ways we are equipped to, there is no reason to fear being replaced or overtaken by machines. However, there is a distinct possibility that we may simply assimilate ourselves into “machine culture” and quietly be made obsolete by things that more naturally occupy that culture.

Afterword

In the late 1980s it became apparent to me that what was holding back progress in AI was that all of the work I was familiar with was constrained by the limitations of how machines interacted with the outside world (in almost all cases, via written communication). I theorized that even approaches that appeared to be dead ends might show more promise if they could interact more directly with the world around them, without the monumental barrier of written language (after all, how long did it take humans to learn to read and write?). Fortunately, a solution seemed to be on the horizon at the time: Virtual Reality. A virtual world would put machines and humans on a level playing field without the barrier of language, and with a richness of experience much closer to that of the physical world. I reckoned that this virtual experience, coupled with a means of connecting machines and people across physical space (some kind of global network…), could result in exponential advances in artificial intelligence and yield applications that could benefit humanity in diverse and exciting ways. Unfortunately, when that network became available to the general public, it was via a much lower-fidelity, two-dimensional interface known as the World Wide Web. This was much easier to capitalize on than Virtual Reality, and, well, you know the rest of the story…