AI: You’re Using It

Artificial intelligence is used every day in today’s society, yet many people don’t believe it, and many are dismissive of the very idea of artificial intelligence. Their argument often resembles John Searle’s “Chinese Room Argument”, commonly stated as follows:

Imagine a native English speaker who knows no Chinese locked in a room full of boxes of Chinese symbols (a database) together with a book of instructions for manipulating the symbols (the program). Imagine that people outside the room send in other Chinese symbols which, unknown to the person in the room, are questions in Chinese (the input). And imagine that by following the instructions in the program the man in the room is able to pass out Chinese symbols which are correct answers to the questions (the output). The program enables the person in the room to pass the Turing Test for understanding Chinese but he does not understand a word of Chinese.

(quoted from the Stanford Encyclopedia of Philosophy)

Using this argument against the existence of AI misses the point entirely. John Searle does not argue that we cannot create artificial intelligence. His argument is that an artificial intelligence works in an inherently different way from our own intelligence, which is debatable but much less absurd.

Any object that behaves as though it understands a language can still serve to translate to and from that language. The entire point of the Turing Test is to define intelligence in a reasonable, behavioural way. The Chinese Room that Searle describes, therefore, is indeed an artificial intelligence.


To Know the Future

The following post is based on a philosophy journal entry I wrote for my Grade 12 Philosophy class, taught by Brian Wildfong.


It’s impossible to know the future.

It’s a common cliché that it’s “impossible” to know the future. I don’t agree. For some events, I have strong beliefs about the outcome. When I drop a book, for example, I know that it will fall, even though the event hasn’t happened yet. Indeed, it would be impossible to live without being able to predict the future: I know that if I leave the ground, I will fall back down. If I didn’t know that, I would risk floating off into space with every step I take!

I think that the future is at least as knowable as the past. Skeptics may argue that something important could change and invalidate all my predictions; they may contend that if I haven’t seen something happen, then I can’t possibly know that it will happen. But the same skepticism undermines knowledge of the past as well. Let’s say that I just dropped a book. How do I know that I dropped a book? Maybe my memory is faulty, so I can’t rely on that. Sure, there’s a book on the ground, but maybe someone else put it there, or maybe I’m hallucinating and there isn’t actually a book on the ground. From a radical skeptic’s perspective, even an event that happened seconds ago is unknowable.

Such a perspective, in my opinion, is useful only as a thought experiment. It reduces all that one can know to meaningless statements, like “I perceive a book,” or perhaps tautologies like “Either this object exists or it does not exist,” or “If he is real, then he exists.” Some might accept certain innate ideas like “One plus one is two”. And the super-radical skeptic may even dispute the validity of all those statements. What’s the use of knowledge if it can’t apply to the real world, but only some abstract world of Forms, or if it can’t even apply to the world of Forms?

Some would suggest that knowledge implies certain truth; I disagree, because requiring certainty for knowledge is absurd. I’m not certain that other people actually exist (maybe this is all a dream), and someone who doesn’t exist can’t know anything. I’m not certain that my senses aren’t deceiving me, so I’d have to accept that none of my experiences count as knowledge. I’m not certain that I’m sane either, so I couldn’t accept anything that my logical reasoning suggests is true. Under this definition there is no knowledge: nobody knows anything. There are already plenty of synonyms for “nothing”, so with that definition “knowledge” becomes just an unfortunate waste of a word.

If something is probably true (for a very high standard of “probably”), then I classify it as knowledge. That means I believe the future is knowable. I know that there will be a solar eclipse on March 9, 2016, and that it will be visible across Indonesia, because astronomical calculations show it. I accept that there’s a non-zero probability that it won’t happen: perhaps the sun will disappear before then, or the astronomical methods that have worked for thousands of years are wrong, or Santa Claus will intervene and prevent the eclipse. But then again, maybe I’m just a brain in a vat. Life is too short to dwell on the non-zero but practically-zero probability that my underlying assumptions about the world are false.

This is the interpretation of knowledge accepted by B. F. Skinner, who classified knowledge into three kinds: acquaintance (having experienced an event), description (having read or heard about an event), and prediction (believing in a future event). Skinner accepted that prediction may be the least reliable form of knowledge, but argued that it is the most useful one: only with prediction can we decide on the best course of action. Many of the major problems plaguing today’s world stem from past mistakes, made either because consequences were predicted incorrectly or because they were not predicted at all (Skinner 105).

I know some things about the future, and I think what I know about the future is indeed the most important kind of knowledge. In the end, other forms of knowledge serve as a foundation for the kind of knowledge that helps us make the right choices: knowledge by prediction.

Works Cited

Skinner, B. F. “To Know the Future.” The Behavior Analyst 13.2 (1990): 103.

Unintelligent Design

Like my post on a universal Turing test, the post below originated as a journal entry for my Grade 12 Philosophy class, taught by Brian Wildfong.

Not everyone believes in a supreme deity, but many do. One common argument against the existence of a supreme deity is that the world is imperfect. This is not a bad argument, but it is not a proof. In this journal entry, I outline several ways to reconcile a supreme deity with the imperfections of the natural world.


On April 7, 2014, I attended the Waterloo-Wellington Science and Engineering Fair. As in previous years, there were many cool projects. One project was a pillow that gradually wakes its user at a programmed time with a steadily brightening light. Another investigated altering brainwave frequencies using binaural beats, with potential applications in headache and migraine therapy. Yet another investigated different means of automated manual pollination, which matters because natural pollinators are in decline. All these topics will be especially relevant in the future, and several of them may enter the mainstream within a few decades.

One thing all these projects have in common is that they improve a “natural” mechanism by replacing or augmenting it with an “artificial” one. The natural mechanism is inadequate in some way: I can’t set the Sun to rise at a certain time, the brain is prone to headaches and migraines, and bees are not robust enough to survive climate change and habitat destruction. The pillow complements the Sun, the binaural beats influence the brain, and the artificial pollinators replace bees and butterflies. But theist philosophical thought often holds that an omnipotent planner designed the natural mechanism, and an omnipotent planner would already have considered all possible improvements. Why would humans, who are not omnipotent, be able to improve on the work of a supreme being?

One solution would be to drop a premise: perhaps the planner is not omnipotent. The planner could be likened to a zookeeper, and human beings to the fauna of the zoo. Although the zookeeper founded and maintains the zoo, the zookeeper does not know all of the animals’ desires and needs. The zookeeper builds a nest for the birds, but it may not be the most suitable nest for the particular birds of the zoo, and those birds may build their own, better nest. The idea that the planner is not all-powerful is foreign to the Abrahamic religions, but it was widely accepted in ancient polytheism. In Greek mythology, for example, not even Zeus—the highest of the gods—was omnipotent: Zeus lost to the monster Typhon and survived only because of Hermes’ intervention. In such mythologies, technological advancement beyond even what the planners are capable of is possible; although humans are individually not as powerful as the planners, they have strength in numbers.

Another solution is that the planner is indeed omnipotent but not interested in the best possible conditions for humanity; that is, it is not omnibenevolent. There is much evidence that the world was not created as a paradise (the presence of evil, for example), so it would not be unusual for an omnipotent planner to intentionally limit the usefulness or robustness of natural mechanisms. Of course, it is also unlikely that the planner is omnimalevolent, because it has not tried to sabotage human technological advancement. Instead, such a planner is a neutral party. One interpretation is that the world was created as an experiment, as a simulation. This idea, called the “simulation hypothesis”, holds that the universe is a computer simulation running in a higher universe. One compelling argument for it comes from Nick Bostrom, who observed that if some intelligent life survives long enough, and is motivated and able to run simulations of universes, then a probabilistic argument shows it is very likely that humans live in a simulation (Bostrom 1–6). Such a simulation is probably an experiment, and the goal of the experiment may very well be to observe technological advancement by simulated humans. This interpretation is consistent with the imperfections of nature, because imperfection may simply be a parameter of the experiment.
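Bostrom’s probabilistic argument can be condensed into a single ratio (a sketch of the core calculation, using the notation of his paper): let $f_p$ be the fraction of civilizations that survive to reach a “posthuman” stage, $\bar{N}$ the average number of ancestor-simulations such a civilization runs, and $\bar{H}$ the average number of individuals who lived before that stage. The fraction of all human-like observers who are simulated is then

$$f_{\mathrm{sim}} = \frac{f_p \bar{N} \bar{H}}{f_p \bar{N} \bar{H} + \bar{H}} = \frac{f_p \bar{N}}{f_p \bar{N} + 1},$$

which is close to 1 whenever $f_p \bar{N}$ is large. Unless almost no civilization survives to run simulations, or the survivors run almost none, simulated observers vastly outnumber unsimulated ones.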

Most modern religions would be more comfortable with the interpretation that the planner is indeed omnipotent but for some reason chooses to limit its own omnipotence. The universe seems to follow consistent rules (the laws of physics), and an omnipotent planner would in principle be able to alter those rules; yet no such alteration has occurred in recorded history, which suggests that any omnipotent planner is intentionally limiting its own power. There are many possible explanations for why a planner would do this. One is that it gives humans more freedom and purpose: the pursuit of a better society and a better life, made possible by consistent physical laws, leads to technological advancement and fulfilment. A world where everything is already perfect would not allow humans to feel fulfilment in any way. In this sense, because humans in a perfect world are slaves and humans in an imperfect world are free, the omnipotent planner chooses to make an imperfect world and then refuses to intervene. Many forms of deism agree with this interpretation: a supreme being created the universe but does not actively participate in its evolution (Paquette et al. 156).

Yet another possible explanation is that technological advancement is guided by the supreme being itself, much as a teacher guides a student through a problem. A teacher who simply tells students the answers without teaching the process is a bad teacher indeed, and likewise a supreme being that makes everything perfect from the start would be shirking its responsibility. Instead, the supreme being starts from a flawed universe and helps its inhabitants gradually repair those flaws. Many would agree that modern society is better than society 2000 years ago: there is less famine, more peace and tolerance, better healthcare and education, and many more modern comforts. This improvement represents humans learning, guided by a supreme being, how to achieve a perfect society. In this view, technological advancement is an integral part of that learning.

One of the most important unresolved metaphysical questions is whether a planner exists and what that planner’s intentions are. Although the ability of humans to “do better than God” is a challenge to the idea of a supreme being, there are numerous possible explanations for why nature is not perfect. The planner may simply be unable to do better: it may be limited in some way. Or it may be able but unwilling, either because it does not care for human welfare, or because it wishes to give humans freedom, or because it wishes to guide humans along the path to an ideal society. Based on observational evidence alone, it is impossible to accept one of these answers and reject the others; there is simply not enough evidence either way. If there is a supreme being, it does not seem to want to make its intentions known.

Works Cited

Bostrom, Nick. “Are We Living in a Computer Simulation?” The Philosophical Quarterly 53.211 (2003): 243–255.

Paquette, Paul G., et al. Philosophy: Questions & Theories. Toronto: McGraw-Hill Ryerson, Limited. Print.

A Universal Turing Test

This was originally written as a journal entry for my philosophy class. I’ve lightly edited it to re-purpose it for a blog format.


Someone told me about a video game–playing computer that learned to pause the game of Tetris to avoid a loss. I was surprised and somewhat skeptical. On the Web, I located the original paper by Dr. Tom Murphy, titled “The First Level of Super Mario Bros. is Easy with Lexicographic Orderings and Time Travel . . . after that it gets a little tricky.” (Fitting its publication date, the paper is quite light in tone by research-paper standards, but nevertheless scientifically rigorous.)

The software is able to play video games—in fact, it can play any video game for the NES provided it “watches” a human play first—and surprisingly, the method applied is very simple (Murphy 1–22). The software does not consider video or sound feedback from the game, but instead inspects the game’s memory directly.

From the data accumulated during a human’s successful playthrough, it identifies regions of memory whose values generally increase as the human gets closer to winning, and concludes that increasing those values will also bring it closer to winning. It also extracts input sequences that the human player uses frequently. When playing the game itself, the software simulates each of those input sequences, determines which one most increases the values it identified earlier, and executes that sequence.
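To make this concrete, here is a minimal sketch of that loop in Python. This is my own simplification, not Murphy’s actual code: his “learnfun” tool derives weighted lexicographic orderings over memory bytes rather than the simple sum used here, and his “playfun” search is far more elaborate. The `emulator` object and its methods are hypothetical stand-ins.

```python
# A minimal sketch of the learn-then-play idea; NOT Murphy's actual code.
# The `emulator` object (save_state/load_state/step/memory) is hypothetical.

def learn_objective(recording):
    """Scan a human playthrough (a list of memory snapshots) for
    addresses whose values never decrease as the human progresses."""
    addresses = []
    for addr in range(len(recording[0])):
        values = [frame[addr] for frame in recording]
        if all(a <= b for a, b in zip(values, values[1:])):
            addresses.append(addr)
    return addresses

def score(memory, objective):
    """Simplified progress measure: the sum of the 'progress' bytes.
    (Murphy's version uses weighted lexicographic orderings instead.)"""
    return sum(memory[addr] for addr in objective)

def play(emulator, objective, motifs, steps):
    """Greedy play: repeatedly try each input sequence ('motif') the
    human used frequently, and commit to the one that raises the score most."""
    for _ in range(steps):
        snapshot = emulator.save_state()
        best_motif, best_score = None, float("-inf")
        for motif in motifs:
            for buttons in motif:          # simulate the whole motif
                emulator.step(buttons)
            s = score(emulator.memory(), objective)
            if s > best_score:
                best_motif, best_score = motif, s
            emulator.load_state(snapshot)  # rewind and try the next one
        for buttons in best_motif:         # commit the winning motif
            emulator.step(buttons)
```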

This strategy works well for some games. In the case of Super Mario Bros., the memory regions identified from human play include numbers like the score and the position in the level, so the software attempts to maximize score and position through brute-force simulation when it plays on its own. However, the strategy is extremely simple and far from anything that could be classified as intelligent “thought”.

Even the most advanced behaviour the computer displays, such as pausing Tetris to prevent a loss, is just a consequence of testing possible input sequences and discovering that none of them except pausing the game avoids losing. To someone used to very “dumb” computers, however, it is shocking to see something this capable. Will a computer eventually display human-like intelligent thought?

Perhaps what surprised me most about this software was its generality: it can play games as different as Super Mario Bros. and Tetris, even if it isn’t good at the latter. The earliest computer software, which still makes up the majority of software used today, is incredibly fast at one very specific computational task. Such software takes inputs and applies a linear sequence of steps to produce the desired output.

A calculator app is an example of this. Even an auto-correcting word processor is extremely linear: when I press the space key, it looks at the last word I typed, compares it to all the words in its dictionary, and if it matches one word very closely, it corrects the word for me. This kind of software is generally good at tasks that humans are not good at, but it can only perform a very restricted set of tasks. It is useful, but not intelligent.
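As an illustration, that entire “linear” auto-correct step fits in a few lines of Python. This is only a sketch of the idea, using the standard library’s difflib for the “matches one word very closely” check; it is not how any particular word processor actually does it.

```python
import difflib

# Toy dictionary; a real word processor ships tens of thousands of words.
DICTIONARY = ["the", "quick", "brown", "fox", "jumps", "over", "lazy", "dog"]

def autocorrect(word):
    """Run when the space key is pressed: compare the last word typed
    against the dictionary and replace it if one entry matches closely."""
    if word in DICTIONARY:
        return word
    close = difflib.get_close_matches(word, DICTIONARY, n=1, cutoff=0.8)
    return close[0] if close else word

print(autocorrect("quik"))   # -> "quick"
print(autocorrect("xyzzy"))  # -> "xyzzy" (no close match, left unchanged)
```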

More advanced and interesting software tackles decision-making based on a variety of information. A chess-playing computer, like IBM’s Deep Blue, is an example. It considers all the information about where the pieces are and simulates millions of possible moves before making a decision. The limitation of this kind of software is that the behaviour is still dictated by a human: a human “told” the computer the rules of chess, what kinds of positions are good, and how to search for the best move. The computer’s only contribution was performing the actual search.
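The skeleton of such a program is a game-tree search. Below is a toy minimax sketch of that division of labour; Deep Blue’s real search was enormously more sophisticated (alpha-beta pruning, a hand-tuned evaluation function, special-purpose hardware), but the point is the same: every piece of chess knowledge comes in through the human-supplied parameters.

```python
def minimax(position, depth, maximizing, evaluate, legal_moves, apply_move):
    """Toy game-tree search. The human supplies the rules (legal_moves,
    apply_move) and the judgment of what is good (evaluate); the computer
    contributes only the exhaustive search. Returns (value, best move)."""
    moves = legal_moves(position)
    if depth == 0 or not moves:
        return evaluate(position), None
    best_value = float("-inf") if maximizing else float("inf")
    best_move = None
    for move in moves:
        value, _ = minimax(apply_move(position, move), depth - 1,
                           not maximizing, evaluate, legal_moves, apply_move)
        if (maximizing and value > best_value) or \
           (not maximizing and value < best_value):
            best_value, best_move = value, move
    return best_value, best_move
```

The search routine itself knows nothing about chess; swap in the rules and evaluation for a different game and the same code plays that game instead, but a human still had to write those parts.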

The Mario Bros.–playing computer is a step above this. It was never told what made a position good or bad; it got that information by watching a human play. It even figured out a “goal” of sorts for itself—to maximize the score of the game—despite never being told that the game even had a score. Because the program is so simple, much of its behaviour was determined by the program itself rather than prescribed by a human.

Of course, the method the computer used to decide on these goals, and the way it searched through possible actions to find the best one, were still programmed by a human. But if the program were allowed to watch a human play some other game, it could play that game too, even one that Tom Murphy (the programmer) had never played or even known about. This adaptability seems to be some lower form of intelligence, at least beyond that of Deep Blue, which could not even play checkers despite its similarity to chess.

Alan Turing believed that determining whether machines “think”, in the common-sense interpretation of that word, was “ambiguous and biased” (Paquette et al. 147). He instead proposed the Turing test as a reasonable measure of human-like intelligence. His original test involved an independent judge trying to distinguish between a human and a machine claiming to be human.

This test can take many forms; the most common restricts communication to text, as in an email exchange. I personally think that this version is rigged in the machine’s favour, since text is a very restrictive format. To take the idea to an extreme, suppose the judge were restricted to the digits 0–9 and the symbols “+” and “−”. That test would not be very useful for demonstrating intelligence, since even a calculator could pass it. Indeed, the first machine to pass the text-through-email test would probably fail if the judge were allowed to send a picture of a bird captioned “What is this?”; failing that, the judge could send an instruction like “draw me a picture of a bird using crayons”.

A universal Turing test should allow the judge to use whatever method he or she likes to try to tell apart human and machine, and such a test obviously cannot be passed yet. The main reason is the same reason Deep Blue couldn’t play checkers: until recently, computers could do only precisely what they were told to do. A computer could not draw a bird unless the programmer told it how to draw a bird. But perhaps that is changing, with Murphy’s computer able to play games Murphy doesn’t even know about. If Murphy’s innovations are adapted to other fields, perhaps a computer will eventually be able to draw a bird after watching a human do it.

If this progress continues, I think it is certain that computers will eventually pass the universal Turing test, and therefore display human-like intelligence (whether that means they “think” is a question that, as Turing noted, is ambiguous and perhaps even more difficult to answer). This answers my earlier question in the affirmative. The remaining hurdle is the one that Murphy has partly solved for video games: computers must be able to learn to do things beyond what they are explicitly told how to do.

Works Cited

Murphy, Tom. “The First Level of Super Mario Bros. is Easy with Lexicographic Orderings and Time Travel . . . after that it gets a little tricky.” (2013).

Paquette, Paul G., et al. Philosophy: Questions & Theories. Toronto: McGraw-Hill Ryerson, Limited. Print.