On Alignment Tab Characters

One of my pet peeves is the use of tab characters for alignment. Sure, this might have been acceptable in the early days of computing. But it’s really not a good idea any more.

Tabs display differently depending on the viewer's settings, since a tab character carries no built-in alignment information.

Rather, tabs should be used for delimitation. For instance, separating the values in a table with tabs is often superior to doing so with commas.
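
As a quick sketch of the idea in Python (the data here is made up), tab-delimited values handle fields that contain commas without any quoting gymnastics:

```python
import csv
import io

# Fields containing commas need quoting in CSV, but not in TSV.
rows = [
    ["name", "address"],
    ["Ada Lovelace", "12 St. James's Square, London"],
]

buffer = io.StringIO()
writer = csv.writer(buffer, delimiter="\t", lineterminator="\n")
writer.writerows(rows)
tsv_text = buffer.getvalue()

# Reading the data back is a simple split on the delimiter.
parsed = [line.split("\t") for line in tsv_text.strip().split("\n")]
```

With a comma delimiter, the address field would have to be wrapped in quotes; with tabs, one delimiter per field boundary is the whole format.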

Tabs should not be used for indentation. This is true of both code and word processor documents. Spaces are much more portable and flexible. The TAB key, on the other hand, is a perfectly reasonable shortcut for an editor to automatically indent code with spaces, or for a word processor to apply the appropriate indented paragraph style.

Multiple tabs should not be used to make tables look “nicer”. It’s the responsibility of the editor to display TSV files in a sane format; using multiple tabs is simply not portable and semantically wrong.

I realize this is a somewhat contentious topic; some people still prefer tabs to spaces for indenting code, for example. My personal view is that indenting code with tabs is utterly ridiculous. I don’t personally see why this is even a debate.

AI: You’re Using It

Artificial intelligence is being used every day in today’s society. Many don’t believe it. Many are dismissive of the idea of artificial intelligence. Their argument is often similar in nature to John Searle’s “Chinese Room Argument”, which is often stated as follows:

Imagine a native English speaker who knows no Chinese locked in a room full of boxes of Chinese symbols (a database) together with a book of instructions for manipulating the symbols (the program). Imagine that people outside the room send in other Chinese symbols which, unknown to the person in the room, are questions in Chinese (the input). And imagine that by following the instructions in the program the man in the room is able to pass out Chinese symbols which are correct answers to the questions (the output). The program enables the person in the room to pass the Turing Test for understanding Chinese but he does not understand a word of Chinese.

(taken from the Stanford Encyclopedia of Philosophy)

Using this argument against the existence of AI misses the point entirely. John Searle does not argue that we cannot create artificial intelligence. His argument is that an artificial intelligence behaves inherently differently from our intelligence, which is debatable but much less absurd.

Any object that can understand a language would still serve to translate to and from that language. The entire point of the Turing Test is to define intelligence in a reasonable way. The Chinese Room that Searle describes, therefore, is indeed an artificial intelligence.

Write Once

I am beginning work on a résumé generator that creates beautifully typeset résumés from HTML.

HTML is quickly becoming the standard format for semantic markup. The purpose of semantic markup is to be understood by computers. The natural step, then, is to convert semantic HTML résumés to TeX for typesetting.
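
The generator itself is still to come, but a minimal sketch of the conversion step, using Python's html.parser and a hypothetical mapping from semantic tags to TeX commands, might look like this:

```python
from html.parser import HTMLParser

# Hypothetical mapping from semantic HTML tags to TeX markup.
TAG_TO_TEX = {
    "h1": ("\\section{", "}"),
    "h2": ("\\subsection{", "}"),
    "em": ("\\emph{", "}"),
    "p": ("", "\n\n"),
}

class ResumeConverter(HTMLParser):
    """Walk the HTML and emit the corresponding TeX markup."""

    def __init__(self):
        super().__init__()
        self.parts = []

    def handle_starttag(self, tag, attrs):
        if tag in TAG_TO_TEX:
            self.parts.append(TAG_TO_TEX[tag][0])

    def handle_endtag(self, tag):
        if tag in TAG_TO_TEX:
            self.parts.append(TAG_TO_TEX[tag][1])

    def handle_data(self, data):
        self.parts.append(data)

converter = ResumeConverter()
converter.feed("<h2>Experience</h2><p>Wrote a <em>résumé</em> generator.</p>")
tex = "".join(converter.parts)
```

A real converter needs to handle nesting, escaping of TeX special characters, and layout decisions, but the core is just this tag-by-tag translation.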

This lets a user have both an expanded, online résumé, as well as a condensed and traditional paper one.

Test to Fail

In most disciplines, tests are important. An untested product is hardly better than no product. Untested products are prone to fail.

But not all tests are created equal. The human tendency is to write tests that check for success. For example, suppose I had a routine that checks whether one number is greater than another, and suppose I implemented it as follows:

greater(a, b) = a ≥ b

The error in this code is obvious. There is a greater than or equal sign (“≥”), but I meant to use a greater than sign (“>”). But in practice, in more complex projects, errors will sneak in. Now imagine that my test suite looked like this:

@test greater(2, 1)
@test greater(1, 0)
@test greater(1, -2)
@test greater(100, -100)

What a comprehensive test suite! Unfortunately, this test suite will let my incorrect implementation pass. Why? Because I never once tested for failure.

Each test was written to see if the function returns the correct result when the first argument is actually greater. The test cases were written with passing in mind, not failing. We have a subconscious tendency to test for success, not failure. Tests for success are useful, but tests for failure are necessary too.
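
To make this concrete (translating the pseudocode above into Python), a single failure-oriented case is enough to expose the bug that the success-only suite missed:

```python
def greater(a, b):
    # The buggy implementation from above, translated to Python:
    # ">=" where ">" was intended.
    return a >= b

# Success-oriented tests: every one passes despite the bug.
success_cases = [(2, 1), (1, 0), (1, -2), (100, -100)]
results = [greater(a, b) for a, b in success_cases]

# Failure-oriented tests: greater(a, b) should be False for all of these.
failure_cases = [(1, 2), (0, 1), (1, 1)]
caught = [not greater(a, b) for a, b in failure_cases]
# The equal-arguments case exposes the bug:
# greater(1, 1) wrongly returns True, so the last check comes back False.
```

The boundary case where the arguments are equal is exactly the one a success-minded test author never thinks to write.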

Outside of computer science, the same principle applies. Scientists and engineers would benefit from negative results as much as positive ones. In short, give failure a chance.


CS Club Resources

For SJAM CS Club, from 2014 to 2015, Ian Fox and I created a series of lessons to teach introductory computer science to high school students. These resources for CS enthusiasts are now available to the public.

I hope this is useful to other CS clubs around the world that target high school students. I also welcome any suggestions for improvements to the material.


Learn to Code

As I mentioned in my post in defence of STEM education, one of the most important skills in today’s world is knowing how to program a computer. It really is a shame how poorly this is taught in schools. An introductory Computer Science course is at least as important as an introductory History or Geography course (and don’t get me wrong, those courses are very important!). It is absurd how many high school students don’t take it.

A Universal Turing Test

This was originally written as a journal entry for my philosophy class. I’ve lightly edited it to re-purpose it for a blog format.

Someone told me about a video game–playing computer that learned how to pause the game of Tetris to avoid a loss. I was surprised and somewhat skeptical. On the Web, I located the original paper (which, given its publishing date, is quite light by research paper standards, but nevertheless scientifically rigorous) by Dr. Tom Murphy, named “The First Level of Super Mario Bros. is Easy with Lexicographic Orderings and Time Travel . . . after that it gets a little tricky.”

The software is able to play video games—in fact, it can play any video game for the NES provided it “watches” a human play first—and surprisingly, the method applied is very simple (Murphy 1–22). The software does not consider video or sound feedback from the game, but instead inspects the game’s memory directly.

From looking at data accumulated during a human’s successful playthrough, it identifies regions of memory where values generally increase as the human gets closer to winning. It then concludes that increasing those values will also get it closer to winning. Then, it looks at input sequences that the human player uses frequently. When playing the game, the software simulates each of those input sequences, determines which one will increase the values it identified earlier most, and executes that input sequence.
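
Murphy's actual objective function is built from lexicographic orderings over multi-byte sequences, but the first step can be loosely sketched in Python (the snapshot data here is invented) as flagging memory locations whose values tend to rise over a recorded playthrough:

```python
def rising_locations(snapshots, threshold=0.9):
    """Find memory locations whose values mostly do not decrease
    over a recorded playthrough.

    snapshots: list of memory states, each a list of byte values.
    This is a simplification of Murphy's method, which orders
    multi-byte sequences lexicographically rather than scoring
    single locations.
    """
    n_locations = len(snapshots[0])
    rising = []
    for loc in range(n_locations):
        values = [s[loc] for s in snapshots]
        steps = list(zip(values, values[1:]))
        nondecreasing = sum(1 for a, b in steps if b >= a)
        if nondecreasing / len(steps) >= threshold:
            rising.append(loc)
    return rising

# Toy example: location 0 acts like a score counter, location 1 is noise.
snapshots = [[0, 5], [10, 2], [20, 9], [30, 1]]
```

On this toy data only location 0 qualifies, which is the behaviour we want: the software should latch onto score-like counters and ignore fluctuating values.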

This strategy works well for some games. In the case of Super Mario Bros, the memory regions it identifies from human play include numbers like the score or position in the level, so the software then attempts to maximize score and position when playing the game itself through brute-force simulation. However, the strategy is extremely simple and is far away from anything that could be classified as intelligent “thought”.

Even the advanced behaviour the computer displays, such as the ability to pause Tetris to prevent losing, is just a consequence of testing possible input sequences and discovering that none of them except pausing the game prevents losing. To someone used to very “dumb” computers, however, it’s shocking to see something so advanced now. Will a computer eventually display human-like intelligent thought?

Perhaps what surprised me so much about this software was how general it was, being able to play games as different as Super Mario Bros and Tetris, even if it wasn’t good at playing the latter. The first kind of computer software, which still makes up the majority of software used today, is incredibly fast at doing a very specific computation-related task. This software takes inputs and applies a linear sequence of steps to get the desired output.

A calculator app would be an example of this. Even an auto-correcting word processor is extremely linear: when I press the “Space” key, it looks at the last word I typed, compares it to all the words in its dictionary, and if it matches one word very closely, it corrects the word for me. This software is generally good at tasks that humans are not good at, but it can only perform a very restricted set of tasks. It is useful, but not intelligent.
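
That word-processor behaviour can be sketched in a few lines of Python with the standard difflib module (the dictionary here is a tiny stand-in):

```python
import difflib

# A stand-in dictionary; a real word processor would have a far larger one.
dictionary = ["the", "quick", "brown", "fox", "jumps"]

def autocorrect(word):
    """Return the closest dictionary word if the match is close enough,
    otherwise leave the word unchanged."""
    matches = difflib.get_close_matches(word, dictionary, n=1, cutoff=0.8)
    return matches[0] if matches else word
```

Typing “quik” gets silently corrected to “quick”, while a word with no close dictionary match is left alone. The whole behaviour is one fixed rule applied on every keystroke; there is no decision-making beyond the similarity threshold.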

More advanced and interesting computer software tackles decision-making based on a variety of information. A chess-playing computer, like IBM’s Deep Blue, is an example of this. This software considers all the information about where the pieces are, and then simulates millions of possible moves before making a decision. The limitation of this kind of software is that the behaviour is still dictated by a human. A human “told” the computer the rules of chess, what kinds of positions are good, and how to search for the best move. The computer’s only contribution was doing the actual search.

The Mario Bros–playing computer is a step above this. It was never told what made a Mario Bros position good or bad; instead, it got this information by watching the human play. In this case, it even figured out for itself a “goal” of sorts—to maximize the score of the game—despite never being told that the game even had a score. The fact that it is so simple means that much of the computer’s behaviour was determined by itself instead of prescribed by a human.

Of course, the method the computer used to decide on these goals, and the way it searched through possible actions to find the best one, were still programmed by a human. But if the program were allowed to watch a human play another game, it could play that game too, and the new game could be one that Tom Murphy (the programmer) had never played, or even known about. This adaptability seems to be some lower form of intelligence, at least beyond the intelligence of Deep Blue, which could not even play Checkers despite its similarity to Chess.

Alan Turing believed that determining whether machines “think”, in the common-sense interpretation of that word, was “ambiguous and biased” (Paquette et al. 147). He instead proposed the Turing test as a reasonable assay for displaying human-like intelligence. His original test involved an independent judge trying to distinguish between a human and a machine claiming to be human.

This test can take many forms, the most common being one where communication is text-only, over email. I personally think that this is a test rigged in the machine’s favour, since text is a very restrictive format. To take an example to an extreme, suppose that the judge were restricted to the numeric digits 0–9 and the symbols “+” and “−”. This test would not be very useful at demonstrating intelligence, since even a calculator could pass it. Indeed, the first machine to pass the text-over-email test would probably be failed if the judge were allowed to send a picture of a bird with the caption “What is this?”. Failing that, the judge could send an instruction like “draw me a picture of a bird using crayons”.

A universal Turing test should allow the judge to use whatever method he or she likes to try to tell apart human and machine, and such a Turing test obviously cannot be passed yet. The main reason the universal Turing test can’t be passed is the same reason Deep Blue couldn’t play Checkers: until recently, computers could only do specifically what they were told to do. A computer could not draw a bird unless the programmer told it how to draw a bird. But perhaps that is changing, with Murphy’s computer being able to play games Murphy doesn’t even know about. If Murphy’s innovations are adapted to other fields, perhaps a computer will eventually be able to draw a bird after watching a human do it.

If this progress continues, I think it’s certain that computers will eventually pass the universal Turing test, and therefore display human-like intelligence (whether that means they “think” is a question that, as Turing mentioned, is ambiguous and perhaps even more difficult to answer). This answers my question in the affirmative. The remaining hurdle is the one that Murphy has somewhat successfully solved for video games: computers must be able to learn to do things beyond what they are explicitly told how to do.

Works Cited

Murphy, Tom. “The First Level of Super Mario Bros. is Easy with Lexicographic Orderings and Time Travel . . . after that it gets a little tricky.” (2013).

Paquette, Paul G., et al. Philosophy: Questions & Theories. Toronto: McGraw-Hill Ryerson, Limited. Print.