On All Caps

There is just something about all caps that annoys me.

I don’t exactly know what it is. It’s ugly, but not horribly so. It’s harder to read, but not unreadable.

Maybe it seems angry or arrogant, as if screaming. But all caps doesn’t necessarily sound that way in my head. Maybe it’s just entirely superfluous. But a lot of things are.

Regardless of the reason, I’m always annoyed when someone needlessly uses all caps in some form of design. It’s high up there on my list of typography pet peeves.

On Teaching Grade 4 Olympic Math

For several years now I have taught an Olympic Math interest course at the Grand River Chinese School. This is a weekly, one-hour course on topics in mathematics that students might see on contests, but probably not in their regular curriculum.

I have gradually begun to make available the lesson plans I have used for teaching this course. The Number Theory unit, in particular, is in a state that I am very happy with. I hope to upload the homework and quizzes for the classes to that web page soon as well.

In this post, I’d like to reflect on some of the things that I found useful, and on some attempts that did not work well. Finally, I give a proposed new curriculum that I would love to try out next year.

What Worked Well

  • True or false questions. This may seem like a minor point, but I have come to believe that this style of question is ideal for students of this age. True-or-false questions let me ask about mathematical reasoning, not just computation, without forcing the students to write down their justification. This style of question also develops a student’s intuition.
  • Geometry (and measurement). I always start the year off with this unit; it’s very accessible, great for developing intuition, and is also helpful for students who write math contests. Next year I might not start with measurement, but that decision is hard for me to make because of the success this unit has had.

What Didn’t Work Well

  • I tried to teach classical logic to the students about two years ago. It wasn’t a good idea. Students at this age don’t have the experience to understand the rigor involved in classical logic; they need something that is easier to intuit and construct. In the future I’d like to revisit logic, but teach a constructive, intuitionistic logic instead.
  • Algebra. While a lot of teachers do cover this in their curricula, and many of them succeed, I have never really been able to motivate algebra in a way that engages students. It doesn’t help that they will probably cover this subject in school soon anyway. Instead, this year I’m taking the approach of getting them used to variables by using variables throughout the homework, lessons, and quizzes.

The Next Step

My goal for next year is to start and teach just one unit for the four months I’ll be here: discrete mathematics. This will include discussions of sets, multisets, pairs, tuples, and other collections; functions and relations; the proposed revision toward intuitionistic logic; induction and recursion; algorithms and algorithm design; and finally, probability and combinatorics. This might seem like a lot, and indeed it is. But I think this particular ordering of the subjects will allow me to cover them in the four months.

On Alignment Tab Characters

One of my pet peeves is the use of tab characters for alignment. Sure, this might have been acceptable in the early days of computing. But it’s really not a good idea any more.

Tabs display differently depending on the viewer’s settings, since a tab character carries no alignment information of its own.

Rather, tabs should be used for delimitation. For instance, separating the values in a table with tabs is often superior to doing so with commas.
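
As a small illustration of this, here is a sketch using Julia’s standard DelimitedFiles library; the file name and data are made up for the example. Each field is separated by exactly one tab, and the reader’s tool decides how to line up the columns:

using DelimitedFiles

# One tab delimits each field; how the columns line up is the viewer's job.
rows = ["name" "score"; "Ada Lovelace" 91; "Alan Turing" 88]
writedlm("scores.tsv", rows, '\t')       # write the table tab-delimited
table = readdlm("scores.tsv", '\t')      # read the same table back as a matrix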

Tabs should not be used for indentation. This is true of both code and word processor documents. Spaces are much more portable and flexible. The TAB key, on the other hand, is a perfectly reasonable shortcut for an editor to automatically indent code with spaces, or for a word processor to apply the appropriate indented paragraph style.

Multiple tabs should not be used to make tables look “nicer”. It’s the responsibility of the editor to display TSV files in a sane format; using multiple tabs is simply not portable and semantically wrong.

I realize this is a somewhat contentious topic; some people to this day still prefer tabs to spaces for indenting code, for example. My view is that indenting code with tabs is utterly ridiculous, and I don’t see why this is even a debate.

Embracing Sans

I used to be a big fan of serif fonts.

I still am, to a large extent. But over the last few months, I’ve developed a love for sans-serif fonts also.

Sans-serif fonts are attractive because they’re minimal. They’re clean. They’re nice to look at. And once one gets used to them, they’re just as easy to read as serif fonts are.

Serif fonts have their uses. They give a classic look to works. They’re familiar, and they’re readable. At the time of this writing, this blog’s default theme uses Lora, a serif font. But, increasingly, I’ve started to prefer sans-serif for many things. Like my website, which as of this writing uses Ubuntu, a very clean and modern sans-serif font.

Maybe one day this blog will switch too. I’m ready to embrace sans.

AI: You’re Using It

Artificial intelligence is being used every day in today’s society. Many don’t believe it. Many are dismissive of the idea of artificial intelligence. Their argument is often similar in nature to John Searle’s “Chinese Room Argument”, which is commonly stated as follows:

Imagine a native English speaker who knows no Chinese locked in a room full of boxes of Chinese symbols (a database) together with a book of instructions for manipulating the symbols (the program). Imagine that people outside the room send in other Chinese symbols which, unknown to the person in the room, are questions in Chinese (the input). And imagine that by following the instructions in the program the man in the room is able to pass out Chinese symbols which are correct answers to the questions (the output). The program enables the person in the room to pass the Turing Test for understanding Chinese but he does not understand a word of Chinese.

(taken from the Stanford Encyclopedia of Philosophy)

Using this argument against the existence of AI misses the point entirely. John Searle does not argue that we cannot create artificial intelligence. His argument is that an artificial intelligence is inherently different from our own intelligence, which is debatable but much less absurd.

Any object that behaves as though it understands a language would still serve to translate to and from that language. The entire point of the Turing Test is to define intelligence in a reasonable way. The Chinese Room that Searle describes, therefore, is indeed an artificial intelligence.

To Know the Future

The following post is based on a philosophy journal entry I wrote for my Grade 12 Philosophy class, taught by Brian Wildfong.


“It’s impossible to know the future.”

It’s a common cliché that it’s “impossible” to know the future. I don’t agree. For some events, I have strong beliefs about the outcome. When I drop a book, for example, I know that it will fall, even though the event hasn’t happened yet. It would be impossible to live life without being able to predict the future. I know that if I leave the ground, then I will fall back down. If I didn’t know that, then I would risk floating off into space every time I took a step!

I think that the future is at least as knowable as the past. Skeptics may argue that something important could change that invalidates all my predictions. They may contend that if I haven’t seen it happen, then I can’t possibly know that it will happen. But the same skeptics could cast doubt on knowledge of past events as well. Let’s say that I just dropped a book. How do I know that I dropped a book? Maybe my memory is faulty, so I can’t rely on that. Sure, there’s a book on the ground, but maybe someone else put it there, or maybe I’m hallucinating and there isn’t actually a book on the ground. Even an event that happened seconds ago isn’t knowable from a radical skeptic’s perspective.

Such a perspective, in my opinion, is useful only as a thought experiment. It reduces all that one can know to meaningless statements, like “I perceive a book,” or perhaps tautologies like “Either this object exists or it does not exist,” or “If he is real, then he exists.” Some might accept certain innate ideas like “One plus one is two”. And the super-radical skeptic may even dispute the validity of all those statements. What’s the use of knowledge if it can’t apply to the real world, but only some abstract world of Forms, or if it can’t even apply to the world of Forms?

Some would suggest that knowledge implies certain truth, and I disagree with that. I think that requiring certainty for knowledge is absurd. I’m not certain that other people actually exist (maybe this is all a dream), and if someone doesn’t exist then they can’t know anything. I’m not certain that my senses aren’t deceiving me, so I’d have to accept that none of my experiences are knowledge. I’m not certain that I’m sane either, so I can’t accept anything that my logical reasoning suggests is true. Under this definition, there is no knowledge—nobody knows anything. There are already plenty of synonyms for “nothing”, so with that definition “knowledge” becomes just an unfortunate waste of a word.

If it’s probably true (for a very high standard of “probably”), then I would classify it as knowledge. That means that I believe that the future is knowable. I know that there will be a solar eclipse on March 9, 2016 and that it will be seen across Indonesia, because astronomical calculations have shown that. I accept that there’s a non-zero probability that it won’t be true: perhaps the sun will disappear before then, or the astronomical calculations (that have worked for thousands of years) are wrong, or Santa Claus will intervene and prevent the solar eclipse. But then again, maybe I’m just a brain in a vat. Life is too short to consider the non-zero but practically-zero probability that the underlying assumptions I make about the world are false.

This is the interpretation of knowledge accepted by B. F. Skinner, who classified knowledge into three kinds: acquaintance (having experienced an event), description (having read or heard about an event), and prediction (believing that a future event will occur). Skinner accepted that prediction may be the least reliable form of knowledge, but argued that it is in fact the most useful form. Only with prediction can we decide on the best course of action. Many of the major problems plaguing today’s world stem from past mistakes made either because consequences were predicted incorrectly or because they were not predicted at all (Skinner 105).

I know some things about the future, and I think what I know about the future is indeed the most important kind of knowledge. In the end, other forms of knowledge serve as a foundation for the kind of knowledge that helps us make the right choices: knowledge by prediction.

Works Cited

Skinner, B. F. “To know the future.” The Behavior Analyst 13.2 (1990): 103.

Write Once

I am beginning work on a résumé generator that creates beautifully typeset résumés from HTML.

HTML is quickly becoming the standard format for semantic markup. The purpose of semantic markup is to describe content in a way that computers can understand. The natural step, then, is to convert semantic HTML résumés to TeX for typesetting.

This lets a user have both an expanded online résumé and a condensed, traditional paper one.
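
As a rough sketch of the idea, and not the actual generator, a toy converter might map a handful of semantic HTML elements to TeX commands. It is written here in Julia purely for illustration; the function name, the tag-to-command mapping, and the regex-based approach are all hypothetical, and a real implementation would use a proper HTML parser.

# Toy sketch only: map a few semantic HTML tags to TeX commands.
function html_to_tex(html::AbstractString)
    tex = replace(html, r"<h1>(.*?)</h1>"s => s"\\section*{\1}")
    tex = replace(tex, r"<h2>(.*?)</h2>"s => s"\\subsection*{\1}")
    tex = replace(tex, r"<li>(.*?)</li>"s => s"\\item \1")
    tex = replace(tex, "<ul>" => raw"\begin{itemize}")
    tex = replace(tex, "</ul>" => raw"\end{itemize}")
    return tex
end

println(html_to_tex("<h1>Experience</h1><ul><li>Taught Olympic Math</li></ul>"))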

Test to Fail

In most disciplines, tests are important. An untested product is hardly better than no product; untested products are prone to failure.

But not all tests are created equal. The human tendency is to write tests that test for success. For example, suppose I had a routine that checks whether one number is greater than another, and suppose I implemented it as follows:

greater(a, b) = a ≥ b

The error in this code is obvious: I used a greater-than-or-equal sign (“≥”) where I meant to use a greater-than sign (“>”). But in practice, in more complex projects, errors will sneak in. Now imagine that my test suite looked like this:

@test greater(2, 1)
@test greater(1, 0)
@test greater(1, -2)
@test greater(100, -100)

What a comprehensive test suite! Unfortunately, this test suite will let my incorrect implementation pass. Why? Because I never once tested for failure.

Each test was written to see if the function returns the correct result when the first argument is actually greater. The test cases were written with passing in mind, not failing. We have a subconscious tendency to test for success, not failure. Tests for success are useful, but tests for failure are necessary too.
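
For instance, a few tests for cases where the answer should be false would have caught the bug immediately. This is a minimal sketch assuming Julia’s Test framework (which the @test syntax above suggests); the specific cases are my own:

using Test

greater(a, b) = a ≥ b        # the buggy implementation from above

@test !greater(1, 2)         # smaller first argument: should be false
@test !greater(0, 0)         # equal arguments: exactly where ≥ slips through
@test !greater(-100, 100)

The equal-arguments case is the one that exposes the mistake: the buggy implementation returns true for greater(0, 0), so this test fails and reveals the bug.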

Outside of computer science, the same principle applies. Scientists and engineers would benefit from negative results as much as positive ones. In short, give failure a chance.