I have a lot of friends who do the vast majority of their programming in new-style languages like Python and Ruby, and one consequence is that I occasionally get talks tossed back my way via blogs, Twitter, or whatever. Sometimes they're interesting, but other times they say things I largely disagree with.

One of the refrains I hear quite a bit is that old-style languages (really mostly Java and C++) are too verbose. That you have to type things like this all over the place:

 ArrayList<Integer> intArr = new ArrayList<Integer>();

The claim is that this is annoying because (1) you've had to type the exact same thing twice and (2) you've been forced to specify the type in very specific terms when it isn't necessary.

While both of these things are true on the face of it, the complaint somewhat misses the point. Not all verbosity is mere annoyance. A lot of it has value precisely because it forces you to type what you think you mean several times. If you type something different one of those times, maybe it's a typo, but a surprising amount of the time, at least for me, it's a bug in the logic in my head.

The fact that these languages give you ways to specify what you mean multiple times and then check those specifications against each other isn't a bug, it's a feature. The type checker is your friend, strong types are your friend, and a bit of redundancy in specification (especially when a decent IDE helps manage it) can be your friend. They all help turn hard-to-find bugs in your logic into easy-to-find bugs that the compiler catches for you.
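As a small illustration (the names here are made up for the example): the declared type on one side and the actual value on the other are exactly the kind of redundancy that gets cross-checked for you.

```java
import java.util.ArrayList;
import java.util.List;

public class RedundancyDemo {
    // The declared return type and the value actually returned are specified
    // separately; if they ever disagree, the compiler complains immediately.
    static List<Integer> portNumbers() {
        ArrayList<Integer> ports = new ArrayList<Integer>();
        ports.add(80);
        ports.add(443);
        return ports;
    }

    public static void main(String[] args) {
        List<Integer> ports = portNumbers();
        // ports.add("http");  // won't compile: a String isn't an Integer,
        //                     // so the slip in my head never reaches runtime
        System.out.println(ports.size()); // prints 2
    }
}
```

The uncommented code is the boring, working case; the commented-out line is the point, since that's the mistake the redundancy catches for free.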

Sure, this isn't true without limit, and there are times when you really are doing something stupidly simple and could do without typing the same long type specifier twice. But more often, I find I'm doing something mildly complicated that involves code spanning a few different files, and it's very helpful to have a bit of redundancy tell me when I've screwed something up, rather than merely assuming I'm obviously god's own coding ninja and thus knew exactly what I was doing at every point in time.

Another minor point came up in a PyCon video a friend posted. You can find it here. It spends 30 minutes arguing two points:

  1. You shouldn’t use a special class if (a) a simple function would do or (b) if one of the base classes would do.
  2. You shouldn’t use your own errors or exceptions, which is really a special case of 1(b).

The core point seems to be that your own stuff is likely to be hard for others (or even you, later on) to read or understand, while the core/base classes are already understood by everyone. In other words, avoid unnecessary layers of indirection.
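For concreteness, the pattern point 2 argues against, a thin exception subclass that adds no behavior of its own, looks something like this (the name is invented for illustration):

```java
// A thin exception subclass: no new fields, no new behavior, just a more
// specific type. The talk's advice is to throw a built-in exception instead;
// the counterargument is that the distinct type itself carries information
// and lets callers catch this failure separately from everything else.
public class SwitchNotFoundException extends RuntimeException {
    public SwitchNotFoundException(String message) {
        super(message);
    }
}
```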

Again, while this is probably true in some cases, and likely quite true in the bite-size examples that fit nicely into a 30-minute talk slot, it misses a lot of the reasons why people do it. Hint: it's not because we've all been brainwashed by Java and CS curricula, and claiming so doesn't help you make any of your points.

As one example, I often use “empty” classes which either wrap or simply subclass an existing base class as a way of leveraging the type system to tell me what “kind” of set or what “kind” of long I have. For instance, in code I'm writing right now, we use a long identifier to refer to both switches and hosts in a network, but it's useful for me to know which is which, so I have two classes which are both basically just a Long in Java. That means I can write functions which only accept one or the other and get an error when I do something I didn't mean to. It's a very useful way of turning logic bugs in my head into something the compiler can check.
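A sketch of what I mean (class and method names are invented for illustration, not the actual code): two tiny wrappers around a long, so that switch IDs and host IDs can never be silently confused.

```java
public class IdDemo {
    // Both are "basically just a Long", but now the type system
    // knows the difference between them.
    static final class SwitchId {
        final long value;
        SwitchId(long value) { this.value = value; }
    }

    static final class HostId {
        final long value;
        HostId(long value) { this.value = value; }
    }

    // This function can only ever be handed a switch identifier.
    static String describeSwitch(SwitchId id) {
        return "switch-" + id.value;
    }

    public static void main(String[] args) {
        SwitchId sw = new SwitchId(42L);
        HostId host = new HostId(42L);
        System.out.println(describeSwitch(sw)); // prints switch-42
        // describeSwitch(host);  // won't compile: a HostId isn't a SwitchId,
        //                        // even though both wrap the same long value
    }
}
```

The cost is a few lines of boilerplate per identifier type; the payoff is that passing a host where a switch belongs becomes a compile error instead of a bug you hunt for at 2am.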

In any event, I think we would do well to at least cast old-style and new-style languages as two points on a spectrum, each with advantages and disadvantages, rather than simply looking at the new-style languages as having vanquished the obvious stupidities of the old-style ones.
