Programming Languages: Is newer always better?

I constantly hear the belief that modern programming languages and environments are better than older programming languages: more productive, easier to use, and so on. It would stand to reason: nobody would make a new programming language with worse features than an already existing programming language. Or would they?

Everyone seems to think this is fact. But surprisingly it’s not. There are many features in older programming languages which are not present in today’s languages. I predict these features will be re-invented by the next generation of programming language authors, and everyone will think they are geniuses for having come up with these ideas. But at the same time those new languages will omit most of the good points of today’s languages. This cycle can go on forever.

It’s like the cycle that plays out between “the network” and “the standalone computer”.

  • Central – IBM used to make mainframe computers, which one would access from terminals, i.e. central computing power, distributed usage.
  • Local – But those computers were slow because they were remote. Then e.g. Sun invented the “workstation”. The PC then followed. Local power to everyone.
  • Central – Then the web happened. Suddenly everything was remote again. “All you need is a browser!”. No local software installation nightmare. (Perhaps) independence from the single operating system vendor.
  • Local – And now “using the web offline” is back in fashion. So that’ll be local computing again then.

A few facts, for those who think there was no programming before JavaScript and the web:

  • 1957 – Fortran released: expressions, variables, loops, subroutines
  • 1959 – LISP released: treating functions as data, enabling higher-order programming
  • 1967 – Simula 67 released: Object-oriented programming

Consider the following:

  • Variable Bounds. Ada, developed for the American military with a high emphasis on program correctness, allows one to define bounds on variables: for example “array with index between 1 and 100”, “between 0 and 10”, or “number not more than 5”. Most variables, in reality, have allowed ranges. Why not express that in the program? It’s more self-documenting, and it allows the run-time, and to an extent the compiler, to check the constraints. (A sketch of the idea follows this list.) Isn’t minimization of bugs something that affects not just the military?
  • Strict typing. If you know an object being passed to a function is a “User”, it’s no good being passed an “Email Address”. The sets of operations those objects can perform are completely different, so even if the programming language is “advanced” enough to accept the parameter, the first method call on the object will fail. Why not express that and let the compiler check it? C++ has been able to do this since 1983, so let’s use that rather than Perl, which can’t. (See the second sketch after this list.) Recently I read an article joking about casting everything to a string, but in reality that’s the default behaviour (in fact the only behaviour) of all scripting languages.
  • Knowing what’s going on. In C, it’s well defined what “0” means or what the string “abc” in a program means, and so on. Ask a C programmer if 0==NULL and ask a PHP programmer if 0==null, and see a) their reaction times, b) whether they’re correct. The C programmer will answer quickly and correctly; the PHP programmer will not. Who do you think writes programs with fewer subtle bugs?
  • Enumerated types. Is a user “active”, “disabled”, “inactive”? Having such options is common to all domains. C has been able to define an enumerated type since ANSI C (1989) and Lisp since 1959. Why did Java have to wait until Java 5.0 (in 2004), and why do we have to create unreadable programs with languages like Ruby which can’t do them at all? For example, what does the call error_log(“user not found”, 2) do in PHP? What does the 2 mean? (An enum sketch follows this list.)
  • No compiler. Every byte in an interpreted language costs time to interpret, so it makes sense to have short variable names and fewer comments, for run-time efficiency. Is this the sort of programming style one should be encouraging?
  • No linker. In a linked language you can build big libraries, and only those functions used by the program (or used by the functions used by the program) will be included in the final executable. In Java, PHP etc., all the code you use is available all the time, taking up memory. I am often criticized for writing “too many libraries”, or code that is “too object-oriented”, in scripting languages, which is a fair criticism, as that code will run slower. However, is it really an improvement to remove this function-pruning feature, so that bad programming practices produce more efficient code?
  • Multiple compile errors. Why do modern programming languages such as PHP only tell you the first error in your program, then abort? This is laziness on the part of the compiler writer. Old compilers tell you all the errors in your program, so you can correct them all, without having to correct one, retry, correct the next one, retry, and so on.
  • Formatted strings. There is nothing wrong with the format concept behind C’s “sprintf”, originating from 1972. You can print numbers and strings, specify precision, field length and so on (apart from the inability to reorder parameters). Why did C++ introduce the “<<” notation? (At least you can still use printf in C++.) Why was this re-invented, worse, in .NET? Why did Java have to wait until Java 5.0 to get this feature? Why do we have to reinvent the wheel, worse, all the time? (A printf sketch follows this list.)
  • Auto-creation of variables. When programming languages like C were created, the authors made the decision that it was an error to use a variable without declaring it. This caught all sorts of errors, such as misspellings of variable names. Why have these decisions been forgotten, so that every scripting language lets you use variables without declaring them? This means hours of searching for bugs when you simply misspell a variable name, something that’s going to happen to everyone at some point. We’re only human, and we have to take that into account. (The last sketch after this list shows the difference.)
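
As a rough illustration of the variable-bounds point: Ada expresses the allowed range directly in the type, and the compiler and run-time enforce it. The following is only a minimal C++ sketch of the same idea; the Bounded class and its run-time check are invented here for illustration, and don’t give Ada’s compile-time guarantees.

    #include <stdexcept>

    // Hypothetical bounded-integer wrapper: the allowed range is part of
    // the type, so every assignment is checked. Ada does this natively;
    // this sketch only approximates it with a run-time check.
    template <int Min, int Max>
    class Bounded {
    public:
        Bounded(int v) : value_(check(v)) {}
        Bounded& operator=(int v) { value_ = check(v); return *this; }
        operator int() const { return value_; }
    private:
        static int check(int v) {
            if (v < Min || v > Max)
                throw std::out_of_range("value outside declared bounds");
            return v;
        }
        int value_;
    };

    int main() {
        Bounded<0, 10> score = 5;   // fine
        score = 11;                 // rejected: throws at run time
    }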
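
The strict-typing point as a minimal C++ sketch; the User and EmailAddress types and the disableUser function are invented purely for illustration.

    #include <string>

    struct EmailAddress { std::string value; };
    struct User         { std::string name; };

    void disableUser(const User&) { /* ... */ }

    int main() {
        EmailAddress e{"bob@example.com"};
        // disableUser(e);   // compile error: an EmailAddress is not a User
        User u{"Bob"};
        disableUser(u);      // fine: the compiler has checked the type
        return 0;
    }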
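
The enumerated-types point, again as a minimal C++ sketch (the status names are made up); the same enum construct is available in ANSI C.

    #include <cstdio>

    // The set of legal states is spelled out once, in one place.
    enum UserStatus { Active, Disabled, Inactive };

    void logStatus(UserStatus s) {
        switch (s) {
            case Active:   std::puts("active");   break;
            case Disabled: std::puts("disabled"); break;
            case Inactive: std::puts("inactive"); break;
        }
    }

    int main() {
        logStatus(Disabled);   // readable at the call site
        // logStatus(2);       // rejected by the compiler: 2 is not a UserStatus
        return 0;
    }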
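
The formatted-strings point: a short example of printf-style formatting next to the stream notation (the variable names are made up).

    #include <cstdio>
    #include <iomanip>
    #include <iostream>

    int main() {
        const char* user = "bob";
        double balance = 1234.5;

        // printf: field width, precision and type are all visible in one place.
        std::printf("%-8s %10.2f\n", user, balance);

        // The stream equivalent spreads manipulators through the expression.
        std::cout << std::left  << std::setw(8)  << user
                  << std::right << std::setw(10) << std::fixed
                  << std::setprecision(2) << balance << '\n';
        return 0;
    }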
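
Finally, the misspelled-variable case from the last point, as a C/C++ compiler sees it (accountBalance is an invented name):

    int main() {
        int accountBalance = 100;
        // acountBalance = 50;   // misspelled: a C/C++ compiler rejects this
        //                       // ("acountBalance was not declared"), while a
        //                       // typical scripting language silently creates
        //                       // a brand-new variable instead.
        accountBalance = 50;
        return 0;
    }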

The above is a list of things that have got worse over the last two decades; they haven’t merely failed to improve by staying the same, they have actually got worse.

16 Responses to “Programming Languages: Is newer always better?”

  1. Dave Doyle Says:

    Well, it seems to me you just have a basic disagreement with how “new” languages do things. I agree that just because a language is old it doesn’t make newer ones better, just different. That being said, as a Perl programmer, I wouldn’t call it new (20 years old), but I wouldn’t call it old either. So, I’ll look at your post with regard to Perl and perhaps some other side comments.

    Variable bounds: You obviously want the compiler to handle this. Perl itself was made to be very flexible (and as such you _can_ shoot yourself in the foot) but you can easily add modules and such that enforce boundaries when accepting arguments. No, the compiler won’t catch it, but it’s not hard to add checks. It’d be nice if we could have something enforced at compile time in some cases, I admit that, but I don’t see it as a deal breaker for me.

    Strict typing: Once again, this seems to be a fundamental difference of opinion. No, Perl isn’t strictly typed and can’t do what you’re saying. But once again, you can check things. You can validate that an Object is a particular class or descendant of a particular class. As with the variable bounds, you can validate your data. It does make prototyping a bit easier for me though. You can, of course, shoot yourself in the foot, but it does give you some flexibility.

    Knowing what’s going on: This feels more like an argument that everything should be the same in all languages. Is that your meaning? Everyone has to get to know the idiosyncrasies of their chosen language anyhow. I like that in Perl 0, ‘0’, ‘’, and undef all evaluate as false in a boolean context. Saves me some time and I don’t have to change some logic if something was going to be a string but is now a number. I’d argue that your example is still pointing to Static vs. Dynamic typing though.

    Enumerated types: Again, you can put checks in. Perl has modules that allow you to validate. Again, not a big deal to me but your mileage may vary.

    No Compiler: Every byte does take time to interpret, but against the running time of a program I don’t see this as a reason to minimize variable names. I try to make mine descriptive and I disagree that this is a cultural aspect of scripting languages. I’ve never heard of Perl, Python, Ruby or PHP programmers doing this. Where are you seeing this? Further, if it’s a webapp, there are tons of environments where the script is loaded into memory and not re-compiled time and time again. I don’t feel this is a persuasive argument.

    No linker: Scripting languages tend to have a different focus though. I don’t write massive GUI apps in Perl. I would hazard most don’t in other scripting languages. Memory can be an issue in all kinds of applications. In my experience, it’s not the libraries that are the problem, but the way in which information is stored.

    Multiple Compiler errors: In Perl’s case, one compilation error can drastically affect how the rest of the code is interpreted. It does attempt to tell you other errors, but often fixing the first changes or drops the following errors. I remember fondly the multiple errors that my good ole Borland Turbo Pascal would show me, but the reality for Perl is that this isn’t possible.

    Formatted strings: I can’t speak to the other languages but I still love me my sprintf in Perl. Still works like you expect. But what’s wrong with re-invention? It won’t always work but you can still try something new.

    Auto-creation of variables: I agree with you on this one. It should be noted this is considered horribly bad practice in Perl now. Adding one line, “use strict;”, stops this from happening and every program I write begins with that. I think the PHP folk have long since started declaring and initializing variables for the most part. So it didn’t work.

    I’d argue that anyone saying their language is better is selling something. They’re different. They have different philosophies. If it’s Turing-complete, your language is ultimately fine.

  2. Robin Salih Says:

    I’m just going to comment on the last point, namely auto-creation of variables. I cannot understand why this would ever be preferable to having to explicitly declare every variable. I mean, writing a = “foo” is not any easier than writing string a = “foo”: no more lines of code or anything, and you know a is a string, not an object or anything else that can be constructed from a string (think C++, where lots of implicit casting goes on).

  3. Timbo Says:

    I agree with pretty much all of Adrian’s points. I still use perl a lot though because, as a sysadmin, my main programming jobs are just little scripts here and there that I need to knock up quite quickly. So shell and perl are (designed) for that.

    Saying that though, when I write any perl script the 1st line of code is always the same…. “use strict;”

  4. Casper Says:

    I sometimes have a hard time explaining to my boss why a full blown Java stack involving Hibernate, JPA query language, Java, expression language, JSF, XML, HTML, CSS, JavaScript + gazillion support libraries and tools, is superior in productivity to their old Clipper all-in-one “stack”.

    So I’d say that newer languages aim to be better, but how you define “better” is entirely subjective to whichever quality attributes you hold dear. I find it interesting, though, that some of the newer elements of e.g. LINQ actually draw in much of the old stuff (lambda expressions) and result in stacks with more in common with the aforementioned Clipper all-in-one stack than with a complex, highly-configurable, 7-storied JEE stack.

  5. Rafal Dowgird Says:

    “Every byte in an interpreted language costs time to interpret. So it makes sense to have short variable names, fewer comments, for run-time efficiency.”

    What? Are you suggesting that a comment in a procedure is parsed every time the procedure executes? I don’t know a single interpreted language implementation that would do that. The only exception are calls to “eval” or similar functions.

  6. John Says:

    RE: Formatted Strings — I went through a long period of time wondering this myself. I thought sprintf was good enough all these years, why should I bother with iostreams. Well, I experienced one too many crashes from the simple error of mismatching the printf format specifier with argument type (%s -> int). These instances usually occur in logging statements that you don’t always encounter in normal code paths. This problem goes away completely with iostreams, as the most important benefit is type safety.
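
    For example, a rough sketch of the kind of mismatch I mean (the variable names are made up):

        #include <cstdio>
        #include <iostream>

        int main() {
            int userId = 42;

            // printf trusts the format string: "%s" given an int compiles
            // (with at most a warning) and can crash at run time.
            // std::printf("user %s not found\n", userId);

            // iostreams pick the overload from the argument's type, so this
            // mismatch simply can't happen.
            std::cout << "user " << userId << " not found\n";
            return 0;
        }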

  7. Ronie Uliana Says:

    IMHO two factors contribute to this “back and forth” in programming languages:

    1 – There are so many new technologies that we spend less and less energy studying the past.

    2 – Several features are dropped from new languages because the designers consider them “very dangerous, no _real_ programmer would ever use that”. As that’s a matter of opinion, we lose several powerful features just because they are… hmm… powerful. For example: GOTOs and Multiple Inheritance.

  8. cremes Says:

    Wow, while I agree with the thesis, I disagree with almost every example. Let’s take each one in turn.

    Variable Bounds
    You really have two components here. One is primitives like integers, floats and byte arrays. The second is objects which reflect higher-level abstractions like strings, byte arrays (again), hashes, etc. I don’t think variable checks are all that useful as a feature of the language. This should be part of the library of higher-level functions. In any OOP language (including Simula) this can easily be supported, and sometimes is.

    Strict typing
    This is really a static vs dynamic typing argument. You want the compiler to check that a method can only receive an object of type SomeObject, while I want any method to be able to receive any object as long as it responds to (or has the same interface as) SomeObject. Lisp (old), Smalltalk (old), Perl (new), Ruby (new), Python (new), etc. all have this dynamic feature and benefit greatly from it. C, Fortran, C++, Java and Pascal require static definitions and suffer greatly for it. C++ (again) and Java (again) have templates/generics to fake this kind of feature and suffer horribly for it. OCaml allows either and therefore has a leg up on both camps by giving the developer the greatest flexibility.

    Knowing what’s going on
    This is a terrible example. You are really arguing that PHP programmers don’t know how their language works while C programmers do. This is a horribly wrong-headed assertion. How about I counter your straw man with one of my own. I know plenty of new (as of the last 5 years) C programmers who have no idea that 0 is equivalent to NULL.

    Enumerated types
    This is a great feature modern-day languages have, though maybe it isn’t called “enumerated type.” Ruby has symbols, so you can say your types are :hot, :warm, :lukewarm, :cold. These symbols mean the same thing everywhere. To use your PHP example in Ruby, how about error_log(“user not found”, :user_not_found). In this example, you don’t know the languages you are criticizing.

    No compiler
    Please point me to a modern language that is slower with longer variable and method names. Ruby, Perl, Python, OCaml and Erlang all “compile” the code to an intermediate form (bytecodes) and then execute those. Long variable names or method names have no performance impact except during this “compile” stage, when the language is parsed. *Every* language bears this cost because they *all* have to parse the code at some point, to turn it into either bytecodes or machine code.

    No linker
    Your argument here is about memory footprint. This is a total non-starter on any modern operating system that does demand paging. If huge sections of your ruby/perl/python/whatever library are not used, the OS will never page them into RAM. If I am misreading this and you are really arguing over disk space, then let’s just agree to disagree.

    Multiple compile errors
    This is a criticism of coding style. Apparently you think programmers who used these older languages sat down and spewed forth entire programs in a single sitting and then debugged them in one shot after getting a list of syntax errors. I prefer to write a test, watch it fail, write the code to make it pass. My programs generally never have more than a single error. But that’s just my style; use whatever you like but don’t blame the language if your style doesn’t fit it.

    Formatted strings
    This isn’t a language feature. It’s a feature of a language’s library. Any language, no matter how old or new, can easily support “sprintf” semantics including C++. You don’t have to use “<<” if it doesn’t have the feature you need.

    Auto-creation of variables
    This is another style criticism like your “Multiple compile errors” criticism. Let’s agree to disagree. I’m never more than one test away from discovering a misspelling which is equivalent to the compiler spitting out the error.

  9. Bobo the Sperm Whale Says:

    Also, dynamic typing is pointless and should be banished to the nether realms from whence it came. I’ve never come across any instance where I suddenly need to change the type of a variable in the middle of an algorithm. I don’t even think such a situation exists in the Real World(tm).

  11. Helge Says:

    I just had a discussion with Max about your posting (I’m sure he’ll still post his thoughts on it) – and we came to a slightly different conclusion.

    I don’t know enough about programming languages, so I’ll shut up on your main point. As to your example with network vs. standalone computer: This is a very effective and productive cycle, it’s not the same things coming back but technologies being built on top of each other.

    The terminals of the early mainframes were entirely dumb. The workstations that came next were an added bonus to those mainframes, an *addition*. The notion of “the web is the computer” enables us to do the same things as locally, but collaboratively – a real *addition* in value again. (Note that these “terminals” of today are not dumb any more; they interpret (HTML/CSS) and run (JavaScript) web applications locally.) But one wants to be able to use these services (great precisely because they are collaborative) even when, say, on a plane. So offline versions are being added. A real *addition* in value again.

    Every step in your cycle is a real addition to the previous one, and is built on top of it. Standing on the shoulders of giants, as Newton said I believe.

    Bottom line: If your main argument follows your central-local-central metaphor then re-inventing programming languages from time to time is a very good, productive and innovative thing.

  12. adrian Says:

    My responses here:

    http://www.databasesandlife.com/programming-languages-is-newer-always-better-part-2/

    @Helge: Response coming in a future post.

  13. Doug @ Straw Dogs Says:

    I think cremes has hit the nail on the head. I agreed with the original post’s point until it descended into petty, ignorant criticisms of languages and methods.

    “In this example, you don’t know the languages you are criticizing.” – cremes

    I think that goes for many of the examples given.

  14. Fortis Says:

    Bobo the Sperm Whale, really, no use for that whatsoever? You just knocked out one of the pillars of object-oriented programming: polymorphism. Perhaps you’ve never come across the need for inheritance or encapsulation either.

  15. Anon Says:

    Ruby has symbols which are the preferred way of doing enumeration.

  16. Nick Says:

    Basically, you’re right, though a few of the points are debatable. The example that I agree with most strongly is the last, auto-creation of variables. Didn’t people learn in about 1972 that auto-creation of variables is a Very Bad Thing? Every good Perl programmer I know puts “use strict” at the beginning of all executable code (which forces declaration of all variables). So why doesn’t PHP have “use strict”? If you ask this question on the PHP forums, you get answers that reveal the responders just don’t understand the issue.
