Do Languages Matter?
I started with database ‘languages’ twenty-eight years ago. My experience in those days was that when you got a new package or solution, you were going to learn a new language. A language was part of the experience, not a separate piece of the puzzle. If you look back at how Paradox, dBase and others worked, the language was simply ‘in the box’.
Of course this wasn’t true of lower-level languages like C and C++, which were ubiquitous. The difference then was the steep productivity gap between something like C++ and Delphi. Enterprise developers especially seemed to use high-level systems knowing that they were trading performance for productivity.
In the more recent past, the world divided into large silos of languages bound to their vendors. The C#/VB/.NET stack against Java/JVM was the story for ten years in many circles. The introduction of new combinations (e.g. Ruby/RoR) from the open source world didn’t seem to change this much. Even the recent Obj-C/Swift/iOS stack is a continuation of this trend.
What I find so interesting is that developers seem to be becoming more entrenched in their respective vendor ecosystems (and usually their languages too). Many people self-identify as Java developers, .NET developers or Ruby people. Even though the language is our first experience with an ecosystem, in my opinion the underlying plumbing affects how we create software much more than the language does, especially since many of these languages have more in common than they have differences.
Finally, when I embraced Node.js, it changed the way I wrote my ASP.NET code. Thinking in terms of asynchrony was no longer limited to Windows Services or other specialized server code. I discovered the usefulness of being asynchronous by default in ASP.NET MVC and Web API. Sure, IIS can be tamed by throwing hardware at it, but why should it have to be?
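As a minimal sketch of what “asynchronous by default” looks like in Web API (the controller name and downstream URL here are hypothetical examples, not from anything above), an async action frees the IIS thread back to the pool while an I/O call is in flight instead of blocking on it:

    using System.Net.Http;
    using System.Threading.Tasks;
    using System.Web.Http;

    public class OrdersController : ApiController
    {
        // HttpClient is intended to be shared, not created per request.
        private static readonly HttpClient Client = new HttpClient();

        // Async action: awaiting the downstream call returns the
        // request thread to the pool instead of holding it hostage.
        public async Task<IHttpActionResult> Get(int id)
        {
            var body = await Client.GetStringAsync(
                "https://example.com/api/orders/" + id);
            return Ok(body);
        }
    }

The payoff is throughput under load: the same thread pool can keep many more requests in flight, which is exactly the Node.js mindset carried over to ASP.NET.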
Do you agree with me?