I recently picked up 7 Powers: The Foundations of Business Strategy by Hamilton Helmer (based on this glowing recommendation). The book analyzes several different forces (the author calls them “Powers”) that lead businesses to long-term strategic advantages, like economies of scale, network effects, and high switching costs. These simultaneously give a business an economic benefit and create a barrier against replication by potential competitors.
I thought it’d be interesting to analyze programming language adoption through this lens. I’ll list all of the “Powers” and then go into how each applies to programming language adoption.
- Network effects
- Counter-positioning
- Economies of scale
- Switching costs
- Cornered resources
- Process power
- Branding power
Network Effects
Network effects are the dominant force in programming language adoption and the tech industry at large. Their power is the reason that new products “need to be 10x better” to displace incumbents. Every single thing about using a programming language gets easier as it grows:
- Jobs: companies want languages with deep talent pools, devs choose languages where they can get hired more easily.
- Ecosystem: devs want to learn languages with many libraries and frameworks, people build more libraries in popular languages.
- Learning materials: devs want languages with great documentation, Stack Overflow answers, etc. The larger a language gets, the more blog posts and learning materials get created.
Counter-Positioning: Flipping the Script on Network Effects
This is a new force the author named. The idea is to build something that an incumbent cannot because it would simply be too unattractive for them to enter the market. It can be for any reason: because that would cannibalize an existing product line, hurt the company’s high-end reputation, the heads of the company only have expertise in the old product, etc.
It’s similar to “disruptive innovation” as described in the Innovator’s Dilemma, where for example Kodak completely missed the shift to digital cameras because they were making so much money selling film, but it’s a little broader.
Language design is a potent source of counter-positioning. Languages need to maintain strict backwards compatibility: once a language has syntax/support for something, it is nearly impossible to remove the feature or significantly change it, and new entrants can use this to their advantage.
Fewer Features as a Feature
Rust has the majority of the benefits of C++ (performant, no GC, “close” to the metal), but is a much smaller language compared to C++’s kitchen sink (filled to the brim with footguns). I described it to someone once as “the good 20% of C++”. By having fewer features that compose well, Rust is easier to learn and feels better to use. C++ can’t compete with this approach because that would mean literally cutting 80% of the language out.
A Better Interface for Fundamental Features
Go built concurrency primitives (goroutines and channels) elegantly into the language itself. Every existing programming language had concurrency features too, but Go’s model was much easier to use. Because these features are so fundamental, existing languages can’t reinvent themselves to support new models.
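As an illustrative sketch (my example, not from the post): goroutines and channels come from the language itself, while `sync.WaitGroup` comes from the standard library, and together they make fan-out/fan-in almost trivial. The `squareAll` helper below is hypothetical.

```go
package main

import (
	"fmt"
	"sync"
)

// squareAll fans work out to one goroutine per input and
// collects the results over a buffered channel.
func squareAll(nums []int) []int {
	results := make(chan int, len(nums))
	var wg sync.WaitGroup
	for _, n := range nums {
		wg.Add(1)
		go func(n int) { // each unit of work runs concurrently
			defer wg.Done()
			results <- n * n
		}(n)
	}
	wg.Wait()      // block until every goroutine has finished
	close(results) // safe to close: no more sends can happen
	out := make([]int, 0, len(nums))
	for r := range results {
		out = append(out, r)
	}
	return out
}

func main() {
	fmt.Println(squareAll([]int{1, 2, 3})) // order may vary
}
```

Expressing the same pattern in, say, pre-C++11 threads or Java’s early concurrency APIs takes far more ceremony, which is the ease-of-use gap the paragraph above describes.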
Languages do eventually change, and a better design is not a lasting advantage, but depending on the feature the better design can remain an advantage for a long time.
Economies of Scale
Software products can often have tremendous economies of scale, since the marginal cost of selling and distributing the product is zero.
But the “programming language as a business” metaphor breaks down a bit here and I don’t think this force applies. Language creators don’t get paid with increased adoption and aren’t trying to offset the costs of creating the language with it.
Even businesses that create and charge money for their languages benefit less from economies of scale than from the network effects that come with increased adoption.
Switching Costs

This is a massive force. Even for small projects and individual devs, changing programming languages is a risky and time-consuming endeavour that is rarely done without huge reasons. For larger entities like companies, once they’re invested in a language, they can almost never switch. They paid $XM over several years for an app that breaks even; why spend $YM and 6-24 months of time to rewrite it in a new language, even if Y < X?
There are too many examples to list here, but a couple are:
- The IRS has 200k lines of assembly code running one of its key systems that it has tried several times to replace.
- Government payroll systems running off COBOL, despite how thin the labor pool is for that language.
- Instagram is a Python+Django app, which is unusual for their scale, but they will likely never switch off Python.
- Facebook has so much working PHP code that they cannot replace it. They invented a custom runtime (HHVM) and a dialect of PHP (Hack) to work around the parts of the language they didn’t like.
Process Power

Process power is an accumulated improvement in a company’s process such that it undergoes a phase change. The process becomes both clearly better than the competition’s, and also so different that competitors can’t easily understand how it works and copy it. The example given is the Toyota Production System, which enabled Toyota to produce cars faster, cheaper, and more reliably than its American competitors. The competitors were getting beaten in the market by Toyota’s cheaper and more reliable cars, but still took decades to copy the processes, despite being able to visit the Japanese factories.
This force is the most confusing to me, and I don’t see it applying to programming languages. Programming languages aren’t manufactured, and they all use similar processes in their creation. It is difficult to have a good language design process with thoughtful design and community input, but this rarely leads to paradigmatic changes.
And small process innovations eventually get copied and adopted by other projects. Believe it or not, “don’t allow commits to merge unless CI passes” was far from a widespread practice when the Rust project created bors (see The Not Rocket Science Rule), but today it’s ubiquitous.
Branding Power

The author defines branding power as a long-term, accumulated reputation that gives consumers good feelings about your company. He claims it doesn’t work as well in B2B sales, since a lot of businesses buy things by comparing features in a spreadsheet. I agree: while a few companies have managed to build B2B reputations (you can’t get fired for buying IBM/Cisco/SAP/etc.), it’s a much smaller set than in B2C.

The vast majority of programming language adoption happens at for-profit companies, which makes programming languages more like B2B tools, and branding has almost no lasting power in programming language adoption.
Ruby vs Python
The weak impact of branding can be illustrated by the story of Python and Ruby in the late aughts. Back then they were neck and neck: as scripting languages they appealed to a similar user base, and they were similar in age, ecosystem, and syntax (which was almost transferable between the two). On the internet it was common to see people asking “I want to learn a scripting language, should I choose Ruby or Python?” Ruby had an advantage with Rails, but Python’s Django was an ~ok competitor. Python had a slightly wider ecosystem and a burgeoning data analysis ecosystem.
At the time, Ruby had a better reputation than Python. All the Software Craft people loooved Ruby for its expressiveness and simplicity. Its creator Matz emphasized how Ruby was built to be an enjoyable programming language. Ruby had a more “natural” feel than Python and a better reputation, but that didn’t help it much, and in the end Python’s humongous ecosystem won out. Now people only learn Ruby to write Rails sites.
This isn’t to say that the reputation of a language can’t affect its trajectory.
Short Term Advantages Can Build Up
In the last part of the book, the author acknowledges that static advantages like a large installed user base are built up over time, specifically through short-term “dynamic” advantages. Ergonomics, libraries, and new features can all be copied in the long term, and hype always dies down, but having (or not having) these at key moments can make or break a language.
For example, early hype around a new language or framework is one of the most powerful ways to build up a user base large enough to overcome network effects and switching costs: Rust’s cult following really helped against C++’s ~30-year head start.
PHP benefitted greatly from being the P in the LAMP stack around the turn of the millennium (kind of a cornered resource), but it built up a bad reputation for making it too easy to accidentally add SQL injection and other bugs to your website. Devs eventually abandoned PHP for Rails and Node.js, and never came back even after the language addressed those problems.
The rules for business and programming language adoption are a little different, and a more specialized book could probably be written about the latter. But summarizing the dynamics as “network effects and switching costs trump everything, with the occasional cornered resource” is not inaccurate, so I give props to the author’s framework, which he emphasizes was created to be simple but not simplistic.
Someone could go into much more detail about the specific forces that apply to programming language adoption (e.g. community, performance, programming-model fashions like OOP and functional programming), but overall I enjoyed the book and enjoyed applying its framework.