I often tell this horror story when I teach my software development courses. It's a cautionary tale of what one can expect when a company wants to upgrade its systems. And while computers have been a part of our lives for half a century now, the modern philosophy of integrated systems, extensive documentation, and readable code wasn't always a thing, which in turn brings about a challenge we have to contend with, especially in a world of digital acceleration.
Covid-19 wasn't the catalyst that started this trend, but it sure did push things along. As lockdowns happened all around the world, businesses had to quickly adapt and become agile, faster, and virtual, and many of them are now seeing the benefits of adopting new technologies.
Why is legacy technology a thing?
Software developers will hardly find a company that's willing to burn everything to the ground and reengineer its whole infrastructure. Instead, they will most often end up having to work with what's already there, and either improve it or build on top of it. That's where the concept of legacy technology comes in.
I think the simplest way to explain legacy technology is with an example. Did you know that the state of New Jersey had to actively look for COBOL programmers because its systems were getting overburdened during the first days of the pandemic? Just for the record, COBOL is a 61-year-old programming language. Sure, it's still getting updated, but no academic program is going to drop C++ or Python to teach COBOL.
How do we deal with legacy technology?
Unfortunately, there isn't an easy answer. Having to deal with decades-old technology can be seamless or jarring depending on many factors. For example, working with legacy tech that's popular or that has an active community/foundation backing it can be trivially easy. As a friend of mine says, "if more than a hundred people know it, the answer is probably on StackOverflow".
On the other hand, less popular technology can be a pain to work with, as fewer people means fewer developers who have tackled the same implementation issues. Also, small-scale software, especially from before the early 2000s, tends to have very poor documentation, if any at all, so working with it can become a guessing game or a tedious process of reverse engineering.
When dealing with legacy tech, we have two options: we either change it or work with it. Which option to take depends on many factors, including:
- The cost of changing or upgrading legacy tech
- The time spent in development
- Compatibility with the rest of the infrastructure
- The time invested in training people in the new software
- The risks and possible consequences of something going wrong
Once again, there isn't a straightforward answer. You can be dead set on keeping your current infrastructure only to discover halfway through development that there isn't a way to go forward without doing some upgrading.
Remember that a system is only as fast as its slowest piece, so it doesn't matter that you hire a frontend developer to create the best app in the world if the backend is run by a server with the power of a potato.
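To make that bottleneck idea concrete, here's a minimal sketch in Python with made-up component names and latency numbers (purely illustrative, not from any real system): however fast the frontend is, the end-to-end response time is dominated by the slowest link in the chain.

```python
import time

# Hypothetical per-request latencies, in seconds (illustrative numbers only).
COMPONENT_LATENCIES = {
    "shiny_new_frontend": 0.05,   # a fast, modern UI layer
    "api_gateway": 0.10,          # a reasonable middle tier
    "legacy_backend": 3.00,       # the "potato-powered" server
}

def simulate_request() -> float:
    """Simulate one request passing through every component in sequence."""
    start = time.perf_counter()
    for name, latency in COMPONENT_LATENCIES.items():
        time.sleep(latency)  # stand-in for the real work each component does
    return time.perf_counter() - start

if __name__ == "__main__":
    total = simulate_request()
    bottleneck = max(COMPONENT_LATENCIES, key=COMPONENT_LATENCIES.get)
    print(f"End-to-end response time: {total:.2f}s")
    print(f"Slowest piece: {bottleneck} "
          f"({COMPONENT_LATENCIES[bottleneck]:.2f}s of the total)")
```

In this toy setup, upgrading only the frontend shaves off hundredths of a second, while the legacy backend still accounts for almost all of the wait.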
Be critical and be smart
Let me send you off with another story. Another friend of mine teaches at a college that kept backlogging computer upgrades, up to the point that the oldest PCs were running Windows XP and most of the rest were stuck on Windows 7.
They seemed perfectly content, since the teaching staff only had to check emails and write reports. That was until the pandemic hit and they had to move their faculty meetings to Zoom, which simply refused to run on the older computers. Suddenly, backlogging those upgrades backfired spectacularly.