This will be my first-ever T-SQL Tuesday post. Since I recently restarted my blog, it makes great sense to incorporate T-SQL Tuesday into my postings.
Our host for this month’s T-SQL Tuesday is Andy Leonard (blog|twitter). He’d like us to discuss how we handle changes in technology, particularly unexpected changes.
As I work at Microsoft, technology change is a way of life. We literally create the change we want to see in the world. As a service engineer, I am constantly handed new technology, often in an alpha or beta state. That means tech that doesn’t work right, or doesn’t do everything it is supposed to, yet.
When I first started at MS, I immediately joined the beta tester group for Windows, running builds you could only get if you were inside Microsoft. After a while I had to stop, as the builds were hampering my productivity. Fifteen years later, the pace has accelerated to the point that the entire company is, unintentionally, back on that team.
So how do I manage change? I go through a three-step process to quickly evaluate, categorize, and handle tech change as it comes in.
1: What is the purported purpose of the new tech? What is its philosophy, or viewpoint? Do I see things the way it sees them?
Two examples: PowerShell was a no-brainer for a tech like me, practically begging for a feature-rich scripting environment to help get my work done. System Center Operations Manager (SCOM) takes a services approach to what has typically been a server-centric environment for an IT team.
2: What does it actually do? What are the limitations, features that work, features that don’t work, and do they add to my toolbox, or do I have to reconfigure everything around the new tech?
Let’s take PowerShell. When it first came out, it was definitely lacking a lot of cmdlets and features, but the fact that it was built on .NET, and could call into it with a simple declaration, meant that it was instantly extensible. It didn’t always work as planned, but oh boy, I could see where it was going.
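That extensibility is the whole point. Here is a minimal sketch of what calling into .NET from PowerShell looks like (the `DiskMath` class and its numbers are my own hypothetical example, not anything from a real toolset):

```powershell
# .NET framework types are available directly from the shell, no compile step
[System.IO.Path]::GetExtension('C:\data\report.csv')   # returns ".csv"

# Add-Type declares new .NET code inline; here, a tiny C# helper class
Add-Type -TypeDefinition @"
public static class DiskMath {
    public static double BytesToGB(long bytes) {
        return bytes / 1073741824.0;   // 1024^3
    }
}
"@

# The compiled type is immediately usable like any built-in
[DiskMath]::BytesToGB(5368709120)   # returns 5
```

Even in version 1, if a cmdlet didn’t exist for something, the underlying framework was one `Add-Type` away.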
SCOM, on the other hand, meant I had to learn either a third-party tool, XML, or an inconsistent console interface in order to create what I needed. There was lots of extensibility built in, but also the potential for dependencies that could rapidly get out of date. If your systems are relatively static, it makes great sense. Our environment is fairly dynamic, and it required constant care and feeding.
3: Is it mandated that I use it? Does management have a business need for this change? Or is it being driven by the end users? It matters at MS, because ICs (individual contributors) drive a lot of use cases for tech. Management is fine with whatever we use in most cases, as long as we don’t have a competing tool. Politics is going to play a part in whatever we use.
I still have SQL Server 2005 in my environment because we have tooling that can’t be upgraded. Management needs it working, but won’t invest in what it takes to upgrade the code. That’s not a criticism, as it is a reality across all industries.
One other aspect of technology change is technical debt. Because we are constantly running, technical debt always looms large. At some point we ignore it, scrap everything, and rewrite, creating new technical debt in the process. It’s a never-ending cycle, but one we are aware of. Legacy systems hang around far too long and never get addressed until there is a catastrophic failure. And this is in an org that has shifted from reactive to proactive over the past decade; I am really proud of all the engineers I work with for moving us there. We never truly get rid of the debt looming over us. We merely trade it for another set of debts.
My team is constantly testing new technology, and incorporating it where it makes sense, both from a maintenance and business perspective. Remember, the business needs come first, and that will dictate how fast technology changes where you’re working.
Have a great Tuesday!