DevOps – The Last Frontier of IT Operations, or Back to the Future
Keenan Phelan | December 16, 2015
DevOps – Continuous Development, Frequent Releases
The recent DevOps Summit in Santa Clara was an opportunity to listen to some brilliant – and I mean not just smart but off-the-charts brilliant – folks opine on one of the hottest new buzzwords in IT generally, and in IT services for sure: "DevOps". In general, it's defined as the melding of software development and operations, primarily to speed the rollout of new features/functions to the interested enterprise or client. I actually think the best description is "Continuous Development," as this really is what DevOps embodies. One of the simple but profound things I took away from the summit was that DevOps is meant to change our mentality from a big quarterly or even annual X.0 release to a far more frequent, but much smaller, series of releases. It is a powerful concept and widely applicable, although not universally so, despite what some proponents claim (but that is for another blog). In a DevOps world, weekly, even daily, releases are plausible in some cases. So "continuous development" is a far more descriptive term. But DevOps it is.
What occurred to me, though, was how there is a “Back to the Future” aspect to all this.
DevOps from a Network Geek POV
Once upon a time, before 2000, I earned an honest living as a network engineer/analyst analyzing performance issues on mostly local area networks. I always started with the application and data architecture, however, because that is what initiates traffic – data and the apps that serve it. The better I understood where the data was supposed to flow, the faster I could get to root cause, sometimes without my fancy network protocol analyzers. How does this relate to DevOps?
Well, I started by interviewing Dev teams as a way to jumpstart my network troubleshooting, and one of the things I found after talking to many of these teams was that the way they thought the data flowed – or had originally designed it to flow – was often not how it actually flowed now. It may have been initially, but someone, usually in Ops, had changed the code or the location of software/data to fix a problem. I'd often hear from the Ops team, "Well, that is how it used to work, but it hasn't been like that for over a year." So it was the Ops team, not the Dev team, that knew how the application actually worked at the performance and dataflow level. So, what if the Ops folks and Dev folks had collaborated on that change together? Voilà – DevOps! There is a powerful reason to pursue this. I wish I had thought of it then.
Agile Development & Infrastructure Collide
A consistent theme I've seen over the 3+ decades I have been around IT (yes, I still have a box of Hollerith cards with a FORTRAN program I wrote in school) is the abstraction of all things infrastructure away from the people we call developers. Way back when, those who wrote code actually needed to understand the architecture of the machine they were coding for, and the peripherals used by that machine. Assembler programmers knew the memory locations. About as close as we came to a network was the bus of the mainframe and the things attached to it, like tape drives and DASD (if you know what that is, you probably have gray hair like me). Far from wonderful, it meant feature/function development was glacial, with tremendous duplication of knowledge and effort. But we did understand where our code ran and how it was affected by infrastructure. Releases were slow then as well, though more because it took forever to produce something of value.
Since then, nearly every trend in IT has driven less and less connection between the person developing a feature/function and the infrastructure needed to run it. Infrastructure and now "cloud" managers worry about availability, storage, and compute. DBAs worry about data structures. Object orientation and robust middleware packages have removed entire burdens from developers, allowing them to tap into rich repositories of code and generate features/functions at breakneck speed. But will it work in the real world? That is where software development meets Ops, and agile development has really forced the issue.
It is great to develop in weekly sprints, but what Ops team in any organization is geared to do weekly testing? Rapidly developed code sits idle and defeats the very purpose of Agile. Conflict is inevitable. Enter DevOps. Developers can no longer view pre-production testing as someone else's job, nor can Ops view development as a separate part of the IT world. The only way you break this logjam is to get back to a place where the environment in which the code needs to run is directly connected to the folks developing it – DevOps. It sounds like the old mainframe days, but back then you could fit the entire IT team of a Fortune 500 company in a conference room, and we used dial-up lines to attach our terminals.
Just as cloud mimics the old shared environments of big iron, DevOps mimics the far more intimate relationship between Dev and Ops in that environment. The number of people and the amount of technology involved, however, is many orders of magnitude greater, so this is no easy feat. But at least conceptually, DevOps is very doable.
We did it before, only now we have the potential to deliver feature/function at a pace previously thought impossible.