By Don MacVittie
April 3, 2016 10:00 AM EDT
We are rapidly approaching a world where the bulk of datacenter day-to-day operations are automated. The major application provisioning tools are integrating with infrastructure vendor APIs to give operations the power to control and monitor the datacenter – including things like SAN and networking gear – through their systems. To my mind this is a very cool development, but before we rush headlong into this world, let’s have a frank discussion about the nature of infrastructure, the nature of these integrations, and the nature of hackers. Because it’s never all sunshine and unicorns, and automation is no exception.
The rush to integrate has driven an astounding rate of adoption of new and previously unheard-of modules. If a module meets a pent-up need, thousands of organizations are using it in production practically overnight. This makes sense as more and more enterprises move toward a more complete automation infrastructure, but it is not without risks, and you really should consider those risks before the 2am phone call comes. Which, of course, we all hope it never will.
As mentioned, the major application provisioning providers are working closely with infrastructure vendors to bring infrastructure into the realm of what they can manage. SaltStack, Puppet, and Ansible, for example, are integrated with products from infrastructure vendors like Cisco, EMC, and F5. These integrations are often developed by the vendor itself, which is cool, because who knows the product better than the vendor?
But that brings one planning point into the equation. What do you do if the vendor drops support for your chosen provisioning platform? While this could become an issue at any point in the relationship, it is most likely to come into play when a vendor EOLs a product. These integrations are almost all open source, but it is the nature of open source in the enterprise that this alone is not a safety net. Short of extreme need, most organizations never work through the source code of a provider – particularly a complex, multi-layered one – to make certain they could maintain it themselves. Not for lack of interest, but in the enterprise that kind of free time is a rare commodity, so it is only done at need.
So I suggest you have a plan. Know what steps you will take if a vendor ends support for a middleware DevOps integration and you need that support to continue. The plan doesn’t have to be complex; just think it through ahead of time so you’re not making it up on the spot when the situation arises.
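A concrete first step for such a plan is to vendor a snapshot of any integration you depend on into your own version control, so the source is already in hand the day support ends. Here is a minimal sketch in Python; the paths and the `VENDOR.json` record name are illustrative choices for this example, not part of any real provisioning tool:

```python
import json
import shutil
from pathlib import Path

def vendor_module(installed_path: str, vendor_dir: str, version: str) -> Path:
    """Snapshot a provisioning module's source into our own repo so we
    can maintain it ourselves if the vendor ends support.

    installed_path: where the tool installed the module (illustrative).
    vendor_dir:     a directory inside our own version control.
    version:        the module version being snapshotted.
    """
    src = Path(installed_path)
    dest = Path(vendor_dir) / src.name
    if dest.exists():
        shutil.rmtree(dest)  # replace any earlier snapshot
    shutil.copytree(src, dest)
    # Record the snapshot's origin so it stays auditable later.
    record = {"module": src.name, "source": str(src), "version": version}
    (dest / "VENDOR.json").write_text(json.dumps(record, indent=2))
    return dest
```

Check the snapshot into source control alongside your playbooks or manifests; the point is simply that a copy you control exists before you need it.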
While you’re thinking about it, make certain the vendor-provided plug-ins are indeed open source, because the “what would we do” equation changes if a plug-in can simply be pulled from the market entirely and you don’t have access to the source.
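This check can be automated as part of bringing a module in. The sketch below assumes a hypothetical `metadata.json` at the module root with a `license` field; real tools keep this information in tool-specific places (Ansible Galaxy roles declare a license in `meta/main.yml`, for instance), so the reader function would need adjusting for your platform:

```python
import json
from pathlib import Path

# SPDX identifiers of licenses we treat as genuinely open source;
# extend the set to match your organization's policy.
ACCEPTED_LICENSES = {"MIT", "Apache-2.0", "BSD-3-Clause", "GPL-3.0-or-later"}

def declared_license(module_dir: str) -> str:
    """Read the license a module declares in its metadata.
    The metadata.json name here is a placeholder for this sketch."""
    meta = json.loads((Path(module_dir) / "metadata.json").read_text())
    return meta.get("license", "UNKNOWN")

def is_open_source(module_dir: str) -> bool:
    """True only if the module declares a license we accept."""
    return declared_license(module_dir) in ACCEPTED_LICENSES
```

A declared license is not proof the source is actually available, of course, but failing this check is a reliable signal that the “what would we do” question needs an answer before deployment.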
Just a reminder that infrastructure is the center of your world. If one of these modules causes problems, it could potentially impact a lot more than a single service. You know that, but it implies a much greater need for quality assurance than you would apply to, say, an Apache config/install module. The potential impact is huge. We have already seen DevOps tools propagate problems across server farms; it could be so much worse if they do the same across networking gear.
This is even more important when you find a user-contributed module that does exactly what you need. Make certain it’s solid code. Bring it in and do a code review – no, I’m not kidding. This is code that is going to change things on your core infrastructure, so due diligence is absolutely recommended. I’d say “required” instead of recommended, but to some extent your organization’s tolerance for risk figures into the equation. But if I’m a customer of yours? Consider it required.
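One lightweight way to make that review stick is to fingerprint the exact module contents that passed review, and refuse to deploy anything that differs. A minimal sketch, assuming the module is a directory of files; the function names are my own for this example:

```python
import hashlib
from pathlib import Path

def module_fingerprint(module_dir: str) -> str:
    """Hash every file in a module (relative names and contents) so
    the copy that passed code review is provably the copy deployed."""
    digest = hashlib.sha256()
    root = Path(module_dir)
    for f in sorted(root.rglob("*")):
        if f.is_file():
            digest.update(f.relative_to(root).as_posix().encode())
            digest.update(f.read_bytes())
    return digest.hexdigest()

def safe_to_deploy(module_dir: str, approved_sha256: str) -> bool:
    """Gate a deploy on the fingerprint recorded at review time."""
    return module_fingerprint(module_dir) == approved_sha256
```

Record the fingerprint when the review is signed off, and check it in whatever script pushes the module toward production; any upstream change then forces a fresh review rather than a silent update.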
Do you know what a bad actor’s dream scenario is? Infrastructure as code. Given the opportunity to submit code to such a project, an attacker has a golden opportunity: stop messing with applications and get back doors into the infrastructure itself. That’s a scary scenario. And it will happen.
This is another area of greater concern when you are grabbing modules developed by users of a provisioning tool than when using integrations implemented with vendor assistance, though in a world of open source and massive code reuse, there is always a risk of both purposeful and inadvertent tainting of codebases.
Most enterprises today have a security team. That team needs to go over these modules before they are implemented – in production for certain, but I’d recommend the review before deploying to test, too. The usual reason an organization skips this step is the availability of resources relative to delivery timelines. Considering the number of man-hours a module like this can save over the long term, an up-front investment in making certain it’s safe is not too much to ask. Stretch timelines or free up resources. I know that’s easier said than done – I’ve been management on high-visibility teams in enterprises – but the possible negative impacts are massive, and definitely worth the effort of a review.
A last word
Others have written more extensively about these concerns. Since there is only so much one can cram into a blog and still expect you to read it, I recommend seeking out some of those other sources and reading them.
The problem we have with security generally is that, as a percentage chance, these risks are pretty slim. Most organizations will not suffer if they ignore this post and others like it. But the ones that do will suffer greatly. I don’t wish to exaggerate the risks – they are relatively small on a per-enterprise basis – though I think this type of problem will inevitably impact some of us. Of course the vendors, both application provisioning and infrastructure, do not want to be the source of problems with automated infrastructure, so they are watching too. But the risk is still there, and it’s worth a few extra man-hours to make sure there are no problems in the modules you choose to use. The network you save could be your own.