In a world where everyone is striving to become more digital and more customer-centric in their offerings, there is a huge push towards utilising cloud computing. Cloud computing offers a tremendously flexible playground within which rich, user-focused platforms and services can be developed, or simply procured outright. There is far less constraint on what technologies can be used, and far more agility in terms of sizing, provisioning and the ability to move to “new” locations.
No longer are you constrained by physical components and the feeding, watering and general upkeep they demand. Though of course, there is still a certain, possibly misguided, comfort in having a physical appliance that you can touch, feel and actually see.
The leap into the cloud raises immediate concerns about not knowing where your services are, or having control over them. Traditionally this would have had your security teams in a cold sweat! In truth it still does: plenty of security teams break out in one just thinking of all the aspects they have no visibility of, or control over. They don’t like that! Not one bit!
That doesn’t, however, mean it’s a bad idea! Not by a long stretch. There are tremendous capability uplifts you can achieve through the adoption of cloud computing, with a serious acceleration of your digital vision. That said, you should venture into these realms with your eyes open and your security folks embedded with you through discovery, design and deployment. It rings true that if you design, build and run a service poorly then you are likely to have major issues with it, regardless of the hosting provider, or whether it is physical or virtual.
It should go without saying that how far you push the boundaries of possibility should be closely aligned with the value to the business of the service or platform you want to ‘cloudify’, the nature of the data and interactions therein, and an understanding of the tangible risks you will inherit by doing so. This needs to be a pragmatic stance: even though you may currently have services run from a nice solid datacentre that you can touch and feel, they are still likely to be run by someone else on your behalf. Ask yourself how that actually differs from cloud hosting adoption. At the end of the day, and like everything else, it is a risk-based decision. It’s an old and tired adage, but hey, it’s true!
There are numerous uses for the cloud, but for the purposes of this paper we will focus on the cloud hosting aspects wherein you will design, build and implement your own services. A significant number of organisations are driving down this route.
Cloud hosting providers
There is a tendency to think that all Cloud providers are basically the same, the only real differentiator being price. The real-world experience is very different indeed, and it’s important to understand those differences; you can make massive leaps in capability simply by choosing the right cloud provider, and then using them in the right way.
Cloud providers fall into roughly two camps:-
- Those that do not understand the need to create a security architecture.
- Those that do, and provide a variety of capabilities that can be used as building blocks.
In the former, there is no native capability to create any sort of tiering, segregation, etc. Basically you have a single flat LAN to build on and not a lot else. If an attacker manages to break the app, then typically they’ve got the data too, plus any other services located in the same environment, unless you build in the requisite controls yourself. In short, you’ve lost the lot, because you cannot segregate services from each other, or indeed front-end services from back-end ones. That sort of scenario keeps security architects awake at night, and should give CIOs TalkTalk-like nightmares too. That’s not to say such providers are bad, just that they require a lot more thought from you about how you protect your crown jewels. There’s less out of the box to help you build your considerations into your architecture.
The better, more enlightened, providers provide a rich variety of capabilities:-
- Capability to segregate VMs (e.g. firewall-enforced ‘tagging’ or ‘zones’).
- In-built IPSec and SSL tunnel capabilities.
- Multi-data centre support (e.g. you can deploy across multiple zones inside a geographical region).
- Load balancing is implicit – even across zones.
- Anti-DDoS controls are easier to implement because of the massive bandwidth these larger cloud providers have.
- Data backup is typically included or available (e.g. ‘persistent disk’ and versioning inbuilt).
- Encryption of data at rest and in transit – even within the cloud environment.
- Options to further strengthen any encryption requirements (e.g. Hardware Security Modules to store key material such as certificates).
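To make the first of those capabilities concrete, here is a toy model of tag-based VM segregation of the kind the better providers enforce with firewall rules. It is purely illustrative – the tags, ports and rule table are hypothetical, not any provider’s real API – but it shows the default-deny, explicit-allow construct that lets you keep front-end services away from back-end ones.

```python
# Illustrative sketch only: a toy model of firewall-enforced 'tagging'.
# All tags, ports and rules here are hypothetical examples.

ALLOWED_FLOWS = {
    ("web", "app"): {443},   # front-end tier may reach the app tier over TLS
    ("app", "db"): {5432},   # app tier may reach the database tier
}

def flow_permitted(src_tag: str, dst_tag: str, port: int) -> bool:
    """Default-deny: traffic passes only if an explicit rule allows it."""
    return port in ALLOWED_FLOWS.get((src_tag, dst_tag), set())

print(flow_permitted("web", "app", 443))   # allowed: an explicit rule exists
print(flow_permitted("web", "db", 5432))   # denied: no direct front-to-back path
```

The point is the shape of the policy, not the code: with tagging or zones, a compromised front end does not automatically reach the database tier, which is exactly what the flat-LAN providers cannot give you natively.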
There is no one-size-fits-all model for your ‘cloudified’ services, but understanding and building in your security aspects right from the outset, based on the service(s) in question, makes life a damn sight easier in the long run.
There are some great resources available to help you understand the types of controls you might want to consider for your cloud hosted services, though do bear in mind that most of these will be down to yourselves to implement rather than the hosting provider.
A good starting point is the continually developed Cloud Controls Matrix provided by the Cloud Security Alliance (CSA). I strongly recommend you familiarise yourselves with this before jumping two-footed into the cloud.
Beyond the (virtual) infrastructure
There is more to this. Infrastructure elements like those detailed above are great, but can be just as hard to manage as their physical equivalents. Fortunately, the open source OpenStack platform comes to our aid.
Now at this point there may be a few readers who wince, as we’ve mentioned something that is open source. Open source, like cloud computing, often splits an audience. It is a complete misconception that open source is bad and that COTS is good. There are pros and cons in everything, and it ultimately comes down to how you implement and maintain it.
OpenStack provides a set of standard APIs that allow virtual provisioning to be automated. Automation is cool – especially from a security perspective. Tools like Terraform interface with these OpenStack APIs and allow scripts to build virtual environments automatically. As long as the scripts are built with portability in mind, this gives you a supplier-agnostic capability. Fallen out with Google? Fine, deploy the same automation to build your environment in Amazon… It also makes life easier for your security folk, as they can focus on assuring the patterns you deploy rather than each and every deployment instance.
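The supplier-agnostic idea can be sketched very simply: define the environment once as data, then render it for whichever provider you are currently using. In practice you would do this with Terraform modules against provider or OpenStack APIs; the little sketch below just demonstrates the principle, and every name in it (the environment spec, the tier tags, the provider strings) is a made-up example.

```python
# Hedged sketch of a supplier-agnostic environment definition.
# One spec, defined once, expanded for whichever provider you choose.
# All names here are hypothetical illustrations, not real resources.

ENVIRONMENT = {
    "name": "payments",
    "tiers": [
        {"tag": "web", "count": 2},
        {"tag": "app", "count": 2},
        {"tag": "db",  "count": 1},
    ],
}

def render(spec: dict, provider: str) -> list:
    """Expand the portable spec into provider-specific instance names."""
    return [
        f"{provider}/{spec['name']}-{tier['tag']}-{i}"
        for tier in spec["tiers"]
        for i in range(tier["count"])
    ]

# The same spec builds the same shape of estate on either provider.
print(render(ENVIRONMENT, "gcp"))
print(render(ENVIRONMENT, "aws"))
```

Because the environment is described once and rendered many times, your security team assures the spec (the pattern), not every individual deployment – which is precisely the assurance win described above.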
There’s even more – the combination of Terraform, Puppet and OpenStack offers some compelling opportunities:-
- You can create as many environments as you need – a few test environments perhaps, a couple of UAT ones and finally a Live one. All from the same build scripts.
- You can test fixes and then roll them forward by rebuilding from the updated scripts. You never need to patch a Live system in place, and you never need to take it down.
- You can test new suppliers easily.
- You can even use multiple cloud providers for different purposes (e.g. test on cheap ones, deploy Live on better ones).
- You can build security controls into the service design and build it perfectly, every time.
- Tools like Puppet detect unauthorised changes and revert them to the as-scripted state – and can alert when such an event occurs.
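That last point – detect drift, revert it, raise an alert – is worth a tiny sketch. The snippet below is a toy illustration of the idea behind configuration-enforcement tools like Puppet, not how Puppet itself is implemented: the desired state and the drifted setting are invented examples.

```python
# Toy illustration of drift detection and revert-to-scripted-state,
# the behaviour described for tools like Puppet. The keys and values
# below are hypothetical configuration items, not real Puppet resources.

DESIRED = {"sshd_port": 22, "root_login": "prohibit-password"}

def enforce(actual: dict, desired: dict) -> list:
    """Revert any unauthorised change and return one alert per drifted key."""
    alerts = []
    for key, want in desired.items():
        if actual.get(key) != want:
            alerts.append(f"drift on {key}: {actual.get(key)!r} -> {want!r}")
            actual[key] = want      # revert to the as-scripted value
    return alerts

state = {"sshd_port": 22, "root_login": "yes"}   # someone enabled root login
print(enforce(state, DESIRED))   # one alert, describing the drift
print(state == DESIRED)          # True: state is back to the scripted build
```

The security value is the loop, not the code: unauthorised change is not merely logged, it is undone, and the alert gives your security folk something to investigate.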
One of the key advantages with these open source standards and tools is that they are underpinned by thriving communities of development expertise. You no longer have to wait for a vendor’s next development release cycle to get your hands on the latest feature. If a feature doesn’t currently exist you can build it, or if someone else already has you can replicate it. You do need expertise to be able to truly take the greatest advantage of this, but it is far easier to get that resource than to cajole a vendor into designing something new into their product.
One area for major consideration is connectivity. No matter what type of cloud provider is selected – whether the simple ‘flat earth’ type or the much more enlightened ones with security capabilities built in – the question of ‘so, how do we connect it all together securely?’ remains. This is especially pertinent where you retain a back-end data processing repository: your crown jewels in terms of data.
Some considerations for that connectivity:-
- Cloud providers typically have massive capability (bandwidth, CPU), and are available 24/7.
- Core backend processing systems typically do not have massive capability and are only contractually available for a short period (8am-6pm is oft-quoted).
- So, is a connection directly to back-end services really required? Or can these back-end services be left protected in a safer world?
- The answer actually lies in examination of the risks.
  - E.g. if the loss of a system would cause severe disruption, then do not expose it to the Internet in a way that could affect its operation.
- This largely means an interface layer is required: one that avoids exposing your data unnecessarily, but still provides the required availability via controlled data caching.
- Note that none of this should ever require a direct unfettered connection from the Cloud into the corporate estate.
- Oh, and you need to encrypt it…
So, as you can see even from this skim, there is a lot more to it than just typing “cheap hosting provider” into Google and comparing prices. That said, the cloud offers so much opportunity at your fingertips. Done right, it can be a huge business accelerator.