There is a certain fallacy in the world of cyber security. It has been there since day one and continues to thrive today. It is simply that controls work. In the main, they don’t. For too long, security teams have lived the lie that what they have delivered has been effective, but so often from a viewpoint divorced from the very customers they affect. To be fair to most security teams, they are generally blissfully unaware of the inefficiencies of their controls. Or ignorant. I accept that this is a very sweeping set of statements, but headline-making data breach after data breach tends to argue the point for me. And let’s not be shy here, these are major corporations with ‘systemic failures’ when it comes to protecting their crown jewels. Something doesn’t sit right.

But how can this be? Spend in security is at an all-time high. The volume of security offerings to cover every possible facet of security is unparalleled. The technological possibilities for mitigating risk know no bounds. We have more ‘experts’ than ever before. And of course, Big Data and AI to solve all our ills in the battle against superhuman adversaries with incredibly sophisticated attacks.

Is that reality though? Are organisations spending wisely when it comes to security? Are organisations doing the right things or papering over existing cracks? For me it is the latter and I’ll tell you why.

Let’s start with strategy. The overarching mission. How many organisations have such a thing? A few. How many are built through business engagement? Far fewer. Security strategy is generally written from a position of prejudice and as a means of gaining budget to mature the organisation’s posture. For a strategy to be sound it should be preceded by a warts-and-all look at the effectiveness and maturity of the as-is position and a clear line of sight to where it needs to get to. This requires a deep understanding of the business within which security operates, alongside measuring the effects of the myriad security jigsaw pieces across the organisation. This almost never happens. If it did, then security teams would recognise that investment needs to be made primarily, and almost solely, on ‘fixing’ the crap that is already there. How can I say this? Well, let’s go through some of those jigsaw pieces that just about every organisation will have in their security picture.

Policy. We’ve all got policy. If you work in Government you will have more policy than you can shake a stick at; in other organisations or industries, hopefully less so. However, almost every policy I have ever read has been the equivalent of the ten commandments. Thou shalt not commit adultery; thou shalt not share your password. Exceedingly rarely will you see any explanation of why it is a bad thing to do, or rather a risk-altering thing to do. Nor will you ever see an explanation of the alternative for the end customer. In other words, what they should ideally do to achieve the same goal that sharing their password does; in this case, perhaps a delegated access mechanism. Of course, those mechanisms are outside the control of the security team. Which in turn means the security team has a dependency upon another team, in this case probably IT. To make it beneficial for the customer (user) to adhere to the policy, the alternative to sharing their password must be very simple, easy and slick. And, of course, promoted, so that the user is aware there is something they can do which has less of an effect on risk than sharing their password.

The trouble here is that policy is written very much from a position of prejudice, by security people for security people. If we are honest with ourselves, and actually engaged with our customer base, we would also learn that hardly anyone reads the policies, which are generally far too long and in the wrong tone, and even fewer actually understand them. If your policy is not read or understood then there is little point in having one. Much the same as operating procedures: there is what the policy or procedure says and then there is the reality of what people do. People share passwords and more. Deal with it!

Maybe something that could help here would be raising security awareness with our customers? That would be a great idea. Most organisations do this, which is great! However, what most organisations actually do is once-a-year mandatory Computer Based Training (CBT), which consists of the user clicking next, next, next, next, next and then answering ten questions which, if they got them wrong, they should not be allowed shoes with laces. You may laugh at this and then sigh, because it is exactly what you do in your organisation. It is so common it is ridiculous. It is also ridiculous because it has zero positive effect. In fact, it is a complete and utter waste of time and money. Security awareness isn’t a waste; this approach is. You are simply ticking a box, as is the user who is doing their mandatory security training along with their diversity, health and safety et al yearly box-ticking exercises.

Oh boy! That’s not a great start. It’s OK though, we’ve got some technical controls. Oh yes, we’ve got firewalls; phew! In fact we’ve got dual pair firewalls, from different vendors. AND when we installed them we blocked all unnecessary ports and protocols by default! We’ve got it nailed! Fab!

Then this minor thing called business change happens. Wherein the business, those little rapscallions, decide to make a change. A new process, a new technology, a new partner; it matters not. As part of that change we need to add a rule to the firewall to allow connectivity. Without it the change will fail. It goes through change control though. Good old ITIL, so it is still fine. Except, of course, change control doesn’t really look at whether that change to the firewall alters our risk profile in any way. Now, that is just one change, and businesses make many changes rather regularly. And hey, before you know it, the firewall that had four rules on it now has four thousand. Your firewall has gone from being an effective control to effectively just heating your datacentre. Ask yourself: when was the last time you looked at your firewall rules? Hell, I’ll make it easy, when was the last time you looked at the rules just on your external firewalls? I won’t bother asking if you changed them, as that is highly unlikely to have happened. If WannaCry told us anything, it is that external firewalls are, shall we say, sub-optimal. Have you looked at them since that stark warning?
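
If looking at your rules feels like a mountain to climb, even a crude scripted audit beats never looking at all. Here is a minimal sketch, assuming you can export your rule base to a CSV; the column names (rule_id, source, destination, last_hit) are purely illustrative, not any vendor’s actual export format:

```python
# Hedged sketch: flag overly permissive or stale firewall rules from a
# hypothetical CSV export. Column names are illustrative only.
import csv
from datetime import datetime, timedelta

STALE_AFTER = timedelta(days=180)  # no hits for six months = worth a review

def audit_rules(path: str) -> None:
    now = datetime.now()
    with open(path, newline="") as f:
        for rule in csv.DictReader(f):
            overly_broad = rule["source"] == "any" and rule["destination"] == "any"
            last_hit = datetime.fromisoformat(rule["last_hit"]) if rule["last_hit"] else None
            stale = last_hit is None or now - last_hit > STALE_AFTER
            if overly_broad or stale:
                reasons = []
                if overly_broad:
                    reasons.append("ANY/ANY")
                if stale:
                    reasons.append("no recent hits")
                print(f"Review rule {rule['rule_id']}: {', '.join(reasons)}")

if __name__ == "__main__":
    audit_rules("external_firewall_rules.csv")  # hypothetical export file
```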

It is still OK though, because we’ve got IDS and IPS. Happy days! The slight issuette here is that it really, really helps if you have a vague idea what protocols and ports are in use across your network. It also rather helps if your internal, genuine traffic does not look anomalous enough to trigger the IDS. It is also rather beneficial if you have the faintest clue what assets are on your network, but more on that one later. And, of course, if you ever look at the alerts. Let’s presuppose that you do monitor the alerts. As a rough estimate, what percentage are false positives? I’ll wager that, percentage wise, it is in the high 90s. Mainly because of the things stated above, and because we’ve just built stuff and plugged it into other stuff for many years. Most of that plugging is done as simply as possible rather than how things should have been done in an ideal world. So what do we do with all of these false positives? Do we investigate the cause and influence change to reduce the noise by getting assets to talk to each other in a better way? Nah! We just turn that perfectly genuine rule off! That’ll sort it.
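
If you do want to investigate rather than silence, a first step is simply measuring where the noise comes from. A minimal sketch, assuming your triaged alerts can be exported as JSON lines with illustrative fields (signature, verdict) rather than any particular IDS’s real schema:

```python
# Hedged sketch: rank IDS signatures by false-positive rate from a
# hypothetical JSON-lines export of triaged alerts. Field names are illustrative.
import json
from collections import Counter

def noisiest_signatures(path: str, top_n: int = 10) -> None:
    totals, false_positives = Counter(), Counter()
    with open(path) as f:
        for line in f:
            alert = json.loads(line)
            sig = alert["signature"]
            totals[sig] += 1
            if alert.get("verdict") == "false_positive":
                false_positives[sig] += 1
    ranked = sorted(totals, key=lambda s: false_positives[s] / totals[s], reverse=True)
    for sig in ranked[:top_n]:
        rate = 100 * false_positives[sig] / totals[sig]
        print(f"{sig}: {false_positives[sig]}/{totals[sig]} alerts ({rate:.0f}% false positive)")

if __name__ == "__main__":
    noisiest_signatures("triaged_alerts.jsonl")  # hypothetical export file
```

The output tells you which conversations between assets need fixing, rather than which rules to switch off.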

You’ll notice I’ve not talked about IPS. The reason being that almost nobody turns it on in prevent mode, because frankly it will block lots of genuine business traffic and be turned off again rather quickly. Security then gets a kicking from the business and loses credibility.

Now all of this is not security’s fault. IT has a lot to answer for in terms of network configuration etc. You really must work together if you want to make effective change, or even understand what is there today.

I’ll add two more in. Assets and users. Accuracy of asset inventories? 60% if you’re lucky. Users? Maybe slightly better. There is a massive problem in that most organisations do not know how many users vs accounts vs actual people they have. Nor do they have anything like an accurate view of how many assets there are, their location, their health, their configuration and so on. Without any semblance of reality here you are going to struggle big time! And of course, what privileges do those users have? Do they need them in their current role? Do we handle moves, adds and changes well when it comes to access permissions? What about leavers? Consultants and contractors? Suppliers? What about admins; how many, where, who, and do they have internet access(!)? But at least we have individual (maybe) accounts for everyone, so still a control! Well, yes, only if you log and maybe look at it every once in a while. Because, you know, users share their passwords because it is easier than delegated access. So, without ever checking, you’ll never see the dual logins from the same account on different machines. Or, of course, they’ll let their colleague sit at their desk and use their machine.
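
Even a crude reconciliation between the HR record and the directory will surface some of this. A minimal sketch, assuming two hypothetical CSV exports (people from HR, accounts from the directory), purely to illustrate the idea:

```python
# Hedged sketch: reconcile people (HR) against accounts (directory) using
# hypothetical CSV exports. File and column names are illustrative only.
import csv

def load_column(path: str, column: str) -> set[str]:
    with open(path, newline="") as f:
        return {row[column].strip().lower() for row in csv.DictReader(f)}

people = load_column("hr_people.csv", "email")              # who we employ
accounts = load_column("directory_accounts.csv", "email")   # who can log in

print(f"Accounts with no matching person (leavers, orphans?): {len(accounts - people)}")
print(f"People with no account (contractors on shared logins?): {len(people - accounts)}")
```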

Same with assets. Simply put, how do you know what your vulnerabilities are if you don’t know what assets you’ve got, let alone their health and status? It makes vulnerability management, or patching, a tad hard.

It is still OK though, because we have anti-virus literally everywhere! Now I won’t get into which one, or the ins and outs of different AV approaches. I’ll simply ask: how often do you update the agents, and how many fail to update? Oh, hang on, I don’t know how many assets I’ve got, which does make this tricky, but we do update AV every day. Good. However, I’ll wager my mortgage that several assets do not update every day for one reason or another.
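
If you fancy testing that wager, compare each agent’s last update against a threshold. A minimal sketch, assuming the AV console can export a hypothetical CSV of hostnames and last-update timestamps:

```python
# Hedged sketch: find endpoints whose AV signatures are lagging, from a
# hypothetical console export. File and column names are illustrative only.
import csv
from datetime import datetime, timedelta

MAX_AGE = timedelta(days=2)  # "we update every day", plus a day's grace

with open("av_agents.csv", newline="") as f:
    lagging = [row["hostname"] for row in csv.DictReader(f)
               if datetime.now() - datetime.fromisoformat(row["last_update"]) > MAX_AGE]

print(f"{len(lagging)} agents have not updated within {MAX_AGE.days} days:")
for host in lagging:
    print(f"  {host}")
```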

OK, but despite all of this we’ve got a SOC! So we still maintain we’re in a good position. I get you! You’ve got a SOC. Eyes on glass. Coiled like a spring, ready to respond to the slightest noise. If only! Aside from not knowing what assets, users, ports and protocols are in use on the network, or networks, you’re now logging all this ‘stuff’ and sticking it in a big SIEM engine. Effectively collecting and mashing together a noise akin to Saturday evening at Glastonbury. It is just noise. Your SOC analysts will be surfing through false positive after false positive, suffering huge bouts of alert fatigue, chasing ghosts and generally not adding huge swathes of value. You’ll probably just deal with known alerts rather than actually look for abnormalities, because everything looks abnormal and establishing a baseline of normal is nigh on impossible. Most of those known alerts will be controls doing their job, like blocking bad emails, or false positives.

I’m sorry, but I’m not painting a pretty picture. Mainly because it is not a pretty picture. I’ve not even talked about things like data. You know, like whether you have a clue where your data is. OK, you’ve got the databases that you know about, but do you know where all of it is? Quite a bit will be on people’s personal devices, after they sent it home because it is easier to work on there without these stupid security things getting in the way.

Even your databases. I’m sure they are encrypted which is awesome. But what happens when a legitimate asset (user, device etc.) asks a legitimate question of that database? Does it reply? And is the reply encrypted? What if that asset were malicious?

I’ve not talked about risk, and the fact that almost nobody does risk management in its true form. You know, the continual loop of measurement, planning and action. Most organisations deal with theoretical risk (a one-time assessment) and notional controls that ‘mitigate’ the risks found. And then the parameters that make up each risk change, as they have a habit of doing, and nobody notices or reacts, because they have no idea how to measure said parameters and act accordingly. Sound familiar? How do you go about measuring each parameter of your security risks? Threat actor / source, threat, exploit, vulnerability / weakness, likelihood, impact and so on. Do you measure them on an ongoing basis in the context of your organisation? Probably not! But I bet you ‘do risk’, yeah?
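
Measuring those parameters does not need to be elaborate to beat a one-off assessment. The sketch below is purely illustrative; the scales and scoring model are placeholders, and the point is only that the score gets recomputed whenever a parameter changes:

```python
# Hedged sketch: a risk score that is recomputed as its parameters change.
# Scales and weightings are illustrative, not a recommended methodology.
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    threat_capability: int  # 1-5: how capable is the threat actor
    exposure: int           # 1-5: how reachable is the weakness
    likelihood: int         # 1-5
    impact: int             # 1-5

    def score(self) -> int:
        return self.threat_capability * self.exposure * self.likelihood * self.impact

risk = Risk("Internet-facing service unpatched", threat_capability=4, exposure=3, likelihood=3, impact=5)
print(risk.name, "- baseline score:", risk.score())

# A parameter changes (say an exploit is published): re-measure, re-score, re-plan.
risk.likelihood = 5
print(risk.name, "- updated score:", risk.score())
```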

Still with me? If you recognise any of these things within your organisation then you need to focus here, and not on some next-generation panacea, Big Data or AI solution. It won’t work! If you don’t recognise any of these things then I’d say you’re not looking hard enough, OR you are in the 0.1% who do the basics well!

The reason so many organisations suffer breaches is simply down to a failure in doing the very basics of security. I don’t care how much security technology you buy; you will fail. It is time to get back to basics!

It is very interesting to see the Equifax report; most pertinently, that they had processes, tools and policies in place, yet still succumbed in a big way. Risk materialised. A risk that, with what most would deem the basics, and probably more, should have largely been mitigated.

Yet we have a serious problem in the industry. The Equifax story, or certainly situation, could easily be transposed to almost any organisation on the planet.

Many of us in the industry trot on about the basics regularly. I do, and I am guilty of the following. When we talk about the basics of security, we forget to factor in that a lot of organisations already have the basics in situ. Don’t get me wrong, some still don’t! In 2018!!!

However, those basics represent a serious legacy that security has to address. The legacy is that the basics of security have been operated poorly for years.

Utterly out-of-step policies written by security, for security, to the detriment of the customer, who, let’s face it, barely bothers to read the nonsense we throw at them anyway. Once-a-year CBT awareness training that literally does nothing but waste money, time and effort.

Password governance and advice that almost forces the customer into ‘bad practice’. A complete unknown in terms of assets, users, privileges, protocols, networks. Security, and poor IT, that has also bred ‘shadow IT’; or simply unmet user demand.

Technologies that are criminally underutilised in terms of the functionality that has been paid for, and frankly unloved and uncared for.

It is this legacy of poorly operated security that makes ‘doing the basics’ really, really hard. That is also compounded by a need to accelerate the adoption of new ways of working, technology, methodologies, digital et al.

So we have shaky foundations in dire need of restoration, whilst the house keeps shifting direction and changing shape and dynamics.

If doing the basics were easy, we’d all do it. Many have, actually. Badly. With the best intentions, albeit, in all likelihood, with poorly articulated risk based on theoretical one-off assessments, and left to drift.

Technologies that are poorly managed, if at all, once they have entered operation. Firewalls drift, we all know that. They start with necessary ports and protocols only and over time drift to, as near as damnit, ANY ANY.

Despite change governance and risk management being in operation. Which does beg the question of what use they serve.

The same can be said of most security controls: people, process and technology. We let them drift, whilst some still take comfort in the fact that they are there. But when we look in the cold light of day, how many would actually be deemed effective?

And that’s before we factor in the shifting risk landscape, which in the majority we fail to actually manage.

And now we find ourselves in the position of needing to retrofit basic security, that actually works in business operation (and not to the detriment of the customer), which is nigh on impossible. Or seemingly so.

And of course, it is damn hard to get budget to fix all the stuff that is already there and should be working optimally by now. If we are even aware of the sub-optimal nature of our comfort blanket of security controls.

This goes way beyond just patching, which in itself can be a bloody art form!

I’m sure if you could start again, you’d do things differently, but we do not get that luxury. We are where we are. That’s not a great place to be for many an organisation.

So, do not be surprised when the next big breach happens. Nor be quick to point the finger. This could be almost any organisation. The failings at Equifax are extremely serious, but in no way should they be seen in isolation.

There is a common denominator in the plethora of breaches, and it is not, as certain groups would have you believe, the continued rise in the sophistication of adversaries.

It is down to a legacy of poorly defined and operated security. If one thing has to change it is this.

Adversaries are getting more sophisticated, as is technology, the tools at their disposal, their awareness, their abilities. The same can be said of defenders. However, adversaries are not held back by this overarching legacy that we in security often inherit.

We do need to get back to basics, but also empathise, because we know the basics are bloody hard, especially with a legacy of poorly operated security.

This is no quick fix. No magic bullet. No 5th generation widget that will solve our ills. There is incremental and well thought out change that leads to improvements. But that also needs to be maintained or it will simply drift too.

We can use things to our advantage to accelerate the change, like existing digital transformation programmes, though this has to be done with well-articulated risks, including the parameters that make up such risks, built into the design… and operated well!

In a world where everyone is striving to become more digital and more customer-centric in their offerings there is a huge push towards utilising cloud computing. Cloud computing offers a tremendously flexible playground, within which rich user focused platforms and services can be developed, or straight procured. There is far less constraint as to what technologies can be used and far more agility in terms of sizing, provisioning and the ability to move to “new” locations.

No longer are you constrained by physical components and the respective feeding, watering and general upkeep therein. Though of course, there is still a certain, possibly misguided, comfort in having a physical appliance that you can touch and feel, and actually see.

The leap into the cloud raises the immediate concern of not knowing where it is or not having control over it. Traditionally this would have your security teams in a cold sweat! Well, not even traditionally; it still has a lot of security teams in a cold sweat, thinking of all the various aspects that they have no visibility or control over. They don’t like that! Not one bit!

That doesn’t, however, mean it’s a bad idea! Not by a long stretch. There are tremendous capability uplifts you can achieve through the adoption of cloud computing, with a serious acceleration of your digital vision. Saying that, you should venture into these realms with your eyes open and your security folks embedded with you through discovery, design and deployment. It rings true that if you design, build and run a service poorly then you are likely to have major issues with it, agnostic of the hosting provider, or whether it is physical or virtual.

It should go without saying that the length to which you push the boundaries of possibility should be closely aligned with the value to the business of the service or platform you want to ‘cloudify’, the nature of the data and interaction therein, and an understanding of the tangible risks that you will inherit by doing so. This needs to be a pragmatic stance: even though you may currently have services run from a nice solid datacentre that you can touch and feel, it is still likely to be run by someone else on your behalf. Ask yourself how this actually differs from adopting cloud hosting. At the end of the day, and like everything else, it is a risk-based decision. It’s an old and tired adage, but hey, it’s true!

There are numerous uses for the cloud, but for the purposes of this paper we will focus on the cloud hosting aspects wherein you will design, build and implement your own services. A significant number of organisations are driving down this route.

Cloud hosting providers

There is a tendency to think that all Cloud providers are basically the same, the only real differentiator being price. The real-world experience is very different indeed, and it’s important to understand those differences; you can make massive leaps in capability simply by choosing the right cloud provider, and then using them in the right way.

Cloud providers fall into roughly two camps:-

  1. Those that do not understand the need to create a security architecture.
  2. Those that do, and provide a variety of capabilities that can be used as building blocks.

In the former, there is no capability, natively, to create any sort of tiering, segregation and so on. Basically you have a single flat LAN on which to build something, and not a lot else. If the attacker manages to break the app, then typically they’ve got the data too, plus any other services located in the same environment, unless you build in the requisite controls construct. In short, you’ve lost the lot, because you cannot segregate services from each other, or indeed front-end services from back-end ones. That sort of scenario keeps security architects awake at night, and should keep CIOs having TalkTalk-like nightmares too. That’s not to say it’s bad, just that it requires a lot more thought from you as to how you protect your crown jewels. There’s less out of the box to help you build your considerations into your architecture.

The better, more enlightened, providers provide a rich variety of capabilities:-

  • Capability to segregate VMs (e.g. firewall-enforced ‘tagging’ or ‘zones’) – see the sketch after this list.
  • In-built IPSec and SSL tunnel capabilities.
  • Multi-data centre (e.g. you can use multiple zones inside a geographical region).
  • Load balancing is implicit – even across zones.
  • Anti-DDoS controls are easier to implement because of the massive bandwidth these larger cloud providers have.
  • Data backup is typically included or available (e.g. ‘persistent disk’ and versioning inbuilt).
  • Encryption of data at rest and in transit – even within the cloud environment.
  • Can further strengthen any encryption requirements (e.g. Hardware Security Modules to store key material like certificates).
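
As one concrete illustration of those building blocks, here is a minimal sketch of tiered segregation using security groups, assuming AWS and the boto3 library purely as an example (the VPC ID and ports are hypothetical); other enlightened providers offer equivalent constructs:

```python
# Hedged sketch: front-end/back-end tiering with security groups, assuming
# AWS and boto3. The VPC ID and ports are hypothetical placeholders.
import boto3

ec2 = boto3.client("ec2")
vpc_id = "vpc-0123456789abcdef0"  # hypothetical VPC

web = ec2.create_security_group(GroupName="web-tier", Description="Front end", VpcId=vpc_id)
db = ec2.create_security_group(GroupName="db-tier", Description="Back end", VpcId=vpc_id)

# Only the web tier may reach the database tier, and only on the database port.
ec2.authorize_security_group_ingress(
    GroupId=db["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 5432, "ToPort": 5432,
        "UserIdGroupPairs": [{"GroupId": web["GroupId"]}],
    }],
)

# The web tier accepts HTTPS from the internet and nothing else.
ec2.authorize_security_group_ingress(
    GroupId=web["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)
```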

There is not a one-size-fits-all model for your ‘cloudified’ services, but understanding and building in your security aspects right from the outset, based on the service(s) themselves, makes life a damn sight easier in the long run.

There are some great resources available to help you understand the types of controls you might want to consider for your cloud hosted services, though do bear in mind that most of these will be down to yourselves to implement rather than the hosting provider.

A good starting point is the continually developed Cloud Controls Matrix provided by the Cloud Security Alliance (CSA). I strongly recommend you familiarise yourselves with this before jumping two-footed into the cloud.

Beyond the (virtual) infrastructure

There is more to this. Infrastructure elements like those detailed above are great, but can be just as hard to manage as the physical equivalent. Fortunately, the OpenStack standard comes to our aid.

Now at this point there may be a few readers who wince, as we’ve mentioned something that is open source. Open source, like cloud computing, often splits an audience. It is a complete misconception that open source is bad and that COTS is good. There are pros and cons in everything and it ultimately comes down to how you implement it and maintain it.

OpenStack is a set of standards for APIs that allow virtual provisioning to be automated. Automation is cool – especially from a security perspective. Tools like Terraform interface to these OpenStack APIs and allow scripts to automatically build virtual environments. As long as the scripts are built with portability in mind, it gives you a supplier-agnostic capability. Fallen out with Google? Fine, deploy the same automation to build your environment in Amazon… It also makes life easier for your security folk as they can focus on assuring the patterns you deploy rather than each and every deployment instance.
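
To make that concrete, here is a minimal sketch of scripted provisioning against the OpenStack APIs using the openstacksdk Python library; the cloud name, image, flavour and network are hypothetical placeholders, and in practice you would more likely express this as Terraform definitions held in version control:

```python
# Hedged sketch: scripted provisioning via the OpenStack APIs using openstacksdk.
# The cloud name, image, flavour and network below are hypothetical placeholders.
import openstack

conn = openstack.connect(cloud="my-cloud")  # credentials resolved from clouds.yaml

image = conn.compute.find_image("ubuntu-22.04-hardened")
flavor = conn.compute.find_flavor("m1.small")
network = conn.network.find_network("app-tier")

server = conn.compute.create_server(
    name="web-01",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)
print("Built", server.name, "-", server.status)
```

Because the environment is expressed as code, the same script (or its Terraform equivalent) rebuilds it identically elsewhere, which is exactly what makes pattern-level assurance possible.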

There’s even more – the combination of Terraform, Puppet and OpenStack offers some compelling opportunities:-

  • You can create as many environments as you need – a few test environments perhaps, a couple of UAT ones and finally a Live one. All from the same build scripts.
  • You can test fixes and then roll them forward. You never need to patch a Live system and you never need to take it down.
  • You can test new suppliers easily.
  • You can even use multiple cloud providers for different purposes (e.g. test on cheap ones, deploy Live on better ones).
  • You can build security controls into the service design and build it perfectly, every time.
  • Tools like Puppet audit unauthorised changes and revert them to as-scripted – and can alert if such an event occurs.

One of the key advantages with these open source standards and tools is that they are underpinned by thriving communities of development expertise. You no longer have to wait for a vendor’s next development release cycle to get your hands on the latest feature. If a feature doesn’t currently exist you can build it, or if someone else already has you can replicate it. You do need expertise to be able to truly take the greatest advantage of this, but it is far easier to get that resource than to cajole a vendor into designing something new into their product.

Access

One area for major consideration is that of connectivity. No matter what type of cloud provider is selected – whether it is the simple ‘flat earth’ type or the much more enlightened ones that have security capabilities built in – the question of ‘so, how do we connect it all together securely?’ remains. This is especially pertinent where you retain a back-end data processing repository. Your crown jewels in terms of data.

Some considerations for that connectivity:-

  • Cloud providers typically have massive capability (bandwidth, CPU), and are available 24/7.
  • Core backend processing systems typically do not have massive capability and are only contractually available for a short period (8am-6pm is oft-quoted).
  • So, is a connection directly to back-end services really required? Or can these back-end services be left protected in a safer world?
  • The answer actually lies in an examination of the risks. For example, if the loss of a system would cause severe disruption, then do not expose it to the internet in a way that could affect its operation.

  • This largely means an interface layer is required, one that avoids exposing your data unnecessarily but still allows the required availability via controlled data caching (see the sketch after this list).
  • Note that none of this should ever require a direct unfettered connection from the Cloud into the corporate estate.
  • Oh, and you need to encrypt it…
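
To illustrate the interface-layer idea, here is a minimal sketch of controlled data caching in front of a back end that is only available during its contractual window; everything here (the window, the TTL, the lookup) is illustrative:

```python
# Hedged sketch: an interface layer that serves cloud-side reads from a cache
# refreshed during the back end's availability window, so the cloud-facing
# service never connects directly to the core system. All names illustrative.
import time
from datetime import datetime

CACHE: dict[str, tuple[float, dict]] = {}
CACHE_TTL_SECONDS = 4 * 60 * 60  # tolerate the back end being closed overnight

def backend_is_available() -> bool:
    return 8 <= datetime.now().hour < 18  # contractual 8am-6pm window

def fetch_from_backend(record_id: str) -> dict:
    # Placeholder for a call across a tightly controlled, encrypted link.
    return {"id": record_id, "fetched_at": datetime.now().isoformat()}

def get_record(record_id: str) -> dict:
    cached = CACHE.get(record_id)
    if cached and time.monotonic() - cached[0] < CACHE_TTL_SECONDS:
        return cached[1]
    if not backend_is_available():
        raise LookupError("Record not cached and back end outside availability window")
    record = fetch_from_backend(record_id)
    CACHE[record_id] = (time.monotonic(), record)
    return record
```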

So, as you can see just from this skim, there is a lot more to it than just typing “cheap hosting provider” into Google and comparing prices. That said, the cloud offers so much opportunity at your fingertips. Done right, it can be a huge business accelerator.

It is increasingly commonplace for organisations to undertake phishing simulations against their employees. There is a plethora of service providers as well as free resources to use for this purpose. With the increase in such activities, you would think security awareness would be at an all-time high. But is it? And are these methods effective?

Let’s get one thing clear – you do not need to pay money to discover that many employees cannot spot fake emails from legitimate ones. They can’t. Deal with it.

This is even before you consider well-crafted phishing emails. People still fall foul of the rudimentary phishing emails that most of us laugh at. You can tell they still work because criminals are still using them.

Where you might consider careful spend is in raising awareness. Often this is coupled with simulation, but with mixed results. Training is often too long, intrusive and centred on corporate security and policies, with which the user – or rather customer, as they are all customers of security – has little or no affinity.

Obviously, this will be combined with the annual mandated security awareness training employed by most organisations – training that sits alongside health and safety, diversity, anti-corruption and all the other topics, and does nothing but annoy the user.

In most cases, the user will just repeatedly click “next” and pass a test that a five-year-old could ace. It does literally nothing to raise security awareness. The only awareness it raises is a dislike for security.

Threat vectors

So, there are two obvious considerations. First, email is not the only threat vector. If you are going to run simulations, you should do so across vectors, for example SMS, social media and voice, as well as good old email.

Second, if you are going to couple that with training, you should ensure the training modules are, at most, five minutes long and actually pertinent. Here, pertinent means that your intention is to alter behaviour. That will not come from trotting out generic corporate security rules or policies to an already jaded user.

You should focus on the skills available for the user to protect themselves and their families in their personal cyber space. They have a far greater affinity with this side of the subject and, guess what, these are the self-same skills you want them to build to protect the corporation.

If your training material does not render on mobile devices, please stop – it is a personal bugbear and so simple to remedy.

But let’s not stop there. This is just the simulation. What happens when an actual rogue email arrives? Let’s say you have an aware workforce, and that this has not been achieved to the detriment of day-to-day operations – in other words, people aren’t so scared to open any email that they do nothing all day.

Rogue emails

So, a rogue email enters the organisation. Your hyper-aware user spots it. They could delete it, which many will do, or they could report it to the security team – which does pre-suppose that they have the vaguest idea who to report it to! In all likelihood, they will have to search an intranet-type resource to find out where on earth to send the offending email.

They report it and the security team comes back to ask if they could forward the email as an attachment to preserve mail headers and such like. I kid you not – this happens.

The security team then has to take the email details and either go to the mail platform, or contact the mail team, to undertake an investigation – for example, search for other recipients, check attachments or links – to ultimately determine whether the rogue email is malicious.
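
Some of that first-pass triage can at least be scripted. A minimal sketch, assuming the reported email has been saved as an .eml file (which is precisely why the team asks for it as an attachment), using only Python’s standard email library:

```python
# Hedged sketch: first-pass triage of a reported email saved as an .eml file,
# using only the Python standard library.
from email import policy
from email.parser import BytesParser

def triage(path: str) -> None:
    with open(path, "rb") as f:
        msg = BytesParser(policy=policy.default).parse(f)

    print("From:       ", msg["From"])
    print("Reply-To:   ", msg["Reply-To"])
    print("Subject:    ", msg["Subject"])
    print("Auth checks:", msg["Authentication-Results"])  # SPF/DKIM/DMARC results, if stamped

    # The Received chain (read bottom-up) shows which hosts relayed the message.
    for hop in msg.get_all("Received", []):
        print("Received:   ", " ".join(hop.split()))

    for part in msg.iter_attachments():
        print("Attachment: ", part.get_filename(), part.get_content_type())

if __name__ == "__main__":
    triage("reported_message.eml")  # hypothetical reported email
```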

Of course, part of this is to understand which users have done what with the email – opened, clicked, downloaded, and so on. And if it is malicious, they have to undertake the sometimes laborious exercise of purging the email from all respective mailboxes, which may or may not involve more teams and consoles.

All of this is massively open to time lag and error. When speed is of the essence, we are at the mercy of processes and procedures, which may or may not be slick.

Some of this pain is taken away by the “one-click” report button that many solutions provide, although that still does nothing for the back-end investigatory processes.

Now back to the simulation for a moment. If you want to change human behaviour, this will not happen quickly, or universally. People take time to change and some will still click the link, or open the attachment, regardless. So you really need to ensure that your back-end security processes and team engagements are as slick as can be.

Multi-faceted simulations

This does not paint the simplest of pictures, but that does not mean it is not a good thing. What, hopefully, it does serve to highlight is that simulations need to be multi-faceted. They need to factor in different threat vectors – not just email.

They also need to be operationalised. They need to work when an actual rogue email comes along. They need to be twinned with operational security processes, which include engagement with wider teams in IT, and may include suppliers. Training needs to be pertinent, and please, for the love of some higher being, mobile-friendly.

The desire should be that in raising awareness, we make it possible for one employee to protect the whole organisation. Swiftness and ease of reporting, coupled with seamless processes to investigate and remediate. Ideally, that would be measured in some way to show progress and also as a means of rewarding the vigilant user, whose actions helped to safeguard the organisation.

Too often, raising awareness through phishing simulation is a tick-box exercise that does little to actually raise awareness, unless it is done right – much like the pointless mandated security training that we force users to pay no attention to year after year, while patting ourselves on the back for a job well done.

With compromises on the rise, and phishing still being the prime entry point, you really need to look at the bigger picture rather than just phishing your employees.