Human error is to blame for poor cloud security, not the infrastructure itself, warns Claranet

Global technology services provider points to automation and fully accredited partners as a way to avoid cloud security vulnerabilities.

A lack of knowledge and an overreliance on manual change processes are leading many businesses to jeopardise the security of their cloud deployments, global technology services provider Claranet warns today.

The warning follows a report published by McAfee this week, which found that the average business has approximately 14 improperly configured IaaS instances running at any given time and that roughly one in every 20 AWS S3 buckets is left wide open to the public internet. The researchers also estimate that roughly 5.5 per cent of all AWS S3 storage instances are set to “world read”, allowing anyone who knows the address of the S3 bucket to see its contents.

Commenting on the findings, Steve Smith, Senior Site Reliability Engineer and AWS Team Lead at Claranet, said:

“The cloud security challenges highlighted in this report have little to do with the platform itself and everything to do with the people using it; in our experience, people are the biggest weakness here. The major cloud providers like AWS set a lot of sensible defaults designed to support secure configuration – for example, S3 buckets are now private by default – but unfortunately, it’s very easy to get things wrong if you don’t know how to use the platform.”

“We’ve seen many AWS configurations that end-user businesses have developed themselves or built with partners that don’t have the right experience, and, frankly, these configurations can be all over the place. When internal IT teams create these environments themselves, mistakes can occur when they don’t have the depth of knowledge or experience to follow best practice.”

“A click of a button or a slight configuration change can have a major impact on your security posture, so it’s important to get a firm grip on access controls and have safeguards in place to catch mistakes before they hit the production environment.”

“Developing infrastructure as code – effectively, templated scripts that will create infrastructure in any public cloud environment – helps here because it makes mistakes more difficult to make. Any change to the code is peer-reviewed during the development lifecycle, making it much less likely that errors will reach the production environment and ensuring that every change can be tracked and audited. It’s also good practice to run that code from a centralised location – a CI/CD server, for example – so that only that machine can apply configuration changes and there is no way to make changes manually.”
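As a minimal sketch of the approach Steve describes (our illustration, not Claranet’s code), the following Python snippet assumes AWS CDK v2 and uses hypothetical stack and bucket names. It declares an S3 bucket with public access blocked and encryption enabled, so any change to the bucket’s configuration must go through a code review rather than an ad-hoc console click.

```python
# Hedged infrastructure-as-code sketch using the AWS CDK (v2) in Python.
# "StorageStack" and "LogsBucket" are illustrative names, not from the article.
from aws_cdk import App, Stack, RemovalPolicy
from aws_cdk import aws_s3 as s3
from constructs import Construct


class StorageStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # The bucket's security posture is declared in code, so changes to it
        # are peer-reviewed and audited via version control.
        s3.Bucket(
            self,
            "LogsBucket",
            block_public_access=s3.BlockPublicAccess.BLOCK_ALL,  # no public reads
            encryption=s3.BucketEncryption.S3_MANAGED,           # encrypt at rest
            versioned=True,                                      # keep object history
            removal_policy=RemovalPolicy.RETAIN,                 # keep data if the stack is deleted
        )


app = App()
StorageStack(app, "StorageStack")
app.synth()
```

Run from a pipeline (for example, `cdk deploy` on the CI/CD server), the synthesised template becomes the only route through which the bucket’s settings change, which is the safeguard against manual drift that Steve describes.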

Steve concluded by pointing to AWS’s Well-Architected Framework, a programme designed to help AWS users build the most secure, high-performing, resilient, and efficient infrastructure for their applications, as a key way for users to gain peace of mind about their cloud deployments.

“AWS has set up a review scheme, the AWS Well-Architected Framework, to address these very issues and give users the assurance that everything is configured securely and as it should be. Qualified AWS partners can conduct comprehensive, free reviews of existing AWS architectures, checking things like access policies and change processes, and advise on the best way forward to safeguard security.”
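To illustrate the kind of access-policy check such a review covers (our own example, not part of the AWS programme), the short boto3 sketch below lists an account’s S3 buckets and flags any that lack a full public-access block – the misconfiguration behind the “world read” buckets cited in the McAfee findings.

```python
# Hedged sketch: flag S3 buckets without a full public-access block.
# Requires AWS credentials with s3:ListAllMyBuckets and s3:GetBucketPublicAccessBlock.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        cfg = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        fully_blocked = all(cfg.values())  # all four block settings must be enabled
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            fully_blocked = False  # no public-access block configured at all
        else:
            raise
    if not fully_blocked:
        print(f"Review needed: {name} is not fully protected against public access")
```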
