Security Think Tank: Your path to understanding attack paths

The complexity of corporate IT systems has grown significantly in the past 10 years: first with the move from fixed on-premise systems to the cloud, and latterly with the growth of web apps and cloud-based services providing new, more efficient ways of doing business.

While some smaller organisations may be fully cloud-based, the vast majority of organisations have a mix of on-premise IT, cloud or hybrid cloud, and use third-party systems and web apps for internal or customer-facing services.

While this has provided a significant increase in capability and efficiency, it has also brought complexity, both technically and organisationally, with external parties such as cloud service providers and developers having security responsibilities for the software or services they provide.

Over the same period, attackers have become more sophisticated, with targeted attacks typically using several vulnerabilities to gain a foothold, escalate their privileges, then move to other hosts and servers within the network.

There will then be yet more exploitation to maintain persistence – and the weaknesses used will not just be software vulnerabilities, but could be errors in cloud configuration or identity and access management (IAM), or the result of a supply chain attack on a software or service provider.
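This chain of footholds can be reasoned about as a path through a graph of hosts and the weaknesses that link them. As a minimal sketch – the host names, weaknesses and the choice of the networkx library are illustrative, not a prescribed tool – the following enumerates possible attack paths from an initially compromised workstation to a sensitive server:

```python
# Minimal sketch: modelling attack paths as routes through a graph.
# Host names and weakness labels are hypothetical.
import networkx as nx

g = nx.DiGraph()
# Each edge means "an attacker on A can reach B by exploiting X".
g.add_edge("workstation", "file-server", via="unpatched SMB service")
g.add_edge("file-server", "db-server", via="shared admin password")
g.add_edge("workstation", "db-server", via="exposed management port")

# Enumerate every route from the initial foothold to the crown jewels.
for path in nx.all_simple_paths(g, "workstation", "db-server"):
    steps = [g.edges[a, b]["via"] for a, b in zip(path, path[1:])]
    print(" -> ".join(path), "|", "; ".join(steps))
```

Even a toy model like this makes the core point visible: the direct route and the indirect route both exist, and defence means cutting every path, not just the most obvious one.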

Such vulnerabilities and misconfigurations can be addressed to some extent through vulnerability scanning and automated cloud policy verification tools that check configurations against a high-level policy, but they can never be eliminated entirely.

The MITRE ATT&CK framework identifies nine main techniques that attackers use to gain initial access.

The majority of these – such as drive-by compromise, exploitation of public-facing applications, phishing, replication through removable media, or the use of stolen accounts – will typically provide only user-level access.

This allows the attacker to access the information available to that user, but does not give full control. To get further, the attacker needs to exploit one vulnerability to escalate privileges to administrator and escape that initial host, and another to gain a foothold on a second host or server.

Similarly, where an organisation hosts web applications, exploitation of a vulnerability or misconfiguration in an external-facing web app could give access to an underlying database, or direct access to the operating system and, through that, to other systems by exploiting further vulnerabilities.

While customer-facing and internal systems should be kept separate, they often are not, and it can be possible to jump from one platform or system to another.

The most likely connection will be a common IAM system, particularly if users’ Windows Domain passwords are used across different systems – which is not uncommon. However, if there is any connection between two systems, then poor configuration or unmitigated vulnerabilities could allow an attacker to move between them.
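One way to spot this kind of credential overlap is to compare password hashes across account stores. The sketch below assumes an unsalted hash format – as with Windows NTLM – where identical hashes imply identical passwords; the systems, users and hash values are all hypothetical:

```python
# Sketch: flag accounts whose password hash is identical across systems.
# Only meaningful where the hash scheme is unsalted (e.g. NTLM), so an
# identical hash implies the same password. All data is hypothetical.
from collections import defaultdict

hashes = {
    "windows-domain": {"alice": "a1b2c3", "bob": "d4e5f6"},
    "vpn-gateway":    {"alice": "a1b2c3", "bob": "990011"},
}

seen = defaultdict(set)  # (user, hash) -> systems it appears on
for system, accounts in hashes.items():
    for user, h in accounts.items():
        seen[(user, h)].add(system)

for (user, h), systems in seen.items():
    if len(systems) > 1:
        print(f"{user} reuses the same password across: {sorted(systems)}")
```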

This risk cannot be properly addressed without an accurate inventory of assets and interconnections, which needs to be up to date at all times. 
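What that inventory needs to capture can be quite simple. A minimal sketch of the kind of machine-readable records involved – field names and example values are illustrative only:

```python
# Sketch of a minimal asset-and-interconnection inventory.
# Field names and example values are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Asset:
    name: str
    zone: str                      # network zone/segment the asset sits in
    owner: str                     # team accountable for patching it
    services: list[str] = field(default_factory=list)

@dataclass
class Interconnection:
    source: str                    # asset name
    destination: str               # asset name
    protocol: str                  # e.g. "HTTPS", "SMB", "LDAP"
    purpose: str                   # why this link exists at all

inventory = [Asset("web-01", "dmz", "web-team", ["nginx"])]
links = [Interconnection("web-01", "db-01", "TCP/5432", "app database")]
```

Records like these are exactly what feeds an attack-path graph of the kind sketched earlier, and the "purpose" field forces each interconnection to be justified rather than merely catalogued.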


Once this is in place, the first step in addressing the risk should be zoning/segmentation, with appropriate monitoring of inter-zone traffic. This should be followed by regular vulnerability scanning and patching to remove the vulnerabilities found or, where patching is not possible, mitigating them so they cannot be exploited. Mitigation may be at the level of an individual vulnerability, or a system-level control addressing several vulnerabilities at once.
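As an illustration of what checking inter-zone traffic against a segmentation policy might look like – the zones, allow-list and flow records here are all hypothetical:

```python
# Sketch: flag observed flows that cross zones outside an allow-list.
# Zones, the allowed matrix and the flow records are hypothetical.
ALLOWED = {("dmz", "app"), ("app", "db")}   # permitted zone-to-zone flows

zone_of = {"web-01": "dmz", "app-01": "app", "db-01": "db", "ws-17": "office"}

flows = [("web-01", "app-01"), ("ws-17", "db-01")]  # e.g. from flow logs

for src, dst in flows:
    pair = (zone_of[src], zone_of[dst])
    if pair[0] != pair[1] and pair not in ALLOWED:
        print(f"ALERT: unexpected inter-zone flow {src} -> {dst} ({pair})")
```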

For the cloud, misconfigurations can be identified using tools that verify configurations against a high-level security policy, allowing them to be corrected. This does, of course, assume such a policy is in place for the tool to check against.
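As a fragment of what such a check can look like, the sketch below tests one stand-in policy rule – "no S3 bucket may allow public access" – using AWS's boto3 library; a real tool would check many such rules derived from the organisation's actual policy:

```python
# Sketch: verify every S3 bucket against one simple policy rule
# ("public access must be blocked"). Requires AWS credentials; the
# rule itself is a stand-in for a real organisational policy.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        cfg = s3.get_public_access_block(Bucket=name)[
            "PublicAccessBlockConfiguration"]
        compliant = all(cfg.values())   # all four block settings enabled
    except ClientError:
        compliant = False               # no public-access block configured
    if not compliant:
        print(f"policy violation: {name} does not fully block public access")
```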

For web apps, or other bespoke software development, security coding rules and the use of static and dynamic code analysis as part of the DevOps testing cycle will help eliminate common problems such as buffer overflows and cross-site scripting (XSS) vulnerabilities.
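As a small illustration of the kind of defect those rules and tools exist to catch, here is the classic reflected XSS pattern and its fix, sketched in Python:

```python
# Sketch: the classic reflected-XSS defect and its fix.
import html

def render_greeting_unsafe(name: str) -> str:
    # BAD: untrusted input is interpolated straight into HTML, so
    # name = "<script>...</script>" executes in the victim's browser.
    return f"<p>Hello, {name}</p>"

def render_greeting_safe(name: str) -> str:
    # GOOD: encode untrusted input before it reaches the page.
    return f"<p>Hello, {html.escape(name)}</p>"

print(render_greeting_safe("<script>alert(1)</script>"))
# <p>Hello, &lt;script&gt;alert(1)&lt;/script&gt;</p>
```

A static analyser enforcing a "never interpolate untrusted input into HTML" rule would flag the first function automatically.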

There will inevitably be vulnerabilities that can’t be patched or mitigated, as well as unknown misconfigurations. Compensating controls are therefore needed for the weaknesses that can’t be fixed, or that you don’t know about.

If not already in place, multifactor authentication (MFA) for administrator access, remote-access virtual private networks and other sensitive systems will help mitigate privilege escalation and the use of stolen credentials – for example, credentials captured by password sniffers or key loggers.
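For reference, the sketch below shows the time-based one-time password (TOTP) check behind most authenticator-app MFA, using the pyotp library; in practice the secret is enrolled once per user and stored server-side, not generated on the fly as here:

```python
# Sketch: TOTP verification as used by most authenticator-app MFA.
# In reality the secret is enrolled once per user (e.g. via a QR code)
# and stored server-side, not generated at check time.
import pyotp

secret = pyotp.random_base32()          # shared with the user at enrolment
totp = pyotp.TOTP(secret)

submitted_code = totp.now()             # stand-in for the user's input
# valid_window=1 tolerates one 30-second step of clock drift.
if totp.verify(submitted_code, valid_window=1):
    print("second factor accepted")
else:
    print("second factor rejected")
```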

Zoning and additional monitoring can also help here. Limiting inter-zone traffic to what would be expected provides a system-level mitigation for known vulnerabilities, while monitoring that traffic can help identify, or prevent, the exploitation of unknown vulnerabilities and misconfigurations.
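Extending the earlier allow-list idea, monitoring can also flag inter-zone traffic that has never been seen before – a useful signal that an unknown weakness is being probed. A sketch, with hypothetical baseline and live flows:

```python
# Sketch: alert on inter-zone (src zone, dst zone, port) combinations
# absent from a learned baseline. All data here is hypothetical.
baseline = {("dmz", "app", 443), ("app", "db", 5432)}

live_flows = [("dmz", "app", 443), ("office", "db", 3389)]

for src_zone, dst_zone, port in live_flows:
    if src_zone != dst_zone and (src_zone, dst_zone, port) not in baseline:
        print(f"investigate: new inter-zone flow {src_zone} -> "
              f"{dst_zone} on port {port}")
```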

Finally, an independent penetration test on the system would validate the mitigations of known vulnerabilities and could also identify misconfigurations, but it won’t be able to identify unknown vulnerabilities.

Today’s larger IT systems tend to be complex and are often built up piecemeal over time, through many reconfigurations of equipment and systems and the introduction of new applications and services. Such systems are almost certain to contain vulnerabilities and configuration errors – and if they do, those weaknesses will eventually be exploited.
