It seems every week you read a story about unsecured databases being exposed on the Internet, information being held for ransom, or even children’s teddy bears being attacked without their owners’ knowledge.
While it’s easy to get caught up in exciting new attacks, vulnerabilities and acronyms, it’s important for everyone to remember that although network security is best done in layers, the most critical layer is a strong foundation.
The majority of attacks can be mitigated by applying the Australian Signals Directorate’s (ASD’s) ‘Essential Eight’ (previously the Top 4), but those recommendations are targeted at desktop environments because that is where most attacks happen. My focus in this article is on public-facing servers and networks, as these often contain the ‘crown jewels’ and are, by nature of their functional requirements, exposed to the entire world.
Creating and testing firewall policies
Every firewall policy starts simple and with the best of intentions, only to grow and change until it no longer resembles its former self. Firewall management is so vital that it is the first requirement of the Payment Card Industry Data Security Standard (PCI DSS). Even if you are not required to be audited against PCI DSS or similar standards, you can borrow some of its practices to increase the security of any network.
PCI DSS requirement 1.1.1 is “A formal process for approving and testing all network connections and changes to the firewall and router configurations”. This process can be as large or as formal as the business needs, but it should be performed and documented to ensure that additions and changes are justified.
You also want to ensure testing is performed, usually by the data owner or change requester, so that incorrect or incomplete firewall rules don’t pile up as one small request after another is layered on until the application finally works. It also helps if the change requester does their homework and provides all address/port/protocol allowances at once, usually with help from the software vendor’s documentation.
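As an illustration, here is a minimal sketch of the kind of post-change check a requester might run to confirm the allowed ports are actually reachable. It uses only the Python standard library; the host names and ports are hypothetical examples, not taken from any real environment:

# post_change_check.py — confirm required ports are reachable after a firewall change.
# The (host, port) pairs below are hypothetical and would come from the change request.
import socket

REQUIRED = [
    ("web01.dmz.example.com", 80),
    ("web01.dmz.example.com", 443),
]

def tcp_reachable(host, port, timeout=5):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host, port in REQUIRED:
    status = "OK" if tcp_reachable(host, port) else "BLOCKED"
    print(f"{host}:{port} {status}")

A check like this only verifies connectivity; the requester should still confirm the application itself behaves as expected after the change.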
Documenting firewall policy intent
PCI DSS requirement 1.1.6 is “Documentation of business justification and approval for use of all services, protocols, and ports allowed, including documentation of security features implemented for those protocols considered to be insecure”. This may seem procedurally excessive, but when taken in the context of requirement 1.1.7, “… review firewall rule sets at least every six months”, you’ll see how it can actually save you time and effort in the long run.
An example of how to implement 1.1.6 is to comment each individual line or rule of a firewall policy with a globally unique identifier and a very short description. Most firewalls allow this information to be stored within the firewall policies themselves.
For example: [DMZ-04125] Allow HTTP and HTTPS from Internet to web servers
You then keep a separate controlled document that links each unique identifier to a detailed description, to comply with 1.1.6. This is sometimes called a narrative and includes descriptions of the source, destination, ports/protocols, and details of the traffic being allowed. For larger environments, you can also include the data owner or change requester, and the change tickets for the creation or modification of each individual narrative.
For example: Title: [DMZ-04125] Allow in HTTP and HTTPS to web servers
Data owner: Web server administrators
Changes: CHG0002129, CHG0003188
Description: Traffic from the public Internet is allowed to all web servers in the DMZ on HTTP (80/TCP) and HTTPS (443/TCP).
Traffic is unencrypted on HTTP, but this does not contain sensitive information or PII as web servers are configured to always redirect to encrypted HTTPS. Traffic is encrypted on HTTPS and sometimes contains PII as some web servers request public users to enter their personal information.
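Narratives don’t have to live in a spreadsheet. Keeping them in a structured, machine-readable form makes the six-monthly review much easier to script. As one possible approach (the field names below are purely illustrative, not from any standard), the example above could be captured as a Python record:

# One narrative entry per firewall rule identifier; the fields mirror the
# controlled document above. Structure and field names are illustrative only.
narratives = {
    "DMZ-04125": {
        "title": "Allow in HTTP and HTTPS to web servers",
        "data_owner": "Web server administrators",
        "changes": ["CHG0002129", "CHG0003188"],
        "source": "Public Internet",
        "destination": "All web servers in the DMZ",
        "ports": ["80/TCP", "443/TCP"],
        "description": "HTTP redirects to HTTPS; HTTPS may carry PII "
                       "entered by public users.",
    },
}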
Regular firewall policy review
Now, when it comes time to review your firewall rule sets twice yearly to comply with 1.1.7, you can run two relatively simple tasks in parallel: comparing the active firewall rules to their linked narratives, and using the narrative document to check whether each traffic allowance is still required. The latter can be done through interviews with the data owners, or by simply distributing the individual narratives and having the data owners reply with a yes or no and any comments.
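If both the firewall rule comments and the narratives carry the same identifiers, the first task can be largely automated. Here is a rough sketch, assuming you can export the active rule identifiers from your firewall into a text file with one identifier per line (the file names and export format are hypothetical):

# review_rules.py — compare rule IDs exported from the firewall against the
# narrative document. File names and formats are hypothetical examples.

def load_ids(path):
    """Read one identifier per line, ignoring blank lines and '#' comments."""
    with open(path) as f:
        return {line.strip() for line in f
                if line.strip() and not line.startswith("#")}

rule_ids = load_ids("active_firewall_rule_ids.txt")   # e.g. DMZ-04125
narrative_ids = load_ids("narrative_ids.txt")

# Rules with no business justification on record — candidates for removal.
for rule in sorted(rule_ids - narrative_ids):
    print(f"Rule {rule} has no matching narrative")

# Narratives with no active rule — stale documentation or a missing rule.
for narrative in sorted(narrative_ids - rule_ids):
    print(f"Narrative {narrative} has no matching firewall rule")

Anything flagged by a check like this feeds straight into the data-owner review described above.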
When you consider that some firewall rules allow anyone, from anywhere in the world, with any legitimate or malicious intent, to send arbitrary packets of data to your servers, it should make you stop and ensure your firewall rules allow only the minimum ports and protocols needed to make your applications work. Once you have the basics of network security taken care of, you can then look at the additional layers of protection provided by an array of technology acronyms like IPS, WAF, CDN, DDoS protection, and more.
Leave a comment below if you have any suggestions for improving security basics, or want other areas covered in future articles.
Original post appeared on Macquarie Cloud Services Blog
The views expressed by the authors of this blog are their own and do not necessarily reflect the views of APNIC. Please note a Code of Conduct applies to this blog.
I think this article is simplistic, and conflates “compliance” and “process” with “security”.
How about security _design_? Do you have one firewall at the Internet border and hence explicitly trust everything and everyone inside, or do you split Dev/QA/Prod networks with their own separate firewalls, or do you put NSX firewalls in front of every VM? Do you have generic servers with exposed ports that need filtering, or do you use e.g. docker containers that only expose the public ports, or a load balancer that will only forward http(s) and drop all other traffic?
Do you have a “border” any more? Consider BYOD and staff mobile devices, B2B alliances, remote monitoring and management services, site or endpoint VPNs, Cloud services which store sensitive data or require access to your DC data. Many applications now send their traffic over TLS for privacy, which can also cloak malicious traffic through a border firewall all the way to the application.
“You also want to ensure testing is performed, usually by the data owner or change requester” — these people are unlikely to be experts in testing firewall restrictions. These people are likely to be much more interested in getting their application to work — opening more holes than are required and never cleaning up unneeded access.
“all address/port/protocol allowances … usually with assistance from the software vendor documentation” — don’t rely on that to be an accurate minimal set, e.g. they list all traffic regardless of direction. Often the requestor won’t/can’t discover what access is required until after the service goes into production.
Thanks,
John
Thanks for your feedback, John. I think we can use your suggestions for future blog posts on security.
This particular post was aimed at companies with existing infrastructure, and provides some guidance on their existing firewall policies, while trying to stay vendor, technology, and architecture agnostic.
As for testing of firewall policies, I simply meant that the data owner or change requester should test their application to ensure it operates as expected after the firewall policy change, not that they should actually perform any testing on the firewall itself.
Thanks again for taking the time to read and comment on our blog!
Jamie
Amazing article. Thanks for sharing the information.