This article continues our look at the basics of information security, with a focus on the server side.
Patching is so important that the Australian Signals Directorate’s Top 4 mentions it twice. From a security perspective, when a vulnerability is known, nothing mitigates it better than applying the patch that removes it. I won’t go into depth here, as hopefully the concept is already well understood.
If your data is even remotely important, you need to ensure you have regular backups and that at least some of them are kept offline, where they cannot be interfered with by the same misadventures that may impact live data on servers. Similar to Schrödinger’s cat, if you don’t regularly test your backups by doing a restore, then your backups are both valid and corrupt at the same time. Murphy’s Law, of course, gives worse odds: untested backups will always turn out to be corrupt when you need them the most.
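Testing a restore can be partly automated. The sketch below, a minimal illustration rather than a replacement for a proper backup tool, compares SHA-256 checksums of restored files against the originals; the function and parameter names are my own, not from any particular product.

```python
import hashlib
import os

def sha256_of(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 so large backups don't exhaust memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_restore(source_dir, restore_dir):
    """Compare every file under source_dir against its restored copy.

    Returns the relative paths that are missing or differ; an empty
    list means the test restore matched the source.
    """
    mismatches = []
    for root, _dirs, files in os.walk(source_dir):
        for name in files:
            src = os.path.join(root, name)
            rel = os.path.relpath(src, source_dir)
            restored = os.path.join(restore_dir, rel)
            if not os.path.exists(restored) or sha256_of(src) != sha256_of(restored):
                mismatches.append(rel)
    return mismatches
```

Run against a scratch restore of last night’s backup, a non-empty result is your early warning, long before Murphy gets a vote.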
Worthy of an entire article on its own, both system and application logs contain a wealth of information for multiple purposes.
Application logs should be descriptive enough to provide not just information for debugging errors, but also insight into security event successes and failures that would otherwise go unnoticed at the operating system level. If they are not, it may be worth having a word with the developers so they understand the security benefits of descriptive logs. Operating system logs are quite verbose by default, which leads to the next challenge: managing these logs.
The benefits of centralized logging far outweigh the effort of setting it up. Servers and applications can be configured to keep a local rolling log while forwarding a copy of each log entry to a central server. Once disparate logs are together in one location, you can start looking for activity that would normally fly under the radar on a single server but is actually part of a coordinated and distributed attack across the entire server fleet. You also have the ability to filter for only the useful or interesting log entries and forward them on to a SIEM for deeper analysis. This is an important step, as most SIEM products are licensed based on the volume of ingested logs.
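Most fleets do this with an agent such as rsyslog or syslog-ng, but the local-plus-forwarded pattern can be shown at the application level with Python’s standard library alone. The hostname below is a placeholder for your own collector, and a real deployment would prefer TCP or TLS transport over plain UDP syslog.

```python
import logging
import logging.handlers

def build_logger(central_host="central-syslog.example.com", port=514):
    """Keep a local rolling log while forwarding a copy of every record
    to a central syslog server (hostname is a placeholder)."""
    logger = logging.getLogger("app")
    logger.setLevel(logging.INFO)

    # Local rolling log: five files of 10 MB each.
    local = logging.handlers.RotatingFileHandler(
        "app.log", maxBytes=10 * 1024 * 1024, backupCount=5)

    # A copy of each record forwarded over UDP syslog.
    remote = logging.handlers.SysLogHandler(address=(central_host, port))

    fmt = logging.Formatter("%(asctime)s %(name)s %(levelname)s %(message)s")
    for handler in (local, remote):
        handler.setFormatter(fmt)
        logger.addHandler(handler)
    return logger
```

Every call such as `logger.warning("login failed for user admin")` then lands both in the local file and on the central server, where cross-fleet correlation becomes possible.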
Something commonly overlooked is ensuring all servers and network devices are using a common source for time synchronization. While on the surface this doesn’t appear to be security related, not having a common reference for time makes correlating logs between disparate systems difficult to impossible.
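In practice this usually means pointing every server and network device at the same small set of internal NTP sources. A minimal chrony configuration might look like the following; the hostnames are placeholders for your own time servers.

```
# /etc/chrony.conf -- point every server at the same internal sources
# (hostnames are placeholders; substitute your own NTP servers)
server ntp1.internal.example.com iburst
server ntp2.internal.example.com iburst
# Step the clock if it is badly wrong early on; slew gently afterwards
makestep 1.0 3
```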
Synchronized clocks are also a prerequisite for time-based one-time passwords (TOTP), which use the current time as part of the input to a cryptographic hash function.
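To see why clock drift breaks TOTP, here is a minimal RFC 6238-style sketch using only the standard library (use a vetted library such as pyotp in production). The current time, divided into 30-second steps, is the moving factor, so a client and server whose clocks disagree by more than a step or two will compute different codes.

```python
import hashlib
import hmac
import struct
import time

def totp(secret, at=None, step=30, digits=6):
    """RFC 6238 time-based one-time password (SHA-1 variant).

    `secret` is the shared key as bytes; `at` is a Unix timestamp
    (defaults to now). The timestamp is floored into `step`-second
    windows, which is why both ends need synchronized clocks.
    """
    counter = int((time.time() if at is None else at) // step)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

With the RFC 6238 test secret `b"12345678901234567890"` and timestamp 59, this produces the published test-vector value, which is a handy sanity check for any implementation.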
While we previously talked about firewalls from a network perspective, you also need to think about firewalling traffic from a server perspective.
Servers typically don’t need unfettered access to the Internet and should be restricted to the minimum access required. Servers should use internal update servers (for example, Windows Server Update Services and Red Hat Satellite) and have access to a secure jump server or bastion to allow moving data in and out. You can then allow very limited access to official websites for application updates that cannot be proxied internally.
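On Linux hosts, one way to express this is a default-deny outbound policy in nftables, permitting only the internal services the server actually needs. The following is an illustrative sketch, not a drop-in ruleset; the addresses and ports are placeholders for your own DNS, update, and jump servers.

```
# Example nftables egress policy: deny by default, allow the minimum
# (all addresses below are placeholders)
table inet egress {
    chain output {
        type filter hook output priority 0; policy drop;
        ct state established,related accept
        oifname "lo" accept
        ip daddr 10.0.0.53 udp dport 53 accept          # internal DNS
        ip daddr 10.0.0.80 tcp dport { 80, 443 } accept # internal update server
        ip daddr 10.0.0.22 tcp dport 22 accept          # jump server / bastion
    }
}
```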
Servers with excessive Internet access are commonly used by employees for non-business-related web browsing, which may lead to accidental infection and compromise of the server. If the server is compromised by any means, Internet access allows the attacker to easily exfiltrate data to any number of public file transfer servers.
Operating systems and applications come loaded with additional services, demo content, and configuration options that are not required for operating a production server. Many of these extras also include vulnerabilities that increase your attack surface and overall risk exposure.
You should perform a systematic inspection of everything on your servers and remove or disable anything not absolutely required for the server to fulfil its business function. This should be done for at least the production servers, preferably before they are put into production.
Remember, vulnerable test or development servers can still be compromised and used as a stepping stone to pivot and attack other servers internally, so don’t forget them in your scanning and mitigation activities. Similar to extraneous services, you should inspect all default configurations to ensure your servers (and network devices) are not vulnerable by default.
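As a starting point for that inspection, you want to know what is actually listening on each server; tools like `ss -tlnp` or a network scanner do this in practice. Purely as an illustration of where the data comes from on Linux, the sketch below reads `/proc/net/tcp`, where state code `0A` marks a listening socket (IPv6 listeners appear in `/proc/net/tcp6`, which the sketch ignores).

```python
def listening_tcp_ports(proc_file="/proc/net/tcp"):
    """Return the set of local IPv4 TCP ports in LISTEN state (Linux only).

    Each line of /proc/net/tcp encodes the local address as HEXIP:HEXPORT
    and the socket state as a hex code; 0A means LISTEN.
    """
    ports = set()
    with open(proc_file) as f:
        next(f)  # skip the header row
        for line in f:
            fields = line.split()
            local_address, state = fields[1], fields[3]
            if state == "0A":
                ports.add(int(local_address.split(":")[1], 16))
    return ports
```

Anything in that set which you cannot map back to a documented business requirement is a candidate for removal or disabling.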
Hopefully, this has given you some guidance for improving your server security, or at least a reminder to skilled practitioners not to forget the basics when being bombarded with new acronyms and silver bullet solutions.
Leave a comment below if you have any suggestions for improving security basics, or want other areas covered in future articles.
Original post appeared on Macquarie Cloud Services’ Blog
The views expressed by the authors of this blog are their own and do not necessarily reflect the views of APNIC. Please note a Code of Conduct applies to this blog.