In our latest survey on cloud adoption, 57% of enterprise organizations reported running their IT infrastructure as a hybrid cloud, i.e., a mix of public cloud infrastructure-as-a-service (IaaS) and some form of private cloud data center. At McAfee, we spend a lot of time speaking about the benefits of using public cloud infrastructure providers like AWS and Azure. We spend less time discussing private cloud, which today is increasingly software-defined, earning the name “software-defined data center” or SDDC.
Infrastructure designed to operate as an SDDC provides the flexibility of cloud with the most control possible over IT resources. That control enables well-defined security policies that can go beyond what many teams are used to having at their disposal in a traditional data center, particularly when it comes to micro-segmentation.
To start, the term “software-defined data center” describes an environment where compute, networking, and often storage are all virtualized and abstracted above the physical hardware they run on. VMware handles the largest share of these virtualized deployments, a natural extension of its long history of transforming single-purpose servers into far more cost-effective virtual server infrastructure. The big change here is adding network virtualization through its NSX technology, which frees the network from physical constraints and allows it to be software-defined.
In a physical network, your infrastructure has a perimeter into and out of which you allow traffic. This limits your control to the physical points where you can intercept that traffic. In a software-defined network (a critical part of a software-defined data center), your network can be controlled at every logical point in the virtual infrastructure. For a simple example, say you have 100 VMs running in 3 compliance-based groupings. Here is how your policy could be constructed at a high level in an SDDC:
Group 1: PCI-compliant storage. Every VM in this group is tagged for Group 1, and network traffic is limited to internal IPs only.
Group 2: GDPR-compliant application with customer data. Again, each VM is tagged for its group to share the same policy, this time enforcing encryption and read-only access.
Group 3: Mixed-use, general purpose VMs with varying compliance requirements. In this case, each VM needs its own policy. Some may be limited to single-IP access, others open to the internet. A per-VM policy effectively introduces micro-segmentation to your infrastructure.
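To make the grouping concrete, here is a minimal sketch of this tag-based model in Python. The class names, tags, and policy fields are illustrative assumptions for this post, not the NSX API: each VM inherits its group’s policy by tag, and a per-VM override captures the Group 3, micro-segmentation case.

```python
from dataclasses import dataclass
from typing import Optional, Set

# Hypothetical policy model -- an illustration of tag-based grouping
# with per-VM overrides (micro-segmentation), not a real NSX object model.

@dataclass
class Policy:
    allowed_sources: Set[str]          # e.g. {"internal"} or specific IPs
    encrypted: bool = False
    read_only: bool = False

@dataclass
class VM:
    name: str
    tag: str                           # compliance-group tag
    override: Optional[Policy] = None  # per-VM policy (Group 3 style)

# Group-level policies keyed by tag, mirroring the three groups above.
GROUP_POLICIES = {
    "group1-pci":  Policy(allowed_sources={"internal"}),
    "group2-gdpr": Policy(allowed_sources={"internal"},
                          encrypted=True, read_only=True),
}

def effective_policy(vm: VM) -> Policy:
    """A per-VM override wins; otherwise the VM inherits its group policy."""
    return vm.override if vm.override is not None else GROUP_POLICIES[vm.tag]

# Example: a general-purpose VM carries its own single-IP policy,
# while a PCI storage VM simply inherits from its group.
web = VM("web-01", "group3-general",
         override=Policy(allowed_sources={"203.0.113.7"}))
db = VM("db-01", "group1-pci")

print(effective_policy(web).allowed_sources)  # {'203.0.113.7'}
print(effective_policy(db).allowed_sources)   # {'internal'}
```

The key design point is that policy follows the tag, not the physical host: move a VM anywhere in the data center and its effective policy travels with it.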
The point of these basic examples is to clarify the opportunity that a software-defined data center offers to fine-tune policy for your assets held on-premises. If you’re also running in AWS or Azure, then what you’ve kept on-premises likely consists of your most sensitive assets, which require the most stringent protection. Controlling policy down to the individual VM gives you this flexibility. Once you’re controlling policy at the VM level, you can also monitor and control the communication between those VMs (i.e., east-west or VM-to-VM traffic), stopping lateral threat movement from one VM to another within your data center.
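The east-west idea above can be sketched as a default-deny check between VMs. The rule table and function below are hypothetical, assumed for illustration rather than drawn from any VMware API: traffic passes only when the source VM explicitly allows the destination, so a compromised VM cannot reach peers it was never permitted to talk to.

```python
from typing import Dict, Set

# Hypothetical per-VM allow-lists for east-west (VM-to-VM) traffic.
# Anything not explicitly allowed is dropped -- default deny.
EAST_WEST_RULES: Dict[str, Set[str]] = {
    "app-01": {"db-01"},   # the app tier may reach its database
    "db-01":  set(),       # the database initiates no east-west traffic
}

def allow_east_west(src_vm: str, dst_vm: str) -> bool:
    """Permit traffic only if src explicitly allows dst; unknown VMs get nothing."""
    return dst_vm in EAST_WEST_RULES.get(src_vm, set())

print(allow_east_west("app-01", "db-01"))  # True
print(allow_east_west("db-01", "app-01"))  # False -- lateral movement blocked
```

Even if an attacker lands on db-01, the default-deny posture means that VM has no permitted path back into the rest of the data center.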
If certain assets simply can’t enter the public cloud and you want to improve your resource efficiency and protection strategy, consider building out a plan to completely virtualize your data center, including the network. To help you with that strategy, we partnered with VMware and research firm IDC to write a short paper on the security benefits of adopting a software-defined data center. You can read it here to dive deeper into this topic.
The post Moving to a Software-Defined Data Center and Its Impact on Security appeared first on McAfee Blogs.