
How to Monitor the Performance of a Software Defined Everything-based Infrastructure

Video
 


Dave Hegenbarth, Director of Systems Engineering - Strategic Partnerships, describes how to monitor the performance of a software defined everything-based infrastructure.


Transcription:

Hi, I'm Dave Hegenbarth, Director of Systems Engineering for Global Strategic Partnerships at SevOne. Thanks for joining.

The days of service providers and large enterprises buying rack after rack of specialized network, compute, and storage hardware to launch a new service or application are coming to an end. Our customers, and the market as a whole, are on a journey to Software Defined Everything, where network, compute and storage, along with automation and orchestration, will allow them to innovate and deliver applications and services much more quickly through a virtualized infrastructure. At SevOne, we're committed to making that journey with our customers and our technology partners, and to providing complete performance visibility through the technology transition that is about to happen.

In this white board session, we're going to describe the value proposition of monitoring a software defined network using SevOne. Let me describe a little bit about why we would want a software defined infrastructure, and what we would do with it. One of the easiest use cases to understand is the rapid development of some new business service or application. This could be applicable to both enterprises and service providers. The simple example I'm going to give today is this: someone in marketing wants to stand up a new website. We have "Dave" here in marketing. A little self-serving, but that's okay. Dave's thinking about the fact that he needs a new website for the business.

A website has a lot of different components. I'm going to focus a little bit on what would happen within the realm of virtualization. The website needs to be accessible, but we need it to be secure. We're probably going to front-end our website with a firewall. This firewall is not going to be a bare metal rack-and-stack, where we put it in, turn it on, get it configured, and have a lot of different hands involved. This firewall is actually going to be built virtually via what we call a controller. I'll explain a little more about that.

What else do we need for our website? Since we want to be secure, we probably want some packet inspection, so we're going to put a packet inspection module in there. Then, because this website is going to be very popular, we want a load balancer so we can distribute the load as users come into our website. We're going to deploy a load balancer as well.

These devices are also virtual devices, and have been deployed by using a software defined controller. This controller could be Contrail, from Juniper Networks. It could be Alcatel-Lucent Nuage. It could be the Cisco ACI controller. There are a number of different manufacturers who are now getting into this space. It could be what we've seen most frequently in the lab, OpenStack, with Canonical and Ubuntu running the OpenStack controller, which allows us to very quickly build a virtual environment.
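To make that concrete, here is a minimal sketch (assuming an OpenStack environment) of how part of this virtual environment could be created programmatically with the openstacksdk Python library; the cloud profile, image, and flavor names are placeholders for illustration.

```python
# Minimal sketch: creating part of the virtual environment through the
# OpenStack APIs. Assumes a cloud entry named "demo-cloud" exists in
# clouds.yaml and that the image/flavor names below are available.
import openstack

conn = openstack.connect(cloud="demo-cloud")

# A tenant network and subnet for the new website's components.
net = conn.network.create_network(name="marketing-web-net")
subnet = conn.network.create_subnet(
    name="marketing-web-subnet",
    network_id=net.id,
    ip_version=4,
    cidr="10.10.10.0/24",
)

# A VM that will run one of the virtual network functions (for example a
# virtual firewall image) on top of an existing hypervisor.
image = conn.compute.find_image("vfirewall-image")    # illustrative image name
flavor = conn.compute.find_flavor("m1.medium")
server = conn.compute.create_server(
    name="marketing-web-fw",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": net.id}],
)
server = conn.compute.wait_for_server(server)
print(f"Firewall VM {server.name} is {server.status}")
```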

In that virtual environment, as I said, we have a firewall, an IPS or packet inspection module, and a load balancer, in case this website gets very popular. I'm just going to represent the load balancing here. I probably have web servers doing the front end, and then probably a couple of database servers; I'm just going to draw little databases off here. The end goal, obviously, is to get Dave his website. We need to do that very quickly.

The way we've done this in virtualization is that the application makes northbound API calls to a controller, and gives us a GUI that I can use to build my website. As I'm clicking things in the GUI, I'm building out these virtual devices. They each sit as a VM on top of what we call a hypervisor, or physical server. This physical server has been there the whole time; we've had virtualized servers for a long time. Now we're talking about not only virtualized servers to provide the "www" front end or the database back end, but virtual servers that are also running special software that turns them into a firewall, an IPS, or a load balancer. The firewall could be a virtual firewall from Juniper Networks, "Firefly." It could be a virtual firewall from F5. It could be from a number of different manufacturers. The same goes for the load balancer.
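Those GUI clicks ultimately become northbound API calls. The sketch below shows the general shape of such a call in Python; the controller address, endpoint path, and payload fields are hypothetical, since each controller (Contrail, Nuage, ACI, OpenStack) exposes its own northbound API.

```python
# Hedged sketch of a "northbound" call: the GUI turns a click into a REST
# request like this one. URL, token, and payload are hypothetical.
import requests

CONTROLLER = "https://sdn-controller.example.com"   # hypothetical address
TOKEN = "replace-with-controller-auth-token"        # hypothetical credential

payload = {
    "name": "marketing-web-fw",
    "type": "virtual-firewall",        # which VNF image to instantiate
    "attach_to": "marketing-web-net",  # virtual network built earlier
}

resp = requests.post(
    f"{CONTROLLER}/northbound/v1/service-instances",  # hypothetical endpoint
    json=payload,
    headers={"X-Auth-Token": TOKEN},
    timeout=30,
)
resp.raise_for_status()
print("Controller created instance:", resp.json().get("id"))
```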

These are spun up in real time, and I'm going to have to spin up a couple of VMs to handle my web traffic as well. The neat thing is, it's all been done through a GUI; that's an app provided to you, whether it's OpenStack, Contrail, or Nuage, that tells a set of controllers how to configure this virtual network such that I can reach the website that I just built. What you have here, from my desk to the actual website, is what we call a service chain. The service chain is those components that I just spun up virtually, on top of x86 hardware. That service chain is Dave's application.

Another term you might hear is endpoint group. I might be in the marketing endpoint group. That allows my traffic to go through this chain to the marketing website. Someone else, maybe from development, builds a new website for their developers. They would be in the developer endpoint group. They would pass through this same service chain, only differently: their endpoint group would allow them certain resources, but not the marketing website; probably just the development website.
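A toy Python model, not any vendor's API, can help keep those two ideas straight: the service chain is an ordered list of virtual functions, and the endpoint group is the policy that decides what traffic entering that chain may reach.

```python
# Toy model of a service chain and endpoint groups, for illustration only.
from dataclasses import dataclass, field

@dataclass
class ServiceChain:
    name: str
    functions: list = field(default_factory=list)   # ordered VNFs

@dataclass
class EndpointGroup:
    name: str
    allowed_sites: set = field(default_factory=set)

chain = ServiceChain("web-chain", ["vFirewall", "vIPS", "vLoadBalancer"])
marketing = EndpointGroup("marketing", {"marketing-website"})
developers = EndpointGroup("developers", {"development-website"})

def admit(group: EndpointGroup, destination: str) -> bool:
    """Both groups traverse the same chain; only the policy differs."""
    return destination in group.allowed_sites

print(chain.functions)                          # ['vFirewall', 'vIPS', 'vLoadBalancer']
print(admit(marketing, "marketing-website"))    # True
print(admit(developers, "marketing-website"))   # False
```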

The other thing about service chains is that, because they're spun up virtually and, as we've seen, sit on this particular hypervisor, what if I need more? What if there are now millions of people who want to go to this website, and I need to scale? What happens is, we simply build another hypervisor, a physical box connected to the physical network, and then these controllers will instantiate more instances of the firewall, IPS, and load balancer. Probably a lot of load balancers, as that traffic comes through, to handle that load.
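A hedged sketch of that scale-out decision in Python; the capacity figure and the spawn_instance() helper are hypothetical stand-ins for a real controller call like the one shown earlier.

```python
# Illustrative scale-out logic: when measured load crosses the capacity of
# the current load balancers, ask the controller for more instances.
CONNECTIONS_PER_LB = 10_000   # assumed capacity of one virtual load balancer

def spawn_instance(role: str, hypervisor: str) -> None:
    # Placeholder for a northbound controller call.
    print(f"controller: instantiate {role} on {hypervisor}")

def rescale(current_lbs: int, active_connections: int) -> int:
    needed = -(-active_connections // CONNECTIONS_PER_LB)   # ceiling division
    for i in range(current_lbs, needed):
        spawn_instance("vLoadBalancer", hypervisor=f"hv-{i + 1:02d}")
    return max(current_lbs, needed)

lbs = rescale(current_lbs=1, active_connections=35_000)
print(f"now running {lbs} load balancers")
```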

Something else is going to happen to make this environment really more useful, and that's going to be a change in our applications. As we write new applications to provide web services, they're no longer going to look like the old app that sat all in one server, or a server with one instance running Apache for my web front end and another instance running a database. They're actually going to be federated across a lot of these virtual machines sitting on top of hypervisors, in such a way that if one dies, the others can take over.

Where is SevOne in this infrastructure? I'm going to draw SevOne right in the middle. SevOne's job is to monitor the performance of this entire environment. At the very bottom, we have a legacy physical network. That physical network might be made up of old switches and routers. It may be built in a traditional L2-L3 design, or, if we're talking about software defined data centers where we get to redefine everything, it may be a newer leaf-and-spine architecture. It doesn't matter either way; you're going to have some physical gear that you still have to rack. That's going to be a whole lot of x86 servers for the hypervisors, along with some networking gear that's going to allow us to pass packets.

This infrastructure will then be enabled, configured and controlled by some level of controller, as I mentioned, whether it's Contrail, Nuage, or ACI from Cisco. That gives us the ability to build out these applications. SevOne is going to use multiple different technologies to monitor this environment.

The first is our own set of APIs that do a couple of things. First, they're going to get real-time inventory. It only took me seconds to spin up this whole new network, so we're going to want to know in seconds that we can monitor it. We're going to have the inventory here. The next thing that's going to happen is, as the next person comes along and spins up more of this, the topology of the virtual network is going to change. We want to make sure we capture that in real time as well. We're going to deliver a topology, and an understanding of the performance of this service chain.
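As an illustration only, a discovery loop of this kind could look like the following; the inventory endpoint and field names are hypothetical, not SevOne's or any controller's actual API.

```python
# Illustration: poll a controller's inventory and diff it, so that virtual
# devices spun up seconds ago become monitorable objects right away.
import time
import requests

CONTROLLER = "https://sdn-controller.example.com"   # hypothetical address

def fetch_inventory() -> dict:
    resp = requests.get(f"{CONTROLLER}/northbound/v1/inventory", timeout=30)
    resp.raise_for_status()
    # Hypothetical shape: {"devices": [{"name": ..., "type": ...}, ...]}
    return {item["name"]: item["type"] for item in resp.json()["devices"]}

known: dict = {}
while True:
    current = fetch_inventory()
    for name in current.keys() - known.keys():
        print(f"new virtual device: {name} ({current[name]}) -- start monitoring")
    for name in known.keys() - current.keys():
        print(f"virtual device gone: {name} -- stop monitoring")
    known = current
    time.sleep(10)   # re-check every few seconds so the topology stays current
```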

One thing to note is that these controllers, whether provided by Juniper or Cisco or whomever, actually have a lot of performance statistics built into them already. Those performance statistics typically stop at the health of the hypervisor. They understand the commands that they've given to change things within this service chain, but they typically stop there. They're rarely giving you VM-level, virtual device, or physical device metrics to go along with the controller health or the hypervisor health.

That's where SevOne comes in. We've got an inventory and a topology. Now we can take all of the monitoring tools that we've been providing for years, and monitor this network as well. We might just go straight to the firewall with SNMP and get health metrics, like CPU usage, the number of rules, rules per second, or whatever the flows per second through it might be.
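For example, a minimal SNMP poll of a (placeholder) firewall might look like this, using the pysnmp library and standard MIB-2 objects; in practice you would also poll the vendor's firewall MIB for rule and flow counters.

```python
# Minimal SNMP poll of a placeholder firewall (pysnmp, classic synchronous hlapi).
from pysnmp.hlapi import (
    SnmpEngine, CommunityData, UdpTransportTarget, ContextData,
    ObjectType, ObjectIdentity, getCmd,
)

FIREWALL = "192.0.2.10"    # placeholder management address
COMMUNITY = "public"       # placeholder SNMP community string

error_indication, error_status, _, var_binds = next(
    getCmd(
        SnmpEngine(),
        CommunityData(COMMUNITY, mpModel=1),              # SNMPv2c
        UdpTransportTarget((FIREWALL, 161), timeout=2),
        ContextData(),
        ObjectType(ObjectIdentity("1.3.6.1.2.1.1.3.0")),       # sysUpTime
        ObjectType(ObjectIdentity("1.3.6.1.2.1.2.2.1.10.1")),  # ifInOctets.1
    )
)

if error_indication or error_status:
    print("SNMP poll failed:", error_indication or error_status.prettyPrint())
else:
    for oid, value in var_binds:
        print(f"{oid.prettyPrint()} = {value.prettyPrint()}")
```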

Speaking of flows, another technology would be taking NetFlow in from the physical network. Once we understand who's talking to whom on what we call the "underlay," and we understand the logical topology, which is the "overlay," we'll understand the volume of traffic and the ratio of our physical, or underlay, traffic to our overlay traffic.
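As a toy illustration of that ratio, the fabricated flow records below are tagged with a VXLAN VNI when they belong to the overlay; real exported flows would carry the outer underlay endpoints plus tunnel metadata.

```python
# Toy underlay/overlay accounting over fabricated flow records.
flows = [
    {"src": "10.0.0.1", "dst": "10.0.0.2", "bytes": 9_000_000, "vni": None},  # pure underlay
    {"src": "10.0.0.1", "dst": "10.0.0.2", "bytes": 6_000_000, "vni": 5001},  # overlay tenant traffic
    {"src": "10.0.0.3", "dst": "10.0.0.4", "bytes": 3_000_000, "vni": 5002},
]

underlay_bytes = sum(f["bytes"] for f in flows)
overlay_bytes = sum(f["bytes"] for f in flows if f["vni"] is not None)

print(f"underlay total: {underlay_bytes} B, overlay portion: {overlay_bytes} B")
print(f"overlay/underlay ratio: {overlay_bytes / underlay_bytes:.2f}")
```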

We'll also probably take in log messages. IPSs and firewalls send out a lot of logs. We can begin to provide a way to understand the performance of this particular component within the service chain, in addition to being able to understand the health of the whole service chain.
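A minimal sketch of such a log collector, assuming syslog over UDP; a production collector would parse severities and message content, but this shows the basic flow.

```python
# Minimal UDP syslog listener that counts messages per sending device.
import socketserver
from collections import Counter

counts = Counter()

class SyslogHandler(socketserver.BaseRequestHandler):
    def handle(self):
        data, _sock = self.request                     # UDP request is (bytes, socket)
        message = data.decode(errors="replace").strip()
        counts[self.client_address[0]] += 1
        print(f"{self.client_address[0]}: {message[:120]}")

if __name__ == "__main__":
    # Port 514 usually needs elevated privileges; 5514 is a common alternative.
    with socketserver.UDPServer(("0.0.0.0", 5514), SyslogHandler) as server:
        server.serve_forever()
```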

Lastly, we're always going to have these edge devices, whether it's an MX router from Juniper or a Cisco router; there's always going to be a connection to other services. It's quite possible that our service chain is actually going to extend all the way into the Cloud. I might have some of my servers that are serving up this very popular website in my data center, in my private Cloud. I might actually expand that, so that even though it's the same website, some of the web traffic is actually served up from AWS or Microsoft Azure. SevOne has the ability to monitor those Cloud services as well, along with the physical devices. The edge device is probably going to be a rack-and-stack router, again monitored with SNMP, and we'll also monitor any non-virtualized services, which sit outside this virtualized domain.
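As one example of that Cloud visibility, the hedged sketch below pulls CPU utilization for a hypothetical AWS web server from CloudWatch using boto3; Azure Monitor offers an equivalent API.

```python
# Hedged sketch: fetch recent CPU utilization for one (placeholder) EC2
# instance serving part of the website, via the CloudWatch API.
import datetime
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
end = datetime.datetime.now(datetime.timezone.utc)
start = end - datetime.timedelta(hours=1)

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder instance
    StartTime=start,
    EndTime=end,
    Period=300,
    Statistics=["Average"],
)

for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], f'{point["Average"]:.1f}%')
```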

The other point I'd like to make is that this is not going to happen overnight. There's going to be a lot of physical legacy network monitoring that still needs to go on, but you need a performance-monitoring platform that is going to take you from the old world of SNMP and fixed switches into the new world of dynamic provisioning of network function virtualization, such as firewalls, load balancers, IPSs, etc., and be able to provide a single dashboard with those sorts of metrics. You can envision a dashboard that updates in real time: it might have firewall statistics, and maybe those are from SNMP; it might have IPS statistics, and those could be coming from log messages that came into the box; and we'd be collecting flow so we understand who's talking to whom, how much, and what ports and protocols are being used within the environment.

We do all of that from both a virtualized perspective and a physical perspective, using technologies like SNMP, IP SLA, and other capabilities of the SevOne monitoring solution, to give an end-to-end picture of the performance of your software defined network.

I'd like to thank you for taking the time to watch this white board presentation on software defined everything and on SevOne's performance monitoring within that realm. Thanks.