Managing Network Performance Analytics


Join Dave Hegenbarth, SE Director of Global Strategic Alliance, as he discusses the capabilities and the extent of SevOne's live dashboard. Learn how businesses can easily manage network performance and track specific analytics through SevOne's all-in-one appliance in this live demo.


We begin to build an idea of normal. We have that idea, and you can see here that over the time period I selected, we're running at an average of 148k bits per second. The baseline during that time is 153k bits per second, so we're just below our average for these time periods. If I come in here and zoom a little closer, we can see that baseline. We see the baseline cutting right through here, and the data is a little bit on either side of it.

Another way to look at analytics is through standard deviation. We can actually show you, or visualize, the standard deviations off of the baseline data. If I graph this again, it all gets kind of wound up in here because I'm polling so frequently, so I'm going to come in and zoom in a little more. What we see is this shaded area that's three standard deviations greater than the line - three standard deviations above what the baseline would be. It actually tops out right around 315.92k bits per second, which gives you a way to visualize the data and then allows you to make some decisions on where you would like to do things like set thresholds.

Where do I want to set my threshold for alerting? Is it right after I cross 100% utilization? That's probably too late. Is it after I cross 50% utilization? Maybe, but what time of the day or week would that be relevant? That's what we use baselines and standard deviations for. I can say that when I'm three standard deviations greater than my baseline - greater than what normal is on a Friday at this time - then I need to send an alert. Or I just need to visualize that in a report that I send to people, asking: does everybody agree that if we have an event greater than three standard deviations, we should change the architecture, or something like that?
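The threshold logic being described here is simple to sketch. This is not SevOne's implementation - just a rough illustration of flagging a sample that sits more than three standard deviations above a baseline built from historical samples (all numbers are made up):

```python
import statistics

def exceeds_baseline(samples, current, n_sigma=3.0):
    """Return True if `current` is more than n_sigma standard
    deviations above the baseline (mean) of historical samples."""
    baseline = statistics.mean(samples)
    sigma = statistics.stdev(samples)
    return current > baseline + n_sigma * sigma

# Hypothetical Friday-afternoon samples in kbit/s
history = [148, 150, 152, 149, 151, 147, 153]
print(exceeds_baseline(history, 151))  # -> False (within normal)
print(exceeds_baseline(history, 320))  # -> True  (alert-worthy)
```

The same comparison works for a visual report instead of an alert: shade everything above `baseline + 3 * sigma` and let the viewer decide.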

Some other analytics we have. I'm going to switch to a slightly different interface because it will be a little easier to see. Again, I'm going to make a fairly simple report. Let's see where this guy goes. He's pretty static as well, but we can see he's running above his baseline. Another way to look at this is with what we call time over time. In this case I'm going to look at the average of yesterday against today. Maybe I'll even widen my view here to the past week. If I look at data over the past week, what we can see is we have pretty consistent day-over-day spikes. If we look at this time period right here, the solid line is my actual data. The very faint, dotted line is my data points from yesterday, or that time the day before.

What we see here is that we did have a spike that did not exist in the previous day's data. Again, we're trying to understand what normal is. I usually have spikes here and here but I didn't today, or I had a spike today that I did not have yesterday - either way, that gives us another insight into our network performance.
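That time-over-time comparison can be sketched in a few lines. Assuming two aligned lists of samples (the values and the 50% tolerance are hypothetical, not SevOne's internals), we can flag intervals where today deviates sharply from the same interval yesterday:

```python
def day_over_day_anomalies(today, yesterday, tolerance_pct=50.0):
    """Flag sample indices where today's value differs from
    yesterday's value at the same time by more than tolerance_pct."""
    flags = []
    for i, (now, prev) in enumerate(zip(today, yesterday)):
        if prev == 0:
            continue  # avoid dividing by zero on idle intervals
        change = abs(now - prev) / prev * 100.0
        if change > tolerance_pct:
            flags.append(i)
    return flags

yesterday = [100, 110, 105, 500, 120]  # kbit/s at each interval
today     = [102, 240, 104, 130, 118]  # new spike at 1; missing spike at 3
print(day_over_day_anomalies(today, yesterday))  # -> [1, 3]
```

Note that the check is symmetric: it catches both a spike that appeared today and one that usually happens but didn't.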

The other thing is the what-if. Just because my circuit ran like this in the past week, how is it going to run next week? We have the ability to do some projections around that. We can set the number of days to project this out. Maybe I want to go back a week and project a week. Basically my circuit's been running here. This is my real data, and then this line is my projection on out, which also shows in the table. It says look, I've been running 190k bits per second on average, and I peaked at some point. My projection seven days out is 191.20. It gives us an idea: do I need to change the characteristics of the circuit? In this case it's pretty steady consumption, but we do get the idea that it's slightly rising.

We have a couple different ways of doing that. Not that I'm the world's greatest mathematician, but we can use any of these four equations - linear, exponential, logarithmic, or power. Linear is good for things that change at a constant rate, so if I'm monitoring temperature or something fairly constant like that, we have good confidence that a linear equation will actually show us a pretty good projection. If I have things that are more varied in nature, like CPU performance that goes up and down repeatedly, I may want to go to a logarithmic equation to smooth that curve and still get a pretty accurate projection of where it will be.
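For illustration, all four projection models can be fit with ordinary least squares on the raw or log-transformed data. This is a generic sketch of the technique, not SevOne's actual code, and the sample circuit data is invented:

```python
import math

def ols(xs, ys):
    """Ordinary least squares fit y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def project(xs, ys, x_future, model="linear"):
    """Project a metric forward using one of four growth models."""
    if model == "linear":        # y = a + b*x
        a, b = ols(xs, ys)
        return a + b * x_future
    if model == "exponential":   # ln y = a + b*x
        a, b = ols(xs, [math.log(y) for y in ys])
        return math.exp(a + b * x_future)
    if model == "logarithmic":   # y = a + b*ln x
        a, b = ols([math.log(x) for x in xs], ys)
        return a + b * math.log(x_future)
    if model == "power":         # ln y = a + b*ln x
        a, b = ols([math.log(x) for x in xs],
                   [math.log(y) for y in ys])
        return math.exp(a + b * math.log(x_future))
    raise ValueError(f"unknown model: {model}")

# One week of invented daily averages in kbit/s, projected a week out
days = [1, 2, 3, 4, 5, 6, 7]
kbps = [188, 189, 190, 190, 191, 191, 192]
print(round(project(days, kbps, 14, "linear"), 1))  # -> 196.2
```

On steady data like this all four models agree closely; they diverge when the history is spiky, which is why the choice of equation matters.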

As you can see, this is a configurable number, so we can take this to 30 days. We can project out and say, based on the traffic we've seen, where we think we're going. Here it almost begins to trend down if we take a wider view of where we're going. If we switch this from exponential to logarithmic, I think we'll see pretty much the same results. It will incorporate a little more of the spike data and say that we may trend upwards toward 157. In any case, on a circuit whose top speed is 1.54 meg, 156k on average probably is not a big deal. In looking at that, though, we do have some spikes.

We also have the concept of work hours. This may be relevant. You may have an environment that runs 7 by 24, in which case you want to look at every day, all day long. You may have more of an office environment where people come in, do their job, and go home. You can specify those hours. Now we see when we do this that our values change - not dramatically, but they do change, upwards of 214 if all I do is take into account the hours that people are working. I can take all of this to understand that this circuit has been performing normally over a pretty wide period of time. It's not going to run out of bandwidth any time soon, and I can confirm that with either a 7 by 24 view or a work hours view, which in this case was the system default: 9 a.m. to 5 p.m., Monday through Friday.
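The work-hours view amounts to filtering samples by timestamp before computing any statistics. A minimal sketch, assuming timestamped (time, value) samples and the 9-to-5 weekday default mentioned above (the sample data is invented):

```python
from datetime import datetime

def work_hours_only(samples, start_hour=9, end_hour=17,
                    weekdays_only=True):
    """Keep only the values whose timestamps fall inside the
    configured work hours (default 9 a.m.-5 p.m., Mon-Fri)."""
    kept = []
    for ts, value in samples:
        if weekdays_only and ts.weekday() >= 5:  # Sat=5, Sun=6
            continue
        if start_hour <= ts.hour < end_hour:
            kept.append(value)
    return kept

samples = [
    (datetime(2014, 6, 2, 10, 0), 214),  # Monday 10:00  -> kept
    (datetime(2014, 6, 2, 22, 0), 40),   # Monday 22:00  -> dropped
    (datetime(2014, 6, 7, 11, 0), 190),  # Saturday      -> dropped
]
print(work_hours_only(samples))  # -> [214]
```

Averages and baselines computed over the filtered list will naturally run higher than a 7-by-24 view, since the idle overnight and weekend samples are excluded.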

Lastly, sometimes people are interested in the 95th percentile - the value below which 95% of the samples fall, with the top 5% discarded. Let's turn this off and re-run. What we'll do is draw a dotted line through there at the 95th percentile. It will also show up here. Across this period of time, our 95th percentile utilization was about 199k - coming up on just about a megabit of traffic. Again, on a 1.54 meg circuit we're pretty sure that we have plenty of room to grow yet in this circuit, given the way it has performed.
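The classic way to compute this, familiar from burstable billing, is to sort the samples and discard the top 5%. A rough sketch (the sample values are invented, and real implementations vary in how they interpolate):

```python
def percentile_95(samples):
    """Sort the samples and return the value below which 95% of
    them fall - i.e., the top 5% of samples are discarded."""
    ordered = sorted(samples)
    # index of the 95th-percentile sample (nearest-rank style)
    k = max(0, int(len(ordered) * 0.95) - 1)
    return ordered[k]

# 20 five-minute samples in kbit/s with a couple of spikes;
# the 950 outlier lands in the discarded top 5%
samples = [150] * 18 + [199, 950]
print(percentile_95(samples))  # -> 199
```

This is why the 95th percentile is useful for sizing: one brief spike to 950k doesn't change the answer, which stays near the circuit's sustained level.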

I'm going to go back to my other circuit there for a second. One of the other analytics is to understand something about the composition of the traffic. If I select the in-bytes metric for this circuit, we're going to go back to that view of in-byte traffic. Maybe we'll take a look at it just in terms of who has used it in the last eight hours. If I draw a graph - again, 20-second data, 8 hours of it - we get a pretty granular representation. A couple of other buttons that we have to work from: the first is NBAR. What's the composite nature of the traffic? We can see that on this circuit it's largely SNMP, followed by unknown. This relies on the Cisco router and the PDLM modules for NBAR to identify the traffic as it crosses this particular interface.

We can see that's followed by HTTP, some H.323 voice traffic, some EIGRP routing, DNS, and SSH. We've got a way of understanding which protocols are in use and the breakdown of that utilization. That's with NBAR.

We can also take a look at the composition of the traffic using NetFlow. Since we have NetFlow accounting turned on for this particular interface, and being sent to that one appliance, I can click on the NetFlow button and, for that same period of time, get an overview of the traffic that's crossing that link. By default we come into a top talkers view, so we can see which hosts are generating the majority of the traffic on that link. We then have the ability to drill down into those hosts to say, well, I wonder who they're talking to.

Then we create a graph that says hey, you know what? This guy was talking to that guy for this volume of traffic, or this number of packets, over that eight-hour period. We can also continue to drill down and say, I wonder what they were talking about. Again, I can drill down into that conversation and say, well, this guy was talking to that guy and it was ICMP for some period of time. In the case we were looking at, this guy's actually talking to that guy from port 3000 to a high port. Again, over the eight hours they consumed 2.8 Mb, or 250 packets, across that time frame. It really gives us a way to look at the volume of traffic, the makeup of the traffic, and the understanding of normal. This could also be re-drawn as a percentage - we've been doing a lot of engineering things in bits here, but I can also see my utilization as a percentage of the volume of traffic. Given the ability to hold raw data back as far as a year, we can build a pretty good understanding of how the network is working.
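The top-talkers rollup is just an aggregation over flow records by source host. A minimal sketch with hypothetical flow records (the field names and IPs are invented for illustration, not SevOne's schema):

```python
from collections import Counter

def top_talkers(flows, n=3):
    """Aggregate NetFlow-style records by source host and return
    the n hosts with the most bytes, largest first."""
    totals = Counter()
    for flow in flows:
        totals[flow["src"]] += flow["bytes"]
    return totals.most_common(n)

# Hypothetical flow records exported for one interface
flows = [
    {"src": "10.0.0.5",  "dst": "10.0.0.9", "port": 3000, "bytes": 1_800_000},
    {"src": "10.0.0.5",  "dst": "10.0.0.9", "port": 3000, "bytes": 1_000_000},
    {"src": "10.0.0.7",  "dst": "10.0.0.9", "port": 80,   "bytes": 600_000},
    {"src": "10.0.0.12", "dst": "10.0.0.5", "port": 53,   "bytes": 40_000},
]
print(top_talkers(flows, n=2))
# -> [('10.0.0.5', 2800000), ('10.0.0.7', 600000)]
```

Each drill-down in the demo is the same idea with a different grouping key: group by (src, dst) for conversations, then by (src, dst, port, protocol) for the individual flows.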

Lastly, this has all been largely about a particular interface, but we can do the same thing with our TopN reporting across all kinds of different interfaces. This is basically a stack rank based on interface utilization for today. We can do some similar things in the graph. We can include things like yesterday, and maybe the past month. I re-run this report and I get my top 10 interfaces over the past 30 days, yesterday, and today. I can start to see trends here. We can also do this in a projection mode and say, show me who's going to be my most utilized interface 30 days from today.

Just that quickly, I get a report of who I should be looking at next. This guy yesterday was at .01. Over the past 30 days he was at .03%, but his rate of change indicates that in 30 days he'll be at 47%, and in 60 or 90 days probably even beyond that. Maybe this is a device that we want to watch. All of this can be rolled up into dashboards. These dashboards can be self-refreshing and can tell a story about how the network is performing.

With that I'm going to open up the line for questions. I think we have six or seven minutes here and after that we can go from there. You've all been unmuted. If you have a question, if you would just say your name, and then your question, and we can talk about it.

I'll throw a question out there. Was this what folks expected to see? Is this the kind of analytics that you do today or the kind of analytics anyone was looking for moving forward?

Questions on any of the calendaring, or anything else that we showed today?

Can anyone hear me?

Hi this is Paul, I just wanted to confirm that yeah, definitely can hear you. I think everything that you covered so far is good for me. I'm really just trying to learn a little bit more about what you do. I think you covered it very well so far, thank you.

Great. Anyone else? Questions? Comments? All right guys, well that was pretty much what I had for today in Demo With Dave. You can find us on the website. Thanks, everybody, for attending.