
Reducing MTTR and SLA Violations With Network Automation

Learn how network automation can reduce Mean Time To Repair (MTTR)

This webinar outlines the role network automation can play in reducing MTTR and SLA violations. We’ll discuss how to solve the four key challenges that slow incident response, driving up MTTR and increasing SLA violations:

  • Increasing visibility for first responders to solve the problem
  • Automating the manual and time-consuming process of data collection
  • Improving collaboration between teams to speed up resolution
  • Addressing network problems that disappear before the first response

By watching this webinar you’ll learn how network automation can help you easily address these challenges, allowing you to respond quickly to incidents and reduce MTTR.

Priyank: Thank you for joining us. Today we are going to be talking about the role network automation can play to reduce mean time to repair and SLA violations. My name is Priyank Savla, I’m the Director of Digital Marketing here at NetBrain. I’m accompanied by Ray Belleville, our director of solutions architecture who will be running some live demos for us later on. I want to encourage all of you to engage with us throughout the presentation. If you have any questions or comments please type them in your WebEx chat window.

Let’s start this webcast by reminding ourselves that no organization is immune to outages. Earlier this year Amazon suffered a massive outage because of one human error. Other notable outages this year hit Facebook and Delta Airlines, where 280 flights had to be cancelled. Let’s see how these outages impact the bottom line. According to an IHS study, $700 billion per year is lost due to outages. As applications running on the network become more critical, there is even more emphasis on the reliability of infrastructure and the need for lower mean time to repair.

A couple of weeks back, when I was working on this presentation, I asked our fellow network pros on Reddit about their goals for uptime and mean time to repair for 2017. Based on the responses I received, it is quite clear that there is more pressure than ever to attain uptime of over 5 9’s. So let’s take a look at what this means. An uptime requirement of 5 9’s means that over the entire year, total network unavailability must stay under roughly five minutes and 15 seconds. This is not an easy task. When most network teams are asked how they plan to achieve their uptime goals, the common answer is to build redundancy. Billions of dollars are spent on making networks fault tolerant, but it is not enough, or else companies like Facebook and Amazon would not have had these outages.
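The arithmetic behind those availability targets is easy to sketch (assuming a 365-day year):

```python
# Downtime budgets implied by common availability targets.
# "Five nines" leaves only about 5 minutes 15 seconds per year.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_budget_minutes(availability: float) -> float:
    """Return the yearly downtime budget, in minutes, for a given availability."""
    return MINUTES_PER_YEAR * (1 - availability)

for nines, availability in [(3, 0.999), (4, 0.9999), (5, 0.99999)]:
    budget = downtime_budget_minutes(availability)
    print(f"{nines} nines: {budget:.2f} minutes/year")
```

With 4 to 5 outages a year, the five-nines budget works out to roughly a minute of downtime per outage, which is the constraint Priyank describes next.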

To understand how network teams fare against these ambitious uptime and mean time to repair goals, we recently conducted a survey among network engineers and managers at major enterprises. It’s called the “State of the Network Engineer,” and is available for download on our website. Throughout this presentation we will be sharing insights gained from this survey.

Let’s start with the frequency of network outages. From this data, one can infer that over 90% of organizations suffer network outages more than four to five times per year. To achieve 5 9’s, each of these outages must be repaired in under a minute. Let’s see where enterprises stand today. Over 80% have a repair time of over one hour per outage. This means that enterprises aren’t even able to attain 4 9’s today.

Let’s take a look at the life cycle of a network outage to see what can be improved. When there is an event, it is usually detected through monitoring software, and the first responders try to understand the problem. In this process they rely either on network diagrams or on their own memory to find the source of the issue, and once it’s found they mostly use the CLI to fix it. When they are unable to resolve an issue, it is escalated to senior levels until resolution. In this cycle, monitoring tools are already automating the detection of issues. However, there is an incredible amount of waste between the detection and resolution phases, where so much time is spent collecting, analyzing, and visualizing data. And this is where NetBrain automation comes in.

Let’s take a look at four key challenges between detection and resolution. Our survey uncovered that engineers troubleshooting network issues simply don’t have the visibility to quickly solve them. Data collection today is extremely manual and time-consuming through the CLI. Lack of collaboration between teams often delays resolution, and occasionally, problems disappear before the first response is even executed. Network diagrams serve as important tools to uncover the source of the problem, yet about 43% of engineers don’t even have up-to-date diagrams to solve the problem at hand. Why? Because network documentation is a time-consuming project.

The majority of companies take over a month to completely document their network, and once they are done, the network has changed and these diagrams are obsolete almost immediately. Manual network documentation is simply not sustainable, yet only 6.5% of network teams have tried automating network documentation. NetBrain helps solve this problem with dynamic network mapping. With dynamic mapping you can map any part of your network from live network data, so you always have up-to-date network maps to solve the problem at hand.

Say you are troubleshooting a slow application. You need the map of that specific traffic path. Even those who are really diligent about keeping their networks well-documented may not have every path combination documented.

Let me introduce you to my colleague, Ray, who will now show us how easy it is to build dynamic maps. Ray, over to you.

Ray: All right, thank you, Priyank. Before we get into the front end of NetBrain, I want to take a quick look under the covers at the back end. There’s a really important function within NetBrain that empowers the network engineer to reduce their MTTR and meet their SLA targets, and that’s the scheduled task function. We have two basic functions. One is a discover task, and this is very self-explanatory: we’re going to use it to discover new devices on our network. But the power comes from the benchmark task. If I look at it, I have the ability to set the frequency of when this benchmark will occur: once, hourly, daily, weekly, or monthly. I can define the scope of devices so that it will run on either all the devices in my network, or a predefined device group, which might be, as an example, all the firewalls running a certain operating system, or all my BGP devices, or I can do it on sites. A site is usually a geographical allocation, where I have all of my sites around the world.

Let’s say the one in Australia: I want to benchmark it at a different time than the one in Boston, Massachusetts in the United States. Okay, so now that I have the scope defined, I need to define what type of data I’m going to collect, and this is really where the power starts coming into play. By default I’m going to collect things like the configuration file, route table, ARP and MAC tables, NDP, spanning tree, your BGP advertised routes, and some inventory information. But I can go further than that: I can look at what we call NCT tables that help build paths, things like your MPLS LFIBs, or your NAT tables, or, if you have load balancers, virtual server tables. I can go even further than that and look at information like neighbor interface details via LLDP. If I have VDCs in the Nexus world I can benchmark their states. NetFlow, zone tables for my firewalls.

So why am I showing you all this? Ultimately, when you’re in a troubleshooting scenario access to information is going to be key and NetBrain is going to give you as much historical information as possible.

Another thing that I really like about NetBrain: most engineers think in the CLI world. We know show commands; that’s how we think and that’s how we operate on a day-to-day basis. Say I’m troubleshooting and I use “show ip bgp summary” all the time. Wouldn’t it be nice to know what the result of that command was yesterday, when things were working, and compare it to the state I have today? Well, that’s what adding this command to the benchmark function will allow me to do. You can keep adding your own; NetBrain also has a bunch of templates that I can just go through, select, and have added. So now you see all of these commands will be executed on every device that’s in scope at the time that I set, and I can use this data later on.
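The yesterday-versus-today comparison Ray describes can be sketched outside NetBrain with a plain text diff. The two captures below are invented sample output, not real device data, and this is not NetBrain code, just the underlying idea:

```python
# Compare a benchmarked "show ip bgp summary" capture against today's
# output to spot neighbors whose state or prefix counts changed.
import difflib

yesterday = """\
Neighbor        V    AS  Up/Down   State/PfxRcd
10.0.0.2        4 64550  4d12h     125
10.0.0.6        4 64554  4d12h     87
"""

today = """\
Neighbor        V    AS  Up/Down   State/PfxRcd
10.0.0.2        4 64550  4d12h     125
10.0.0.6        4 64554  00:10:02  Idle
"""

# unified_diff emits only the lines that differ between the captures
for line in difflib.unified_diff(
        yesterday.splitlines(), today.splitlines(),
        fromfile="benchmark", tofile="live", lineterm=""):
    print(line)
```

The diff immediately surfaces the neighbor that dropped to Idle, which is exactly the signal you want when asking "what changed since things worked?"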

One of the values of collecting this is that now I can rebuild the topology, so this digital twin of your network is actually up-to-date, because I’ve collected all the information required to build these topologies. I’m going to show you how we use the topology to build maps on-demand in a second, but I just wanted to show you that this is one of the functions we can perform. The second is that we can recalculate those device groups and sites. These definitions are ones that you, the user, define. As an example, take the firewalls with a certain operating system version: if a new firewall gets discovered, I want to ensure that it’s added to that device group, so that any subsequent benchmarks that collect that data will include collecting it off the new device. The same thing goes for your sites, okay?
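The device-group recalculation works like a stored query over the inventory. A minimal sketch of that idea, with invented device names and fields rather than NetBrain’s actual data model:

```python
# A "dynamic" device group is a predicate, not a static list, so a
# newly discovered device that matches is picked up automatically.
inventory = [
    {"name": "fw-bos-01", "type": "firewall", "os": "9.1"},
    {"name": "rtr-syd-01", "type": "router",   "os": "15.2"},
    {"name": "fw-syd-02", "type": "firewall", "os": "9.1"},  # newly discovered
]

def recalc_group(devices, dtype, os_version):
    """Re-evaluate group membership against the current inventory."""
    return [d["name"] for d in devices
            if d["type"] == dtype and d["os"] == os_version]

group = recalc_group(inventory, "firewall", "9.1")
print(group)  # → ['fw-bos-01', 'fw-syd-02']
```

Because membership is recomputed on a schedule, the next benchmark run automatically collects data from fw-syd-02 without anyone editing the group by hand.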

And the next thing I want to look at is these data views. Data views are a really powerful function within the NetBrain 7.0 platform, and what they do is pre-decode information from your network or third-party applications. The power here is that while I’m going through a troubleshooting exercise, I probably don’t want to have to look up all of the routing information for every device on this map or in my problem area, and do that in real time. I really want to know the results so I can make a decision. So NetBrain allows us to add these data views, and you define what they are; then, when I need that information, I just click on the data view and the information is layered on top of my existing map. I’m maintaining the context of the problem area and I’m using the information instantly. That’s all done for me, so I have a running start on almost everything that I do.

And then the last major area here is updating maps. We might add new devices to sites or device groups, and we want to make sure the associated maps are up-to-date, so when I open and use them they’re already ready for me. Again, a huge time saving. And to take it even further, if I have a requirement to create Visios of these to track the history or whatnot, I can do that. Click this button and now I have a folder that will hold all of my Visios, and if I want to keep them for however long I want, I can click this button and now each Visio map will be timestamped. So I have a full version history of every site map, device map, or public map that’s ever been created, for as long as I want.
How powerful is that? So without further ado, let’s go and see how I’m actually going to use this information. I’m going to go to the front end and start with two different scenarios.

One is that I’ve received an alarm on a particular device, and the second is that I have an application slowness and I know the source and destination, so there are two ways to start the visualization process. And the thing I want to get across here is that a lot of companies, for the last three years, have been building these all-purpose maps, and when you go to use them, almost all the time they’re either out of date, or there’s so much information on them that you can’t really get the information you need, or there’s just so much clutter: devices that aren’t relevant to what you need to do. So what NetBrain is going to do is really help me get to the root of what I need, start me from a nice clean canvas where I have nothing, and get me really quickly to the problem area. We’re focusing on MTTR and SLA, but this can be used for any function that you need to do. So very simply, NetBrain has a search bar, and I’ve already populated the name of the device. I click search and NetBrain has found the one instance of this device. Perfect. Click map and now I have the first device on my map.

What would probably be of interest to me when I’m doing a troubleshooting exercise is: what are the impacted devices around this single device that alerted? Because, one, I might be able to get information from those devices about the problem, and two, I need to know the scope of the impact. So I click this plus button and I can very simply extend all the neighbors: IPv4, my layer 2, or IPv6. In this case I’m just going to do a quick layer 3 IPv4, and just like that I have my first map of a problem area. Now, just think back to how long this would take you if you needed to create this type of map or get this information for the troubleshooting exercise you’re currently in, right? We’ve done it in seconds. It would take you way longer: you’d need access to each of these devices. Do you have the passwords? Which commands do you need to run to pull this information, to know which interfaces and neighbors are connected?

So I also mentioned data views in the benchmark, right? I have this instant-on information, so let me show you how that works. I click the Data View button and NetBrain presents the data views that are relevant to the devices on the map. Any one of these that I select will actually do something on the map. Again, we really want you to focus on the information that’s available so you can get a running start at everything you do. Let’s look at BGP; say I’m going to need to know this at some point in my troubleshooting exercise. With the click of a button I instantly have all the information I need about BGP, and I can customize this with as much information as I want. But in this case, let’s see what I have. I’ve got the full BGP configuration, and I can look at that nice and easy; I don’t have to search through the configuration at all to find it, it’s just there. I can see the router ID, I know the autonomous system, and I’ve got a color-coding system here that tells me this device is in a different autonomous system than these two. If I really want to know about those autonomous systems, lime green is 64550 and orange is 64554. Very useful information. I click on this button and I can see the BGP neighbors. Interestingly, this one has only been up for 10 minutes; that might indicate where a problem has occurred, so I’m getting a lot of information really quickly.

So this is all operational information from the network. What about that third-party information I talked about? Well, I have a maintenance view here that is actually pulling the serial number of the device from a CMDB, another third-party system; through an API we’re collecting this and putting it on the map. Say I need to open a ticket with my vendor: this information is now there for me. I can take this information and add even more. Say I need the support contract number or the phone number to call: I get that information at my fingertips, so I no longer have to hunt for it. It’s there for me; I’m not wasting time.

So that was example one, where I have a single device that alarmed and I’ve built a map around that device. Now, the other option, which is very common, is that I have a slow application. Often I’ll be given the source and destination address, and I prepopulated these to make things a little bit quicker, but I can use host names and I can even use subnets. I can change the direction, either one-way or bi-directional, and I have some other options here. I can build the path from a layer 2 perspective, so I’ll have layer 2 and layer 3 devices; or, if I just want layer 3, that’s there as well; and if I’m using Cisco Express Forwarding I can get the active path. Keep in mind NetBrain is going to evaluate your equal-cost multipath as well, so I can get a full view of every device that could potentially be part of this path that is behaving badly for me.

Even more importantly, I have access to the current baseline that came from the benchmark I showed you earlier. I can do it from the live network, and I can even go back to a point in time, so historical data. For the benchmark that happened two weeks ago, when I know this application was performing well, I can map the path from then and from now and see if there’s any difference. Let me show you how that works. Selecting the current baseline, I click ‘path’ and NetBrain pulls this information from the database and gives me a nice visual representation of this path.

So if I compare this to traceroute, I’m getting a ton of additional information here. Even just the basic functionality, right? I know every single device, I know the interfaces that are connecting them, and I know from a graphical perspective that this is a layer 2 switch versus a router, right? That’s information I wouldn’t get from a traceroute.

But I want to show you something a little bit deeper so you see what kind of information NetBrain is evaluating. Let’s just click this button, and I’ll show you the different steps that we can evaluate. If there’s an access list on the inbound or outbound, NetBrain can evaluate it and tell you: will this application work? Is there NATing happening? It will show that. If there’s a VRF, it will use that information and follow the VRF routing table instead of the global table. If I’m going through an IPsec VPN, I get that information, and the same goes for policy-based routing.

Traceroute will never be able to determine this for you, so you could, at the end of the day, be troubleshooting a path that has nothing to do with where your application is going. Every second you spend troubleshooting that path is really wasted time, and that could cost your organization significant amounts of money. And we go even further: we can look at MPLS and follow those tables. Again, NAT: if NAT happens, you’ll see it on the map. In this case, this is the actual step that we matched, because none of the previous steps were actually configured.

Okay, so that’s very deep, but I want to show you a real example. Looking at this path, I’ve selected L2 TCP, and I’m using a historical version of it, so I’ve picked a benchmark folder that was created on September 29th. I’ve got my protocol ports, my source and destination. I click path and NetBrain maps this out for me. Very quickly I see there’s a firewall here, and I have this sticky note. Well, what is that telling me? It’s telling me that it evaluated a policy and that the traffic was permitted. If it was denied, I would get a similar message, but it would say denied, and I’d be able to look at the policy to tell where it was denied.

If I go further over to the right, you can see I have this load balancer. Let me just look at this address here: the first is the VIP, the virtual IP. The user would not know which real device is serving that application, right? That’s the whole point of these load balancers. But when I’m troubleshooting, I need to know that information. If I did a traceroute, it would stop at the load balancer, and then I’d have to log into the load balancer and look at the translation tables to see what’s there. NetBrain takes care of that for you, and you can see the path continues back through the distribution switch, up to an access switch for one of the servers serving that application, and down to a different access switch for the other server. I could have seen all that right from the sticky note here. So very, very powerful, and again, we’re looking to give everybody a running start on all of their troubleshooting activities. With that I’m going to send it back to Priyank, who’s going to talk about where you go from here. Back to you, Priyank.

Priyank: Thanks, Ray. Once you have a clear picture of your network, performing diagnostic checks can still take a lot of time. Forty-four percent of our survey respondents mentioned that troubleshooting one device at a time through the CLI is holding them back, yet only 4% of network teams automate diagnoses. Most teams are still relying on trial-and-error CLI methods. We will go back to Ray to see how network automation can help save valuable time while diagnosing network problems. Over to you, Ray.

Ray: All right, thank you, Priyank. What I want to demonstrate now is how you can take those manual tasks that you perform in your day-to-day activities and apply some automation, so you can collect, analyze, and visualize in a much more efficient way and really collapse the time down to almost nothing. The first thing I always do when I have a problem is run the overall health monitor. What this does is log into all these devices and give me a visual representation of all the data, a heat map, so I can pinpoint where problems might be occurring on this path. And as you can see, in a fraction of the time I’m able to see visually that all of these devices and interfaces are green. If I zoom in a little bit, I can see that the CPU and memory of these devices are all in good shape, and that I have no congestion on my interfaces. So I’ve just eliminated, one, a bunch of work, and two, a bunch of the top causes that would really make me focus in on a certain area. That’s good; I haven’t wasted a lot of time on the normal stuff, so let’s dive a little bit further.

The next thing most people are going to do is start running CLI commands on these devices. NetBrain has a little different take on it, and we call it the Instant Qapp. What this does is take CLI commands and break all of the little portions into variables, and I can just select the ones that are of interest to me, drag them over to the map, and drop them. Now NetBrain goes and collects this information and puts it on the map for me. You can see that as I hover over these different variables I get a description, which gives me some guidance on what these variables mean, so I can be as verbose in there as I want. And now that this is done, you can see all of those attributes are on the map; everything’s done for me very quickly. But what hasn’t been done is any analysis on this data. So I’m going to fix that right now: I’m going to put an alert on these input drops. And I’m going to say, if it’s not zero, then create a warning and put up an alert message that input drops are not equal to zero, okay?

It’s a very simple check, and now I’m going to rerun the data, and I can do it two different ways. I can run the data once with that alert applied, or I can turn it into a monitor, and then any time that value is not zero I’ll get some sort of visual representation that I have a problem. And again, within a few seconds NetBrain has pulled that information and applied some analysis, and you can see very quickly my eyes are drawn to the fact that I have some input drops in this segment of the network. You can see right beside it I also have some input errors, so in real life I would apply alerts to every one of these, pretty much saying, if they’re not zero, set an alert. Okay, so that’s great if I need to do this one time, or I just need a very quick way to get at some information, but this is something I might want to do on a frequent basis, so NetBrain has the ability to convert this Instant Qapp into a full-blown Qapp. I’m going to save it as “reduce MTTR” for the time being, and I want to show you how this looks.
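The alert logic Ray configures (warn when input drops are non-zero) amounts to a simple threshold check over parsed counters. A sketch with made-up counter values, not actual Qapp code:

```python
# Flag any interface whose error counters are non-zero, mirroring the
# "input drops != 0 -> warning" rule built in the Instant Qapp demo.
interfaces = {
    "GigabitEthernet0/1": {"input_drops": 0,  "input_errors": 0},
    "GigabitEthernet0/2": {"input_drops": 57, "input_errors": 3},
}

def check_counters(counters, fields=("input_drops", "input_errors")):
    """Return a warning message for every non-zero counter."""
    alerts = []
    for ifname, stats in counters.items():
        for field in fields:
            if stats.get(field, 0) != 0:
                alerts.append(f"WARNING {ifname}: {field} = {stats[field]}")
    return alerts

for alert in check_counters(interfaces):
    print(alert)
```

Run once, this is the one-shot check; run on a timer against freshly collected counters, it becomes the monitor Ray describes.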

So I’m going to go back to my default data view and run the Qapp, but instead of running my overall health monitor, this time I’m going to run the Qapp that I just built. It’s called “reduce MTTR”. I click run and let NetBrain go and do what it does best. Within a very short time NetBrain has collected that information and I have the exact result I had before. How cool is that? And now I can take this Qapp and share it with my colleagues, and they can benefit from the automation I’ve created. It took me 30 seconds to create, and now I have added some automation to my entire organization. Very, very quick and very powerful.

And the next thing I might want to do is compare different data sets; we talked about this when we were looking at the benchmark in the first demo. The power here is that I can select two different data points. I’ve got this benchmark from October 13th, and I can search the available data to compare. What this is telling me is that all of these different bits of information are in both the current baseline and this basic benchmark. So I’m going to just do a quick select-all to show you how fast this is.

Within seconds NetBrain has analyzed all those different data sets and told me the things that have changed. In this case I’ve got some routes that have been added to PE3600 X02, and if I look at PE3600 X01 I can see that some CLI commands have changed. In this case nothing much has changed except for some timers. When you consider that over 50% of all issues are caused by someone making a change, this can be extremely powerful for getting you to the root of the problem within seconds.

So with that I want to turn it back to Priyank, to go through the third part of this webinar.

Priyank: Diagnosing network problems on a map can save you a lot of time. But what if you don’t know how to solve that problem? In such circumstances, efficient collaboration is the key to reducing your resolution times. Let’s take a look at how teams collaborate today during an outage. Information shared during an outage can be broadly classified into two categories: troubleshooting playbooks and data related to the live issue. Many organizations create troubleshooting playbooks so that everyone on the network team is well-equipped to solve recurring problems, and when these problems can’t be solved, engineers often escalate with minimal information. The data shared is usually vague: loosely described issues and ambiguous CLI output from diagnoses already performed. I’m sure many of you have been through this process. There is a better way. To effectively share knowledge, network teams should digitize their best practices into executable runbooks.

Every step of a NetBrain runbook is a troubleshooting diagnosis made executable with the click of a button. Executing these runbooks yields only the relevant output, which is displayed directly on the map for your convenience. You don’t need to parse through hundreds of lines of CLI data that are of no use to you. These runbooks may include best practices for solving issues that occur regularly, or they can be designed to train broader teams on specific technologies such as QoS, multicast, BGP, etc. These same runbooks can be used during escalation: engineers can share them using a simple URL. This way everyone is on the same page about the problem and the analysis already performed, which minimizes the time tier two and tier three must spend repeating diagnoses performed at tier one. You can create your own runbooks easily without writing a single line of code. Let’s go back to Ray to see a demo of executable runbooks.

Ray: All right, thank you, Priyank. Up until this point I’ve only demonstrated the individual components of NetBrain that automate specific functions. But in our analysis we find that a lot of engineers need to know what to do and when, so NetBrain created what we call a runbook, which codifies your process and the tasks within it. Let me show you what that looks like. Opening it up in full view, you can see that I start with the overall health monitor. If I click on it I can see the description; it tells me a bit about what I’m trying to do, and then it tells me to follow the branches based on the results of this step. You can see I have three different branches here: one for high utilization, one for no errors, and one for high CPU. Each of them has its own unique set of steps to perform when that type of result is found, so you can see I’m really automating the decision process as well. Let me close this and show you how it works in real time.

I have my overall health monitor. I’m going to click play and let NetBrain go and do what it does best, and that’s the heavy lifting. You can see I didn’t have to find this overall health monitor in the interface; I’m just prompted and press play. Pretty much anyone can do this. I let it run for a few seconds, and I can zoom in and see my data on the map. At this point I don’t see anything going wrong: I have no congestion, and no CPU or memory alarms are being triggered. So I’ll stop this step and follow my branches. In this case I had no errors, so I select that; my first decision is made, and now I’m asked to run a different step. I click play, let NetBrain go and do what it does best, and look at the description: it checks for deeper interface errors such as collisions and CRCs, and says to follow the branches afterwards based on the results of the step.

And I’ll stop this for now, and I can see: oh, I have collisions. So I go to my branches and I see that I do have a path to follow if collisions are detected. I select it and I’m asked to check the neighbor interface status. I click run and let NetBrain go and do what it needs to do. And just like that, NetBrain has turned some interfaces red and is telling me I have duplex mismatches. In record time I’ve automated the process of solving this problem from start to finish. If a different result had been found at this level, I would go to the high interface utilization branch and follow those steps. All right? So you can see I’m codifying the process here. Now, I did find a problem, and if I go back to my no-error step and the collisions-found branch, it says to escalate to Ray Belleville if I find collisions. So how would I do that with NetBrain? Very simple. I save the map and then I click the share button. I put the user’s email address in here (and this is me, Ray Belleville) and click share. The user gets a notification that a map has been shared with them, and they’ll also receive an email that looks like this: I’ve invited you to see a map. I click MTTR demo 3, and NetBrain loads that map in exactly the same place where I was, and we can see the duplex mismatches already identified for me. How powerful is that from an escalation perspective? I didn’t have to build this map, I didn’t have to collect any data; I’m automatically brought to the point where the problem is and I can now proceed with fixing it.
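The duplex-mismatch check at the end of that runbook branch boils down to comparing both ends of each link. A sketch with hypothetical interface data, not NetBrain’s implementation:

```python
# Detect speed/duplex mismatches by comparing the two ends of each link,
# as the "check neighbor interface status" step does.
def find_mismatches(link_ends):
    """link_ends: list of ((dev, intf, speed, duplex), (dev, intf, speed, duplex))."""
    problems = []
    for a, b in link_ends:
        if a[2] != b[2] or a[3] != b[3]:  # speed or duplex disagree
            problems.append((a[:2], b[:2]))
    return problems

links = [
    (("sw1", "Gi0/1", "1000", "full"), ("sw2", "Gi0/1", "1000", "full")),
    (("sw1", "Gi0/2", "100",  "half"), ("rtr1", "Gi0/0", "100", "full")),  # mismatch
]
print(find_mismatches(links))  # → [(('sw1', 'Gi0/2'), ('rtr1', 'Gi0/0'))]
```

Flagging only the offending link pair is what lets the map turn the right interfaces red instead of dumping raw counters on the engineer.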

So with that I’m going to pass it back to Priyank to show you how we can go even further than this. There’s one more area we need to discuss, and that is first response time. Back to you, Priyank.

Priyank: Thanks, Ray. The last challenge is that occasionally a problem is gone before the troubleshooting process even starts. A common example would be intermittent issues that keep coming back and are so hard to track down, all because the first response is delayed. NetBrain’s integration capabilities can help address this challenge. First, an event detected by a third-party network management system triggers an API call to NetBrain. Then NetBrain instantly creates a map around the problem area and executes the relevant runbooks for analysis. This means the first response is automated, so you don’t miss the problem because of any delay, and when engineers arrive at the scene all the heavy lifting is done for them. Thus the time between detection and first response is minimized. We will go back to Ray for our final demo of this webcast, on API-triggered diagnosis.

Ray: All right, thank you, Priyank. So up to this point I’ve demonstrated a user driving NetBrain, but what I want to demonstrate now is how NetBrain can eliminate another delay: the time between event detection and first response. The importance of this is two-fold. First, when it comes to intermittent problems, many times in my career, and for the engineers I speak with on a daily basis, by the time we get around to diagnosing the problem the symptoms are no longer presenting themselves. This results in a certain level of dissatisfaction from the network user, because in most cases the problem will come back and we haven’t supported them effectively. Secondly, in true NetBrain fashion we want to give engineers a running start at everything they do. So let NetBrain do the heavy lifting and allow the engineer to use the information to make timely decisions.

The way NetBrain addresses these two issues is by enabling machine-to-machine communication; we call this triggered analysis. Let me show you. Up on my screen I have my ServiceNow console, and you can see I have a ticket here with a short description: LA disc two demo, response 200 milliseconds is above threshold. This ticket was generated in my ServiceNow instance by a monitoring tool. ServiceNow can take the host name of a device, send it over to NetBrain using our full RESTful API, create the map, extend all the neighbors, run a predefined runbook, and then attach a URL to the ticket. So now the user experience is: they see the ticket, they come in, they see that there’s a NetBrain map, and they simply click.
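The machine-to-machine flow Ray describes can be sketched as follows: the ticketing or monitoring tool assembles a small JSON payload, a seed device hostname plus which runbook to run, and POSTs it to NetBrain’s REST API. The endpoint URL, field names, runbook name, and hostname below are all hypothetical placeholders for illustration; the real request contract is defined by NetBrain’s API documentation, not this sketch.

```python
# Sketch of the triggered-analysis call: a ticketing system sends a
# device hostname to NetBrain, which maps the problem area, runs a
# runbook, and returns a map URL to attach to the incident ticket.
# All names below (URL, fields, runbook, hostname) are hypothetical.

import json

NETBRAIN_URL = "https://netbrain.example.com/api/v1/triggered-diagnosis"

def build_trigger_payload(hostname, runbook):
    """Assemble the JSON body the monitoring/ticketing tool would POST."""
    return {
        "device": hostname,        # seed device for the dynamic map
        "extend_neighbors": True,  # map the problem area, not just one box
        "runbook": runbook,        # diagnosis to run before anyone logs in
    }

payload = build_trigger_payload("core-sw-01", "Interface Error Diagnosis")
print(json.dumps(payload, indent=2))
# The ticketing tool would POST this payload to NETBRAIN_URL, then attach
# the map URL from the response to the ticket for the on-call engineer.
```

The design point is that the engineer’s first action is reading results, not gathering data: by the time a human opens the ticket, the map and diagnostics already exist.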

Now, instantly they have the dynamic map, all the neighbors in the problem area are defined, and I’m seeing some red errors here showing that I have CRC and input errors. I can click through the results of all the tests that have been run, and when I click on this one NetBrain shows that we have a speed and duplex mismatch on this interface. As quickly as that, I’m able to take the information and make a decision on what needs to be done. Very quickly I’m at the heart of my problem and on my way to resolving it. Now, if for some reason I didn’t have the information I needed from that pre-executed runbook, I could turn on data views and get my inventory information, for example. Just like that, I’m using NetBrain with the running start I need to become that hero in my organization. This truly is one of my favorite capabilities in NetBrain. There’s no other platform on the market today that provides this level of automation and integration. Even for the novice engineer, this level of automation and integration will elevate their performance, ensuring that the company meets its MTTR and SLA goals. And with that we’re going to switch back to our slides to close out this webinar and open up for Q and A.

Priyank: So this is how automation can help reduce mean time to repair and SLA violations. Dynamic maps give you visibility into the network, because you can’t solve what you can’t see. Automated diagnosis saves a crucial amount of time otherwise spent collecting and analyzing data. Executable runbooks help streamline collaboration; this saves time during escalation and helps democratize troubleshooting knowledge across the entire organization. And lastly, API-triggered diagnosis helps minimize the time to first response.

So that wraps up our presentation today. I want to thank all of you for joining us. We will end with this slide for those who are new to NetBrain. We are the creator of the world’s leading network automation platform; NetBrain has been around since 2004 and powers almost 2,000 of the world’s largest networks, including a third of the Fortune 100. If you like what you saw in our demos, you can request a live engineer-led demo for your company through our website at netbraintech.com/requestademo. We will remain on the line for the remainder of the hour to field your questions, so please keep them coming through the chat window. We will try to answer the top questions aloud over the phone so that all of you can hear them. Once again, thanks for joining us.
