NetBrain’s Network Data Model and the Foundation for Automation
June 5, 2020
In Part 1 of this overview, we examined the NetBrain Automation Success Framework as a whole. Now we will examine the elements of NetBrain that comprise Level-1 Automation Success.
Network Discovery is the process whereby a NetBrain deployment figures out which devices are present in the network and which drivers and credentials each one requires. Discovery can either be run against a specified range of IP addresses, or it can be told to “neighbor-walk” starting from a few key points. Using the latter method, the system can discover up to 7,000 network devices per hour under ideal conditions! For a full discussion of the ins and outs of Network Discovery, please refer to the video-on-demand training or live web classes in NetBrain University.
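Conceptually, the “neighbor-walk” method is a breadth-first traversal of the network: start from a few seed devices, ask each one who its neighbors are, and keep going until no new devices turn up. The sketch below is purely illustrative (it is not NetBrain’s implementation); the `get_neighbors` callable is a hypothetical stand-in for however a device reports its neighbors (CDP/LLDP tables, routing adjacencies, and so on).

```python
from collections import deque

def neighbor_walk(seed_ips, get_neighbors):
    """Breadth-first discovery starting from a few seed devices.

    `get_neighbors(ip)` is a hypothetical callable returning the
    management IPs that a device reports as its neighbors.
    """
    discovered = set(seed_ips)
    queue = deque(seed_ips)
    while queue:
        ip = queue.popleft()
        for neighbor in get_neighbors(ip):
            if neighbor not in discovered:
                discovered.add(neighbor)
                queue.append(neighbor)
    return discovered

# Toy topology: each device lists its directly connected neighbors.
topology = {
    "10.0.0.1": ["10.0.0.2", "10.0.0.3"],
    "10.0.0.2": ["10.0.0.1", "10.0.0.4"],
    "10.0.0.3": ["10.0.0.1"],
    "10.0.0.4": ["10.0.0.2"],
}
found = neighbor_walk(["10.0.0.1"], lambda ip: topology.get(ip, []))
# All four devices are reached from the single seed.
```

The appeal of this approach over a brute-force IP sweep is that the walk only touches addresses that real devices actually report, which is what makes the high per-hour discovery rates possible.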
Once our NetBrain Domain is fully discovered, we must then tell NetBrain what data we wish to pull from our network devices on a regular basis. This is done first and foremost through System Benchmarks, which allow us to not only gather data but also use it to feed our Calculated Topologies, Sites, and many other functions and features within NetBrain. In addition to System Benchmarks, we can also schedule the regular collection of data relevant to specific Parser Files and Data View Templates.
Regularly Benchmarking all of our important network data also allows NetBrain to build a history of these values over time. This history can be viewed when using a Data View Template, and also feeds the creation of our Golden Baseline (see below). For a full discussion of Benchmarking, please refer to the video-on-demand training or live web classes in NetBrain University.
There are many types of Built-In Data, which NetBrain knows how to pull from any relevant device types via the drivers for each operating system. However, this built-in set cannot cover every type of data we might need, to say nothing of data from linked network management systems. NetBrain handles this additional data by means of Parser Files.
In brief, a Parser File tells NetBrain how to perform a particular data pull (CLI, Config, SNMP, or API) and how to extract the different variables from the output. We can then use the Parser to build any type of Adaptive Automation that we need.
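To make the idea concrete, here is a minimal sketch of what “extracting variables from a data pull” means, using plain Python regular expressions against a made-up `show interface`-style CLI output. This is not NetBrain’s Parser File format, just an illustration of the underlying technique.

```python
import re

# Hypothetical CLI output from a "show interface"-style command.
cli_output = """\
GigabitEthernet0/1 is up, line protocol is up
  5 minute input rate 2000 bits/sec
GigabitEthernet0/2 is down, line protocol is down
  5 minute input rate 0 bits/sec
"""

# One named group per variable we want to extract from each block.
pattern = re.compile(
    r"(?P<intf>\S+) is (?P<status>up|down), line protocol is \S+\n"
    r"\s+5 minute input rate (?P<input_rate>\d+) bits/sec"
)

# Each match becomes a dict of variable names to extracted values.
variables = [m.groupdict() for m in pattern.finditer(cli_output)]
# variables[0] → {'intf': 'GigabitEthernet0/1', 'status': 'up', 'input_rate': '2000'}
```

A Parser File plays the same role at a higher level: it names the command to run, describes how to carve up the output, and hands the resulting variables to the rest of the automation system.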
While NetBrain comes with a huge library of Parser Files ‘out of the box,’ we may wish to use data that’s not included in this default library. Fortunately, it’s quite easy to build our own Parsers if required. The building of Parsers can be learned from either a few short videos or a live web class.
The Golden Baseline is the NetBrain deployment’s sense of what is ‘normal’ in our network environment. It consists of specific values, ranges, or conditions for specific variables on specific devices or interfaces. We can set these ‘golden values’ manually, of course, but we can also tell our NetBrain system which values are important and let it determine the golden values automatically.
With the normal state of the network established, our Data View Templates (see below) will automatically alert us whenever a value is abnormal. And as I always say: “find the delta, find the problem.” The care and feeding of the Golden Baseline can be learned from either a few short videos or a live web class.
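The two halves of this idea, learning what is normal and flagging what is not, can be sketched in a few lines. This is a hypothetical illustration of the concept, not NetBrain’s actual Golden Baseline algorithm: here “normal” is simply the historically observed range widened by a tolerance, and a delta is any current value that falls outside its golden range.

```python
def learn_golden_range(history, tolerance=0.2):
    """Derive a 'normal' range for a variable from historical samples
    by widening the observed min/max by a tolerance (illustrative only)."""
    lo, hi = min(history), max(history)
    span = (hi - lo) or 1
    return lo - tolerance * span, hi + tolerance * span

def find_deltas(golden, current):
    """Return the variables whose current value is outside its golden range."""
    return {
        name: value
        for name, value in current.items()
        if not (golden[name][0] <= value <= golden[name][1])
    }

golden = {
    "cpu_util": learn_golden_range([12, 18, 15, 20]),  # learned automatically
    "bgp_neighbors": (4, 4),                           # pinned manually
}
alerts = find_deltas(golden, {"cpu_util": 95, "bgp_neighbors": 4})
# alerts → {'cpu_util': 95}: find the delta, find the problem.
```

The design point is the same regardless of implementation: once golden values exist per variable, abnormality detection reduces to a cheap comparison that can run automatically every time fresh data is pulled.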
When presented with any networking problem—a trouble ticket, an alert from a monitoring system, et cetera—our first course of action is to gather the necessary data to ensure we have identified the root cause of the problem. We can then proceed to the correct procedure for localizing and remediating this root cause. In a NetBrain-powered world, this first course of action is embodied in a Data View Template, the first piece of automation that we employ in most workflows.
A Data View Template is a pre-defined set of data and a layout for how we want it to be displayed as a Dynamic Data View. However, in addition to simply gathering and displaying the relevant data for the subject at hand, there are three cool things that distinguish a Dynamic Data View from a normal, static Data View:
In NetBrain, user actions and the resulting data and notes are documented in the powerful, simple Runbook format. Runbooks are essentially a series of steps, each step defining an automation task for the NetBrain system to run and recording the results, along with any notes the user wishes to include. The end result is automatic, complete documentation of the workflow in question.
All automation functions in NetBrain, both hardwired and adaptive, are executed via Runbook steps. Their results are stored in the Runbook, which in turn is stored within a Dynamic Map. Therefore, all particulars of a workflow are automatically embedded in the Dynamic Map used to execute said workflow.
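A Runbook, as described above, is essentially an ordered list of steps, each pairing an automation task with its recorded result, timestamp, and optional user notes. The data structure below is a hypothetical sketch of that shape (names and fields are mine, not NetBrain’s), just to show how running a step and documenting it can be one and the same operation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Any, Callable, Optional

@dataclass
class RunbookStep:
    name: str
    result: Any = None
    notes: str = ""
    ran_at: Optional[datetime] = None

@dataclass
class Runbook:
    title: str
    steps: list = field(default_factory=list)

    def run_step(self, name: str, action: Callable[[], Any], notes: str = ""):
        """Execute an automation task and record its result as a step."""
        step = RunbookStep(name, result=action(), notes=notes,
                           ran_at=datetime.now(timezone.utc))
        self.steps.append(step)
        return step.result

rb = Runbook("Interface-down triage")
rb.run_step("Check interface status", lambda: {"Gi0/1": "down"},
            notes="Pulled live via CLI")
rb.run_step("Compare to golden baseline", lambda: {"Gi0/1": "expected up"})
# rb.steps now holds a complete, timestamped record of the workflow.
```

Because every step both executes the task and stores its outcome, the documentation is a side effect of doing the work, which is exactly the property that lets a Dynamic Map carry the full history of a troubleshooting session.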
Progression to Level-2 Automation Success involves pre-defining Runbook flows. We will discuss these Runbook Templates, and the rest of Level-2, in Part 3 of this series.
To help us jumpstart our NetBrain deployments to Level-1 Automation Success, the NetBrain Support & Services Teams, with lots of help from the NetBrain user community, have developed a huge package of general-use Data View Templates (DVTs) covering all manner of features and operating systems. This package is freely available to anyone who wants it.
Moreover, they have developed a simple utility program to let us easily determine which DVTs within the package are relevant to our network and automatically set up Benchmarking and Golden Baseline analysis for the data in these DVTs.
To get started with the Level-1 Automation Success Package, please click on this beautiful and finely-crafted link. Still more information can be found in either video-on-demand or instructor-led webinar format.
Over the next few days, I will be posting the two additional parts of this overview, covering Level-2 and Level-3 Automation Success, respectively. Once they are live, we will add links to each article below.