IT management is a tricky balancing act. Navigating federal government IT regulations? Well, that’s a high-wire act, and most are traversing it without a safety net.
Traditional Network Operations teams have for decades relied on brute-force tactics to get through their long list of service tasks each day. They barely have time to think, and with hundreds of network engineers embedded in these hands-on activities, it is no wonder they have been slow to adopt anything new (including automation) as a way to look forward.
Ironic, because a small investment in time and new technology can augment, and in many cases enable, dramatically more effective operations. Network automation is a game changer, and when coupled with no-code tooling, it makes the whole NetOps function feel “magical.”
The federal government IT sector, for those unfamiliar with it, can be difficult to navigate; the sheer number of security measures, compliance standards, and industry and workforce practices sets it apart from most other industries. Networks that serve federal agencies are experiencing sharper growing pains than their commercial counterparts because they were traditionally locked-down environments where changes happened infrequently.
Now, as federal government IT operators come under increased pressure to modernize, it is becoming clear that much of their infrastructure isn’t prepared to enter the modern age with them. Just recently, for example, the Pentagon failed its first audit, with auditors noting issues with compliance, cybersecurity policies, and inventory accuracy.
With this in mind, it’s important to understand the elements that make the Federal government IT sector different.
Let’s take a look at a few use cases that could use some help from our friendly Network Operating System.
#1: Network Assessment and CISA 2023-01 compliance
Network assessment has always been a good idea. That is a fact. Understanding what is connected to what, and how the network is delivering the packets and services its stakeholders need, has always been a goal for IT leaders. But that’s easier said than done.
In fact, it is so difficult to do that in the private sector assessments occur infrequently, perhaps once every three years or so, and in the government sector, due to clearances and data sensitivity, even less often. FCEB agencies have therefore largely ignored this basic best practice and hoped they would not come to regret it. Well, in 2022, CISA decided that enough was enough and issued a mandate:
“By April 3, 2023, all FCEB agencies are required to take the following actions on all federal government IT systems in the scope of this directive:
- Perform automated asset discovery every 7 days. While many methods and technologies can be used to accomplish this task, at a minimum this discovery must cover the entire IPv4 space used by the agency.
- Initiate vulnerability enumeration across all discovered assets, including all discovered nomadic/roaming devices (e.g., laptops), every 14 days.”
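The mechanics of that weekly sweep can be sketched in a few lines. This is a minimal illustration, not NetBrain’s implementation: the helper names are hypothetical, and a real deployment would use a full scanner or discovery engine rather than expanding addresses by hand.

```python
# Minimal sketch of the weekly asset-discovery loop required by the directive.
# Helper names are illustrative; real discovery uses a scanning engine.
import ipaddress
from datetime import datetime, timedelta

DISCOVERY_INTERVAL = timedelta(days=7)  # the directive's minimum cadence

def hosts_in_scope(cidrs):
    """Expand the agency's IPv4 ranges into individual host addresses."""
    for cidr in cidrs:
        yield from ipaddress.ip_network(cidr).hosts()

def reconcile(discovered, inventory):
    """Compare a scan result against the authoritative inventory.

    Returns (missing, rogue): assets expected but absent, and assets
    present on the network but not in the inventory.
    """
    discovered, inventory = set(discovered), set(inventory)
    return inventory - discovered, discovered - inventory

def discovery_due(last_run, now=None):
    """True when the next mandated sweep is due."""
    now = now or datetime.utcnow()
    return now - last_run >= DISCOVERY_INTERVAL
```

The `reconcile` step is the heart of the requirement: every cycle you want both lists, devices that should be there but aren’t, and devices that are there but shouldn’t be.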
So how can you meet this requirement WITHOUT network automation? Answer: You can’t. It’s just simple math. If you have 10,000 assets in your purview, and you need to verify their existence and related anomalies every 7 days, how could you possibly meet the mandate’s terms without network automation?
NetBrain solves all of this, enabling fully automated discovery and verification as often as you like. It identifies devices that should be on the network but are missing, and devices that are on the network but should not be. NetBrain then goes much deeper, establishing the flow of data between services and applications and identifying deviations from expected behavior in real time. NetBrain is the answer for CISA BOD 23-01.
#2: Validating Network Refresh environments with Change Management
We recently worked on a finance-related refresh project, examining both the legacy and refresh environments and determining the security posture of each. It was a straightforward financial application, but it needed to hit a number of SLAs that the old equipment wasn’t equipped to provide. To make things worse, much of the new equipment had to be set into FIPS-CC compliance mode, which inadvertently wiped the security settings on the refresh environment clean.
Over the course of six weeks, we evaluated individual config files, compared tradeoffs between hardening certain systems vs meeting SLAs, and made sure the system as a whole was as secure as the one that came before it. Ultimately, this resulted in a lot of manual work and documentation, as older legacy systems were pulled apart and examined in order to help recreate golden configuration files for the newer systems.
Automation would have helped in two ways. The ability to upload configuration changes at scale would have saved time and effort, but more importantly, the ability to perform benchmark before/after comparisons, along with customized reporting, would have made short work of the entire assignment. It would have told us at a glance which devices were out of compliance with the golden standards and let us modify them all at once.
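The before/after comparison itself is conceptually simple, which is what makes doing it by hand for six weeks so painful. A minimal sketch, with illustrative device names and a stand-in “golden” baseline:

```python
# Sketch of a golden-config benchmark comparison across a device fleet.
# Device names and configs are illustrative stand-ins.
import difflib

def config_drift(golden: str, running: str) -> list:
    """Return unified-diff lines showing where a device deviates from golden."""
    return list(difflib.unified_diff(
        golden.splitlines(), running.splitlines(),
        fromfile="golden", tofile="running", lineterm=""))

def non_compliant(devices: dict, golden: str) -> list:
    """Names of devices whose running config differs from the golden standard."""
    return [name for name, cfg in devices.items() if config_drift(golden, cfg)]
```

Run before the refresh to capture the baseline, then after, and the `non_compliant` list becomes the work queue instead of a weeks-long manual audit.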
#3: NIST Compliance with Network Intents
Today more than ever, federal government IT administrators rely on external service providers to carry out a wide range of services using information systems. Protecting confidential information stored in non-federal IT systems is one of the government’s highest priorities, and it has required the creation of a uniquely federal cybersecurity protocol.
NIST, the National Institute of Standards and Technology, has created a compliance standard of recommended security controls for federal government IT systems. The standard is endorsed by federal clients because it encompasses security best-practice controls drawn from a wide range of industries.
In a lot of instances, complying with NIST guidelines and recommendations also helps federal agencies remain compliant with other regulations, like HIPAA, FISMA, FIPS, etc. NIST is focused primarily on infrastructure security, and uses a value-based approach in order to find and protect the most sensitive data.
NetBrain fits in very well here too. Network Intents (sometimes referred to as Runbooks), as you may know, are NetBrain’s built-in capabilities for describing and automatically executing expected network operations.
NetBrain performs a number of data collection tasks to verify that the target network is functioning within acceptable security parameters. NIST compliance requires the client to control access and encryption protocols for its most sensitive devices, and the larger a network becomes, the more labor-intensive and error-prone the compliance check is.
Compliance-related Intents are especially useful for audits, as they reduce the time spent crawling between devices and clearly pinpoint where the problem areas in your network are.
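To make the idea concrete, a compliance-oriented check can be thought of as a set of declarative rules run against data collected from each device. This is a hedged sketch of the pattern, not NetBrain’s Intent syntax; the rule names and config snippets are invented for illustration.

```python
# Sketch of declarative compliance rules applied to collected device configs.
# Rule names and the config format are illustrative, not a real Intent API.
RULES = {
    "ssh_v2_only": lambda cfg: "ip ssh version 2" in cfg,
    "telnet_disabled": lambda cfg: "transport input telnet" not in cfg,
    "aes_encryption": lambda cfg: "encryption aes" in cfg,
}

def audit(device_configs: dict) -> dict:
    """Map each device to the list of rules it fails (empty = compliant)."""
    return {
        name: [rule for rule, check in RULES.items() if not check(cfg)]
        for name, cfg in device_configs.items()
    }
```

The audit report reads directly as the remediation list, which is exactly the property that makes Intent-style checks attractive when an auditor is in the room.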
#4: Security Remediation with Just-In-Time Automation
Just-In-Time Automation is an API-triggered NetBrain diagnosis that clients typically program to run when a monitoring alert fires or a helpdesk ticket is created.
Within the context of applications on a network, one of the most common uses of Just-In-Time Automation is to reduce the Mean Time to Restoration (MTTR) for problems that occur on the network, but given the sensitive nature of many federal government IT systems, another good application for this feature is security remediation. An IPS will only tell you where the malicious traffic is located, but NetBrain can provide an outline of the infected area in the context of the rest of your network.
Essentially, by integrating via API with an intrusion prevention platform, NetBrain can be triggered to identify the infected area, calculate the path between the attacker and the victim, and tag that area in a map URL for any security engineer who goes onsite.
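The flow just described can be sketched as a small handler: an IPS alert payload comes in, and the handler returns a pre-built triage map link for the responding engineer. The endpoint shape, field names, and URL format here are assumptions for illustration, not NetBrain’s actual API.

```python
# Illustrative Just-In-Time handler: IPS alert in, triage map URL out.
# The payload fields and map URL scheme are hypothetical.
from urllib.parse import urlencode

def handle_ips_alert(alert: dict,
                     map_base: str = "https://netbrain.example/map") -> str:
    """Turn a monitoring alert into a pre-built triage map URL."""
    src, dst = alert["attacker_ip"], alert["victim_ip"]
    # In a real integration, the path calculation between the two endpoints
    # would be triggered here, before the map link is generated.
    query = urlencode({"src": src, "dst": dst, "incident": alert["ticket_id"]})
    return f"{map_base}?{query}"
```

The point is the shape of the workflow: by the time a human opens the ticket, the map link with the attacker-to-victim path is already attached.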
This speeds up the general MTTR of the incident, as most of the initial triage work is completed by the time a human sits down to resolve it. Security incidents are as time-sensitive as network outages, if not more so, and having the ability to eliminate the fact-finding and data-collection operations means the organization will be more effective when it counts.
#5: Improving Documentation Handoff
One of the biggest problems the federal government IT sector faces is keeping things consistent between contract turnovers.
When contractors leave a job, they typically hand over a few large deliverables covering the changes that were made, the security posture of the network, and a few recommendations for the systems moving forward. For several weeks afterward, these contractors may be called back in to explain the changes to the newer contractors overseeing the system and advise them on how to implement certain security measures on the network.
Even when the previous team of contractors could be made available to come back, that overlap of teams, maintained purely for continuity, was still a significant expense to the client. Imagine if the previous team were not available at all: the new contractors would have to spend time and energy re-learning everything the previous team had already discovered.
NetBrain has always prided itself on having the industry’s most comprehensive data model of any hybrid, multi-cloud connected network. It builds this model by describing the devices, the topology connecting them, the control plane and forwarding of packets within that topology, and the expected behaviors (Intents) of the network based on business needs, creating a rich Digital Twin.
Documentation is just another representation of that Digital Twin. When most people think of documentation, they historically gravitate toward topology maps. In fact, many people who have used NetBrain for years still consider its maps their primary source of truth, with the ability to generate real-time maps at the push of a button.
But this Digital Twin can do so much more. Most notably, it can use its Network Intents to document common network processes and expected behaviors, serve as a collaboration platform across teams, and hand off information to the people who need it to resolve incidents.
And because NetBrain’s Digital Twin is dynamic and historical, NetBrain can identify deviations between the current infrastructure and how it existed in the past by comparing against archived versions. This is hugely valuable when investigating network changes and design compliance. NetBrain’s Digital Twin becomes the detailed record of what has been done and how it has changed over the years. It remains a permanent and readily accessible reference for the client’s entire network refresh, which can serve as the building blocks of future projects.
Ultimately, modern federal government IT organizations behave much like their private-sector counterparts, with many business requirements shared across verticals. Where the federal government IT sector stands out is in its unique rotation of staff, its compliance standards, and its emphasis on access and security over ROI.