The browser-based interface of NetBrain Integrated Edition is backed by a full-stack architecture that adopts distributed technologies to support large-scale networks and to allow for further expansion.

The distributed system architecture is as follows:

Note: The port numbers listed in the above architecture diagram are defaults only. The actual port numbers used during installation might be different.

The system components include:

- Browser-based Thin Client: Provides a user interface for end users to access the system.
- MongoDB: The database that stores user data (e.g., maps and site definitions) and network data.
- License Agent: Provides services that validate and activate licenses.
- Elasticsearch: Serves as a full-text search and analytics engine in a distributed multi-user environment.
- Redis: Provides memory cache for the system.
- RabbitMQ: Transfers messages from one component to another.
- Web Server: Serves static content such as HTML, JavaScript, and CSS resources, which together form the user interface of the Thin Client.
- Web API Server: Serves RESTful API calls from browsers and third-party applications for integration (see the illustrative API call after this list).
- Worker Server: Serves parallel computing tasks across multiple servers. It relies on both Redis and RabbitMQ.
- Task Engine: Coordinates computing tasks.
- Front Server Controller: Coordinates and communicates with Front Servers and other components.
- Front Server: Serves as a polling server to collect and parse live network data. It is the only component that requires access to the live network.
- Service Monitor Agent: Monitors the health of your NetBrain Servers and provides operations management of the related services. Users can start or stop the service of each component and view its logs.
- Ansible Agent (add-on): Integrates with Ansible to define and execute playbooks and to visualize the results in Change Management Runbooks. See Ansible Integration for more details.
- Smart CLI (add-on): Provides a Telnet/SSH client to connect to devices from Windows and can be integrated with NetBrain workflows. See Smart CLI for more details.
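
As an illustration of how a third-party application can integrate through the Web API Server, the following Python sketch authenticates against the RESTful API and retrieves a session token. The server address, credentials, and endpoint path are assumptions made for this example; consult the NetBrain REST API documentation for the exact paths, headers, and payload fields.

import requests

# Hypothetical Web API Server address; replace with your own deployment URL.
WEB_API_BASE = "https://netbrain.example.com"

def get_api_token(username: str, password: str) -> str:
    """Log in against an assumed session endpoint and return the API token."""
    response = requests.post(
        f"{WEB_API_BASE}/ServicesAPI/API/V1/Session",  # illustrative endpoint path
        json={"username": username, "password": password},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["token"]

if __name__ == "__main__":
    token = get_api_token("api_user", "api_password")
    print("Token received:", token[:8] + "...")

Subsequent API calls would pass the returned token in a request header, as described in the REST API reference.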

Considerations for System Scalability

The considerations for system scalability are as follows:

- Web Server and Web API Server: Multiple Web Servers can be installed across data center locations and load-balanced under your load-balancing infrastructure to ensure the response time for accessing the Thin Client web pages. Multiple Web API Servers can be installed alongside the Web Servers and load-balanced in the same way when a large number of API calls is expected, such as intensive API-triggered diagnosis in large networks.
- Worker Server: Deploying more Worker Servers is recommended for a large number of back-end network automation tasks, such as TAF/PAF/IAF, path discovery, and runbook execution.
- Task Engine: Supports high availability with active/standby nodes.
- RabbitMQ: Supports high availability with three nodes.
- Redis: Supports high availability with master/replica/sentinel nodes.
- MongoDB: Supports high availability with primary/secondary/arbiter nodes (see the replica set check after this list).
- Elasticsearch: Supports high availability with normal and master-eligible-only nodes.
- Front Server: Deploying more Front Servers is recommended for a large number of network nodes. Each Front Server is recommended to manage at most 5,000 nodes (see the sizing sketch after this list).
- Front Server Controller: Supports high availability with active/standby nodes.
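
As a rough sizing illustration of the 5,000-node guideline above, the following sketch estimates the minimum number of Front Servers for a given node count; the node total used here is hypothetical.

import math

NODES_PER_FRONT_SERVER = 5000   # recommended maximum number of nodes per Front Server
managed_nodes = 23000           # hypothetical total of managed network nodes

front_servers_needed = math.ceil(managed_nodes / NODES_PER_FRONT_SERVER)
print(f"{managed_nodes} nodes require at least {front_servers_needed} Front Servers")

In this example, 23,000 nodes round up to at least five Front Servers.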
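
The high-availability options above rely on the standard clustering features of the underlying components. As one generic example (not a NetBrain-specific tool), the following sketch uses pymongo to report the role of each member in a MongoDB replica set, which is a quick way to confirm that the primary, secondary, and arbiter nodes are in their expected states; the host names and replica set name are hypothetical.

from pymongo import MongoClient

# Hypothetical replica set members and name; substitute the values of your deployment.
client = MongoClient(
    "mongodb://mongo1.example.com:27017,mongo2.example.com:27017/",
    replicaSet="rs0",
    serverSelectionTimeoutMS=5000,
)

# replSetGetStatus is a standard MongoDB admin command; each entry in "members"
# reports the node's current state, such as PRIMARY, SECONDARY, or ARBITER.
status = client.admin.command("replSetGetStatus")
for member in status["members"]:
    print(member["name"], member["stateStr"])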

 

See also:

System Requirement

How it Works