Introducing the xSNMP Management Pack Suite


Over the past several weeks, I’ve been hard at work on some new SNMP management packs for Operations Manager 2007 R2, intended to replace the Cisco SNMP MP and extend similar functionality to a wide range of SNMP-enabled devices. In the next few posts, I hope to describe some of the design and development considerations related to these Management Packs, which I am calling the xSNMP Management Pack Suite. In this post, I hope to give a basic overview of the development effort and the resulting management packs.

As I was working on feature enhancements to the Cisco SNMP Management Pack, and following some really great discussions with others on potential improvements, I concluded that a more efficient and effective design could be realized by aligning the management pack structure with the SNMP standard itself. To expand on this point, much of the monitoring in the Cisco MP is not specific to Cisco devices; rather, most of it is common to all SNMP devices. The SNMP standard defines a hierarchical set of standard MIBs, along with a hierarchical organization of vendor-specific MIBs, designed to eliminate redundancy. I tried to loosely adapt this model in the xSNMP MP architecture. The first of the MPs, and the one that all of the others depend on, is the root xSNMP Management Pack. This management pack has a few functions:

  1. It performs the base discovery of SNMP devices (the discovery is targeted to the SNMP Network Device class)
  2. It implements monitoring of the SNMP v1/v2 objects for discovered devices and interfaces
  3. It provides a set of standardized and reusable data sources for use in dependent management packs

From there, the remaining management packs implement vendor-specific monitoring. Devices and/or interfaces for the vendor-specific management packs are discovered as derived objects from the xSNMP MP, and most of the discoveries, monitors, and rules utilize the common data sources from the xSNMP MP, which makes both the initial and the ongoing development of vendor-specific MPs much more efficient.

Controlling Interface Monitoring

One of the topics frequently commented on with the Cisco SNMP Management Pack, and a subject of much deliberation, was the selection of network interfaces for monitoring. Even determining the optimal default interface monitoring behavior (disabled vs. enabled) isn’t an easy decision. For example, a core network switch in a datacenter may require that nearly all interfaces be monitored, while a user distribution switch may only require monitoring of a few uplink ports. In the end, I decided on an approach that seems to work quite well. In the xSNMP Management Pack, all interface monitoring is disabled by default. A second, unsealed management pack is also provided and includes groups to control interface monitoring (e.g. Fully Monitored, Not Monitored, Status Only Monitored). Overrides are pre-configured in this MP to enable/disable the appropriate interface rules and monitors for these groups. So, to enable interface monitoring for all Ethernet interfaces, a dynamic group membership rule can be configured to include objects with interface type 6; or, if critical interfaces are consistently labeled on switches with an Alias, the Interface Alias can be used in rules for group population.
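A dynamic membership rule of this sort can be sketched roughly as the following unsealed-MP fragment. Note that all of the IDs and the InterfaceType property name here are hypothetical placeholders, not the actual xSNMP identifiers:

```xml
<!-- Sketch: a group discovery that populates a "Fully Monitored" group with
     all interfaces whose ifType is 6 (ethernetCsmacd).
     All IDs and property names are placeholders. -->
<Discovery ID="Example.FullyMonitoredInterfaces.Discovery" Enabled="true"
           Target="Example.FullyMonitoredInterfaces.Group"
           ConfirmDelivery="false" Remotable="true" Priority="Normal">
  <Category>Discovery</Category>
  <DiscoveryTypes>
    <DiscoveryRelationship TypeID="Example.FullyMonitoredInterfacesGroup.Contains.Interface"/>
  </DiscoveryTypes>
  <DataSource ID="GroupPopulator" TypeID="SC!Microsoft.SystemCenter.GroupPopulator">
    <RuleId>$MPElement$</RuleId>
    <GroupInstanceId>$MPElement[Name="Example.FullyMonitoredInterfaces.Group"]$</GroupInstanceId>
    <MembershipRules>
      <MembershipRule>
        <MonitoringClass>$MPElement[Name="Example.SnmpInterface"]$</MonitoringClass>
        <RelationshipClass>$MPElement[Name="Example.FullyMonitoredInterfacesGroup.Contains.Interface"]$</RelationshipClass>
        <Expression>
          <SimpleExpression>
            <ValueExpression>
              <Property>$MPElement[Name="Example.SnmpInterface"]/InterfaceType$</Property>
            </ValueExpression>
            <Operator>Equal</Operator>
            <ValueExpression>
              <Value>6</Value>
            </ValueExpression>
          </SimpleExpression>
        </Expression>
      </MembershipRule>
    </MembershipRules>
  </DataSource>
</Discovery>
```

An equivalent rule keyed on the Interface Alias would simply swap the Property reference and use a MatchesWildcard or MatchesRegularExpression operator against the alias text.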

Organizing Hosted Objects

For each of the management packs, I tried to take a standardized approach to the hierarchical organization of hosted objects and their relationships. This organization was facilitated primarily through the use of arbitrary classes to contain child objects. So, rather than discovering all interfaces of a device with a single hosting relationship to the parent, an intermediary logical class (named “Interfaces”) is discovered with parent and child hosting relationships. This approach has three primary benefits: 1) the graphical Diagram View is easier to navigate, 2) the object hierarchy is more neatly organized for devices that may be monitored by multiple MPs (e.g. a server monitored by three MPs for SNMP hardware monitoring, O/S monitoring, and application monitoring), and 3) the organization of hosted objects is consistent, even for devices with multiple entities exposed through a single SNMP agent.
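In MP XML terms, the intermediary container pattern can be sketched roughly as follows. The type and relationship IDs are hypothetical placeholders, not the actual xSNMP names:

```xml
<!-- Sketch: an arbitrary "Interfaces" container class hosted by the device,
     which in turn hosts the individual interface objects.
     All IDs are placeholders. -->
<ClassTypes>
  <ClassType ID="Example.Device.Interfaces" Accessibility="Public" Abstract="false"
             Base="System!System.LogicalEntity" Hosted="true" Singleton="false"/>
</ClassTypes>
<RelationshipTypes>
  <!-- Device hosts the logical "Interfaces" container... -->
  <RelationshipType ID="Example.Device.Hosts.Interfaces" Accessibility="Public"
                    Abstract="false" Base="System!System.Hosting">
    <Source>Example.Device</Source>
    <Target>Example.Device.Interfaces</Target>
  </RelationshipType>
  <!-- ...and the container hosts each discovered interface. -->
  <RelationshipType ID="Example.Interfaces.Host.Interface" Accessibility="Public"
                    Abstract="false" Base="System!System.Hosting">
    <Source>Example.Device.Interfaces</Source>
    <Target>Example.Interface</Target>
  </RelationshipType>
</RelationshipTypes>
```

Because hosting relationships are transitive for display purposes, the Diagram View then shows Device → Interfaces → individual interface objects as a clean three-level tree.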


With loads of invaluable help from some volunteer beta testers, a great deal of time has been spent testing and investigating performance and scalability for these management packs. While I will save many of the details for a later post, I can offer a few comments on the topic. In all but the smallest SNMP-monitoring environments, it’s highly advisable to configure SNMP devices to be monitored by a node other than the RMS. For larger environments, one or more dedicated Management Servers or Agent Proxies (Operations Manager agents configured to proxy requests for SNMP devices) are preferred for optimal performance. From our testing with these Management Packs, a dedicated agent proxy can be expected to effectively monitor between 1,500 and 3,500 objects, depending on the number of monitors/rules, the intervals configured, and the processing power of the agent proxy. By object, I am referring to any discovered object that is monitored by SNMP modules, such as devices, interfaces, fans, file systems, power supplies, etc. So, monitoring a switch infrastructure with 4,000-6,000 monitored network interfaces should be doable with two dedicated agent proxy systems.

I intend to write in greater detail about these topics in the coming weeks, and hope to post the first public beta version of these management packs soon.

SCOM: Combining a System.SnmpProbe and System.Performance.DeltaValueCondition Modules to Calculate SNMP Counter Delta Values

I have previously written about using the combination of an SnmpProbe and a script probe in Operations Manager workflows to facilitate manipulation of numeric values. While this is currently the only way to perform general numeric operations, there are some cases in which the only required manipulation of a numeric value is the calculation of a delta between two polls, such as calculating the number of interface collisions in an interval (from the ifTable) or the number of interface resets in a polling cycle (from the Cisco locIfTable). In these cases, the SnmpProbe can be combined with a System.Performance.DeltaValueCondition condition detection module to calculate the delta value without having to engage a script probe.

The DeltaValueCondition module expects Performance Data as an input, so a System.Performance.DataGenericMapper must be used between the SnmpProbe and DeltaValueCondition modules to perform the data conversion. The DeltaValueCondition module accepts two options: NumSamples and Absolute. The NumSamples parameter sets the number of value samples to maintain in memory, and the value returned is the difference between the first and last samples in memory. The Absolute parameter, when true, causes the DeltaValueCondition module to return the delta as the raw difference between the samples, and when false, causes the module to return the percentage of change.
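Pieced together, a composite data source along these lines might look roughly like the following sketch. Module configuration is trimmed, the IDs, target class names, and target properties are hypothetical, and the ifInErrors OID is just an illustrative counter column from the ifTable:

```xml
<!-- Sketch: scheduler drives an SnmpProbe; the varbind value is mapped to
     performance data and run through DeltaValueCondition.
     IDs, classes, and properties are placeholders. -->
<DataSourceModuleType ID="Example.InterfaceCounter.DeltaDS" Accessibility="Internal">
  <Configuration/>
  <ModuleImplementation>
    <Composite>
      <MemberModules>
        <DataSource ID="Scheduler" TypeID="System!System.Scheduler">
          <Scheduler>
            <!-- "SimpleReccuringSchedule" spelling is per the OpsMgr schema -->
            <SimpleReccuringSchedule>
              <Interval Unit="Seconds">300</Interval>
            </SimpleReccuringSchedule>
            <ExcludeDates/>
          </Scheduler>
        </DataSource>
        <ProbeAction ID="SnmpProbe" TypeID="Snmp!System.SnmpProbe">
          <IsWriteAction>false</IsWriteAction>
          <IP>$Target/Host/Property[Type="Example.SnmpDevice"]/IPAddress$</IP>
          <CommunityString>$Target/Host/Property[Type="Example.SnmpDevice"]/CommunityString$</CommunityString>
          <SnmpVarBinds>
            <SnmpVarBind>
              <!-- e.g. ifInErrors for the target interface index -->
              <OID>.1.3.6.1.2.1.2.2.1.14.$Target/Property[Type="Example.SnmpInterface"]/Index$</OID>
              <Syntax>0</Syntax>
              <Value VariantType="8"/>
            </SnmpVarBind>
          </SnmpVarBinds>
        </ProbeAction>
        <ConditionDetection ID="Mapper" TypeID="Perf!System.Performance.DataGenericMapper">
          <ObjectName>Network Interface</ObjectName>
          <CounterName>Errors</CounterName>
          <InstanceName>$Target/Property[Type="Example.SnmpInterface"]/Index$</InstanceName>
          <Value>$Data/SnmpVarBinds/SnmpVarBind[1]/Value$</Value>
        </ConditionDetection>
        <ConditionDetection ID="Delta" TypeID="Perf!System.Performance.DeltaValueCondition">
          <NumSamples>2</NumSamples>
          <Absolute>true</Absolute>
        </ConditionDetection>
      </MemberModules>
      <Composition>
        <Node ID="Delta">
          <Node ID="Mapper">
            <Node ID="SnmpProbe">
              <Node ID="Scheduler"/>
            </Node>
          </Node>
        </Node>
      </Composition>
    </Composite>
  </ModuleImplementation>
  <OutputType>Perf!System.Performance.Data</OutputType>
</DataSourceModuleType>
```

In a real workflow, an expression filter would typically sit between the SnmpProbe and the mapper to discard empty or failed poll results before they reach the delta calculation.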

An example workflow can be represented in this diagram (an expression filter is used to validate the data returned from the SnmpProbe before continuing):


SCOM: WSH Vs. PowerShell Modules in Composite Workflows – Resource Utilization in SNMP Data Manipulation

One of the realities of working with SNMP monitoring is that, more often than not, the monitoring data are presented in a raw form that requires some kind of manipulation in order to render meaningful output. For example, the required data manipulation may be a simple arithmetic operation on two values to calculate a percentage, or, in the case of Counter data, mathematical operations based on the delta between values recorded in multiple polling cycles. In Operations Manager, these manipulations require exiting the realm of managed code and utilizing script-based modules to perform the operations or to facilitate temporary storage of values from previous polling cycles. Two sets of modules are available for the Operations Manager-supported scripting engines: WSH and PowerShell. To date, I had been opting to use VB scripts when authoring Management Packs for two reasons: 1) WSH is universally deployed in Windows environments whereas PowerShell is not necessarily so; by using VB scripts, there is no requirement to install PowerShell on proxy agents; and 2) I had assumed that the resource utilization impact of PowerShell was equal to or greater than that of WSH. The latter assumption was based on the simple observation that, when watching process resource utilization while launching powershell.exe and cscript.exe, powershell.exe consumes more memory and CPU time (assuming WSH 5.7 is installed).

The resource utilization of these script providers becomes a major concern, particularly when implementing script-based modules in SNMP monitoring scenarios. To illustrate this point: if a proxy agent were configured to proxy SNMP requests for 10 Cisco switches, with each switch having an average of 20 discovered interfaces, and each interface monitored with two monitors that utilize a script probe action to manipulate the raw SNMP data (e.g. collisions and octets), 400 scripts would be executed in a single polling cycle for the interface monitors alone in this small-scale scenario. This poses a threat to the scalability of SNMP monitoring and could severely limit the number of devices/objects a single proxy agent can handle effectively.

In the course of trying to find a way to address this scalability issue, I was fortunate enough to communicate with someone possessing a great deal of insight into Operations Manager, who helpfully suggested that the PowerShell modules should be more efficient than WSH-based modules in composite workflows. I rewrote all of the scripts in the Cisco MP to convert them from VBScript to PowerShell and began some testing. I was familiar with the tighter integration of PowerShell in R2 modules (PS scripts no longer have to be launched as external commands), but to be honest, I was expecting to see a large number of powershell.exe processes spawned as the monitors fired. However, this is not the case. Rather, it appears that the modules execute the PowerShell scripts through the .NET Framework within the context of the monitoringhost.exe process. This does appear to be more efficient overall, as the overhead associated with spawning new processes is effectively eliminated, and my impression thus far is that overall CPU utilization is reduced.

However, switching from WSH scripts to PowerShell scripts in R2 workflows is a bit like jumping from the frying pan into the fire: instead of spawning a large number of processes, each consuming relatively small amounts of processor and memory resources, the PowerShell script modules drive a single process (monitoringhost.exe) to consume a large quantity of resources, particularly CPU cycles. Overall, memory utilization looks a lot better with the PowerShell modules, and although CPU utilization does seem to be better, it is still a concern for scalability.

Thus far, I have been doing this performance testing in a development environment, with OpsMgr running on virtual machines on both workstation and older server-class hardware, neither of which provides a good indication of real-world scalability (particularly given that I have these VMs running SQL, all OpsMgr duties, and SNMP simulations to boot). On one of these woefully over-utilized VMs, somewhere around 130-150 interfaces on 10 monitored Cisco devices seemed to be the breaking point, but a more realistic OpsMgr deployment scenario (segregated database, RMS, and MS duties) on physical hardware should be able to handle far more than that. I will report an update once I get a chance to do some broader scalability testing with the PowerShell version of the MP on more appropriate hardware.

In summary, both the WSH and PowerShell probe and write action modules introduce a heavy CPU load, at least relative to the very simple operations required to manipulate SNMP data, and a managed code module would be far more desirable, if one were available. However, at present, these two providers are the only supported mechanisms for handling data that require processing before being returned to a rule or monitor. My testing thus far appears to support the assertion that R2 implements the PowerShell modules more efficiently than the WSH-based modules, which is welcome news, given the relative ease and impressive flexibility of scripting with PowerShell. I’ve seen some talk that PowerShell V2 is supposed to bring significant performance improvements over V1, and I hope to do some testing with the CTP version of V2 on an OpsMgr proxy agent in the very near future to see if it helps address any of the scalability challenges in SNMP monitoring with OpsMgr. As for the best approach at present, it looks like PowerShell is the way to go, and the overall impact on the MS/proxy agents can be mitigated by spreading monitored objects across multiple proxy agents, limiting discovery to only those objects that need to be monitored (i.e. interfaces), and avoiding overly aggressive scheduling of monitors.

SCOM: Building on the Net-SNMP MPs

Due to the ubiquity of the Net-SNMP agent, the Net-SNMP management packs can be used for a wide range of UNIX/Linux devices, and one of my primary intentions in creating these management packs was to extend them to Linux-based proprietary platforms such as Check Point Secure Platform and VMWare ESX.  To that end, I am currently putting the finishing touches on management packs for Check Point Splat and VMWare ESX SNMP monitoring that reference the Net-SNMP Library MP. 

Check Point Secure Platform

SPlat is a hardened Linux operating system, which conveniently supports the Net-SNMP agent for manageability. The Check Point-specific SNMP objects are exposed through the extended Net-SNMP agent as described in the CHECKPOINT-MIB. So, in this case, the Net-SNMP Monitoring MP can be used for basic system health, while an additional Check Point MP can be layered on to monitor the Check Point software modules for availability status and Firewall/VPN/etc. performance metrics.


VMWare ESX

Of course, ESX Server is a modified Red Hat Enterprise Linux distribution that also utilizes the Net-SNMP agent for SNMP support. VMWare exposes ESX-specific objects to SNMP via dlmod extensions to the Net-SNMP agent, including VM guest info and some performance metrics. So, in VMWare environments, the host operating system can be monitored for health through traditional Net-SNMP-implemented MIBs (UCD-SNMP, HOST-RESOURCES), while VMWare-specific counters can be monitored through the use of the VMWare MIBs.

When it comes to monitoring VMWare, the VMWare SNMP implementation has the advantage of being easy to deploy and rather lightweight, and given the likelihood that SNMP may already be in use in VMWare environments for full vendor hardware monitoring, it is a good way to introduce some monitoring of the hypervisor virtualization layer. That being said, the VMWare SNMP implementation does leave a lot to be desired; for example, alarms/events are only exposed in SNMP through traps, only a few performance counters are available, and many VMWare Infrastructure objects are not represented. For more comprehensive monitoring of a VMWare environment, the only data provider choice seems to be the VMWare API. I’m working on something along those lines presently, but I’ll post more on that at a later date.

SCOM: SP1 Edition of the Cisco Management Pack, v1.0.2.6

I have completed the first version of the Cisco Management Pack for SP1 compatibility. The monitoring in the management pack is identical to that of the R2 version of the MP, described most recently here. Due to the dependence on the WMI SNMP provider for object discovery, there are inevitably some scalability limitations intrinsic to the SP1 edition of this MP, but I haven’t done enough full-scale testing to ascertain those limitations as of yet. Additionally, deployment of this management pack requires some additional steps, detailed below (taken from the MP documentation):


This management pack utilizes the WMI SNMP provider to perform discovery of SNMP objects.  In order to use this management pack, the following steps must be completed on each server that will function as a proxy agent for SNMP-monitored Cisco devices.

Install the SNMP protocol and WMI SNMP provider

  • To install these components, open Add/Remove Programs in the Control Panel and select Add/Remove Windows Components. Under Management and Monitoring Tools, select Simple Network Management Protocol and WMI SNMP Provider.

The following MIBs must be exported to MOF files with smi2smir.exe and imported with MOFComp.exe:  CISCO-ENVMON-MIB and CISCO-MEMORY-POOL-MIB.  

  • The MIBs and a batch file to perform these steps can be found in the /Setup directory included with this MP distribution. Run the RegCiscoMibs.cmd file and check the output log (register.log) to confirm that the MIBs were successfully compiled and imported.

Recommended Proxy Agent Configuration

If WMI receives too many requests in a short time, it may suspend processing of requests for a period of time.   This can impact the ability of this management pack to discover Cisco SNMP objects in a timely fashion (WMI SNMP is only used by this MP for discovery, and not object monitoring).   In order to minimize the chances of this situation occurring, the object discoveries in this MP should not be scheduled too aggressively.   Additionally, it is recommended that if a large number of Cisco SNMP devices are to be monitored, they should be distributed across multiple proxy agents for load-balancing of the WMI SNMP requests. 

It is highly recommended that all agents that will function as proxy agents for SNMP devices have the Windows Script Host version 5.7 installed.  Version 5.7 is far more efficient than previous versions and reduces the resource utilization by the cscript.exe process dramatically.

It is also highly recommended that all agents that will function as proxy agents for SNMP devices have the hotfix: KB96163 applied.  This hotfix resolves instability with SNMP monitoring in Operations Manager 2007 SP1.


I’m interested to hear how this SP1 edition of the Cisco Management Pack functions in different environments, so any feedback is most certainly welcome. I will continue to post updates to this site, so be sure to check back regularly. For more information about the scripts utilized in discoveries in this edition of the MP, this post should sum it up.

SCOM: Updates to the Cisco Management Pack (R2) v1.0.2.6

I’m hoping to finish up the SP1 version of the Cisco Management Pack pretty soon, but in the meantime I’ve modified the R2 version to include several new changes. The current version can be downloaded here.

The changes in this version are:

  • Added three new containment classes:  Cisco Device Chassis, Cisco Device System Components, and Cisco Device Interfaces.   These classes contain monitored objects to add an additional level of hierarchical organization.
  • Added discovery of the IFAlias property for Interfaces
  • Added discovery of the Hostname (OLD-CISCO-MIB) and Chassis description for the Cisco Device class.
  • Updated the properties displayed by default in the Device and Interface views
  • Added a rule to clean up unused XML temporary files once a day.   Several of the monitors utilize temporary XML files written to the %TEMP% path.  In the previous version, old files would be left on the file system if a previously monitored object was removed.  This rule will remove those temporary files.
  • Modified discovery intervals for some objects for more balanced timing.
  • Added four new monitors for switches that implement the CISCO-STACK-MIB. The monitors are targeted at the Cisco Device Chassis class and include:
    • Fan Alarm
    • Temperature Alarm
    • Minor Alarm
    • Major Alarm

With the new containment classes, the diagram view looks a lot better:

SCOM: Updates to the Net-SNMP Management Packs. v1.0.1.30

I’ve made a few minor updates to the Net-SNMP Management Packs, available at the same download location. The changes are:

  • Implemented new class:  Net-SNMP Monitored Process Instance, along with rules to collect and monitor process instance CPU and memory use.  Reference the MP documentation in the zip file (and the preceding post) for more information.
  • Updated all alerts to include the hosting device name.
  • Added a rule to periodically purge old temporary files used by the Interface Utilization data source.

Any future updates will be posted to this blog, and thanks to everyone who has commented on these management packs. 

Some screenshots of the new process instance monitoring capabilities: