Using the Operations Manager 2007 R2 Workflow Analyzer

I’ve only had my hands on the OpsMgr MP Authoring Resource Kit for about 24 hours now, but already the tools are proving to be invaluable.  This post describes a problem that I was able to investigate with the Workflow Analyzer tool in order to determine its exact cause.

Background

In a management pack I’m working on, I had a composite workflow designed to calculate SNMP network interface throughput and utilization by collecting the 32-bit and 64-bit in and out octet counters for an interface.  The SnmpProbe passes the values of all four VarBinds to an Expression Filter, which confirms that either VarBinds 1 and 2 (64-bit) or VarBinds 3 and 4 (32-bit) have values greater than or equal to zero.  The Expression Filter then passes matched data items to a PowerShell property bag probe, which compares the values to a previously collected value set (stored in a temporary file in the file system) in order to calculate delta values and interface utilization and throughput.

The script was written to use the 64-bit counters if data are returned for them, and the 32-bit counters otherwise.  I had been having some issues with this workflow when it was targeted at interfaces on devices that do not support 64-bit interface octet counters.  From the lack of errors in the log, and evidence that the PowerShell script probe was not running (no temporary file being generated for these instances), I had concluded that the workflow was stopping at the post-SnmpProbe Expression Filter, but I didn’t know exactly why.  I had thought the Expression Filter was configured in such a way as to continue even if null values were returned for the 64-bit counters.
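
The selection logic in the script looks conceptually like this (a minimal PowerShell sketch; the parameter names are hypothetical and not taken from the MP's actual configuration):

# Sketch of the 64-bit/32-bit counter selection. Parameter names are
# hypothetical; in the real probe these values arrive from the
# SnmpProbe module's VarBinds.
param($InOctets64, $OutOctets64, $InOctets32, $OutOctets32)

if (($InOctets64 -ne $null) -and ($OutOctets64 -ne $null)) {
    # The device returned 64-bit counters - prefer them
    $inOctets  = [double]$InOctets64
    $outOctets = [double]$OutOctets64
} else {
    # Fall back to the 32-bit counters
    $inOctets  = [double]$InOctets32
    $outOctets = [double]$OutOctets32
}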

Using the recently released Operations Manager 2007 R2 Workflow Analyzer, I was able to drill into the actual processing of the workflow and identify the issue.

Workflow Tracing

The steps I used to debug this workflow were:

Launch the Workflow Analyzer and create a new session.


Operations Manager MP Authoring Resource Kit

Microsoft has just released the Operations Manager MP Authoring Resource Kit, and my first impressions are very positive.  I haven’t had a chance to test-drive all of the tools, but the MP Best Practices Analyzer and Workflow Analyzer show great potential.

The MP Best Practices Analyzer shows up under the Tools menu of the Authoring Console and scans a management pack for best-practice compliance in great detail.  In my first use of this tool, I found it to be of great value.

I’m looking forward to spending some time with the Workflow Analyzer, which provides a great interface for drilling into and troubleshooting the more abstract elements of MP performance.  The Workflow Analyzer displays all loaded workflows for a management server, with the option to drill from a workflow down to a specific instance and then launch graphical debug tracing of that specific workflow.  Great stuff indeed.

There are a number of other tools in the Resource Kit as well, not least a spell checker.

SCOM: Distributing Workflows with the SpreadInitializationOverInterval Scheduler Parameter

In Operations Manager distributed agent-based monitoring scenarios, resource utilization of the monitoring workflows is rarely a point of major concern, as the data sources and probe actions typically consume only nominal resources of the agent host system at any given time.  However, in centralized monitoring scenarios, such as SNMP monitoring or wide-scale URL monitoring, the resource utilization of each workflow must be a primary concern, as all workflows will execute on a small number of management servers/agent proxies and the potential for a massive number of workflows executing concurrently is very real.

While I had previously described some of my thoughts on workflow resource utilization with script probe actions, there is another highly relevant aspect of this general topic: workflow schedule distribution.  When working with centralized poll/probe monitors, almost every workflow will start with a scheduler.  By default, the Operations Manager scheduler module does not distribute scheduler initialization, so all workflows scheduled on an interval of X minutes will fire at the same time: every X minutes from the initiation of the Health Service (unless a SyncTime is specified).  If, for example, 2000 network interfaces were each polled for status with an SNMP probe on a 5-minute interval, 2000 workflows would execute simultaneously on the agent proxy system every 5 minutes; particularly if the workflow includes a script probe, the likely result would be oversubscription of the agent proxy’s CPU, leading to script timeouts and/or SNMP poll failures.

If the scheduled workflows could initialize at distributed times, so that they do not fire in a synchronized fashion, significant scalability improvements could be realized.  I had been experimenting with using a PowerShell script in a discovery probe to randomly determine a SyncTime, assign it as a property of an object, and then pass this randomized SyncTime to schedulers as a variable in order to distribute workflow schedules.  This worked to an extent, but was unnecessarily complicated and somewhat limited in effect: because the SyncTime parameter accepts input in HH:MM format, workflows scheduled on 5-minute intervals could only be distributed across one of 5 possible initialization slots within the interval.
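
The randomization itself amounted to little more than the following sketch (illustrative, not the exact discovery script), which also makes the granularity limitation obvious:

# Pick a random minute offset for a 5-minute schedule. Because SyncTime
# only accepts HH:MM, there are just five possible slots to land in.
$rand = New-Object System.Random
$syncTime = "00:{0:D2}" -f $rand.Next(0, 5)   # one of 00:00 through 00:04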

However, I was very recently informed of a new R2-only scheduler parameter: SpreadInitializationOverInterval, which (as one would expect from the name) distributes the initialization of the scheduler over a defined interval.  I’ve done a good bit of testing with this parameter, and it works exactly as it should, bringing major improvements in peak resource utilization in centralized monitoring scenarios.

Use of the parameter is quite simple: it expects a numeric value for the initialization interval (in seconds by default, though different time units can be specified with the Unit attribute), and for obvious reasons it can’t be used along with a SyncTime parameter.  As for guidelines on ideal interval values, I have come to these conclusions in testing: for monitors or rules that execute on relatively short intervals (e.g. 5, 10, or 15 minutes), it works well to use the same value for both the scheduler interval and the SpreadInitializationOverInterval parameter, which maximizes the load distribution facilitated by the spread initialization option.  For rules, monitors, or discoveries that execute infrequently (e.g. every 4, 12, or 24 hours), I prefer to set the SpreadInitializationOverInterval value to something like 30 minutes.  As an example, if a discovery workflow were scheduled to execute every 24 hours, setting the SpreadInitializationOverInterval parameter to 30 minutes would still facilitate load distribution, but would not require newly added objects in the Management Group to wait up to 24 hours for discovery.

An example of the use of this parameter in a composite Data Source might look like this in XML:

<DataSource TypeID="System!System.Scheduler">
   <Scheduler>
      <SimpleReccuringSchedule>
         <Interval>$Config/Interval$</Interval>
         <SpreadInitializationOverInterval Unit="Seconds">$Config/Interval$</SpreadInitializationOverInterval>
      </SimpleReccuringSchedule>
      <ExcludeDates />
   </Scheduler>
</DataSource>
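
Note that the Interval and SpreadInitializationOverInterval elements both reference the same $Config/Interval$ value, following the guideline above for short-interval schedules: the initialization spread covers the entire polling interval.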

And the same scheduler in the Authoring Console:

The GUI “Configure” dialog in the Authoring Console doesn’t provide an option to set the SpreadInitializationOverInterval parameter, so it has to be edited in the XML.  This is as good an opportunity as any to highly recommend linking XML Notepad 2007 as the editor in the OpsMgr Authoring Console.  XML Notepad 2007 is a great XML editor in general, but when used as the editor in the Authoring Console, it performs automatic XSD validation, even providing drop-down selections of valid options:

SCOM: WSH Vs. PowerShell Modules in Composite Workflows – Resource Utilization in SNMP Data Manipulation

One of the realities of working with SNMP monitoring is that, more often than not, the monitoring data are presented in a raw form that requires some kind of manipulation in order to render meaningful output.  For example, the required manipulation may be a simple arithmetic operation on two values to calculate a percentage, or, in the case of Counter data, mathematical operations based on the delta between values recorded in multiple polling cycles.  In Operations Manager, these manipulations require exiting the realm of managed code and utilizing script-based modules to perform the operations or facilitate temporary storage of values from previous polling cycles.  Two sets of modules are available for the Operations Manager-supported scripting engines: WSH and PowerShell.  To date, I had been opting to use VBScript when authoring management packs, for two reasons: 1) WSH is universally deployed in Windows environments whereas PowerShell is not necessarily so, and by using VBScript there is no requirement to install PowerShell on proxy agents; 2) I had assumed that the resource utilization impact of PowerShell was equal to or greater than that of WSH.  I had assumed PowerShell would carry a heavier impact based on the simple observation that, watching process resource utilization when launching powershell.exe and cscript.exe, powershell.exe consumes more memory and CPU time (assuming WSH 5.7 is installed).
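
To make the kind of manipulation at issue concrete, here is a minimal PowerShell sketch of the delta arithmetic such a probe performs (the input values are illustrative, the real scripts persist the previous sample to a temporary file, and the MOM.ScriptAPI COM object is only available where an OpsMgr agent is installed):

# Example inputs - in the real workflow these come from the SnmpProbe
# and from the previously saved sample:
$previousOctets  = 4294967000
$currentOctets   = 1500           # the 32-bit counter wrapped since the last poll
$intervalSeconds = 300

$delta = $currentOctets - $previousOctets
if ($delta -lt 0) { $delta += [Math]::Pow(2, 32) }   # 32-bit counters wrap at 2^32

# Throughput in bits per second over the polling interval
$bitsPerSec = ($delta * 8) / $intervalSeconds

# Return the result to the workflow as a property bag (the standard
# OpsMgr scripting pattern; requires the agent's MOM.ScriptAPI object)
$api = New-Object -ComObject "MOM.ScriptAPI"
$bag = $api.CreatePropertyBag()
$bag.AddValue("BitsPerSecond", $bitsPerSec)
$bag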

The resource utilization of these script providers becomes a major concern particularly when implementing script-based modules in SNMP monitoring scenarios.  To illustrate this point: if a proxy agent were configured to proxy SNMP requests for 10 Cisco switches, each switch had an average of 20 interfaces discovered, and each interface were monitored with two monitors that utilize a script probe action to manipulate the raw SNMP data (e.g. collisions and octets), 400 scripts would be executed in a single polling cycle for just the interface monitors in this small-scale scenario.  This poses a threat to the scalability of SNMP monitoring and could severely limit the number of devices/objects a single proxy agent can handle effectively.

In the course of trying to find a way to address this scalability issue, I was fortunate enough to communicate with someone possessing a great deal of insight into Operations Manager, who helpfully suggested that the PowerShell modules should be more efficient than the WSH-based modules in composite workflows.  I rewrote all of the scripts in the Cisco MP to convert them from VBScript to PowerShell and began some testing.  I was familiar with the tighter integration of PowerShell in R2 modules (PowerShell scripts no longer have to be launched as external commands), but to be honest, I was expecting to see a large number of powershell.exe processes spawned as the monitors fired.  That turned out not to be the case.  Rather, it appears that the modules execute the PowerShell script through the .NET Framework within the context of the monitoringhost.exe process.  This does appear to be more efficient overall, as the overhead associated with spawning new processes is effectively eliminated, and my impression thus far is that CPU utilization is reduced overall.

However, switching from WSH scripts to PowerShell scripts in R2 workflows is a bit like jumping out of the frying pan and into the fire, in that instead of spawning a large number of processes, each consuming relatively small amounts of processor and memory resources, the PowerShell script modules drive a single process (monitoringhost.exe) to consume a large quantity of resources, particularly CPU cycles.  Overall, memory utilization looks a lot better with the PowerShell modules, and although CPU utilization does seem to be improved, it is still a concern for scalability.

Thus far, I have been doing this performance testing in a development environment, with OpsMgr running on virtual machines on both workstation and older server-class hardware, neither of which provides a good indication of real-world scalability (particularly given that I have these VMs running SQL, all OpsMgr duties, and SNMP simulations to boot).  On one of these woefully over-utilized VMs, somewhere around 130-150 interfaces on 10 monitored Cisco devices seemed to be the breaking point, but a more realistic OpsMgr deployment scenario (segregated database, RMS, and MS duties) on physical hardware should be able to handle far more than that.  I will report an update once I get a chance to do some broader scalability testing with the PowerShell version of the MP on more appropriate hardware.

In summary, both the WSH and PowerShell probe and write action modules introduce a relatively heavy CPU load when utilized for data manipulation, heavy relative to the very simple operations required to manipulate SNMP data, and a managed-code module would be far more desirable if one were available.  At present, however, these two providers are the only supported mechanisms for handling data that require processing before being returned to a rule or monitor.  My testing thus far supports the assertion that R2 implements the PowerShell modules more efficiently than the WSH-based modules, which is welcome news given the relative ease and impressive flexibility of scripting with PowerShell.  I’ve seen a bit of talk that PowerShell V2 is expected to bring significant performance improvements over V1, and I hope to do some testing with the CTP version of V2 on an OpsMgr proxy agent in the very near future to see if it helps address any of the scalability challenges in SNMP monitoring with OpsMgr.  As for the best approach at present, PowerShell looks like the way to go, and the overall impact on the MS/proxy agents can be mitigated by spreading monitored objects across multiple proxy agents, focusing discovery on only those objects that actually need to be monitored (e.g. specific interfaces), and avoiding overly aggressive scheduling of monitors.

SCOM: Building on the Net-SNMP MPs

Due to the ubiquity of the Net-SNMP agent, the Net-SNMP management packs can be used for a wide range of UNIX/Linux devices, and one of my primary intentions in creating them was to extend them to Linux-based proprietary platforms such as Check Point SecurePlatform and VMWare ESX.  To that end, I am currently putting the finishing touches on management packs for Check Point SecurePlatform (SPLAT) and VMWare ESX SNMP monitoring that reference the Net-SNMP Library MP.

Check Point Secure Platform

SPLAT is a hardened Linux-based operating system, which conveniently supports the Net-SNMP agent for manageability.  The Check Point-specific SNMP objects are exposed through the extended Net-SNMP agent as described in the CHECKPOINT-MIB.  So in this case, the Net-SNMP Monitoring MP can be used for basic system health, while an additional Check Point MP can be added to monitor the Check Point software modules for availability status and firewall/VPN performance metrics.

VMWare ESX – SNMP

Of course, ESX server’s service console is a modified Red Hat Enterprise Linux distribution that also utilizes the Net-SNMP agent for SNMP support.  VMWare exposes ESX-specific objects to SNMP via dlmod extensions to the Net-SNMP agent, including VM guest information and some performance metrics.  So, in VMWare environments, the host operating system can be monitored for health through the traditional Net-SNMP-implemented MIBs (UCD-SNMP, HOST-RESOURCES), while VMWare-specific counters can be monitored through the VMWare MIBs.

When it comes to monitoring VMWare, the SNMP implementation has the advantage of being easy to deploy and rather lightweight, and given the likelihood that SNMP may already be used in VMWare environments for full vendor hardware monitoring, it is a good way to introduce some monitoring of the hypervisor virtualization layer.  That said, the VMWare SNMP implementation does leave a lot to be desired: alarms/events are only exposed through traps, only a few performance counters are available, and many VMWare Infrastructure objects are not represented at all.  For more comprehensive monitoring of a VMWare environment, the only data provider choice seems to be the VMWare API.  I’m working on something along those lines presently, but I’ll post more on that at a later date.

SCOM: SP1 Edition of the Cisco Management Pack, v1.0.2.6

I have completed the first version of the Cisco Management Pack for SP1 compatibility.  The monitoring in the management pack is identical to the R2 version of the MP, described most recently here.  Due to the dependence on the WMI SNMP provider for object discovery, there are inevitably some scalability limitations intrinsic to the SP1 edition of this MP, but I haven’t done enough full-scale testing to ascertain those limitations as of yet.  Deployment of this management pack also requires some additional steps, detailed below (taken from the MP documentation):

Prerequisites

This management pack utilizes the WMI SNMP provider to perform discovery of SNMP objects.  In order to use this management pack, the following steps must be completed on each server that will function as a proxy agent for SNMP-monitored Cisco devices.

Install the SNMP protocol and WMI SNMP provider

  • To install these components, access Add/Remove Programs in the Control Panel and select Add/Remove Windows Components.  Under Management and Monitoring Tools, select Simple Network Management Protocol and WMI SNMP Provider.

The following MIBs must be exported to MOF files with smi2smir.exe and imported with MOFComp.exe:  CISCO-ENVMON-MIB and CISCO-MEMORY-POOL-MIB.  

  • The MIBs and a batch file to perform these steps can be found in the /Setup directory included with this MP distribution.  Run the RegCiscoMibs.cmd file and check the output log (register.log) to confirm that the MIBs were successfully compiled and imported.

Recommended Proxy Agent Configuration

If WMI receives too many requests in a short time, it may suspend processing of requests for a period of time.  This can impact the ability of this management pack to discover Cisco SNMP objects in a timely fashion (WMI SNMP is used by this MP only for discovery, not for object monitoring).  To minimize the chances of this situation occurring, the object discoveries in this MP should not be scheduled too aggressively.  Additionally, if a large number of Cisco SNMP devices are to be monitored, it is recommended that they be distributed across multiple proxy agents to load-balance the WMI SNMP requests.

It is highly recommended that all agents that will function as proxy agents for SNMP devices have Windows Script Host version 5.7 installed.  Version 5.7 is far more efficient than previous versions and dramatically reduces the resource utilization of the cscript.exe process.

It is also highly recommended that all agents that will function as proxy agents for SNMP devices have the hotfix KB961363 applied.  This hotfix resolves instability with SNMP monitoring in Operations Manager 2007 SP1: http://support.microsoft.com/default.aspx/kb/961363

 

I’m interested to hear how this SP1 edition of the Cisco Management Pack functions in different environments, so any feedback is most certainly welcome.  I will continue to post updates to this site, so be sure to check back regularly.  For more information about the scripts utilized in discoveries in this edition of the MP, this post should sum it up.

SCOM: Updates to the Cisco Management Pack (R2) v1.0.2.6

I’m hoping to finish up the SP1 version of the Cisco Management Pack pretty soon, but in the meantime I’ve modified the R2 version to include several new changes.  The current version, 1.0.2.6, can be downloaded here.

The changes in this version are:

  • Added three new containment classes: Cisco Device Chassis, Cisco Device System Components, and Cisco Device Interfaces.  These classes contain monitored objects, adding an additional level of hierarchical organization.
  • Added discovery of the ifAlias property for interfaces.
  • Added discovery of the Hostname (OLD-CISCO-MIB) and Chassis description for the Cisco Device class.
  • Updated the properties displayed by default in the Device and Interface views
  • Added a rule to clean up unused temporary XML files once a day.  Several of the monitors utilize temporary XML files written to the %TEMP% path, and in the previous version, old files would be left on the file system if a previously monitored object was removed.  This rule removes those temporary files (a sketch of the cleanup logic follows this list).
  • Modified discovery intervals for some objects for more balanced timing.
  • Added four new monitors for switches that implement the CISCO-STACK-MIB.  The monitors are targeted at the Cisco Device Chassis class and include:
    • Fan Alarm
    • Temperature Alarm
    • Minor Alarm
    • Major Alarm
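
The write action behind the new cleanup rule boils down to something like this PowerShell sketch (the path filter and age threshold shown are illustrative, not the MP's exact values):

# Remove temporary XML files that have not been updated recently.
# The real rule matches only its own file-name pattern; "*.xml" here
# is a placeholder to keep the sketch short.
Get-ChildItem -Path $env:TEMP -Filter "*.xml" |
    Where-Object { $_.LastWriteTime -lt (Get-Date).AddDays(-1) } |
    Remove-Item -Force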

With the new containment classes, the diagram view looks a lot better:

SCOM: Updates to the Net-SNMP Management Packs. v1.0.1.30

I’ve made a few minor updates to the Net-SNMP Management Packs, available at the same download location.   The changes in 1.0.1.30 are:

  • Implemented a new class, Net-SNMP Monitored Process Instance, along with rules to collect and monitor process-instance CPU and memory use.  Reference the MP documentation in the zip file (and the preceding post) for more information.
  • Updated all alerts to include the hosting device name.
  • Added a rule to periodically purge old temporary files used by the Interface Utilization data source.

Any future updates will be posted to this blog, and thanks to everyone who has commented on these management packs. 

Some screenshots of the new process instance monitoring capabilities:

SCOM: UNIX/Linux Process Monitoring in the Net-SNMP MP In Detail

Regardless of the operating system, monitoring the availability and resource utilization of individual processes is a pretty standard requirement.  Between WMI and PerfMon counters, this is easy on Windows systems, but doing the same on UNIX/Linux systems can be a little more complicated.  In Operations Manager 2007 (R2) environments, there are three general approaches (excluding third-party products) that can be utilized to monitor individual processes on UNIX and Linux systems:

  1. An agent-based solution using the R2 Cross Platform agents
  2. A purely SNMP solution using tables in the HOST-RESOURCES MIB
  3. An extended SNMP solution using the proc or exec directives in the Net-SNMP agent’s snmpd.conf file

I think it’s fair to say that in most cases (and when it is supported), the R2 Cross Platform agent is the best and most robust approach.  However, it’s almost inevitable in medium and large enterprises that there will be some UNIX or Linux servers or appliances running distributions not supported by the R2 agents.  In these cases, or if there is another compelling reason not to deploy agent software to the device, SNMP may be the best or only option.  The pure SNMP option is probably the most universally applicable approach, but it introduces a number of challenges, which I will discuss in this post.  The third option brings a great degree of flexibility (particularly with the exec directive, which can return the result of an on-demand shell script to an SNMP OID) but requires decentralized configuration.

The approach that I took in the Net-SNMP Management Pack is a hybrid of the pure SNMP and extended SNMP options.  The latest version of the MP (which I will be posting soon) supports process resource utilization monitoring through the HOST-RESOURCES MIB tables, in addition to process availability monitoring facilitated by identifying the monitored processes with the proc directive in snmpd.conf.  And as described in the previous post about the MP, if ultimate flexibility is needed, the Extensible Object capability with the exec directive is still supported.
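
For reference, the snmpd.conf entries involved look like the following excerpt; the directive syntax is standard Net-SNMP, but the process names and the script path are examples only:

# snmpd.conf - process and extension monitoring directives
# proc <name> [max [min]] : sets an error flag in the UCD-SNMP-MIB prTable
# when the process count falls outside the specified range
proc sshd
proc httpd 10 1

# exec <name> <program> [args] : exposes a command's exit code and output
# through the UCD-SNMP-MIB extTable
exec checkraid /bin/sh /usr/local/bin/checkraid.sh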

UNIX/Linux SNMP Process Monitoring In-Depth


SCOM: Net-SNMP Management Packs for UNIX/Linux Monitoring, Version 1.0.1

I have completed version 1.0.1 of the Net-SNMP management packs for OpsMgr 2007 R2, and I thought I’d go ahead and share them.  At this point, my testing of the management packs has been on a small development network with Sun and CentOS servers, but so far everything is looking pretty good.  These management packs should provide a pretty good set of monitoring for UNIX and Linux servers that run the Net-SNMP agent, which includes Solaris, most Linux distributions, and even VMWare ESX and Check Point SecurePlatform.

As I discussed in my last post, there are two management packs in this set.  The Net-SNMP Library management pack defines the classes and performs discoveries; it can be imported by itself to facilitate completely custom monitoring, or referenced by other management packs.  The Net-SNMP Monitoring management pack implements a pretty standard set of monitors for UNIX/Linux server performance and availability monitoring, and supports the exec, proc, and file directives of the Net-SNMP agent configuration.

The management packs can be downloaded here.  They are released under the GNU General Public License and can be used, modified, and distributed freely, as long as the attribution remains intact.

Some screenshots:
