Improve Customer Experience For Less Than $50

I recently gave a lecture about how the large number of social networks causes companies to lose control over the communication channels with their customers. I also explained why this is a problem and how Azure serverless services can reduce it dramatically for less than $50. The reaction from the audience was great, so I decided to share it with you. Here is the explanation:

Before social networks existed on the internet (Facebook, Twitter, Instagram and so on), the traditional communication channels between companies and customers were telephone, fax, email, mail, or the company website. Companies had a handful of common channels, so they could decide which ones to use.

Nowadays, social networks have become a legitimate communication channel between the customer and the company. As a result, the customer expects the company to respond within the social network. If the company does not, it becomes irrelevant. This means that companies no longer have the privilege to decide which of the communication channels to use.

Social networks are not the problem; their quantity is. There are too many of them, and their number keeps growing. If companies want to communicate with customers through all social networks, or even some of them, they have to pay an enormous amount of money. If they don’t use social networks, they lose customers. Either way, companies no longer have the privilege of choosing which communication channels to use.

Companies that understand how important this issue is spend time and money developing private solutions from scratch. Other companies use external services such as Hootsuite or SproutSocial. These services are excellent, and companies should consider using them. Another option I would like to suggest is using Azure serverless services to create a tailor-made solution. Let’s look at diagram (1), which demonstrates an automated communication channel for customers who use Twitter to interact with the company.

(1) – Twitter channel diagram.

An explanation of the diagram:

The flow trigger is a tweet with the hashtag #expertsliveisrael (Experts Live Israel was a conference that I organized and took part in during July 2018 in Tel Aviv). Next, the flow calls Azure Cognitive Services (Text Translate and Text Analytics) to analyze the tweet’s sentiment (is it positive or negative?) in order to create a tailor-made response. It also stores the data in Azure Cosmos DB for future analysis.

Note: I used a database to store the data for demonstration purposes; it is not necessary for the automatic response. I included it to give you an idea of possible insights you might extract, such as analyzing complaints and improving the tailor-made responses.
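To make the stored data concrete, here is a hypothetical shape for the document saved to Cosmos DB. All field names besides the mandatory `id` are my own invention for illustration, not a fixed schema:

```powershell
# A hypothetical shape for the document stored in Cosmos DB.
$doc = [ordered]@{
    id        = [guid]::NewGuid().ToString()   # Cosmos DB requires a unique "id"
    tweet     = 'Loved the sessions at #expertsliveisrael!'
    sentiment = 0.93                           # Text Analytics score: 0 = negative, 1 = positive
    language  = 'en'
}

# Serialize the document the way it would be handed to the Cosmos DB connector.
$json = $doc | ConvertTo-Json
```

Anything with a sentiment score below some threshold could then be routed to a different, more apologetic reply template.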

You may follow my steps to achieve a similar result (I won’t describe all the workflow steps, only those I think are important for understanding the solution):

  1. Create a Twitter account or use an existing one; if you don’t have an account, sign up first.

  2. Open the Azure portal and create a new Logic App flow. (I’m assuming you already know Logic Apps; if not, I suggest reading a simple introductory guide or just start playing with it.)

My main Logic App flow is ExpertsLiveIsreal-Demo-Twitter, and here is how it is visualized in the designer view.

To keep the visualization simple and for future maintenance purposes, I created three separate Logic App flows.

  3. Add a Twitter trigger and fill in the fields.

  4. Create a new Logic App flow for the Cognitive Services.

Note: You will need to create two Cognitive Services resources, Text Translate and Text Analytics. (If you don’t know what they are and how to use them, the documentation is a good starting point.)

  5. Create a new Logic App flow for the tailor-made response.

  6. Currently there is no simple built-in action in Logic Apps to reply to a tweet, so you will first need to create a Function App and a Twitter developer account.

  7. Once you have created the Twitter developer account, you can continue building the Function.

How do I use a free API?

To do so, you need to download the library and save it. Next, upload the file into the bin folder inside the Function App folder. To open the function folder, click on the function name (in my case, ExpertsLiveIL), then switch to the Platform features tab and click on App Service Editor.

A new App Service Editor window opens.

Select the bin folder and right-click it.

Select Upload Files, browse to the location where you saved the file, select it, and click the Upload button.

  8. Create a Cosmos DB account and follow the five-minute guide to create it.

  9. Now you are ready to complete the remaining actions in the main flow.

Note: I didn’t mention a small function, GenerateGuid, that I needed because Cosmos DB has a mandatory “id” property that must be unique. So I created a function that outputs a unique value to pass as the “id” value.
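The real GenerateGuid ran as an Azure Function, but the idea fits in a few lines. A minimal PowerShell sketch of such a helper (the function name here is my own):

```powershell
# Generate a unique string suitable for the mandatory Cosmos DB "id" property.
function New-DocumentId {
    [guid]::NewGuid().ToString()
}

$id1 = New-DocumentId
$id2 = New-DocumentId
```

Every call yields a fresh GUID, so two documents stored in the same run never collide on “id”.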

It took me a day, maybe a little more, to complete the whole flow. Here is a list of my costs:


As you can see, the total price was $45.87. It’s a low price, wouldn’t you agree?

You should try it, be creative and productive 🙂

You are welcome to write me anything (a question or a comment or feedback).

And don’t forget to subscribe! Thanks.



Guest Blog – Management Pack Tuning – Ruben Zimmermann

And Ruben is back at it again! This time he has a rather interesting topic that is always hot for a SCOM admin: tuning your management packs! Out of the box, SCOM creates a lot of alerts. I mean A LOT. Truthfully, not every one of those alerts is useful or relevant to you. If you just let it be, you or your support teams will waste a lot of time working on unnecessary alerts instead of focusing on the ones that actually matter. That is why any management pack you import must be tuned to focus only on the things that matter to you and your organization.

That is exactly what Ruben has come up with here. I’m sure this information will be critical for any SCOM admin. Here goes:

SCOM Management Pack tuning step by step



This post explains Management Pack tuning, the reasons why it is required and how it can be performed with the help of free tools and PowerShell.

Monitoring Console showing Alerts



Every Management Pack contains rules and monitors that indicate a potential issue or expose an already existing problem. Another common type of rule is used to collect performance data.

The Management Pack author or the vendor decide which rules and monitors are enabled by default and which can be enabled via overrides from the SCOM Administrator.

Every environment is different, so the judgement of which rules or monitors are useful is never the same.

It is best practice to disable all rules and monitors that don’t bring an obvious benefit.
On the other hand, there might be disabled rules and monitors that could be useful for you, so you should enable them. This process is called ‘Management Pack tuning’.

For a few reasons, it is important to perform Management Pack tuning immediately after importing a pack.

  • Alerts that don’t have a noticeable impact just distract SCOM Administrators or Operators.
  • Performance data that is recorded but not analyzed consumes resources on the Database and makes SCOM significantly slower.
  • The more rules and monitors are active on a monitored computer, the busier it is with handling ‘monitoring’ itself.

A nice side effect is that you produce documentation along the way: it becomes easy to tell someone what is monitored and what is not.

Invite subject matter experts to do the Management Pack tuning together with you.

This gives three direct benefits.

  1. The experts, e.g. DBAs know what is monitored
  2. The experts will tell you what is needed from their perspective
  3. You, the SCOM Admin can share the responsibility when it comes to the question ‘why did we not know about it before?’ or ‘why wasn’t there an alarm?’
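Before the tuning session, it helps to know how much work is ahead. The sketch below tallies enabled versus disabled items on plain objects; in a real environment you would feed it the output of Get-SCOMRule or Get-SCOMMonitor from the OperationsManager module (treat the property access as an assumption about that output):

```powershell
# Summarize how many rule-like objects are enabled vs. disabled.
function Get-EnabledSummary {
    param([object[]] $Rules)
    $enabled  = @($Rules | Where-Object { $_.Enabled }).Count
    $disabled = @($Rules | Where-Object { -not $_.Enabled }).Count
    [pscustomobject]@{ Enabled = $enabled; Disabled = $disabled }
}

# Mock data standing in for (Get-SCOMRule) output:
$mock = @(
    [pscustomobject]@{ DisplayName = 'Disk Free Space';  Enabled = $true  },
    [pscustomobject]@{ DisplayName = 'Odd Perf Rule';    Enabled = $false },
    [pscustomobject]@{ DisplayName = 'Service Check';    Enabled = $true  }
)
$summary = Get-EnabledSummary -Rules $mock
```

On a management server you would replace `$mock` with the real cmdlet output, optionally filtered to a single management pack.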

Performing the tuning

As an example, we will use the Management Pack for Windows Server 2008.

Note: Usually you only need to care about Management Pack files whose names contain ‘monitoring’. Leave those called ‘discovery’ untouched. Smaller Management Packs might consist of just a single file.


  1. Download the Management Pack Windows Server Operating System and run the setup.
    Keep the default location

    C:\Program Files (x86)\System Center Management Packs\SC Management Pack for Windows Server Operating System
  2. Download MPViewer from
  3. Copy the PowerShell script into PowerShell ISE or VS Code and name it MPTuining-RulesAndUnitMonitors.ps1
    • The script requires at least PowerShell v3. It ships with Windows Server 2012 by default; for older Windows Server versions, please install the current Windows PowerShell version (5.1 at the time of writing).
  4. Store everything on a Management Server (E.g. C:\Temp\MPTuning).


  1. Create a new Override Management Pack for that specific MP and name it properly.
    e.g. ABC.Windows.Server.Overrides

    Administration Pane to create a Management Pack
    Naming the Management Pack properly
  2. Launch MPViewer.exe, load the Management Pack from the default location, and choose “Save to Excel”.
    Management Pack Viewer
  3. Name the file WindowsServer2008MonitoringExport for instance.
  4. Open the file in Microsoft Excel, select the Monitors – Unit sheet, and hide all columns except A, D, H and O.
  5. In the ribbon, select Data and add a Filter. In column D, show only enabled monitors. Review them and decide whether they should stay enabled. From my perspective, all are useful.
    Excel sheet Monitors Unit showing filtered columns
  6. Revert the selection so that Enabled is set to False, and review again. I also left these as they are.
  7. Switch to the Rules sheet and limit visible columns to A, C, D, K and O. Afterwards set the filter to show Enabled: True and Category: PerformanceCollection.
    Excel sheet Rules showing filtered columns
  8. Copy the rules that don’t seem useful into a text file and name it WindowsServerManagementPack2008_RulesToBeDisabled.txt

    Text file WindowsServerManagementPack2008_RulesToBeDisabled.txt
  9. Note down the name of the Windows Server 2008 Monitoring Management Pack and the Override Management Pack.
    Administration Pane showing Windows Server MP Name
  10. Navigate to C:\Temp\MPTuning and open the PowerShell script MPTuining-RulesAndUnitMonitors.ps1 (with VSCode for example)
    1. Place the text file in that folder, too.
      VSCode running script

| Parameter | Value | Meaning |
| --- | --- | --- |
| sourceManagementPackDisplayName | ‘Windows Server 2008 Operating System (Monitoring)’ | Management Pack that contains the rules and unit monitors we will override |
| overrideManagementPackDisplayName | ‘ABC.Windows.Server.Overrides’ | Management Pack we created to store our configuration changes (overrides) |
| itemType | rule | Sets that we will change rules |
| itemsToBeEnabled | False | Rules will be disabled |
| inputFilePath | WindowsServerManagementPack2008_RulesToBeDisabled.txt | Name of the file that contains the rule names we specified |
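Assuming the parameter names listed above, calling the tuning script would look roughly like this. This is a sketch: the display names and file name are the ones used earlier in this walkthrough, and the script path is illustrative:

```powershell
# Parameters for MPTuining-RulesAndUnitMonitors.ps1, mirroring the table above.
$tuningParams = @{
    sourceManagementPackDisplayName   = 'Windows Server 2008 Operating System (Monitoring)'
    overrideManagementPackDisplayName = 'ABC.Windows.Server.Overrides'
    itemType                          = 'rule'
    itemsToBeEnabled                  = $false   # rules in the input file get disabled
    inputFilePath                     = 'WindowsServerManagementPack2008_RulesToBeDisabled.txt'
}

# On the management server you would then run (not executed here):
# .\MPTuining-RulesAndUnitMonitors.ps1 @tuningParams
```

Splatting keeps the call readable and makes the enable/disable runs differ by a single value.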
  11. Run the PowerShell script by hitting ‘Enter’.
  12. After a short while, the overrides will appear in the Management Console.
    Authoring Pane Showing Overrides
  13. Repeat the procedure for the rules that you would like to enable.

If you experience problems or have other questions, come join our SCOM community.


Thanks Ruben!

You can know more about Ruben here:
Ruben Zimmermann (A fantastic person who has a lot of great ideas) [Interview]

More from Ruben:

Guest Blog – Authoring a PowerShell Agent Task in XML – By Ruben Zimmermann


Guest Blog – Authoring a PowerShell Agent Task in XML – By Ruben Zimmermann

It is my absolute pleasure to announce this guest blog today. My good friend Ruben has come up with a fantastic community MP that is going to make all of our lives a lot easier 🙂

I will let him talk about it himself. All yours, Ruben! –

Authoring a PowerShell Agent Task in XML


This post describes the code behind a PowerShell agent task. With the help of a real-life case, ‘Create Log Deletion Job’, the XML and PowerShell lines are commented.


Create Log Deletion Job is a SCOM agent task that creates a scheduled task which deletes log files older than N days on the monitored computer. It works on SCOM 2012 R2 and later.


This blog assumes that you have created management packs before. If not, or if you run into difficulties, please visit the ‘Reading’ section and go through the links.

The software used is Visual Studio 2017 (2013 and 2015 work as well) plus the Visual Studio Authoring Extension. Additionally, the ‘PowerShell Tools for Visual Studio 2017’ are installed.

The Community Edition of Visual Studio technically works, but please check the license terms in advance; if you are working for a ‘normal company’ you will most likely require Visual Studio Professional.


The agent task mainly consists of two components:

  1. A PowerShell script which creates a scheduled task and writes another script on the target machine with the parameters specified in the SCOM console.
  2. A custom module that contains a custom write action based on the PowerShell write action, and a task targeting Windows Computer leveraging the write action.

Visual Studio Authoring Extension (VSAE), a free plugin for Visual Studio, is used to bind the XML and PowerShell together and produce a Management Pack. The Management Pack itself can be downloaded as a Visual Studio solution or as a compiled version directly from GitHub.

Steps to do in Visual Studio:

  • Create a new project based on Management Pack / Operations Manager R2 Management Pack. Name it ‘Windows.Computer.AgentTasks.CreateLogDeletionjob’.
  • Create a folder named ‘Health Model’ and a sub folder of it named ‘Tasks’.
  • Add an Empty Management Pack Fragment to the root and name it Project.mpx.

Project File ‘project.mpx’ Content:

<ManagementPackFragment SchemaVersion="2.0" xmlns:xsd="http://www.w3.org/2001/XMLSchema">

  <LanguagePacks>

    <LanguagePack ID="ENU" IsDefault="true">

      <DisplayStrings>

        <DisplayString ElementID="Windows.Computer.AgentTasks.CreateLogDeletionJob">

          <Name>Windows Computer AgentTasks CreateLogDeletionJob</Name>

          <Description>Creates a scheduled task on the managed computer which automatically deletes old logs.</Description>

        </DisplayString>

      </DisplayStrings>

    </LanguagePack>

  </LanguagePacks>

</ManagementPackFragment>

The Powershell Script:

Create a file named CreateLogDeletionJob.ps1 in the ‘Tasks’ sub folder and copy the following content into it.


$api = New-Object -ComObject 'MOM.ScriptAPI'

$api.LogScriptEvent('CreateLogDeletionJob.ps1',4000,4,"Script runs. Parameters: LogFileDirectory $($LogFileDirectory), LogFileType: $($LogFileType) DaysToKeepLogs $($DaysToKeepLogs) and scheduled task folder $($scheduledTasksFolder)")           

Write-Verbose -Message "CreateLogDeletionJob.ps1 with these parameters: LogFileDirectory $($LogFileDirectory), LogFileType: $($LogFileType) DaysToKeepLogs $($DaysToKeepLogs) and scheduled task folder $($scheduledTasksFolder)"

$ComputerName          = $env:COMPUTERNAME

$LogFileDirectoryClean = $LogFileDirectory      -Replace('\\','-')

$LogFileDirectoryClean = $LogFileDirectoryClean -Replace(':','')

$scheduledTasksFolder  = $scheduledTasksFolder -replace([char]34,'')

$scheduledTasksFolder  = $scheduledTasksFolder -replace("`"",'')

$taskName              = "Auto-Log-Dir-Cleaner_for_$($LogFileDirectoryClean)_on_$($ComputerName)"

$taskName              = $taskName -replace '\s',''

$scriptFileName        = $taskName + '.ps1'

$scriptPath            = Join-Path -Path $scheduledTasksFolder -ChildPath $scriptFileName

if ($DaysToKeepLogs -notMatch '\d' -or $DaysToKeepLogs -le 0) {

                $daysToKeepLogs = 7

                $msg = 'Script warning. DaysToKeepLogs not defined or not matching a number. Defaulting to 7 days.'

                Write-Warning -Message $msg

}

if ($null -eq $scheduledTasksFolder) {

                $scheduledTasksFolder = 'C:\ScheduledTasks'

                $msg = 'Script warning. ScheduledTasksFolder not defined. Defaulting to C:\ScheduledTasks'

                Write-Warning -Message $msg

}

if ($LogFileDirectory -match 'TheLogFileDirectory') {

                $msg = 'CreateLogDeletionJobs.ps1 - Script Error. LogFileDirectory not defined. Script ends.'

                Write-Warning -Message $msg

                exit

}

if ($LogFileType -match '\?\?\?') {

                $msg = 'Script Error. LogFileType not defined. Script ends.'

                Write-Warning -Message $msg

                exit

}

Function Write-LogDirCleanScript {

                param(

                                [string]$scheduledTasksFolder,

                                [string]$LogFileDirectory,

                                [int]$daysToKeepLogs,

                                [string]$LogFileType,

                                [string]$scriptPath

                )

                if (Test-Path -Path $scheduledTasksFolder) {

                                $foo = 'folder exists, no action required'

                } else {

                                & mkdir $scheduledTasksFolder

                }

                if (Test-Path -Path $LogFileDirectory) {

                                $foo = 'folder exists, no action required'

                } else {

                                $msg = "Script function (Write-LogDirCleanScript, scriptPath: $($scriptPath)) failed. LogFileDirectory not found $($LogFileDirectory)"

                                Write-Warning -Message $msg

                                return

                }

                if ($LogFileType -notMatch '\*\.[a-zA-Z0-9]{3,}') {

                                $LogFileType = '*.' + $LogFileType

                                if ($LogFileType -notMatch '\*\.[a-zA-Z0-9]{3,}') {

                                                $msg = "Script function (Write-LogDirCleanScript, scriptPath: $($scriptPath)) failed. LogFileType: $($LogFileType) seems to be not correct."

                                                Write-Warning -Message $msg

                                                return

                                }

                }

$fileContent = @"

Get-ChildItem -Path `"${LogFileDirectory}`" -Include ${LogFileType} -ErrorAction SilentlyContinue | Where-Object { ((Get-Date) - `$_.LastWriteTime).days -gt ${DaysToKeepLogs} } | Remove-Item -Force

"@

                $fileContent | Set-Content -Path $scriptPath -Force


                if ($error) {

                                $msg = "Script function (Write-LogDirCleanScript, scriptPath: $($scriptPath)) failed. $($error)"

                                Write-Warning -Message $msg

                } else {

                                $msg = "Script: $($scriptPath) successfully created"

                                Write-Verbose -Message $msg

                }

} #End Function Write-LogDirCleanScript

Function Invoke-ScheduledTaskCreation {

                param(

                                [string]$ComputerName,

                                [string]$taskName,

                                [string]$scriptPath

                )
                $taskRunFile         = "C:\WINDOWS\system32\WindowsPowerShell\v1.0\powershell.exe -NoLogo -NonInteractive -File $($scriptPath)"    

                $taskStartTimeOffset = Get-Random -Minimum 1 -Maximum 10

                $taskStartTime       = (Get-Date).AddMinutes($taskStartTimeOffset) | Get-date -Format 'HH:mm'                                                                                                                                                                                     

                $taskSchedule        = 'DAILY'             

                & SCHTASKS /Create /SC $($taskSchedule) /RU `"NT AUTHORITY\SYSTEM`" /TN $($taskName) /TR $($taskRunFile) /ST $($taskStartTime)                


                if ($error) {

                                $msg = "Script function (Invoke-ScheduledTaskCreation) Failure during task creation! $($error)"

                                Write-Warning -Message $msg

                } else {

                                $msg = "Scheduled Tasks: $($taskName) successfully created"

                                Write-Verbose -Message $msg

                }

} #End Function Invoke-ScheduledTaskCreation

$logDirCleanScriptParams   = @{

                'scheduledTasksFolder' = $ScheduledTasksFolder

                'LogFileDirectory'     = $LogFileDirectory       

                'daysToKeepLogs'       = $DaysToKeepLogs     

                'LogFileType'          = $LogFileType

                'scriptPath'           = $scriptPath

}
Write-LogDirCleanScript @logDirCleanScriptParams

$taskCreationParams = @{

                'ComputerName'  = $ComputerName             

                'taskName'      = $taskName

                'scriptPath'    = $scriptPath

}
Invoke-ScheduledTaskCreation @taskCreationParams
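The heart of the generated clean-up script is the Get-ChildItem pipeline written into `$fileContent` above. You can exercise that logic locally against a throwaway folder before trusting it on a server (paths and the 7-day threshold below are illustrative):

```powershell
# Exercise the log-deletion pipeline against a temporary folder.
$dir = Join-Path ([IO.Path]::GetTempPath()) ("logclean-" + [guid]::NewGuid())
New-Item -ItemType Directory -Path $dir | Out-Null

$old = Join-Path $dir 'old.log'
$new = Join-Path $dir 'new.log'
'x' | Set-Content $old
'x' | Set-Content $new

# Back-date one file so it exceeds the retention window.
(Get-Item $old).LastWriteTime = (Get-Date).AddDays(-30)

$DaysToKeepLogs = 7
Get-ChildItem -Path (Join-Path $dir '*') -Include '*.log' -ErrorAction SilentlyContinue |
    Where-Object { ((Get-Date) - $_.LastWriteTime).Days -gt $DaysToKeepLogs } |
    Remove-Item -Force
```

Note the wildcard in `-Path`: without it (or `-Recurse`), `-Include` silently matches nothing, which is a classic Get-ChildItem pitfall.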

The Custom Module:

Add an Empty Management Pack Fragment in the ‘Tasks’ sub folder and name it ‘AgentTasks.mpx’. Copy the content below into it.

<ManagementPackFragment SchemaVersion="2.0" xmlns:xsd="http://www.w3.org/2001/XMLSchema">



      <!-- Defining the Write Action Module Type. The name is user-defined and will be used in the task definition below. -->

      <WriteActionModuleType ID="Windows.Computer.AgentTasks.CreateLogDeletionJob.WriteAction" Accessibility="Internal" Batching="false">

        <!-- The items in the Configuration sections are exposed through the SCOM console -->


        <Configuration>

          <xsd:element minOccurs="1" name="LogFileDirectory" type="xsd:string" xmlns:xsd="http://www.w3.org/2001/XMLSchema" />

          <xsd:element minOccurs="1" name="LogFileType" type="xsd:string" xmlns:xsd="http://www.w3.org/2001/XMLSchema" />

          <xsd:element minOccurs="1" name="ScheduledTasksFolder" type="xsd:string" xmlns:xsd="http://www.w3.org/2001/XMLSchema" />

          <xsd:element minOccurs="1" name="DaysToKeepLogs" type="xsd:integer" xmlns:xsd="http://www.w3.org/2001/XMLSchema" />

          <xsd:element minOccurs="1" name="TimeoutSeconds" type="xsd:integer" xmlns:xsd="http://www.w3.org/2001/XMLSchema" />

        </Configuration>
        <!-- To make exposed items editable it is required to specify them in the OverrideableParameters section -->


        <OverrideableParameters>

          <OverrideableParameter ID="LogFileDirectory" Selector="$Config/LogFileDirectory$" ParameterType="string" />

          <OverrideableParameter ID="LogFileType" Selector="$Config/LogFileType$" ParameterType="string" />

          <OverrideableParameter ID="ScheduledTasksFolder" Selector="$Config/ScheduledTasksFolder$" ParameterType="string" />

          <OverrideableParameter ID="DaysToKeepLogs" Selector="$Config/DaysToKeepLogs$" ParameterType="int" />

        </OverrideableParameters>

        <ModuleImplementation Isolation="Any">

          <Composite>

            <MemberModules>

              <!-- The ID name is user-defined, suggested to keep it similar as the type. The type signals SCOM to initiate the PowerShell engine  -->

              <WriteAction ID="PowerShellWriteAction" TypeID="Windows!Microsoft.Windows.PowerShellWriteAction">

                <!-- To keep the module readable and the script better editable it is referenced below -->

                <ScriptName>CreateLogDeletionJob.ps1</ScriptName>

                <ScriptBody>$IncludeFileContent/Health Model/Tasks/CreateLogDeletionJob.ps1$</ScriptBody>

                <!-- The parameters are the same as in the Configuration section as we want to pass them into the script from the SCOM console -->

                <Parameters>

                  <Parameter>
                    <Name>LogFileDirectory</Name>
                    <Value>$Config/LogFileDirectory$</Value>
                  </Parameter>

                  <Parameter>
                    <Name>LogFileType</Name>
                    <Value>$Config/LogFileType$</Value>
                  </Parameter>

                  <Parameter>
                    <Name>ScheduledTasksFolder</Name>
                    <Value>$Config/ScheduledTasksFolder$</Value>
                  </Parameter>

                  <Parameter>
                    <Name>DaysToKeepLogs</Name>
                    <Value>$Config/DaysToKeepLogs$</Value>
                  </Parameter>

                </Parameters>

                <TimeoutSeconds>$Config/TimeoutSeconds$</TimeoutSeconds>

              </WriteAction>

            </MemberModules>

            <Composition>

              <Node ID="PowerShellWriteAction" />

            </Composition>

          </Composite>

        </ModuleImplementation>

      </WriteActionModuleType>

      <!-- The name for the task ID is user-defined. Target is set to 'Windows.Computer' that make this task visible when objects of type Windows.Computer are shown in the console -->

      <Task ID="Windows.Computer.AgentTasks.CreateLogDeletionJob.Task" Accessibility="Public" Enabled="true" Target="Windows!Microsoft.Windows.Computer" Timeout="120" Remotable="true">


        <!-- The name for the write action is user-defined. The TypeID though must match to the above created one. -->

        <WriteAction ID="PowerShellWriteAction" TypeID="Windows.Computer.AgentTasks.CreateLogDeletionJob.WriteAction">

          <!-- Below the parameters are pre-filled to instruct the users how the values in the overrides are expected -->

          <LogFileDirectory>TheLogFileDirectory</LogFileDirectory>

          <LogFileType>???</LogFileType>

          <ScheduledTasksFolder>C:\ScheduledTasks</ScheduledTasksFolder>

          <DaysToKeepLogs>7</DaysToKeepLogs>

          <TimeoutSeconds>120</TimeoutSeconds>

        </WriteAction>

      </Task>

    <LanguagePack ID="ENU" IsDefault="true">


        <DisplayString ElementID="Windows.Computer.AgentTasks.CreateLogDeletionJob.Task">

          <Name>Create Log Deletion Job</Name>

        </DisplayString>

      </DisplayStrings>

    </LanguagePack>

  </LanguagePacks>

</ManagementPackFragment>

If you are new to management pack authoring I suggest the free training material from Brian Wren.

On Microsoft’s Virtual Academy:

On Microsoft’s Wiki:


If you have questions, comments or feedback, feel free to share it in our SCOM Gitter community.

Awesome! Thanks a lot for all your contributions and for being considerate of others by publishing this for everyone, Ruben 🙂 Wishing for many more cool guest blogs from you!

You can get to know Ruben better here:

Ruben Zimmermann (A fantastic person who has a lot of great ideas) [Interview]


SCOM Event Based Monitoring – Part 2 – Rules

In the last post we discussed the event-based monitoring options SCOM provides with monitors. You can find it here:

SCOM Event Based Monitoring – Part 1 – Monitors

In this post we are going to discuss the event-based monitoring options using SCOM rules – basically the highlighted options in the image below:


As we can see, we have two kinds of rules for monitoring events: “Alert Generating Rules” and “Collection Rules”. Let’s walk through them one by one.

Alert Generating Rules:

As the name suggests, this type of rule raises an alert when it detects the event ID.

As you go through the rule creation wizard, you will notice several options under “Rule category”, as shown in the pic below. In my experience, it makes no difference which one you choose; these are more of a “logical grouping” of rules that you might make use of when working with them through PowerShell. Since we’re here to create an “alert generating” rule, the most obvious option is “Alert”.


Now, this step is pretty important, and if you’re new to this, you’re very likely to miss it. As you reach the final window, at the bottom of “Alert Description” (which you can modify, btw) is the box for “Alert Suppression”. This is used to tell SCOM which events should be considered “duplicates” and hence not raise a new alert if there’s already one open in active alerts. “Auto-suppression” happens automatically for monitors – it only increases the repeat count for every new detection – but for rules, you have to configure this manually. If you don’t, you’re going to get a new alert every time the event is detected. In this demo, I’m telling SCOM to consider the event a duplicate if it has the same Event ID AND the same Event Source.
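Conceptually, the suppression fields define a duplicate key. The sketch below mimics that grouping on plain event records (the property names are illustrative, not SCOM’s internal schema):

```powershell
# Group events by the chosen suppression fields (Event ID + Source).
# Each group would map to a single alert whose repeat count grows.
function Group-BySuppressionKey {
    param([object[]] $Events)
    $Events | Group-Object -Property { '{0}|{1}' -f $_.EventId, $_.Source }
}

$events = @(
    [pscustomobject]@{ EventId = 1000; Source = 'custom' },
    [pscustomobject]@{ EventId = 1000; Source = 'custom' },
    [pscustomobject]@{ EventId = 1000; Source = 'other'  }
)
$groups = Group-BySuppressionKey -Events $events
```

Three detections collapse into two alerts here: the two identical ID/source pairs become one alert with a repeat count of 2, while the different source stays separate.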

I HIGHLY recommend using this, since I learned it the hard way some time back. I missed configuring alert suppression for a rule and, a couple of nights later, woke up to a call from our Command Center guys. They said, “Dude! We’ve received THOUSANDS of emails in our mailbox in the last 15 minutes… and they’re all the same. Help!”

I rushed and turned the rule off. Further investigation brought to light the cause: something had gone wrong on the server and it was writing the same event over and over again, thousands of times, in the event log. Since I had not configured suppression criteria, SCOM created a new alert every time and sent mail as set up in the subscription. Now I check the suppression criteria at least three times 🙂


Now that is set up, I created the event in the logs with a simple PoSh:

Write-EventLog -Logname Application -ID 1000 -Source custom -Message "This is a test message"

And as you can see, the alert was raised.


Collection Rules:

Collection rules are used only to collect events into the SCOM database. They do not create any alert; in fact, you’ll hardly notice their presence at all. Why create these rules then? Most commonly for reporting and auditing purposes. You can also create a dashboard showing the detection of these events.



Notice on the left side of the wizard that there’s no “Configure Alerts” tab as we had for the “Alert Generating” rule.

So what this rule does is detect the occurrence of event 500 with source “custom” and, when it does, simply save it in the database.

I wrote this event to the log a bunch of times; now let’s see whether SCOM is really collecting them. We’ll create a simple “Event View” dashboard for this.


And yup, we can indeed see that the event is actually being collected by SCOM.


Here’s another thing that might be a bit tricky. After you create these two types of rules, if you open their properties, you’ll see they’re almost identical. If you’ve named them properly, kudos to you, but if you (or someone else) haven’t, how will you find out whether a rule generates alerts or just collects events? Take a close look at the content in the yellow boxes below. See the difference?

Alert Generating Rule Properties:


Event Collection Rule Properties:


You rightly noticed that the response of the alert generating rule is “Alert”, which means this rule responds to the event detection by generating an alert. If you click the “Edit” option right beside it, you’ll see the wizard for changing the alert name and format, the suppression criteria, etc.

On the other hand, the yellow box in the collection rule shows “Event Data Collection Write Action” and “Event data publisher”, which indicates that it only writes the event to the databases. You can also verify this with PowerShell:


You can also fetch a report for the event collection rule for auditing purposes, but unfortunately I don’t have the Reporting feature installed in my lab, so I can’t demo that. 🙂

That concludes the event monitoring options SCOM offers with monitors as well as rules. Now you know what SCOM can do; it’s up to you to decide which option you want to choose 🙂



#Go2ExpertsLiveIL (Implement an idea in less than a day)

Have you ever woken up in the morning with an idea that you thought would take a long time to implement? If you have, I hope I can convince you to try Microsoft Azure serverless products which will save you a lot of time.  Let me show you an example.


The idea is a campaign called #Go2ExpertsLiveIL: tweet it, and you automatically receive a reply with a link to a personal website invitation. To implement it, I used Microsoft Azure serverless products such as:

  • Logic Apps – An orchestration service. I use it to subscribe to Twitter posts with the hashtag #Go2ExpertsLiveIL. These tweets are then passed on to the Azure Functions.
  • Azure Functions – A compute service. I use it to parse and manipulate the data and to tweet the reply message.
  • Azure Web Service – A web hosting service. I use it to host the personal invitation site.
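At its core, the Function’s job is to map the tweet author’s handle to a personal invitation URL and compose the reply. A hypothetical sketch of that mapping (the function name, domain, and route are made up for illustration):

```powershell
# Build a personalized reply for a tweet author (site URL is hypothetical).
function New-ReplyMessage {
    param([string] $Handle)
    $name = $Handle.TrimStart('@')
    $site = "https://example.com/invite/$name"
    "@$name Your personal invitation: $site #Go2ExpertsLiveIL"
}

$reply = New-ReplyMessage -Handle '@contoso'
```

The real implementation also posted the reply through the Twitter API, which needs the developer-account credentials mentioned earlier.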

Additional Azure serverless products are:

Using those tools and services, I saved a lot of time and unnecessary effort. Past projects just like this one would have taken at least several days or weeks; with Azure serverless products, it took me less than a day to complete.

If you want to hear more about it, come to Experts Live Israel 2018 Tel Aviv, on June 21st. It is a community conference with a focus on Microsoft cloud, data center, and workplace management.

Don’t worry – if you cannot come to Experts Live Israel, I will write up how I did it shortly after the conference.

Besides me, you can meet well-known experts such as Bob Cornelissen, Avihai Hajaj, Idan Yona, Yury Kissin and Yaniv Shmulevich.

I hope you are already preparing for #Go2ExpertsLiveIL 🙂

See you soon…




SCOM Event Based Monitoring – Part 1 – Monitors

In this post I’m discussing the possibilities SCOM provides for event detection monitoring using monitors.

I’ve written a similar blog for creating services, which you can see here:


Alright, so just go to Authoring -> Expand Management Pack Objects -> Monitors -> Create a Monitor -> Unit Monitor. This is the screen you should get:


The options enclosed in the box are what we’re concerned with at this time, so let’s go through them one by one. The three “Reset” options – “Manual Reset”, “Timer Reset” and “Windows Event Reset” – exist for all the monitors (even though I’ve expanded only the first two in the pic above).

  • Manual Reset: Choose this option when you want the alert to stay in the console unless you close/resolve it manually.
  • Timer Reset: Choose this option when you want the alert to close itself automatically after a given period of time.
  • Windows Event Reset: With this option you can choose to automatically close the alert only when a second healthy event is detected in a given time period. So, one bad event raises the alert, and the second good event resolves it. If the healthy event is not detected in the given time, the alert stays in the console until you close it manually.

Simple Event Detection:

This is the option you may know best. It’s the simplest and does exactly what the name suggests: it detects the occurrence of an event in the specified event log and raises an alert.


Manual Reset –




Now that we have the monitor set up, let’s test it.

We’ll create a custom event with PowerShell and try to detect it. Here’s a simple PowerShell snippet:

#create a custom source
New-EventLog -LogName Application -Source "Custom"
#write event
Write-EventLog -LogName Application -Source "Custom" -EventId 100 -Message "This is a test event"

Just making sure the event was created:

new event
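
If you’d rather check from PowerShell than the Event Viewer, a quick query does it (a sketch, assuming the event was just written on the same machine):

```powershell
# Fetch the newest matching test event from the Application log.
Get-WinEvent -FilterHashtable @{ LogName = 'Application'; ProviderName = 'Custom'; Id = 100 } -MaxEvents 1
```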

Right, looks good. Now onto the Ops Console:


As we can see, the alert has been raised. The alert will be resolved when the monitor producing it turns healthy again. Since this is a manual reset monitor, that only happens when you manually reset it.

There’s a good side to this and a bad one.

Good side:

You will always notice when the alert has been raised, and you can take whatever responsive measures are applicable. After you’re done, reset the monitor to indicate that some action has been taken on it.

Bad side:

Unless and until you make sure to manually reset the monitor, there won’t be a new alert. As the monitor is already critical, it can’t turn critical again, so it won’t generate a new alert; it’ll only increase the repeat count, which may or may not be what you want. The work-around is a scheduled script that periodically resets the monitors back to healthy, making way for new alerts.
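
The scheduled reset script could look something like this rough sketch. It assumes the OperationsManager PowerShell module on a management server, and the monitor display name is a placeholder; in a real environment you would also scope `Get-SCOMClassInstance` to the monitor’s target class.

```powershell
# Sketch only: periodically reset event monitors so new alerts can be raised.

# Pure helper: pick the objects currently in a critical ("Error") state.
function Select-CriticalInstances {
    param([object[]]$Instances)
    $Instances | Where-Object { "$($_.HealthState)" -eq 'Error' }
}

# On a real management server you would run something like:
# Import-Module OperationsManager
# $monitor  = Get-SCOMMonitor -DisplayName 'ANAOPS - Simple Event Detection - Manual Reset'
# $critical = Select-CriticalInstances -Instances (Get-SCOMClassInstance)
# $critical | ForEach-Object { $_.ResetMonitoringState($monitor) }
```

Schedule this (Task Scheduler or a SCOM rule) at whatever interval suits your alerting needs.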

Timer Reset –

The only extra option you have here is to specify the wait time for the reset. I’ve created this monitor to detect event 101 in the Application log.


With tests similar to the previous one, I get an alert for this.


You will have to take my word for it, the alert disappeared after 15 minutes 😉

Windows Event Reset –

Pay attention to the wizard options here. You have to configure 2 event expressions, one for the unhealthy state and one for the healthy state. I set up the unhealthy event as event 102 with source “custom” in the Application log, while the healthy event is event 102 with source “custom1”.

Unhealthy event:


Healthy event:


As soon as I created the unhealthy event, I received an alert which was automatically resolved when I triggered the healthy event.
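
For reference, the two test events can be written with the same approach as earlier (a sketch, assuming the messages are placeholders and that both sources still need registering):

```powershell
# Register both custom sources once (skip if they already exist).
New-EventLog -LogName Application -Source "custom"
New-EventLog -LogName Application -Source "custom1"

# Unhealthy event - raises the alert:
Write-EventLog -LogName Application -Source "custom" -EventId 102 -Message "Unhealthy test event"
# Healthy event - automatically resolves the alert:
Write-EventLog -LogName Application -Source "custom1" -EventId 102 -Message "Healthy test event"
```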
Repeated Event Detection:

Choose this monitor when you want to raise an alert if a specific event is raised repeatedly, within the given settings. Here’s where things get a little tricky.


You have a bunch of different (and confusing) options to set up here. Luckily, it’s all very well documented on Technet: Repeating Events

What I’m doing is configuring the monitor to raise an alert when event 103 is raised 3 times within 15 seconds. And sure enough, I do get an alert.

Missing Event Detection:

Choose this monitor when you’re expecting some event to be written to the event log at a given time, maybe due to some kind of scheduled activity like backups, maintenance or scripted events. If the monitor doesn’t detect the event, it generates an alert.


So what I’m basically telling SCOM is, “I’m expecting the event ID 104 from source “custom” in the Application event log every 15 minutes, let me know if it doesn’t show up, will ya? Thanks!”

To test this, I did NOT create an event with ID 104, and sure enough, I got the alert.


(Do not worry about the mismatch in the alert name and the monitor name, I made a typo in the alert name. It should say “anaops – missing event detection – manual reset” instead of the “repeated” as the name of the monitor at bottom suggests)
Correlated Event Detection:

Choose this option if you want an alert based on some correlation between two event IDs. “Some correlation” can vary, as you can see in the wizard.



This can be a bit confusing. In this demo, what I’m telling SCOM is, “Hey, let me know if event 105 from source “custom” is raised AND, within 5 minutes of its occurrence, event 105 from source “custom1” is also raised (in that order). Cool?”

SCOM said “Cool!”, so I tested it by writing the two events mentioned above within an interval of 5 minutes. And yup, I got an alert.

Correlated Missing Event Detection:

Choose this one when you need an alert for “some correlation” between two events: the first one occurs, we’re expecting the other within 5 minutes, but it isn’t raised.

For testing this, I created event 106 from source “custom” in the Application log but did NOT create the other event 106 from source “custom1” within the next 5 minutes. Sure enough, here’s the alert I got:


As you can imagine, the other two monitor reset strategies, “timer reset” and “windows event reset”, will have slightly different wizards, but I’m sure you guys can figure those out 😉

Also, as you may have noticed, unlike many other monitors, there’s no “interval” at which the event detection monitors run. They look for the events in the log all the time, so the event monitoring you get is almost real-time.

This concludes this fairly long blog post, but I hope it gives you some clarity about the options you have for event detection monitoring and helps you choose the right one. 🙂

We’ll talk about the event monitoring options with rules in the next post.


SCOM Basic Service Monitor Vs. Windows Service Template

Every now and then I’ve seen questions regarding this on the Technet forums. The most usual question is “A service XXX failure alert is being generated by a server where this service isn’t even present! What’s going on?”

The Basic Service Monitor:

This is a simple monitor that puts an instance of itself on EVERY server it’s targeted at. Most of the time the class you select is “Windows Server”, and so the monitor is delivered to every Windows server, regardless of whether the service is actually present there or not.



I suggest you create a brand-new MP and save this monitor in it. Now export it and analyze what you see in the XML. You’ll notice that there isn’t a lot there, just a single simple monitor.

So what this basically does is put up an instance of the monitor on EVERY instance of the target class. It does not bother to check whether the service actually exists on that server. And this quite often causes false alerts stating that the service is “down” on servers where it isn’t even present! This is why I call it the “dumb” service monitor.

If you want to apply this only to a group of servers, you need to go through the additional step of disabling the monitor and then enabling it explicitly through an override for the group you’ve created.

Windows Service Template:

Now let’s create a service monitor using the Windows Service template. As you’ll notice while creating the monitor, this wizard offers much more than just simple service availability monitoring. You can also specify to get alerts based on how much CPU and memory the service is actually using.

While setting up the target for this monitor, you’ll also notice that you need a group to target it against (instead of a whole class, as we did in the case of the basic service monitor). This provides precise targeting of the monitor to where you want it to run. If you want to target all Windows servers in your environment, just select the “All Windows Computers” group.


Now, let’s do the same thing – save this in a separate test MP and export it for our analysis. You’ll see some interesting stuff in the XML.

This MP will be considerably larger than the previous one, and the first thing you’ll notice is the discovery. This monitor creates its own discovery for the service. And when you have a discovery, you also have a class. As you create this monitor, SCOM automatically detects the presence of the service on the servers (in the group provided earlier) and populates the class. Once the class is populated, the monitoring is targeted only at the instances of this class, saving you and SCOM the trouble of narrowing down the scope later. Pretty neat, eh? 🙂 This is why I like to call this the “intelligent” service monitor.

You’ll also see that when you create this one monitor, under the hood SCOM creates several monitors as well as rules:

Monitors:

  • Running state of the service – Enabled.
  • CPU utilization of the service – Enabled if CPU Usage monitoring is selected in the wizard.
  • Memory usage of the service – Enabled if Memory Usage monitoring is selected in the wizard.

Collection Rules:

  • Collection of events indicating a change in the service’s running state – Enabled.
  • Collection of CPU utilization for the service – Enabled if CPU Usage monitoring is selected in the wizard.
  • Collection of memory usage for the service – Enabled if Memory Usage monitoring is selected in the wizard.
  • Collection of Handle Count for the service – Disabled; can be enabled with an override.
  • Collection of Thread Count for the service – Disabled; can be enabled with an override.
  • Collection of Working Set for the service – Disabled; can be enabled with an override.

So you see, this one wizard is actually creating THREE different monitors and SIX different performance collection rules. Another upside is that, since the class has also been created, you can target it with any rules or monitors you may want to create for this particular subset of servers where the service is running.

Another great thing about this is that since you have a class for this, you can even pull an availability report against this object to measure the uptime of your service.

OK then, which one should I choose?

After all is said and seen, the obvious question you have in mind is probably one of these:

  1. Cool, so the Windows Service template option looks pretty awesome. I should be using this one all the time, right?
  2. Wow, I never knew that. I’ve never created a service monitor with the Windows Service template option. Did I make a mistake?
  3. Why would anyone even create the basic service monitor then?

These are all legit questions, and you might be surprised by my answer if you ask me my preferred way of monitoring a service. Yes, I (and many others) would still prefer the basic service monitor. Why? There are several reasons.

  1. You only want to monitor the availability of the service. You are not concerned about the amount of CPU or memory it is consuming. In fact, this is the case most of the time: you’re mainly focused on the up/down status of the service. And in case you are worried about CPU and memory utilization, you have special dedicated monitors for them anyway.
  2. As the Windows Service template creates a lot of things along with the service availability monitoring (1 class, 3 monitors and 6 rules), if you don’t actually need them, they’re just unnecessary overhead for SCOM. Imagine creating 10 objects (1+3+6) in SCOM for EACH service when 9 of them are not being used; that’s a lot of litter in SCOM. The basic service monitor, on the other hand, creates only 1 object (the actual availability monitor).
  3. It is much more work to disable the Windows Service template monitor than the basic service monitor. As you can imagine, if you no longer want to monitor the service, you’ll have to disable all 10 objects related to it, as opposed to just one for the basic service monitor.

Hence, always decide first whether you REALLY want all the additional functionality the Windows Service template provides. If the answer is “Yes”, go that way. Else, the good old basic service monitor is your friend 😉

Hopefully this clears up some things for you. 🙂



Run Powershell scripts in Console Tasks

I am working on a project, and as part of it I needed to create some console tasks that would run a PowerShell script to do the stuff I want. That’s no problem for a script a line or two long, but any more than that and it is a real pain to pass it as a parameter in the console task. The other way I was aware of is to point to the path of your script in the parameters, if you have it saved locally on your management servers (each and every one of them, at the exact same path). This didn’t really serve my purpose, as I wanted to embed the script in a re-usable XML, so I decided to do something on my own.

After a bit of research on the Internet and a lot of trial and error, I finally got it working. The key points to remember here are:

1. Wrap the script in CDATA so it passes through the XML parser untouched.

2. Use -command “cmd1;cmd2” to pass the whole block as a single input.

3. Use “;” to separate the cmdlets.

4. Use the escape character “\” before every double quote (“) to skip the character; otherwise the parser mistakes it for part of the -command syntax and throws errors.
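
To see what rules 2–4 produce, here is a small illustrative snippet that collapses a two-statement script into a single -command argument (the cmdlet strings are just examples):

```powershell
# Illustration of rules 2-4: join the statements with ";" and escape every
# double quote with "\" so the whole script survives as one -command argument.
$statements = @(
    'New-EventLog -Source "Task Source Name" -LogName "Operations Manager"',
    'Write-EventLog -LogName "Operations Manager" -Source "Task Source Name" -EventId 1010 -Message "Test"'
)

# Join into one line, then escape the double quotes for the task parser.
$oneLiner = ($statements -join '; ') -replace '"', '\"'
$commandArgument = "-command $oneLiner"
$commandArgument
```

The resulting string is what ends up inside the CDATA section of the console task’s argument.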

Here’s an example of the XML, with a simple Powershell code that creates an event in the custom event source:


<ManagementPackFragment SchemaVersion="2.0" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <Categories>
    <Category ID="Cat.CustomTasks.CreateEvent" Target="CustomTasks.CreateEvent" Value="System!System.Internal.ManagementPack.ConsoleTasks.MonitoringObject" />
  </Categories>
  <Presentation>
    <ConsoleTasks>
      <ConsoleTask ID="CustomTasks.CreateEvent" Accessibility="Internal" Enabled="true" Target="Windows!Microsoft.Windows.Computer" RequireOutput="true">
        <Assembly>CustomTasks.CTCreateEventAssembly</Assembly>
        <Handler>ShellHandler</Handler>
        <Parameters>
          <Argument Name="WorkingDirectory" />
          <Argument Name="Application">%windir%\system32\windowsPowershell\v1.0\powershell.exe</Argument>
          <Argument><![CDATA[ -command
#Create a custom source;
New-EventLog -Source 'Task Source Name' -LogName 'Operations Manager';
#Write the event;
Write-EventLog -LogName 'Operations Manager' -Source 'Task Source Name' -EntryType Warning -EventId 1010 -Message 'This is a test event created by task'
]]></Argument>
        </Parameters>
      </ConsoleTask>
    </ConsoleTasks>
  </Presentation>
  <LanguagePacks>
    <LanguagePack ID="ENU" IsDefault="true">
      <DisplayStrings>
        <DisplayString ElementID="CustomTasks.CreateEvent">
          <Name>CT - Create Test Event</Name>
          <Description>Creates a Warning test event</Description>
        </DisplayString>
      </DisplayStrings>
    </LanguagePack>
  </LanguagePacks>
  <Resources>
    <Assembly ID="CustomTasks.CTCreateEventAssembly" Accessibility="Public" FileName="Res.CustomTasks.CTCreateEventAssembly" HasNullStream="true" QualifiedName="Res.CustomTasks.CTCreateEventAssembly" />
  </Resources>
</ManagementPackFragment>

Here’s the output:


Hope this helps someone out there with a similar need.

For further reading, you can go through this thread:

Powershell script in a console task?

Keep SCOMing 🙂


NEW Author Announcement – Sameer Mhaisekar (a talented blogger, a SCOM expert, and an ambitious guy)

This is an exciting period for AnalyticOps Insights. It is growing, turning from a personal blog into a community blog. Our new author Sameer Mhaisekar is a very talented blogger, a young and ambitious SCOM expert, and a community contributor. Let’s get to know him a little better.

Who is Sameer?

Hello, I am a young addition to the SCOM community from India. I’ve been working with SCOM for the last couple of years and fell in love with it. After being blessed by this awesome community for a long time, a few months ago I started contributing my little share. I’m serving the community mainly in the Operations Manager forums. I aim to be a capable SCOM admin and MP author. Apart from SCOM, I also take a keen interest in PowerShell, SCCM, Azure, and OMS (which I am still learning). When I’m not working I enjoy reading, blogging, traveling, sports, online gaming, etc.

I have read your Linkedin and Microsoft Tech profiles, and the first impression I got is that you are a very ambitious guy. In two years you have done so much. Your progress is very impressive. And therefore it makes sense to me that you have great ideas and goals that you want to achieve. So, first of all, am I right? If I am, what are they?
You are right, I am pretty ambitious and willing to work hard for it. I aim to be a capable IT professional all-around and to serve the community as much as I can. My goal is to be a person who can get your work done, whenever you ask me to.

What was the biggest challenge in your workplace that you accomplished?
The biggest challenge I faced (and still face very often) is simply keeping up with the sheer vastness of IT. Having come from a non-IT background, this was pretty tough for me in the beginning. However, after a while I got used to it, and now I actually love that there are always new things to learn!
Do you think there is a future for SCOM?
Definitely. Apart from the fact that a vast majority of organizations are highly dependent on their SCOM environments, it is simply not possible or feasible to move everything to the cloud and achieve the same level of competency. Not to mention SCOM is getting better and better; just look at the latest version, SCOM 1801!
Do you think the OMS will replace SCOM?
Not in the near future, no. I believe SCOM and OMS work best hand-in-hand, and they complement each other very well. I think the advantages of on-premise software are not yet matched by cloud solutions. I won’t pretend that OMS will never replace SCOM, but for now, SCOM is here to stay.
And finally a traditional question, Star Wars or Star Trek?

Well, please don’t hate me for saying this, but I haven’t watched either, and to be honest, I’m not a fan.
For me though, the better question would be “Who’s better, Messi or Ronaldo?”

Thank you.

Chat with Sameer in SCOM Community chat room.

Or contact on LinkedIn for professional advice.

Or find him on Technet.

SCOM Management Group


Here we’re going to talk about something very important and basically the root of everything. Without this your SCOM does not exist. Essentially, this is what your whole SCOM setup is called – The Management Group. Yet, this is not something you’d work with daily. In fact, in most environments it’s just an install-and-forget kinda thing. You may sometimes run into wizards that’d ask you to specify the name of your MG, like manual agent install wizards, but that’s mostly all. However, there are some cases where you’d want to give some special attention to your MG, let’s discuss those here.

So the technical definition of an MG according to Technet goes like this:

Installing Operations Manager creates a management group. The management group is the basic unit of functionality. At a minimum, a management group consists of a management server, the operational database, and the reporting data warehouse database.

…and that’s really all there is to it. You have to specify the name of your MG when you install SCOM. Once everything is up and running, all your components (MS, GW, DBs, Agents, etc.) would eventually be connected to this MG.

Where can I see the name of my MG?

You can view the name of your MG at the very top of your Ops Console. Like here:

The highlighted text is the name of your MG.


(random screenshot from Internet)

You can also retrieve the name of the MG using Powershell:

Get-SCOMManagementGroup | select Name

The name of the Management Group cannot be changed later, so plan ahead.

Speaking of planning, you should go through this document to decide at a high level what the structure of your MG (or MGs) should look like; it also gives great insight into the components that make up an MG:

Planning a Management Group Design

There really isn’t a one-size-fits-all criterion for how big your MG should be, but there are certainly some constraints on how much data or load the underlying components can take. There’s an excellent tool that makes it easier to plan your deployment; it’s called the Operations Manager Sizing Helper Tool.

The Operations Manager 2012 Sizing Helper is an interactive document designed to assist you with planning & sizing deployments of Operations Manager 2012. It helps you plan the correct amount of infrastructure needed for a new OpsMgr 2012 deployment, removing the uncertainties in making IT hardware purchases and optimizes cost. A typical recommendation will include the recommended hardware specification for each server role, topology diagram and storage requirement.

You can download it HERE

The general recommendations from Microsoft go like this:

  • Simultaneous Operations consoles – 50
  • Agent-monitored computers reporting to a management server – 3,000
  • Agent-monitored computers reporting to a gateway server – 2,000
  • Agentless Exception Monitored (AEM) computers per dedicated management server – 25,000
  • Agentless Exception Monitored (AEM) computers per management group – 100,000
  • Collective client monitored computers per management server – 2,500
  • Management servers per agent for multihoming – 4
  • Agentless-managed computers per management server – 10
  • Agentless-managed computers per management group – 60
  • Agent-managed and UNIX or Linux computers per management group – 6,000 (with 50 open consoles); 15,000 (with 25 open consoles)
  • UNIX or Linux computers per dedicated management server – 500
  • UNIX or Linux computers monitored per dedicated gateway server – 100
  • Network devices managed by a resource pool with three or more management servers – 1,000
  • Network devices managed by two resource pools – 2,000
  • Agents for Application Performance Monitoring (APM) – 700
  • Applications for Application Performance Monitoring (APM) – 400
  • URLs monitored per dedicated management server – 3,000
  • URLs monitored per dedicated management group – 12,000
  • URLs monitored per agent – 50

As you can see, it really depends on the size of your infrastructure and on your hardware. I’d still keep a little margin within these numbers, just in case. Thorough and careful planning in the beginning will make your (and many others’) lives easier later 🙂

Now, what if your environment is really huge and so spread across the globe that just one MG isn’t enough?

No worries, you can connect multiple MGs AND manage them centrally too 🙂

Connected Management Groups

Consider this: your company Contoso has data centers all over the world, and the total number of devices (servers, network devices, etc.) to be monitored is around 50k, distributed across the globe. Let us say you have offices in the US, Europe, India, Australia, etc., and dedicated teams that are supposed to handle the servers belonging to the data centers in their regions. These environments are basically different and need to be monitored differently, independently of the other regions. You do not want admins from one region to interfere with the operations of other regions, but you also want a universal console at your HQ (let’s say in the US) from where you can keep an eye on all your regions’ monitoring operations.

This is the best example of when you’d want to hand over a dedicated MG to each of your regions and then consolidate them all under a central MG at your HQ.

The MGs that you consolidate are called the Connected Management Groups, and the MG that you consolidate them under is called the Local Management Group. All the connected MGs are peers and have no visibility into each other. Functionally, all these MGs (connected and local) can work completely independently of each other, with different MPs, different infra, different admins, different monitoring standards, different everything. In fact, the peer MGs are pretty much unaware of one another. Once you connect multiple MGs to a single master MG, you can view the alerts coming from all the different connected MGs in a single console.

To apply this to the scenario we described earlier, it’d be something like this:

The Ind_MG, the Eur_MG and the Aus_MG would be peers and would be the “Connected MGs”, while

the Us_MG would be the “Local MG”, where you can view alerts from all the other MGs in addition to its own.

There are some excellent walk-throughs on how to do this, like,

Connecting Management Groups in Operations Manager

SCOM Connected Management Groups–2016 and 2012

so let’s not go through the same thing again 😉

Just a note, if you’re thinking, “I wonder if I can connect my SCOM 07 MG to my SCOM 12 or 16 MG…”, nope, can’t do that. All the MGs involved must be on the same SCOM build version 🙂

Ok, that’s all for today!

Happy SCOM-ing 🙂