Category: Azure

Fixing Hybrid – IaaS with Azure Update Management and SCOM

Azure Update Management (AUM) is a free service that helps you deploy patches to servers running in Azure and on-premises (in your datacenter). It provides basic capabilities, but enough to control the whole patching process.

AUM and OpsMgr

While evaluating AUM on a Windows Server 2019 VM hosted in Azure, I noticed that either monitoring with SCOM or patching via AUM worked, but never both. The MOM agent (Microsoft Monitoring Agent), which needs to contact both AUM and SCOM, could only contact one destination at a time.

Within the Log Analytics workspace the following error was shown:

“VM has reported a failure when processing extension ‘MicrosoftMonitoringAgent’. Error message: This machine is already connected to another Log Analytics workspace, or managed by System Center Operations Manager. Please set stopOnMultipleConnections to false in public settings or remove this property, so this machine can connect to new workspaces.”

Required steps to fix in brief

To solve this issue for the VM, proceed with the following steps:

  1. Gather this information: Workspace ID, Workspace Key, VM name, location and resource group name
  2. Connect to Cloud Shell
  3. Run some PowerShell to set the stopOnMultipleConnections flag to false
  4. Re-activate AUM, or restart the SCOM agent if the management server has already been entered

Note: The Azure portal uses a lot of JavaScript, HTML and other web technologies, so I suggest using Microsoft’s Edge browser.


Steps in detail

Search for Log Analytics and click on Virtual Machines to find the problematic VM:

Locating correct Log Analytics Workspace

Choose Advanced Settings

Select Advanced Settings

On Connected sources, note the Workspace ID and the Primary Key (Workspace Key)

Note values for Workspace ID and Workspace Key (Primary Key)

Start the Cloud Shell and get the virtual machine details mentioned above.

Start Azure Cloud Shell and get VM details

Use a text editor (e.g. Notepad++) and prepare the following code based on the values collected above.


# Public settings: the workspace ID plus the flag that allows the agent
# to connect even though the machine is already managed by SCOM
$PublicSettings = @{ "workspaceId" = "c94e5249-e224…"; "stopOnMultipleConnections" = $false }
# Protected settings: the workspace (primary) key
$ProtectedSettings = @{ "workspaceKey" = "FwxRLqbRg9/…" }

Set-AzVMExtension -ResourceGroupName "rsg-wegc-commontest-server" `
    -VMName "vm-WEGCXX0001" `
    -Publisher "Microsoft.EnterpriseCloud.Monitoring" `
    -ExtensionType "MicrosoftMonitoringAgent" `
    -TypeHandlerVersion "1.0" `
    -Settings $PublicSettings `
    -ProtectedSettings $ProtectedSettings `
    -Location "West Europe" `
    -Name "MicrosoftMonitoringAgent"

Copy the code into the clipboard and paste it into the Cloud Shell. Confirm with Return.


Verify that communication with SCOM and AUM works

Start an RDP session, open the Control Panel and launch the Microsoft Monitoring Agent applet.

Verify MoM Agent - OperationsManager

Verify MoM Agent - Log Analytics

The configuration on the VM looks healthy now.

Within the SCOM console the server is shown as fully monitored.
Verify VM in Operations Manager Console

Next steps

To ensure that these steps are performed automatically on server creation, it makes sense to add them to an ARM template.

A good starting point is this post by @KasunSJC.
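As a rough sketch, the relevant piece of such a template could look like the following. This is an assumption on my part, not the template from the linked post; the workspace values are passed in as template parameters rather than hard-coded:

```json
{
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "name": "[concat(parameters('vmName'), '/MicrosoftMonitoringAgent')]",
  "apiVersion": "2018-06-01",
  "location": "[resourceGroup().location]",
  "properties": {
    "publisher": "Microsoft.EnterpriseCloud.Monitoring",
    "type": "MicrosoftMonitoringAgent",
    "typeHandlerVersion": "1.0",
    "autoUpgradeMinorVersion": true,
    "settings": {
      "workspaceId": "[parameters('workspaceId')]",
      "stopOnMultipleConnections": false
    },
    "protectedSettings": {
      "workspaceKey": "[parameters('workspaceKey')]"
    }
  }
}
```

Baking stopOnMultipleConnections into the template means a newly created VM never hits the error described above in the first place.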

Hybrid Monitoring Solutions during your transition to Cloud

Most enterprises have now either moved to the cloud or are moving towards it. And why not? Running your workloads on cloud services such as Azure frees you from a lot of maintenance and administrative overhead, and you can use this time to do something better.

Here are some major benefits of moving to the cloud:

  1. Fewer administration tasks – The cloud provider is responsible for managing and upgrading its infrastructure, so the customer does not have to worry about that.
  2. Cloud is flexible – It can adjust to rapid growth or fluctuations in your business and adapt to provide optimized resources, and hence manage costs.
  3. Cost efficient – Since you don’t have to spend on big hardware and the maintenance that comes along with it, you save that initial capital investment. Moreover, in the cloud you mostly pay only for what you use and for the time you use it, which saves a lot of cost as well.
  4. Disaster recovery – Not every company, especially smaller ones, can invest in a disaster recovery strategy. On-premises, it’s basically like running two datacenters, and so double the cost. Moving to the cloud eliminates that, since the cloud provider is responsible for resiliency on their side to make sure your servers stay up and running even if there is a hardware failure.

These are just some of the major benefits of transitioning to the cloud; there are many more. So if you’ve made the decision to move to the cloud, you’re looking in the right direction!

Now, on-premises or in the cloud, monitoring your infrastructure is equally critical. While the cloud provider will look after the hardware components, monitoring your servers and applications is still your responsibility, and something you need to invest time, money and effort into. There are some great tools out there to let you effectively monitor your infrastructure, like Microsoft’s System Center Operations Manager (SCOM) and Azure Monitor, a monitoring solution residing in Azure. So which one should you use to monitor your infrastructure?

Since you are transitioning to the cloud (say Azure), you already have an on-premises infrastructure. That most likely means you have also already invested in a tool like SCOM for monitoring it. So now you’re wondering: “Does moving to Azure mean I have to decommission my SCOM and move my monitoring to Azure Monitor?”

The good news is – you don’t have to choose between SCOM and Azure Monitor at all! They work best together in a hybrid environment and complement each other very well.

SCOM is generally considered better at monitoring on-premises workloads and has been used for that for a very long time. SCOM provides deep insights and very thorough, multi-level monitoring of the workloads you want to monitor. It also makes it easy to monitor your custom applications by authoring your own management packs. In short, it gives you a more detailed look into your infrastructure and alerts you based on it.

Azure Monitor, on the other hand, is best suited for Azure resources. Since it does not require installation, it is up and running in a matter of minutes. It also does not require you to worry about maintaining, upgrading or troubleshooting it. It is highly scalable, which means you can start on-boarding your servers immediately without worrying about sizing the underlying infrastructure. However, the biggest highlight of Azure Monitor is probably its ability to query the data. Once the agent collects the data, you can query it and get very granular. It is a very efficient way to make sure you’re only dealing with the data you want, and are only alerted for what you’re concerned with.

SCOM integrates seamlessly with Azure Monitor and can upload all the data it is collecting on premise to Azure Monitor where it can be queried. There are some great advantages of integrating SCOM with Azure Monitor, for example:

  1. You’re now getting useful data rather than spam. Azure Monitor’s querying capability plays an important role here: you collect data from Azure resources as well as on-premises servers, and extract only the data you need for alerting that is meaningful to you.
  2. Azure Monitor provides a single pane of glass for alerts and ways to manage them across your infrastructure, so it reduces the administrative overhead considerably.
  3. With all the data going into Azure Monitor, you can actually shut off a lot of workloads you don’t need in SCOM, which means better performance with fewer resources used!
  4. SCOM monitors what it monitors best – on-premises infrastructure – while Azure Monitor monitors what it monitors best – cloud resources.
  5. You reduce the dependency on a single monitoring solution by running the two in parallel for resiliency.
  6. You can leverage Power BI to visualize the data.
  7. With the release of SCOM 2019, with all its new capabilities and better visibility into cloud resources, this integration has become even better!
  8. It is much more cost-effective considering the value it returns in the long run.

Hope this helps you plan your transition to the cloud while keeping it all monitored!

(Featured image credits to Microsoft!)


SCOM 2019 vs Azure Monitor: Which one to choose?

Having worked with both SCOM and Azure Monitor, I was recently asked to compare the two and suggest the right choice. First off, I have a disclaimer to make – Azure Monitor is great, but it cannot replace SCOM entirely, at least not yet.

SCOM 2019 was recently released, and it came loaded with some great new features. Read more about it here. What I especially like is its new capability to monitor Azure resources – it now has more insight into the cloud than ever before. And with the ever-rising number of cloud migrations and new cloud deployments, Azure Monitor’s popularity and importance keep growing.

However, I believe these two tools each have their own “personality”, if you will, and work best with each other. Here’s what I have to say about this in more detail:

Defining Your Enterprise Monitoring Strategy: Close the Gaps with SCOM 2019 and Azure Monitor


Improve Customer Experience For Less Than $50

I recently gave a lecture about how the large number of social networks causes companies to lose control over the communication channels with their customers. I also explained why this is a problem and how Azure serverless services can reduce it dramatically for less than $50. The reaction from the audience was great, so I decided to share it with you. Here is the explanation:

Before social networks existed on the internet (Facebook, Twitter, Instagram and so on), the traditional communication channels between companies and customers were telephone, fax, email, mail or perhaps a company website. Basically, companies had several common channels of communication, so they could decide which of them to use.

Nowadays, social networks have become a legitimate communication channel between the customer and the company. As a result, the customer expects the company to respond within the social network. If the company does not, it becomes irrelevant. This means that companies no longer have the privilege to decide which of the communication channels to use.

Social networks are not the problem; their quantity is. There are too many social networks, and their number continues to grow. If companies want to communicate with customers through all social networks, or even some of them, they have to pay an enormous amount of money. If they don’t use social networks, they lose customers. Companies no longer have the privilege of deciding which communication channels to use.

Companies that understand how important this issue is spend time and money developing their own solutions from scratch. Other companies use external services such as Hootsuite or SproutSocial. These services are excellent, and companies should consider using them. Another option I would like to suggest is using Azure serverless services to create your own tailor-made solution. Let’s look at diagram (1); it demonstrates an automated communication channel for customers who use Twitter to interact with the company.

(1) – Twitter channel diagram.

An explanation of the diagram:

The flow is triggered by a tweet with the hashtag #expertsliveisrael (Experts Live Israel was a conference that I organized and took part in during July 2018 in Tel Aviv). Next, the flow branches out and uses Azure Cognitive Services (Text Translate and Text Analytics) to analyze the tweet’s sentiment (is it a positive or a negative tweet?) in order to create a tailor-made response. It also stores the data in an Azure Cosmos DB for future analysis.

Note: I used a database to store the data for demonstration purposes; it is not necessary for the automatic response. I used it to give you an idea of possible insights you might gain, such as analyzing complaints and improving the tailor-made responses.
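The sentiment branch of the flow is essentially a threshold decision. Here is a minimal Python sketch of that logic, assuming (as in the older Text Analytics API versions) a sentiment score between 0 (negative) and 1 (positive); the thresholds and reply texts are my own illustrative choices, not the ones used in the actual flow:

```python
def pick_response(score: float) -> str:
    """Map a sentiment score (0 = negative, 1 = positive) to a reply template."""
    if score >= 0.7:
        return "Thanks for the kind words! See you at #expertsliveisrael."
    if score <= 0.3:
        return "Sorry to hear that - please send us a DM so we can help."
    # Anything in between gets a neutral acknowledgement.
    return "Thanks for the feedback about #expertsliveisrael!"
```

In the Logic App itself, the same decision would be modeled with condition actions on the score returned by Text Analytics.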

You may follow my steps to achieve a similar result (I won’t describe all the workflow steps, only those that I think will be important for you to understand the solution):

  1. Create a Twitter account or use an existing one; if you don’t have a Twitter account, click here to sign up.

  2. Open the Azure portal and create a new Logic App flow (I’m assuming you already know Logic Apps; if not, I suggest reading this simple guide or starting to play with it).

My main Logic App flow is ExpertsLiveIsreal-Demo-Twitter, and here is how it is visualized in the designer view.

To keep the visualization simple and for future maintenance purposes, I created three separate Logic App flows.

  1. Add a Twitter trigger and fill in the fields.

  2. Create a new Logic App flow for the Cognitive Services.

Note: You will need to create two Cognitive Services, Text Translate and Text Analytics (if you don’t know what they are and how to use them, click on this link).

  1. Create a new Logic App flow for the tailor-made response.

  2. Currently there is no simple built-in action in Logic Apps for replying to a tweet, so you will first need to create a Function app and a Twitter developer account.

  3. Once you have created the Twitter developer account, you can continue and build the Function.
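To illustrate what the Function has to produce, here is a hedged Python sketch; the helper name and reply format are my own, not the author’s code, and the actual posting (a statuses/update call with in_reply_to_status_id set, authenticated with the developer-account keys) is only indicated in a comment:

```python
MAX_TWEET_LEN = 280  # Twitter's current character limit

def build_reply(screen_name: str, response_text: str) -> str:
    """Compose the reply text. Twitter threads a status as a reply only
    when it mentions the original author and the API call also sets
    in_reply_to_status_id to the original tweet's id."""
    reply = f"@{screen_name} {response_text}"
    return reply[:MAX_TWEET_LEN]

# The Function would then POST build_reply(...) to the Twitter API
# (statuses/update, with in_reply_to_status_id=<original tweet id>),
# authenticated with the keys from the developer account created above.
```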

How do I use a free API?

To be able to do so, you need to download it and save it. Next, upload the file into the bin folder inside the Function app folder. To open the Function folder, click on the Function name (in my case ExpertsLiveIL), then switch to the Platform features tab and click on App Service Editor.

A new App Service Editor window will open.

Select the bin folder and right-click it.

Select Upload Files, browse to the location where you saved the file, select it and click the Upload button.

  1. Create a Cosmos DB; follow the five-minute guide to create it.

  2. Now you are ready to complete the remaining actions in the main flow.

Note: I didn’t mention a small function, GenerateGuid, that I needed because Cosmos DB has a mandatory property “id” which must be unique. So I created a function that outputs a unique value to pass as the “id” value.
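The GenerateGuid function itself isn’t shown; in Python, an equivalent could be as small as this (a sketch, not the original code):

```python
import uuid

def generate_guid() -> str:
    """Return a new globally unique value, suitable for Cosmos DB's
    mandatory (and unique) 'id' property."""
    return str(uuid.uuid4())
```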

It took me a day, maybe a little more, to complete the whole flow. And here is a list of my costs:


As you can see, the total price was $45.87. It’s a low price, wouldn’t you agree?

You should try it, be creative and productive 🙂

You are welcome to write to me with anything (a question, a comment or feedback).

And don’t forget to subscribe! Thanks.