Azure Lighthouse, the secret sauce for any Managed Cloud Solution Provider

Managed Cloud Solution Providers (MCSPs) are third-party companies that help your business expand, providing muscle and expertise in two ways:

  1. Skills to support you – They have experts in several disciplines to work through your IT service challenges and digital transformation. They act as your mentor, helping you understand your risk and how well your investment in cloud solutions aligns with your business. Their teams include cloud architect and cloud strategist personas to support your journey to the cloud, mostly in hybrid scenarios.
  2. Tools to support you – They have the right tools to meet those business needs and to lift your current digital state to a new version of your company, achieving better efficiency in your daily processes, simplifying your employees' work (even improving their quality of life) and, for sure, optimizing the time you need to react to your competitors with innovation. Just to note, tools means not only third-party tools but also the native tools you get when consuming services from a cloud provider.

On top of those key points, the most important operations supporting IT services on the cloud are based on a set of specific daily tasks: monitoring, backup, process automation and security are part of them. Moreover, an MCSP needs to be effective at solving issues in order to deliver the right quality to its customers, something called "Operational Excellence" within the Well-Architected Framework. With the massive expansion of cloud-first IT services, and with migrations to the cloud of huge amounts of IT infrastructure to support data analytics, web services, disaster recovery and legacy applications on the road to modernization, we need the right tools to cover some clear objectives. Azure Lighthouse has tremendous maturity and solves many of the challenges any MCSP has to cope with:

  • Scale as soon as you need to grow, and here I mean scale horizontally: even when you have to assist lots of customers, you can cover their needs with granularity and focus on each customer's specific roadmap to the cloud.
  • Segment your own IT cloud infrastructure from your customers' cloud infrastructure, so any security issue or downtime in a service you provide, internally or to others, is contained and can only affect one customer or group of customers.
  • Grant permissions to specific IT resources in the cloud and, depending on your customer's projects and the skills involved, delegate access to other partners or freelancers; in short, collaborate on each new project with several profiles.
  • Get the whole picture of the IT services you provide to your customers across several Azure contracts and tenants: security posture, performance and health alerts, triage of misconfigured items, the right Azure governance, etc.

Azure Lighthouse has the potential and flexibility to bring monitoring and traceability to all your customers across several tenants: you get a holistic view, delegate specific permissions with a strong security model for whatever period you want over whole subscriptions or resource groups, integrate everything in a hybrid strategy together with Azure Arc, or even consolidate security posture and SIEM across multiple tenants as well. Azure offers top native cloud tools to support your investments in almost any technology trend.
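Just to make this tangible, here is a minimal Python sketch that lists the Lighthouse delegations (registration definitions) visible at a customer subscription scope. It assumes the azure-identity and azure-mgmt-managedservices packages; the subscription GUID is a placeholder and the client constructor may vary slightly between SDK versions.

```python
# Minimal sketch: enumerate the Azure Lighthouse delegations visible at a
# customer subscription scope. Packages assumed: azure-identity and
# azure-mgmt-managedservices; the subscription GUID is a placeholder.
from azure.identity import DefaultAzureCredential
from azure.mgmt.managedservices import ManagedServicesClient

client = ManagedServicesClient(DefaultAzureCredential())

scope = "subscriptions/00000000-0000-0000-0000-000000000000"  # customer subscription

# Each registration definition says which managing tenant has access and which
# built-in roles its users or groups have been granted.
for definition in client.registration_definitions.list(scope):
    props = definition.properties
    print(props.registration_definition_name, "->", props.managed_by_tenant_id)
```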

Let's go deeper into some useful strategies for any MCSP, so they don't struggle working out how to translate what they do today on premises into Azure.

Access. A secure authentication and authorization strategy is mandatory. That's why Microsoft offers a least-privilege access approach with Azure Active Directory Privileged Identity Management (Azure AD PIM), tightening access to the customers' tenants even further with just a user or a security group.
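To see what this looks like in practice, here is an illustrative sketch of the authorizations section of a Lighthouse registration definition combining a permanent Reader role with a PIM-eligible Contributor role. The group object IDs and tenant ID are placeholders; the role definition GUIDs are the Azure built-in ones.

```python
# Illustrative shape of a Lighthouse registration definition combining a
# permanent Reader authorization with a PIM-eligible Contributor authorization.
# All IDs in angle brackets are placeholders.
registration_definition = {
    "properties": {
        "registrationDefinitionName": "MCSP managed services",
        "managedByTenantId": "<managing-tenant-id>",
        # Always-on, least-privilege access for the support group.
        "authorizations": [
            {
                "principalId": "<support-group-object-id>",
                "principalIdDisplayName": "MCSP Support",
                "roleDefinitionId": "acdd72a7-3385-48ef-bd42-f606fba81ae7",  # Reader
            }
        ],
        # Elevated access only when activated through Azure AD PIM.
        "eligibleAuthorizations": [
            {
                "principalId": "<escalation-group-object-id>",
                "principalIdDisplayName": "MCSP Escalation",
                "roleDefinitionId": "b24988ac-6180-42a0-ab88-20f7382dd24c",  # Contributor
                "justInTimeAccessPolicy": {
                    "multiFactorAuthProvider": "Azure",
                    "maximumActivationDuration": "PT8H",
                },
            }
        ],
    }
}
```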

Monitoring. Absolutely key for any MCSP: it is the core of the support you give your customers. In addition, you have to use the right ITSM (Information Technology Service Management) software to stay aligned and move in the right direction when assessing and resolving customer issues, from high priority to low priority.

Security Posture. Do you know how many misconfigurations and vulnerabilities exist in your customers' Azure cloud? With Azure Security Center you can establish the right security posture and know which security controls are affected or can be mapped to your regulatory compliance. You can use the Secure Score to see your customers' security posture in a single pane of glass. Not exactly easy, but it helps a lot.
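As a hedged example, this is how you could pull the Secure Score of several delegated subscriptions in one query through Azure Resource Graph, assuming the azure-identity and azure-mgmt-resourcegraph packages (subscription IDs are placeholders):

```python
# Hedged sketch: one Resource Graph query returns the Secure Score of every
# delegated subscription you pass in.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resourcegraph import ResourceGraphClient
from azure.mgmt.resourcegraph.models import QueryRequest

client = ResourceGraphClient(DefaultAzureCredential())

request = QueryRequest(
    subscriptions=["<customer-sub-1>", "<customer-sub-2>"],
    query="""
    securityresources
    | where type == 'microsoft.security/securescores'
    | project subscriptionId, score = properties.score.current
    """,
)

# .data is a list of dicts in recent SDK versions (objectArray result format).
for row in client.resources(request).data:
    print(row["subscriptionId"], row["score"])
```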

Incident Hunting. Maybe you know it, maybe you don't: Azure Sentinel, the Microsoft native SIEM, can help you consolidate your security threats and deep-dive into the root cause of a security compromise across several tenants. https://techcommunity.microsoft.com/t5/azure-sentinel/using-azure-lighthouse-and-azure-sentinel-to-investigate-attacks/ba-p/1043899

It's a powerful tool to track logs, see layer by layer what is happening, and decide how to apply suitable hardening to your technologies.
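For instance, a first hunting step could look like this minimal sketch, which runs a KQL query against a delegated customer's Sentinel workspace using the azure-monitor-query package (the workspace ID is a placeholder):

```python
# Minimal sketch: hunt in a customer's Sentinel (Log Analytics) workspace.
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# Failed sign-ins over the last day, grouped by account: a typical first step
# when investigating a possible credential attack.
query = """
SigninLogs
| where ResultType != 0
| summarize attempts = count() by UserPrincipalName
| top 10 by attempts
"""

response = client.query_workspace(
    "<customer-workspace-id>", query, timespan=timedelta(days=1)
)
for table in response.tables:
    for row in table.rows:
        print(row)
```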

Hybrid Scenarios. Regarding hybrid scenarios, Azure Arc can be integrated with Azure Lighthouse as well, adding a great deal to that holistic overview I mentioned before. The main goal in this case is to provide the right governance to your customers even when they run private clouds or on-premises infrastructure. It is therefore an exciting approach for companies that still have a lot of legacy estate to migrate over the years but want to explore the benefits of a public cloud such as Azure.

To sum up: depending on your cloud maturity level, there are some key native tools to improve your support, on your own or with the help of an MCSP. Azure, together with AWS, is one of the most important providers offering this level of capability today.


Enjoy the journey to the cloud with me…see you soon.

Testing Azure File Sync: a headquarters file server and a branch file server working together

Microsoft Hybrid scenario with Azure File Sync

As I showed in the previous post, Azure File Sync needs an agent installed on your Windows file servers, either through Windows Admin Center or directly, by downloading and installing it on the file server. Once that is done you can start to leverage the power of this feature in your global environment, but please take into account that the agent is currently only available for certain operating systems on your file servers.



Remember: to create an Azure File Sync deployment you need a storage account, as we created in the previous post (preferably general-purpose v2), a file share, and the agent installed on every file server where you want to share data. Then, as we did, you can configure the cloud endpoint and the server endpoints within your sync group in the Azure portal.
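If you prefer scripting those prerequisites instead of clicking through the portal, a rough Python sketch could look like this, assuming the azure-identity, azure-mgmt-storage and azure-storage-file-share packages (all names and IDs are placeholders):

```python
# Sketch of the Azure File Sync prerequisites: a general-purpose v2 storage
# account plus the file share the cloud endpoint will point to.
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient
from azure.storage.fileshare import ShareClient

credential = DefaultAzureCredential()
storage = StorageManagementClient(credential, "<subscription-id>")

# General-purpose v2 account (kind StorageV2), as recommended in the post.
account = storage.storage_accounts.begin_create(
    "rg-filesync",
    "hqbranchfiles",
    {
        "location": "westeurope",
        "kind": "StorageV2",
        "sku": {"name": "Standard_LRS"},
    },
).result()
print(account.name)

# The file share that the cloud endpoint of the sync group will use.
keys = storage.storage_accounts.list_keys("rg-filesync", "hqbranchfiles")
share = ShareClient(
    account_url="https://hqbranchfiles.file.core.windows.net",
    share_name="corpdata",
    credential=keys.keys[0].value,  # account key as credential
)
share.create_share()
```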

Add the servers from the different branches and, obviously, your headquarters file server.

Verify your servers are synchronized.

Proceed to create a file on your local headquarters file server.

In our example it is automatically replicated to the branch file server.

If you look at the file share in the Azure portal, you can even see all the files from both servers (one at headquarters and the other your branch file server) replicating their data to the file share in Azure.

Now imagine you have users all over the world: you need your employees working on the same page, with flexibility and on demand; you may even need a backup of that day's work from time to time, and a disaster recovery strategy. Moreover, you need to empower your users to be more productive remotely, from their macOS or Windows laptops, anywhere.

You can have users all around the world working with the same files on several operating systems (macOS, Windows 7, 8.1 or 10, and Linux Ubuntu, Red Hat or CentOS), leveraging any protocol available on Windows Server to access your data locally, including SMB, Network File System (NFS) and File Transfer Protocol Service (FTPS). For them, it is transparent where the files are.

But what about performance and scalability? Well, you can create as many sync groups as your infrastructure demands. Just be aware that you should design and plan with the amount of data, the resiliency and the Azure regions where you are extending your business in mind. In any case, it is important to understand how our data will be replicated:

  • Initial cloud change enumeration: when a new sync group is created, initial cloud change enumeration is the first step that executes. In this process, the system enumerates all the items in the Azure file share. There is no sync activity during this process, i.e. no items are downloaded from the cloud endpoint to the server endpoint and no items are uploaded from the server endpoint to the cloud endpoint. Sync activity resumes once initial cloud change enumeration completes. The rate is about 20 objects per second (see the quick estimate after this list).
  • Initial sync of data from Windows Server to the Azure file share: many Azure File Sync deployments start with an empty Azure file share because all the data is on the Windows Server. In these cases, the initial cloud change enumeration is fast and the majority of time is spent syncing changes from the Windows Server into the Azure file share(s).
  • Set up network limits: while sync uploads data to the Azure file share, there is no downtime on the local file server, and administrators can set up network limits to restrict the amount of bandwidth used for background data upload. Initial sync is typically limited by the initial upload rate of 20 files per second per sync group.
  • Namespace download throughput: when a new server endpoint is added to an existing sync group, the Azure File Sync agent does not download any of the file content from the cloud endpoint. It first syncs the full namespace and then triggers background recall to download the files, either in their entirety or, if cloud tiering is enabled, according to the cloud tiering policy set on the server endpoint.
  • Cloud tiering enabled: if cloud tiering is enabled, you are likely to observe better performance, as only some of the file data is downloaded. Azure File Sync only downloads the data of cached files when they are changed on any of the endpoints. For any tiered or newly created files, the agent does not download the file data and instead only syncs the namespace to all the server endpoints. The agent also supports partial downloads of tiered files as they are accessed by the user.
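To put those 20 objects per second in perspective, a quick back-of-the-envelope estimate:

```python
# Back-of-the-envelope estimate of initial cloud change enumeration time,
# using the 20 objects/second rate mentioned above.
objects_in_share = 1_000_000   # files and folders already in the Azure file share
enumeration_rate = 20          # objects per second
hours = objects_in_share / enumeration_rate / 3600
print(f"~{hours:.1f} hours before sync activity resumes")  # ~13.9 hours
```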

Here I show an example with a 25 MB file. Synchronization was almost immediate, as the sync group was already set up. If we upload a file to Folder 02 on our headquarters file server, you can see it on the branch in Folder 01 as well, in a matter of seconds or even less depending on the configuration, as we said.


Azure Files supports locally redundant storage (LRS), zone-redundant storage (ZRS), geo-redundant storage (GRS) and geo-zone-redundant storage (GZRS). The Azure Files premium tier currently only supports LRS and ZRS. That means an incredible potential to replicate data to several regions in the world, with solid granularity, depending on the resilience you need.
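Those redundancy options map directly to storage account SKU names; here is a small illustrative helper (the mapping logic is mine, not an official API):

```python
# Illustrative mapping only: the redundancy options above correspond to these
# storage account SKU names.
STANDARD_SKUS = ["Standard_LRS", "Standard_ZRS", "Standard_GRS", "Standard_GZRS"]
PREMIUM_SKUS = ["Premium_LRS", "Premium_ZRS"]  # premium file shares: LRS/ZRS only

def pick_sku(premium: bool, cross_region: bool) -> str:
    """Naive choice of SKU from two resilience requirements."""
    if premium:
        if cross_region:
            raise ValueError("Premium file shares do not offer geo-redundancy")
        return "Premium_ZRS"
    return "Standard_GZRS" if cross_region else "Standard_ZRS"

print(pick_sku(premium=False, cross_region=True))  # Standard_GZRS
```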


In the next post we'll see how to integrate Azure Files with Azure AD, or how to enhance your Windows VDI strategy with FSLogix app containers. See you then.

How to consolidate data for headquarters and branches in a smart way: Windows Admin Center and Azure File Sync

Windows Admin Center on Microsoft docs

As I mentioned in a previous post explaining different alternatives for working collaboratively when your company has employees all over the globe https://cloudvisioneers.com/2020/06/06/how-to-consolidate-data-for-small-branches-globally-and-work-together-with-few-investment-i/ , one of the best options in terms of simplicity for sharing data is Azure File Sync, as you can consolidate data from several file servers. And now, in preview, you can set up a whole folder-sync strategy from Windows Admin Center to consolidate data from branches and headquarters onto a file share, with all the changes made during the working day.


The Azure hybrid services can be managed from Windows Admin Center, which gathers all your integrated Azure services into a centralized hub.


But what kind of tasks can we do?

  1. Set up a backup and disaster recovery strategy. – Yes, you can define which data to back up from your on-premises file servers to the cloud, set retentions, and define a strategy to replicate your Hyper-V VMs using Windows Admin Center.
  2. Plan a storage migration. – Identify and migrate storage from on premises to the cloud based on your previous assessment. And not just data from Windows file servers or Windows shares, but also Linux SMB shares with Samba.
  3. Monitor events and track logs from on-premises servers. – It is quite interesting to collect data from on premises in a Log Analytics workspace in Azure Monitor. It is then very flexible to customize queries to figure out what is happening on those servers, and when (see the query sketch after this list).
  4. Update and patch your local servers. – With Azure Update Management you have the right solution, using Automation, to manage updates and patches for multiple virtual machines on premises, or even bare metal.
  5. Consolidate daily work from a distributed data environment without limitations on storage or locations. – As we said before, you can use Windows Admin Center to set up a centralized point for your organization's file shares in Azure, while keeping the flexibility, performance and compatibility of your on-premises file servers. In this post, we are going to explain this feature more in depth.
  6. Other features:
    1. Extend control of your infrastructure with Azure Arc and configure it with Windows Admin Center so, for example, you can run Azure policies and configure regulations for your virtual machines and bare-metal servers on premises.
    2. Create VMs on Azure directly from the console.
    3. Manage VMs on Azure from the console.
    4. Even deploy Windows Admin Center in a VM on Azure and work with the console on the cloud.
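As an example of task 3 above, here is a minimal sketch that queries error events collected from your on-premises servers in a Log Analytics workspace, assuming the azure-monitor-query package (the workspace ID is a placeholder):

```python
# Minimal sketch: query events collected from on-premises servers in a
# Log Analytics workspace.
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# Error-level entries from the Windows System log over the last 24 hours,
# grouped per server and per hour, to spot which machine is misbehaving and when.
query = """
Event
| where EventLevelName == 'Error' and EventLog == 'System'
| summarize errors = count() by Computer, bin(TimeGenerated, 1h)
| order by errors desc
"""

response = client.query_workspace(
    "<workspace-id>", query, timespan=timedelta(days=1)
)
for table in response.tables:
    for row in table.rows:
        print(row)
```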

Now that we know some of the important features you can use with Windows Admin Center, let's focus on the Azure File Sync configuration and see how it works.


Let's start by downloading Windows Admin Center from here

After installing, you will see the console at the local URL https://machinename/servermanager where you can browse information and details of your local or virtual machine and leverage lots of features to manage it.

If you click on the hybrid center you can configure an account in your Azure portal to connect to your Azure subscriptions. This creates an Azure AD application from which you can manage gateway user and gateway administrator access later, if you wish. To do this, you will first need to register your Windows Admin Center gateway with Azure. Remember, you only need to do this once for your Windows Admin Center gateway.

You have two options: create a new Azure Active Directory application or use an existing one in the Azure AD tenant you choose.

It's worth pointing out here that you can configure MFA for your user later, or create several user groups for RBAC to work with your Windows Admin Center console. You will now have several wizards available to deploy the strategy that best suits your business case.

Let's start configuring all the parameters. It may take some time to respond; it is in preview right now (April 2021).

Choose an existing Storage Sync Service or create a new one.
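Behind the scenes, the wizard is creating a Storage Sync Service and a sync group for you. Here is a rough sketch of the equivalent calls, assuming the azure-mgmt-storagesync package; treat the exact method names as illustrative, since they can differ between SDK versions:

```python
# Rough sketch of what the wizard automates: a Storage Sync Service plus a
# sync group. Method names are illustrative (e.g. create vs. begin_create)
# and may differ between azure-mgmt-storagesync versions.
from azure.identity import DefaultAzureCredential
from azure.mgmt.storagesync import MicrosoftStorageSync

client = MicrosoftStorageSync(DefaultAzureCredential(), "<subscription-id>")

# The Storage Sync Service is the top-level resource the servers register with.
service = client.storage_sync_services.begin_create(
    "rg-filesync", "hq-sync-service", {"location": "westeurope"}
).result()

# The sync group ties the cloud endpoint (file share) and server endpoints together.
group = client.sync_groups.create(
    "rg-filesync", "hq-sync-service", "branch-sync-group", {}
)
print(service.name, group.name)
```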

Prepare the Azure File Sync agent, which needs to be installed on your Windows Admin Center host machine, and hit the "Set up" button.

And now register your server on the Storage Sync Service.

We have to register our new server with the Azure File Sync service on Azure, to synchronize data on a file share with other file servers located all over the world.

Register servers on my Azure subscription

In the next post we'll configure the remaining steps to consolidate a global repository using this file share on Azure, so that all employees, no matter where they work, can be on the same page with the rest of the branches and headquarters.

See you then.

Be hybrid my friend: Global AWS Vision

AWS reacted with a powerful solution to Google Anthos and to the Azure Stack "Fiji" project which, as I explained in the previous post, brought the Azure Stack Hub, Azure Stack Edge and Azure Stack HCI actors to the Microsoft scene. AWS Outposts is a compendium of technical solutions together with best-in-class AWS management support. Outposts provides the same experience for applications as running in the cloud, and unified hybrid cloud management through the use of the same APIs and management tools across on-premises and AWS infrastructure.

What is the AWS hybrid strategy?

On one hand, AWS knows that the battle for those legacy applications and monolithic workloads that will remain in the backbone of business logic for some years to come is a key factor. Beyond that, they focus on four scenarios: cloud bursting, backup and disaster recovery, distributed data processing, and geographic expansion.


Scenarios to leverage the AWS cloud

Cloud bursting is an application deployment model in which the application primarily runs on on-premises infrastructure, but AWS resources are used when the application needs more performance or more storage. Think of an HPC scenario using Fargate, or perhaps a migration from legacy applications to containers on ECS or EKS.

Backup and disaster recovery, where the customer can set up business continuity strategies improving resilience, data durability and even high availability. For example, archiving and data tiering with S3.
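As a small illustration of that tiering idea, here is a boto3 sketch that archives objects under a backups/ prefix to Glacier after 30 days and expires them after a year (the bucket name is a placeholder):

```python
# Minimal boto3 sketch: lifecycle rule that tiers backups to Glacier after
# 30 days and deletes them after a year.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="my-backup-bucket",  # placeholder
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-backups",
                "Status": "Enabled",
                "Filter": {"Prefix": "backups/"},
                "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
                "Expiration": {"Days": 365},
            }
        ]
    },
)
```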

Distributed data processing, to take the source data from near-real-time or batch processes in your company and transform it quickly and cost-effectively on AWS, using for example Kinesis Data Firehose together with a data lake, or a data warehouse strategy based on Redshift.

Finally, geographic expansion, which unlocks tremendous potential when you use global database approaches (SQL or NoSQL), backing your data with DynamoDB or Aurora.
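For illustration, adding a replica region to an existing DynamoDB table (global tables) is a single boto3 call; this sketch assumes the table already exists with streams enabled, and the names are placeholders:

```python
# Minimal boto3 sketch of the geographic-expansion idea: add a replica region
# to an existing DynamoDB table (global tables, 2019.11.21 version).
import boto3

dynamodb = boto3.client("dynamodb", region_name="eu-west-1")

dynamodb.update_table(
    TableName="customer-profiles",  # placeholder; must have streams enabled
    ReplicaUpdates=[{"Create": {"RegionName": "us-east-1"}}],
)
```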

On the other hand, regarding networking, you can extend your existing Amazon VPC to your Outpost in your on-premises location. After installation, you can create a subnet in your regional VPC and associate it with an Outpost, just as you associate subnets with an Availability Zone in an AWS Region. Instances in Outpost subnets communicate with other instances in the AWS Region using private IP addresses, all within the same VPC.
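For example, creating a subnet that lives on the Outpost instead of in a regional Availability Zone is the same boto3 call you already know, plus the Outpost ARN (all IDs here are placeholders):

```python
# Minimal boto3 sketch: extend a VPC to an Outpost by creating an Outpost subnet.
import boto3

ec2 = boto3.client("ec2")

subnet = ec2.create_subnet(
    VpcId="vpc-0123456789abcdef0",
    CidrBlock="10.0.3.0/24",
    OutpostArn="arn:aws:outposts:us-west-2:111122223333:outpost/op-0123456789abcdef0",
    # An Outpost subnet must also name the AZ the Outpost is anchored to.
    AvailabilityZone="us-west-2a",
)
print(subnet["Subnet"]["SubnetId"])
```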

For example, let's say you need to keep a data warehouse on premises due to regulations, but from time to time you need HPC (high-performance computing) or even MPP (massively parallel processing) to run some calculations over a dataset, and you don't want to invest a lot of money in these occasional estimations. All the outcomes are stored locally once they have been prepared and transformed into more accurate data in the cloud. Obviously, the cluster and the worker nodes are shut down afterwards.

AWS helps you identify the right VM profiles for the hybrid workloads you want to run.

Edge Computing

With Snowball Edge you can collect data in remote locations, use machine learning and processing, and store a first defined set of data in environments with intermittent connectivity. There are three different flavors: Snowball Edge compute, perfect as I said for IoT solutions; Snowball data transfer, to migrate massive amounts of information to the cloud; and Snowball Edge storage, as a first layer for your on-premises data before it is processed and moved to S3, for example.

AWS Outposts is fully managed and supported by AWS. Your Outpost is delivered, installed, monitored, patched, and updated by AWS. With Outposts you can reduce the time, resources, operational risk, and maintenance downtime required for managing IT infrastructure.

As we mentioned with the Microsoft hybrid solution, AWS can also manage the whole infrastructure in a single pane of glass. Can you imagine the tremendous benefits for your customers, users and partners of being there when it's needed, reducing risks by eliminating single points of failure, reducing latency and improving business continuity, improving security and governance, or growing your go-to-market strategies exponentially?

See you then in the next post, take care and stay safe…

Be hybrid my friend: Global Azure Vision

After one year of pandemic there is one very clear fact: the majority of enterprises expect to increase cloud usage. In this scenario there are traditional lift-and-shift migrations, but many companies also choose to "PaaS-ify" applications (a strategy to move applications from VMs to multi-tenant managed cloud platforms like Azure App Services or AWS Elastic Beanstalk) or go further and containerize their applications (the process of packaging an application along with its required libraries, frameworks and configuration files to run on a containerization engine such as Docker).

In this context there are still lots of legacy applications and monolithic workloads that will remain in the backbone of business logic for a huge number of industries for some years to come. Not to mention compliance or sovereignty policies that force companies or countries to retain specific information in local data centers. So the battle for cloud providers in the coming years is to go hybrid enough to leverage the cloud for new innovative solutions, for those areas where competitors could win opportunities in our market, or where we can see clear benefits in moving applications to the cloud: increasing go-to-market reach in other regions, improving efficiency, reducing risk, saving money and eliminating single points of failure.

What is the Microsoft hybrid strategy?

There are several technologies that bring a lot of value to the Azure hybrid scenario. The mantra here is: run what you want, where you need it, without losing control, whether it's on premises, in a private cloud like BT's or Telefonica's, or on a different cloud provider like AWS with its EC2 compute IaaS.


Azure Stack Hub

Azure Stack is your solution if you want to leverage the potential of serverless while also using your infrastructure locally. You can connect your local data center using Azure Stack Hub.

For example, let's say you need to keep a data warehouse on premises due to regulations, but from time to time you need HPC (high-performance computing) or even MPP (massively parallel processing) to run some calculations over a dataset, and you don't want to invest a lot of money in these occasional estimations. All the outcomes are stored locally once they have been prepared and transformed into more accurate data in the cloud. Obviously, the cluster and the worker nodes are shut down afterwards.

Azure Stack Edge

Collect, analyse, transform and filter data at the edge, sending only the data you need to the cloud for further processing or storage. Use ML (machine learning) to prepare the datasets you need to upload to the cloud. Azure Stack Edge acts as a cloud storage gateway, transferring to Azure what is needed while retaining local access to files. It has local cache capability and bandwidth throttling to limit usage during peak business hours.

Boost your IoT and edge computing solutions with this technology. The opportunities to grow here are limited only by your imagination.

There are several models that can work at your edge depending on your needs. Simply order your appliance from the Azure portal in a hardware-as-a-service model and pay monthly via an Azure subscription.

Azure Stack HCI

It is a new hyperconverged infrastructure (HCI) operating system delivered as an Azure service, providing the latest Azure features as well as the performance to work with the cloud. With it, you can roll out Windows and Linux virtual machines (VMs) in your data centre or at the edge, using appliances like the ones shown above.

For example, let's say you want to set up a disaster recovery strategy using world-class hyperconverged infrastructure, with some Linux LAMP solutions or specific applications whose backend tier runs on Azure Stack HCI and whose frontend web services run on Azure. The data remains in your data center, once again, if your country or company regulations don't allow storing it in the cloud.

But the strongest point of this Microsoft hybrid solution is its integration with AKS, so your applications can run anywhere, from on premises to any Azure region. You will be able to deploy containers on the same network, with your VNet on Azure alongside your VLAN on premises, and move, create or kill containers for thousands of applications, each with its own libraries, runtime and software, from cluster to cluster, empowered by Kubernetes. Can you believe such potential for a global enterprise company?

Azure Arc

Here comes the key ingredient of the recipe. Azure Arc lets users connect Kubernetes clusters running on premises or on any other cloud provider to Azure for a unified management experience. Arc gives users a single-pane-of-glass operating model for all their Kubernetes clusters deployed across multiple locations and platforms, bringing Azure management capabilities to those clusters and enriching the experience with Azure features like Azure Policy, Azure Monitor and Azure Resource Graph.
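Reusing the Resource Graph approach we saw with Azure Lighthouse, here is a hedged sketch that lists every Arc-connected Kubernetes cluster across the subscriptions you can see (subscription IDs are placeholders):

```python
# Hedged sketch: inventory Arc-connected Kubernetes clusters via Resource Graph.
# Packages assumed: azure-identity and azure-mgmt-resourcegraph.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resourcegraph import ResourceGraphClient
from azure.mgmt.resourcegraph.models import QueryRequest

client = ResourceGraphClient(DefaultAzureCredential())

request = QueryRequest(
    subscriptions=["<sub-1>", "<sub-2>"],  # including Lighthouse-delegated ones
    query="""
    resources
    | where type == 'microsoft.kubernetes/connectedclusters'
    | project name, location, resourceGroup, subscriptionId
    """,
)

for cluster in client.resources(request).data:
    print(cluster["name"], cluster["location"], cluster["subscriptionId"])
```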

In a single pane of glass you can embrace the potential of the Azure hybrid model across multiple tenants and subscriptions, working together with Azure Lighthouse and integrating Azure Stack to roll out your application modernization strategy anywhere, anytime. Can you imagine the tremendous benefits for your customers, users and partners of being there when it's needed, reducing risks by eliminating single points of failure, reducing latency and improving business continuity, improving security and governance, or growing your go-to-market strategies exponentially?

In the next post, we will compare the hybrid potential that Microsoft offers with that of another big giant, AWS.

See you then, take care and stay safe…