Azure Lighthouse, the secret sauce for any Managed Cloud Solution Provider

Managed Cloud Solution Providers (MCSPs) are third-party companies that help your business expand, providing muscle and expertise in two ways:

  1. Skill matrix to support you – They have experts in several disciplines to work through your IT service challenges and digital transformation. They act as a mentor, helping you understand your risk and how well your investment in cloud solutions is aligned with your business. They have cloud architects and cloud strategists on their team to support your journey to the cloud, mostly in hybrid scenarios.
  2. Tools to support you – They have the right tools to address those business needs and to take your current digital state to a new version of your company: better efficiency in your daily processes, simpler work for your employees (even a better quality of life) and, of course, a shorter time to respond to your competitors with innovation. Note that "tools" means not just third-party tools but also the native tools the cloud provider gives you when you consume cloud services.

In addition to those key points, the most important operations needed to support IT services in the cloud come down to a set of specific daily tasks: monitoring, backup, process automation and security, among others. Moreover, an MCSP needs to solve issues effectively in order to deliver the right quality to its customers, something the Well-Architected Framework calls "Operational Excellence". With the massive expansion of cloud-first IT services, and with huge amounts of IT infrastructure migrating to the cloud to support data analytics, web services, disaster recovery and legacy applications on the road to modernization, we need the right tools to cover some clear objectives. Azure Lighthouse is mature enough to solve many of the aspects and challenges any MCSP has to cope with:

  • Scale as soon as you need to grow. Here I mean scaling horizontally: even when you have to assist many customers, you can cover their needs with granularity and stay focused on each customer's specific roadmap to the cloud.
  • Segment your own cloud infrastructure from your customers' cloud infrastructure, so that any security issue or downtime in a service you provide, internally or to others, stays contained and can only affect one customer or group of customers.
  • Grant permissions on specific cloud resources and, depending on your customers' projects and the skills involved, delegate access to other partners or freelancers; in short, collaborate on each new project with several profiles.
  • Get the whole picture of the IT services you provide to your customers across several Azure contracts and tenants: security posture, performance and health alerts, triage of misconfigured items, the right Azure governance, and so on.

Azure Lighthouse has the potential and the flexibility to bring monitoring and traceability to all your customers across several tenants. You get a holistic view, you can delegate specific permissions with a strong security model, for a fixed period or for as long as you want, over whole subscriptions or resource groups, you can integrate it all into a hybrid strategy together with Azure Arc, and you can even consolidate security posture and SIEM across tenants as well. Azure offers top native cloud tools to support your investments in almost any technology trend.
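To make that holistic view a bit more tangible, here is a minimal sketch (not an official procedure) of how a managing tenant could enumerate its Azure Lighthouse delegations across every customer subscription it can see, calling the ARM REST API with a token from azure-identity. The api-version values are assumptions to double-check against the current documentation.

```python
# Minimal sketch: list the Azure Lighthouse delegations (registration assignments)
# visible to a managing-tenant identity. Assumes the azure-identity and requests
# packages, and that the signed-in identity already has delegated access.
import requests
from azure.identity import DefaultAzureCredential

ARM = "https://management.azure.com"
API_VERSION = "2019-09-01"  # Microsoft.ManagedServices api-version (assumption: check the latest)

credential = DefaultAzureCredential()
token = credential.get_token(f"{ARM}/.default").token
headers = {"Authorization": f"Bearer {token}"}

# Enumerate the subscriptions this identity can see (delegated customer subscriptions included).
subs = requests.get(f"{ARM}/subscriptions?api-version=2020-01-01", headers=headers).json()

for sub in subs.get("value", []):
    sub_id = sub["subscriptionId"]
    url = (f"{ARM}/subscriptions/{sub_id}/providers/Microsoft.ManagedServices/"
           f"registrationAssignments?api-version={API_VERSION}")
    assignments = requests.get(url, headers=headers).json().get("value", [])
    for assignment in assignments:
        # Each assignment points at the registration definition that describes
        # the offer name and the authorizations granted to the managing tenant.
        print(sub["displayName"], "->", assignment["properties"]["registrationDefinitionId"])
```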

Let's go deeper into some strategies any MCSP can use, so they don't struggle trying to translate what they are doing right now on-premises into Azure.

Access. A secure authentication and authorization strategy is mandatory. That's why Microsoft offers a least-privilege access approach with Azure Active Directory Privileged Identity Management (Azure AD PIM), so you can tighten access to your customers' tenants even further with just a user or a security group.

Monitoring. Absolutely key for any MCSP; it is the core of the support you give your customers. In addition, you have to use the right ITSM (Information Technology Service Management) software to stay aligned and move in the right direction when assessing and resolving customer issues, from high priority to low priority.

Security Posture. Do you know how many misconfigurations and vulnerabilities exist in your customers' Azure environments? With Azure Security Center you can establish the right security posture and know which security controls are affected or can be mapped to your regulatory compliance. You can use the Secure Score to see your customers' security posture in a single pane of glass. Not exactly easy, but it helps a lot.
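As a hedged illustration of that single pane of glass, the sketch below uses Azure Resource Graph from Python to pull the Secure Score of every subscription the managing identity can reach, including Lighthouse-delegated ones. The package, the query and the property paths are assumptions based on the public securescores resource type, not an official MCSP recipe.

```python
# Sketch: cross-subscription Secure Score via Azure Resource Graph.
# Assumes azure-identity and azure-mgmt-resourcegraph are installed.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resourcegraph import ResourceGraphClient
from azure.mgmt.resourcegraph.models import QueryRequest

credential = DefaultAzureCredential()
graph = ResourceGraphClient(credential)

query = """
securityresources
| where type == 'microsoft.security/securescores'
| project subscriptionId, current = properties.score.current, max = properties.score.max
"""

# With no explicit subscription list, the query runs against every subscription the
# identity can see (if your SDK version requires it, pass subscriptions=[...] explicitly).
result = graph.resources(QueryRequest(query=query))
for row in result.data:
    print(row["subscriptionId"], row["current"], "/", row["max"])
```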

Incident Hunting. Whether you know it already or not, Azure Sentinel, Microsoft's native SIEM, helps you consolidate security threats and dig into the root cause of a security compromise across several tenants. https://techcommunity.microsoft.com/t5/azure-sentinel/using-azure-lighthouse-and-azure-sentinel-to-investigate-attacks/ba-p/1043899

It's a powerful tool to track logs, see layer by layer what is happening, and determine how to step up suitable hardening for your technologies.
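For instance, a single cross-workspace KQL query run from the managing tenant can hunt across several customers' Sentinel workspaces at once. The sketch below is only illustrative: the workspace IDs are placeholders, and it assumes the azure-monitor-query package and delegated access to those workspaces.

```python
# Sketch: one cross-workspace KQL query surfacing sign-in failures from several tenants.
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# union across delegated workspaces; add one workspace() reference per customer
query = """
union workspace('customer-a-workspace-id').SigninLogs,
      workspace('customer-b-workspace-id').SigninLogs
| where ResultType != 0
| summarize failures = count() by TenantId, UserPrincipalName
| top 20 by failures
"""

response = client.query_workspace(
    workspace_id="managing-tenant-workspace-id",  # hypothetical placeholder
    query=query,
    timespan=timedelta(days=1),
)
for table in response.tables:
    for row in table.rows:
        print(row)
```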

Hybrid Scenarios. Azure Arc can also be integrated with Azure Lighthouse, adding a lot of value to the holistic overview I mentioned before. The main goal here is to provide the right governance to your customers even if they run private clouds or on-premises infrastructure. It is therefore an exciting approach for companies that still have a lot of legacy estate to migrate over the coming years but want to explore the benefits of a public cloud such as Azure.

To sum up, depending on your cloud maturity level, there are key native tools that improve how you deliver support, on your own or with the help of an MCSP. Azure, together with AWS, is one of the most important providers offering this level of capability today.


Enjoy the journey to the cloud with me…see you soon.

7 Rs – Seven roads to making the right decision when moving to the cloud

The number of AWS (Amazon), Azure (Microsoft) and GCP (Google) hyperscale data centers has been growing in recent years across many regions, supported by millions in investment in submarine cables to reduce latency. Southern Europe is no exception: just look at Italy, Spain and France to see what is happening.

Public cloud providers know that many customers will massively move thousands of services in the coming years. The process started only a few years ago, but the pandemic and the need to deliver remote services, to analyse data faster and more efficiently, the explosion of sensors measuring almost everything in our lives, and a global market where you beat competitors on any continent through innovation have accelerated it even more.

There are 7 Rs that help CIOs and CTOs decide what makes sense to move to the cloud and what does not, what the priority is and, moreover, the impact and effort required to transform their business.

The AWS perspective on moving IT services to the cloud

Moving to the cloud with a clear perspective on the outcomes and goals you want to achieve will bring value to your customers, provided you evaluate each of your IT services carefully so you can make decisions aligned with your business. Some applications may be retired, others will enter a modernization cycle, and others will simply be resized to reduce cost and improve resilience.

Let's walk through the 7 Rs, from the simplest to the most complex scenario:

Retire. Some applications are not used any more; maybe just a couple of users need to run some queries from time to time. In that case it may be better to move the old data to a data warehouse and retire the application.

Retain. This literally means "do nothing at all". Maybe the application uses an API or backend from an on-premises solution with compliance limitations, or maybe it was recently upgraded and you want to amortize that investment for a while.

Repurchase. Here you have the opportunity to change the IT solution. Say you are not happy with your on-premises firewall and you think it is better to switch to a different vendor with better firewall support for AWS or Azure, or even to move some applications from IaaS to SaaS.

Relocate. For example, relocate the ESX hypervisor hosting your database and web services to VMware Cloud on AWS / Azure / GCP, or move your legacy Citrix server running Windows 2008 R2 to a dedicated host on AWS.

Rehost. This means lift and shift: move a set of VMs with clear dependencies between them to the cloud, simply to get better backup, cheaper replication across several regions, and right-sized compute to reduce cost.

Replatform. Lift and optimize your application in some way. For instance, you move your web services from a farm of VMs on VMware with a hardware load balancer on-premises to an external load-balancing service on Azure with App Services, where you can host your business logic and migrate your PHP or Java application. You no longer have to worry about operating system patching or security at the Windows Server level, and you can even eliminate the Windows operating system license.

Refactor. The most complex scenario. You have a big monolithic database with lots of applications reading and writing that data heavily. You need to move the applications and the monolithic database and modify the architecture, taking full advantage of cloud-native features to improve performance and scalability and to reduce risk: in the monolith, any failure in a component provokes a general failure. Here you need to decouple your components and move to microservices sooner or later.

I hope these strategies for moving your applications to the cloud are clearer now, so you can stay laser-focused on your needs and find the best approach for each of them.

To sum up, use the right tools to evaluate your on-premises applications and IT services and, based on the 7 Rs, choose the most suitable journey to the cloud for each one.

Don't forget to leverage the full potential of the CAF (Cloud Adoption Framework) https://wordpress.com/post/cloudvisioneers.com/287 that I mentioned earlier in my blog, together with the 7 Rs strategy.


Enjoy the journey to the cloud with me…see you soon.

AWS Security Hub (I). The orchestra conductor that protects your IT solutions in the cloud

AWS Security Hub is a great way to gather findings from several AWS services as well as from security partners such as Sophos, Barracuda or Splunk. It brings fresh air to the AWS strategy for protecting your data against attackers.

If you look at the current AWS security services you might think they work on their own rather than as a team, even more so if you come from Azure, where Security Center and Sentinel combine their capabilities very clearly.

In the case of AWS, you have to figure out how to set up the right approach, leveraging at least the potential of three AWS security services that feed data into Security Hub:

  • Amazon GuardDuty (likely combined with Amazon Detective)
  • Amazon Macie
  • AWS Identity and Access Management (with IAM Access Analyzer)

You might even include AWS Firewall Manager. Anyway, just to show a first approach to how each service connects to Security Hub, I've drawn what I think sums up how they can interact together and move things in the right direction.

AWS Security Hub as the central piece of our puzzle

AWS Security Hub receives a lot of information from several AWS services and provides specific dashboards in a very easy-to-use, comprehensive console, so your blue team can execute the right strategy to prevent and react to incidents or unusual user behaviour.
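As a small, hedged example of what that looks like from code (not a full deployment), the following boto3 sketch enables Security Hub in the current account and region and pulls the latest high-severity findings sent in by the integrated services:

```python
# Sketch: turn on Security Hub and read active high-severity findings with boto3.
import boto3
from botocore.exceptions import ClientError

securityhub = boto3.client("securityhub")

try:
    securityhub.enable_security_hub(EnableDefaultStandards=True)
except ClientError as err:
    # Already enabled is fine for this sketch.
    if err.response["Error"]["Code"] != "ResourceConflictException":
        raise

findings = securityhub.get_findings(
    Filters={
        "SeverityLabel": [{"Value": "HIGH", "Comparison": "EQUALS"}],
        "RecordState": [{"Value": "ACTIVE", "Comparison": "EQUALS"}],
    },
    MaxResults=20,
)["Findings"]

for finding in findings:
    print(finding["Title"], "|", finding.get("ProductName", finding["ProductArn"]))
```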

Don't get overwhelmed by the number of AWS security services and names; it's easier than you expect. Cognito, AWS Shield, Amazon Inspector and others are only used in specific scenarios.

So, based on the scenario above, let's dive into the different tools and how they feed data into Security Hub. Let's start:

GuardDuty. It's a threat detection service that you can enable when needed; it monitors for malicious or unauthorized behaviour by users or roles, for example unusual or failed API calls, unauthorized script or JSON deployments, or suspicious traffic to or from a virtual machine.

It takes its data from DNS logs, VPC Flow Logs and CloudTrail, which record the logins of users and roles, diagnostic logs and so on across your AWS accounts. Note that GuardDuty does not retain your logs; it just reads them, identifies findings and discards them. It works in the backend, so there is no performance impact. Finally, it is worth pointing out that AWS adds a further source of information, threat intelligence feeds from AWS and partners, which makes the service even more powerful and flexible.
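If you want to see those findings outside the console, a minimal boto3 sketch like the one below (detector lookup, a severity filter and a summary print; names and thresholds are illustrative) is enough to read them before they ever reach Security Hub:

```python
# Sketch: list high-severity GuardDuty findings in the current region with boto3.
import boto3

guardduty = boto3.client("guardduty")

detector_ids = guardduty.list_detectors()["DetectorIds"]
if not detector_ids:
    raise SystemExit("GuardDuty is not enabled in this region")

detector_id = detector_ids[0]

finding_ids = guardduty.list_findings(
    DetectorId=detector_id,
    FindingCriteria={"Criterion": {"severity": {"Gte": 7}}},  # high severity only
    MaxResults=25,
)["FindingIds"]

if finding_ids:
    for finding in guardduty.get_findings(
        DetectorId=detector_id, FindingIds=finding_ids
    )["Findings"]:
        print(finding["Severity"], finding["Type"], finding["Title"])
```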

Amazon Macie. It uses machine learning to discover, classify and protect the data you have at rest in thousands of S3 buckets, so you can understand what data you have and how your users and roles are accessing it.

It works by raising alerts on critical information that is unprotected or somehow exposed to the bad guys, and it combines this with CloudTrail information to see whether someone is trying to exploit the hole or vulnerability.
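Here is a hedged boto3 sketch of that flow: it checks that Macie is enabled and then lists its most recent findings, the same ones Macie can forward to Security Hub. The exact response fields may vary slightly by API version.

```python
# Sketch: read recent Amazon Macie findings with boto3 (macie2 API).
import boto3

macie = boto3.client("macie2")

status = macie.get_macie_session()["status"]
print("Macie status:", status)  # expected ENABLED once the service is turned on

finding_ids = macie.list_findings(
    sortCriteria={"attributeName": "updatedAt", "orderBy": "DESC"},
    maxResults=10,
)["findingIds"]

if finding_ids:
    for finding in macie.get_findings(findingIds=finding_ids)["findings"]:
        print(finding["severity"]["description"],
              finding["type"],
              finding.get("resourcesAffected", {}).get("s3Bucket", {}).get("name"))
```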

AWS IAM Access Analyzer. First, for those with no AWS experience, let's explain what IAM is. AWS Identity and Access Management (IAM) is a web service that helps you securely control access to AWS resources. You use IAM to control who is authenticated (signed in) and authorized (has permissions) to use resources. With that introduction done, let's focus on this service.

IAM Access Analyzer is based on zones of trust. When you enable Access Analyzer, you can create an analyzer for all your AWS accounts or for just one account; the AWS organization or AWS account you choose is known as the zone of trust for that analyzer.

Once enabled, Access Analyzer analyzes the policies applied to all the supported resources in your zone of trust. After the first analysis, it re-analyzes these policies periodically; if a policy is added or modified, it analyzes the new or updated policy within about 30 minutes. If the tool finds an external entity, such as an AWS account belonging to another company, an AWS role or service, or even a federated user, it generates a finding that includes details such as the permissions granted and the possible risk of data compromise. You can fix the security hole and, if you want to confirm that the change you made to a policy resolves the access issue reported in a finding, rescan the resource using the Rescan link, so you are sure the issue is solved.
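The same loop can be driven from code. The short boto3 sketch below lists the active findings of an existing analyzer and then asks for a rescan of one resource, which is what the console's Rescan link does:

```python
# Sketch: read active IAM Access Analyzer findings and trigger a rescan with boto3.
import boto3

analyzer_client = boto3.client("accessanalyzer")

analyzers = analyzer_client.list_analyzers()["analyzers"]
if not analyzers:
    raise SystemExit("No IAM Access Analyzer has been created yet")

analyzer_arn = analyzers[0]["arn"]

findings = analyzer_client.list_findings(
    analyzerArn=analyzer_arn,
    filter={"status": {"eq": ["ACTIVE"]}},
)["findings"]

for finding in findings:
    print(finding["resource"], "shared with", finding.get("principal"))

# After tightening a policy, trigger the same rescan the console's Rescan link does.
if findings:
    analyzer_client.start_resource_scan(
        analyzerArn=analyzer_arn,
        resourceArn=findings[0]["resource"],
    )
```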

To recap, Amazon GuardDuty, Amazon Macie and IAM Access Analyzer are the pillars of the data and KPIs ingested into AWS Security Hub. AWS Firewall Manager, Amazon Detective or CloudWatch can, in some cases, add further value to the dashboards showing the security posture of your AWS organization or AWS account.

In the next post, now that you have a good understanding of several AWS security services, we'll explain how Security Hub works and why it is a big change in how security posture and compliance should be maintained.


NOTE: In my view, AWS Directory Service should be explained separately from the rest, as it relates to user authentication and authorization in Microsoft environments.

Moreover, AWS engineers tend to include those tools in the AWS security webinars, which in my opinion makes the AWS security picture more confusing for people who are just starting with the AWS cloud.

Enjoy the journey to the cloud with me…see you soon.

LEVERAGE THE AWS SECURITY POSTURE!… A STRATEGY WITH A SOLID BASIS

Understanding how to address issues and incidents is not easy, neither in an on-premises IT infrastructure nor in the cloud. AWS has a clear roadmap for prioritizing what is important and what is less so.

As with any cloud technology, there are native tools you can use to protect and monitor your IT services and to reduce their exposure and vulnerabilities, plus some quite powerful third-party security solutions that integrate well with each cloud workload.

AWS security areas and the tools provided to achieve the right security posture

But before all of that, let's start with the foundations of security posture.

AWS considers that there is a shared responsibility model between customers and the cloud. You may already know this, but for those who think the cloud-native security tools and the cloud provider will do all the work end to end, full stack, this point needs clarifying.

AWS is responsible for the security of its foundational services and global infrastructure. Customers are responsible for everything related to their workloads and IT solutions in the cloud, as well as everything related to the business logic and processes deployed on AWS data centers.

Security should evolve in parallel with your landing zone in the cloud. Many companies struggle here, trying to reduce their exposure to attackers after real-time services have already been deployed.

On top of this, AWS defines seven security principles to follow:

Least privilege – Reduce as much as possible the permissions and credentials you use on AWS for each IT service you deploy.

Monitor and troubleshoot your logs and metrics – Collect logs and critical metrics from your IT services with AWS native tools or tools from other vendors. Design and configure alerts and notifications so you can react to security issues in time.

Secure all layers – From storage and networking up to the applications. To reach the right security level you need to build the security posture layer by layer. For example, protecting data at rest and in transit, filtering network traffic from origin to destination, or deciding how an API is authorized to read information from your applications are all part of your global, full-stack hardening strategy. To achieve that, combine the right AWS services.

Automate as much as possible – Standardizing blueprints for your infrastructure using IaC is part of this goal, as is automating repetitive tasks such as accessing applications or databases. Remember my previous post about this. Beware of automation silos and workarounds; this should be a holistic approach across your hybrid or public cloud scenario.

Protect data in transit and at rest – This is really part of the "secure all layers" principle, but AWS finds it so crucial that it calls it out on its own. To achieve this goal you need to identify, classify and tag the relevant items and data in the most suitable way.

Security events as a solid pillar – To react to an attack and provide real reliability you need to design a repository that gathers logs and metadata. Collect the events, prepare a tracking strategy and define granular alarms.

Reduce the attack surface – If you have the right guardrails and the right desired-state configuration based on automation, and you combine native AWS security tools with third-party providers layer by layer, you can minimize attacks and be ready to react.
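To make the "security events" and "monitoring" principles a little more concrete, here is a hedged boto3 sketch that builds the event repository in its simplest form: a multi-Region CloudTrail trail writing to an S3 bucket you already own. The trail and bucket names are placeholders.

```python
# Sketch: create a multi-Region CloudTrail trail as a basic security-event repository.
import boto3

cloudtrail = boto3.client("cloudtrail")

trail = cloudtrail.create_trail(
    Name="security-events-trail",              # hypothetical name
    S3BucketName="my-security-events-bucket",  # bucket must already exist with the right policy
    IsMultiRegionTrail=True,
    EnableLogFileValidation=True,
)

# A trail does nothing until logging is started.
cloudtrail.start_logging(Name=trail["Name"])
print("Trail ARN:", trail["TrailARN"])
```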


So what kind of native security tools can I use to put those principles into practice?

We can use tools such as Amazon GuardDuty to detect threats, CloudWatch Events to monitor and track your logs, and Security Hub to consolidate security automation, compliance and regulations. We can even use Amazon Macie to review the protection of our data at rest, or leverage AWS Control Tower when starting the journey to the AWS cloud.

In the next post we will see how to use these tools and what dependencies and relationships exist between them to deliver best-in-class hardening of your cloud IT services.

Enjoy the journey to the cloud with me…see you soon.

Automation, a key recipe for being a best-in-class cloud company that many CIOs forget…

Automation is key to improving infrastructure standardization in order to speed up deployments, replicate the same environment several times, and reduce misconfigurations that are not aligned with regulatory or security policies.

It also lets you react faster to the market, whether with new web services in a region where you are expanding your business or with attractive new features to sell your products and services to customers worldwide.

Moreover, it is crucial for getting the most out of performance and reliability and for increasing productivity, which directly impacts cost. If you read between the lines, we are talking about the Well-Architected Framework, a popular concept nowadays…

But can we consider automation to be just a few runbooks and scripts here and there, solving specific issues in our private or public cloud?

Milind Govekar, research vice president at Gartner, said in 2016 that IT organizations need to move from opportunistic to systematic automation of IT processes.

Talking about opportunistic automation, he remarked: "Most current use of automation in IT involves scripting. Scripts are more fragile than agile. What you end up with is disconnected islands of automation, with spaghetti code throughout the organization, when what you need is a systematic, enterprise-wide lasagne."

Therefore, it is crystal clear: let's treat automation as a centralized, systematic approach that covers all the aspects and pain points I mentioned before. Automation as part of our operational excellence and our security posture, improving reliability and resilience and reducing cost. In summary, automation as a solid requirement of the WAF strategy within our organization or company.


But how can we approach automation in our current hybrid model?

Well, we need to choose the tool, but it depends on the environments you have to maintain: your infrastructure, and how many clouds, public or private, you are using. To be honest, the complexity of your technologies, your current infrastructure and your daily DevOps effort determine much of the approach.

Let's look at the best options on the market:

Azure DevOps & ARM & PowerShell – the right technologies for providing automation with a systematic strategy, not just through ARM template deployments alone but also through alternatives such as PowerShell tasks or YAML pipelines. You get RBAC, you get traceability, and you consolidate and deploy all your automation actions from one place, with a single pane of glass. On top of that, it works in perfect harmony with GitHub.

Furthermore, you can combine these solutions with others such as Azure Arc, Azure Security Center or Azure Monitor to achieve a suitable Well-Architected Framework for your platform.
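This is not the Azure DevOps pipeline task itself, but the same idea can be sketched programmatically: the hedged Python example below deploys a tiny inline ARM template with the azure-mgmt-resource SDK. The subscription, resource group, names and API versions are placeholders to adapt.

```python
# Sketch: deploy a minimal ARM template with the azure-mgmt-resource SDK.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

subscription_id = "00000000-0000-0000-0000-000000000000"  # placeholder
client = ResourceManagementClient(DefaultAzureCredential(), subscription_id)

template = {
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "resources": [{
        "type": "Microsoft.Storage/storageAccounts",
        "apiVersion": "2021-04-01",
        "name": "mcspdemostorage001",   # must be globally unique
        "location": "westeurope",
        "sku": {"name": "Standard_LRS"},
        "kind": "StorageV2",
    }],
}

poller = client.deployments.begin_create_or_update(
    resource_group_name="rg-automation-demo",   # placeholder resource group
    deployment_name="arm-sketch",
    parameters={"properties": {"mode": "Incremental", "template": template}},
)
print(poller.result().properties.provisioning_state)
```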

Terraform – Another strong solution on the market, and a leader in consolidating automation in multi-cloud environments, since it works with a huge number of providers and plugins. Just take a look at this incredible list: Browse Providers | Terraform Registry

It is a great IaC (Infrastructure as Code) approach for complex environments, as it can work with your Active Directory (announced a few days ago), or with AWS, Alibaba, GCP, VMware, etc.

The new Windows Active Directory (AD) provider for Terraform allows admins and sysops to manage users, groups and group policies in your AD installation. It is very flexible in terms of versioning code in GitHub and allows changes to be tracked and audited easily.

CloudFormation – AWS CloudFormation is a framework for provisioning your cloud resources with infrastructure as code within AWS accounts. Specifically, a CloudFormation template is a JSON- or YAML-formatted declarative text file where you define your cloud resources. AWS describes it like this: "CloudFormation enables you to create and provision AWS infrastructure deployments predictably and repeatedly. It helps you leverage AWS products to build highly reliable, highly scalable, cost-effective applications in the cloud without worrying about creating and configuring the underlying AWS infrastructure."
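As a quick, hedged illustration (not a production template), the sketch below hands a tiny inline YAML template to CloudFormation with boto3 and waits for the stack to finish; the stack name is a placeholder.

```python
# Sketch: create a minimal CloudFormation stack from an inline YAML template.
import boto3

TEMPLATE = """
AWSTemplateFormatVersion: '2010-09-09'
Description: Minimal example stack
Resources:
  DemoBucket:
    Type: AWS::S3::Bucket
"""

cloudformation = boto3.client("cloudformation")

cloudformation.create_stack(
    StackName="demo-iac-stack",  # hypothetical name
    TemplateBody=TEMPLATE,
)

# Block until CloudFormation reports CREATE_COMPLETE (or raise if it fails).
waiter = cloudformation.get_waiter("stack_create_complete")
waiter.wait(StackName="demo-iac-stack")
print("Stack created")
```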

AWS also offers Control Tower, somewhat similar in spirit to the Ansible Tower I'll introduce below, which works across AWS accounts and regions. It relies on a more advanced CloudFormation feature, StackSets: AWS CloudFormation StackSets extend the functionality of stacks by letting admins and sysops create, update or delete stacks across multiple accounts and regions with a single operation.

Ansible – It is defined as "a simple open source IT engine which automates application deployment, intra service orchestration, cloud provisioning and many other IT tools." Ansible works very well with Red Hat OpenShift and OpenStack and has a centralized solution called Ansible Tower to orchestrate your IT infrastructure with a visual dashboard including RBAC. Ansible uses playbooks: a playbook is a configuration file written in YAML that provides instructions for what needs to be done to bring a managed node into the desired state. It works on open source platforms and hardware solutions, integrating the modules listed here: All modules — Ansible Documentation

Chef and Puppet – Also quite widespread, but more related to DevOps and CI/CD on complex environments to achieve DSC (Desired State Configuration). Both are configuration management tools. Chef is similar to the previous approaches in that it uses JSON or YAML declarative text files and is more focused on supporting administrators: it uses recipes and cookbooks, through a Chef server, to push standardized IaC to other VMs from scratch, while Puppet focuses more on enforcing criteria and controls throughout the VM's life. These alternatives are not so popular in enterprise companies nowadays, and private and public clouds are adopting the other automation solutions above.

To summarize, your automation strategy depends on your platform or platforms (which determine whether you need a holistic tool or not), on whether your goals lean more towards DevOps or towards the Well-Architected Framework, and on how complex your environment is.

What is beyond discussion, and a solid trend, is applying a systematic automation strategy to reduce silos in your daily infrastructure deployments. And please keep an eye on those scripts scattered here and there fixing issues.

In the next post we will see how automation works as IaC with Azure DevOps.

Enjoy the journey to the cloud with me…see you then in the next post.

Containerization, about to become the rock star on the stage

The CNCF (Cloud Native Computing Foundation) couldn't be clearer in its 2020 survey report:

The use of containers in production has increased to 92%, up from 84% last year, and up 300% from our first survey in 2016. Moreover, Kubernetes use in production has increased to 83%, up from 78% last year.

Regarding the usage of cloud native tools, there are also some clear trends:
• 82% of respondents use CI/CD pipelines in production.
• 30% of respondents use serverless technologies in production.
• 27% of respondents use a service mesh in production, a 50% increase over last year.
• 55% of respondents use stateful applications in containers in production.

What happens when a company adopts containers just for testing? In less than two years, containers end up in pre-production and production as well.

Why is containerization so widespread?

Here are some of the reasons I see.

DevOps friendly – There are some reasons, clear as water. Almost all big companies in the enterprise segment already have a DevOps CI/CD strategy deployed, so they have realised that integrating builds and delivery versions with containers is quite agile and effective: the runtime can be isolated easily and does not depend on an operating system, which makes it simple to compare the latest software versions with their different libraries. To summarize, you can spin up several pods with containers very quickly, ready to test two or three versions of your product with their libraries, plugins, package managers or other artifacts, depending on the version and the features, UX, bugs or performance you want to test. All aligned with your preferred repository solution: Bitbucket, Git, GitHub, etc.

Multicloud – Another solid fact is that Kubernetes runs on any cloud, private or public, and you can orchestrate clusters with nodes wherever you want, without limitations on storage, compute or location. You even have a good number of tools at your disposal to orchestrate containers, not just Kubernetes but also Docker Swarm. Note that Docker as a simple container runtime was a clear trend in the RightScale 2019 survey; now (and this is how fast technology changes from one day to the next) Docker as an underlying runtime is being deprecated in favor of runtimes that use the Container Runtime Interface (CRI) created for Kubernetes. Anyway, Docker is still a useful tool for building containers.
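A small sketch of that multicloud idea, assuming the official kubernetes Python client and a kubeconfig that already contains one context per cluster (say an AKS, an EKS and an on-prem cluster); it simply counts the pods running in each one:

```python
# Sketch: the same client code talks to clusters on any cloud, one kubeconfig context each.
from kubernetes import client, config

# Context names are placeholders; use the ones from `kubectl config get-contexts`.
contexts = ["aks-production", "eks-production", "onprem-lab"]

for ctx in contexts:
    config.load_kube_config(context=ctx)
    v1 = client.CoreV1Api()
    pods = v1.list_pod_for_all_namespaces(watch=False)
    print(f"{ctx}: {len(pods.items)} pods running")
```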

Cost savings – You can roll out microservices on demand without investing a euro in hardware if you go for a pure cloud solution. Just create your pods or plain containers and kill them when you want. Pay as you go, pure OPEX: that means reducing CAPEX on hardware and licenses and forgetting about amortization.

Retire your legacy applications at your own pace – On one hand, big companies want to reduce their legacy applications: they need to eliminate monolithic applications, which tend to be business-critical, run old software versions, depend on specific hardware and licences, and suffer from poor performance and scalability. On the other hand, they are more committed than ever to the "cloud first" principle for new IT services, because they need to be global, reduce cost and improve resiliency, and many CIOs know that the public cloud brings those advantages from the start.

Security – Last but not least. Containerization reduces the exposed surface of your applications, takes operating system bugs out of the equation, and lets your software quality team and your CISO keep known library vulnerabilities under control. Networking is also an area where you can watch out for the bad guys: traffic flows in and out of the containers, and you can configure with granularity what is allowed and what is not. Finally, you can monitor the whole microservices solution with open source tools, the cloud providers' integrated tools or more veteran third-party solutions.

In the next post we will look at the differences and similarities between AKS and EKS.

Enjoy the journey to the cloud with me…see you then in the next post.

Testing Azure File Sync: a headquarters file server and a branch file server working together

Microsoft hybrid scenario with Azure File Sync

As I showed in the previous post, Azure File Sync needs an agent installed on your Windows file servers, either through Windows Admin Center or directly, by downloading it and installing it on the file server. Once that is done you can start to leverage the power of this feature in your global environment, but please note that the agent is currently only available for the following file server operating systems.



Remember that to create an Azure File Sync deployment you need a storage account, as we created in the previous post (preferably general-purpose v2), a file share, and the agent installed on the file servers where you want to share data. Then, as we did, you can configure the cloud endpoint and the server endpoints within your sync group in the Azure portal.
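If you prefer code to the portal for the file-share prerequisite mentioned above, here is a hedged sketch with the azure-storage-file-share SDK; the connection string and share name are placeholders, and the sync group itself is still configured in the portal in this post.

```python
# Sketch: create the Azure file share that will act as the cloud endpoint.
from azure.storage.fileshare import ShareClient

connection_string = "<storage-account-connection-string>"  # placeholder

share = ShareClient.from_connection_string(
    conn_str=connection_string,
    share_name="branch-sync",  # hypothetical share used as the cloud endpoint
)
share.create_share()
print("File share ready:", share.share_name)
```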

Add the servers from your various branches and, obviously, your headquarters file server…

Verify that your servers are synchronized…

Proceed to create a file on your local headquarters file server…

In our example it is automatically replicated to the branch file server…

And if you look at the file share in the Azure portal, you can see all the files from both servers (one at headquarters, the other at your branch) replicating their data to the file share in Azure…

Now imagine you have users all over the world: you need your employees to work on the same page, with flexibility and on demand, you need a backup of the day's work from time to time and a disaster recovery strategy, and you also want to empower your users to be more productive remotely, from anywhere, with their macOS or Windows laptops.

You can have users all around the world, on several operating systems (macOS, Windows 7, 8.1 or 10, and Linux: Ubuntu, Red Hat or CentOS), working with the same files through any protocol available on Windows Server to access your data locally, including SMB, Network File System (NFS) and File Transfer Protocol Service (FTPS). For them, where the files actually live is transparent.

But what about performance and scalability? Well, you can create as many sync groups as your infrastructure demands. Just be aware that you should design and plan around the amount of data, the resiliency you need and the Azure regions where you are expanding your business. In any case, it is important to understand how our data is replicated:

  • Initial cloud change enumeration: when a new sync group is created, initial cloud change enumeration is the first step to run. In this process, the system enumerates all the items in the Azure file share. During this process there is no sync activity, i.e. no items are downloaded from the cloud endpoint to the server endpoint and no items are uploaded from the server endpoint to the cloud endpoint. Sync activity resumes once initial cloud change enumeration completes. The rate is around 20 objects per second.
  • Initial sync of data from Windows Server to the Azure file share: many Azure File Sync deployments start with an empty Azure file share because all the data is on the Windows Server. In these cases, the initial cloud change enumeration is fast and most of the time is spent syncing changes from the Windows Server into the Azure file share(s).
  • Set up network limits: while sync uploads data to the Azure file share there is no downtime on the local file server, and administrators can set up network limits to restrict the amount of bandwidth used for background data upload. Initial sync is typically limited by the initial upload rate of 20 files per second per sync group.
  • Namespace download throughput: when a new server endpoint is added to an existing sync group, the Azure File Sync agent does not download any file content from the cloud endpoint. It first syncs the full namespace and then triggers a background recall to download the files, either in their entirety or, if cloud tiering is enabled, according to the cloud tiering policy set on the server endpoint.
  • Cloud tiering enabled: if cloud tiering is enabled, you are likely to see better performance, since only part of the file data is downloaded. Azure File Sync only downloads the data of cached files when they change on any of the endpoints. For tiered or newly created files, the agent does not download the file data and instead only syncs the namespace to all the server endpoints. The agent also supports partial downloads of tiered files as they are accessed by the user.

Here is an example with a 25 MB file. Synchronization was almost immediate because the sync group was already set up. If we upload a file to Folder 02 on our headquarters file server, you can see it on the branch in Folder 01 in a matter of seconds, or even less, depending on the configuration, as we said…


Azure Files supports locally redundant storage (LRS), zone-redundant storage (ZRS), geo-redundant storage (GRS) and geo-zone-redundant storage (GZRS). The Azure Files premium tier currently supports only LRS and ZRS. That gives you incredible potential to replicate data to several regions around the world, depending on the resilience you need and with solid granularity.


In the next post we'll see how to integrate Azure Files with Azure AD, or how to enhance your Windows VDI strategy with FSLogix app containers. See you then…

How to consolidate data for headquarters and branches in a smart way: Windows Admin Center and Azure File Sync

Windows Admin Center on Microsoft Docs

As I mentioned in a previous post explaining different ways to work collaboratively when your company has employees all over the globe https://cloudvisioneers.com/2020/06/06/how-to-consolidate-data-for-small-branches-globally-and-work-together-with-few-investment-i/ , one of the best options in terms of simplicity for sharing data is Azure File Sync, as it lets you consolidate data from several file servers. And now, in preview, you can set up a whole folder-sync strategy from Windows Admin Center to consolidate data from branches and headquarters onto a file share that captures all the changes made during the working day.


The Azure hybrid services can be managed from Windows Admin Center, where all your integrated Azure services live in a centralized hub.


But what kind of tasks can we do?

  1. Set up a backup and disaster recovery strategy – You can define which data to back up from your on-premises file servers to the cloud and its retention, and also define a strategy to replicate your Hyper-V VMs, all from Windows Admin Center.
  2. Plan a storage migration – Identify and migrate storage from on-premises to the cloud based on your previous assessment; not just data from Windows file servers and Windows shares, but also Linux SMB shares served by Samba.
  3. Monitor events and track logs from on-premises servers – It is quite interesting to collect data from on-premises servers into a Log Analytics workspace in Azure Monitor. You can then customize queries very flexibly to figure out what is happening on those servers and when (see the sketch after this list).
  4. Update and patch your local servers – With Azure Update Management you have the right solution, based on Automation, to manage updates and patches for multiple virtual machines on-premises, or even bare metal.
  5. Consolidate daily work from a distributed data environment without limitations on storage or location – As mentioned before, you can use Windows Admin Center to set up a centralized point for your organization's file shares in Azure, while keeping the flexibility, performance and compatibility of on-premises file servers. In this post we are going to explain this feature in more depth.
  6. Other features:
    1. Extend your control over your infrastructure with Azure Arc and configure it from Windows Admin Center so that, for example, you can apply Azure policies and regulations to your on-premises virtual machines and bare metal.
    2. Create VMs on Azure directly from the console.
    3. Manage VMs on Azure from the console.
    4. Even deploy Windows Admin Center in a VM on Azure and work with the console in the cloud.
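This is the sketch referenced in point 3 above: once the on-premises servers report to a Log Analytics workspace, a custom KQL query over the Heartbeat table shows which machines have stopped reporting. The workspace ID is a placeholder and the azure-monitor-query and azure-identity packages are assumed.

```python
# Sketch: find on-premises servers that have stopped sending heartbeats to Log Analytics.
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

query = """
Heartbeat
| summarize lastSeen = max(TimeGenerated) by Computer
| where lastSeen < ago(15m)
| order by lastSeen asc
"""

response = client.query_workspace(
    workspace_id="<log-analytics-workspace-id>",  # placeholder
    query=query,
    timespan=timedelta(days=1),
)
for table in response.tables:
    for computer, last_seen in table.rows:
        print(computer, "last heartbeat at", last_seen)
```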

Now that we know some of the important features available in Windows Admin Center, let's focus on the Azure File Sync configuration and see how it works.


Let's start by downloading Windows Admin Center from here

After installing it, you can open the console at the local URL https://machinename/servermanager, where you can browse information and details about your local or virtual machine and leverage lots of features to manage it.

If you click on the hybrid center you can configure an account in your Azure portal to connect to your Azure subscriptions. This creates an Azure AD application from which you can later manage gateway user and gateway administrator access if you want. To do so, you first need to register your Windows Admin Center gateway with Azure; remember, you only need to do this once per Windows Admin Center gateway.

You have two options: create a new Azure Active Directory application or use an existing one, in the Azure AD tenant of your choice.

Worth pointing out here: you can configure MFA for your user later, or create several groups of users for RBAC to work with your Windows Admin Center console. You will now have several wizards available to deploy the strategy that best suits your business case.

Let's configure all the parameters. It may take some time to respond; it is in preview right now (April 2021).

Choose an existing Storage Sync Service or create a new one…

Prepare the Azure File Sync agent, which needs to be installed on your Windows Admin Center host machine, and hit the "Set up" button…

And now register your server with the Storage Sync Service…

We have now registered our new server with the Azure File Sync service on Azure, so it can synchronize data on a file share with other file servers located all over the world.

Registered servers in my Azure subscription

In the next post we'll configure the remaining steps to consolidate a global repository using this file share on Azure, so that all employees, no matter where they work, stay on the same page with the rest of the branches and headquarters.

See you then…

Windows Admin Center to rule hybrid cloud solutions on Azure and on-prem

Windows Admin Center is a free tool to configure and manage your Windows and Linux servers, on-premises as well as in the cloud. It doesn't matter whether you want to monitor performance on lots of servers, maintain TCP services for your network or Active Directory validations, check the security update history on several servers, manage storage at scale (even across hardware from partners such as HPE or Dell EMC) or, last but not least, migrate VMs to the cloud.


A single pane of glass for working with your infrastructure in a hybrid model increases efficiency, reduces human error and improves response times when needed.


Previously there were lots of MMC consoles for managing the various aspects of daily sysops tasks. Windows Admin Center solves that problem and also provides a unified approach to managing a hybrid cloud, something not all cloud providers are addressing right now. And believe me, it's pretty useful.

What are the most important features this tool brings?

  • Extension to integrate server hardware details – Let's say you need to know the health of the components of your on-prem HPE, Fujitsu, Lenovo or Dell EMC servers. There are now extensions that surface all this information so you can check power supply status, memory usage, CPU, storage capacity and other details. Even integration with iLO or iDRAC, for example, is possible through Windows Admin Center.
  • Active Directory extension – It is crucial for a sysop to maintain Active Directory and to handle common tasks such as:
    • Create a user
    • Create a group
    • Search for users, computers, and groups
    • Details pane for users, computers, and groups when selected in the grid
    • Global grid actions for users, computers, and groups (disable/enable, remove)
    • Reset user password
    • User objects: configure basic properties & group memberships
    • Computer objects: configure delegation to a single machine
  • Manage a DHCP scope – Another cool option; the DHCP extension allows you to manage connected devices on a computer or server:
    • Create/configure/view IPv4 and IPv6 scopes
    • Create address exclusions and configure the start and end IP address
    • Create address reservations and configure the client MAC address (IPv4), DUID and IAID (IPv6)
  • DNS extension – allows you to manage DNS on a computer or server:
    • View details of DNS forward lookup zones, reverse lookup zones and DNS records
    • Create forward lookup zones (primary, secondary, or stub) and configure forward lookup zone properties
    • Create host (A or AAAA), CNAME or MX DNS records
    • Configure DNS record properties
    • Create IPv4 and IPv6 reverse lookup zones (primary, secondary and stub), and configure reverse lookup zone properties
    • Create PTR and CNAME DNS records under a reverse lookup zone
  • Updates – allows you to manage Microsoft and/or Windows updates on a computer or server:
    • View available Windows or Microsoft Updates
    • View a list of update history
    • Install Updates
    • Check online for updates from Microsoft Update
    • Manage Azure Update Management integration

To summarize, Microsoft now supports daily sysops duties through Windows Admin Center.

  • Storage extensions – allow you to manage storage devices on a computer or server. For example, let's say you want to use the Storage Migration Service because you have a server (or a lot of servers) that you want to migrate to newer hardware or to virtual machines on Azure. You can install the Storage Migration Service extension on Windows Server 2019 (version 1809) or a later operating system; earlier OS versions don't have this extension available. With this extension you can do cool stuff such as:
    • Inventory multiple servers and their data
    • Transfer files, file shares, and security configuration from the source servers to the destination, even from Linux Samba shares if needed
    • Optionally take over the identity of the source servers (also known as cutting over) so that users and apps don't have to change anything to access existing data
    • Manage one or multiple migrations in parallel from the Windows Admin Center user interface
  • Create virtual machines on Azure – Windows Admin Center can deploy Azure VMs, configure their storage, join them to your domain, install roles and then set up your distributed system. This integrates VM deployment into the Storage Migration Service I was explaining above, so you don't need to go to the Azure portal or run a PowerShell script; you create and configure the VMs you need directly from the Windows Admin Center portal.
  • Integration with Azure File Sync – Consolidate shared files using Azure File Sync. This is pretty useful if you have lots of small branches and want to centralize all the daily documents in a cloud repository with backup included. We will explain how it works in the next post.

As you can see, Microsoft has made a big effort to provide a tool for daily tasks, such as maintaining TCP services or managing your data, whether it lives on-premises on a Fujitsu server with local disks, on a SAN, or in the cloud. It even helps you leverage cloud functionality to retire old servers and hardware.


See you then in the next post. I hope you enjoy the journey to the cloud…

Be hybrid, my friend: the global AWS vision

AWS reacted with a powerful answer to Google Anthos and to the Azure Stack "Fiji" project which, as I explained in the previous post, brought Azure Stack Hub, Azure Stack Edge and Azure Stack HCI onto the Microsoft stage. AWS Outposts is a compendium of technical solutions together with best-in-class AWS management and support. Outposts provides the same experience for applications as running in the cloud, and unified hybrid cloud management through the same APIs and management tools across on-premises and AWS infrastructure.

What does the AWS hybrid strategy look like?

On one hand, AWS knows that the battle for the legacy applications and monolithic workloads that will remain in the backbone of business logic for a few more years is a key factor. Beyond that, they focus on four scenarios: cloud bursting, backup and disaster recovery, distributed data processing, and geographic expansion.


Scenarios to leverage the AWS cloud

Cloud bursting is an application deployment model in which the application primarily runs on on-premises infrastructure but draws on AWS resources when it needs more performance or more storage. Think of an HPC scenario using Fargate, or a migration of legacy applications to containers on ECS or EKS.

Backup and disaster recovery, where the customer sets up business continuity strategies that improve resilience, data durability and even high availability, for example archiving and data tiering with S3.

Distributed data processing, to take data from near-real-time or batch processes in your company and transform it quickly and cost-effectively on AWS, using, for example, Kinesis Data Firehose together with a data lake, or a data warehouse strategy built on Redshift.

Finally, geographic expansion, which unlocks tremendous potential when you use global database approaches (SQL or NoSQL), backing your data with DynamoDB or Aurora.

On the networking side, you can extend your existing Amazon VPC to your Outpost in your on-premises location. After installation, you can create a subnet in your regional VPC and associate it with an Outpost, just as you associate subnets with an Availability Zone in an AWS Region. Instances in Outpost subnets communicate with other instances in the AWS Region using private IP addresses, all within the same VPC.
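As a hedged boto3 sketch of that association (the VPC ID, CIDR block, Outpost ARN and Availability Zone below are placeholders from a hypothetical environment), creating an Outpost subnet looks just like creating any other subnet, plus the OutpostArn parameter:

```python
# Sketch: create a subnet in the regional VPC and associate it with an Outpost.
import boto3

ec2 = boto3.client("ec2")

subnet = ec2.create_subnet(
    VpcId="vpc-0123456789abcdef0",
    CidrBlock="10.0.128.0/24",
    AvailabilityZone="eu-west-1a",  # must be the AZ the Outpost is anchored to
    OutpostArn="arn:aws:outposts:eu-west-1:111122223333:outpost/op-0123456789abcdef0",
)["Subnet"]

print("Outpost subnet created:", subnet["SubnetId"], subnet.get("OutpostArn"))
```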

For example, say you need to keep a data warehouse on-premises due to regulations, but from time to time you need HPC (high-performance computing) or even MPP (massively parallel processing) to run some calculations over a dataset, and you don't want to invest a lot of money in these seasonal estimations. The results are stored locally once they have been prepared and transformed into more accurate data in the cloud, and the cluster and its worker nodes are, obviously, shut down afterwards.

AWS helps you identify the right VM profiles for the hybrid workload you want to run.

Edge Computing

With Snowball Edge you can collect data in remote locations, use machine learning and processing, and store a first, curated set of data in environments with intermittent connectivity. There are three flavors: Snowball Edge compute, perfect, as I said, for IoT solutions; Snowball data transfer, to migrate massive amounts of information to the cloud; and Snowball Edge storage, as a first layer for your on-prem data before it is processed and moved to S3, for example.

AWS Outposts is fully managed and supported by AWS: your Outpost is delivered, installed, monitored, patched and updated by AWS. With Outposts you can reduce the time, resources, operational risk and maintenance downtime required to manage IT infrastructure.

As we mentioned for the Microsoft hybrid solution, AWS can also manage the whole infrastructure in a single pane of glass. Can you imagine the tremendous benefit to your customers, users and partners of being there when it's needed: reducing risk by eliminating single points of failure, reducing latency and improving business continuity, with better security and governance, and growing your go-to-market reach exponentially?

See you then in the next post, take care and stay safe…