Hands-on Lab – Minikube with Hyper-V on Windows Server 2019

Some time ago I was wondering how to explain, in a practical way, the big bundle of theory behind containerization and orchestrating containers with Kubernetes. I know how it works, I know the architecture and the most important benefits of application modernization with Kubernetes. So why not play with Minikube a little bit and share some takeaways about Kubernetes, one of the most important actors in application modernization?

An all-in-one tool to test Kubernetes at your own pace

Therefore, it's good to set up some examples in your lab so you can play with the commands as well as with the concepts. As I mentioned before, we will be using Minikube, a tool for testing Kubernetes with an "all-in-one" architecture: a master node and a worker node running together, in my case on Windows. So I started by downloading minikube.exe from here.

To summarize, you have one or more master nodes in charge of managing several worker nodes. Developers deploy application images on pods, usually with just one container inside (a pod can have more than one container, but it is not common). A container is like a small package with an application, its libraries and the dependencies it needs, running isolated from the operating system (so you don't need an OS licence per container). Pods are the smallest compute unit in Kubernetes; the containers in a pod share networking and storage so they can work together on some kind of data on the same worker node, while the pods themselves can be distributed across worker nodes. You can kill or create pods based on your application requirements. They are ephemeral by nature, and how many should run during a period of time depends on the task to solve or the specific service they support.

Image source: phoenixnap.com KB article

For users, all of this scenario we have just explained is absolutely transparent. For example, Airbnb uses Kubernetes because the company can spread scalable clusters across several regions of the public cloud. So when there is a big event at the "World Trade Center" in Barcelona, thousands of queries hit Airbnb's Kubernetes services, which respond with specific results. Indeed, depending on how many people are asking for accommodation, the Kubernetes clusters will scale worker nodes and even replicate pods across several worker nodes to improve the user experience and answer as fast as possible, so users don't leave the website.

Stepping back to our lab, we start Minikube using the Hyper-V driver. This sets up a fresh Linux VM on Hyper-V with all the components needed to test the scenario we described above.
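
A minimal sketch of that start command, assuming minikube.exe is on the PATH and you run from an elevated PowerShell prompt (the virtual switch name is a placeholder; if you omit the flag, Minikube picks a default switch):

```powershell
# Create the all-in-one Kubernetes VM on Hyper-V
minikube start --driver=hyperv --hyperv-virtual-switch "Minikube Switch"
```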

We can check the components' status. The API server is the endpoint through which all the cluster components communicate. The scheduler (which decides which pod to place on which node based on resource requirements), the controller manager (which handles node failures and keeps the correct number of pod replicas) and the worker node components all talk to the API server. Also, pay attention below to the kubelet (in charge of the containers on each node; it talks to the API server as well). The kubeconfig file holds the credentials and access details to connect to your Kubernetes cluster. We need it here because we will point our kubectl command at the cluster to create or kill pods in a minute..
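
A few commands to check those components from the same prompt (a sketch; the output details vary by version):

```powershell
minikube status        # host, kubelet and apiserver state
kubectl cluster-info   # the API server endpoint all components talk to
kubectl get nodes      # the single Minikube node acting as master and worker
```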

So, here we are…Please also note we have a kube-proxy component..it is used to distribute and balance traffic to the services on each worker node, so that in the backend several pods can serve the requests..it is like a referee in soccer.

Just to recap: we have Minikube on Windows Server 2019 with Hyper-V, running with all the components needed to test some kubectl commands and create a simple pod in charge of answering queries for its service.

I've created a pod which is just going to wait for information (in JSON format)..
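
A hedged sketch of how such a pod can be created, using the echo-server image from the Minikube tutorials (the image and names in my lab may differ):

```powershell
# Deploy a pod that echoes back whatever HTTP request it receives
kubectl create deployment hello-minikube --image=kicbase/echo-server:1.0
kubectl expose deployment hello-minikube --type=NodePort --port=8080
kubectl get pods   # wait until STATUS shows Running
```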

Forward the port to 8080, so I can connect to the endpoint..
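
Assuming the service name from the sketch above:

```powershell
# Map local port 8080 to the service port so we can reach the endpoint
kubectl port-forward service/hello-minikube 8080:8080
```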

I previously installed a Postman trial. You know, developers love this tool; they need it to verify that the data they send to an endpoint (usually in JSON format) is consistent and works as they expect. Below I send some data in JSON format and, as you can see, the pod accepts it with a 200 OK code.
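
If you don't have Postman at hand, the same check can be sketched from PowerShell (the JSON body is just an example payload):

```powershell
$body = @{ customer = "cloudvisioneers"; items = 3 } | ConvertTo-Json
Invoke-WebRequest -Uri "http://localhost:8080" -Method Post `
    -Body $body -ContentType "application/json"   # StatusCode 200 = accepted
```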

In the next post we will look at AKS and why it has every chance of becoming a very important player in the application modernization market.

Enjoy the journey to the cloud with me…see you in the next post.

Testing Azure File Sync: a headquarters file server and a branch file server working together

Microsoft Hybrid scenario with Azure File Sync

As I showed in the previous post, Azure File Sync needs an agent installed on your Windows file servers, either through Windows Admin Center or directly, if you download and install it on the file server yourself. Once that's done you can proceed to leverage the power of this feature in your global environment, but please take into account that the agent is currently only available for the following operating systems:



Remember: to create an Azure File Sync deployment you need a storage account, as we did in the previous post (better general purpose v2), a file share, and the agent installed on the file servers where you want to share data. Then, as we did before, you can proceed to configure the cloud endpoint and the server endpoints within your sync group in the Azure Portal.
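
For reference, the same setup can be scripted with the Az.StorageSync PowerShell module; here is a hedged sketch (all resource names are placeholders, not the ones from my lab):

```powershell
# Create a sync group with a cloud endpoint (the Azure file share)
New-AzStorageSyncGroup -ResourceGroupName "rg-filesync" `
    -StorageSyncServiceName "sss-lab" -Name "sg-branches"

$sa = Get-AzStorageAccount -ResourceGroupName "rg-filesync" -Name "stfilesynclab"
New-AzStorageSyncCloudEndpoint -ResourceGroupName "rg-filesync" `
    -StorageSyncServiceName "sss-lab" -SyncGroupName "sg-branches" `
    -Name "cloud-endpoint" -StorageAccountResourceId $sa.Id `
    -AzureFileShareName "corpdata"

# One server endpoint per registered file server (headquarters and each branch)
$server = Get-AzStorageSyncServer -ResourceGroupName "rg-filesync" `
    -StorageSyncServiceName "sss-lab"
New-AzStorageSyncServerEndpoint -ResourceGroupName "rg-filesync" `
    -StorageSyncServiceName "sss-lab" -SyncGroupName "sg-branches" `
    -Name "hq-fileserver" -ServerResourceId $server[0].ResourceId `
    -ServerLocalPath "D:\CorpData"
```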

Add the servers from several branches and, obviously, your headquarters file server..

Verify your servers are synchronized..

Proceed to create a file on your local headquarters file server..

In our example, it is automatically replicated to the branch file server..

If you pay attention to the file share in the Azure Portal, you can even see all the files from both servers (one in the headquarters and the other on your branch file server) replicating their data to the file share in Azure..

Now imagine you have users all over the world. You need your employees working on the same page, with flexibility and on demand; you need a backup of each day from time to time and a disaster recovery strategy. Even more, you need to empower your users to be more productive remotely, from their macOS or Windows laptops, from anywhere.

You can have users all around the world, on several operating systems (macOS, Windows 7, 8.1 or 10, and Linux distributions such as Ubuntu, Red Hat or CentOS), working with the same files and leveraging any protocol available on Windows Server to access your data locally, including SMB, Network File System (NFS) and File Transfer Protocol Service (FTPS). For them, where the files live is transparent.

But what about performance and scalability?…Well, you can create as many sync groups as your infrastructure demands. Just be aware that you should design and plan around the amount of data, the resiliency you need and the Azure regions where you are extending your business. In any case, it is important to understand the way our data will be replicated:

  • Initial cloud change enumeration: when a new sync group is created, initial cloud change enumeration is the first step to execute. In this process, the system enumerates all the items in the Azure file share. During this step there is no sync activity, i.e. no items are downloaded from the cloud endpoint to the server endpoint and no items are uploaded from the server endpoint to the cloud endpoint. Sync activity resumes once initial cloud change enumeration completes. The rate is about 20 objects per second.
  • Initial sync of data from Windows Server to the Azure file share: many Azure File Sync deployments start with an empty Azure file share because all the data is on the Windows Server. In these cases, the initial cloud change enumeration is fast and the majority of time is spent syncing changes from the Windows Server into the Azure file share(s).
  • Set up network limits: while sync uploads data to the Azure file share, there is no downtime on the local file server, and administrators can set up network limits to restrict the amount of bandwidth used for background data upload (see the sketch after this list). Initial sync is typically limited by the initial upload rate of 20 files per second per sync group.
  • Namespace download throughput: when a new server endpoint is added to an existing sync group, the Azure File Sync agent does not download any of the file content from the cloud endpoint. It first syncs the full namespace and then triggers background recall to download the files, either in their entirety or, if cloud tiering is enabled, according to the cloud tiering policy set on the server endpoint.
  • Cloud tiering enabled: if cloud tiering is enabled, you are likely to observe better performance, as only some of the file data is downloaded. Azure File Sync only downloads the data of cached files when they change on any of the endpoints. For any tiered or newly created files, the agent does not download the file data; it only syncs the namespace to all the server endpoints. The agent also supports partial downloads of tiered files as they are accessed by the user.
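
About those network limits: the agent ships server-side cmdlets to schedule bandwidth caps. A hedged sketch, assuming the default agent install path (the cap value is just an example):

```powershell
# On the file server: throttle background upload during office hours
Import-Module "C:\Program Files\Azure\StorageSyncAgent\StorageSync.Management.ServerCmdlets.dll"
New-StorageSyncNetworkLimit -Day Monday, Tuesday, Wednesday, Thursday, Friday `
    -StartHour 9 -EndHour 18 -LimitKbps 10000   # ~10 Mbps cap
Get-StorageSyncNetworkLimit                     # verify the schedule
```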

Here I show an example with a 25 MB file. Synchronization was almost immediate, as the sync group was already set up. If we upload a file to Folder 02 on our headquarters file server, you can see it on the branch in Folder 01 as well, in a matter of seconds or even less, depending on the configuration, as we said..


Azure Files supports locally redundant storage (LRS), zone-redundant storage (ZRS), geo-redundant storage (GRS) and geo-zone-redundant storage (GZRS). The Azure Files premium tier currently only supports LRS and ZRS. That means incredible potential to replicate data to several regions of the world, depending on the resilience you need and with solid granularity.


In the next post we'll see how to integrate Azure Files with AAD, and how to enhance your Windows VDI strategy with FSLogix app containers. See you then..

How to consolidate data for headquarters and branches in a smart way: Windows Admin Center and Azure File Sync

Windows Admin Center on Microsoft docs

As I mentioned in a previous post explaining different alternatives for working collaboratively when your company has employees all over the globe (https://cloudvisioneers.com/2020/06/06/how-to-consolidate-data-for-small-branches-globally-and-work-together-with-few-investment-i/), one of the best options in terms of simplicity for sharing data is Azure File Sync, as you can consolidate data from several file servers. And now, as a preview, you can set up a whole folder sync strategy from Windows Admin Center, consolidating data from branches and headquarters on a file share, with all the changes made during the working day.


The Azure hybrid services can be managed from Windows Admin Center, where you have all your integrated Azure services in a centralized hub.


But what kind of tasks can we do?

  1. Set up a backup and disaster recovery strategy. – Yes, you can define which data to back up from your on-premises file servers to the cloud and set retention, as well as define a strategy to replicate your Hyper-V VMs using Windows Admin Center.
  2. Plan a storage migration. – Identify and migrate storage from on premises to the cloud based on your previous assessment. And not just data from Windows file servers or Windows shares, but also Linux SMB shares with Samba.
  3. Monitor events and track logs from on-premises servers. – It is quite interesting to collect on-premises data in a Log Analytics workspace in Azure Monitor. It is then very flexible to customize queries and figure out what is happening on those servers, and when.
  4. Update and patch your local servers. – With Azure Update Management you have the right Automation-based solution to manage updates and patches for multiple virtual machines on premises, or even bare metal.
  5. Consolidate daily work from a distributed data environment without limitations on storage or locations. – As we said before, you can use Windows Admin Center to set up a centralized point for your organization's file shares in Azure, while keeping the flexibility, performance and compatibility of on-premises file servers. In this post, we are going to explain this feature more in depth.
  6. Other features:
    1. Extend your control of your infrastructure with Azure Arc and configure it from Windows Admin Center so, for example, you can run Azure policies and configure regulations for your on-premises virtual machines and bare metal.
    2. Create VMs on Azure directly from the console.
    3. Manage VMs on Azure from the console.
    4. Even deploy Windows Admin Center in a VM on Azure and work with the console in the cloud.

Now that we know some of the important features you can use with Windows Admin Center, let's focus on the Azure File Sync configuration and see how it works.


Let's start by downloading Windows Admin Center from here.
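
If you prefer an unattended setup, a hedged sketch of the silent install (the MSI file name depends on the version you downloaded; SME_PORT and the certificate option are installer parameters documented by Microsoft):

```powershell
# Install Windows Admin Center on port 443 with a self-signed certificate
msiexec /i WindowsAdminCenter.msi /qn /L*v wac-install.log `
    SME_PORT=443 SSL_CERTIFICATE_OPTION=generate
```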

After installing, you can open the console at a local URL, https://machinename/servermanager, where you can browse information and details of your local or virtual machine and leverage lots of features to manage it.

If you click on the hybrid center you can configure an account in your Azure portal to connect to your Azure subscriptions. It will create an Azure AD application, from which you can manage gateway user and gateway administrator access later if you want. You will therefore first need to register your Windows Admin Center gateway with Azure. Remember, you only need to do this once per Windows Admin Center gateway.

You have two options: create a new Azure Active Directory application, or use an existing one in the AAD tenant of your choice.

One thing to point out here: you can configure MFA for your users later, or create several user groups for RBAC to work with your Windows Admin Center console. You will now have several wizards available to deploy the strategy that best suits your business case.

Let's start configuring all the parameters. It may take some time to respond; it is in preview right now (April 2021).

Choose an existing Storage Sync Service or create a new one..

Prepare the Azure File Sync agent, which needs to be installed on your Windows Admin Center host machine..and hit the "Set up" button.

And now register your server with the Storage Sync Service..

We have registered our new server with the Azure File Sync service on Azure, so it can synchronize data on a file share with other file servers located all over the world.
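
The same registration can also be done from PowerShell on the file server itself; a hedged sketch with the Az.StorageSync module (resource names are placeholders):

```powershell
# Run on the file server after installing the Azure File Sync agent
Connect-AzAccount
Register-AzStorageSyncServer -ResourceGroupName "rg-filesync" `
    -StorageSyncServiceName "sss-lab"
```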

Registered servers in my Azure subscription

In the next post we'll configure the following steps to consolidate a global repository using this file share on Azure, so that all the employees, no matter where they work, can be on the same page with the rest of the branches and the headquarters.

See you then..

Windows Admin Center to rule hybrid cloud solutions on Azure and on prem

Windows Admin Center is a free tool to configure and manage your Windows and Linux servers, on premises as well as in the cloud. It doesn't matter whether you want to monitor performance on lots of servers, maintain TCP services for your network, run Active Directory validations, check the security update history on several servers, manage storage massively (even across hardware from partner providers such as HPE, Dell EMC, etc.) or, last but not least, migrate VMs to the cloud.


A single pane of glass to work with your infrastructure in a hybrid model increases efficiency, reduces human errors and improves response time when needed.


Previously there were lots of MMC consoles to manage the different aspects of daily sysops tasks. Windows Admin Center solves this problem and also provides a unique approach to managing a hybrid cloud, something that not all cloud providers are addressing right now. And believe me, it's pretty useful.

What are the most important features this tool brings?

  • Extension to integrate server hardware details – Let's say you need to know the health of several components on your on-prem HPE, Fujitsu, Lenovo or Dell EMC servers. Now you have extensions to surface all this information and check the power supply status, memory usage, CPU, storage capacity and other details. Even an integration with iLO or iDRAC, for example, is possible through Windows Admin Center.
  • Active Directory extension – It is crucial for a sysop to maintain Active Directory and handle quite common tasks such as:
    • Create a user
    • Create a group
    • Search for users, computers, and groups
    • Details pane for users, computers, and groups when selected in the grid
    • Global grid actions for users, computers, and groups (disable/enable, remove)
    • Reset user passwords
    • User objects: configure basic properties & group memberships
    • Computer objects: configure delegation to a single machine
  • Manage a DHCP scope – Another cool option: the DHCP extension allows you to manage DHCP scopes on a computer or server.
    • Create/configure/view IPv4 and IPv6 scopes
    • Create address exclusions and configure start and end IP addresses
    • Create address reservations and configure client MAC address (IPv4), DUID and IAID (IPv6)
  • DNS extension – allows you to manage DNS zones and records on a computer or server.
    • View details of DNS forward lookup zones, reverse lookup zones and DNS records
    • Create forward lookup zones (primary, secondary, or stub), and configure forward lookup zone properties
    • Create Host (A or AAAA), CNAME or MX DNS records
    • Configure DNS record properties
    • Create IPv4 and IPv6 reverse lookup zones (primary, secondary and stub), and configure reverse lookup zone properties
    • Create PTR and CNAME DNS records under reverse lookup zones
  • Updates – allows you to manage Microsoft and/or Windows updates on a computer or server.
    • View available Windows or Microsoft Updates
    • View a list of update history
    • Install Updates
    • Check online for updates from Microsoft Update
    • Manage Azure Update Management integration

To summarize, Microsoft now supports daily sysops duties through Windows Admin Center.

  • Storage extensions – allow you to manage storage devices on a computer or server. For example, let's say you want to use the Storage Migration Service because you've got a server (or a lot of servers) that you want to migrate to newer hardware or to virtual machines on Azure. You can install the Storage Migration Service extension on Windows Server 2019 (version 1809) or a later operating system; previous OS versions don't have this extension available. With this extension you can do cool stuff such as:
    • Inventory multiple servers and their data
    • Transfer files, file shares, and security configuration from the source servers to the destination, including some Linux Samba repositories if needed
    • Optionally "copy and paste" the identity of the source servers (also known as cutting over) so that users and apps don't have to change anything to access existing data
    • Manage one or multiple migrations in parallel from the Windows Admin Center user interface
  • Create virtual machines on Azure – Windows Admin Center can deploy Azure VMs, configure their storage, join them to your domain, install roles, and then set up your distributed system. This integrates VM deployment into the Storage Migration Service I was explaining above, so you don't need to connect to the Azure Portal or run a PowerShell script; you create and configure the VMs you need directly from the Windows Admin Center portal.
  • Integration with Azure File Sync – Consolidate shared files using Azure File Sync. This is pretty useful if you have lots of small branches and want to centralize all the daily documents in a cloud repository, with backup included. We will explain how it works in the next post.

As you can see, Microsoft has made a big effort to provide a tool for daily tasks such as maintaining TCP services or managing your data, no matter if it lives on premises on a Fujitsu server with local disks, on a SAN or in the cloud. It even helps you leverage cloud functionality to retire old servers and hardware.


See you then in the next post. I hope you enjoy the journey to the cloud…

Fast and Furious: Azure Pipelines (2) – deploy your infra and apps or releases with automation..

Living in a world that moves faster than ever, tools focused on provisioning infrastructure, applications and mobile apps in an automated way are not just important but crucial to survive in a market where companies change their strategies from one week to the next. One region can be a company's primary market today, and tomorrow it's a different one.

The DevOps platforms of the most important providers have adopted this principle as native. Azure DevOps focuses on CI/CD like many of its competitors, but includes one secret weapon: the flexibility to deploy infra and apps in a matter of minutes, anywhere, anytime..with reliability and control.

Azure DevOps is compatible with Terraform (https://azure.microsoft.com/en-us/solutions/devops/terraform/) and with Ansible (https://docs.microsoft.com/en-us/azure/developer/ansible/overview) as ways to provide IaC (infrastructure as code). But it can also use its own ARM templates (https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/add-template-to-azure-pipelines) to create the infrastructure needed before deploying the mobile apps or releases that will sell our products in the new target market.
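
To make that concrete, this is roughly what such an ARM template deployment boils down to, sketched here with the Azure CLI, which the pipeline task runs on your behalf (resource group, template file and parameter names are placeholders):

```powershell
# Deploy an ARM template into a resource group
az login
az group create --name rg-webapp-lab --location westeurope
az deployment group create `
    --resource-group rg-webapp-lab `
    --template-file templates/webapp.json `
    --parameters appServicePlanSku=F1
```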

Finally, and quite interesting: as a software company you need to ensure that your code is secure and free of bugs..so you don't make life easier for hackers or governments who would be more than happy to learn your pillars in the market, your most important technology or your science secrets.

To solve this problem you can integrate, in a very flexible manner, tools like SonarQube: https://www.sonarqube.org/microsoft-azure-devops-integration/

Be aware that this is real: the latest case happened to SolarWinds just a few days before this post was published, causing a big impact on US security: https://www.reuters.com/article/us-global-cyber-usa-dhs-idUSKBN28O2LY

So after this clarification, let me tell you that Azure Pipelines can reduce your technical debt (the impact of repeatedly patching your code with easy, limited workarounds instead of using a better approach from scratch, causing delays, security holes and performance issues) as well as improve the speed to deliver anywhere, anytime.

So we are going to create a new release and choose a scenario I've created previously on Azure, where we have just a simple web app on an App Service plan (F1 tier, which is free, as it's just a small lab).

We hit "Create a new release"..choose our infra in an Azure tenant (this part is up to you)..and the build we want to deploy over there..

Right now we have our build prepared, and the infra where we want to deploy the software or release..

Just to remember: Microsoft provides two types of agents to run the jobs in charge of compiling the code or deploying the releases:

Microsoft-hosted agents, which run in Azure and are maintained by Microsoft to support your software projects.

Self-hosted agents, which you can install on VMs on premises or in a different private cloud, for example, to support your software projects.

Here we run a Microsoft-hosted agent:

We have the infra ready on Azure as well:

Finally, hit "Deploy"…and the magic happens..

You can see a sample Microsoft lab website appear on the web app quite quickly..

With this post we finish our global view of DevOps from the Microsoft perspective: a solid response to current times that solves many of the software life cycle challenges.

Enjoy the journey to the cloud with me…see you then in the next post.

Your code and your builds from anywhere, anytime – Azure Repos and Azure Pipelines (1)

It's not magic, but a very versatile tool that provides all you need to work remotely on your continuous integration. If you want a repository with security, SSO and integration with the best tools to work on your builds, and if you want a solution to automate the builds as well as the releases, you are in the right post.

First of all, and somehow starting from the end (yes, the end): you can choose Git, GitHub, Bitbucket (Atlassian), GitLab, etc. as the origin of the code for your builds. Yes, it is up to you where you keep your code. I wanted to point this out to show you how flexible the solution is.

So after this clarification, let me tell you that Azure Repos offers a Git repository by default. You will use a Gitflow approach, where you have the master branch on Azure Repos and several branches where developers solve issues, develop new features or fix bugs in a distributed way.

Then, after a pull request, some specific branch policies you may or may not put in place, and the approval of the stakeholders involved in the application's development, you merge your code into the master branch in the cloud, in your Azure Repos for that specific project.
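
A minimal sketch of that flow from a developer's machine (organization, project and branch names are placeholders):

```powershell
# Clone the Azure Repos repository and work on a short-lived branch
git clone https://dev.azure.com/MyOrg/MyProject/_git/MyRepo
cd MyRepo
git checkout -b feature/new-login-page
git add . ; git commit -m "Add new login page"
git push -u origin feature/new-login-page
# Then open a pull request in Azure Repos; after approval it merges into master
```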

In Repos you can maintain the code, JSON files and YAML files, clone, download or make changes from your Eclipse or Visual Studio client, for example.

Furthermore, you get tracking of the commits and pushes done on your project's code, as well as an overview of all the branches currently active.

With a proper build strategy you can prepare a new build in a very easy way…just hit "New Pipeline".

Choose, as I told you at the start of this post, where your code lives..

Let's say we are going to use a YAML file. A YAML file is nothing more than a configuration file in a data-oriented language where you describe everything related to the application you want to compile: the runtime, the programming language and the package manager for including specific libraries, for example.
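
As an illustration, a minimal azure-pipelines.yml for a Node.js build could look like this (pool image, Node version and scripts are assumptions, not taken from the lab):

```yaml
trigger:
  - master                   # compile on every push to master

pool:
  vmImage: 'ubuntu-latest'   # Microsoft-hosted agent

steps:
  - task: NodeTool@0
    inputs:
      versionSpec: '14.x'    # runtime version
  - script: |
      npm install            # the package manager pulls the libraries
      npm run build
    displayName: 'Install dependencies and build'
```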

And finally, save and run your build. In this case it is rolled out manually, but you can configure automation and trigger a code compilation after some changes in the code, and maybe some approvals from your project manager.

Then, if you want to configure a trigger for the automation…just configure the Triggers tab according to your needs..and that's all.

In the next post I'll continue explaining more about Azure Repos and Azure Pipelines, so you can see the tremendous machinery that accelerates continuous deployment for the top-performing development companies.

Enjoy the journey to the cloud with me…see you then in the next post.

Work as a team in Covid times – Azure Boards

Developers, project managers and stakeholders are the pillars that turn an application modernization project into a really powerful app. This was already a challenge before Covid, and now even more so. Are your developers disconnected? Are your teams more silos than teams?

There is a quite important component within Azure DevOps called Boards. It is part of the solution, as you can roll out Scrum or agile approaches quite easily to your developers and stakeholders.

So, within your organization on Azure DevOps, click on "New project". You can choose between a private project (requires authentication and is focused on a team) and a public one (for open source development with the Linux community, for example).

So when I start a project, can I choose the methodology and strategy to follow up my project and foster collaboration?

Azure Boards supports several work item types: user stories, tasks, bugs, features and epics. When you create a project, you can choose a Basic process with just epics (the goals you would like to achieve, let's say), issues (the steps to follow, or milestones) and tasks (included per issue, the list of actions you need to execute to get the issue done), or Agile, etc. You can then assign all these items to several people and correlate those efforts across several sprints.
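
For the command-line inclined, the same items can be created with the Azure DevOps CLI extension; a hedged sketch (organization, project and titles are placeholders):

```powershell
az extension add --name azure-devops
az devops configure --defaults organization=https://dev.azure.com/MyOrg project=MyProject
az boards work-item create --type Epic --title "New Web Service Version"
az boards work-item create --type Task --title "Create the CI/CD pipeline"
```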

So you can create work items to track what happens during the follow-up of your development. You can coordinate and motivate your team to solve delays and problems, and you will be home and dry.

On one hand, you have all the developers working remotely in these tough times and using SSO, as Azure DevOps is integrated with your Azure Active Directory.

On the other hand, you can invite other employees or users as guests to access your projects if they are private. Keep in mind that you can even create public projects for open software, as I mentioned previously.

Let's see how a project manager can track an application using a Scrum approach. In the epic you establish the goals: some business requirements, or maybe a specific enhancement to the application. To achieve that, the team will work on a feature with a bunch of tasks to be done, and all this effort will be tracked on boards.

So in the case of an Agile project, you can use an approach like this one, where you have a goal or epic, a "New Web Service Version", some user stories for roles like project manager or developers, and obviously some issues which involve tasks to be done.

For example, create a new CI/CD process with a pipeline that deploys the releases to a web app (with slots for staging, production or development).

You can also see this process, including the issues or associated tasks, in each sprint. You will have as many sprints as needed to achieve all the goals of the final application release; to see them, just check Sprints within Azure Boards. Take into account that you need to decide which steps, issues or tasks should be done in each of those sprints.

Finally, pay attention to the timeline of your sprints, so the project manager can detect delays and help the team progress properly.

Adding Azure Boards to a tab within Teams, to foster collaboration between the stakeholders, brings a lot of potential, as it makes access, project follow-up and the checking of every milestone very flexible.

In Teams you can add a new tab and choose Azure DevOps…

When selecting the app, you can choose the organization, project and team..

Once you have selected everything, you can hit save..

Now as a project manager, you can stay on the same page with a few clicks.

In the next post we'll show and explain more about Azure Repos and Azure Pipelines, so you can see the tremendous machinery that accelerates continuous integration for the top-performing development companies.

Enjoy the journey to the cloud with me…see you then in the next post.

How to consolidate data for small branches globally and work together with little investment..(I)

Do you remember how it was some time ago?…not too many years ago, consolidating a file share platform for many small branches was a headache: not simple, not cheap, and with an operational effort not feasible for lots of S&M companies.

You had a file server per branch, which had to be replicated with the others on a regular basis, usually using asymmetric replication. There were several solutions in the market for that. Some IT departments tried it with Microsoft DFS (Distributed File System) technology; others with Amazon Elastic File System (Amazon EFS), which provides a simple NFS file system for use with the AWS Cloud or on premises, but is limited to the Linux/Unix world.

Some even tried it with a bigger budget, using Talon CORE and Talon EDGE, which work well but, like the other approaches, mean an infrastructure with specific needs to take care of.


No better, no perfect approach, but at least a way to share files across many cities, even countries, with no IT experts in those locations, very narrow-band communications and a poor budget. Do we want to cry?….no, no… 😉


So how do we close the gap?..here we are:

  1. Use Microsoft SharePoint Online + OneDrive – But only if you are willing to pay for Office 365 licences, and to pay good money for storage, as it is not raw disk but a content database for Office or PDF documents, to mention some file types (the first terabyte is included in the SPO tenant). If you need extra features such as taxonomy search, fast search or content indexing, to put some examples on the table, it is a candidate for you. When two employees want to work on the same proposal or presentation, they can do it; that's called co-authoring, perfect for editing documents at the same time. It is more region-focused, but Microsoft also provides a global option: if it suits your needs and you want it globally, there is a feature called Multi-Geo in SharePoint which gives global businesses control over where data resides.
  2. Use Microsoft Azure File Sync – The cheapest approach if you don't want to pay for complex maintenance and don't need fast search or any content database, as you are just replicating files on disk storage, with no outcome more complex than replicating the changes and making the data available to all the employees all over the world. This approach is very attractive, as it also brings cloud tiering, which keeps two layers of data: your "working set", what you use daily, stays local, while the rest of the files and folders are moved to the cloud (depending on policies or volume limits) and are only brought back to your local file server when someone needs them. All the local file servers consolidate their data into a file share in the cloud, on Azure, where it is available to employees with the latest changes included, because every change happens first locally and Azure File Sync decides which version of a document is the latest one to update in the cloud file share. On top of that, you have snapshots in case someone makes a mistake. You can create as many sync groups as you want, each with a cloud endpoint (your cloud file share) and one or several server endpoints (all the servers consolidating folders and files together in the cloud, visible to employees wherever they are). So it is a perfect solution for concurrency, even if you modify the docs locally.
  3. AWS solutions are also a good option. Amazon FSx for Windows File Server provides file systems, backups and file shares. It also supports DFS, which we mentioned before among the traditional solutions, integrated with Amazon FSx. If you want to keep it simple you can use S3 (the global AWS storage service) with a private or public bucket, depending on your needs, working together with edge locations, facilities present in many countries that accelerate data access for users through caching. But keep in mind that the collaborative story in this last case is not as strong as Amazon FSx + DFS in terms of concurrency.
  4. Azure also offers Azure Files and Azure NetApp Files. These solutions can also bring value, offering flexible storage and consolidated file shares, with more or less performance and SMB or NFS flavours depending on what you choose. But, like AWS S3, they are not focused on concurrency or on simplifying a consolidated view for users all over the world.
  5. Google Suite + Google Drive. It is another content collaboration platform, like SharePoint Online + OneDrive. They are not exactly the same, but both embrace sharing and collaborative work on users' documents across the region.
Azure File Sync scenario

SharePoint Online + OneDrive and G Suite + Google Drive are productivity cloud products, while Azure File Sync or Amazon FSx + DFS are more infrastructure-level approaches. For those with a very limited budget who need to reduce operational complexity, increase productivity and improve UX (user experience), we will go in depth on them in the next post.


See you then…

When your instincts tell you something is wrong and it's not for you..

Someone came up to me and asked if I would like to evaluate some IT services to be migrated to Azure or AWS, or even Google or Alibaba..

Yes, that can definitely happen, and maybe it is terrifying. But you probably have old legacy applications with no support at all, maybe operating systems with no support at all, or even, and this is the most common driver, you are running out of disk space.

Let's say your IT provider explains how to back up, or how to plan disaster recovery for, your VMware farm and the VMs hosted with some private cloud provider that asks you for a lot of money whenever you want a snapshot or more space for older backups. Would you open your ears to them?

Let's say you know your limited infrastructure budget, and in the last meeting you attended your CFO wanted to know how to expand your products to other markets, how to reduce IT investment and how to improve the go-to-market in other countries. Do you dare to ask about AWS or Azure?

"I need to be sure, and I want a clear roadmap to the public cloud: how to determine whether some applications are suitable for the cloud; I want to compare my TCO with the cloud scenario, a cloud consumption forecast and, please, security: is it really secure?"

The journey to the cloud depends on a solid assessment methodology, the right tools, the right knowledge and licensing experts. Pay attention to that. As long as you have all of that, there is no reason for fear.

There are lots of simple or fairly simple applications which can be migrated to the cloud while reducing their hardware profile, which is oversized in many cases. In any case, the six cloud migration strategies (the 6 Rs) should be followed by the company in charge of your cloud transition projects. On the other hand, please keep in mind: a lift & shift approach with no previous consultancy, not enough discovery of your data center infrastructure and no focus on challenges and stoppers is something to be worried about.

Another factor to be evaluated is the coexistence model with your on-premises IT world, private cloud and other SaaS applications which may have SSO with your company. Designing the right strategy for your hybrid cloud, embracing the best of east and west, is a must in your roadmap. Don't forget that.

Finally, security. You don't need to struggle with security; it is more a matter of adoption with the right experts, who will guide you in parallel with your workload migration. It is true that the cloud provider, AWS, Azure or whichever you choose, takes care of the security of the cloud (they are compliant with many certifications such as FIPS, HIPAA or ISO 27001), but you take care of security on the cloud, for your applications. There are a lot of cutting-edge technologies provided by them to minimize effort and reduce the complexity of your security maintenance. There is even a tremendous marketplace with the best third-party security products aligned with them.

"How complex would my journey to the cloud be? I know there are lots of new functionalities every day."

Yes, we can refactor some applications in a second phase, or we can even modify your DevOps model to be more efficient. With governance we can reduce cost, improve availability and scalability, and standardize the deployment of new services. We can work with you on data analytics scenarios with different ETLs, or help you with IoT, PaaS and serverless solutions for new web services, and so on. But that depends on you and your business needs.
