MARKETPLACE: AN EXCLUSIVE AZURE & AWS SHOPPING CENTER – PART II

When you take a look at AWS, you can sense the origin of its public cloud strategy and why you can buy third-party technology solutions such as Palo Alto firewalls, Red Hat or SUSE Linux VMs loaded with applications, or even products from Cisco and other network vendors. As you know, marketplaces are nothing more than platforms that enable transactions between customers and third-party sellers.

Jim Collins coined the term “flywheel effect” and explained the concept to Jeff Bezos, who saw an incredible opportunity where others would have seen just a methodology with no chance of surviving.

The idea is simple: create a virtuous cycle that increases the number of sellers offering their products and services, which in turn increases the range of offers and the price competition, so users find it more attractive because they can get exactly what they want at the right price.

That improves traffic to the platform and draws in more sellers and customers. Moreover, prices for users go down and they get into the habit of visiting your platform or marketplace regularly.

From Amazon to the AWS (Amazon Web Services) Marketplace

AWS Marketplace was the first cloud marketplace among the hyperscalers. AWS started the journey of selling third-party IT products and services following in the footsteps of the Amazon retail platform.

Customers can buy thousands of ISV products and services and deploy them with agility, even just for testing, to find out whether a specific piece of software makes sense and fills a gap in their company.

There is flexibility in prices, offer terms and conditions. There are pricing plans with an annual approach for 12-month subscriptions, or even for just one month if, for example, you need to roll out a PoC. There are others, such as usage pricing, where customers pay only for what they use in a PAYG approach, and pricing models for specific product delivery methods such as containers or ML.

It is very flexible, as you can even buy professional services products, which are in general packaged professional services offerings. All the offers can be tailored for a target company if you are an AWS partner, and as a customer you can access public or even private offers to get better discounts or to improve some aspects of the deal with the ISV or consultancy you work with.

Lots of solutions are waiting for you…

You can make plans for some offer types publicly available or available only to a specific (private) audience, just like the Azure Marketplace we explained in the previous post, which follows the same marketplace strategy AWS pioneered.

In summary, if you are figuring out what value an ISV can bring to your business on the cloud, and you want to leverage AWS partner professional services in a specific area such as cybersecurity or SAP, the marketplace is a chance you should not forget.

Enjoy the journey to the cloud with me…see you then in the next post.

MARKETPLACE: AN EXCLUSIVE AZURE & AWS SHOPPING CENTER

When you, as a user, access your Azure portal or AWS console, you have the option to buy thousands of products or solutions preconfigured for you. You don't need to worry about licences or the IT capabilities needed to design or deploy a specific solution, as they are all built around customer needs by many AWS or Microsoft partners and ISVs (independent software vendors).

We will talk about the AWS Marketplace later, in a new post. Just to point it out, it was launched in 2012 to accommodate and foster the growth of services from third-party providers that have built their own solutions on top of the Amazon Web Services platform, such as ISVs, SIs (system integrators) and resellers, so customers can buy exactly what they need, when they need it, adding tremendous flexibility to grow their cloud solutions in line with the business.

In the case of Azure, the marketplace was launched in 2014; it is a go-to-market starting point for IT software applications and services built by industry-leading technology companies. The commercial marketplace is available in 141 regions, on a per-plan basis.

What are the plans and how to use them (as a Partner)?

Microsoft partners can publish interesting solutions that bundle licences and services together within the Azure Marketplace. On one hand, you don't need to acquire licences separately, as their cost is prorated within the offer price. On the other hand, you get access to expertise without hiring new employees for your IT team.

A plan defines an offer's scope and limits, and the associated pricing when applicable. For example, depending on the offer type, you can select regional markets and choose whether a plan is visible to the public or only to a private audience. Some offer types support a subscription scope, some support consumption-based pricing, and some let a customer purchase the offer with a licence (BYOL) they have bought directly from the publisher.

The supported offer types are Azure managed application, Azure solution template, Azure container (✔ BYOL), IoT Edge module, managed service (✔ BYOL), software as a service, and Azure virtual machine; each offer type differs in whether it supports plans with pricing options, plans without pricing options, and a private audience option.
  • Markets: Every plan must be available in at least one market. You have the option to select only “Tax Remitted” countries, in which Microsoft remits sales and use tax on your behalf.
  • Pricing: Pricing models only apply to plans for Azure managed application, SaaS, and Azure virtual machine offers. An offer can have only one pricing model. For example, a SaaS offer cannot have one plan that’s flat rate and another plan that’s per user.
  • Plan visibility: Depending on the offer type, you can define a private audience or hide the offer or plan from Azure Marketplace.

How to publish and what kind of visibility can we provide (As a Partner)?

You can make plans for some offer types publicly available, or available only to a specific (private) audience. Offers with private plans will be published to the Azure portal.

You can configure a single offer type in different ways to enable different publishing options, listing options, provisioning, or pricing. The publishing option and configuration of the offer type also align with the offer eligibility and technical requirements.

Be sure to review the online store and offer type eligibility requirements and the technical publishing requirements before creating your offer.

To publish your offers to Azure Marketplace, you need to have a commercial marketplace account in Partner Center and ensure your account is enrolled in the commercial marketplace program.

Also, if your offer is published with a private plan, you can update the audience or choose to make the plan available to everyone. After a plan is published as visible to everyone, it must remain visible to everyone and cannot be configured as a private plan again.

Finally, as a partner you can enable a free trial on plans for transactable Azure virtual machine and SaaS offers.

For example, Azure virtual machine plans allow 1-, 3-, or 6-month free trials. When a customer selects a free trial, Microsoft collects their billing information but doesn't start billing until the trial is converted to a paid subscription.

What are your benefits when using Azure Marketplace (As a User)?

The marketplace brings flexibility to customers, as they can immediately buy any kind of plan-based offer, drawing on products from thousands of ISVs, without losing time dealing with each vendor or digging into the details of every support model or licensing option.

In the Azure portal, select + Create a resource or search for “marketplace”. Then browse the categories on the left side or use the search bar, which includes a filter function, and choose what you need.
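
If you prefer to script that discovery instead of clicking through the portal, here is a minimal sketch with the Python Azure SDK (assuming the azure-identity and azure-mgmt-compute packages; the subscription ID, publisher and offer names are just illustrative) that lists marketplace VM images available in a region:

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.compute import ComputeManagementClient

    # Hypothetical subscription ID; replace with your own.
    compute = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

    # Browse the offers of one publisher in one region, then the SKUs of one offer.
    for offer in compute.virtual_machine_images.list_offers("westeurope", "canonical"):
        print(offer.name)

    for sku in compute.virtual_machine_images.list_skus(
            "westeurope", "canonical", "0001-com-ubuntu-server-jammy"):
        print(sku.name)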

Likewise, there are lots of consultancy services provided by several Microsoft partners, some of them as free trials, so you can test the quality of their professional services and see their approach to fixing your pain points.

Enjoy the journey to the cloud with me…see you then in the next post.

KUBERNETES ON THE CLOUD – WHAT TO EXPECT FROM AZURE & AWS

It may sound quite familiar when we point out functionality of Kubernetes as a service on the cloud and compare it with on-premises Kubernetes solutions. But it was only around 2016, under the Cloud Native Computing Foundation (CNCF), that K8s really came to the scene alongside open source distributions such as Red Hat OpenShift and SUSE Rancher. So, as you can imagine, Kubernetes hasn't been on the cloud that long.

To be honest, EKS started on 5 June 2018 and AKS was launched by Microsoft on 24 October 2017. Here we have the current AWS EKS release calendar.

And here is the Microsoft approach, the AKS release calendar.

Upgrade Kubernetes on Azure & AWS

No surprises when it comes to upgrading versions on either cloud Kubernetes solution. On one hand, with reference to Azure, when you upgrade a supported AKS cluster, Kubernetes minor versions cannot be skipped. On the other hand, because Amazon EKS runs a highly available control plane, you can update only one minor version at a time. So even on the cloud, magic doesn't exist. And we have some bad news for the lazy ones.
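
As a minimal sketch of that one-minor-version-at-a-time rule on the EKS side (assuming boto3 and a hypothetical cluster name), you could check the control plane version before requesting the next minor:

    import boto3

    eks = boto3.client("eks", region_name="eu-west-1")

    # Check the current control plane version of a hypothetical cluster.
    current = eks.describe_cluster(name="my-cluster")["cluster"]["version"]
    print("Control plane is running", current)

    # Only the next minor version is allowed, so request exactly one step up, e.g.:
    # eks.update_cluster_version(name="my-cluster", version="1.25")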

Regarding allowed AKS cluster versions, Microsoft says: “If a cluster has been out of support for more than three (3) minor versions and has been found to carry security risks, Azure proactively contacts you to upgrade your cluster. If you do not take further action, Azure reserves the right to automatically upgrade your cluster on your behalf“. So watch out if you are one of the lazy ones, as Microsoft can do the job for you :-).

But what happens to the lazy ones on AWS? Yes, you knew it. Good guess: ”If any clusters in your account are running the version nearing the end of support, Amazon EKS sends out a notice through the AWS Health Dashboard approximately 12 months after the Kubernetes version was released on Amazon EKS. The notice includes the end of support date.“ Adding to that: “On the end of support date, you can no longer create new Amazon EKS clusters with the unsupported version. Existing control planes are automatically updated by Amazon EKS to the earliest supported version through a gradual deployment process after the end of support date. After the automatic control plane update, make sure to manually update cluster add-ons and Amazon EC2 nodes“.

Moreover, take your API versions into account; if they are deprecated you are in trouble. Please follow this guide to solve the issue as soon as possible.

Quotas, Limits and Regions

Regarding service quotas, limits and regions, maybe you can draw a line in the sand to decide what makes sense for you and your applications. Each hyperscaler has its own capabilities in terms of scalability and resilience. Let's take a look at this.

MICROSOFT AZURE

AKS offers a strong solution for your applications…

In terms of region availability, everything will be smooth and cheerful if you don't have to work in China. If you do, and you don't want to struggle with this, just verify which regions have AKS ready to roll out.

AWS

Here you can see the EKS approach.

Regarding regions, for China please ask AWS directly. In the case of Africa, EKS is still not deployed there at the time of writing this post.

To summarize, AWS runs Kubernetes control plane instances across multiple availability zones. It automatically replaces unhealthy control plane instances and provides scalability and security to applications. It supports up to 3,000 nodes per cluster, but you have to pay separately for the control plane. AWS is more conservative about making new Kubernetes versions available and maintains the older ones for longer.

Meanwhile, Microsoft AKS supports roughly 1,000 nodes per cluster, tends to be faster than AWS at providing newer Kubernetes versions, can repair unhealthy nodes automatically, supports native GitOps strategies, and integrates Azure Monitor metrics. The control plane comes in two flavours depending on your needs: free tier and paid tier.

SECURITY

EKS can encrypt its persistent storage data with a service called AWS KMS, which, as many know, is very flexible and can use customer-managed keys or AWS-managed keys. AKS, in turn, uses Azure Storage Service Encryption (SSE) by default, which encrypts data at rest.

Finally, AKS can take advantage of Azure network policies as well as Calico network policies; EKS, for its part, also supports Calico.
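
As an illustration, a Kubernetes NetworkPolicy like the one sketched below (written with the official Python client; the pod labels and port are hypothetical) would be enforced by either the Azure or the Calico policy engine on AKS, and by Calico on EKS:

    from kubernetes import client, config

    config.load_kube_config()
    net = client.NetworkingV1Api()

    # Allow only pods labelled app=frontend to reach pods labelled app=api on TCP 8080.
    policy = client.V1NetworkPolicy(
        metadata=client.V1ObjectMeta(name="allow-frontend-to-api"),
        spec=client.V1NetworkPolicySpec(
            pod_selector=client.V1LabelSelector(match_labels={"app": "api"}),
            policy_types=["Ingress"],
            ingress=[client.V1NetworkPolicyIngressRule(
                _from=[client.V1NetworkPolicyPeer(
                    pod_selector=client.V1LabelSelector(match_labels={"app": "frontend"}))],
                ports=[client.V1NetworkPolicyPort(port=8080, protocol="TCP")],
            )],
        ),
    )
    net.create_namespaced_network_policy(namespace="default", body=policy)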

I hope this article somehow clarifies your vision as a cloud architect, tech lead or CIO who wants to know where to migrate and refactor their monolithic on-premises solutions.

Enjoy the journey to the cloud with me…see you then in the next post.

KUBERNETES, THE FOOD IS ON YOUR PLATE!

Many companies have started adopting new technologies such as microservices, with Kubernetes as a main actor, to provide business agility for new application deployments but also to offer a solid platform for the most critical web services, such as shopping portals and booking services, so they have the elasticity to provision resources to meet demand instantly.

Azure has its own flavour in this restaurant, AKS, just as AWS has EKS and GCP has GKE. Today I want to show a first approach to the Microsoft solution with its advantages and, depending on the customer vertical, perhaps its disadvantages.

But first, let's dive into a traditional open source + hardware Kubernetes deployment and compare it with AKS, just to show you that the TCO and ROI are not exactly the same.

Hardware approach to deploy Kubernetes

Some customers prefer to invest in CAPEX, mostly storage and compute, and minimize licensing costs by using open source Kubernetes solutions. So let's put together an estimate for this kind of solution.

On one hand, on average, following the recommendations of an open source company such as Kublr (similar to SUSE Rancher or Red Hat OpenShift), a cluster (a simple scenario with one master node + two worker nodes) has at least these hardware requirements from scratch, before adding any applications to run inside:

  • Master node: Kublr-Kubernetes master components (2 GB, 1.5 vCPU)
  • Worker node 1: Kublr-Kubernetes worker components (0.7 GB, 0.5 vCPU); Feature: ControlPlane (1.9 GB, 1.2 vCPU); Feature: Centralized monitoring (5 GB, 1.2 vCPU); Feature: k8s core components (0.5 GB, 0.15 vCPU); Feature: Centralized logging (11 GB, 1.4 vCPU)
  • Worker node 2: Kublr-Kubernetes worker components (0.7 GB, 0.5 vCPU); Feature: Centralized logging (11 GB, 1.4 vCPU)

Obviously, if you want to deploy applications, and depending on their needs, this will keep increasing. The rule of thumb can be:

Available memory = (number of nodes) × (memory per node) – (number of nodes) × 0.7GB – (has Self-hosted logging) × 9GB – (has Self-hosted monitoring) × 2.9GB – 0.4 GB – 2GB (Central monitoring agent per every cluster).

Available CPU = (number of nodes) × (vCPU per node) – (number of nodes) × 0.5 – (has Self-hosted logging) × 1 – (has Self-hosted monitoring) × 1.4 – 0.1 – 0.7 (Central monitoring agent per every cluster).

*By default, Kublr disables scheduling business applications on the master, which can be modified. Thus, only worker nodes are used in the formula.
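
Just to make the rule of thumb concrete, here is a small Python helper that encodes the two formulas above (the node sizes in the example are hypothetical):

    def kublr_available_resources(nodes, mem_per_node_gb, vcpu_per_node,
                                  self_hosted_logging=True, self_hosted_monitoring=True):
        """Estimate the memory (GB) and vCPU left for applications, per the rule of thumb above."""
        mem = (nodes * mem_per_node_gb
               - nodes * 0.7                              # worker components per node
               - (9 if self_hosted_logging else 0)        # self-hosted logging
               - (2.9 if self_hosted_monitoring else 0)   # self-hosted monitoring
               - 0.4
               - 2)                                       # central monitoring agent per cluster
        cpu = (nodes * vcpu_per_node
               - nodes * 0.5
               - (1 if self_hosted_logging else 0)
               - (1.4 if self_hosted_monitoring else 0)
               - 0.1
               - 0.7)
        return mem, cpu

    # Example: two worker nodes with 16 GB and 4 vCPU each -> roughly 16.3 GB and 3.8 vCPU left.
    print(kublr_available_resources(2, 16, 4))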

On top of this, take into account the need to buy VMware vSphere licences + vCloud Director, plus hardware and maintenance.

VMware vSphere Deployment Scheme for KBLR. Origin: https://docs.kublr.com/

Some Key Takeaways for on premise deployment

  • SLA depends on the number of clusters with master and worker nodes, their hardware profile, and whether you are using HCI (for example, VMware integrated Kubernetes with its hypervisor to serve HCI if needed; the solution is called Tanzu). Maybe you can set up a five-nines (99.999%) scenario, though it is quite expensive.
  • CAPEX is always good for some companies as it reduces taxes. But dealing with the purchasing department is sometimes necessary if you want to leverage cloud benefits as well.
  • Open source, instead of vendors such as VMware or IBM, provides more flexibility to use K8s, with flexible configurations rather than a single default vendor configuration.
  • Watch out for performance, underutilization, security and disaster recovery. They are all quite challenging and commonly encountered in those scenarios.
  • The on-premises approach may be a good option if you want to customize your applications with specific CI/CD tools and plugins, for example for Jenkins.

AKS approach to deploy Kubernetes on the cloud

Microsoft's alternative provides a cluster with a master node and as many worker nodes as needed, with the right hardware profiles, even scaling out or in depending on the customer's scenario, which is per se more flexible than a traditional on-premises option.

On the other hand, AKS is an approach that brings the best of Kubernetes while reducing the investment, and it is pure OPEX. We can point out some excellent benefits compared with an on-premises Kubernetes solution. The Azure Kubernetes Service (AKS) baseline cluster reference would be the following approach.

Some Key Takeaways for AKS deployment

There are no costs associated with AKS itself for the deployment, management, and operation of the Kubernetes cluster. The main cost drivers are the virtual machine instances, storage, and networking resources consumed by the cluster. Consider choosing cheaper VMs for system node pools, and Linux nodes wherever possible.

  • We can start by deploying a cluster with a master node and two worker nodes. The cost lies mostly in worker node compute, where Microsoft recommends DS2_v2 instances, so pay attention to how many namespaces you need and how many pods are included in each application deployment.
  • Adding to that, we have to consider persistent storage (pods have ephemeral storage, remember), the database requirements associated with each application and, finally, network traffic between Azure and on-premises.
  • Some clear advantages are:
    1. On-premises Kubernetes investments tend to end up as oversized clusters. AKS can be flexible and scalable to meet current business expectations: provision a cluster with a minimum number of nodes and enable the cluster autoscaler to monitor and make sizing decisions (see the sketch after this list).
    2. Balance pod requests and limits to allow Kubernetes to allocate worker node resources with higher density, so that the Azure hardware (under the hood) is utilized up to the cloud capacity you pay for.
    3. Use reserved instances for worker nodes for one or three years, or even use the Dev/Test plan to reduce AKS costs in development or pre-production environments.
    4. Automate, automate, automate. We can deploy AKS infrastructure from scratch with a few clicks, for example with Bicep, ARM templates, etc.
    5. AKS can work in a multi-region approach. However, data transfers between availability zones of a region are not free. Microsoft states clearly that if your workload is multi-region or there are transfers across billing zones, you should expect additional bandwidth costs.
    6. GitOps is ready for AKS. As many of you know, GitOps brings best practices like version control, collaboration, compliance, and continuous integration/continuous deployment (CI/CD) to infrastructure automation.
    7. Finally, if you are figuring out how to bring governance to all your Azure Kubernetes scenarios, Azure Monitor (using the Container Insights overview) provides a multi-cluster view that shows the health status of all monitored Kubernetes clusters running Linux and Windows Server 2019. Moreover, you can leverage native Azure tools such as Kusto queries, Azure Cost Management or Azure Advisor to control your K8s costs.
Microsoft docs
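
Here is the sketch referenced in point 1 above: a minimal AKS cluster with the autoscaler enabled, written with the Python SDK (assuming the azure-mgmt-containerservice package; the subscription, resource group, cluster name, region and node counts are hypothetical):

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.containerservice import ContainerServiceClient
    from azure.mgmt.containerservice.models import (
        ManagedCluster, ManagedClusterAgentPoolProfile, ManagedClusterIdentity)

    aks = ContainerServiceClient(DefaultAzureCredential(), "<subscription-id>")

    cluster = ManagedCluster(
        location="westeurope",
        dns_prefix="demo-aks",
        identity=ManagedClusterIdentity(type="SystemAssigned"),
        agent_pool_profiles=[ManagedClusterAgentPoolProfile(
            name="systempool",
            mode="System",
            vm_size="Standard_DS2_v2",   # the instance size mentioned above
            count=2,                     # start with the minimum number of nodes...
            enable_auto_scaling=True,    # ...and let the cluster autoscaler make sizing decisions
            min_count=2,
            max_count=5,
        )],
    )

    aks.managed_clusters.begin_create_or_update("my-rg", "demo-aks", cluster).result()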

In the next post we will focus on EKS and ECS in AWS environments. We'll see how to identify the TCO and ROI for both solutions.

Enjoy the journey to the cloud with me…see you then in the next post.

HandsOnLab – Minikube with Hyper-V on Windows 2019

I was wondering some time ago how to start explaining a big bunch of theory about containerization, and about orchestrating those containers with Kubernetes, in a practical way. I know how it works, I know the architecture and the most important benefits of application modernization using Kubernetes. But why not play with Minikube a little and explain some takeaways about the most important actors in application modernization, like Kubernetes?

An all-in-one piece of software for testing Kubernetes at your own pace

Therefore, it's good to set up some examples in your lab so you can play with the commands as well as with the concepts. We will be using Minikube; as I mentioned before, this is software for testing Kubernetes with an “all-in-one” architecture: a master node and a worker node together, in my case on Windows. So I started by downloading minikube.exe from here.

To summarize, you have one or several master nodes in charge of managing several worker nodes. Developers deploy application images into pods, usually with just one container inside (they can have more than one container, but it is not usual). A container is like a small package with an application, its libraries and the dependencies it needs to run isolated from an operating system (so you don't need an OS licence for a container). Pods are the smallest compute unit within Kubernetes; they share networking and storage so they can work together with some kind of data, on the same worker node or not, as they can be distributed across worker nodes. You can create or kill pods based on your application requirements. They are ephemeral in nature, and how many should run during a period of time depends on the task to solve or the specific service they support for their application.

URL Origin: phoenixnap.com KB article
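
To make the pod/container idea tangible, this is roughly what creating (and later deleting) a single-container pod looks like with the official Kubernetes Python client; the pod name and image are hypothetical:

    from kubernetes import client, config

    config.load_kube_config()   # reads the kubeconfig for the cluster
    v1 = client.CoreV1Api()

    # A pod with exactly one container inside, as in the usual case described above.
    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="hello-pod", labels={"app": "hello"}),
        spec=client.V1PodSpec(containers=[
            client.V1Container(name="hello", image="nginx:alpine",
                               ports=[client.V1ContainerPort(container_port=80)]),
        ]),
    )
    v1.create_namespaced_pod(namespace="default", body=pod)

    # Pods are ephemeral: killing one is just as simple.
    # v1.delete_namespaced_pod(name="hello-pod", namespace="default")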

For users, all of this scenario, as we have already explained, is absolutely transparent. For example, Airbnb uses Kubernetes because the company can spread clusters across several regions on the public cloud. Those clusters are scalable. So when there is a big trade event in Barcelona, thousands of queries go to Airbnb's Kubernetes services to respond with specific results. Indeed, depending on how many people are asking for accommodation, the Kubernetes clusters will create and replicate worker nodes, and even replicate pods across several worker nodes, to improve the user experience and answer as quickly as possible so the user doesn't leave the website.

Stepping back to our lab, we start Minikube using the Hyper-V driver. It will set up a fresh Linux VM on Hyper-V with all the components needed to test the scenario we showed above.

We can check the component status. The API server is the endpoint through which all the cluster components communicate. The scheduler (which decides which pod to place on which node based on resource requirements), the controller manager (which handles node failures and replicates or maintains the correct number of pods) and the other worker node components all talk to the API server. Also, pay attention below to the kubelet (in charge of the containers on each node; it talks to the API server as well). The kubeconfig holds all the credentials and access details to connect to your Kubernetes cluster. Here it is needed because we will connect our kubectl command to create or kill pods in a minute.

So, here we are… Please also note that we have a kube-proxy component. It is used to distribute and balance traffic for services to each worker node, so that in the backend several pods can serve the information. It is like a referee in soccer.

Just to recap, we have Minikube running on Windows Server 2019 with Hyper-V, with all the components needed to test some kubectl commands and create a simple pod in charge of serving a single query through its service.

I've created a pod which is just going to wait for information (in JSON format).

Then I forward the port to 8080 so I can connect to the endpoint.

Previously I installed a Postman trial; you know, developers love this tool. They need it to verify that the information or data they are sending to (or requesting from) an endpoint, usually in JSON format, is consistent and works as they expect. Below I send some data in JSON format and, as you can see, the pod accepts it with a “200” OK code.
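
If you don't have Postman at hand, the same check can be done with a few lines of Python; the endpoint path and payload below are hypothetical and simply mirror the port-forward set up earlier:

    import requests

    payload = {"message": "hello from the lab"}

    # POST the JSON body to the port-forwarded pod endpoint.
    response = requests.post("http://localhost:8080/", json=payload)

    print(response.status_code)   # expect 200 if the pod accepts the JSON
    print(response.text)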

In the next post we will look at AKS and why it has a good chance of becoming a very important player in the application modernization market.

Enjoy the journey to the cloud with me…see you then in the next post.

WELL ARCHITECTED FRAMEWORK FROM AWS TO AZURE, FACE TO FACE (II).

As we said in a previous post, the AWS Well-Architected Framework was launched officially in 2015. The Microsoft Azure WAF approach took more time, as they started later, around 2020, with their own WAF methodology. In any case, it is a collection of best practices, guides and blueprints, in the same vein as its competitors, Google (which calls it “4 key architecture principles/pillars” but covers similar points) and AWS, based on experience and feedback from several stakeholders.

To summarize, the Azure or AWS WAF, or Google's 4 key architecture principles/pillars, help cloud architects build secure, high-performing, resilient, and efficient infrastructure for the applications and workloads of their business. Moreover, they provide a better UX (user experience) for employees and users.

Azure Approach...

From the Microsoft point of view, there are also 5 clear pillars, just as for AWS:

  • Cost Optimization – Focuses on managing costs and reducing them as much as possible according to the scenario.
  • Operational Excellence – Focuses on achieving excellence in the operations processes that keep a system running in production.
  • Performance Efficiency – Focuses on the ability of an IT solution in the cloud to adapt to changes in load.
  • Reliability – Focuses on the ability of a system or IT solution to recover from failures and continue to function in the cloud.
  • Security – Focuses on protecting applications and data from threats, keeping in mind the shared responsibility model where the customer and Microsoft, or some partners, work together on a given IT solution.

Did you notice any change compared to AWS below? Well, Microsoft points out the same pillars but wraps some extra elements around them to make its offering more powerful. That means: reference architectures, Azure Advisor as a starting point, as well as the CCO Dashboard, Cloudockit, AzGovViz, specific partner offerings and the WAF review reporting (this is not different from AWS).

The Azure approach to the Well-Architected Framework changes some of the steps compared with AWS. AWS stays more at the HLD (high-level design) level and drills down later, while Microsoft tries to gather more detail up front in order to sort out priorities, responsibilities and tools, and to match the right technologies to the right issues sooner.

It seems that this workshop process should run smoothly and be easy to use. The truth is, you will struggle with some workloads or specific IT components for sure. But what are the most important Microsoft Azure architecture “quality inhibitors” to face?

  • Cost Optimization – Underused or orphaned resources
  • Operational Excellence – No automation, or automation in silos
  • Performance Efficiency – No design for scaling
  • Reliability – No support for disaster recovery
  • Security – No security threat detection mechanism

As you can see, each hyperscaler has its own vision. But they are similar in the areas to evaluate and to fix when something is not working properly.

In the next post, we will cover in more depth the similarities and differences between the two big cloud titans, Azure and AWS. In the meantime, the ball is in your court. Read, read and read… I'm sure you do… 🙂

Enjoy the journey to the cloud with me…see you then in the next post.

WELL ARCHITECTED FRAMEWORK FROM AWS TO AZURE, FACE TO FACE (I).

After some years migrating workloads from on-premises to the cloud, and some years developing cloud-first apps, the range of architectures, technologies and hyperscalers has kept expanding its value and support for millions of businesses and companies… The Well-Architected Framework is nothing more than an approach to optimize all those IT solutions from several perspectives.

AWS Approach...

In 2012 AWS created the “Well-Architected” initiative to share with its customers and partners best practices for building in the cloud, and started publishing them in 2015. Now this set of principles is a reality and has expanded to many cloud scenarios.

Let's say we have some reasonably complex workloads and IT solutions in cloud providers such as AWS or Microsoft Azure. Adding to that, we are not sure whether the current scenario is designed according to best practices in terms of reliability, as the IT service shows small delays in responses to users from time to time. Moreover, when you browse your AWS Cost Explorer console, this IT service shows high consumption.

What can we do? How can we shed some light on this? Well, AWS provides a set of best practices, principles and strategies to reduce risk and impact in the areas I've mentioned before, as well as in other areas. Those areas, or rather pillars, are five:

  • Operational Excellence: The ability to support development and run workloads effectively, gain insight into their operations, and continuously improve supporting processes and procedures to deliver business value.
  • Security: Focused on protecting data, systems, and assets, and on taking advantage of cloud technologies to improve your security.
  • Reliability: The ability of a workload to perform its intended function correctly and consistently when it's expected to.
  • Performance Efficiency: The ability to use computing resources efficiently to meet system requirements, and to maintain that efficiency as demand changes and technologies evolve.
  • Cost Optimization: The ability to run systems to deliver business value at the lowest price point.

The AWS approach to the Well-Architected Framework provides great value for improving a specific workload, or several interdependent workloads. To leverage the potential of the five pillars, the Well-Architected Tool helps you review the state of your workloads and compares them to the latest AWS architectural best practices in those areas.

Moreover, if you want to be more specific and dive deeper into a technology or a disruptive solution, to identify a clear impact or reduce risk for your workloads, AWS has offered AWS Well-Architected Lenses since 2017.

Some examples of Lenses which, from my point of view, bring value are:

Management and Governance Lens – AWS Well-Architected

Hybrid Networking Lens – AWS Well-Architected

SAP Lens – AWS Well-Architected

Financial Services Industry Lens – AWS Well-Architected

Serverless Applications Lens – AWS Well-Architected

In the second part of this post, we will explain the Azure Well-Architected Framework. I hope it's useful to you and makes your day!

Enjoy the journey to the cloud with me…see you then in the next post.

Azure Lighthouse, the secret sauce for any Managed Cloud Solution Provider

Managed Cloud Solution Providers (MCSPs) are third-party companies that help your business expand, providing muscle and expertise in two ways:

  1. Skills matrix to support you – They have a bunch of experts in several disciplines to work through your IT service challenges and digital transformation; they are your mentor for understanding your risk and how well your investment in cloud solutions is aligned with your business. They have cloud architect and cloud strategist profiles in their team to support your journey to the cloud in mostly hybrid scenarios.
  2. Tools to support you – They have the right tools to support those business needs and to take your current digital state to a new version of your company, achieving better efficiency in your daily processes, simplifying your employees' work, even improving their quality of life, and of course optimizing the time to react to your competitors with innovation. Just to be clear, tools means not just third-party tools but also the native cloud provider tools you have available when consuming cloud services.

Adding to those key points, the most important operations to support IT services on the cloud are based on specific daily tasks. Monitoring, backup, process automation and security are part of those operations. Moreover, MCSPs need to be effective at solving issues in order to provide the right quality to their customers, something that is called “Operational Excellence” within the Well-Architected Framework. With the massive expansion of cloud-first IT services, and the migration to the cloud of huge amounts of IT infrastructure to support data analytics, web services, disaster recovery and legacy applications on the road to modernization, we need the right tools to cover some clear objectives. Azure Lighthouse has tremendous maturity for solving many of the challenges any MCSP needs to cope with:

  • Scale as soon as we need to grow. Here I mean scale horizontally, so even when you have to assist lots of customers you can cover their needs with granularity and focus on their specific roadmap to the cloud.
  • Segment your own cloud IT infrastructure from your customers' cloud infrastructure, so any security issue or downtime in a service you provide, internally or to others, is contained and can only affect one customer or group of customers.
  • Grant permissions to specific cloud IT resources and, depending on your customers' projects and the skills involved, delegate access to other partners or freelancers; in short, collaborate on a new project with several profiles.
  • Get a complete picture of the IT services you are providing to your customers across several Azure contracts and tenants in terms of security posture and performance or health alerts, triage misconfigured items, provide the right Azure governance, etc.

Azure Lighthouse has the potential and flexibility to bring monitoring and traceability to all your customers across several tenants: you get a holistic view, delegate specific permissions with a good security level, for whatever period you want, to whole subscriptions or resource groups, integrate everything into a hybrid strategy together with Azure Arc, and furthermore integrate security posture and SIEM across several tenants as well. Azure offers top native cloud tools to support your investments in almost any technology trend.

Let's go deeper into some useful strategies for any MCSP, so they don't struggle trying to work out how to translate what they are doing right now on-premises into Azure.

Access. A secure authentication and authorization strategy is mandatory. That's why Microsoft offers a least-privilege access approach with Azure Active Directory Privileged Identity Management (Azure AD PIM) to further protect access to customer tenants with just a user or a security group.

Monitoring. Absolutely key for any MCSP; it is the core of your support to your customers. In addition, you have to use the right ITSM (Information Technology Service Management) software to stay aligned and push in the right direction to assess and resolve customer issues from high priority to low priority.

Security posture. Do you know how many misconfigurations and vulnerabilities exist in your customers' Azure cloud? You can add Azure Security Center to provide the right security posture and know which security controls are affected or can be aligned with your regulatory compliance. We can leverage the Secure Score to see our customers' security posture in a single pane of glass. Not easy-peasy, but it helps a lot.

Incident hunting. Maybe you know it, maybe you don't: Azure Sentinel, the Microsoft native SIEM, can help consolidate your security threats and dig into the root cause of a security compromise across several tenants. https://techcommunity.microsoft.com/t5/azure-sentinel/using-azure-lighthouse-and-azure-sentinel-to-investigate-attacks/ba-p/1043899

It's a powerful tool to track logs, see layer by layer what is happening, and determine how to set up suitable hardening for your technologies.

Hybrid scenarios. Regarding hybrid scenarios, Azure Arc can also be integrated with Azure Lighthouse, bringing a great benefit to that holistic overview I mentioned before. The main goal in this case is to provide the right governance to your customers even if they have private clouds or on-premises infrastructure. It is therefore an exciting approach for companies that still have a lot of legacy estate to migrate over the years but want to explore the benefits of a public cloud such as Azure.

To sum up, depending on your cloud provider's maturity level, there are some key native tools to improve your support, on your own or with the help of an MCSP. Azure, together with AWS, is one of the most important providers offering this level of capability today.


Enjoy the journey to the cloud with me…see you soon.

7 Rs – Seven roads to take the right decision to move to the cloud

AWS (Amazon), Azure (Microsoft) and GCP (Google) hyperscale data centers have been growing in number over the last few years in many regions, supported by millions in investment in submarine cables to reduce latency. Southern Europe is not an exception; we only have to look at Italy, Spain and France to realize what is happening.

Public cloud providers know many customers will move thousands of services en masse in the coming years. The process started just a few years ago, but the pandemic and the need to provide remote services, to analyse data more quickly and efficiently, the big expansion of sensors measuring almost everything in our lives, and a global market where you beat competitors on any continent with innovation have accelerated it even more.

There are 7 Rs to help make the right decision, so CIOs and CTOs know what makes sense to move to the cloud or not, what is a priority, and moreover the impact and effort involved in transforming their business.

AWS perspective to move IT Services to the cloud

Moving to the cloud with a clear perspective on the outcomes and goals to achieve will bring value to customers if you evaluate each of your IT services with care, so you can take decisions aligned with your business. Some applications could be retired, others will enter a cycle of modernization, others will just be resized to reduce cost and improve resilience.

Let´s explain our 7 Rs from simple to complex scenarios:

Retire. Some applications are not used any more; just a couple of users need to run some queries from time to time. Hence it may be better to move that old data to a data warehouse and retire the old application.

Retain. It literally means “do nothing at all”. Maybe this application uses an API or backend from an on-premises solution with compliance limitations. Maybe it was recently upgraded and you want to amortize the investment for a while.

Repurchase. Here you have the opportunity to change the IT solution. Let's say you are not happy with your on-premises firewall and you think it's better to switch to a different provider whose firewall is better suited to AWS or Azure, or even to move some applications from IaaS to SaaS.

Relocate. For example, relocate the ESX hypervisor hosting your database and web services to VMware Cloud on AWS / Azure / GCP, or move your legacy Citrix server on Windows 2008 R2 to a dedicated host on AWS.

Rehost. This means lift and shift. Move some VMs with clear dependencies between them to the cloud, simply to provide better backup and cheaper replication across several regions, and resize their compute consumption to reduce cost.

Replatform. Lift and optimize your application in some way. For instance, you move your web services from a farm of VMs on VMware with an HLB (hardware load balancer) on-premises to an external load balancing service on Azure with some App Services, where you can host your business logic and migrate your PHP or Java application. You then no longer have to worry about operating system patching or security at the Windows Server level, and you can even eliminate the Windows operating system licence.

Refactor. The most complex scenario. You have a big monolithic database with lots of applications using that data, reading and writing heavily. You need to move the applications and the monolithic database and modify the architecture, taking full advantage of cloud-native features to improve performance and scalability as well as to reduce risk. In the monolith, any failure in a component provokes a general failure, so you need to decouple your components and move to microservices sooner or later.

I hope this helps you better understand these strategies for moving your applications to the cloud, so you can be laser-focused on your needs and choose the best approach for each of them.

To sum up, use the right tools to evaluate your on-premises applications and IT services and, based on the 7 Rs, choose the suitable journey to the cloud for each of them.

Don't forget to leverage the full potential of the CAF (Cloud Adoption Framework) https://wordpress.com/post/cloudvisioneers.com/287 that I've mentioned before in my blog, together with the 7 Rs strategy.


Enjoy the journey to the cloud with me…see you soon.

AWS Security Hub (I). The orchestra conductor that protects your IT solutions on the cloud

AWS Security Hub is a great way to gather findings from several AWS services as well as security partners like Sophos, Barracuda or Splunk. It brings fresh air to the AWS strategy for protecting your data against attackers.

If you look at the current AWS security services, you might think they work on their own and not as a team, even more so if you come from Azure, where Security Center and Sentinel combine their capabilities very clearly.

In the case of AWS, you have to figure out how to set up the right approach, leveraging the potential of at least three AWS security services that ingest data into Security Hub:

  • AWS GuardDuty + (Likely use AWS Detective with it)
  • AWS Macie
  • AWS Identity and Access Management

You could perhaps even include AWS Firewall Manager. Anyway, just to show a first approach to how each service connects with Security Hub, I've drawn what I think sums up how they can interact together to move in the right direction.

AWS Security Hub as the key piece of our puzzle

AWS Security Hub receives a lot of information from several AWS services and can provide specific dashboards in a very easy-to-use and comprehensive console, so your blue team can execute the right strategy to prevent and react to incidents or strange user behaviour.
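
As a minimal sketch of what the blue team could script against that console (assuming boto3 and Security Hub already enabled in the account), this pulls the active high and critical findings:

    import boto3

    securityhub = boto3.client("securityhub", region_name="eu-west-1")

    # Fetch active findings with HIGH or CRITICAL severity.
    findings = securityhub.get_findings(
        Filters={
            "SeverityLabel": [{"Value": "HIGH", "Comparison": "EQUALS"},
                              {"Value": "CRITICAL", "Comparison": "EQUALS"}],
            "RecordState": [{"Value": "ACTIVE", "Comparison": "EQUALS"}],
        },
        MaxResults=20,
    )["Findings"]

    for finding in findings:
        print(finding["Severity"]["Label"], "-", finding["Title"])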

Don't get tangled up in so many AWS security services and names. It's easier than you expect: Cognito, AWS Shield, Amazon Inspector and others are just used in specific scenarios.

So, based on our scenario above, we are going to dive into the different tools and how they serve data to Security Hub. Let's start:

GuardDuty. It's a threat detection solution that you can enable when needed, and it monitors malicious or unauthorized behaviour by users or roles: for example, unusual or failed API calls, unauthorized script or JSON deployments, or suspicious traffic from or to a virtual machine.

It takes data from DNS logs, VPC Flow Logs and CloudTrail, which record user or role logins, diagnostic events, etc., in your AWS accounts. Take into account that GuardDuty doesn't retain your logs; it just reads them, identifies findings and discards them. It works in the backend, so there is no impact in terms of performance. Finally, it's worth pointing out that AWS includes an additional source of information, threat intelligence feeds from AWS and partners, which makes the service even more powerful and flexible.
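
A small boto3 sketch (assuming GuardDuty is already enabled, so a detector exists in the region) shows how those findings can be read before they ever reach Security Hub:

    import boto3

    guardduty = boto3.client("guardduty", region_name="eu-west-1")

    # Use the first detector in the region and list its current findings.
    detector_id = guardduty.list_detectors()["DetectorIds"][0]
    finding_ids = guardduty.list_findings(DetectorId=detector_id)["FindingIds"]

    for finding in guardduty.get_findings(DetectorId=detector_id,
                                          FindingIds=finding_ids[:10])["Findings"]:
        print(finding["Severity"], finding["Type"], finding["Title"])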

Amazon Macie. It uses machine learning to discover, classify and protect the data you have at rest in thousands of S3 buckets, so you can understand what data you have and how your users and roles are accessing it.

It works by providing alerts on critical information that is unprotected or somehow exposed to the bad guys, and it combines CloudTrail information to see whether someone is trying to exploit the hole or vulnerability.

AWS IAM Access Analyzer. First, for those with no experience in AWS, let's understand what IAM is. AWS Identity and Access Management (IAM) is a web service that helps you securely control access to AWS resources. You use IAM to control who is authenticated (signed in) and authorized (has permissions) to use resources. With that introduction done, it's time to focus on this service.

IAM Access Analyzer is based on zones of trust. When we enable Access Analyzer, we can create an analyzer for all our AWS accounts or for just one account. The AWS organization or AWS account we choose is known as the zone of trust for that analyzer.

Once enabled, Access Analyzer analyzes the policies applied to all the supported resources in your zone of trust. After the first analysis, Access Analyzer re-analyzes these policies periodically. If a new policy is added or an existing one is modified, Access Analyzer analyzes the new or updated policy within about 30 minutes. If the tool finds an external entity, such as another AWS account belonging to another company, an AWS role or service, or even a federated user, it generates a finding indicating details such as the permissions granted and the possible risk of data compromise. You can fix the security hole and, if you want to confirm that the change you made to a policy resolves the access issue reported in a finding, you can rescan the resource by using the Rescan link, so you are sure you have solved the issue.
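
A hedged boto3 sketch of that workflow (assuming an analyzer already exists in the account; the resource ARN is hypothetical) would list the active findings and, after you fix a policy, request a rescan of the affected resource:

    import boto3

    access_analyzer = boto3.client("accessanalyzer", region_name="eu-west-1")

    # Use the first analyzer in the account (its organization/account is the zone of trust).
    analyzer_arn = access_analyzer.list_analyzers()["analyzers"][0]["arn"]

    # Active findings: resources shared with an entity outside the zone of trust.
    for finding in access_analyzer.list_findings(
            analyzerArn=analyzer_arn,
            filter={"status": {"eq": ["ACTIVE"]}})["findings"]:
        print(finding["resource"], finding["resourceType"])

    # After fixing the policy, the console "Rescan" link maps to a call like this:
    # access_analyzer.start_resource_scan(analyzerArn=analyzer_arn,
    #                                     resourceArn="arn:aws:s3:::my-exposed-bucket")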

To recap, Amazon GuardDuty, Amazon Macie and IAM Access Analyzer are the pillars of the data and KPIs ingested into AWS Security Hub. AWS Firewall Manager, Amazon Detective or CloudWatch can in some cases add more value to the dashboards showing the security posture of your AWS organization or AWS account.

In the next post, now that you have a good understanding of several AWS security services, we'll explain how Security Hub works and why it's a big change in how to maintain security posture and compliance the way it should be done.


NOTE: From my point of view, AWS Directory Service should be explained separately from the rest, as it relates to user authentication and authorisation in Microsoft environments.

Moreover, AWS engineers tend to include those tools in the AWS security webinars, and to me that makes the AWS security story more confusing for people who are just starting with the AWS cloud.

Enjoy the journey to the cloud with me…see you soon.