The first definitions of FinOps as such arose between 2018 and 2020, when it consolidated as a framework, or financial culture, for the cloud world, sponsored by the FinOps Foundation.

Today there are more than 10,000 FinOps practitioners, and more than 80% of enterprises in the US are looking for a way of working, a financial culture, to allocate costs and understand the OPEX challenges that the public cloud brings.

In EMEA, we are moving more slowly. Countries such as the UK, France, Germany, Italy and, here, Spain expect a strong expansion of FinOps culture in their companies (even SMBs) as well as in the public sector.

But why is there so much confusion about the objectives of this framework? Well, to be honest, a few misunderstandings have led to this situation.

“FinOps is something we are already doing: we use Reserved Instances, we regularly resize VMs and delete orphaned resources, and we even do tagging properly.”

Cloud economics, everything related to optimizing cloud spending with our engineers from a technical perspective, is only part of FinOps. Indeed, there is a capability called “Resource Utilization & Efficiency” and another called “Workload Management & Automation” that cover the scenarios where we need to optimize a cloud or multicloud environment.
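As an illustration of the “Resource Utilization & Efficiency” capability, here is a minimal sketch of how a FinOps team might flag rightsizing candidates from average CPU figures. The 20% threshold and the inventory below are hypothetical examples, not an official FinOps Foundation rule:

```python
# Sketch: flag rightsizing candidates from average CPU utilization.
# The threshold and the VM inventory are hypothetical examples.

def rightsizing_candidates(vms, cpu_threshold=20.0):
    """Return names of VMs whose average CPU sits below the threshold."""
    return [vm["name"] for vm in vms if vm["avg_cpu_percent"] < cpu_threshold]

inventory = [
    {"name": "web-01", "avg_cpu_percent": 55.0},
    {"name": "batch-02", "avg_cpu_percent": 8.5},   # likely oversized
    {"name": "db-01", "avg_cpu_percent": 34.0},
]
print(rightsizing_candidates(inventory))  # ['batch-02']
```

In a real environment the utilization figures would come from your monitoring tooling, and the decision would also weigh memory, I/O and business criticality.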

“FinOps is just about saving money and reducing cloud consumption.”

FinOps is a culture that integrates the finance team, IT engineers, procurement, the PMO and CIOs, and breaks silos in order to allocate cloud costs and understand the appropriate cost metrics, the ROI for each business case and the motivations to invest in the cloud. It is about being cost-effective, not just about saving money.

“FinOps doesn't make sense. It doesn't provide any value to our business.”

Some roles, such as CTOs, engineers and some IT managers, don't see any value in a framework like FinOps. They think it's too “high level”, maybe a passing fashion. Remarkably, they are technical people, yet they don't see that technology is needed to support a business. We have to point out here that the public cloud ends up in chaos if you don't understand how to manage cloud cost in a new OPEX world.

“When we buy a cost management console, we will move faster to a Run phase within FinOps.”

This is not the right approach. If you buy a cost console tool such as CloudHealth, Cloudability or any other on the market, it is just a tool; you need to adapt it to your cloud provider or multicloud solution, together with governance, automation, forecasting, budget strategies, etc.

“If you have a FinOps team, have applied some FinOps principles, follow the stages and use a cost management console with some reports, you are done.”

This is the start of a journey. Even if you set up a FinOps team and you think you are on the way to a Run stage, you need to be sure your CEO and executive staff push this culture down through the organization, that you have appropriate business metrics, and that cost allocation and cost owners are part of your company culture. This is a cycle of cost-effective optimization.

In the coming posts, I am going to underline more confusing concepts about FinOps and deep dive into the possibilities this framework provides to finance and CFO roles.

Enjoy the journey to the cloud with me…see you soon.


Hybrid cloud is a big challenge for almost all companies out there. They need to integrate their on-premises workloads and cloud-native solutions with similar governance, security posture and DevOps practices. Some solutions use more or fewer VMs, microservices, data analytics or ETL. But what happens when you want to use AWS as well as Azure, and you obviously need a single pane of glass to provide a holistic view of your multicloud environment?

Are there technologies to solve such a mess? Let's laser-focus on the big pain points to cope with:

  • Your IT team has solid knowledge of Azure but very limited knowledge of AWS.
  • You want to achieve governance of services and IT solutions as a whole, even if workloads are spread between both clouds.
  • AWS accounts are isolated, with no landing zone, as they were inherited from previous mergers or company acquisitions.

Here you can see a lab where I was testing VMs in an AWS account with visibility from my Azure Arc console.

Tagging and cost control: Within Azure Arc you can edit tags on EC2 VMs and build a unified view of an IT service, even if its VMs are located in a multicloud environment. So from your favourite cost management console, Azure Cost Management, you can connect to AWS and speed up your multicloud FinOps strategy.

Standardization of policies and governance: Linux or Windows VMs on EC2 can be managed in exactly the same way as VMs on Azure or on-premises. Your Azure Policies will address all the issues regarding permissions, compliance, authorization to resources, etc. The best part: it doesn't matter whether they are on Azure or AWS.
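As a sketch of how such a policy might look (the tag name and effect are made-up examples; check the rule syntax against the Azure Policy documentation before use), an audit policy flagging Arc-connected machines that lack a cost-allocation tag could be:

```json
{
  "policyRule": {
    "if": {
      "allOf": [
        { "field": "type", "equals": "Microsoft.HybridCompute/machines" },
        { "field": "tags['costCenter']", "exists": "false" }
      ]
    },
    "then": { "effect": "audit" }
  }
}
```

Because Arc projects the EC2 VMs as `Microsoft.HybridCompute/machines` resources, the same definition applies to them and to your Azure machines alike.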

Working with Microsoft Defender anywhere: Azure Arc provides an agent that can be deployed on VMs so that you can afterwards set up specific initiatives to activate and roll out Microsoft Defender for Endpoint. Take into account that you will receive all the antimalware alerts and security tracking in the same console.

Another approach would be to register EKS in Azure Arc so you can provide governance to AWS Kubernetes clusters from the Azure Arc console. Quite interesting for those who have strong knowledge of Azure but want to deal with AWS as well.

I hope you enjoy this post. See you in the cloud.


The drums are beating; can you hear them? But I don't know where the sound is coming from. It's thunderous, ringing in my ears… boom, boom… pause… boom, boom. Like elephant heartbeats…

It is Cloudmanji! A game that I, as a financial manager or a specialist in the purchasing department, don't like to play, although I have been invited and can't decline the appointment.

OPEX is all around your work. It's a new jungle where CAPEX is coming off the board. You are buying software subscriptions, Software as a Service (software that you consume but don't need to install or maintain), cloud infrastructure as PAYG (pay as you go), and software products within the marketplace of your favourite hyperscaler. Moreover, others are buying those software solutions, likely someone in the IT department, and you just receive invoices with no explanations at all.

Therefore, there are silos in your company where not everybody is aligned on cloud spending, or maybe the cloud adoption framework was rolled out focused on some personas and business areas, but not on all the stakeholders that should be involved in cloud projects.

First beat of drums

Number one: the cost spike. This kind of scenario usually comes out of data analytics or big data labs, or proofs of concept aimed at analysing specific information in order to take quick business decisions. The sponsor could be HR, Marketing or the PMO directly. The CIO is aligned with those guys, but he can't figure out what comes next.

Sometimes it happens because a junior consultant is responsible for deploying the prototype infrastructure on AWS, Oracle Cloud or Azure, and he just follows a default configuration, which in many cases means choosing a premium tier for storage or databases. Adding to that, there was no governance at all regarding what the IT guys can and can't do.

The outcome is an unexpected invoice to Finance, which means a stain on the budget for the big fish and a cash burnout for a small company or a startup.

The CFO wants to cut heads and doesn't know where to start. Who was guilty, if anyone? Who did things wrong? Where do you start to fire up your team and see the cloud as an ally?

Second beat of drums

Number two: an orphaned, shared cost for the company, expensive and necessary, a cost which nobody wants assigned to their cost center code.

How do you split this kind of cost across several departments or countries? Let's say you have several factories: four in France and, even worse, two more in Spain, one in Portugal and one in the UK. Due to Brexit and currency issues, things get extraordinarily complex, as you need to invoice in both euros and pounds. There are also withholding taxes between Europe and the UK.

You started with an application modernization strategy and migrated legacy applications to Azure. Application refactoring (changing code and decoupling the architecture into small pieces called microservices) improved deployment and on-demand scaling for all those factories, providing quick and effective support for assembly line modifications.

All the factories need that cloud infrastructure; it's critical and represents a shared cost for all of them. It's a complex cost which you can't estimate properly: production, preproduction and development environments deployed in a multi-region approach. The UK says France should pay the bill as they are the headquarters. Spain and Portugal say they can't pay the bill as their markets are smaller and less profitable than the others. France says the cost should be split into euros (so France, Spain and Portugal pay in euros) and pounds, with the UK paying for its factory in pounds and assuming everything related to Brexit, such as withholding taxes.

How do you allocate cloud costs for such infrastructure? How do you estimate the appropriate average consumption for each factory? How do you align Finance, the IT guys and the board to be on the same page and work together to find a solution?
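One pragmatic answer to the first question is usage-based allocation: meter each factory's consumption and split the shared bill proportionally. A minimal sketch, with entirely hypothetical consumption figures:

```python
# Sketch: proportional allocation of a shared cloud bill by measured
# consumption. Factory names and usage figures are hypothetical.

def allocate_cost(total_cost, usage_by_owner):
    """Split total_cost proportionally to each owner's measured usage."""
    total_usage = sum(usage_by_owner.values())
    return {owner: round(total_cost * usage / total_usage, 2)
            for owner, usage in usage_by_owner.items()}

# Hypothetical monthly vCPU-hours consumed per factory group
usage = {"France": 4000, "Spain": 1500, "Portugal": 500, "UK": 2000}
shares = allocate_cost(80_000, usage)  # a hypothetical 80,000 EUR shared bill
print(shares)
```

Currency conversion and withholding taxes would then be applied per country on top of each share, which is exactly the kind of agreement a FinOps practice is meant to broker.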

Third beat of drums

A company jumped into the cloud. They migrated three on-premises data centers. No cloud adoption framework was put on the table, which ended up in a bunch of solutions with no adequate scaling, security issues, and no governance or cost allocation at all. A chaos, or a nightmare, that each new CIO has to cope with.

Not to mention, the company has workloads in two different hyperscalers, for instance GCP and Azure.

After four CIOs and two CISOs have gone through the company, who is in charge of this scenario? Finance says the situation is terrifying and horrible: no clear budgets, no budget alerts, no cost allocation, etc. How do you deal with OPEX?

To sum up, these scenarios are covered by FinOps. This is a methodology where you work with the IT cloud engineers, the CIO, your DevOps team, your purchasing department, Finance, the PMO and some skilled people called FinOps specialists.

In the next Cloudmanji episode, I'll explain what this approach is all about and how you can leverage the methodology to deal with all these situations.

Dear CFOs, purchasing managers and IT guys, enjoy the journey to the cloud with me… see you in the next post.


When you take a look at AWS, you can smell the origin of their public cloud strategy and see why you can buy third-party technology solutions such as Palo Alto firewalls, Red Hat or SUSE Linux VMs with lots of applications, or even products from Cisco and other network providers. As you know, marketplaces are nothing more than platforms which enable transactions between customers and third-party sellers.

Jim Collins coined the term “flywheel effect” and explained the concept to Jeff Bezos, who saw an incredible opportunity where other people would have seen just a methodology with no chance of survival.

The idea is simple. Create a virtuous cycle that increases the number of sellers who offer their products and services, which in turn increases the range of offers and the competitiveness of prices, making it more attractive for users to find exactly what they want at the right price.

Hence, this improves the traffic to the platform and drives more sellers and customers to buy there. Moreover, you reduce prices for users, and they get used to visiting your platform or marketplace from time to time.

From Amazon to AWS (Amazon Web Service) Marketplace

AWS Marketplace was the first cloud marketplace among the hyperscalers. AWS started the journey of selling third-party IT products and services following in the footsteps of the Amazon platform.

Customers can buy thousands of ISV products and services to deploy with agility, or just for testing, to find out whether a specific piece of software makes sense and fills a gap in their company.

There is flexibility in prices, offer terms and conditions. There are pricing plans with an annual approach for 12-month subscriptions, or even for just one month if you need, for example, to roll out a POC. There are others, such as usage pricing, where customers just pay for what they use in a PAYG approach, or pricing models for specific product delivery methods such as containers or ML.
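To illustrate why the choice of pricing plan matters, here is a toy comparison of an annual subscription against PAYG usage pricing; every figure below is hypothetical, not a real marketplace price:

```python
# Toy example: compare a 12-month subscription against PAYG usage
# pricing for a marketplace product. All prices are hypothetical.

def annual_cost_subscription(price_per_year: float) -> float:
    return price_per_year

def annual_cost_payg(hours_per_month: float, price_per_hour: float) -> float:
    return hours_per_month * 12 * price_per_hour

subscription = annual_cost_subscription(6000.0)  # flat yearly fee
light_usage = annual_cost_payg(100, 2.5)         # 100 hours per month
heavy_usage = annual_cost_payg(500, 2.5)         # 500 hours per month

# Light users stay cheaper on PAYG; heavy users are better off subscribing.
print(light_usage, subscription, heavy_usage)
```

The break-even point between the two models is exactly the kind of analysis a FinOps-minded purchasing department should run before committing to a plan.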

It is very flexible, as you can even buy professional services products, which are in general offerings of packaged professional services. All the offers can be tailored to your target company if you are an AWS partner, and as a customer you can access public or even private offers to leverage better discounts or improve some aspects of the ISV product or the consultancy you deal with.

Lots of solutions are waiting for you…

You can make plans for some offer types publicly available or available only to a specific (private) audience, just like the Azure Marketplace we explained in the previous post, which follows the same marketplace strategy as AWS did.

In summary, if you are figuring out what value an ISV can bring to your business on the cloud, and you want to leverage some AWS partner professional services in a specific area such as cybersecurity or SAP, this is a chance you should not miss.

Enjoy the journey to the cloud with me…see you then in the next post.


When you, as a user, access your Azure portal or AWS portal, you have the option to buy thousands of products or solutions preconfigured for you. You don't need to worry about licences or the IT capabilities to design or deploy a specific solution, as they are all built following customer needs by several AWS or Microsoft partners and ISVs (independent software vendors).

We will speak about the AWS Marketplace later, in a new post. Just to point it out: it was launched in 2012 to accommodate and foster the growth of AWS services from third-party providers that had built their own solutions on top of the Amazon Web Services platform, such as ISVs, SIs (system integrators) and resellers, so customers could buy exactly what they needed, when they needed it, adding tremendous flexibility to grow their cloud solutions aligned with the business.

Azure Marketplace, for its part, was launched in 2014. It is a starting point for go-to-market IT software applications and services built by industry-leading technology companies. The commercial marketplace is available in 141 regions, on a per-plan basis.

What are the plans and how to use them (as a Partner)?

Microsoft partners can publish interesting solutions which bundle licenses and services together within the Azure Marketplace. On one hand, you don't need to acquire licenses, as their prices are prorated within the plan price. On the other hand, you have access to expertise without hiring new employees for your IT team.

A plan defines an offer's scope and limits, and the associated pricing when applicable. For example, depending on the offer type, you can select regional markets and choose whether a plan is visible to the public or only to a private audience. Some offer types support a scope of subscriptions, some support consumption-based pricing, and some let a customer purchase the offer with a license (BYOL) they have purchased directly from the publisher.

Depending on the offer type (Azure managed application, Azure solution template, Azure container, IoT Edge module, managed service, software as a service, or Azure virtual machine), plans may support pricing options or not (Azure container and managed service offers, for instance, support BYOL plans without pricing options), as well as a private audience option.
  • Markets: Every plan must be available in at least one market. You have the option to select only “Tax Remitted” countries, in which Microsoft remits sales and use tax on your behalf.
  • Pricing: Pricing models only apply to plans for Azure managed application, SaaS, and Azure virtual machine offers. An offer can have only one pricing model. For example, a SaaS offer cannot have one plan that’s flat rate and another plan that’s per user.
  • Plan visibility: Depending on the offer type, you can define a private audience or hide the offer or plan from Azure Marketplace.

How do we publish, and what kind of visibility can we provide (as a partner)?

You can make plans for some offer types publicly available or available to only a specific (private) audience. Offers with private plans will be published to the Azure portal.

You can configure a single offer type in different ways to enable different publishing options, listing options, provisioning, or pricing. The publishing option and configuration of the offer type also align with the offer eligibility and technical requirements.

Be sure to review the online store and offer type eligibility requirements and the technical publishing requirements before creating your offer.

To publish your offers to Azure Marketplace, you need to have a commercial marketplace account in Partner Center and ensure your account is enrolled in the commercial marketplace program.

Also, if your offer is published with a private plan, you can update the audience or choose to make the plan available to everyone. After a plan is published as visible to everyone, it must remain visible to everyone and cannot be configured as a private plan again.

Finally, as a partner you can enable a free trial on plans for transactable Azure virtual machine and SaaS offers.

For example, Azure virtual machine plans allow for 1-, 3-, or 6-month free trials. When a customer selects a free trial, Microsoft collects their billing information but doesn't start billing the customer until the trial is converted to a paid subscription.

What are your benefits when using Azure Marketplace (As a User)?

The marketplace brings flexibility to customers, as they can immediately buy any kind of plan-based offer, choosing among products from thousands of ISVs, without losing time dealing with each vendor or understanding the support model or licensing options in detail.

In the Azure portal, select + Create a resource or search for “marketplace”. Then browse the categories on the left side or use the search bar, which includes a filter function, and choose what you need.

Likewise, there are lots of consultancy services provided by several Microsoft partners, some of them as a free trial, so you can test the quality of their professional services and see their approach to fixing your pain points.

Enjoy the journey to the cloud with me…see you then in the next post.


It sounds quite familiar when we point out some functionalities of Kubernetes as a service on the cloud compared with on-premises Kubernetes solutions. Landing with the Cloud Native Computing Foundation (CNCF) in 2016, K8s came to the scene alongside open source solutions such as Red Hat OpenShift and SUSE Rancher. Therefore, as you can imagine, Kubernetes hasn't been on the cloud that long.

To be precise, EKS launched on 5 June 2018, and AKS was launched by Microsoft on 24 October 2017. Here we have the current AWS EKS release calendar.

And here, Microsoft's approach, the AKS release calendar as well.

Upgrade Kubernetes on Azure & AWS

No surprises related to upgrading versions for either cloud Kubernetes solution. On one hand, with reference to Azure, when you upgrade a supported AKS cluster, Kubernetes minor versions cannot be skipped. On the other hand, because Amazon EKS runs a highly available control plane, you can update only one minor version at a time. So even on the cloud, magic doesn't exist. And we have some bad news for the lazy boys.

Regarding allowed AKS cluster versions, Microsoft says: “If a cluster has been out of support for more than three (3) minor versions and has been found to carry security risks, Azure proactively contacts you to upgrade your cluster. If you do not take further action, Azure reserves the right to automatically upgrade your cluster on your behalf“. So watch out if you are a lazy boy, as Microsoft can do the job for you :-).

But what happens to the lazy boys on AWS? Yes, you guessed it: ”If any clusters in your account are running the version nearing the end of support, Amazon EKS sends out a notice through the AWS Health Dashboard approximately 12 months after the Kubernetes version was released on Amazon EKS. The notice includes the end of support date.“. Adding to that: “On the end of support date, you can no longer create new Amazon EKS clusters with the unsupported version. Existing control planes are automatically updated by Amazon EKS to the earliest supported version through a gradual deployment process after the end of support date. After the automatic control plane update, make sure to manually update cluster add-ons and Amazon EC2 nodes“.

Moreover, take your API versions into account; if they are deprecated, you are in trouble. Please follow this guide to solve the issue as soon as possible.
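Since minor versions cannot be skipped on either platform, an outdated cluster implies a chain of sequential upgrades. A small sketch of how you could enumerate that chain for planning purposes (version strings are examples):

```python
# Sketch: because neither AKS nor EKS lets you skip Kubernetes minor
# versions, moving an old cluster forward means stepping through every
# intermediate minor release. This helper lists that chain.

def upgrade_path(current: str, target: str) -> list[str]:
    """Return the ordered minor versions to apply, e.g. 1.24 -> 1.27."""
    major, cur_minor = (int(x) for x in current.split("."))
    _, tgt_minor = (int(x) for x in target.split("."))
    if tgt_minor <= cur_minor:
        return []
    return [f"{major}.{m}" for m in range(cur_minor + 1, tgt_minor + 1)]

print(upgrade_path("1.24", "1.27"))  # ['1.25', '1.26', '1.27']
```

Each hop also means re-testing your workloads and checking for deprecated APIs, so a three-version gap is three upgrade projects, not one.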

Quotas, Limits and Regions

Regarding service quotas, limits and regions, maybe you can draw a line in the sand to decide what makes sense for you and your applications. Each hyperscaler has its own capabilities in terms of scalability and resilience. Let's take a look at this…


AKS offers a strong solution for your applications…

In terms of region availability, all will be smooth and cheerful if you don't have to work in China. If you do, and you don't want to struggle with this, just verify which regions have AKS ready to be rolled out.


Here you can see the EKS approach..

Regarding regions, for China please ask AWS directly. In the case of Africa, EKS had not been deployed yet at the time of writing this post.

To summarize, AWS runs the Kubernetes control plane instances across multiple availability zones. It automatically replaces unhealthy nodes and provides scalability and security to applications. It supports up to 3,000 nodes per cluster, but you have to pay for the control plane separately. AWS is more conservative in making new Kubernetes versions available and maintains the oldest ones for longer.

Meanwhile, Microsoft AKS supports about 1,000 nodes per cluster, tends to be faster than AWS at providing newer Kubernetes versions, can repair unhealthy nodes automatically, supports native GitOps strategies, and integrates Azure Monitor metrics. The control plane comes in two flavours depending on your needs: free tier and paid tier.


EKS can encrypt its persistent storage data with a service called AWS KMS, which, as many know, is very flexible, working with customer-managed keys or AWS-managed keys. AKS, in turn, uses Azure Storage Service Encryption (SSE) by default, which encrypts data at rest.

Finally, AKS can take advantage of Azure policies as well as Calico network policies.

Anyway, AWS EKS also supports Calico. I hope this article can somehow clarify your vision as a cloud architect, tech guy or just a CIO who wants to know where to migrate and refactor their monolithic on-premises solutions.

Enjoy the journey to the cloud with me…see you then in the next post.


Many companies have started adopting new technologies such as microservices, with Kubernetes as a main actor, to provide business agility for new application deployments, but also to provide a solid platform for the most critical web services, such as shopping portals and booking services, so they have the elasticity to provision resources to meet demand instantly.

Azure has its own flavour in its restaurant, AKS, just as AWS has EKS and GCP has GKE. Today I want to show a first approach to the Microsoft solution, with its advantages and, depending on the customer's vertical, maybe its disadvantages.

But first, let's dive into a traditional open source + hardware Kubernetes deployment and compare it with AKS, just to show you that the TCO and ROI are not exactly the same.

Hardware approach to deploying Kubernetes. Some customers prefer to invest in CAPEX, mostly storage and compute, and to minimize licensing costs using open source Kubernetes solutions. So let's bring up an estimation for this kind of solution.

On one hand, on average, following the recommendations of an open source company such as Kublr (similar to SUSE Rancher or Red Hat OpenShift), a cluster (a simple scenario with one master node and two worker nodes) has at least these hardware requirements from scratch, before adding any applications to run inside:

Master node: Kublr-Kubernetes master components (2 GB, 1.5 vCPU).

Worker node 1: Kublr-Kubernetes worker components (0.7 GB, 0.5 vCPU); feature: control plane (1.9 GB, 1.2 vCPU); feature: centralized monitoring (5 GB, 1.2 vCPU); feature: K8s core components (0.5 GB, 0.15 vCPU); feature: centralized logging (11 GB, 1.4 vCPU).

Worker node 2: Kublr-Kubernetes worker components (0.7 GB, 0.5 vCPU); feature: centralized logging (11 GB, 1.4 vCPU).

Obviously, if you want to deploy applications, and depending on their needs, this will increase constantly. The rule of thumb can be:

Available memory = (number of nodes) × (memory per node) – (number of nodes) × 0.7GB – (has Self-hosted logging) × 9GB – (has Self-hosted monitoring) × 2.9GB – 0.4 GB – 2GB (Central monitoring agent per every cluster).

Available CPU = (number of nodes) × (vCPU per node) – (number of nodes) × 0.5 – (has Self-hosted logging) × 1 – (has Self-hosted monitoring) × 1.4 – 0.1 – 0.7 (Central monitoring agent per every cluster).

*By default, Kublr disables scheduling business applications on the master, which can be modified. Thus, only worker nodes are used in the formula.
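The rule of thumb above can be turned into a small calculator so you can plug in your own cluster profile. The constants come from the formulas quoted in the text and should be treated as Kublr's estimates, not exact measurements:

```python
# Sketch of the Kublr capacity rule of thumb quoted above. Constants
# are taken from the formulas in the text (Kublr's estimates).

def available_memory_gb(nodes, mem_per_node_gb,
                        self_hosted_logging=True, self_hosted_monitoring=True):
    return (nodes * mem_per_node_gb
            - nodes * 0.7                        # worker components per node
            - (9 if self_hosted_logging else 0)
            - (2.9 if self_hosted_monitoring else 0)
            - 0.4
            - 2)                                 # central monitoring agent per cluster

def available_cpu(nodes, vcpu_per_node,
                  self_hosted_logging=True, self_hosted_monitoring=True):
    return (nodes * vcpu_per_node
            - nodes * 0.5
            - (1 if self_hosted_logging else 0)
            - (1.4 if self_hosted_monitoring else 0)
            - 0.1
            - 0.7)

# Example: two worker nodes with 16 GB / 4 vCPU each
print(available_memory_gb(2, 16))  # ~16.3 GB left for business apps
print(available_cpu(2, 4))         # ~3.8 vCPU left for business apps
```

Running the numbers like this makes it obvious how much of your CAPEX hardware is eaten by platform overhead before a single business application is deployed.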

Adding to this, take into account buying VMware vSphere licenses plus vDirector, as well as the hardware and its maintenance.

VMware vSphere deployment scheme for Kublr.

Some key takeaways for on-premises deployment

  • SLA depends on the number of clusters with master and worker nodes, their hardware profile, and whether you are using HCI (for example, VMware simply integrated Kubernetes with its hypervisor to serve HCI if needed; the solution is called Tanzu). Maybe you can set up a five-nines scenario, though it is quite expensive.
  • CAPEX is always good for some companies, as it reduces taxes. But dealing with the purchasing department is sometimes needed if you want to leverage cloud benefits as well.
  • Open source, instead of vendors such as VMware or IBM, provides more flexibility to use K8s, with not just one default vendor configuration but flexible configurations.
  • Watch out for performance, underutilization, security and disaster recovery. They are all quite challenging and commonly found in these scenarios.
  • An on-premises approach may be a good option if you want to customize your applications with specific CI/CD tools and plugins, for example for Jenkins.

AKS approach to deploying Kubernetes on the cloud. Microsoft's alternative provides a cluster with a managed control plane and as many worker nodes as needed, with the right hardware profiles, even scaling out or in depending on the customer's scenario, which is per se more flexible than a traditional on-premises option.

On the other hand, AKS is an approach that brings the best of Kubernetes while reducing the investment, and it is pure OPEX. We can point out some excellent benefits compared with an on-premises Kubernetes solution. The Azure Kubernetes Service (AKS) baseline cluster reference architecture would be the following approach.

Some Key Takeaways for AKS deployment

There are no costs for AKS itself in the deployment, management and operation of the Kubernetes cluster. The main cost drivers are the virtual machine instances, storage and networking resources consumed by the cluster. Consider choosing cheaper VMs for system node pools, and mainly Linux where possible.

  • We can start by deploying a cluster with a managed control plane and two worker nodes. The cost lies mostly in the worker nodes' compute, where Microsoft recommends DS2_v2 instances, so pay attention to how many namespaces you need and how many pods are included in each application deployment.
  • Adding to that, we have to consider persistent storage (pods have ephemeral storage, remember), the database requirements associated with each application and, to conclude, network traffic between Azure and on-premises.
  • Some clear advantages are:
    1. On-premises Kubernetes investments tend to result in an oversized cluster. AKS can be flexible and scalable to meet current business expectations: provision a cluster with a minimum number of nodes and enable the cluster autoscaler to monitor and make sizing decisions.
    2. Balance pod requests and limits to allow Kubernetes to allocate worker node resources with higher density, so that the Azure hardware (under the hood) is utilized up to the capacity you pay for.
    3. Use reserved instances for worker nodes for one or three years, or even use a Dev/Test plan to reduce costs for AKS in development or pre-production environments.
    4. Automate, automate, automate. We can deploy AKS infrastructure from scratch with a few clicks, for example with Bicep, ARM templates, etc.
    5. AKS can work in a multi-region approach. However, data transfers within availability zones of a region are not free. Microsoft says clearly that if your workload is multi-region or there are transfers across billing zones, you should expect additional bandwidth costs.
    6. GitOps is ready for AKS. As many of you know, GitOps brings best practices like version control, collaboration, compliance, and continuous integration/continuous deployment (CI/CD) to infrastructure automation.
    7. Finally, if you are figuring out how to provide governance to all your Azure Kubernetes scenarios, Azure Monitor (using the Container Insights overview) provides a multi-cluster view that shows the health status of all monitored Kubernetes clusters running Linux and Windows Server 2019. Moreover, you can leverage native Azure tools such as Kusto queries, Azure Cost Management or Azure Advisor to control your K8s costs.
Microsoft docs

In the next post we will focus on EKS and ECS in AWS environments. We'll see how to identify the TCO and ROI for both solutions.

Enjoy the journey to the cloud with me…see you then in the next post.

Hands-on Lab – Minikube with Hyper-V on Windows Server 2019

I was wondering some time ago how to explain a big bunch of theory about containerization, and about orchestrating those containers with Kubernetes, in a practical way. I know how it works; I know the architecture and the most important benefits of application modernization using Kubernetes. But why not play with Minikube a little bit and explain some takeaways about the most important actors in application modernization, like Kubernetes?

An all-in-one software to test Kubernetes at your own pace

Therefore, it's good to set up some examples in your lab so you can play with the commands as well as with the concepts. We will be using Minikube; as I mentioned before, this is software to test Kubernetes with an “all-in-one” architecture, a master node and a worker node together, in my case on Windows. So I started by downloading minikube.exe from here.

To summarize, you have one or more master nodes which are in charge of managing several worker nodes. Developers deploy application images on pods, usually with just one container inside (they can have more than one container, but it is not usual). A container is like a small package with an application, its libraries and the dependencies needed to run isolated from an operating system (so you don't need an OS licence for a container). Pods are the smallest compute unit within Kubernetes; they share networking and storage so they can work together on some kind of data, whether on the same worker node or not, as they can be distributed between worker nodes. You can kill or create pods based on your application requirements. They are ephemeral in nature, and how many should run during a period of time depends on the task or the specific service they support for their application.
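To make the pod concept concrete, here is a minimal pod manifest; the names are arbitrary examples, with a single nginx container standing in for an application image:

```yaml
# A minimal pod: one container packaging an application image.
# Names and the image tag are illustrative examples only.
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
  labels:
    app: demo
spec:
  containers:
    - name: demo-container
      image: nginx:1.25
      ports:
        - containerPort: 80
```

Saved as `pod.yaml`, it can be applied to the Minikube cluster with `kubectl apply -f pod.yaml`, which is exactly the kind of experiment this lab is for.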


For users, all of this scenario we have just explained is absolutely transparent. For example, Airbnb uses Kubernetes because the company can spread scalable clusters across several regions in the public cloud. So when there is a big event at the "World Trade Center" in Barcelona, thousands of queries go to Airbnb's Kubernetes services to respond with specific results. Indeed, depending on how many people are asking for accommodation, the Kubernetes clusters will create and replicate worker nodes, and even replicate pods across several worker nodes, to improve the user experience and answer as quickly as possible so the user doesn't leave the website.

Stepping back to our lab, we start Minikube using the Hyper-V driver. This sets up a fresh Linux VM on Hyper-V with all the components needed to test the scenario we described above.
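As a sketch, the start-up looks like the commands below. This assumes minikube.exe is on your PATH and the Hyper-V feature is enabled; the virtual switch name is an assumption of your local setup, and older Minikube releases used `--vm-driver` instead of `--driver`:

```shell
# Bootstrap the all-in-one cluster inside a Hyper-V Linux VM
# (run from an elevated PowerShell/CMD session)
minikube start --driver=hyperv --hyperv-virtual-switch "Primary Virtual Switch"

# Verify that the VM, kubelet and API server are all running
minikube status
```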

We can check the status of the components. The API server is the endpoint through which all the cluster components communicate. The scheduler (which decides which pod to place on which node based on resource requirements), the controller manager (which handles node failures, replicating or maintaining the correct number of pods) and the worker node components all talk to the API server. Also, pay attention to the kubelet below, which is in charge of the containers on each node and talks to the API server as well. The kubeconfig file holds the credentials and access details to connect to your Kubernetes cluster; we need it here because we will point our kubectl command at the cluster to create or kill pods in a minute.
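These are the kinds of checks I mean, as a sketch (the `componentstatuses` resource exists in Kubernetes versions of this era, though it was later deprecated):

```shell
# Where is the API server endpoint?
kubectl cluster-info

# Health of the scheduler, controller manager and etcd
kubectl get componentstatuses

# Which cluster and credentials is kubectl using? (reads your kubeconfig)
kubectl config view --minify
```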

So, here we are… Please also note the kube-proxy, which is used to distribute and balance traffic for services on each worker node. With several pods serving the same service in the backend, it acts like a referee in soccer, deciding where each request goes.

Just to recap: we have Minikube running on a Windows Server 2019 host with Hyper-V, with all the components needed to test some kubectl commands and create a simple pod in charge of serving a single query through its service.

I've created a pod which is just going to wait for information (in JSON format).
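A quick way to create such a pod — here assuming the stock echoserver image from the official Minikube tutorial, which simply answers any HTTP request it receives — would be something like this. Note that, depending on your kubectl version, `kubectl run` may create a Deployment instead of a bare Pod:

```shell
# Run a single pod with an HTTP echo server listening on port 8080
kubectl run hello-minikube --image=k8s.gcr.io/echoserver:1.4 --port=8080

# Watch until the pod is scheduled and its container is Running
kubectl get pods -w
```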

Then I forward port 8080 so I can connect to the endpoint.
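The forwarding itself is one command; the pod name below (`hello-minikube`) is just an assumption from my lab, so substitute whatever `kubectl get pods` shows in yours:

```shell
# Map local port 8080 to port 8080 of the pod; leave it running
# while you test the endpoint from another terminal
kubectl port-forward pod/hello-minikube 8080:8080
```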

I previously installed a Postman trial; you know, developers love this tool. They use it to verify that the data they send to an endpoint (usually in JSON format), or request from it, is consistent and behaves as they expect. Below I send some data in JSON format and, as you can see, the pod accepts it with a 200 (OK) status code.
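If you don't have Postman at hand, a rough equivalent with curl would be the following; the JSON payload is purely illustrative, and it assumes the port-forward from the previous step is still running:

```shell
# POST a JSON body to the forwarded endpoint and print only the
# HTTP status code of the response (expecting 200 if the pod is up)
curl -s -o /dev/null -w "%{http_code}\n" \
  -X POST -H "Content-Type: application/json" \
  -d '{"city": "Barcelona", "guests": 2}' \
  http://localhost:8080/
```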

In the next post we will look at AKS and why it has a great opportunity to become a very important player in the application modernization market.

Enjoy the journey to the cloud with me…see you then in the next post.


As we said in a previous post, the AWS Well-Architected Framework was officially launched in 2015. The Microsoft Azure WAF took more time, as Microsoft started later, around 2020, with its own WAF methodology. In any case, it is a collection of best practices, guides and blueprints, just like those of its competitors AWS and Google (which calls its version the "4 key architecture principles/pillars", but covers similar points), built on experience and feedback from several stakeholders.

To summarize, the Azure and AWS WAFs, and the 4 key Google architecture principles/pillars, help cloud architects build secure, high-performing, resilient and efficient infrastructure for their business applications and workloads. Moreover, they provide a better user experience (UX) for employees and users.

Azure Approach...

From the Microsoft point of view, there are also five clear pillars, just as for AWS:

  • Cost Optimization – Focused on managing costs and reducing them as much as possible according to the scenario.
  • Operational Excellence – Focused on achieving excellence in the operations processes that keep a system running in production.
  • Performance Efficiency – Focused on achieving the best adoption of an IT solution in the cloud.
  • Reliability – Focused on recovering a system or IT solution from failures so that it continues to function in the cloud.
  • Security – Focused on protecting applications and data from threats, keeping in mind the shared responsibility model, where the customer and Microsoft (or some partners) work together on a given IT solution.

Did you notice any change compared to AWS below? Well, Microsoft presents the same pillars but wraps some extra assets around them to make its offering more powerful: reference architectures, Azure Advisor as a starting point, the CCO Dashboard, Cloudockit, AZGovViz, specific partner offerings and the WAF review reporting (this last one is no different from AWS).

The Azure approach to the Well-Architected Framework changes some of the steps compared with AWS. AWS stays at a more high-level design (HLD) stage and drills down later, while Microsoft tries to gather more detail up front in order to sort out priorities, responsibilities and tools, and to address the right technologies to the right issues sooner.

It may seem that this workshop process runs smoothly and is easy to use. The truth is, you will surely struggle with some workloads or specific IT components. But what are the most important Microsoft Azure architecture "quality inhibitors" to face?

  • Cost Optimization – Underused or orphaned resources
  • Operational Excellence – No automation, or siloed automation
  • Performance Efficiency – No design for scaling
  • Reliability – No support for disaster recovery
  • Security – No security threat detection mechanism

As you can see, each hyperscaler has its own vision. But they are similar in the areas to evaluate and to fix when something is not working properly.

In the next post, we will cover in more depth the similarities and differences between the two big cloud titans, Azure and AWS. In the meantime, the ball is in your court. Read, read and read…I'm sure you will… 🙂

Enjoy the journey to the cloud with me…see you then in the next post.


After some years migrating workloads from on-premises to the cloud, and some years developing cloud-first apps, architectures, technologies and hyperscalers have been expanding their value and support for millions of businesses and companies… The Well-Architected Framework is nothing more than an approach to optimize all those IT solutions from several perspectives.

AWS Approach...

In 2012 AWS created the "Well-Architected" initiative to share with its customers and partners best practices for building in the cloud, and started publishing them in 2015. Now this set of principles is a reality and has expanded to many cloud scenarios.

Let's say we have some fairly complex workloads and IT solutions in cloud providers such as AWS or Microsoft Azure. On top of that, we are not sure the current scenario is designed according to best practices in terms of reliability, as the IT service responds to users with small delays from time to time. Moreover, when you browse your AWS Cost Explorer console, this IT service shows high consumption.

What can we do? How can we shed some light on this? Well, AWS provides a set of best practices, principles and strategies to reduce risk and impact in the areas I've mentioned before, as well as in others. Those areas — pillars, indeed — are five:

  • Operational Excellence: The ability to support development and run workloads effectively, gain insight into their operations, and to continuously improve supporting processes and procedures to deliver business value.
  • Security: Focused on protecting data, systems, and assets, and on taking advantage of cloud technologies to improve your security.
  • Reliability: Encompasses the ability of a workload to perform its intended function correctly and consistently when it's expected to.
  • Performance Efficiency: The ability to use computing resources efficiently to meet system requirements, and to maintain that efficiency as demand changes and technologies evolve.
  • Cost Optimization: The ability to run systems to deliver business value at the lowest price point.

The AWS approach to the Well-Architected Framework provides great value for improving a specific workload, or several interdependent workloads. To leverage the five pillars' potential, the Well-Architected Tool helps you review the state of your workloads and compare them against the latest AWS architectural best practices in those areas.

Moreover, if you want to be more specific and deep-dive into a technology or a disruptive solution to identify a clear impact or reduce risk for your workloads, AWS has offered AWS Well-Architected Lenses since 2017.

Some example Lenses which, from my point of view, bring value are:

Management and Governance Lens – AWS Well-Architected

Hybrid Networking Lens – AWS Well-Architected

SAP Lens – AWS Well-Architected

Financial Services Industry Lens – AWS Well-Architected

Serverless Applications Lens – AWS Well-Architected

In the second part of this post, we will explain the Azure Well-Architected Framework. I hope it's useful to you and makes your day!

Enjoy the journey to the cloud with me…see you then in the next post.