When you take a look at AWS, you can see the origin of its public cloud strategy and why you can buy third-party technology solutions such as Palo Alto firewalls, Red Hat or SUSE Linux VMs loaded with applications, or even products from Cisco and other network vendors. As you know, marketplaces are nothing more than platforms that enable transactions between customers and third-party sellers.
Jim Collins coined the term “flywheel effect” and explained the concept to Jeff Bezos, who saw an incredible opportunity where other people would have seen just a methodology with no future.
The idea is simple. Create a virtuous cycle that increases the number of sellers offering their products and services, which in turn increases the range of offers and prices, making it more attractive for users to find exactly what they want at the right price.
This, in turn, improves traffic to the platform and draws in more sellers and customers. Moreover, prices for users go down, and they get used to visiting your platform or marketplace regularly.
From Amazon to the AWS (Amazon Web Services) Marketplace
AWS Marketplace was the first cloud marketplace among the hyperscalers – AWS started the journey of selling third-party IT products and services following in the footsteps of the Amazon platform.
Customers can buy thousands of ISV products and services and deploy them with agility, whether just for testing or to find out whether a specific piece of software makes sense and fills a gap in their company.
There is flexibility in prices, offer terms, and conditions. There are annual plans for 12-month subscriptions, or even one-month plans if you need, for example, to roll out a PoC. There are others, such as usage-based pricing, where customers pay only for what they use in a PAYG (pay-as-you-go) approach, and pricing models for specific product delivery methods such as containers or ML.
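To make those options concrete, here is a minimal sketch in Python of the breakeven between a 12-month subscription and PAYG usage pricing. All prices are hypothetical placeholders, not real AWS Marketplace figures:

```python
def monthly_payg_cost(hours_used: float, rate_per_hour: float) -> float:
    """Pay-as-you-go: the customer only pays for metered usage."""
    return hours_used * rate_per_hour

def breakeven_hours(annual_price: float, rate_per_hour: float) -> float:
    """Monthly usage (in hours) above which a 12-month subscription
    becomes cheaper than paying as you go."""
    return (annual_price / 12) / rate_per_hour

# Hypothetical ISV product: $1,200/year flat, or $0.50 per metered hour.
print(breakeven_hours(1200, 0.50))   # 200.0 hours/month
print(monthly_payg_cost(100, 0.50))  # 50.0 dollars for a light month
```

In this made-up scenario, the annual plan wins above roughly 200 hours of monthly usage; below it, PAYG keeps the bill proportional to consumption.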
It is very flexible, as you can even buy professional services products, which are in general packaged professional services offerings. All offers can be tailored to a target company if you are an AWS partner, and as a customer you can access public or even private offers to leverage better discounts or improve some aspects of the deal with the ISV or the consultancy you work with.
Lots of solutions are waiting for you…
You can make plans for some offer types publicly available, or available only to a specific (private) audience, just like the Azure Marketplace we explained in the previous post, which follows the same marketplace strategy that AWS pioneered.
In summary, if you are figuring out what value an ISV can bring to your business in the cloud, or you want to leverage AWS partner professional services in a specific area such as cybersecurity or SAP, this is a chance you should not pass up.
Enjoy the journey to the cloud with me…see you then in the next post.
When you, as a user, access your Azure Portal or AWS portal, you have the option to buy thousands of products or solutions preconfigured for you. You don´t need to worry about licences or the IT capability to design and deploy a specific solution, as they are all built around customer needs by various AWS or Microsoft partners and ISVs (independent software vendors).
We will speak about the AWS Marketplace later, in a new post. Just to point it out: it was launched in 2012 to accommodate and foster the growth of services from third-party providers that had built their own solutions on top of the Amazon Web Services platform, such as ISVs, SIs (system integrators), and resellers, so customers could buy exactly what they needed, when they needed it, adding tremendous flexibility to grow their cloud solutions in line with the business.
The Azure Marketplace, in turn, was launched in 2014; it is a go-to-market starting point for IT software applications and services built by industry-leading technology companies. The commercial marketplace is available in 141 regions, on a per-plan basis.
What are the plans and how to use them (as a Partner)?
Microsoft partners can publish interesting solutions that bundle licenses and services together within the Azure Marketplace. On one hand, you don´t need to acquire licenses, as their cost is prorated into the price. On the other hand, you get access to expertise without hiring new employees for your IT team.
A plan defines an offer’s scope and limits, and the associated pricing when applicable. For example, depending on the offer type, you can select regional markets and choose whether a plan is visible to the public or only to a private audience. Some offer types support a scope of subscriptions, some support consumption-based pricing, and some let a customer purchase the offer with a license (BYOL, bring your own license) they have purchased directly from the publisher.
| Offer type | Plans with pricing options | Plans without pricing options | Private audience option |
|---|---|---|---|
| Azure managed application | ✔ | | ✔ |
| Azure solution template | | ✔ | ✔ |
| Azure container | | ✔ (BYOL) | |
| IoT Edge module | | ✔ | |
| Managed service | ✔ (BYOL) | | ✔ |
| Software as a service | ✔ | | ✔ |
| Azure virtual machine | ✔ | | ✔ |
Markets: Every plan must be available in at least one market. You have the option to select only “Tax Remitted” countries, in which Microsoft remits sales and use tax on your behalf.
Pricing: Pricing models only apply to plans for Azure managed application, SaaS, and Azure virtual machine offers. An offer can have only one pricing model. For example, a SaaS offer cannot have one plan that’s flat rate and another plan that’s per user.
Plan visibility: Depending on the offer type, you can define a private audience or hide the offer or plan from Azure Marketplace.
How do you publish, and what kind of visibility can you provide (as a Partner)?
You can make plans for some offer types publicly available or available to only a specific (private) audience. Offers with private plans will be published to the Azure portal.
You can configure a single offer type in different ways to enable different publishing options, listing options, provisioning, or pricing. The publishing option and configuration of the offer type also align with the offer's eligibility and technical requirements.
Be sure to review the online store and offer type eligibility requirements and the technical publishing requirements before creating your offer.
To publish your offers to Azure Marketplace, you need to have a commercial marketplace account in Partner Center and ensure your account is enrolled in the commercial marketplace program.
Also, if your offer is published with a private plan, you can update the audience or choose to make the plan available to everyone. After a plan is published as visible to everyone, it must remain visible to everyone and cannot be configured as a private plan again.
Finally, as a partner you can enable a free trial on plans for transactable Azure virtual machine and SaaS offers.
For example, Azure virtual machine plans allow 1-, 3-, or 6-month free trials. When a customer selects a free trial, their billing information is collected, but billing doesn't start until the trial is converted to a paid subscription.
What are your benefits when using Azure Marketplace (As a User)?
The marketplace brings flexibility to customers: they can immediately buy any kind of plan-based offer covering products from thousands of ISVs, without losing time dealing with each vendor or working out the details of the support model or licensing options.
In the Azure portal, select + Create a resource or search for “marketplace”. Then browse the categories on the left side, or use the search bar, which includes a filter function, and choose what you need.
Likewise, there are lots of consultancy services provided by various Microsoft partners, some of them as a free trial, so you can test the quality of their professional services and see their approach to fixing your pain points.
Enjoy the journey to the cloud with me…see you then in the next post.
It all sounds quite familiar by now, as we have pointed out some of the functionality of Kubernetes as a service on the cloud and compared it with on-premises Kubernetes solutions. Kubernetes landed with the Cloud Native Computing Foundation (CNCF) in 2016 and came to the scene alongside open-source solutions such as Red Hat OpenShift and SUSE Rancher. So, as you can imagine, Kubernetes hasn't been on the cloud for that long.
To be precise, EKS started on 5 June 2018, and AKS was launched by Microsoft on 24 October 2017. Here we have the current AWS EKS release calendar.
And here is Microsoft's approach, the AKS release calendar.
Upgrade Kubernetes on Azure & AWS
No surprises when it comes to upgrading versions on either cloud Kubernetes solution. On one hand, with regard to Azure, when you upgrade a supported AKS cluster, Kubernetes minor versions cannot be skipped. On the other hand, because Amazon EKS runs a highly available control plane, you can update only one minor version at a time. So even in the cloud, magic doesn't exist. And we also have some bad news for the lazy ones.
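That “one minor version at a time” rule can be sketched in a few lines of Python (the version numbers are illustrative; this is not an official AKS or EKS tool):

```python
def upgrade_path(current: str, target: str) -> list:
    """Ordered minor-version hops needed to upgrade a cluster,
    one minor version at a time (minor versions cannot be skipped)."""
    major, cur_minor = (int(x) for x in current.split("."))
    tgt_major, tgt_minor = (int(x) for x in target.split("."))
    if major != tgt_major or tgt_minor < cur_minor:
        raise ValueError("downgrades and major jumps are not handled here")
    return [f"{major}.{minor}" for minor in range(cur_minor + 1, tgt_minor + 1)]

# Upgrading 1.21 -> 1.24 means three sequential control-plane upgrades:
print(upgrade_path("1.21", "1.24"))  # ['1.22', '1.23', '1.24']
```

Each hop is a full cluster upgrade on its own, which is exactly why letting a cluster fall several versions behind becomes painful.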
Regarding the allowed AKS cluster versions, Microsoft says: “If a cluster has been out of support for more than three (3) minor versions and has been found to carry security risks, Azure proactively contacts you to upgrade your cluster. If you do not take further action, Azure reserves the right to automatically upgrade your cluster on your behalf”. So watch out if you tend to put upgrades off, as Microsoft can do the job for you :-).
But what happens to the procrastinators on AWS? Yes, you guessed it: “If any clusters in your account are running the version nearing the end of support, Amazon EKS sends out a notice through the AWS Health Dashboard approximately 12 months after the Kubernetes version was released on Amazon EKS. The notice includes the end of support date.” Adding to that: “On the end of support date, you can no longer create new Amazon EKS clusters with the unsupported version. Existing control planes are automatically updated by Amazon EKS to the earliest supported version through a gradual deployment process after the end of support date. After the automatic control plane update, make sure to manually update cluster add-ons and Amazon EC2 nodes”.
Also, take your API versions into account: if they are deprecated, you are in trouble. Please follow this guide to solve the issue as soon as possible.
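To illustrate why deprecated API versions matter, a tiny checker like the sketch below can flag manifests before an automatic upgrade breaks them. The mapping lists a few well-known removals (for example, `extensions/v1beta1` Ingress objects stopped being served in Kubernetes 1.22, and `extensions/v1beta1` Deployments in 1.16); it is a sketch, not a complete migration tool:

```python
# A few well-known removed APIs and their replacements.
REMOVED_APIS = {
    ("extensions/v1beta1", "Deployment"): "apps/v1",
    ("apps/v1beta2", "Deployment"): "apps/v1",
    ("extensions/v1beta1", "Ingress"): "networking.k8s.io/v1",
}

def replacement_api(manifest: dict):
    """Return the apiVersion to migrate to if this manifest uses a
    removed API, or None if it is fine."""
    return REMOVED_APIS.get((manifest.get("apiVersion"), manifest.get("kind")))

old_ingress = {"apiVersion": "extensions/v1beta1", "kind": "Ingress"}
print(replacement_api(old_ingress))  # networking.k8s.io/v1
```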
Quotas, Limits and Regions
Regarding service quotas, limits, and regions, maybe you can draw a line in the sand to decide what makes sense for you and your applications. Each hyperscaler has its own capabilities in terms of scalability and resilience. Let's take a look.
MICROSOFT AZURE
AKS offers a strong solution for your applications…
In terms of region availability, everything will be smooth and cheerful as long as you don't have to work in China. If you do, and you don't want to struggle with it, just verify which regions have AKS ready to roll out.
AWS
Here you can see the EKS approach…
Regarding regions, for China please ask AWS directly. In the case of Africa, EKS had not yet been deployed there at the time of writing this post.
To summarize, AWS runs Kubernetes control-plane instances across multiple availability zones. It automatically replaces unhealthy nodes and provides scalability and security for applications. It supports up to 3,000 nodes per cluster, but you have to pay separately for the control plane. AWS is more conservative about releasing new Kubernetes cluster versions and maintains the older ones for longer.
Meanwhile, Microsoft AKS supports only about 1,000 nodes per cluster, tends to be faster than AWS at providing newer Kubernetes versions, can repair unhealthy nodes automatically, supports native GitOps strategies, and integrates Azure Monitor metrics. The control plane comes in two flavours depending on your needs: a free tier and a paid tier.
SECURITY
EKS can encrypt its persistent storage data with a service called AWS KMS which, as many know, is very flexible, supporting customer-managed keys or AWS-managed keys. AKS, in turn, uses Azure Storage Service Encryption (SSE) by default, which encrypts data at rest.
Finally, AKS can take advantage of Azure policies as well as Calico network policies.
That said, AWS EKS also supports Calico. I hope this article somehow clarifies your vision as a cloud architect, tech person, or CIO who wants to know where to migrate and refactor monolithic on-premises solutions.
Enjoy the journey to the cloud with me…see you then in the next post.
Many companies have started adopting new technologies such as microservices, with Kubernetes as the main actor, to provide business agility for new application deployments, but also to offer a solid platform for their most critical web services, such as shopping portals and booking services, so they have the elasticity to provision resources to meet demand instantly.
Azure has its own flavour in this restaurant, AKS, just as AWS has EKS and GCP has GKE. Today I wanted to give a first look at the Microsoft solution, with its advantages and, depending on the customer's vertical, perhaps its disadvantages.
But first, let's dive into a traditional open-source-plus-hardware Kubernetes deployment and compare it with AKS, just to show you that the TCO and ROI are not exactly the same.
Hardware approach to deploying Kubernetes – Some customers prefer to invest in CAPEX, mostly storage and compute, and minimize licensing costs by using open-source Kubernetes solutions. So let's draw up an estimate for this kind of solution.
On one hand, on average, following the recommendations of an open-source vendor such as Kublr (similar to SUSE Rancher or Red Hat OpenShift), a cluster (a simple scenario with a master node plus two worker nodes) has at least the following hardware requirements from scratch, before adding any applications to run on it:
The SLA depends on the number of clusters with master and worker nodes, their hardware profiles, and whether you are using HCI (for example, VMware simply integrated Kubernetes with its hypervisor to serve HCI if needed; the solution is called Tanzu). You could even set up a five-nines (99.999%) scenario, though it would be quite expensive.
CAPEX is always attractive for some companies, as it reduces taxes. But dealing with the purchasing department is sometimes necessary if you want to leverage cloud benefits as well.
Open source, instead of vendors such as VMware or IBM, provides more flexibility in using K8s: not just one default vendor configuration, but flexible configurations.
Watch out for performance, underutilization, security, and disaster recovery; they are all quite challenging and commonly found in these scenarios.
The on-premises approach may be a good option if you want to customize your applications with specific CI/CD tools and plugins, for example for Jenkins.
AKS approach to deploying Kubernetes on the cloud – Microsoft's alternative provides a cluster with a master node and as many worker nodes as needed, with the right hardware profiles, even scaling out or in depending on the customer's scenario, which is, per se, more flexible than a traditional on-premises option.
On the other hand, AKS is an approach that brings the best of Kubernetes while reducing the investment, and it is pure OPEX. We can point out some excellent benefits compared with an on-premises Kubernetes solution. The Azure Kubernetes Service (AKS) baseline cluster reference would be the following approach.
Some Key Takeaways for AKS deployment
There are no costs associated with AKS for the deployment, management, and operation of the Kubernetes cluster itself. The main cost drivers are the virtual machine instances, storage, and networking resources consumed by the cluster. Consider choosing cheaper VMs for system node pools, and Linux wherever possible.
We can start by deploying a cluster with a master node and two worker nodes. The cost lies mostly in worker-node compute, where Microsoft recommends DS2_v2 instances, so pay attention to how many namespaces you need and how many pods are included in each application deployment.
On top of that, we have to consider persistent storage (pods have ephemeral storage, remember), the database requirements associated with each application and, finally, network traffic between Azure and on-premises.
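A back-of-the-envelope sketch of those cost drivers, in Python; every price below is a hypothetical placeholder (check the Azure pricing calculator for real figures):

```python
HOURS_PER_MONTH = 730  # common cloud-billing convention

def monthly_cluster_cost(node_count: int,
                         vm_hourly: float,
                         disk_monthly_per_node: float = 0.0,
                         egress_gb: float = 0.0,
                         egress_per_gb: float = 0.0) -> float:
    """Estimate a month of AKS spend: with the free tier there is no
    control-plane charge, so VMs, disks and traffic drive the bill."""
    compute = node_count * vm_hourly * HOURS_PER_MONTH
    storage = node_count * disk_monthly_per_node
    network = egress_gb * egress_per_gb
    return compute + storage + network

# Hypothetical: 3 DS2_v2-class workers at $0.10/h, a $5/month managed
# disk each, and 100 GB of egress at $0.08/GB.
print(round(monthly_cluster_cost(3, 0.10, 5.0, 100, 0.08), 2))  # 242.0
```

Even with made-up numbers, the shape of the bill is clear: worker-node compute dominates, with storage and network traffic as the long tail.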
Some clear advantages are:
On-premises Kubernetes investments tend to result in oversized clusters. AKS can be flexible and scalable to meet current business expectations: provision a cluster with a minimum number of nodes and enable the cluster autoscaler to monitor the cluster and make sizing decisions.
Balance pod requests and limits so Kubernetes can allocate worker-node resources at higher density, making full use of the Azure hardware (under the hood) capacity you pay for.
Use reserved instances for worker nodes for one or three years, or even use a Dev/Test plan to reduce the cost of AKS in development or pre-production environments.
Automate, automate, automate. We can deploy AKS infrastructure from scratch with a few clicks, for example with Bicep, ARM templates, etc.
AKS can work in a multi-region approach. Note, however, that data transfers between availability zones of a region are not free. Microsoft states clearly that if your workload is multi-region or there are transfers across billing zones, you should expect additional bandwidth costs.
GitOps is ready for AKS. As many of you know, GitOps brings best practices like version control, collaboration, compliance, and continuous integration/continuous deployment (CI/CD) to infrastructure automation.
Finally, if you are working out how to bring governance to all your Azure Kubernetes scenarios, Azure Monitor (through the Container Insights overview) provides a multi-cluster view showing the health status of all monitored Kubernetes clusters running Linux and Windows Server 2019. Moreover, you can leverage native Azure tools such as Kusto queries, Azure Cost Management, or Azure Advisor to control your K8s costs.
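The requests-and-limits advice above can be made concrete: given a node size and per-pod resource requests, how many pods can the scheduler pack onto one worker node? The sketch below reserves 20% of the node for system daemons, which is an illustrative assumption rather than the exact AKS reservation formula:

```python
def pods_per_node(node_cpu_m: int, node_mem_mi: int,
                  pod_cpu_m: int, pod_mem_mi: int,
                  reserved_frac: float = 0.2) -> int:
    """How many pods with the given requests fit on one worker node
    after reserving a fraction for kubelet and system daemons."""
    cpu_fit = int(node_cpu_m * (1 - reserved_frac) // pod_cpu_m)
    mem_fit = int(node_mem_mi * (1 - reserved_frac) // pod_mem_mi)
    return min(cpu_fit, mem_fit)  # the scarcer resource wins

# A DS2_v2-class node has 2 vCPU (2000 millicores) and 7 GiB (~7168 Mi).
# Pods requesting 250m CPU and 512 Mi memory:
print(pods_per_node(2000, 7168, 250, 512))  # 6 (CPU is the bottleneck)
```

Higher-density packing like this is what turns paid-for Azure capacity into actual workload instead of idle headroom.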
Microsoft docs
In the next post we will focus on EKS and ECS in AWS environments. We'll see how to identify the TCO and ROI for both K8s solutions.
Enjoy the journey to the cloud with me…see you then in the next post.
I was wondering some time ago how to start explaining a big bunch of theory about containerization, and about orchestrating those containers with Kubernetes, in a practical way. I know how it works; I know the architecture and the most important benefits of application modernization using Kubernetes. But why not play with Minikube a little and explain some takeaways about the most important actors in application modernization, such as Kubernetes?
An all-in-one tool to test Kubernetes at your own pace
Therefore, it's good to set up some examples in your lab so you can play with the commands as well as the concepts. We will be using Minikube; as I mentioned before, this is software for testing Kubernetes with an “all-in-one” architecture, a master node and a worker node together, in my case on Windows. So I started by downloading minikube.exe from here.
To summarize, you have one or more Master Nodes in charge of managing several Worker Nodes. Developers deploy application images in Pods, usually with just one Container inside (pods can hold more than one container, but that is not the usual case). A container is like a small package with an application, its libraries, and the dependencies it needs to run, isolated from the operating system (so you don't need an OS licence for a container). Pods are the smallest compute unit within Kubernetes; they share networking and storage so they can work together on some kind of data, whether on the same Worker Node or not, as they can be distributed across Worker Nodes. You can kill or create pods based on your application's requirements. They are ephemeral in nature, and how many should run at any given time depends on the task at hand or the specific service they support for their application.
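A minimal pod manifest illustrating these ideas might look like this (the names and image are illustrative); applying it with `kubectl apply -f pod.yaml` creates one pod running one container:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod          # illustrative name
  labels:
    app: hello
spec:
  containers:
  - name: web              # one container per pod is the usual case
    image: nginx:1.25      # the packaged application and its libraries
    ports:
    - containerPort: 80    # where the container listens inside the pod
```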
For users, this whole scenario we have just explained is absolutely transparent. For example, Airbnb uses Kubernetes because the company can spread scalable clusters over several regions of the public cloud. So when there is a big trade fair in Barcelona, thousands of queries hit Airbnb's Kubernetes services, which respond with specific results. Indeed, depending on how many people are searching for accommodation, the Kubernetes clusters will create and replicate worker nodes, and even replicate pods across several worker nodes, to improve the user experience and answer as fast as possible so users don't leave the website.
Stepping back to our lab, we start Minikube using the Hyper-V driver. It will set up a fresh Linux VM on Hyper-V with all the components needed to test the scenario we described above.
We can check the components' status. The API server is the endpoint through which all the cluster components communicate. The Scheduler (which decides which pod to place on which node based on resource requirements), the Controller Manager (which handles node failures and replicates or maintains the correct number of pods), and the worker-node components all talk to the API server. Also, pay attention below to the Kubelet (in charge of the containers on each node; it talks to the API server as well). The kubeconfig file holds all the credentials and access details needed to connect to your Kubernetes cluster. Here it is needed because we will connect our kubectl command to create or kill pods in a minute.
So, here we are… Please also note the kube-proxy component. It is used to distribute and balance traffic to the services on each worker node, so that in the backend several pods can serve the information; it is like a referee in soccer.
Just to recap: we have Minikube on Windows Server 2019 with Hyper-V running, with all the components needed to test some kubectl commands and create a simple pod in charge of serving a single query through its service.
I've created a pod which is just going to wait for information (in JSON format)…
I forward the port to 8080, so I can connect to the endpoint…
I had previously installed a “Postman” trial; you know, developers love this tool. Moreover, they need it to verify that the information they are sending to, or requesting from, an endpoint (usually in JSON format) is consistent and works as they expect. Below, I send some data in JSON format and, as you can see, the pod accepts it with a 200 (OK) status code.
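What Postman does here can be reproduced in a few lines of Python. The sketch below is self-contained: instead of the real pod, it stands up a local stand-in for the endpoint, then posts a JSON body to it and checks for the 200 status (the payload fields are made up):

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

class EchoHandler(BaseHTTPRequestHandler):
    """A stand-in for the pod's endpoint: accept a JSON body, answer 200."""
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        json.loads(body)           # fail loudly if the body is not JSON
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"OK")

    def log_message(self, *args):  # keep the console output quiet
        pass

def post_json(url: str, payload: dict) -> int:
    """POST a JSON payload (what Postman does) and return the status code."""
    req = Request(url, data=json.dumps(payload).encode(),
                  headers={"Content-Type": "application/json"})
    with urlopen(req) as resp:
        return resp.status

server = HTTPServer(("127.0.0.1", 0), EchoHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()
status = post_json(f"http://127.0.0.1:{server.server_port}/",
                   {"city": "Barcelona", "guests": 2})
print(status)  # 200
server.shutdown()
```

Against the real pod you would point `post_json` at the forwarded port, e.g. `http://localhost:8080/`.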
In the next post we will look at AKS and why it has every opportunity to become a very important player in the application modernization market.
Enjoy the journey to the cloud with me…see you then in the next post.