Kubernetes as a managed cloud service may sound familiar by now, especially compared with on-premises Kubernetes solutions. But Kubernetes only landed at the Cloud Native Computing Foundation (CNCF) in 2016, arriving on the scene alongside open source distributions such as Red Hat OpenShift and SUSE Rancher. So, as you can imagine, Kubernetes hasn't been on the cloud all that long.

In fact, EKS started on 5 June 2018, and AKS was launched by Microsoft on 24 October 2017. Here is the current AWS EKS release calendar.

And here is Microsoft's counterpart, the AKS release calendar.

Upgrade Kubernetes on Azure & AWS

No surprises when it comes to upgrading versions on either managed Kubernetes service. On one hand, with Azure, Kubernetes minor versions cannot be skipped when you upgrade a supported AKS cluster. On the other hand, because Amazon EKS runs a highly available control plane, you can update only one minor version at a time. So even on the cloud, magic doesn't exist. And there is some bad news for the lazy ones too.
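As a hedged illustration of that one-minor-version-at-a-time rule (the resource group, cluster names, and version numbers below are placeholders of mine, not values from this post):

```shell
# AKS: list the versions your cluster is allowed to upgrade to,
# then move up one minor version (minor versions cannot be skipped).
az aks get-upgrades --resource-group myRG --name myAKS --output table
az aks upgrade --resource-group myRG --name myAKS --kubernetes-version 1.27.7

# EKS: the control plane is likewise updated one minor version at a time.
aws eks update-cluster-version --name my-cluster --kubernetes-version 1.27
```

Worker nodes and add-ons still have to be updated separately after the control plane, on both platforms.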

Regarding allowed AKS cluster versions, Microsoft says: “If a cluster has been out of support for more than three (3) minor versions and has been found to carry security risks, Azure proactively contacts you to upgrade your cluster. If you do not take further action, Azure reserves the right to automatically upgrade your cluster on your behalf“. So watch out if you are one of the lazy ones, as Microsoft can do the job for you :-).

But what happens to the lazy ones on AWS? Yes, you guessed it: ”If any clusters in your account are running the version nearing the end of support, Amazon EKS sends out a notice through the AWS Health Dashboard approximately 12 months after the Kubernetes version was released on Amazon EKS. The notice includes the end of support date.“. And in addition: “On the end of support date, you can no longer create new Amazon EKS clusters with the unsupported version. Existing control planes are automatically updated by Amazon EKS to the earliest supported version through a gradual deployment process after the end of support date. After the automatic control plane update, make sure to manually update cluster add-ons and Amazon EC2 nodes“.

Moreover, keep an eye on your API versions: if they are deprecated, you are in trouble. Please follow this guide to resolve the issue as soon as possible.
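You can get a quick signal from the cluster itself before upgrading. A minimal sketch (the grep pattern only catches a few well-known legacy API groups; dedicated scanners such as pluto or kubent exist, but using them is my suggestion, not something this guide mandates):

```shell
# List every API version the cluster still serves; hits on these legacy
# groups mean some manifests may need updating before the next upgrade.
kubectl api-versions | grep -E 'extensions/v1beta1|apps/v1beta1|apps/v1beta2' \
  || echo "No well-known legacy API groups served"
```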

Quotas, Limits and Regions

Regarding service quotas, limits, and regions, it is worth drawing a line in the sand to decide what makes sense for you and your applications. Each hyperscaler has its own capabilities in terms of scalability and resilience. Let's take a look.


AKS offers a strong solution for your applications…

In terms of region availability, everything will be smooth and cheerful unless you have to work in China. In that case, to avoid struggling, just verify which regions have AKS ready to roll out.


Here you can see the EKS approach:

Regarding regions, for China please ask AWS directly. As for Africa, EKS was not yet deployed there at the time of writing this post.

To summarize, AWS runs the Kubernetes control plane instances across multiple Availability Zones, automatically replaces unhealthy nodes, and provides scalability and security for applications. It supports up to 3,000 nodes per cluster, but you pay separately for the control plane. AWS is more conservative about making new Kubernetes versions available and maintains the older ones for longer.

Meanwhile, Microsoft AKS supports roughly 1,000 nodes per cluster, tends to be faster than AWS at providing newer Kubernetes versions, can repair unhealthy nodes automatically, supports native GitOps strategies, and integrates Azure Monitor metrics. The control plane comes in two flavours depending on your needs: a free tier and a paid tier.


EKS can encrypt its persistent storage data with AWS KMS, which, as many know, is very flexible, supporting both customer-managed keys and AWS-managed keys. AKS, for its part, uses Azure Storage Service Encryption (SSE) by default, which encrypts data at rest.

Finally, AKS can take advantage of Azure policies as well as Calico policies.

That said, AWS EKS also supports Calico. I hope this article somehow clarifies your vision as a cloud architect, tech lead, or CIO wondering where to migrate and refactor their on-premises monolith.

Enjoy the journey to the cloud with me…see you then in the next post.


Many companies have started adopting new technologies such as microservices, with Kubernetes as the main actor, to provide business agility for deploying new applications, but also to build a solid platform for their most critical web services, such as shopping portals and booking services, giving them the elasticity to provision resources to meet demand instantly.

Azure has its own flavour on the menu, AKS, just as AWS has EKS and GCP has GKE. Today I want to take a first look at the Microsoft solution, with its advantages and, depending on the customer's vertical, perhaps its disadvantages.

But first, let's dive into a traditional open source + hardware Kubernetes deployment and compare it with AKS, just to show you that the TCO and ROI are not exactly the same.

Hardware approach to deploying Kubernetes

Some customers do prefer to invest in CAPEX, mostly storage and compute, and minimize licensing costs by using open source Kubernetes solutions. So let's draw up an estimate for this kind of solution.

On one hand, on average, following the recommendations of an open source company such as Kublr (similar to SUSE Rancher or Red Hat OpenShift), a cluster (a simple scenario with one master node and two worker nodes) has at least the following hardware requirements from scratch, before adding any applications to run inside:

Master node: Kublr-Kubernetes master components (2 GB, 1.5 vCPU)

Worker node 1: Kublr-Kubernetes worker components (0.7 GB, 0.5 vCPU); Feature: ControlPlane (1.9 GB, 1.2 vCPU); Feature: Centralized monitoring (5 GB, 1.2 vCPU); Feature: k8s core components (0.5 GB, 0.15 vCPU); Feature: Centralized logging (11 GB, 1.4 vCPU)

Worker node 2: Kublr-Kubernetes worker components (0.7 GB, 0.5 vCPU); Feature: Centralized logging (11 GB, 1.4 vCPU)

Obviously, if you deploy applications, these requirements will keep growing depending on their needs. The rule of thumb is:

Available memory = (number of nodes) × (memory per node) – (number of nodes) × 0.7 GB – (has self-hosted logging) × 9 GB – (has self-hosted monitoring) × 2.9 GB – 0.4 GB – 2 GB (central monitoring agent per cluster).

Available CPU = (number of nodes) × (vCPU per node) – (number of nodes) × 0.5 – (has self-hosted logging) × 1 – (has self-hosted monitoring) × 1.4 – 0.1 – 0.7 (central monitoring agent per cluster).

*By default, Kublr disables scheduling business applications on the master node (this can be changed), so only worker nodes are counted in the formula.
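The two rules of thumb above can be turned into a quick calculation. A minimal sketch with illustrative numbers of my own (3 worker nodes with 16 GB / 4 vCPU each, self-hosted logging and monitoring both enabled):

```shell
# Apply the available-memory and available-CPU rules of thumb from above.
awk 'BEGIN {
  nodes = 3; mem_per_node = 16; vcpu_per_node = 4
  logging = 1; monitoring = 1   # 1 = self-hosted feature enabled, 0 = disabled

  avail_mem = nodes*mem_per_node - nodes*0.7 - logging*9 - monitoring*2.9 - 0.4 - 2
  avail_cpu = nodes*vcpu_per_node - nodes*0.5 - logging*1 - monitoring*1.4 - 0.1 - 0.7

  printf "Available memory: %.1f GB\n", avail_mem   # 31.6 GB
  printf "Available vCPU:   %.1f\n", avail_cpu      # 7.3
}'
```

In other words, of the 48 GB and 12 vCPU you bought, roughly a third of the memory is already reserved for platform overhead before a single application pod is scheduled.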

On top of this, factor in VMware vSphere + vCloud Director licenses, plus the hardware and its maintenance.

VMware vSphere deployment scheme for Kublr. Source: https://docs.kublr.com/

Some Key Takeaways for on-premises deployment

  • SLA depends on the number of clusters with master and worker nodes, their hardware profile, and whether you are using HCI (for example, VMware simply integrated Kubernetes with its hypervisor to serve HCI if needed; the solution is called Tanzu). You could even set up a five-nines (99.999%) scenario, though it would be quite expensive.
  • CAPEX is always attractive for some companies as it reduces taxes. But dealing with the purchasing department is sometimes necessary if you want to leverage cloud benefits as well.
  • Open source, instead of vendors such as VMware or IBM, provides more flexibility in using K8s: not just one default vendor configuration, but flexible configurations.
  • Watch out for performance, underutilization, security, and disaster recovery. They are all quite challenging and common issues in these scenarios.
  • An on-premises approach may be a good option if you want to customize your applications with specific CI/CD tools and plugins, for example for Jenkins.

AKS approach to deploying Kubernetes on the cloud

Microsoft's alternative provides a cluster with a managed master node and as many worker nodes as needed, with the right hardware profiles, even scaling out or in depending on the customer's scenario, which is, per se, more flexible than a traditional on-premises option.

On the other hand, AKS is an approach that brings the best of Kubernetes while reducing the investment, and it is pure OPEX. We can point out some excellent benefits compared with an on-premises Kubernetes solution. The Azure Kubernetes Service (AKS) baseline cluster reference architecture would look like the following.

Some Key Takeaways for AKS deployment

There are no costs associated with AKS itself for deployment, management, and operation of the Kubernetes cluster. The main cost drivers are the virtual machine instances, storage, and networking resources consumed by the cluster. Consider choosing cheaper VMs for system node pools, and Linux wherever possible.
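As a sketch of that cost-conscious starting point (the resource group, cluster name, and region are placeholders of mine; the VM size follows the DS2_v2 recommendation discussed in this post):

```shell
# Create a small Linux-based AKS cluster with two DS2_v2 worker nodes.
# The managed control plane itself is free; you pay for nodes, storage and network.
az group create --name myRG --location westeurope
az aks create \
  --resource-group myRG \
  --name myAKS \
  --node-count 2 \
  --node-vm-size Standard_DS2_v2 \
  --generate-ssh-keys
```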

  • We can start by deploying a cluster with a master node and two worker nodes. The cost sits mostly in worker-node compute, where Microsoft recommends DS2_v2 instances, so pay attention to how many namespaces you need and how many pods each application deployment includes.
  • In addition, we have to consider persistent storage (remember, pods only have ephemeral storage), the database requirements associated with each application and, finally, network traffic between Azure and on-premises.
  • Some clear advantages are:
    1. On-premises Kubernetes investments tend to result in oversized clusters. AKS can be flexible and scalable to meet current business expectations: provision a cluster with the minimum number of nodes and enable the cluster autoscaler to monitor load and make sizing decisions.
    2. Balance pod requests and limits so Kubernetes can allocate worker-node resources at higher density, making full use of the Azure hardware capacity (under the hood) that you are paying for.
    3. Use reserved instances for worker nodes for one or three years, or even use a Dev/Test plan to reduce the cost of AKS in development or pre-production environments.
    4. Automate, automate, automate. We can deploy AKS infrastructure from scratch with a few clicks, for example with Bicep, ARM templates, etc.
    5. AKS can work in a multi-region approach. However, data transfers between availability zones of a region are not free. Microsoft states clearly that if your workload is multi-region, or there are transfers across billing zones, you should expect additional bandwidth costs.
    6. GitOps is ready for AKS. As many of you know, GitOps brings best practices like version control, collaboration, compliance, and continuous integration/continuous deployment (CI/CD) to infrastructure automation.
    7. Finally, if you are figuring out how to bring governance to all your Azure Kubernetes scenarios, Azure Monitor (via its Container Insights overview) provides a multi-cluster view showing the health status of all monitored Kubernetes clusters running Linux and Windows Server 2019. Moreover, you can leverage native Azure tools such as Kusto queries, Azure Cost Management, or Azure Advisor to control your K8s costs.
Source: Microsoft docs.
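Advantage 1 above (minimum nodes plus the cluster autoscaler) can be sketched like this, assuming an existing cluster named `myAKS` in resource group `myRG` (both names are placeholders):

```shell
# Enable the cluster autoscaler on the default node pool so AKS grows
# from 1 node up to 5 only when pending pods actually need the capacity.
az aks update \
  --resource-group myRG \
  --name myAKS \
  --enable-cluster-autoscaler \
  --min-count 1 \
  --max-count 5
```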

In the next post we will focus on EKS and ECS in AWS environments, and we'll see how to identify the TCO and ROI for both K8s solutions.

Enjoy the journey to the cloud with me…see you then in the next post.

Automation, a key recipe for being a best-in-class cloud company that many CIOs forget…

Automation is key to improving infrastructure standardization in order to speed up deployments, replicate the same environment several times, and reduce misconfigurations that are not aligned with regulatory or security policies.

It even lets you react more quickly to the market, with new web services in a region where you are extending your business, or attractive new features to sell your products and services to customers worldwide.

Moreover, it's crucial for maximizing performance and reliability and for increasing productivity, which impacts cost directly. If you read between the lines, we are talking about the Well-Architected Framework, a popular concept nowadays…

But can we really call it automation when it is just some runbooks and scripts here and there, solving specific issues in our private or public cloud?

Milind Govekar, research vice president at Gartner, said in 2016 that IT organizations need to move from opportunistic to systematic automation of IT processes.

On the consequences of opportunistic automation, he remarked: “Most current use of automation in IT involves scripting. Scripts are more fragile than agile. What you end up with is disconnected islands of automation, with spaghetti code throughout the organization, when what you need is a systematic, enterprise-wide lasagne.”

Therefore, it is crystal clear: let's treat automation as a centralized and systematic approach to address all the aspects and pain points I mentioned before. Automation as part of our operational excellence and our security posture, to improve reliability and resilience, and to reduce cost. To summarize: automation as a solid requirement for the WAF strategy within our organization or company.

But how can we approach automation in our current hybrid model?

Well, we need to choose the tool, but it depends on the environments you have to maintain: your infrastructure, and how many clouds, public or private, you are using. To be honest, the complexity of your technologies, your current infrastructure, and your daily DevOps effort determine much of the approach.

Let's see the best options on the market:

Azure DevOps & ARM & PowerShell – the right technologies for providing automation with a systematic strategy, not just with ARM template deployments alone, but also with alternatives such as PowerShell tasks or YAML files. You get RBAC, you get traceability, and you consolidate and deploy all your automation actions from one place, with a single pane of glass. On top of that, it works in perfect harmony with GitHub.

Furthermore, you can combine those solutions with others such as Azure Arc, Azure Security Center, or Azure Monitor to achieve a suitable Well-Architected Framework for your platform.

Terraform – another strong solution in the market, and a leader in consolidating automation across multi-cloud environments, as it works with many providers and plugins for ingesting data. Just take a look at this incredible list: Browse Providers | Terraform Registry

It is a great approach to IaC (Infrastructure as Code) for complex environments, as it can work with your Active Directory (announced a few days ago), or with AWS, Alibaba, GCP, VMware, etc.

The new Windows Active Directory (AD) provider for Terraform allows admins and sysops to manage users, groups, and group policies in your AD installation. It is very flexible in terms of versioning code within GitHub, and allows changes to be tracked and audited easily.
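Whichever provider you target, the everyday Terraform loop is the same. A minimal sketch (it assumes a working directory with `.tf` files already in place):

```shell
# Standard Terraform workflow: download providers, preview the change set,
# then apply it; the state file records what was created so it can be
# updated or destroyed later.
terraform init
terraform plan -out=tfplan
terraform apply tfplan
```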

CloudFormation – AWS CloudFormation is a framework for provisioning your cloud resources with infrastructure as code within AWS accounts. Specifically, a CloudFormation template is a JSON- or YAML-formatted declarative text file in which you define your cloud resources. AWS defines it as: “CloudFormation enables you to create and provision AWS infrastructure deployments predictably and repeatedly. It helps you leverage AWS products to build highly reliable, highly scalable, cost-effective applications in the cloud without worrying about creating and configuring the underlying AWS infrastructure”.
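In practice a template is pushed with the AWS CLI. A hedged sketch (the template file and stack name are placeholders of mine):

```shell
# Check the template is syntactically valid, then create or update the stack.
aws cloudformation validate-template --template-body file://template.yaml
aws cloudformation deploy --template-file template.yaml --stack-name my-stack
```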

AWS Control Tower, somewhat similar to the Ansible Tower I'll introduce below, works across AWS accounts and regions. It uses a more advanced CloudFormation feature, StackSets: AWS CloudFormation StackSets extends the functionality of stacks by enabling admins and sysops to create, update, or delete stacks across multiple accounts and regions with a single operation.

Ansible – it is defined as “a simple open source IT engine which automates application deployment, intra service orchestration, cloud provisioning and many other IT tools.” Ansible works very well with Red Hat OpenShift and OpenStack, and has a centralized solution called Ansible Tower to orchestrate your IT infrastructure through a visual dashboard including RBAC. Ansible uses playbooks. A playbook is a configuration file written in YAML that provides instructions for what needs to be done to bring a managed node into the desired state. It works on open source platforms and hardware solutions, integrating modules as listed here: All modules — Ansible Documentation
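A playbook run is a one-liner once an inventory exists. A minimal sketch (the inventory file, the `webservers` host group, and the playbook contents are all illustrative examples of mine):

```shell
# Write a tiny playbook that ensures a package is present on the managed
# nodes, then run it against the hosts listed in the inventory file.
cat > site.yml <<'EOF'
- hosts: webservers
  become: true
  tasks:
    - name: Ensure nginx is installed
      ansible.builtin.package:
        name: nginx
        state: present
EOF
ansible-playbook -i inventory.ini site.yml
```

Because the task declares a desired state rather than a command, re-running the playbook is safe: Ansible only acts when the node has drifted from that state.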

Chef and Puppet – also quite widespread, but more oriented to DevOps and CI/CD on complex environments to achieve DSC (Desired State Configuration). Both are agent-based configuration managers. Chef is similar to the previous approaches in that it uses JSON or YAML declarative text files and focuses more on supporting administrators; it uses recipes and cookbooks, via a Chef Server VM, to orchestrate standardized IaC on other VMs from scratch, while Puppet focuses more on programming criteria or controls throughout a VM's life. Anyway, these alternatives are not so popular in enterprise companies today, and private and public clouds are adopting the other automation solutions above.

To summarize, your automation strategy depends on your platform or platforms (which determine whether you need a holistic tool), on whether your goals are more related to DevOps or to the Well-Architected Framework, and on how complex your environment is.

What is beyond discussion, and a solid trend, is applying a systematic automation strategy to reduce silos in your daily infrastructure deployments. And please keep an eye on those scripts scattered here and there fixing issues.

In the next post we will see how automation works as IaC with Azure DevOps.

Enjoy the journey to the cloud with me…see you then in the next post.

Fast and Furious: Azure Pipelines (2) – deploy your infra and apps or releases with automation…

Living in a world faster than ever, tools focused on provisioning infrastructure, applications, and mobile apps in an automated way are not just important but crucial to surviving in a market where companies change their strategies from one week to the next. One region can be a company's primary market today, and a different one tomorrow.

The DevOps platforms of the most important providers have adopted this principle natively. Azure DevOps focuses on CI/CD like many of its competitors, but includes one secret weapon: the flexibility to deploy infra and apps in a matter of minutes, anywhere, anytime, with reliability and control.

Azure DevOps is compatible with Terraform (https://azure.microsoft.com/en-us/solutions/devops/terraform/) and with Ansible (https://docs.microsoft.com/en-us/azure/developer/ansible/overview) as ways to provide IaC (Infrastructure as Code). But it can also use its own ARM templates (https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/add-template-to-azure-pipelines) to create the infrastructure needed before deploying the mobile apps or releases that will sell our products in the new target market.

Finally, and quite interestingly, as a software company you need to ensure that your code is secure and free of bugs, so you don't make life easier for hackers or governments who are more than happy to learn your pillars in the market, your most important technology, or your scientific secrets.

To solve this problem you can integrate, in a very flexible manner, tools like SonarQube: https://www.sonarqube.org/microsoft-azure-devops-integration/

Be aware that this is real: the latest case happened to SolarWinds just a few days before this post was published, causing a big impact on US security: https://www.reuters.com/article/us-global-cyber-usa-dhs-idUSKBN28O2LY

So after this clarification, let me tell you that Azure Pipelines can reduce your technical debt (the impact of repeatedly reworking your code after choosing an easy, limited workaround that causes delays, security holes, and performance issues, instead of using a better approach from scratch), as well as improve the speed of delivery anywhere, anytime.

So we are going to create a new release and choose a scenario I created previously on Azure, where we have just a simple Web App on an App Service plan F1, which is free, as this is just a small lab.

We click on “create a new release”, choose our infra in an Azure tenant (this is up to you), and the build we want to deploy there.

Right now we have prepared our build and the infra where we want to deploy the software or release.

Just to recap, Microsoft provides two types of agent to run the jobs in charge of compiling the code or deploying the releases:

Microsoft-hosted agents: those run in Azure by Microsoft to support your software projects.

Self-hosted agents: those that can be installed on VMs, on-premises or in a different private cloud, for example, to support your software projects.
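Registering a self-hosted agent on a Linux VM looks roughly like this (the organization URL, PAT, and pool name are placeholders; the agent package version below is an example of mine and changes over time):

```shell
# Download and configure the Azure Pipelines agent, then run it as a service.
mkdir myagent && cd myagent
curl -LO https://vstsagentpackage.azureedge.net/agent/3.232.0/vsts-agent-linux-x64-3.232.0.tar.gz
tar zxvf vsts-agent-linux-x64-3.232.0.tar.gz
./config.sh --url https://dev.azure.com/myorg --auth pat --token <PAT> --pool Default
sudo ./svc.sh install && sudo ./svc.sh start
```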

Here we run a Microsoft-hosted agent:

We have the infra ready on Azure as well:

Finally, click “Deploy”… and the magic happens.

You can see a sample Microsoft lab website appear on the Web App quite quickly.

With this post we finish our global view of DevOps from the Microsoft perspective: a solid response to current times that solves many of the software lifecycle challenges.

Enjoy the journey to the cloud with me…see you then in the next post.

Your code and your builds from anywhere, anytime – Azure Repos and Azure Pipelines (1)

It's not magic, but a very versatile tool that provides all you need to work remotely on your continuous integration. If you want a repository with security, SSO, and integration with the best tools to work on your builds, and if you want a solution to automate the builds as well as the releases, you are in the right post.

First of all, and somehow starting from the end (yes, the end), you can choose Git, GitHub, Bitbucket (Atlassian), GitLab, etc. as the origin of the code for your builds. Yes, it is up to you where you keep your code. I wanted to point this out to show you how flexible the solution is.

So after this clarification, let me tell you that Azure Repos provides a Git repository by default. You will use a Gitflow repository approach, where you have the master branch on Azure Repos and several branches for developers to solve issues, develop new features, or fix bugs in a distributed way.

So after a pull request, some specific branch policies that you may or may not have put in place, and the approval of the stakeholders involved in the application's development, you merge your code into the master branch in the cloud, in your Azure Repos for that specific project.

In Repos you can maintain the code, JSON files, and YAML files, and clone, download, or make changes from your Eclipse or Visual Studio client, for example.

Furthermore, you have the tracking of commits and pushes made to your project's code, as well as an overview of all the branches currently active.

With your build strategy in place, you can prepare a new one very easily: just click “New Pipeline”.

Choose, as I told you at the start of this post, where your code lives.

Let's say we are going to use a YAML file. A YAML file is nothing more than a data-oriented configuration file in which you describe everything related to the application you want to compile: the runtime, the programming language, and the package manager for including specific libraries, for example.
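If you prefer the CLI to the portal, the same pipeline can be created from that YAML file (this requires the `azure-devops` extension for the Azure CLI; the organization, project, repository, and pipeline names below are placeholders of mine):

```shell
# Point a new pipeline at the azure-pipelines.yml stored in the repository.
az extension add --name azure-devops
az pipelines create \
  --org https://dev.azure.com/myorg \
  --project MyProject \
  --name my-build \
  --repository MyRepo \
  --repository-type tfsgit \
  --yml-path azure-pipelines.yml
```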

And finally, save and run your build. In this case it will be rolled out manually, but you can configure automation and trigger a code compilation after changes to the code, perhaps with some approvals from your project manager.

So if you want to configure a trigger for the automation, just set up the Triggers tab according to your needs, and that's all.

In the next post I'll continue explaining Azure Repos and Azure Pipelines, so you can see the tremendous machinery that accelerates continuous deployment for top-performing development companies.

Enjoy the journey to the cloud with me…see you then in the next post.

Work as a team in Covid times – Azure Boards

During an application modernization project, developers, project managers, and stakeholders are the pillars of creating a truly powerful app. This was already a challenge before Covid, and now even more so. Are your developers disconnected? Are your teams more silos than teams?

There is a very important component within Azure DevOps called Boards. It is part of the solution, as it lets you roll out Scrum or agile approaches quite easily to your developers and stakeholders.

Within your organization on Azure DevOps, click on “New project”. You can choose between a private project (requires authentication and is focused on a team) and a public one (open source development for the Linux community, for example).

So when I start a project, can I choose the methodology and strategy to follow up my project and foster collaboration?

Azure Boards supports several work item types: user stories, tasks, bugs, features, and epics. When you create a project, you can choose a basic process with just Epics (the goals you would like to achieve, let's say), Issues (the steps or milestones to follow), and Tasks (included per Issue, the list of actions you need to execute to get the issue done), or an Agile process, etc. You can then assign all these items to different people and correlate those efforts across several sprints.

So you can create work items to track what happens during your development. You can coordinate and motivate your team to resolve delays and problems, and you will be home and dry.

On one hand, you have all the developers working remotely in these tough times, using SSO, as Azure DevOps is integrated with your Azure Active Directory.

On the other hand, you can invite other employees or users as guests to access your projects if they are private. Keep in mind that you can even create public projects for open source software, as I mentioned previously.

Let's see how a project manager can track an application using a Scrum approach. In the Epic you establish the goals: some business requirements, or maybe a specific enhancement to the application. To achieve that, the team works on a Feature with a set of Tasks to be done, and all this effort is tracked on boards.

So in the case of an Agile project, you can use an approach like this one, where you have a goal or Epic (a “New Web Service Version”), some user stories for roles like project manager or developer, and obviously some issues that involve tasks to be done.

For example, create a new CI/CD process with a pipeline that deploys the releases to a Web App (with slots for staging, production, or development).

You can also see this process, including the issues or tasks associated with each sprint. You will have as many sprints as needed to achieve all the goals of the final application release. To see this, just check Sprints within Azure Boards. Keep in mind that you need to determine which steps, issues, or tasks should be done in each sprint.

Finally, pay attention to the timeline of your sprints, so the project manager can detect delays and help the team progress properly.

Adding Azure Boards to a tab within Teams, to foster collaboration among stakeholders, brings a lot of potential, as it makes access, project follow-up, and checking every milestone very flexible.

In Teams, you can add a new tab and choose Azure DevOps…

When selecting the app, you can choose the organization, project, and team.

Once you have selected everything, click Save.

Now as a project manager, you can stay on the same page with a few clicks.

In the next post we'll show and explain more about Azure Repos and Azure Pipelines, so you can see the tremendous machinery that accelerates continuous integration for top-performing development companies.

Enjoy the journey to the cloud with me…see you then in the next post.

Azure DevOps integrates it all in one

The current landscape is full of companies with a bit of a mess in their software lifecycle: different repositories, several version control approaches, open source not clearly identified in some cases, several programming languages and package managers, and even CI/CD strategies that change completely from one application to the next.

The result is delays in delivering releases, miscommunication within developer teams, unclear bug fixes, painful builds, sprints extended longer than expected, poor software quality with scarce testing, and, in general, risk to the security of the code.

Why Azure Devops?

Azure DevOps provides everything needed to cover repository integration, tools to review code quality, big players from the open source world like Jenkins, and a friendly approach to packages such as NuGet, npm, and Maven. Even operational maintenance can be handled with Microsoft solutions such as Azure Monitor (Insights is now a component), Azure Security Center, Azure Policy, etc. Or, if you prefer, you can use Nagios, Splunk, or Zabbix.

If your team works mainly with Eclipse, Jenkins, Selenium, SonarQube, or even Jira, you have full and flexible integration with Azure DevOps.

If your team works with Visual Studio and you have MSDN subscriptions, you can get Azure DevOps user subscriptions for free. Sounds good, doesn't it?

But, in a nutshell, what benefits can Azure DevOps bring to our company?

  • Timely Access to New Features

Every three weeks, DevOps users receive access to new features.

  • Remote cloud, accessible anywhere, anytime, with SSO

Users can access it anywhere, anytime, without a VPN, and with SSO and MFA: secure, but with the mobility and flexibility to work remotely.

  • No Upgrades to Worry About

Users need not worry about upgrading or patching the toolchain, because Azure DevOps is a SaaS product. Companies that run on a CI/CD model no longer need to slow things down for the sake of upgrading.

  • Reliability

Azure DevOps is backed by 24x7 support and a 99.9% SLA.

  • Flexibility

If your DevOps team doesn't want or need the full suite of services, they can acquire only the specific components needed to fulfil your expectations. It's even possible to integrate open source solutions if required, as we've mentioned, and other competitors as well.

  • It’s Platform-agnostic

Azure DevOps is designed to run on any platform (Linux, macOS, and Windows) and to build for any language (Android, C/C++, Node.js, Python, Java, PHP, Ruby, .NET, and iOS apps). Wow!

  • It’s Cloud-agnostic

Azure DevOps works with AWS and GCP.

In the next post we will show and explain the components of this solid and strong developer platform.

Enjoy the journey to the cloud with me…see you then in the next post.

Any company is a software company. Why does DevOps matter?

Any company needs software to support its processes, its daily work, and the systems that interact with its customers and partners.

Many companies don't know how to close the gap in their software needs: they have lots of repositories, sometimes more than one version control platform, eternal development cycles involving several programming languages, assorted build and test tools, and, increasing the risk, traditional waterfall phases to reach a final software product release.

In addition, in many scenarios there are silos developing separate software solutions for different areas of the same business, duplicating effort, not collaborating properly with other teams, and not empowering developers and stakeholders with a flexible, anywhere-anytime, distributed DevOps cloud approach.

The most important leaders in the market focused on DevOps, and also providing a CI/CD approach, are Atlassian Jira and Microsoft Azure DevOps.

There are several DevOps products on the market helping or supporting Agile or Lean methodologies in software development. Some focus only on collaboration and teamwork, providing user stories, backlogs, and work items to Scrum Masters or key developers. Others focus on version control, integrating Gitflow strategies, or improving testing.

But, to be honest, the ones ready to provide a CI/CD strategy efficiently with their own tools, or by integrating solutions such as Jenkins, CircleCI, or Octopus, are just these two market leaders, from my point of view.

Azure DevOps is an all-terrain vehicle. It can support Scrum or CMMI with its Azure Boards component, build packages with NuGet, npm, or Maven on the CI side with Azure Pipelines, and at the same time deliver the releases to web services or containers on the CD side if needed. It can provide test plans, or use open source tools such as Selenium or SonarQube to reinforce the quality and security of the code. As everybody knows, Microsoft's bet for source control and versioning is GitHub, and Git on Azure Repos.

Atlassian Jira can handle Agile and can be integrated with Azure Pipelines as well as with Atlassian's own CI/CD tool, Bamboo. For Git repository management you can use Bitbucket, and you can even use Jira Service Desk as the ITSM solution. It is a veteran in the market and a very solid approach.

So when does it make sense to use one or the other? It all depends on what you want to develop, which requirements must be taken into account, deadlines, dependencies, and whether your starting point is a legacy monolithic mono-repository or a cloud-first DevOps strategy.

Do you need a flexible cloud DevOps platform with powerful features for remote work, without losing security, and improving collaboration with partners and customers? Do you want SSO? Then the answer is Azure DevOps.

In the next post we will figure out which challenges we have to cope with in software development, quality, risks, and best-effort strategies, leveraging Azure DevOps to fix it all in one place.

Enjoy the journey to the cloud with me…see you then in the next post.