HOW TO PROVIDE GOVERNANCE ON AWS FROM AZURE? A HOLISTIC VIEW

Hybrid cloud is a big challenge for almost all companies out there. They need to integrate their on-premises workloads and cloud-native solutions with consistent governance, security posture and DevOps practices. Solutions may combine VMs, microservices, data analytics and ETL. But what happens when you want to use AWS as well as Azure, and you obviously need a single pane of glass to get a holistic view of your multicloud environment?

Are there technologies to solve such a mess? Let's try to laser-focus on the big pain points to cope with:

  • Your IT team has solid knowledge of Azure but very limited experience with AWS.
  • You want to govern services and IT solutions as a whole, even if workloads are spread across both clouds.
  • AWS accounts are isolated, with no landing zone, as they were inherited from previous mergers or company acquisitions.

Here you can see a lab where I was testing VMs in an AWS account with visibility in my Azure Arc console.

Tagging and cost control: Within Azure Arc you can edit tags on EC2 VMs and build a single perspective of an IT service, even if its VMs live in a multicloud environment. Then, from your favourite cost management console, Azure Cost Management, you can connect to AWS and speed up your multicloud FinOps strategy.
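To illustrate, here is a minimal sketch that lists Arc-connected machines and their tags through an Azure Resource Graph query, the same inventory the portal shows. It assumes the azure-identity and azure-mgmt-resourcegraph packages; the subscription ID is a placeholder.

```python
# Minimal sketch: list Azure Arc-connected machines and their tags through
# Azure Resource Graph. Requires azure-identity and azure-mgmt-resourcegraph;
# the subscription ID is a placeholder.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resourcegraph import ResourceGraphClient
from azure.mgmt.resourcegraph.models import QueryRequest

client = ResourceGraphClient(DefaultAzureCredential())

query = QueryRequest(
    subscriptions=["<subscription-id>"],
    query=(
        "resources "
        "| where type == 'microsoft.hybridcompute/machines' "
        "| project name, location, tags"
    ),
)
for machine in client.resources(query).data:
    print(machine["name"], machine["location"], machine.get("tags"))
```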

Standardization for policies and governance: Linux or Windows VMs on EC2 can be managed exactly the same way as VMs on Azure or on-premises. Your Azure Policies will address the issues regarding permissions, compliance, authorization to resources, and so on. Best of all, it doesn't matter whether they run on Azure or AWS.
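As a sketch of what that looks like in practice, the snippet below assigns a policy definition at a resource group scope using the Python management SDK; the definition GUID, names and scope are placeholders, not a specific built-in policy.

```python
# Minimal sketch: assign a policy definition at a resource group scope so it
# also governs the Arc-connected machines there. Requires azure-identity and
# azure-mgmt-resource; the GUID, scope and names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import PolicyClient
from azure.mgmt.resource.policy.models import PolicyAssignment

subscription_id = "<subscription-id>"
policy_client = PolicyClient(DefaultAzureCredential(), subscription_id)

scope = f"/subscriptions/{subscription_id}/resourceGroups/<rg-with-arc-machines>"
assignment = policy_client.policy_assignments.create(
    scope,
    "audit-required-tags",  # assignment name, our choice
    PolicyAssignment(
        policy_definition_id=(
            "/providers/Microsoft.Authorization/policyDefinitions/<definition-guid>"
        ),
        display_name="Audit required tags on multicloud VMs",
    ),
)
print(assignment.id)
```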

Working with Microsoft Defender anywhere: Azure Arc provides an agent to deploy on your VMs so you can afterwards set up specific initiatives to activate and roll out Microsoft Defender for Endpoint. On top of that, you will receive all the antimalware alarms and security tracking in the same console.

Another approach is to register EKS clusters in Azure Arc so you can govern AWS Kubernetes clusters from the Azure Arc console. Quite interesting for those who have strong knowledge of Azure but want to deal with AWS as well.

I hope you enjoy this post. See you in the cloud.

CLOUDMANJI!! – WHEN YOU WORK IN FINANCE AND ALL AROUND YOU IS OPEX

The drums are beating, can you hear them? But I don't know where the sound is coming from. It's thunderous, ringing in my ears… boom, boom… pause… boom, boom. Like elephant heartbeats…

It is Cloudmanji! A game that I don't like to play as a financial manager or as a specialist in the purchasing department, although I have been invited without wanting to attend that appointment.

OPEX is all around your work. It's a new jungle where CAPEX is coming off the board. You are buying software subscriptions, Software as a Service (software you consume without installing or maintaining it), cloud infrastructure as PAYG (Pay As You Go), and software products from the marketplace of your favourite hyperscaler. Moreover, others, likely someone in the IT department, are buying those software solutions, and you just receive invoices with no explanations at all.

Therefore, there are “silos” in your company where not everybody is aligned on cloud spending, or maybe the cloud adoption framework was rolled out focusing on some “personas” and business areas but not on all the stakeholders that should be involved in cloud projects.

First beat of drums

Cost spike, the top one on the list. This kind of scenario usually comes out of “Data Analytics” or “Big Data” labs or proofs of concept aimed at analysing specific information in order to make quick business decisions. The sponsor could be HR, Marketing or the PMO directly. The CIO is aligned with those teams, but can't figure out what comes next.

Sometimes it happens because the junior consultant responsible for deploying the prototype infrastructure on AWS, Oracle Cloud or Azure just follows the default configuration, which in many cases means choosing a “Premium” tier for storage or databases. On top of that, there was no governance at all regarding what the IT team can and cannot do.

The outcome is an unexpected invoice to Finance, which means a stain on the budget for a big fish and a cash burnout for a small company or a startup.

The CFO wants heads to roll but doesn't know where to start. Who was guilty, if anyone? Who did things wrong? Where do you start, to fire up your team and see the cloud as an ally?

Second beat of drums

Number two on the list: an orphaned, shared cost for the company, expensive and necessary, a cost that nobody wants assigned to their cost center code.

How do you split this kind of cost across several departments or countries? Let's say you have several factories: four in France and, even worse, two more in Spain, one in Portugal and one in the UK. Due to Brexit and the currencies, things get extraordinarily complex, as you need to invoice in euros and pounds. There are also withholding taxes between Europe and the UK.

You started with an application modernization strategy and migrated legacy applications to Azure. Application refactoring (changing code and decoupling the architecture into small pieces called microservices) improved deployment and on-demand scaling for all those factories, providing quick and effective support for assembly line modifications.

All the factories need that cloud infrastructure; it's critical and represents a shared cost for all of them. It's a complex cost that you can't estimate properly: production, preproduction and development environments deployed in a multi-region approach. The UK says France should pay the bill since it is the headquarters. Spain and Portugal say they can't pay the bill because their markets are smaller and less profitable. France says the cost should be split into euros (covering France, Spain and Portugal) and pounds, with the UK paying for its factory in pounds and assuming everything related to Brexit, such as withholding taxes.

How do you allocate cloud costs for such infrastructure? How do you estimate the appropriate average consumption for each factory? How do you get Finance, IT and the Board on the same page to work together on a solution?
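One pragmatic starting point is a simple showback model. The sketch below, with entirely hypothetical figures, weights and FX rate, splits a shared monthly bill proportionally to each factory's consumption and invoices the UK in pounds:

```python
# Minimal sketch of a proportional showback model: split a shared monthly
# bill across factories by a usage weight (e.g. metered consumption or
# headcount). All figures and the FX rate below are hypothetical.
shared_bill_eur = 42_000.0
usage_weight = {  # hypothetical relative consumption per factory group
    "France": 4.0, "Spain": 2.0, "Portugal": 1.0, "UK": 1.5,
}
eur_to_gbp = 0.86  # hypothetical FX rate at invoicing time

total = sum(usage_weight.values())
for factory, weight in usage_weight.items():
    share_eur = shared_bill_eur * weight / total
    if factory == "UK":
        print(f"{factory}: £{share_eur * eur_to_gbp:,.2f} (invoiced in pounds)")
    else:
        print(f"{factory}: €{share_eur:,.2f}")
```

The hard part, of course, is agreeing on the usage weights; that is exactly the Finance-IT-Board conversation above.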

Third beat of drums

A company jumped into the cloud. They migrated three on-premises data centers. No cloud adoption framework was put on the table, which ended up in a bunch of solutions with no adequate scaling, security issues, and no governance or cost allocation at all. A chaos or nightmare that each new CIO has to cope with.

Not to mention, the company has workloads in two different hyperscalers. For instance, GCP and Azure.

After four CIOs and two CISOs have gone through the company, who is in charge of this scenario? Finance says the situation is terrifying and horrible: no clear budgets, no budget alerts, no cost allocation, and so on. How do you deal with OPEX?

To sum up, these scenarios are covered by FinOps. This is a methodology where you work with the cloud engineers, the CIO, your DevOps team, your purchasing department, Finance, the PMO and some skilled people called FinOps specialists.

In the next Cloudmanji episode, I'll explain what this approach is all about and how you can leverage the methodology to deal with all these situations.

Dear CFOs, purchasing managers and IT folks, enjoy the journey to the cloud with me… see you in the next post.

MARKETPLACE: AN EXCLUSIVE AZURE & AWS SHOPPING CENTER – PART II

When you take a look at AWS, you can sense the origin of their public cloud strategy and why you can buy third-party technology solutions such as Palo Alto firewalls, Red Hat or SUSE Linux VMs with lots of applications, or even products from Cisco and other network providers. As you know, marketplaces are nothing more than platforms that enable transactions between customers and third-party sellers.

Jim Collins coined the term “flywheel effect” and explained the concept to Jeff Bezos, who saw an incredible opportunity where other people would have seen just a methodology with no chance of survival.

The idea is simple. Create a virtuous cycle that increases the number of sellers offering their products and services, which in turn increases the range of offers and price points, making it more attractive for users to find exactly what they want at the right price.

This improves traffic to the platform and drives more sellers and customers to buy there. Moreover, prices for users go down, and they get used to visiting your platform or marketplace from time to time.

From Amazon to the AWS (Amazon Web Services) Marketplace

AWS Marketplace was the first cloud marketplace among the hyperscalers. AWS started the journey of selling third-party IT products and services following in the footsteps of the Amazon platform.

Customers can buy thousands of ISV products and services to deploy with agility, even just for testing, or to find out whether a specific piece of software makes sense and fills a gap in their company.

There is flexibility in prices, offer terms and conditions. There are pricing plans with an annual approach for 12-month subscriptions, or even for just one month if you need, for example, to roll out a POC. There are others, such as usage pricing, where customers just pay for what they use in a PAYG approach, or pricing models for specific product delivery methods such as containers or ML.

It is very flexible, as you can even buy professional services products, which are in general packaged professional services offerings. All the offers can be tailored to your target company if you are an AWS partner, and as a customer you can access public or even private offers to get better discounts or improve some aspects of the deal with the ISV or consultancy you work with.

Lots of solutions are waiting for you…

You can make plans for some offer types publicly available or available only to a specific (private) audience, just like the Azure Marketplace we explained in the previous post, which follows the same marketplace strategy as AWS.

In summary, if you are figuring out what value an ISV can bring to your business in the cloud, and you want to leverage AWS partner professional services in a specific area such as cybersecurity or SAP, the marketplace is a chance you should not forget.

Enjoy the journey to the cloud with me…see you then in the next post.

MARKETPLACE: AN EXCLUSIVE AZURE & AWS SHOPPING CENTER

When you, as a user, access your Azure portal or AWS console, you have the option to buy thousands of products or solutions preconfigured for you. You don't need to worry about the licences or the IT capabilities to design or deploy a specific solution, as they are all built around customer needs by several AWS or Microsoft partners and ISVs (independent software vendors).

We will speak about the AWS Marketplace later, in a new post. Just to point out: it was launched in 2012 to accommodate and foster the growth of services from third-party providers that have built their own solutions on top of the Amazon Web Services platform, such as ISVs, SIs (system integrators) and resellers, so customers could buy exactly what they needed, when they needed it, adding tremendous flexibility to grow their cloud solutions aligned with the business.

In the case of Azure, the marketplace was launched in 2014. It is a go-to-market starting point for IT software applications and services built by industry-leading technology companies. The commercial marketplace is available in 141 regions, on a per-plan basis.

What are plans and how do you use them (as a partner)?

Microsoft partners can publish interesting solutions that bundle licenses and services together within the Azure Marketplace. On the one hand, you don't need to acquire licenses separately, as their cost is prorated within the price. On the other hand, you get access to expertise without hiring new employees for your IT team.

A plan defines an offer's scope and limits, and the associated pricing when applicable. For example, depending on the offer type, you can select regional markets and choose whether a plan is visible to the public or only to a private audience. Some offer types support a subscription scope, some support consumption-based pricing, and some let a customer purchase the offer with a license (BYOL) they have bought directly from the publisher.

The offer types that support plans are: Azure managed application, Azure solution template, Azure container (BYOL supported), IoT Edge module, Managed service (BYOL supported), Software as a service, and Azure virtual machine. Pricing options, plans without pricing, and the private audience option vary by offer type.
  • Markets: Every plan must be available in at least one market. You have the option to select only “Tax Remitted” countries, in which Microsoft remits sales and use tax on your behalf.
  • Pricing: Pricing models only apply to plans for Azure managed application, SaaS, and Azure virtual machine offers. An offer can have only one pricing model. For example, a SaaS offer cannot have one plan that’s flat rate and another plan that’s per user (see the sketch after this list).
  • Plan visibility: Depending on the offer type, you can define a private audience or hide the offer or plan from Azure Marketplace.
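As a toy illustration of the pricing rule above (one pricing model per offer), here is a small consistency check; the plan names and model labels are hypothetical, and this is of course not Partner Center's real validation logic:

```python
# Toy check for the "one pricing model per offer" rule. Plan names and model
# labels are hypothetical; this is not Partner Center's actual validation.
def validate_offer_pricing(plans: dict) -> None:
    """plans maps plan name -> pricing model ('flat-rate', 'per-user', ...)."""
    models = set(plans.values())
    if len(models) > 1:
        raise ValueError(f"An offer allows one pricing model, found {sorted(models)}")

validate_offer_pricing({"starter": "flat-rate", "enterprise": "flat-rate"})  # OK
# validate_offer_pricing({"starter": "flat-rate", "team": "per-user"})  # raises
```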

How do you publish, and what kind of visibility can you provide (as a partner)?

You can make plans for some offer types publicly available or available to only a specific (private) audience. Offers with private plans will be published to the Azure portal.

You can configure a single offer type in different ways to enable different publishing options, listing options, provisioning, or pricing. The publishing option and configuration of the offer type also align with the offer's eligibility and technical requirements.

Be sure to review the online store and offer type eligibility requirements and the technical publishing requirements before creating your offer.

To publish your offers to Azure Marketplace, you need to have a commercial marketplace account in Partner Center and ensure your account is enrolled in the commercial marketplace program.

Also, if your offer is published with a private plan, you can update the audience or choose to make the plan available to everyone. After a plan is published as visible to everyone, it must remain visible to everyone and cannot be configured as a private plan again.

Finally, as a partner, you can enable a free trial on plans for transactable Azure virtual machine and SaaS offers.

For example, Azure virtual machine plans allow for 1, 3, or 6-month free trials. When a customer selects a free trial, Microsoft collects their billing information but doesn't start billing until the trial is converted to a paid subscription.

What are your benefits when using Azure Marketplace (as a user)?

The marketplace brings flexibility to customers, as they can immediately buy any kind of plan-based offer covering products from thousands of ISVs, without losing time dealing with each vendor or understanding the support model or licensing options in detail.

In the Azure portal, select + Create a resource or search for “marketplace”. Then browse the categories on the left side or use the search bar, which includes a filter function, and choose what you need.

Likewise, there are lots of consultancy services provided by several Microsoft partners, some of them as a free trial, so you can test the quality of their professional services and see their approach to fixing your pain points.

Enjoy the journey to the cloud with me…see you then in the next post.

WELL ARCHITECTED FRAMEWORK FROM AWS TO AZURE, FACE TO FACE (II).

As we said in a previous post, the AWS Well-Architected Framework was officially launched in 2015. The Microsoft Azure WAF took more time, as they started later, around 2020, with their own WAF methodology. In any case, it is a collection of best practices, guides and blueprints, just like those of its competitors, Google (which calls it “4 key architecture principles/pillars” but covers similar points) and AWS, based on experience and feedback from several stakeholders.

To summarize, the Azure and AWS WAFs, or the 4 key Google architecture principles/pillars, help cloud architects build secure, high-performing, resilient and efficient infrastructure for their business applications and workloads. Moreover, they provide a better UX (user experience) for employees and users.

Azure Approach...

From the Microsoft point of view, there are also 5 clear pillars, just as for AWS:

  • Cost Optimization – Focused on managing costs and reducing them as much as possible according to the scenario.
  • Operational Excellence – Focused on the operational processes that keep a system running in production.
  • Performance Efficiency – Focused on an IT solution's ability to perform and adapt to changes in load in the cloud.
  • Reliability – Focused on a system's ability to recover from failures and continue to function in the cloud.
  • Security – Protecting applications and data from threats, keeping in mind the shared responsibility model, where the customer and Microsoft, or some partners, work together on a given IT solution.

Did you notice any difference compared to AWS below? Well, Microsoft points out the same pillars but wraps some extra tooling around them to make its offering more powerful. That means reference architectures, Azure Advisor as a starting point, as well as the CCO Dashboard, Cloudockit, AzGovViz, specific partner offerings, and WAF Review reporting (this is no different from AWS).
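Azure Advisor, for instance, is easy to query programmatically. Here is a minimal sketch, assuming the azure-identity and azure-mgmt-advisor packages and a placeholder subscription ID, that counts recommendations per category:

```python
# Minimal sketch: pull Azure Advisor recommendations (one of the WAF entry
# points mentioned above) grouped by category. Requires azure-identity and
# azure-mgmt-advisor; the subscription ID is a placeholder.
from collections import Counter
from azure.identity import DefaultAzureCredential
from azure.mgmt.advisor import AdvisorManagementClient

client = AdvisorManagementClient(DefaultAzureCredential(), "<subscription-id>")

by_category = Counter()
for rec in client.recommendations.list():
    by_category[rec.category] += 1  # e.g. Cost, Security, HighAvailability

for category, count in by_category.items():
    print(f"{category}: {count} recommendations")
```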

The Azure approach to the Well-Architected Framework changes some of the steps compared to AWS. AWS stays more at the HLD (high-level design) level, drilling down later, while Microsoft tries to gather more detail up front in order to sort out priorities, responsibilities and tools, addressing the right technologies to the right issues sooner.

It seems that this workshop process runs smoothly and is easy to use. The truth is, you will struggle with some workloads or specific IT components for sure. But what are the most important Microsoft Azure architecture “quality inhibitors” to face?

  • Cost Optimization – Underused or orphaned resources.
  • Operational Excellence – No automation, or automation in silos.
  • Performance Efficiency – No design for scaling.
  • Reliability – No support for disaster recovery.
  • Security – No security threat detection mechanism.
As you can see, each hyperscaler has its own vision. But they are similar in the areas to evaluate and to fix when something is not working properly.

In the next post, we will cover in more depth the similarities and differences between the two big cloud titans, Azure and AWS. In the meantime, the ball is in your court. Read, read and read… I'm sure you will… 🙂

Enjoy the journey to the cloud with me…see you then in the next post.

WELL ARCHITECTED FRAMEWORK FROM AWS TO AZURE, FACE TO FACE (I).

After some years migrating workloads from on-premises to the cloud, and some years developing cloud-first apps, architectures, technologies and hyperscalers have been expanding their value and support for millions of businesses and companies… The Well-Architected Framework is nothing more than an approach to optimize all those IT solutions from several perspectives.

AWS Approach...

In 2012 AWS created the “Well-Architected” initiative to share with their customers and partners best practices for building in the cloud, and started publishing them in 2015. Now this set of principles is a reality, expanded to many cloud scenarios.

Let's say we have some fairly complex workloads and IT solutions in cloud providers such as AWS or Microsoft Azure. On top of that, we are not sure the current scenario is designed according to best practices in terms of reliability, as the IT service shows small delays in responses to users from time to time. Moreover, when you browse your AWS Cost Explorer console, this IT service shows high consumption.

What can we do? How can we shed some light on this? Well, AWS provides a set of best practices, principles and strategies to reduce risk and impact in the areas I've mentioned before, as well as others. Those areas, or rather pillars, are five:

  • Operational Excellence: The ability to support development and run workloads effectively, gain insight into their operations, and to continuously improve supporting processes and procedures to deliver business value.
  • Security: Focused on protecting data, systems, and assets, taking advantage of cloud technologies to improve your security.
  • Reliability: Enforces the ability of a workload to perform its intended function correctly and consistently when it’s expected to.
  • Performance Efficiency: The ability to use computing resources efficiently to meet system requirements, and to maintain that efficiency as demand changes and technologies evolve.
  • Cost Optimization: The ability to run systems to deliver business value at the lowest price point.

The AWS approach to the Well-Architected Framework provides great value to improve a specific workload or a set of interdependent workloads. To leverage the five pillars' potential, the Well-Architected Tool helps you review the state of your workloads and compares them to the latest AWS architectural best practices in those areas.
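If you want to pull that review data programmatically, here is a minimal sketch with boto3, assuming AWS credentials are already configured in your environment; it lists the workloads registered in the Well-Architected Tool together with their risk counts:

```python
# Minimal sketch: list the workloads registered in the AWS Well-Architected
# Tool and their risk counts. Assumes AWS credentials are already configured.
import boto3

wa = boto3.client("wellarchitected")

for workload in wa.list_workloads()["WorkloadSummaries"]:
    print(
        workload["WorkloadName"],
        "- risks:",
        workload.get("RiskCounts", {}),
    )
```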

And if you want to be more specific and take a deep dive into a technology or a disruptive solution, to identify a clear impact or reduce risk for your workloads, AWS has offered Well-Architected Lenses since 2017.

Some examples of Lenses which, from my point of view, bring value are:

Management and Governance Lens – AWS Well-Architected

Hybrid Networking Lens – AWS Well-Architected

SAP Lens – AWS Well-Architected

Financial Services Industry Lens – AWS Well-Architected

Serverless Applications Lens – AWS Well-Architected

In the second part of this post, we will explain the Azure Well-Architected Framework. I hope it's useful to you and makes your day!

Enjoy the journey to the cloud with me…see you then in the next post.

Azure Lighthouse, the secret sauce for any Managed Cloud Solution Provider

Managed Cloud Solution Providers (MCSPs) are third-party companies that help your business expand, providing muscle and expertise in two ways:

  1. Skills matrix to support you – They have a bunch of experts in several disciplines to go through your IT service challenges and digital transformation; they are your mentor for understanding your risk and how aligned your investment in cloud solutions is with your business. They have cloud architect and cloud strategist personas on their team to support your journey to the cloud, mostly in hybrid scenarios.
  2. Tools to support you – They have the right tools to support those business needs and to lift your current digital state to a new version of your company, achieving better efficiency in your daily processes, simplifying your employees' work, even improving their quality of life and, for sure, optimizing the time to react to your competitors with innovation. Just to remark: tools means not just third-party tools but also the native cloud provider tools you have available when consuming cloud services.

On top of those key points, the most important operations supporting IT services in the cloud are based on specific daily tasks: monitoring, backup, process automation and security are among them. Moreover, MCSPs need to be effective at solving issues in order to provide the right quality to their customers, something that is called “Operational Excellence” within the Well-Architected Framework. With the massive expansion of cloud-first IT services, and migrations to the cloud of a huge amount of IT infrastructure to support data analytics, web services, disaster recovery and legacy applications on the road to modernization, we need the right tools to cover some clear objectives. Azure Lighthouse has tremendous maturity to solve many of the challenges any MCSP needs to cope with:

  • Scale as soon as you need to grow. Here I mean scaling horizontally, so even when you have to assist lots of customers you can cover their needs with granularity and focus on their specific roadmap to the cloud.
  • Segment your own cloud infrastructure from your customers' infrastructure, so any security issue or downtime of the IT services you run, internally or for others, is contained and can affect only one customer or group of customers.
  • Grant permissions to specific cloud resources and, depending on your customers' projects and the skills involved, delegate access to other partners or freelancers; in short, collaborate on a new project with several profiles.
  • Get a whole picture of the IT services you are providing to your customers across several Azure contracts and tenants in terms of security posture and performance or health alerts; triage misconfigured items, provide the right Azure governance, and so on.

Azure Lighthouse has the potential and flexibility to add monitoring and traceability for all your customers across several tenants: you get a holistic view, delegate specific permissions with a strong security level for whatever period you want over whole subscriptions or resource groups, integrate everything in a hybrid strategy together with Azure Arc, and furthermore integrate security posture and a SIEM across tenants as well. Azure offers top native cloud tools to support your investments in almost any technology trend.

Let's go deeper into some nice strategies for any MCSP, so they don't struggle trying to translate what they are doing right now on-premises to Azure.

Access. A secure authentication and authorization strategy is mandatory. That's why Microsoft offers a least-privilege access approach with Azure Active Directory Privileged Identity Management (Azure AD PIM), to further harden access to customer tenants with just a user or a security group.

Monitoring. Absolutely key for any MCSP; it is the core of your support to your customers. In addition, you have to use the right ITSM (IT Service Management) software to stay aligned and strive in the right direction to assess and resolve customer issues from high priority to low priority.

Security posture. Do you know how many misconfigurations and vulnerabilities exist in your customers' Azure clouds? With Azure Security Center you can build the right security posture and know which security controls are affected or can be aligned to your regulatory compliance. You can leverage the Secure Score to see your customers' security posture in a single pane of glass. Not easy peasy, but it helps a lot.
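A quick way to get that single pane of glass is Azure Resource Graph, which also works across subscriptions delegated through Lighthouse. A minimal sketch, assuming azure-identity and azure-mgmt-resourcegraph and placeholder subscription IDs:

```python
# Minimal sketch: read the Secure Score of each (possibly delegated)
# subscription through Azure Resource Graph. Requires azure-identity and
# azure-mgmt-resourcegraph; subscription IDs are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resourcegraph import ResourceGraphClient
from azure.mgmt.resourcegraph.models import QueryRequest

client = ResourceGraphClient(DefaultAzureCredential())

query = QueryRequest(
    subscriptions=["<customer-sub-1>", "<customer-sub-2>"],
    query=(
        "securityresources "
        "| where type == 'microsoft.security/securescores' "
        "| project subscriptionId, score = properties.score.current"
    ),
)
for row in client.resources(query).data:
    print(row["subscriptionId"], row["score"])
```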

Incident hunting. Maybe you know it, maybe you don't: Azure Sentinel, the Microsoft native SIEM, can help consolidate your security threats and deep-dive into the root cause of a security compromise across several tenants. https://techcommunity.microsoft.com/t5/azure-sentinel/using-azure-lighthouse-and-azure-sentinel-to-investigate-attacks/ba-p/1043899

It's a powerful tool to track logs, see layer by layer what is happening, and determine how to step up suitable hardening for your technologies.

Hybrid scenarios. Azure Arc can be integrated with Azure Lighthouse as well, bringing a great benefit to that holistic overview I mentioned before. The main target in this case is to provide the right governance to your customers even if they have private clouds or on-premises infrastructure. Therefore, it's an exciting approach for companies that still have a lot of legacy stuff to migrate over the years but want to explore the benefits of a public cloud such as Azure.

To sum up: depending on your cloud maturity level, there are key native tools to improve your support, on your own or with the help of an MCSP. Azure, together with AWS, is one of the most important providers offering this level of tooling nowadays.


Enjoy the journey to the cloud with me…see you soon.

7 Rs – Seven roads to take the right decision to move to the cloud

The number of AWS (Amazon), Azure (Microsoft) and GCP (Google) hyperscale data centers has been increasing over the last few years in many regions, supported by millions in investment in submarine cables to reduce latency. Southern Europe is not an exception; just take a look at Italy, Spain and France to realize what is happening.

Public cloud providers know many customers will massively move thousands of services in the coming years. The process started just a few years ago, but it is accelerating even more due to the pandemic and the need to provide remote services, to analyse data faster and more efficiently, the big expansion of sensors measuring almost everything in our lives, and a global market where innovation beats competitors on any continent.

There are 7 Rs for making the right decision, so CIOs and CTOs know what makes sense to move to the cloud or not, what is a priority and, moreover, the impact and effort required to transform their business.

AWS perspective to move IT Services to the cloud

Moving to the cloud with a clear perspective on outcomes and goals will bring value to your customers if you carefully evaluate each of your IT services, so you can make decisions aligned with your business. Some applications could be retired, others will enter a modernization cycle, and others will simply be resized to reduce cost and improve resilience.

Let´s explain our 7 Rs from simple to complex scenarios:

Retire. Some applications are not used any more; just a couple of users need to run some queries from time to time. Hence, maybe it's better to move that old data to a data warehouse and retire the old application.

Retain. It literally means “do nothing at all”. Maybe this application uses an API or backend from an on-premises solution with compliance limitations. Maybe it was recently upgraded and you want to amortize the investment for a while.

Repurchase. Here you have the opportunity to change the IT solution. Let's say you are not happy with your on-premises firewall and you think it's better to change to a different provider with better firewall support for AWS or Azure, or even to move some applications from IaaS to SaaS.

Relocate. For example, relocate the ESX hypervisor hosting your database and web services to VMware Cloud on AWS / Azure / GCP, or move your legacy Citrix server with Windows 2008 R2 to a dedicated host on AWS.

Rehost. It means lift and shift: move some VMs with clear dependencies between them to the cloud, just to get better backup and cheaper replication across several regions, and resize their compute consumption to reduce cost.

Replatform. Lift and optimize your application somehow. For instance, move your web services from a farm of VMs on VMware with an HLB (hardware load balancer) on-premises to an external load balancing service on Azure with some App Services, where you can port your business logic and migrate your PHP or Java application. You then no longer have to worry about operating system patching or security at the Windows Server level, and you can even eliminate the Windows operating system license.

Refactor. The most complex scenario. You have a big monolithic database with lots of applications reading and writing that data heavily. You need to move the applications and the monolithic database and modify the architecture, taking full advantage of cloud-native features to improve performance and scalability as well as to reduce risk. Any failure in one component provokes a general failure, so you need to decouple your components and move to microservices sooner or later.
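To make the decision flow concrete, here is a toy helper that maps an application's attributes to one of the 7 Rs above. The attributes and the rule order are hypothetical simplifications; a real assessment weighs many more factors:

```python
# Toy decision helper for the 7 Rs. Attributes and rule order are
# hypothetical simplifications of the criteria described above.
def suggest_r(app: dict) -> str:
    if not app.get("still_used", True):
        return "Retire"
    if app.get("compliance_blocked") or app.get("recently_upgraded"):
        return "Retain"
    if app.get("better_saas_alternative"):
        return "Repurchase"
    if app.get("hypervisor_locked"):
        return "Relocate"
    if app.get("monolith_under_heavy_load"):
        return "Refactor"
    if app.get("can_use_managed_services"):
        return "Replatform"
    return "Rehost"  # plain lift and shift as the default path

print(suggest_r({"still_used": True, "can_use_managed_services": True}))
# -> Replatform
```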

I hope you now better understand these strategies for moving your applications to the cloud, so you can be laser-focused on your needs and achieve the best approach for each of them.

To sum up: use the right tools to evaluate your on-premises applications and IT services and, based on the 7 Rs, choose the suitable journey to the cloud for them.

Don't forget to leverage all the potential of the CAF (Cloud Adoption Framework), which I've mentioned before in my blog (https://wordpress.com/post/cloudvisioneers.com/287), together with the 7 Rs strategy.


Enjoy the journey to the cloud with me…see you soon.

Containerization to become the RockStar on the stage

The CNCF (Cloud Native Computing Foundation) couldn't be clearer in their 2020 survey report:

The use of containers in production has increased to 92%, up from 84% last year, and up 300% from our first survey in 2016. Moreover, Kubernetes use in production has increased to 83%, up from 78% last year.

Related to the usage of cloud native tools, there are also some clear tendencies:
• 82% of respondents use CI/CD pipelines in production.
• 30% of respondents use serverless technologies in production.
• 27% of respondents use a service mesh in production, a 50% increase over last year.
• 55% of respondents use stateful applications in containers in production.

What happens when someone adopts containers just for testing in their company? In less than 2 years, containers are adopted in pre-production and production as well.

Why is containerization so widespread?

Here are some facts I've figured out.

DevOps friendly – There are some reasons, clear as water. Almost all the big companies in the enterprise segment already have a DevOps CI/CD strategy deployed, so they've realised that integrating builds and delivery versions with containers is quite agile and effective for comparing the latest software versions with their libraries, as the runtime can be isolated easily and doesn't depend on an operating system. To summarize, you can quickly have several pods with containers ready to test two or three versions of your products with their libraries and plugins, package managers or other artifacts, depending on the version, and test features, UX, bugs or just performance. All aligned with your preferred repository solution: Bitbucket, Git, GitHub, etc.

Multicloud – Another quite solid fact is that Kubernetes runs on any cloud, private or public, and you can orchestrate clusters with nodes wherever you want, without limitations on storage, compute or locations. You even have a great number of container orchestration tools at your disposal, not just Kubernetes but also Docker Swarm. As a side note, Docker as a simple container runtime was a tendency in the RightScale 2019 survey. Now, and that's how technology changes from one day to the next, Docker as an underlying runtime is being deprecated in favor of runtimes that use the Container Runtime Interface (CRI) created for Kubernetes. Anyway, Docker is still a useful tool for building containers.
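That portability is easy to see from code: the same Kubernetes API serves any cluster, so you can inventory workloads across clouds just by switching kubeconfig contexts. A minimal sketch, assuming the kubernetes Python package and hypothetical context names:

```python
# Minimal sketch: the same Kubernetes API works against any cluster, so you
# can count pods across clouds by switching kubeconfig contexts. Context
# names are hypothetical; assumes a kubeconfig that defines them.
from kubernetes import client, config

for context in ["aks-production", "eks-production"]:  # hypothetical contexts
    config.load_kube_config(context=context)
    v1 = client.CoreV1Api()
    pods = v1.list_pod_for_all_namespaces()
    print(f"{context}: {len(pods.items)} pods running")
```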

Cost savings – You can roll out microservices on demand, without investing a euro in hardware, if you want a pure cloud solution. Just create your pods or simple containers and kill them whenever you want. Pay as you go, pure OPEX. That means reducing CAPEX on hardware and licenses, and forgetting about amortization.

Remove your legacy applications at your own pace – On one hand, big companies want to reduce legacy applications, as they need to eliminate monolithic applications, which tend to be very critical, running old software versions with dependencies on hardware and licences, and with poor performance and scalability. On the other hand, they are more committed than ever to the “cloud first” principle for new IT services, because they need to be global, reduce cost and improve resiliency, and many CIOs know that public cloud brings those advantages from scratch.

Security – Last but not least. Containerization reduces the exposed surface of your applications, removes whole classes of operating system bugs, and lets you take control of known library vulnerabilities with your software quality team and your CISO. Networking is also an area where you can watch out for the bad guys, as traffic flows in and out of the containers and you can configure with granularity what is allowed and what is not. Finally, you can monitor the whole microservices solution with open source tools, cloud providers' integrated tools or more veteran third-party solutions.

In the next post we will see differences and similarities between AKS and EKS.

Enjoy the journey to the cloud with me…see you then in the next post.

Azure Synapse: a new kid on the block. Empower your company's Big Data and analytics

Some years ago, an investment in data analytics was quite expensive in terms of hardware, networking, knowledge and skills (usually external to the organization) and, obviously, data center facilities. Nowadays you can enjoy cloud-native data analytics tools that can be deployed in minutes in any region of the world. These cutting-edge technologies are evolving to work better together, just as an orchestra evolves when the musicians and the conductor get to know each other better and can then give a splendid performance in the concert. The same happens in the cloud: the maturity of the native tools lets you decouple components so that they run and scale independently.

But why is Big Data on-premises headed for extinction? Well, it is a matter of being cost-effective in the medium term. There are some factors that have a great impact on CIOs and CFOs, changing their minds:

Big Data on-premises is rigid and inelastic, as the capacity planning done by the architects to build those solutions is based on peaks and needs to account for worst-case performance. It cannot scale on demand, and if you need more resources you have to wait until they are available, sometimes weeks. On the other hand, you carry technical debt if you underutilize your Big Data infrastructure.

Big Data and data analytics platforms on-premises require a lot of in-house skills and knowledge, from storage to networking, from data engineering to data science. They are complex to maintain and upgrade, which makes them prone to failures and low productivity.

Data and AI/ML live in separate worlds in an on-premises infrastructure: two silos that you need to interconnect. Something that doesn't happen in the cloud.


Move to the next level. Azure Synapse

Azure Synapse is a whole orchestra prepared to give a splendid performance in the concert. It is the evolution of Azure SQL Data Warehouse, as it joins enterprise data warehousing with Big Data analytics.

It unifies data ingestion, preparation and transformation, so companies can combine and serve enterprise data on demand for BI and AI/ML. It supports two types of analytics runtimes, SQL-based and Spark-based, that can process data in a batch, streaming or interactive manner. For data scientists it is great because it supports a number of languages typically used by analytics workloads, like SQL, Python, .NET, Java, Scala and R. And you don't have to worry about scaling: you have virtually unlimited scale to support analytics workloads.
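On the Spark side, the experience is plain PySpark. The sketch below builds a local session purely for illustration (inside a Synapse notebook a preconfigured spark session is already provided); the storage path and column names are hypothetical:

```python
# Minimal sketch of the Spark runtime side. In a Synapse notebook a 'spark'
# session already exists; we build one locally here for illustration.
# The abfss path and column names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("synapse-style-demo").getOrCreate()

orders = spark.read.parquet("abfss://data@<account>.dfs.core.windows.net/orders/")
daily = (
    orders.groupBy("order_date")
    .sum("amount")
    .withColumnRenamed("sum(amount)", "daily_revenue")
)
daily.show(10)
```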

Deploy Azure Synapse in minutes – Using Azure quickstart templates, it is possible to deploy your data analytics platform in minutes: choose 201-sql-data-warehouse-transparent-encryption-create, synchronize it with your repo on Azure DevOps, and start configuring your deployment strategy.

Ingesting and processing data enhancements – Data from several origins can be loaded into the SQL pool component of Azure Synapse, let's say the old data warehouse. To load that data we can use a storage account or, even better, Data Lake Storage with the help of PolyBase; we can use another Azure component called Azure Data Factory to bring data from several origins; or traditional tools like BCP for those working with SQL. After cleaning the data in staging tables, you can copy to production everything that makes sense.
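As a sketch of that loading step, the snippet below sends a COPY INTO statement to a dedicated SQL pool over pyodbc; the server, credentials, table and storage path are placeholders, and your authentication options will vary:

```python
# Minimal sketch: load staged CSV files into a Synapse dedicated SQL pool
# with COPY INTO over pyodbc. Server, credentials, table name and storage
# path are placeholders.
import pyodbc

conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=<workspace>.sql.azuresynapse.net;"
    "Database=<sqlpool>;Uid=<user>;Pwd=<password>;Encrypt=yes;"
)
copy_sql = """
COPY INTO dbo.staging_orders
FROM 'https://<account>.blob.core.windows.net/staging/orders/*.csv'
WITH (FILE_TYPE = 'CSV', FIRSTROW = 2)
"""
cursor = conn.cursor()
cursor.execute(copy_sql)
conn.commit()
```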

A great advantage is that you can now get rich insights on your operational data in near real-time, using Azure Synapse Link. ETL-based systems tend to have higher latency for analyzing your operational data, due to many layers needed to extract, transform and load the operational data. With native integration of Azure Cosmos DB analytical store with Azure Synapse Analytics, you can analyze operational data in near real-time enabling new business scenarios.

Querying data – You can use Massively Parallel Processing (MPP) to run queries across petabytes of data quickly. Data engineers can use familiar Transact-SQL to query the contents of a data warehouse in Azure Synapse Analytics, while developers can use Python, Scala and R against the Spark engine. There is also support for .NET and Java.

Moreover, it is now possible to query on demand…

Authentication and security – Azure Synapse Analytics supports both SQL Server authentication and Azure Active Directory. You can also configure an RBAC strategy to access data with least-privileged principals.

Finally, you can even implement MFA to protect your data and operational work.

In the next post, I will show you how other pieces and components of cloud data solutions work, and the great benefits they bring in cost savings and technical advantages.

See you then…