IBM watsonx AI and data platform, security solutions and consulting services for generative AI to be showcased at AWS re:Invent

According to a Gartner® report, “By 2026, more than 80% of enterprises will have used generative AI APIs or models, and/or deployed GenAI-enabled applications in production environments, up from less than 5% in 2023.”* However, to be successful they need the flexibility to run it on their existing cloud environments. That’s why we continue expanding the IBM and AWS collaboration, providing clients flexibility to build and govern their AI projects using the watsonx AI and data platform with AI assistants on AWS.

With sprawling data underpinning these AI projects, enterprises are increasingly looking to data lakehouses to bring it all together in one place where they can access, cleanse and manage it. To that end,, a fit-for-purpose data store built on an open data lakehouse architecture, is already available as a fully managed software-as-a-service (SaaS) on Red Hat OpenShift and Red Hat OpenShift Service on AWS (ROSA)—all accessible in the AWS Marketplace.

The watsonx.governance toolkit and, IBM’s next-generation studio for AI builders, will follow in early 2024, making the full watsonx platform available on AWS. This provides clients a full stack of capabilities to train, tune and deploy AI models with trusted data, speed and governance, with increased flexibility to run their AI workflows wherever they reside.

During AWS re:Invent, IBM will show how clients accessing Llama 2 from Amazon SageMaker will be able to use the watsonx.governance toolkit to govern both the training data and the AI model so they can operate and scale with trust and transparency. Watsonx.governance can also help manage these models against regulatory guidelines and risks tied to the model itself and the application using it.

We’ll also be unveiling several exciting pieces of news about our fast-growing partnership, and showcasing the following joint innovations:

  • IBM Security’s Program for Service Providers: A new program for Managed Security Service Providers (MSSPs) and cloud system integrators to accelerate their adoption of IBM security software delivered on AWS. This program helps security providers develop and deliver threat detection and data security services designed specifically for protecting SMB clients. It also enables service providers to deliver services that can be listed in the AWS Marketplace, leveraging IBM Security software featuring built-in AWS integrations—significantly speeding and simplifying onboarding.
  • Apptio Cloudability and IBM Turbonomic Integration: Since IBM’s acquisition of Apptio closed in August, teams have been working on the integration of Apptio Cloudability, a cloud cost-management tool, and Turbonomic, an IT resource management tool for continuous hybrid cloud optimization. Today, key optimization metrics from Turbonomic can be visualized within the Cloudability interface, providing deeper cost analysis and savings for AWS Cloud environments.
  • Workload Modernization: We’re providing tools and services for deployment and support to simplify and automate the modernization and migration path from on-premises to as-a-service versions of IBM Planning Analytics, Db2 Warehouse and IBM Maximo Application Suite on AWS.
  • Growing Software Portfolio: We now have 25 SaaS products available on AWS, including APP Connect, Maximo Application Suite, IBM Turbonomic and three new SaaS editions of Guardium Insights. There are now more than 70 IBM listings in the AWS Marketplace. As part of an ongoing global expansion of our partnership, the IBM software and SaaS catalog (limited release) is now available for our clients in Denmark, France, Germany and the United Kingdom to procure via the AWS Marketplace.

In addition to these software capabilities, IBM is growing its generative AI capabilities and expertise with AWS—delivering new solutions to clients and training thousands of consultants on AWS generative AI services. IBM also launched an Innovation Lab in collaboration with AWS at the IBM Client Experience Center in Bangalore. This builds on IBM’s existing expertise with AWS generative AI services, including Amazon SageMaker, Amazon CodeWhisperer and Amazon Bedrock.

IBM is the only technology company with both AWS-specific consulting expertise and complementary technology spanning data and AI, automation, security and sustainability capabilities—all built on Red Hat OpenShift Service on AWS—that run cloud-native on AWS.

For more information about the IBM and AWS partnership, visit us at AWS re:Invent in booth #930. Don’t miss these sessions from IBM experts exploring hybrid cloud and AI:

  • Hybrid by Design at USAA: 5:00 p.m.​, Tuesday, November 28, The Venetian, Murano 3306
  • Scale and Accelerate the Impact of Generative AI with watsonx: 4:30 p.m., Wednesday, November 29, Wynn Las Vegas, Cristal 7

Learn more about the IBM and AWS partnership

*Gartner. Hype Cycle for Generative AI, 2023, 11 September 2023. Gartner and Hype Cycle are registered trademarks of Gartner, Inc. and/or its affiliates in the U.S. and internationally and are used herein with permission. All rights reserved.

Source: IBM Blockchain

Top 6 Kubernetes use cases

Kubernetes, the world’s most popular open-source container orchestration platform, is considered a major milestone in the history of cloud-native technologies. Developed internally at Google and released to the public in 2014, Kubernetes has enabled organizations to move away from traditional IT infrastructure and toward the automation of operational tasks tied to the deployment, scaling and management of containerized applications (or microservices). While Kubernetes has become the de facto standard for container management, many companies also use the technology for a broader range of use cases.

Overview of Kubernetes

Containers—lightweight units of software that package code and all its dependencies to run in any environment—form the foundation of Kubernetes and are mission-critical for modern microservices, cloud-native software and DevOps workflows.

Docker was the first open-source software tool to popularize building, deploying and managing containerized applications. But Docker lacked an automated “orchestration” tool, which made it time-consuming and complex for data science teams to scale applications. Kubernetes, also referred to as K8s, was specifically created to address these challenges by automating the management of containerized applications.

In broad strokes, the Kubernetes orchestration platform runs containers via pods and nodes. A pod runs one or more Linux containers and can be replicated for scaling and failure resistance. Nodes run the pods and are usually grouped in a Kubernetes cluster, abstracting away the underlying physical hardware resources.
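To make these building blocks concrete, here is a minimal Pod manifest sketched as a Python dict mirroring the YAML that Kubernetes actually consumes (the pod name and nginx image are illustrative placeholders, not taken from this article):

```python
# A minimal Kubernetes Pod manifest, modeled as a Python dict.
# The "web" name and nginx image are illustrative placeholders.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "web", "labels": {"app": "web"}},
    "spec": {
        "containers": [
            {
                "name": "web",
                "image": "nginx:1.25",  # one Linux container inside the pod
                "ports": [{"containerPort": 80}],
            }
        ]
    },
}

# A node runs many such pods; a cluster groups many nodes.
print(pod["kind"], len(pod["spec"]["containers"]))
```

In practice this structure would be written as YAML and applied with `kubectl apply`; the dict form simply shows the same nesting of `metadata` and `spec` fields.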

Kubernetes’s declarative, API-driven infrastructure has helped free up DevOps and other teams from manually driven processes so they can work more independently and efficiently to achieve their goals. In 2015, Google donated Kubernetes as a seed technology to the Cloud Native Computing Foundation (CNCF), the open-source, vendor-neutral hub of cloud-native computing.

Read about the history of Kubernetes

Today, Kubernetes is widely used in production to manage Docker and essentially any other type of container runtime. While Docker includes its own orchestration tool, called Docker Swarm, most developers choose Kubernetes container orchestration instead.

As an open-source system, Kubernetes services are supported by all the leading public cloud providers, including IBM, Amazon Web Services (AWS), Microsoft Azure and Google. Kubernetes can also run on bare metal servers and virtual machines (VMs) in private cloud, hybrid cloud and edge settings, provided the host OS is a version of Linux or Windows.

Six top Kubernetes use cases

Here’s a rundown of six top Kubernetes use cases that reveal how Kubernetes is transforming IT infrastructure.

1. Large-scale app deployment

Heavily trafficked websites and cloud computing applications receive millions of user requests each day. A key advantage of using Kubernetes for large-scale cloud app deployment is autoscaling, which allows applications to adjust to demand changes automatically, with speed, efficiency and minimal downtime. For instance, when demand fluctuates, Kubernetes enables applications to run continuously and respond to changes in web traffic patterns. This helps maintain the right amount of workload resources, without over- or under-provisioning.

Kubernetes employs horizontal pod autoscaling (HPA) to carry out load balancing (such as for CPU usage or custom metrics) by scaling the number of pod replicas (clones that facilitate self-healing) for a specific deployment. This mitigates potential issues like traffic surges, hardware problems or network disruptions.

Note: HPA is not to be confused with Kubernetes vertical pod autoscaling (VPA), which assigns additional resources, such as memory or CPU, to the pods that are already running for the workload.
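The scaling rule HPA applies can be sketched from the formula in the Kubernetes documentation, desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric). Below is a minimal Python rendering of that rule; it deliberately ignores HPA's tolerance band and stabilization windows, so it is a sketch of the core idea rather than the full controller behavior:

```python
import math

def desired_replicas(current_replicas: int,
                     current_metric: float,
                     target_metric: float) -> int:
    """Core HPA scaling rule: scale the replica count proportionally to
    how far the observed metric (e.g., average CPU) is from its target."""
    return math.ceil(current_replicas * current_metric / target_metric)

# 4 pods averaging 90% CPU against a 60% target -> scale out to 6 pods.
print(desired_replicas(4, 90, 60))
```

For example, 3 pods at 30% CPU against a 60% target would be scaled in to 2, since the formula rounds up rather than dropping below the needed capacity.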

2. High-performance computing

Industries including government, science, finance and engineering rely heavily on high-performance computing (HPC), the technology that processes big data to perform complex calculations. HPC uses powerful processors at extremely high speeds to make instantaneous data-driven decisions. Real-world uses of HPC include automating stock trading, weather prediction, DNA sequencing and aircraft flight simulation.

HPC-heavy industries use Kubernetes to manage the distribution of HPC calculations across hybrid and multicloud environments. Kubernetes can also serve as a flexible tool to support the batch job processing involved in HPC workloads, which enhances data and code portability.

3. AI and machine learning

Building and deploying artificial intelligence (AI) and machine learning (ML) systems requires huge volumes of data and complex processes like high performance computing and big data analysis. Deploying machine learning on Kubernetes makes it easier for organizations to automate the management and scaling of ML lifecycles and reduces the need for manual intervention.

For example, the Kubernetes containerized orchestration platform can automate portions of AI and ML predictive maintenance workflows, including health checks and resource planning. And Kubernetes can scale ML workloads up or down to meet user demands, adjust resource usage and control costs.

Machine learning relies on large language models to perform high-level natural language processing (NLP), such as text classification, sentiment analysis and machine translation, and Kubernetes helps speed the deployment of large language models and automate the NLP process. As more and more organizations turn to generative AI capabilities, they are using Kubernetes to run and scale generative AI models with high availability and fault tolerance.

Overall, Kubernetes provides the flexibility, portability and scalability needed to train, test, schedule and deploy ML and generative AI models.

4. Microservices management

Microservices (or microservices architecture) offer a modern cloud-native approach in which each application comprises numerous loosely coupled and independently deployable smaller components, or services. For instance, large retail e-commerce websites consist of many microservices, typically including an order service, payment service, shipping service and customer service. Each service has its own REST API, which the other services use to communicate with it.

Kubernetes was designed to handle the complexity involved in managing all the independent components running simultaneously within a microservices architecture. For instance, Kubernetes’ built-in high availability (HA) feature ensures continuous operations even in the event of failure. And the Kubernetes self-healing feature kicks in if a containerized app or an application component goes down, instantly redeploying it to match the desired state and helping to maintain uptime and reliability.
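The “desired state” that self-healing converges on is declared in a Deployment object. The sketch below models one as a Python dict, with an illustrative `orders` service and a toy reconcile step; the real controller logic is far richer, so treat this only as the shape of the idea:

```python
# Declarative desired state for one microservice: "run 3 replicas".
# If a pod dies, the Deployment controller recreates it to match this spec.
# The "orders" name and image tag are hypothetical.
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "orders"},
    "spec": {
        "replicas": 3,
        "selector": {"matchLabels": {"app": "orders"}},
        "template": {
            "metadata": {"labels": {"app": "orders"}},
            "spec": {"containers": [{"name": "orders",
                                     "image": "orders:1.0"}]},
        },
    },
}

def reconcile(desired: int, running: int) -> int:
    """Self-healing in miniature: how many pods must be (re)created
    to bring the running count back up to the declared desired state."""
    return max(desired - running, 0)

# Two of three pods have failed; the controller must recreate two.
print(reconcile(deployment["spec"]["replicas"], 1))
```

The key design point is that operators declare the end state (`replicas: 3`) rather than scripting the recovery steps; the control loop closes the gap continuously.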

5. Hybrid and multicloud deployments

Kubernetes is built to be used anywhere, making it easier for organizations to migrate applications from on-premises to hybrid cloud and multicloud environments. Kubernetes standardizes migration by providing software developers with built-in commands for effective app deployment. Kubernetes can also roll out changes to apps and scale them up and down depending on environment requirements.

Kubernetes offers portability across on-premises and cloud environments since it abstracts away infrastructure details from applications. This eliminates the need for platform-specific app dependencies and makes it easy to move applications between different cloud providers or data centers with minimal effort.

6. Enterprise DevOps

For enterprise DevOps teams, being able to update and deploy applications rapidly is critical for business success. Kubernetes supports teams in both developing and maintaining software systems, improving overall agility. And the Kubernetes API interface allows software developers and other DevOps stakeholders to easily view, access, deploy, update and optimize their container ecosystems.

CI/CD—which stands for continuous integration (CI) and continuous delivery (CD)—has become a key aspect of software development. In DevOps, CI/CD streamlines application coding, testing and deployment by giving teams a single repository for storing work and automation tools to consistently combine and test the code and ensure it works. Kubernetes plays an important role in cloud-native CI/CD pipelines by automating container deployment across cloud infrastructure environments and ensuring efficient use of resources.

The future of Kubernetes

Kubernetes plays a critical IT infrastructure role, as can be seen in its many value-driven use cases that go beyond container orchestration. This is why so many businesses continue to implement Kubernetes. In the 2021 Cloud Native Survey conducted by the CNCF, Kubernetes usage reached its highest point ever, with 96% of organizations using or evaluating the platform. According to the same study, Kubernetes usage continues to rise in emerging technology regions, such as Africa, where 73% of survey respondents are using Kubernetes in production.

IBM and Kubernetes

Kubernetes schedules and automates tasks integral to managing container-based architectures, spanning container deployment, updates, service discovery, storage provisioning, load balancing, health monitoring and more. At IBM we are helping clients modernize their applications and optimize their IT infrastructure with Kubernetes and other cloud-native solutions.

Deploy secure, highly available clusters in a native Kubernetes experience with IBM Cloud® Kubernetes Service.

Explore IBM Cloud Kubernetes Service

Containerize and deploy Kubernetes clusters for containerized platforms using Red Hat® OpenShift® on IBM Cloud.

Explore Red Hat OpenShift on IBM Cloud

Building on a year of focus to help IBM Power clients grow with hybrid cloud and AI

At the beginning of the year, we laid out a new strategy for IBM Power under the leadership of Ken King, who will be retiring by the end of 2023 after forty years with IBM. It is with immense gratitude that I thank Ken for his leadership not only across IBM Power, but for his service to IBM in various roles spanning IP, strategy and software during his distinguished IBM career.

I am excited to announce, therefore, that a few months ago I took on the role of IBM Power general manager. As Ken passes the baton, I want to take stock of the progress we’ve made — and point to where we are prioritizing — across four critical areas to help address our clients’ digital transformation imperatives:

  • Continuing to innovate key capabilities for core business workloads by strategically investing in three operating environments on IBM Power: AIX, IBM i and Linux
  • Driving growth with SAP HANA on Power on-premises and in the cloud
  • Supporting clients’ banking and industry modernization journey
  • Providing greater flexibility with subscription services and Power as a Service

Our value proposition is the ability to combine hybrid cloud and AI with clients’ trusted data on IBM Power to fuel business outcomes. Let’s dig in to some specifics.

Down to the core

We’ve continued to innovate and invest in operating environments on IBM Power to help ensure business continuity, reliability, availability, serviceability and security for clients. In the latest IBM i Technology Refresh — IBM i 7.5 TR3 and 7.4 TR9 — announced in October, we listened to feedback from our IBM i Advisory Councils and prioritized advancements in ease of use, productivity, and automation with enhancements to Navigator for i and new additions to SYSTOOLS for automating Db2 for i. 

We also have a new release of AIX — AIX 7.3 TL2 — building on Power10’s high availability leadership with performance and scale enhancements to Live Kernel Update (designed to give the ability to update AIX without unplanned downtime), optimized file system performance and enhancements designed to improve AIX encryption performance and audit event checking. You can learn more about this latest release on the AIX webcast on November 14.

We are expanding IBM Db2 Warehouse on Power with a new Base Rack Express at a 30% lower entry list price, adding to today’s S, M and L configurations, while still providing the same total-solution experience, including Db2 Warehouse’s connectivity with to unlock the potential of data for analytics and AI.

As reported in April 2023, Oracle will be releasing Oracle Database 23c on Power as part of its next Long Term Release. Separately, in 2024, clients can look forward to continued enhancements to the AIX, IBM i and Linux roadmaps.

Accelerating business transformation with SAP HANA on Power 

As the 2027 end of mainstream maintenance for SAP’s legacy ERP is approaching, our customers are all in different stages of their business transformation journey. SAP is accelerating this journey by offering the current ERP, S/4HANA, as a managed service offering with SAP RISE. IBM Power is supporting our customers in their business transformation journey by offering customer infrastructure solutions designed to meet customers where they are. Whether they need Power10 systems on-premises to upgrade their SAP landscapes, Power Virtual Server capacity to accelerate migration to S/4HANA, or SAP RISE in IBM Cloud on Power, we are providing solutions on Power infrastructure.

In addition, IBM is also offering a hybrid cloud consumption model that will allow flexibility for both on-premises and cloud expenditures. Initially this program will allow clients to leverage the investment of on-premises hardware and, with a commitment to IBM Power Virtual Server, receive cloud capacity credits for IBM Power Private Cloud.

With this hybrid cloud consumption program, clients can leverage the benefits of cloud while also nurturing their on-premises SAP on Power environments as they build out their long-term hybrid cloud strategy.

To continue our momentum on AI with SAP, in 1Q24, as we announced in September, we will also be delivering the first release of SAP ABAP SDK for watsonx, which is intended to simplify and accelerate customers’ ability to consume watsonx services from their custom ABAP environments.

Driving industry modernization

Whether clients need to deploy large language models (LLMs), integrated with watsonx, close to their data and transactions, or integrate mission-critical data into their data fabric architecture, Power10’s powerful core can help embed AI-driven insights into business processes and safeguard AI workflows.

For instance, a Thai hospital chain was facing a challenge with its pathology process, which prolonged the overall workflow and delayed diagnosis, patient management and support for more patients. By deploying an AI inference solution for both speech-to-text and image analysis on Power10, the pathology unit was able to increase sensitivity in detecting lesions and prioritize higher-probability cases. These are important steps in its mission to achieve better clinical outcomes, faster time to treatment for patients and reduced pathologist workloads.

Later this month, clients will be able to take advantage of expanded data science services with the release of IBM Cloud Pak for Data V4.8, which will deliver the underpinnings for, IBM’s next-generation AI studio. To further help our clients on their AI journeys, we continue to double down on hybrid cloud with Red Hat so that workloads can run in a best-fit environment. To that end:

  • Red Hat OpenShift 4.14 has just been released and is available to run natively on IBM Power, providing support for multi-architecture compute (MAC) worker nodes across Power, IBM Z, ARM, and x86 environments.
  • Red Hat Ansible Automation Platform components now run natively on IBM Power. Clients can consolidate their environments and run Ansible Automation Platform on the same Power servers where their business-critical workloads are already running, instead of having to run Ansible automation hub and automation controller on separate x86 processor-based servers to manage Power endpoints. Read more here.

A vibrant ecosystem enables a range of use cases for our clients running software on Power. Finacle is a leading digital banking suite from Infosys. With Finacle solutions on Red Hat OpenShift on IBM Power, IBM and Finacle are expanding on our 20+ year collaboration. I’m happy to share that clients can soon leverage solutions from the Finacle Digital Banking Suite on Red Hat OpenShift on IBM Power to meet evolving customer demands, regulatory requirements and market dynamics.

Power as a service

To meet client demand, throughout the year we’re focusing on transforming the Proof of Concept (PoC) experience for IBM Power Virtual Server. We’re simplifying the process, making network configuration easier, adding Power Edge Routers, and implementing a step-by-step automated modernization approach for IBM i, AIX and Linux that’s designed to be as straightforward as an on-premises migration from Power9 to Power10.

We’re also moving away from a “Do It Yourself” (DIY) model for High Availability/Disaster Recovery (HA/DR) solutions to a prescriptive and automated one. The goal is to provide clients with a clear path forward for business continuity, ensuring a smoother and more efficient process.

For more on our fourth quarter plans to meet clients’ expectations for running production workloads effectively on IBM Power Virtual Server, read here.

IBM Power backed by IBM Expert Care

We’re also making strides in our service offerings for IBM Power. IBM Power10 can be sold together with IBM Power Expert Care, a tiered support model that makes it easier for clients to choose the right level of support for their needs and budget at the time of sale. Earlier this year, IBM adjusted the IBM Power E1080 Expert Care Premium tier to align to client expectations for proactive support. IBM Power Expert Care Remote Support and Parts is also now available in many countries with no physical IBM presence.

Additionally, all IBM Power support contracts come with access to IBM Support Insights, which provides clients with actionable insights for multivendor IT infrastructures to proactively assess and remediate IT risks. The IBM Support Insights Pro subscription, announced on September 12, is designed to expand and strengthen the scope of security risk coverage to include community open source, provide prioritized actions by vendor and product family to speed IT lifecycle decision-making, and further address reliability with an extended case history and analysis to better learn from previous support issues.

What’s next for IBM Power

We’ve listened to our community and advisory councils, and we’re dedicated to creating solutions with partners and clients so we can continue to strive to provide the most trusted and open computing platform for mission critical, scalable transaction processing, and data serving workloads. Our goals include making it easier for clients to run AI workloads closer to their data with on-chip AI acceleration, improving total cost of ownership and performance, increasing availability with up to 8x9s (99.999999%) for mission-critical workloads and fewer outages as compared to x86 servers, and enhancing security and sustainability features.

I’m extremely excited for the road ahead. We’ll continue to meet our clients where they are in their digital journey and strive to make the path to success as simple as possible, whether it’s by making more aaS options available, increasing pathways for workloads to move across hybrid environments, or helping to extract even more value from SAP workloads on Power.

Reach out to your IBM Power representative or Business Partner to discuss how we can keep making progress together.

Book a meeting with our team of experts

Statements regarding IBM’s future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only.


The history of Kubernetes

When it comes to modern IT infrastructure, the role of Kubernetes—the open-source container orchestration platform that automates the deployment, management and scaling of containerized software applications (apps) and services—can’t be overstated.

According to a Cloud Native Computing Foundation (CNCF) report, Kubernetes is the second largest open-source project in the world after Linux and the primary container orchestration tool for 71% of Fortune 100 companies. To understand how Kubernetes came to dominate the cloud computing and microservices marketplaces, we have to examine its history.

The evolution of Kubernetes

The history of Kubernetes, whose name comes from the Ancient Greek for “pilot” or “helmsman” (the person at the helm who steers the ship), is often traced to 2013, when a trio of engineers at Google—Craig McLuckie, Joe Beda and Brendan Burns—pitched an idea to build an open-source container management system. These tech pioneers were looking for ways to bring Google’s internal infrastructure expertise into the realm of large-scale cloud computing and also enable Google to compete with Amazon Web Services (AWS)—the unrivaled leader among cloud providers at the time.

Traditional IT infrastructure versus virtual IT infrastructure

But to truly understand the history of Kubernetes—also often referred to as “Kube” or “K8s,” a “numeronym”—we have to look at containers in the context of traditional IT infrastructure versus virtual IT infrastructure.

In the past, organizations ran their apps solely on physical servers (also known as bare metal servers). However, there was no way to maintain system resource boundaries for those apps. For instance, whenever a physical server ran multiple applications, one application might eat up all of the processing power, memory, storage space or other resources on that server. To prevent this from happening, businesses would run each application on a different physical server. But running apps on multiple servers created underutilized resources and an inability to scale. What’s more, maintaining a large number of physical machines takes up space and is a costly endeavor.


Then came virtualization—the process that forms the foundation for cloud computing. While virtualization technology can be traced back to the late 1960s, it wasn’t widely adopted until the early 2000s.

Virtualization relies on software known as a hypervisor. A hypervisor is a lightweight form of software that enables multiple virtual machines (VMs) to run on a single physical server’s central processing unit (CPU). Each virtual machine has a guest operating system (OS), a virtual copy of the hardware that the OS requires to run and an application and its associated libraries and dependencies. 

While VMs create more efficient usage of hardware resources to run apps than physical servers, they still take up a large amount of system resources. This is especially the case when numerous VMs are run on the same physical server, each with its own guest operating system.


Enter container technology. A historical milestone in container development occurred in 1979 with the development of chroot, part of the Unix version 7 operating system. Chroot introduced the concept of process isolation by restricting an application’s file access to a specific directory (the root) and its children (or subprocesses).

Modern-day containers are defined as units of software where application code is packaged with all its libraries and dependencies. This allows applications to run quickly in any environment—whether on- or off-premises—from a desktop, private data center or public cloud.

Rather than virtualizing the underlying hardware like VMs, containers virtualize the operating system (usually Linux or Windows). The lack of a guest OS is what makes containers lightweight, as well as faster and more portable than VMs.

Borg: The predecessor to Kubernetes

Back in the early 2000s, Google needed a way to get the best performance out of its virtual server to support its growing infrastructure and deliver its public cloud platform. This led to the creation of Borg, the first unified container management system. Developed between 2003 and 2004, the Borg system is named after a group of Star Trek aliens—the Borg—cybernetic organisms who function by sharing a hive mind (collective consciousness) called “The Collective.”

The Borg name fit the Google project well. Borg’s large-scale cluster management system essentially acts as a central brain for running containerized workloads across its data centers. Designed to run alongside Google’s search engine, Borg was used to build Google’s internet services, including Gmail, Google Docs, Google Search, Google Maps and YouTube.

Borg allowed Google to run hundreds of thousands of jobs, from many different applications, across many machines. This enabled Google to accomplish high resource utilization, fault tolerance and scalability for its large-scale workloads. Borg is still used at Google today as the company’s primary internal container management system.

In 2013, Google introduced Omega, its second-generation container management system. Omega took the Borg ecosystem further, providing a flexible, scalable scheduling solution for large-scale computer clusters. It was also in 2013 that Docker, a key player in Kubernetes history, came into the picture.

Docker ushers in open-source containerization

Developed by dotCloud, a Platform-as-a-Service (PaaS) technology company, Docker was released in 2013 as an open-source software tool that allowed online software developers to build, deploy and manage containerized applications.

Docker container technology uses the Linux kernel (the base component of the operating system) and features of the kernel to separate processes so they can run independently. To clear up any confusion, the Docker namesake also refers to Docker, Inc. (formerly dotCloud), which develops productivity tools built around its open-source containerization platform, as well as to the Docker open-source ecosystem and community.

By popularizing a lightweight container runtime and providing a simple way to package, distribute and deploy applications onto a machine, Docker provided the seeds or inspiration for the founders of Kubernetes. When Docker came on the scene, Googlers Craig McLuckie, Joe Beda and Brendan Burns were excited by Docker’s ability to build individual containers and run them on individual machines.

While Docker had changed the game for cloud-native infrastructure, it had a key limitation: it was built to run on a single node, which made automating container management across machines impossible. For instance, as apps scaled to thousands of separate containers, managing them across various environments became a difficult, manual task in which each deployment had to be packaged by hand. The Google team saw a need—and an opportunity—for a container orchestrator that could deploy and manage multiple containers across multiple machines. Thus, Google’s third-generation container management system, Kubernetes, was born.

Learn more about the differences and similarities between Kubernetes and Docker

The birth of Kubernetes

Many of the developers of Kubernetes had worked on Borg and wanted to build a container orchestrator that incorporated everything they had learned through the design and development of the Borg and Omega systems, producing a less complex open-source tool with a user-friendly interface. As an ode to Borg, they named it Project Seven, after Seven of Nine, the Star Trek: Voyager character who is a former Borg drone. While the original project name didn’t stick, it was memorialized by the seven points on the Kubernetes logo.

Inside a Kubernetes cluster

Kubernetes architecture is based on running clusters that allow containers to run across multiple machines and environments. Each cluster typically consists of two classes of nodes:

  • Worker nodes, which run the containerized applications.
  • Control plane nodes, which manage the worker nodes and the state of the cluster.

The control plane acts as the orchestrator of the Kubernetes cluster and includes several components: the API server (which manages all interactions with Kubernetes), the controller manager (which runs the cluster’s control processes), the cloud controller manager (the interface with the cloud provider’s API) and so forth. Worker nodes run containers using a container runtime such as Docker. Pods, the smallest deployable units in a cluster, hold one or more app containers and share resources such as storage and networking information.
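
As a concrete illustration of these pieces, here is what a minimal Pod definition looks like, written as the Python-dictionary equivalent of the YAML manifest the API server accepts. The names, images and labels below are invented for the example, not taken from the article:

```python
# A minimal Kubernetes Pod manifest, expressed as the Python equivalent of
# the YAML that kubectl submits to the API server. All names and images
# here are illustrative.
pod_manifest = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {
        "name": "web-pod",
        "labels": {"app": "web"},
    },
    "spec": {
        # All containers in a pod share the same network identity and can
        # mount the same volumes; these are the shared resources noted above.
        "containers": [
            {
                "name": "web",
                "image": "nginx:1.25",
                "ports": [{"containerPort": 80}],
            },
            {
                "name": "log-sidecar",
                "image": "busybox:1.36",
                "command": ["sh", "-c", "tail -f /dev/null"],
            },
        ],
        "volumes": [{"name": "shared-data", "emptyDir": {}}],
    },
}
```

The API server validates a manifest like this one; the scheduler then picks a worker node for the pod, and that node’s container runtime starts both containers.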

Read more about how Kubernetes clusters work

Kubernetes goes public

In 2014, Kubernetes made its debut as an open-source version of Borg, with Microsoft, Red Hat, IBM and Docker signing on as early members of the Kubernetes community. The software tool included basic features for container orchestration, including the following:

  • Replication to deploy multiple instances of an application
  • Load balancing and service discovery
  • Basic health checking and repair
  • Scheduling to group many machines together and distribute work to them
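
These early features can be sketched in a few lines of Python. This is an illustrative toy, not Kubernetes source code: it distributes a desired number of replicas across machines round-robin and reschedules replicas off failed nodes, and all names in it are invented:

```python
import itertools

def schedule(replicas, nodes):
    """Distribute the desired number of replicas across nodes round-robin."""
    assignment = {node: [] for node in nodes}
    node_cycle = itertools.cycle(nodes)
    for i in range(replicas):
        assignment[next(node_cycle)].append(f"app-replica-{i}")
    return assignment

def repair(assignment, healthy):
    """Basic health repair: move replicas from failed nodes to healthy ones."""
    lost = [r for node, rs in assignment.items() if node not in healthy for r in rs]
    kept = {node: rs for node, rs in assignment.items() if node in healthy}
    node_cycle = itertools.cycle(sorted(kept))
    for replica in lost:
        kept[next(node_cycle)].append(replica)
    return kept

# Replication: ask for 4 copies; scheduling: spread them over two machines.
placement = schedule(replicas=4, nodes=["node-a", "node-b"])
# Health checking and repair: node-a fails, its replicas are rescheduled.
placement = repair(placement, healthy=["node-b"])
```

Real Kubernetes does all of this through controllers and the scheduler rather than two functions, but the division of labor (declare replicas, place them, repair them) is the same.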

In 2015, at the O’Reilly Open Source Convention (OSCON), the Kubernetes founders unveiled an expanded and refined version of Kubernetes—Kubernetes 1.0. Soon after, developers from the Red Hat® OpenShift® team joined the Google team, lending their engineering and enterprise experience to the project.

The history of Kubernetes and the Cloud Native Computing Foundation

Coinciding with the release of Kubernetes 1.0 in 2015, Google donated Kubernetes to the Cloud Native Computing Foundation (CNCF), part of the nonprofit Linux Foundation. The CNCF was jointly created by numerous members of the world’s leading computing companies, including Docker, Google, Microsoft, IBM and Red Hat. The mission of the CNCF is “to make cloud-native computing ubiquitous.”

In 2016, Kubernetes became the CNCF’s first hosted project, and by 2018 it was the CNCF’s first project to graduate. The number of actively contributing companies quickly rose to over 700, and Kubernetes became one of the fastest-growing open-source projects in history. By 2017, it was outpacing competitors like Docker Swarm and Apache Mesos to become the industry standard for container orchestration.

Kubernetes and cloud-native applications

Before cloud computing, software applications were tied to the hardware servers they ran on. But in 2018, as Kubernetes and containers became the management standard for cloud vendors, the concept of cloud-native applications began to take hold. This opened the gateway for the research and development of cloud-based software.

Kubernetes aids in developing cloud-native microservices-based programs and allows for the containerization of existing apps, enabling faster app development. Kubernetes also provides the automation and observability needed to efficiently manage multiple applications at the same time. The declarative, API-driven infrastructure of Kubernetes allows cloud-native development teams to operate independently and increase their productivity.
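
The declarative model described above can be sketched as a reconciliation loop: teams declare the desired state, and a controller computes the actions needed to bring the observed state in line with it. A minimal sketch, with data shapes invented for illustration:

```python
# Toy reconciliation: compare declared replica counts against what is
# running and emit the actions a controller would take. This is an
# illustration of the declarative pattern, not Kubernetes controller code.
def reconcile(desired, observed):
    """Return (action, name, count) tuples that move observed toward desired."""
    actions = []
    for name, want in desired.items():
        have = observed.get(name, 0)
        if have < want:
            actions.append(("create", name, want - have))
        elif have > want:
            actions.append(("delete", name, have - want))
    return actions

desired = {"web": 3, "worker": 2}   # what the team declares via the API
observed = {"web": 1, "worker": 3}  # what is actually running
actions = reconcile(desired, observed)  # create 2 web, delete 1 worker
```

Because the API only records desired state, each team can declare its own applications independently, which is what lets cloud-native teams operate without coordinating every deployment step.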

The continued impact of Kubernetes

The history of Kubernetes, and its role as a portable, extensible, open-source platform for managing containerized workloads and microservices, continues to unfold.

Since Kubernetes joined the CNCF in 2016, the number of contributors has grown to 8,012, a 996% increase. The CNCF’s flagship global conference, KubeCon + CloudNativeCon, attracts thousands of attendees and provides an annual forum for developers’ and users’ information and insights on Kubernetes and other DevOps trends.

On the cloud transformation and application modernization fronts, the adoption of Kubernetes shows no signs of slowing down. According to a report from Gartner, The CTO’s Guide to Containers and Kubernetes, more than 90% of the world’s organizations will be running containerized applications in production by 2027.

IBM and Kubernetes

Back in 2014, IBM was one of the first major companies to join forces with the Kubernetes open-source community and bring container orchestration to the enterprise. Today, IBM is helping businesses navigate their ongoing cloud journeys with the implementation of Kubernetes container orchestration and other cloud-based management solutions.

Whether your goal is cloud-native application development, large-scale app deployment or managing microservices, we can help you leverage Kubernetes and its many use cases.

Get started with IBM Cloud® Kubernetes Service

Red Hat® OpenShift® on IBM Cloud® offers OpenShift developers a fast and secure way to containerize and deploy enterprise workloads in Kubernetes clusters.

Explore Red Hat OpenShift on IBM Cloud

IBM Cloud® Code Engine, a fully managed serverless platform, allows you to run containers, application code or batch jobs on a fully managed container runtime.

Learn more about IBM Cloud Code Engine

The post The history of Kubernetes appeared first on IBM Blog.

Source: IBM Blockchain

Better together: IBM and Microsoft make enterprise-wide transformation a reality

IBM® and Microsoft—the two largest global IT companies—are working together.

While many may see IBM and Microsoft as competitors, we are much better partners bringing the best of both companies together to accelerate impact and influence at scale. We provide current and new clients the unique opportunity to take advantage of the combined value from our technology, cloud, and consulting services alongside our ecosystem of partners to meet ever-evolving business needs.

We have a proven track record of delivering meaningful innovation and industry firsts through the combination of our aligned technology portfolios and consulting services. We boast a robust partner network of hybrid multicloud, AI and security professionals and teams that are vertical smart and multicloud adept.

Companies can engage one of IBM and Microsoft’s distinguished roster of top worldwide global systems integrator (GSI) partners, such as IBM Consulting™. With IBM Consulting and technology, Microsoft Cloud, and Azure Red Hat® OpenShift® (ARO), companies can simplify application modernization and speed cloud adoption with flexibility and security. As a result, companies can break down silos and open closed systems to increase operational efficiency and drive real business transformation. 

Providing joint solutions

For IBM and Microsoft, it’s not just a platform discussion. It’s about providing the joint solutions companies seek to improve costs, productivity and resilience by accelerating their hybrid cloud and AI journeys. Why? Because we know a hybrid cloud approach brings an average of 2.5 times higher ROI than public cloud alone, and trailblazers investing in AI are seeing a revenue uplift of 3% to 15% and a sales ROI uplift of 10% to 20%.

Read more about how IBM and Microsoft are collaborating on generative AI

Through our dedicated IBM Consulting Microsoft practice, we have co-funded and co-developed key solutions with Microsoft and have invested in training our consultants on Microsoft Cloud while expanding our bench of experts and capabilities through acquisitions like Neudesic. We are the #1 Red Hat GSI specialized in ARO. This commitment has helped us become recognized in 13 categories at this year’s Microsoft Partner of the Year Awards, including being named Microsoft’s U.S. Partner of the Year Winner for GSI Growth Champion.

Leveraging our deep expertise in managing large SAP environments, we are working tightly together on numerous modernization projects like the one led for international oil and gas company OMV Aktiengesellschaft to drive digital transformation.

Many companies still see incredible value in their on-premises technology: 71% of executives say mainframe-based applications are central to their business strategy. This is why our technology teams co-developed key patterns to accelerate mainframe application modernization, enabling companies to leverage both IBM Z and Microsoft Cloud capabilities and extend workloads to wherever they best meet business needs.

We are also working together to drive more sustainable, resilient operations through IBM Maximo Application Suite, the top-rated intelligent enterprise asset management software, combined with Microsoft Azure, now certified for Microsoft Cloud for Manufacturing. This power duo has helped transform how clients like Sund & Baelt approach business outcomes, driving accelerated deployment, minimized costs and ensured compliance.

Through our notable technology efforts, IBM has been designated as an official Microsoft independent software vendor (ISV) partner. This designation distinguishes the partner solution’s interoperability, specific capabilities, commercial marketplace transactability, and demonstrated track record of customer success. This helps clients better identify proven software solutions that are best-suited for their business needs and provides IBM enhanced benefits from Microsoft to better deliver for our clients.

Power of the Microsoft Azure Marketplace

Digital marketplaces are disrupting the way clients buy software. Forrester projects a shift of USD 2 trillion away from traditional direct sales to digital marketplaces.

We are seeing this with our clients who are asking to deploy more IBM software on the Microsoft Azure Marketplace. With 4 million active visitors every month across 140+ geographies and ranked as one of the most favored cloud service providers, the Azure Marketplace is critical for the IBM and Microsoft partnership and our ecosystem partners.

To that end, IBM has deployed over 36 software offerings on the Azure Marketplace, including the following SaaS products on Azure: IBM® Netezza® Performance Server, IBM Turbonomic Application Resource Management, IBM Aspera® on Cloud and SingleStoreDB as a Service with IBM. We plan to deliver more offerings on the Marketplace, in addition to the recently launched IBM watsonx. With a strategic roadmap to be available on Azure, watsonx enables businesses to harness the power of foundation models and generative AI through an exceptional SaaS experience.[1] More offerings mean more value for our clients, helping them draw down their Microsoft Azure Consumption Commitments (MACC). And in support, Microsoft recognizes Marketplace purchases as Azure Consumption Revenue (ACR) for our wider partner ecosystem.

Looking ahead

Underlying the strength of the growing IBM and Microsoft partnership is great alignment and a shared commitment to ongoing co-development, co-investments, and co-delivery for our joint clients. This means expanding our dedicated team and portfolio of offerings, and extending and building our overall ecosystem of partners that are critical to our joint clients’ growth and success. We are actively working with Microsoft engineering teams to explore areas of integrations and complementary offerings in hybrid cloud and AI and are excited for the key launches and announcements in the year ahead.

Learn more about the IBM and Microsoft partnership

Visit IBM’s partner page for Microsoft Ignite

Explore IBM Consulting for Microsoft

[1] This statement is based on IBM’s current product plans and strategy and may change at any time at IBM’s sole discretion based on market opportunities or other factors. It is not a commitment by IBM to future product or feature availability.

The post Better together: IBM and Microsoft make enterprise-wide transformation a reality appeared first on IBM Blog.


Reinforcing IBM’s commitment to open source Hyperledger Fabric

It’s been an exciting year for blockchain development. From privacy-preserving digital health credentials to new, digitized supply chain infrastructure — some of the most interesting technology developments of late have drawn on the benefits of enterprise blockchain and distributed ledgers. That’s why today we are doubling down on our contributions to enterprise blockchain and Hyperledger, […]
The post Reinforcing IBM’s commitment to open source Hyperledger Fabric appeared first on Blockchain Pulse: IBM Blockchain Blog.

Recognizing the winners of our Back to Work COVID contest

We at IBM have always believed that some of the most exciting innovations and advancements are happening outside of the major technology companies, a belief embodied in IBM initiatives like Call for Code. As economies and organizations around the world found themselves emerging from lockdowns and beginning to reopen, not only were we asking ourselves […]
The post Recognizing the winners of our Back to Work COVID contest appeared first on Blockchain Pulse: IBM Blockchain Blog.

IBM Blockchain Platform 2.5: A new era of multi-party systems

New collaboration models are emerging out of necessity today and for better preparedness tomorrow. It’s evidenced in the way supply chains are changing to provide better visibility for the distribution of emergency supplies. We see the need for public and private information to be validated and shared between healthcare institutes and government authorities to support […]
The post IBM Blockchain Platform 2.5: A new era of multi-party systems appeared first on Blockchain Pulse: IBM Blockchain Blog.

IBM Blockchain Platform is full steam ahead with Hyperledger Fabric 2.0, Red Hat Integrations

This week marks the first fully digital IBM Think conference! There is no shortage of blockchain content rolling out which can be accessed by anyone around the world, on-demand. Learn about enabling trusted data exchange in time-critical situations from our Vice President of Blockchain, Jerry Cuomo, and our partners and blockchain leaders from HACERA, SecureKey […]
The post IBM Blockchain Platform is full steam ahead with Hyperledger Fabric 2.0, Red Hat Integrations appeared first on Blockchain Pulse: IBM Blockchain Blog.

Hyperledger Fabric and the power of the group

I always say that enterprise blockchain is a team sport. And I’m often asked, “Jerry, what does that mean?” Blockchain is a technology that draws its strength and stability from the team — or in this context — the group or consortium of organizations working towards a common goal. Not coincidentally, open source — one […]
The post Hyperledger Fabric and the power of the group appeared first on Blockchain Pulse: IBM Blockchain Blog.