Your Black Friday observability checklist

Black Friday—and really, the entire Cyber Week—is a time when you want your applications running at peak performance without completely exhausting your operations teams.

Observability solutions can help you achieve this goal, whether you’re a small team with a single product or a large team operating complex ecommerce applications. But not all observability solutions (or tools) are alike, and if you are missing just one key capability, it could cause customer satisfaction issues, slower sales and even top- and bottom-line revenue catastrophes.

The observability market is full of vendors, with different descriptions, features and support capabilities. This can make it difficult to distinguish what’s critical from what is just nice to have in your observability solution.

Here’s a handy checklist to help you find and implement the best possible observability platform to keep all your applications running merry and bright:

  • Complete automation. You need automatic capture to achieve a comprehensive real-time view of your application. A full-stack tool that can automatically observe your environment will minimize mean time to detection (MTTD) and prevent potential outages.
  • High-fidelity data. The most powerful use of data is the ability to contextualize. Without context, your team has no idea how big or small your problem is. Contextualizing telemetry data by visualizing the relevant information or metadata enables teams to better understand and interpret the data. This combination of accuracy and context helps teams make more informed decisions and pinpoint the root causes of issues.
  • Real-time change detection. Monitoring your entire stack with a single platform (from mainframes to mobile) can contribute to your growth. How? You can see how transactions are zipping across the internet, keeping the wheels of your commerce well lubricated. Another advantage of real-time detection is the visibility you gain when you connect your application components with your underlying infrastructure. This is important to your IT team’s success, as they gain visibility into your stack and services and can map them to their dependencies.
  • Mobile and website digital experience management. End-user, mobile, website and synthetic monitoring all enable you to improve the end-user experience. You should use an observability tool with real-user monitoring to deliver an exceptional experience for users and accommodate growth. Real-user monitoring tracks real users’ interactions with your applications, while end-user monitoring captures performance data from the user’s perspective. Synthetic monitoring creates simulated user interactions to proactively identify potential issues, ensuring your applications meet user expectations and performance standards. Combined, these capabilities can provide real-time insights into server performance and website load times; capture user interactions and provide detailed insights into user behaviour; and monitor server loads and traffic distribution, automatically adjusting load-balancing configurations to distribute traffic evenly, preventing server overloads and ensuring a smooth shopping experience.
  • Built-in AI and machine learning. Having AI-assisted root cause analysis in your observability platform is crucial if you want to diagnose the root causes of issues or anomalies within a system or application automatically. This capability is particularly valuable in complex and dynamic environments where manual analysis might be time consuming and less efficient.
  • Visibility deep and wide. The true advantage of full stack lies in connecting your application components with the underlying infrastructure. This is critical for IT success because it grants visibility into your stack and services and maps them to dependencies.
  • Ease of use. An automated and user-friendly installation procedure minimizes the complexity of deployment.
  • Broad platform support. Support for popular cloud platforms (AWS, GCP, Microsoft Azure, IBM Cloud®), covering both Infrastructure as a Service and Platform as a Service, with simplified installation.
  • Continuous production profiling. Profiles code in production as issues occur, across a range of programming languages, offering visibility into code-level performance hot spots and bottlenecks.

In a market with detection gaps, 10 seconds is too long. Let this checklist guide you as you build a real-time full-stack observability solution that keeps your business running smoothly for the entire holiday season.

Request a demo to learn more.

Source: IBM Blockchain

Level up your Kafka applications with schemas

Apache Kafka is a well-known open-source event store and stream-processing platform that has grown to become the de facto standard for data streaming. In this article, developer Michael Burgess provides insight into the concept of schemas and schema management as a way to add value to your event-driven applications on the fully managed Kafka service, IBM Event Streams on IBM Cloud®.

What is a schema?

A schema describes the structure of data.

For example:

A simple Java class modelling an order of some product from an online store might start with fields like:

public class Order {

    private String productName;

    private String productCode;

    private int quantity;
}

If order objects were being created using this class, and sent to a topic in Kafka, we could describe the structure of those records using a schema such as this Avro schema:

{
  "type": "record",
  "name": "Order",
  "fields": [
    {"name": "productName", "type": "string"},
    {"name": "productCode", "type": "string"},
    {"name": "quantity", "type": "int"}
  ]
}

Why should you use a schema?

Apache Kafka transfers data without validating the information in the messages. It has no visibility into what kind of data is being sent and received, or what data types the messages might contain. Kafka does not even examine the metadata of your messages.

One of the functions of Kafka is to decouple consuming and producing applications, so that they communicate via a Kafka topic rather than directly. This allows them to each work at their own speed, but they still need to agree upon the same data structure; otherwise, the consuming applications have no way to deserialize the data they receive back into something with meaning. The applications all need to share the same assumptions about the structure of the data.

In the scope of Kafka, a schema describes the structure of the data in a message. It defines the fields that need to be present in each message and the types of each field.

This means a schema forms a well-defined contract between a producing application and a consuming application, allowing consuming applications to parse and interpret the data in the messages they receive correctly.
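To make that contract concrete, here is a minimal sketch in plain Java (no Kafka or Avro libraries; the class and method names are illustrative, not part of any Kafka API). Both sides hard-code the same assumption: a message carries productName, productCode and quantity, in that order. The consumer can only parse the message because it shares that assumption with the producer.

```java
public class SchemaContractDemo {

    // The "schema" both sides agree on: productName, productCode, quantity.
    static String serialize(String productName, String productCode, int quantity) {
        return productName + "," + productCode + "," + quantity;
    }

    // The consumer can only do this because it knows field 2 is an int.
    static int parseQuantity(String message) {
        String[] fields = message.split(",");
        return Integer.parseInt(fields[2]);
    }

    public static void main(String[] args) {
        String message = serialize("widget", "W-100", 3);
        System.out.println(parseQuantity(message)); // prints 3
    }
}
```

If the producer reordered or renamed fields without the consumer knowing, parseQuantity would fail or return garbage; a schema makes that shared assumption explicit and checkable.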

What is a schema registry?

A schema registry supports your Kafka cluster by providing a repository for managing and validating schemas within that cluster. It acts as a database for storing your schemas and provides an interface for managing the schema lifecycle and retrieving schemas. A schema registry also validates evolution of schemas.

Optimize your Kafka environment by using a schema registry.

A schema registry is essentially an agreement on the structure of your data within your Kafka environment. By having a consistent store of the data formats in your applications, you avoid common mistakes that can occur when building applications, such as poor data quality and inconsistencies between your producing and consuming applications that may eventually lead to data corruption. Having a well-managed schema registry is not just a technical necessity; it also contributes to the strategic goal of treating data as a valuable product and helps tremendously on your data-as-a-product journey.

Using a schema registry increases the quality of your data and ensures data remains consistent, by enforcing rules for schema evolution. So as well as ensuring data consistency between produced and consumed messages, a schema registry ensures that your messages will remain compatible as schema versions change over time. Over the lifetime of a business, it is very likely that the format of the messages exchanged by the applications supporting the business will need to change. For example, the Order class in the example schema we used earlier might gain a new status field, or the product code field might be replaced by a combination of department number and product number. The result is that the schema of the objects in our business domain is continually evolving, so you need to be able to ensure agreement on the schema of messages in any particular topic at any given time.
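For instance, the new status field could be added to the earlier Avro schema with a default value (the "PENDING" default here is illustrative). In Avro, supplying a default for the new field is what allows applications using the new schema to still read records written with the old one:

```json
{
  "type": "record",
  "name": "Order",
  "fields": [
    {"name": "productName", "type": "string"},
    {"name": "productCode", "type": "string"},
    {"name": "quantity", "type": "int"},
    {"name": "status", "type": "string", "default": "PENDING"}
  ]
}
```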

There are various patterns for schema evolution:

  • Forward Compatibility: where the producing applications can be updated to a new version of the schema, and all consuming applications will be able to continue to consume messages while waiting to be migrated to the new version.
  • Backward Compatibility: where consuming applications can be migrated to a new version of the schema first, and are able to continue to consume messages produced in the old format while producing applications are migrated.
  • Full Compatibility: when schemas are both forward and backward compatible.

A schema registry is able to enforce rules for schema evolution, allowing you to guarantee either forward, backward or full compatibility of new schema versions and preventing incompatible schema versions from being introduced.

By providing a repository of versions of schemas used within a Kafka cluster, past and present, a schema registry simplifies adherence to data governance and data quality policies, since it provides a convenient way to track and audit changes to your topic data formats.

What’s next?

In summary, a schema registry plays a crucial role in managing schema evolution, versioning and the consistency of data in distributed systems, ultimately supporting interoperability between different components. Event Streams on IBM Cloud provides a Schema Registry as part of its Enterprise plan. Ensure your environment is optimized by utilizing this feature on the fully managed Kafka offering on IBM Cloud to build intelligent and responsive applications that react to events in real time.

  • Provision an instance of Event Streams on IBM Cloud here.
  • Learn how to use the Event Streams Schema Registry here.
  • Learn more about Kafka and its use cases here.
  • For any challenges in set up, see our Getting Started Guide and FAQs.


Watsonx: a game changer for embedding generative AI into commercial solutions

IBM watsonx is changing the game for enterprises of all shapes and sizes, making it easy for them to embed generative AI into their operations. This week, the CEO of WellnessWits, an IBM Business Partner, announced that they have embedded watsonx in their app to help patients ask questions about chronic disease and more easily schedule appointments with physicians.

Watsonx comprises three components that empower businesses to customize their AI solutions: one offers intuitive tooling for powerful foundation models; another enables compute-efficient, scalable workloads wherever data resides; and the third, watsonx.governance, provides guardrails essential to responsible implementation. Watsonx gives organizations the ability to refine foundation models with their own domain-specific data to gain competitive advantage and ensure factual grounding in external sources of knowledge.

These features—along with a broad range of traditional machine learning and AI functions—are now available to independent software vendors (ISVs) and managed service providers (MSPs) as part of IBM’s embeddable software portfolio, supported by the IBM Ecosystem Engineering Build Lab and partner ecosystem.

The watsonx platform, along with other IBM AI applications, libraries and APIs, helps partners more quickly bring AI-powered commercial software to market, reducing the need for specialized talent and developer resources.

A platform prioritized for enterprise AI

IBM is focused on helping organizations create business value by embedding generative AI. Watsonx provides the functionality enterprise developers need most, including summarization of domain-specific text; classification of inputs based on sentiment analysis, threat levels or customer segmentation; text content generation; analysis and extraction (or redaction) of essential information; and question-answering functions. The most common use cases from partners often combine several of these AI tasks.

ISVs need the flexibility to choose models appropriate to their industry, domain and use case. Watsonx provides access to open-source models (through the Hugging Face catalog), third-party models (such as Meta’s Llama 2) and IBM’s own Granite models. IBM provides an IP indemnity (contractual protection) for its foundation models, enabling partners to be more confident AI creators. With watsonx, ISVs can further differentiate their offering and gain competitive advantage by harnessing proprietary data and tuning the models to domain-specific tasks. These capabilities allow ISVs to better address their clients’ industry-specific needs.

Let’s explore a few AI use cases that span different industries. 

Exceptional customer care through AI solutions

Today, customers expect seamless experiences and fast answers to their questions, and companies that fail to meet these expectations risk falling behind. Customer service has leapfrogged other functions to become CEOs’ top generative AI priority. Given this trend, companies should be looking for ways to embed generative AI into their customer care portals. To accelerate this process, companies can implement AI-infused customer care commercial solutions. IBM’s embeddable AI technology, such as IBM watsonx Assistant, allows ISVs to quickly and easily build AI into their solutions, which in turn helps them reduce time to market and reach their customers sooner.

Watsonx allows enterprises to effortlessly generate conversation transcripts with live agents or automate Q&A sessions. From these, they can obtain concise conversation summaries, extract key information and classify interactions, such as conducting sentiment analysis to gauge customer satisfaction. This information further refines and improves the information available to the agents.

Streamline your procurement process using watsonx

By embedding AI technology in enterprise solutions, organizational leaders can connect disparate, broken processes and data into integrated end-to-end solutions.

For example, supply chain management can be a challenge for companies. The process of changing suppliers can be a time-consuming and complex task, as it requires intensive research and collaboration across the organization. Instead of spending cycles and resources on creating an in-house solution that streamlines this process, companies can implement an AI-infused supply chain management solution developed by ISVs. ISVs are experts in their domain and build their solutions with enterprise-grade AI, such as watsonx Assistant, so companies can feel confident in their selection.

Watsonx Assistant can serve as a user-friendly, natural-language Q&A interface for your supplier database. In the background, the platform generates database queries and content like requests for proposals (RFPs) or requests for information (RFIs), while Watson Discovery analyzes supplier financial reports. The assistant can also act as a front end for the company’s ERP system, surfacing up-to-date attributes about inventory items, supplier ratings, quantities available and so on, along with a third-party data warehouse providing further decision criteria. Thus, teams can work smarter and move toward better, more integrated business outcomes.

Watch the demo of these use cases, or explore interactive demos in the IBM Digital Self-Serve Co-Create Experience.

Partner success stories

WellnessWits is using watsonx Assistant to create a virtual care solution that connects patients to chronic disease specialists from anywhere. The platform features AI-powered chat functionality that helps patients gather information and answers about their chronic disease, and facilitates personalized, high-quality care from physicians who specialize in their condition.

Ubotica is leveraging IBM Cloud in its CogniSAT platform, enabling developers to deploy AI models to satellites for a wide variety of observational use cases, such as detecting forest fires or space junk. CogniSAT improves the efficiency with which data is stored and processed, providing edge-based analysis onboard satellites.

IBM solution provider Krista Software helped its client Zimperium build a mobile-first security platform using embedded AI solutions. The platform accelerates mobile threat defense response by automating ticket creation, routing and software deployment, reducing a 4-hour process to minutes.

Benefits of building with IBM

ISVs who partner with IBM get more than just functionality. Our team will help you create a solution architecture that helps you embed our AI technology, explore how to monetize your solution set, provide technical resources and even help sell it through our seller network.

IBM Partner Plus, our partner program, provides business partners with a plethora of resources and benefits to help them embed technology. We find the following resonate especially well with partners looking to start their journey of building with IBM: the IBM Digital Self-Serve Co-Create Experience (DSCE), the IBM Ecosystem Engineering Build Lab and the IBM Sales Partner Advocacy Program.

DSCE helps data scientists, application developers and MLOps engineers discover and try IBM’s embeddable AI portfolio across watsonx, IBM Watson libraries, IBM Watson APIs and IBM AI applications. The IBM Ecosystem Engineering Build Lab provides partners with technical resources, experts and support to accelerate co-creation of their solutions with embedded IBM technology. The IBM Sales Partner Advocacy Program is a co-sell benefit that encourages collaboration with IBM sales teams when partners sell solutions with embedded IBM technology to IBM clients.

Explore how your company can partner with IBM to build AI-powered commercial solutions today.

Explore AI-powered commercial solutions with IBM

IBM named a Leader in The Forrester Wave™: Digital Process Automation Software, Q4 2023

Forrester Research has released “The Forrester Wave™: Digital Process Automation Software, Q4 2023: The 15 Providers That Matter Most And How They Stack Up” by Craig Le Clair with Glenn O’Donnell, Renee Taylor-Huot, Lok Sze Sung, Audrey Lynch and Kara Hartig, and IBM is proud to be recognized as a Leader.

IBM named a Leader

In the report, Forrester Research evaluated 15 digital process automation (DPA) providers against 26 criteria in three categories: Current offering, Strategy and Market presence.

IBM received the highest scores among all vendors in the Market presence category and the highest scores in the AI-led process transformation tools and tooling for process automation criteria, and was among the highest scorers in the ability to meet and govern use cases criterion in the Current offering category. In addition, IBM received the highest possible scores in the vision, innovation and partner ecosystem criteria in the Strategy category.

You can download a complimentary copy of the full Forrester Wave™ report to learn more about IBM and other vendors’ offerings. 

Intelligent automation and deep expertise with IBM

IBM has embraced the convergence of AI and business automation, focusing on providing an AI-first framework of intelligent automation in our offerings. Intelligent automation allows customers to leverage digital scale to improve business operations, provide better customer experiences and free employees to do higher-level work.

The Forrester report recognizes IBM Cloud Pak for Business Automation when it comes to AI asset maturity. The report states, “IBM brings together AI assets with automation smarts for deep deployments.” In addition, Forrester says that “IBM has one of the stronger DPA governance solutions in the field.”

The Forrester report also acknowledges IBM’s experience stating, “Look to IBM for sophisticated use cases that require a wide breadth of DPA functionality and deep industry expertise.”

About IBM Cloud Pak for Business Automation

IBM Cloud Pak for Business Automation is a modular set of integrated software components that automates work and accelerates business growth. With this solution, customers can transform fragmented workflows — achieving 97% straight-through processing — to stay competitive, boost efficiency and reduce operational costs.

With IBM Cloud Pak for Business Automation, organizations can simplify complex workflows, build low-code and no-code automations with the help of AI and gain deployment flexibility.

Learn more with IBM

Learn more about how IBM’s intelligent business automation offerings can propel your organization into 2024.

  • Download your free copy of the Forrester Wave™ report
  • Learn more about IBM Cloud Pak for Business Automation
  • Get a 30-day free trial of IBM Cloud Pak for Business Automation

Source: IBM Blockchain

An introduction to Wazi as a Service

In today’s hyper-competitive digital landscape, the rapid development of new digital services is essential for staying ahead of the curve. However, many organizations face significant challenges when it comes to integrating their core systems, including mainframe applications, with modern technologies. This integration is crucial for modernizing core enterprise applications on hybrid cloud platforms. Shockingly, 33% of developers lack the necessary skills or resources, hindering their productivity in delivering products and services, and 36% of developers struggle with collaboration between development and IT operations, leading to inefficiencies in the development pipeline. To compound these issues, repeated surveys highlight testing as the primary area causing delays in project timelines. Companies like State Farm and BNP Paribas are taking steps to standardize development tools and approaches across their platforms to overcome these challenges and drive transformation in their business processes.

How does Wazi as a Service help drive modernization?

One solution that is making waves in this landscape is “Wazi as a Service.” This cloud-native development and testing environment for z/OS applications is revolutionizing the modernization process by enabling secure DevSecOps practices. With flexible consumption-based pricing, it provides on-demand access to z/OS systems, dramatically improving developer productivity by accelerating release cycles on secure, regulated hybrid cloud environments like IBM Cloud Framework for Financial Services (FS Cloud). Shift-left coding practices allow testing to begin as early as the code-writing stage, enhancing software quality. The platform can be automated through a standardized framework validated for Financial Services, leveraging the IBM Cloud Security and Compliance Center service (SCC). Innovating at scale is made possible with IBM Z modernization tools like Wazi Image Builder, Wazi Dev Spaces on OpenShift, CI/CD pipelines, z/OS Connect for APIs, zDIH for data integrations, and IBM Watson for generative AI.

What are the benefits of Wazi as a service on IBM Cloud?

Wazi as a Service operates on IBM LinuxONE, an enterprise-grade Linux server, providing a substantial speed advantage over emulated x86 machine environments. This unique feature makes it 15 times faster, ensuring swift and efficient application development. Furthermore, Wazi bridges the gap between developer experiences on distributed and mainframe platforms, facilitating the development of hybrid applications containing z/OS components. It combines the power of the z-Mod stack with secure DevOps practices, creating a seamless and efficient development process. The service also allows for easy scalability through automation, reducing support and maintenance overhead, and can be securely deployed on IBM FS Cloud, which comes with integrated security and compliance features. This means developers can build and deploy their environments and code with industry-grade regulations in mind, ensuring data security and regulatory compliance.

Additionally, Wazi VSI on VPC infrastructure within IBM FS Cloud establishes an isolated network, fortifying the cloud infrastructure’s perimeter against security threats. Furthermore, IBM Cloud services and ISVs validated for financial services come with robust security and compliance controls, enabling secure integration of on-prem core Mainframe applications with cloud services like API Connect, Event Streams, Code Engine, and HPCS encryptions. This transformation paves the way for centralized core systems to evolve into modernized, distributed solutions, keeping businesses agile and competitive in today’s digital landscape. Overall, Wazi as a Service is a game-changer in accelerating digital transformation while ensuring security, compliance, and seamless integration between legacy and modern technologies.

How does the IBM Cloud Framework for Financial Services help with industry solutions?

The IBM Cloud Framework for Financial Services (also known as IBM FS Cloud) is a robust solution designed specifically to cater to the unique needs of financial institutions, ensuring regulatory compliance, top-notch security and resiliency both during the initial deployment phase and in ongoing operations. This framework simplifies interactions between financial institutions and ecosystem partners that provide software or SaaS applications by establishing a set of requirements that all parties must meet. The key components of this framework include a comprehensive set of control requirements, which encompass security and regulatory compliance obligations, as well as cloud best practices. These best practices involve a shared responsibility model that applies to financial institutions, application providers and IBM Cloud, ensuring that everyone plays a part in maintaining a secure and compliant environment.

Additionally, the IBM Cloud Framework for Financial Services provides detailed control-by-control guidance for implementation and offers supporting evidence to help financial institutions meet the rigorous security and regulatory requirements of the financial industry. To further facilitate compliance, reference architectures are provided to assist in the implementation of control requirements. These architectures can be deployed as infrastructure as code, streamlining the deployment and configuration process. IBM also offers a range of tools and services, such as the IBM Cloud Security and Compliance Center, to empower stakeholders to monitor compliance, address issues, and generate evidence of compliance efficiently. Furthermore, the framework is subject to ongoing governance, ensuring that it remains up-to-date and aligned with new and evolving regulations, as well as the changing needs of banks and public cloud environments. In essence, the IBM Cloud Framework for Financial Services is a comprehensive solution that empowers financial institutions to operate securely and in compliance with industry regulations, while also streamlining their interactions with ecosystem partners.

Get to know Wazi as a Service

Operating on the robust IBM LinuxONE infrastructure, Wazi as a Service bridges the gap between distributed and mainframe platforms, enabling seamless hybrid application development. The platform’s scalability, automation, and compliance features empower developers to navigate the intricate web of regulations and security, paving the way for businesses to thrive in the digital era. With Wazi, businesses can securely integrate on-premises core systems with cutting-edge cloud services, propelling them into the future of modernized, distributed solutions. In summary, Wazi as a Service exemplifies the transformative potential of technology in accelerating digital transformation, underlining its importance in achieving security, compliance, and the harmonious coexistence of legacy and modern technologies.

Get to know Wazi as a Service

Top 6 Kubernetes use cases

Kubernetes, the world’s most popular open-source container orchestration platform, is considered a major milestone in the history of cloud-native technologies. Developed internally at Google and released to the public in 2014, Kubernetes has enabled organizations to move away from traditional IT infrastructure and toward the automation of operational tasks tied to the deployment, scaling and managing of containerized applications (or microservices). While Kubernetes has become the de facto standard for container management, many companies also use the technology for a broader range of use cases.

Overview of Kubernetes

Containers—lightweight units of software that package code and all its dependencies to run in any environment—form the foundation of Kubernetes and are mission-critical for modern microservices, cloud-native software and DevOps workflows.

Docker was the first open-source software tool to popularize building, deploying and managing containerized applications. But Docker lacked an automated “orchestration” tool, which made it time-consuming and complex for data science teams to scale applications. Kubernetes, also referred to as K8s, was specifically created to address these challenges by automating the management of containerized applications.

In broad strokes, the Kubernetes orchestration platform organizes containers into pods, which run on nodes. A pod holds one or more Linux containers and can run in multiples for scaling and failure resistance. Nodes run the pods and are usually grouped in a Kubernetes cluster, abstracting the underlying physical hardware resources.
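As a sketch of those building blocks, a minimal pod manifest might look like the following (the names and image are illustrative; in practice, pods are usually created indirectly through a Deployment, which manages replicas across nodes):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
  labels:
    app: web
spec:
  containers:
    - name: web
      image: nginx:1.25
      ports:
        - containerPort: 80
```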

Kubernetes’s declarative, API-driven infrastructure has helped free DevOps and other teams from manually driven processes so they can work more independently and efficiently to achieve their goals. In 2015, Google donated Kubernetes as a seed technology to the Cloud Native Computing Foundation (CNCF), the open-source, vendor-neutral hub of cloud-native computing.

Read about the history of Kubernetes

Today, Kubernetes is widely used in production to manage Docker and essentially any other type of container runtime. While Docker includes its own orchestration tool, called Docker Swarm, most developers choose Kubernetes container orchestration instead.

As an open-source system, Kubernetes services are supported by all the leading public cloud providers, including IBM, Amazon Web Services (AWS), Microsoft Azure and Google. Kubernetes can also run on bare metal servers and virtual machines (VMs) in private cloud, hybrid cloud and edge settings, provided the host OS is a version of Linux or Windows.

Six top Kubernetes use cases

Here’s a rundown of six top Kubernetes use cases that reveal how Kubernetes is transforming IT infrastructure.

1. Large-scale app deployment

Heavily trafficked websites and cloud computing applications receive millions of user requests each day. A key advantage of using Kubernetes for large-scale cloud app deployment is autoscaling. This process allows applications to adjust to demand changes automatically, with speed, efficiency and minimal downtime. For instance, when demand fluctuates, Kubernetes enables applications to run continuously and respond to changes in web traffic patterns. This helps maintain the right amount of workload resources, without over- or under-provisioning.

Kubernetes employs horizontal pod autoscaling (HPA) to balance load (based on CPU usage or custom metrics) by scaling the number of pod replicas (clones that facilitate self-healing) related to a specific deployment. This mitigates potential issues like traffic surges, hardware problems or network disruptions.

Note: HPA is not to be confused with Kubernetes vertical pod autoscaling (VPA), which assigns additional resources, such as memory or CPU, to the pods that are already running for the workload.
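The scaling rule HPA applies can be sketched in a few lines of Python. It follows the documented formula desiredReplicas = ceil(currentReplicas × currentMetricValue / targetMetricValue); the function and parameter names here are illustrative, not Kubernetes API calls.

```python
import math

def desired_replicas(current_replicas: int,
                     current_metric: float,
                     target_metric: float,
                     min_replicas: int = 1,
                     max_replicas: int = 10) -> int:
    """Sketch of the HPA scaling rule: scale the replica count by the
    ratio of the observed metric (e.g., average CPU) to the target
    value, then clamp to the configured replica bounds."""
    ratio = current_metric / target_metric
    desired = math.ceil(current_replicas * ratio)
    return max(min_replicas, min(desired, max_replicas))

# Traffic surge: CPU at 90% against a 50% target grows 4 pods to 8.
print(desired_replicas(4, 90.0, 50.0))   # 8
# Quiet period: CPU at 20% against a 50% target shrinks 4 pods to 2.
print(desired_replicas(4, 20.0, 50.0))   # 2
```

The clamp at the end mirrors the `minReplicas`/`maxReplicas` bounds you set on an HPA object, which is what keeps a sudden surge from provisioning without limit.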

2. High-performance computing

Industries including government, science, finance and engineering rely heavily on high-performance computing (HPC), the technology that processes big data to perform complex calculations. HPC uses powerful processors at extremely high speeds to make instantaneous data-driven decisions. Real-world uses of HPC include automating stock trading, weather prediction, DNA sequencing and aircraft flight simulation.

HPC-heavy industries use Kubernetes to manage the distribution of HPC calculations across hybrid and multicloud environments. Kubernetes can also serve as a flexible tool to support the batch job processing involved in high-performance computing workloads, which enhances data and code portability.

3. AI and machine learning

Building and deploying artificial intelligence (AI) and machine learning (ML) systems requires huge volumes of data and complex processes like high-performance computing and big data analysis. Deploying machine learning on Kubernetes makes it easier for organizations to automate the management and scaling of ML lifecycles and reduces the need for manual intervention.

For example, the Kubernetes containerized orchestration platform can automate portions of AI and ML predictive maintenance workflows, including health checks and resource planning. And Kubernetes can scale ML workloads up or down to meet user demands, adjust resource usage and control costs.

Machine learning relies on large language models to perform high-level natural language processing (NLP) tasks like text classification, sentiment analysis and machine translation, and Kubernetes helps speed the deployment of large language models and automate the NLP process. As more and more organizations turn to generative AI capabilities, they are using Kubernetes to run and scale generative AI models, providing high availability and fault tolerance.

Overall, Kubernetes provides the flexibility, portability and scalability needed to train, test, schedule and deploy ML and generative AI models.

4. Microservices management

Microservices (or microservices architecture) offer a modern cloud-native architecture approach in which each application comprises numerous loosely coupled and independently deployable smaller components, or services. For instance, a large retail e-commerce website consists of many microservices, typically including an order service, payment service, shipping service and customer service. Each service has its own REST API, which the other services use to communicate with it.

Kubernetes was designed to handle the complexity involved in managing all the independent components running simultaneously within a microservices architecture. For instance, Kubernetes’ built-in high availability (HA) feature ensures continuous operations even in the event of failure. And the Kubernetes self-healing feature kicks in if a containerized app or an application component goes down: it can instantly redeploy the app or component to match the desired state, which helps maintain uptime and reliability.

5. Hybrid and multicloud deployments

Kubernetes is built to be used anywhere, making it easier for organizations to migrate applications from on-premises to hybrid cloud and multicloud environments. Kubernetes standardizes migration by providing software developers with built-in commands for effective app deployment. Kubernetes can also roll out changes to apps and scale them up and down depending on environment requirements.

Kubernetes offers portability across on-premises and cloud environments since it abstracts away infrastructure details from applications. This eliminates the need for platform-specific app dependencies and makes it easy to move applications between different cloud providers or data centers with minimal effort.

6. Enterprise DevOps

For enterprise DevOps teams, being able to update and deploy applications rapidly is critical for business success. Kubernetes supports teams in both software system development and maintenance, improving overall agility. And the Kubernetes API interface allows software developers and other DevOps stakeholders to easily view, access, deploy, update and optimize their container ecosystems.

CI/CD—which stands for continuous integration (CI) and continuous delivery (CD)—has become a key aspect of software development. In DevOps, CI/CD streamlines application coding, testing and deployment by giving teams a single repository for storing work and automation tools to consistently combine and test the code and ensure it works. Kubernetes plays an important role in cloud-native CI/CD pipelines by automating container deployment across cloud infrastructure environments and ensuring efficient use of resources.

The future of Kubernetes

Kubernetes plays a critical IT infrastructure role, as can be seen in its many value-driven use cases that go beyond container orchestration. This is why so many businesses continue to implement Kubernetes. In the 2021 Cloud Native Survey conducted by the CNCF, Kubernetes usage reached its highest point ever, with 96% of organizations using or evaluating the platform. According to the same study, Kubernetes usage continues to rise in emerging technology regions, such as Africa, where 73% of survey respondents are using Kubernetes in production.

IBM and Kubernetes

Kubernetes schedules and automates tasks integral to managing container-based architectures, spanning container deployment, updates, service discovery, storage provisioning, load balancing, health monitoring and more. At IBM we are helping clients modernize their applications and optimize their IT infrastructure with Kubernetes and other cloud-native solutions.

Deploy secure, highly available clusters in a native Kubernetes experience with IBM Cloud® Kubernetes Service.

Explore IBM Cloud Kubernetes Service

Containerize and deploy Kubernetes clusters for containerized platforms using Red Hat® OpenShift® on IBM Cloud.

Explore Red Hat OpenShift on IBM Cloud

How to implement enterprise resource planning (ERP)

Once your business has decided to switch to an enterprise resource planning (ERP) software system, the next step is to implement ERP. For a business to see the benefits of ERP adoption, the system must first be deployed properly and efficiently by a team that typically includes a project manager and department managers.

This process can be complicated and feel overwhelming, depending on the needs of your organization. However, once the new software is implemented successfully, your organization should see the productivity gains and cost-savings benefits an ERP system can bring. The switch to an ERP system can streamline business operations and benefit both end users and the organization as a whole.

Steps to implement ERP 

Below is a breakdown of a step-by-step ERP implementation plan. We’ll start by going through what organizations should do prior to choosing an ERP system and then dive into best practices for implementation success.

1. Discover and plan to implement ERP

Before the ERP implementation process can begin, an organization must assess how its current systems are functioning. This is the first step toward a successful enterprise resource planning integration and must be completed prior to choosing an ERP system.

In the first step of this implementation methodology, an organization must review its current system and processes to get a full picture of how the business is working and where there might be pitfalls. An ERP implementation project team should also be established at this stage for decision-making purposes. Areas to assess can include finance, manufacturing, inventory, sales and more. This step is also important for understanding gaps and current issues, such as process inefficiencies and potential requirements for the ERP system.

Once the organization’s current system, workflows and everyday functions have been assessed, it’s time to select the right ERP system that meets your business requirements, such as budget forecasting and pricing. ERP software can be acquired in this first step if the requirements have been well defined. These requirements will depend on whether the organization will run its ERP system on premises or in the cloud.

A change to a modern ERP system can be very straightforward if there is a clear roadmap and project plan for your ERP deployment. A clear and honest conversation with employees will ensure organizational buy-in.

Questions to ask as you define the scope of your organization’s needs:

  • What business functions will be automated by the ERP software?
  • What are the ERP system’s specific data requirements, and is it compatible with your existing data?
  • Which key performance indicators (KPIs) need to be tracked?
  • Is the software scalable and flexible enough to evolve with the organization’s needs?
  • What is the timeframe for implementation and deployment?

2. Create a design and prepare to implement

At this point you’ve chosen the ERP system for your business. The next step is the design phase. This is the step to configure the ERP software solution so it fits your organization’s specific needs.

A new design requires change management to create more efficient workflows, along with potentially new business processes that are a better fit with the soon-to-be-implemented ERP system. It’s important to have a team within the organization dedicated to this design step and to determining an appropriate plan.

Steps to configure the ERP system:

  • Create an organizational structure by defining all the necessary aspects of your business, such as the chart of accounts, cost centers and business units.
  • Customize your ERP software so that it aligns with the existing workflows in place and set up the modules you think your organization will need, such as customer relationship management (CRM), human resources and supply chain management.
  • Set the parameters for user roles and permissions so that you can control everyone’s access across the system and make sure data controls are put in place.
  • Integrate ERP software with other existing systems within your organization like accounting software, inventory management and e-commerce platforms if they apply.
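The "user roles and permissions" step above can be sketched as a minimal role-based access check. The role and module names here are hypothetical examples, not part of any particular ERP product.

```python
# Hypothetical role-to-module map for an ERP permission model:
# each role is granted a set of modules it may access.
ROLE_PERMISSIONS = {
    "finance_manager": {"accounting", "budgeting", "reporting"},
    "warehouse_staff": {"inventory"},
    "hr_admin": {"human_resources", "payroll"},
}

def can_access(role: str, module: str) -> bool:
    """Return True if the given role is allowed to use the module.
    Unknown roles get no access by default (deny-by-default)."""
    return module in ROLE_PERMISSIONS.get(role, set())

print(can_access("finance_manager", "reporting"))   # True
print(can_access("warehouse_staff", "payroll"))     # False
```

Deny-by-default is the important design choice here: a role not present in the map gets no permissions rather than all of them, which is the data-control posture the checklist item describes.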

3. Migrate data and develop

Once the design requirements have been established, the development phase can begin. This involves customizing the software so that the redesign can occur. The development, or preparation, stage is vital and can be a daunting task, but if done properly, it can help your ERP system function well for the long term.

With the redesign established, it’s time to clean and format the current system’s data so that it’s compatible with the new system. In this step, an organization will need to assess and prepare all existing data in a format that fits the new ERP software. Once the data is loaded into the new system and formatted correctly, your first ERP test can be performed. In this step you should also monitor and note the key metrics of your business operation, including any disruptions.

Ways to plan and prepare your data for migration:

  • Complete a data audit of all existing legacy systems and applications to have a clear picture going into the data migration.
  • Categorize the types of data you need to migrate and identify any redundancy by combing through the data and cleaning for accuracy.
  • Define what data transfer method you want to use and test it to be sure it is the right migration process.
  • Make a backup plan and a recovery plan in case errors occur or data is lost.
  • Create a data governance policy and put protocols in place.
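The audit, categorize and clean steps above can be sketched in Python. The field names and validation rules are illustrative assumptions, not part of any specific ERP migration tool.

```python
def clean_records(records):
    """Sketch of the clean-and-deduplicate step: normalize fields,
    drop exact duplicates (keyed on a hypothetical record ID), and
    route rows missing required fields to manual review instead of
    migrating them."""
    seen = set()
    clean, rejected = [], []
    for rec in records:
        normalized = {
            "id": str(rec.get("id", "")).strip(),
            "email": str(rec.get("email", "")).strip().lower(),
        }
        if not normalized["id"] or not normalized["email"]:
            rejected.append(rec)        # incomplete: hold for review
        elif normalized["id"] in seen:
            continue                    # redundant: already captured
        else:
            seen.add(normalized["id"])
            clean.append(normalized)
    return clean, rejected

raw = [
    {"id": "101", "email": "A@Example.com "},
    {"id": "101", "email": "a@example.com"},   # duplicate ID
    {"id": "", "email": "b@example.com"},      # missing required field
]
clean, rejected = clean_records(raw)
print(len(clean), len(rejected))   # 1 1
```

Keeping rejected rows in a separate bucket, rather than silently dropping them, is what makes the later backup-and-recovery and governance checklist items workable.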

4. Test the ERP system

All the preparatory steps have been taken, and now it’s time to do some system testing before you go live. In this stage, development might still occur, and that is normal. One ERP module may be tested, and fixes or adjustments made, while other modules are being tested simultaneously. Team members should be put through user training, and key stakeholders should be involved in this testing process as well.

It is vital to test the entire system and ensure it’s functioning properly and processing data accurately. This is the most important phase because it ensures all system applications and processes are running as they should before the ERP software officially goes live.

Things to remember during the testing phase:

  • Keep track of user acceptance criteria and document the information.
  • Test the system for functionality from start to finish and validate all migrated data for accuracy purposes.
  • Check for user accessibility by conducting reviews and gathering feedback.
  • Conduct all necessary tests prior to deploying the ERP software, such as testing automation processes, workflows and system security.
  • Ensure the ERP system is compatible with the other existing systems and applications in place.
  • Make sure all employees are trained on the system; consider implementing ERP software in stages before going company-wide.

5. Deliver a successfully implemented ERP system

If the steps above have all been taken, then great news, your organization is ready to launch its new ERP system. Assuming all employees have been properly trained on the software, it’s now time to roll out the new ERP solution.

The project team that started the implementation process should be at the ready in case employees are confused or other issues arise. Be prepared for problems and have contingency plans in place in case of a serious malfunction. All ERP modules can be deployed concurrently, but deployment can also be done in stages. Some organizations might choose to prioritize certain modules and add others as they go, which is completely normal.

No two organizations are alike, and ERP deployments can differ greatly, but remember to make the ERP solution accessible to all employees and to make sure automated processes are activated.

What to look for once implementation occurs:

  • Is the data accurate, and are processes functioning properly?
  • Do all users have real-time accessibility without issue?
  • Are security protocols in place and functioning?
  • Is the workflow in place and processing as it should?

6. Manage your ERP solution

Now, assuming the implementation is complete, it’s important to create a protocol for ongoing maintenance for your ERP system. Your organization should be performing regular maintenance checks and upgrading software periodically. Creating a team or having a professional in place to maintain the health of your ERP system is key to the longevity of the solution.

The ERP vendor you select should be available for any questions and ongoing maintenance or updates needed. Best practices for this implementation process should include a well-managed team and strong communication between the organization, its employees and key stakeholders to ensure the ERP solution is working effectively and efficiently.

Best practices for managing your new ERP system:

  • Listen to user and client feedback often.
  • On-premises ERP systems will require periodic software updates and sometimes hardware updates as well, while cloud-based ERP will update automatically.
  • Create standard operating procedures (SOP) to ensure common issues can be addressed quickly.

Implement ERP solutions with IBM

IBM Consulting is the driving force behind your business transformation journey. We offer business consulting with expert advice and are all about working openly and bringing together different perspectives, experiences and essential AI and hybrid cloud technology to meet your business goals.

IBM offers a range of ERP solutions for your business, including consulting services for SAP on IBM Cloud, Microsoft Azure and AWS Cloud. Our SAP experts create custom roadmaps to lower costs and improve results. With these solutions and more, IBM Consulting experts can help you successfully migrate legacy ERP applications to the cloud, redesign processes to leverage data, AI and automation for your business, and transform finance into a competitive advantage.

Elevate your ERP with SAP consulting services

How the semiconductor industry is leveraging high-performance computing to drive innovation

Semiconductors act as the secret powerhouse behind various industries, from healthcare to manufacturing to financial services. In the last few years alone, we’ve seen how essential semiconductors can be and why companies need to develop this technology rapidly to maximize productivity. As semiconductor manufacturers strive to keep up with customer expectations, electronic design automation (EDA) tools are the keys to unlocking the solution.  

However, to truly drive innovation at scale, EDA leaders need massive computing power. As the need to manage compute-intensive workloads with high levels of resiliency and performance grows, now is the time to turn to the cloud for high-performance computing (HPC).

By taking advantage of solutions like IBM Cloud® HPC, organizations can more effectively manage their peak workloads while mitigating the risk of downtime. In the coming years, we expect having high levels of compute power will become even more crucial as more organizations turn to generative artificial intelligence (AI) and large language models (LLM) to enhance productivity across the EDA space. This is where hybrid cloud HPC solutions can be especially valuable.

Cadence leverages IBM Cloud HPC

Cadence is a global leader in EDA. With over 30 years of computational software experience, Cadence continues to help companies design innovative electronic products that drive today’s emerging technology, including chips, boards and systems for dynamic market applications like hyperscale computing, 5G communications, automotive, mobile, aerospace, consumer, industrial and healthcare.

With over 10,000 engineers and millions of jobs being run every month, Cadence requires a significant amount of compute resources. Coupled with the growing demand for more chips and the company’s incorporation of AI and machine learning into its EDA processes, its need for compute power is at an all-time high. Organizations in the EDA industry like Cadence need solutions that enable workloads to seamlessly shift between on-premises and the cloud, while also allowing for differentiation from project to project. A hybrid cloud approach delivers the agility, flexibility and security required to meet these demands.

The role of a hybrid cloud solution for HPC

Cadence started its public cloud journey in 2016 and now operates with a hybrid, multicloud approach, which includes IBM. Using IBM Cloud® HPC to flexibly manage its compute-intensive workloads on-premises and in the cloud with high levels of resiliency and performance, the company can develop its chip and system design software faster and at scale.

As Cadence continues to drive computational software innovation, continuous operations are critical to optimizing operations across its business unit teams that are responsible for delivering chip and system design software to customers at a rapid pace. With the combined power of IBM Cloud as part of its multicloud environment and IBM LSF® as the HPC workload scheduler, Cadence has been able to achieve high-compute utilization, optimize its cloud budget, and streamline computational workloads. Cadence has also reported it is able to perform more regressions and, as a result, can support more predictable and faster time to value. As enterprises like Cadence aim to stay ahead of market trends, IBM Cloud HPC helps overcome large-scale, compute-intensive challenges and speed time to insight, which ultimately benefits the enablement of strategic R&D work.

Get started with IBM Cloud HPC

As enterprises look to solve their most complex challenges, IBM will continue to deliver clients an integrated solution across critical components of compute, network, storage and security—all while aiming to help them address regulatory and efficiency demands. IBM Cloud HPC also includes security and controls built into the platform to help clients across industries consume HPC as a managed service while helping them address third- and fourth-party risks.

With the combination of our suite of tools for workload management and scheduling—including IBM LSF and IBM Symphony® together with IBM Cloud HPC in a hybrid cloud environment—we aim to help our clients take advantage of automation that helps to optimize HPC jobs. This enables them to realize faster time to value, enhance performance and minimize cost—all critical capabilities for industries that move at a rapid pace like the semiconductor space.

Additionally, the IBM Storage Scale storage solution complements IBM Cloud HPC further, offering a single, globally integrated solution designed to help users manage and move data across hybrid environments, in a cost-effective manner, to implement HPC workloads in real-time.

Are you going to Supercomputing 23 in Denver? Join us at IBM Booth #1925 to learn more about how IBM Cloud HPC can scale your business with speed and security.

Explore IBM Cloud HPC

How to automate certificate renewal in IBM Cloud Code Engine

This blog focuses on integrating IBM Cloud Code Engine, IBM Cloud Event Notifications and IBM Cloud Secrets Manager to automate the certificate renewal process for applications in your Code Engine project. We will build a simple app using IBM Cloud Code Engine that updates the secrets in a Code Engine project.

The services we will be using are:

  1. IBM Cloud Code Engine
  2. IBM Cloud Event Notifications
  3. IBM Cloud Secrets Manager

Prior knowledge of these services is not required, although some familiarity helps. Just follow the instructions and you will be able to build this sample application. All the code is provided in the GitHub repository. Before we continue, here is a brief overview of each service.

What is IBM Cloud Code Engine?

IBM Cloud Code Engine is a fully managed, serverless platform that runs your containerized workloads, including web apps, microservices, event-driven functions, and batch jobs with run-to-completion characteristics. The Code Engine experience is designed so that you can focus on writing code and not on the infrastructure that is needed to host it.

What is IBM Cloud Event Notifications?

IBM Cloud Event Notifications is a routing service that notifies you about critical events that occur in your IBM Cloud account. You can filter and route event notifications from IBM Cloud services like IBM Cloud Monitoring, Security and Compliance Center, Secrets Manager, IBM Cloud Projects and Toolchain to communication channels like email, webhooks, Slack and IBM Cloud Code Engine.

What is IBM Cloud Secrets Manager?

IBM Cloud Secrets Manager is a service where you can create, lease, and centrally manage secrets that are used in IBM Cloud services or your custom-built applications. Secrets are stored in a dedicated Secrets Manager instance, built on open source.

Embarking on a journey with apps and certificates

Let’s say you have a Code Engine application that has its own secret—a TLS certificate and private key. Generally, you would keep these secrets in something like a vault that manages them. Assume that you store this secret in Secrets Manager. You also store the same secret in the Code Engine project where the app resides. So far, so good: your app can use this secret and is functional.

However, secrets expire after a certain time period and therefore need to be renewed. Everything works fine until the secret expires; then the app that uses it is disrupted, affecting your customers.

If you know Secrets Manager, you may be familiar with its ability to automatically rotate secrets when they expire. Let’s say you rotate the secrets in Secrets Manager. What about your Code Engine project? The secrets won’t be updated there unless you do it manually. So suppose you build another Code Engine application that retrieves the secrets from Secrets Manager and updates them in the project.

So far so good, but one problem remains: How will your app know when to update the secret? It needs some way to be notified when the secrets are rotated in Secrets Manager. This is where Event Notifications comes in: it can send a notification to your app whenever a secret is rotated in Secrets Manager. When the app is notified, it can then perform the update.

That is exactly what we will do: use these services together to automate the secret renewal process. You no longer have to update secrets manually, and your applications are not disrupted by expired certificates.

Let’s dive right in

Clone the repository and hop into the “app-n-event-notification” directory. Create an API key in your IBM Cloud account and insert it into the script. Then log in to IBM Cloud, select the Code Engine project you want to work on, and execute the run script. Here is what happens after execution.

The run script will:

  1. Create Secrets Manager and Event Notifications instances
  2. Create a secret in Secrets Manager
  3. Build a Code Engine app (code is already provided)
  4. Create the same secret in the Code Engine project
  5. Create the necessary sources, topics, destinations, etc., in Event Notifications
  6. Bind all these components together
  7. Rotate the secrets in Secrets Manager
  8. Check the app’s logs to verify that the secret was updated in the Code Engine project

Delving deeper: Unraveling the process

Here is an architecture diagram to help you visualize the components we are working with.

When you execute the run script in the samples, it creates a lite-plan Event Notifications instance and Secrets Manager instance in your IBM Cloud account. We create custom certificates using openssl commands and store them in a temporary directory. A secret is created in Secrets Manager and populated with this certificate and key. The necessary components (topics, sources, destinations and subscriptions) are created in the Event Notifications instance. A Code Engine application is built from local source code, and a Code Engine secret is also created containing the same secret (certificate and key). Both the app and the secret reside in the selected project. Finally, we rotate the secret in Secrets Manager with a new certificate.

When the secret is rotated, Secrets Manager acts as a source and sends a JSON notification payload to an Event Notifications topic. The topic has a filter configured to extract the notification data and check whether that particular certificate was rotated; only then does the notification pass through the topic. A destination is created with the app URL, and a subscription connects the topic to the destination. When a notification reaches the topic, Event Notifications invokes the Code Engine application by sending it a POST request whose body is the notification payload. The app is configured to retrieve the secret from Secrets Manager and then update the Code Engine secret with the retrieved value.
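The app’s filter-and-update logic can be sketched as plain Python. The payload fields and helper callbacks below are illustrative assumptions, not the exact Event Notifications schema or the Code Engine SDK.

```python
import json

WATCHED_SECRET_ID = "my-app-tls-cert"   # hypothetical secret ID

def handle_notification(body: str, fetch_secret, update_ce_secret) -> bool:
    """Sketch of the app's handler: parse the POST body from Event
    Notifications, act only if the rotated secret is the one we watch,
    then copy the new value from Secrets Manager into the Code Engine
    secret. `fetch_secret` and `update_ce_secret` stand in for real
    SDK/API calls. Returns True if an update was performed."""
    event = json.loads(body)
    # Illustrative payload fields; the real schema may differ.
    if event.get("event_type") != "secret_rotated":
        return False
    if event.get("secret_id") != WATCHED_SECRET_ID:
        return False
    new_value = fetch_secret(event["secret_id"])
    update_ce_secret(event["secret_id"], new_value)
    return True

# Simulated run with stub callbacks standing in for the two services:
updated = {}
payload = json.dumps({"event_type": "secret_rotated",
                      "secret_id": "my-app-tls-cert"})
handle_notification(payload,
                    fetch_secret=lambda sid: "-----BEGIN CERTIFICATE-----",
                    update_ce_secret=updated.__setitem__)
print("my-app-tls-cert" in updated)   # True
```

Note the early returns: events for other secrets or other event types are ignored, mirroring the filtering the topic performs upstream.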

A word of caution

As we have seen, Event Notifications invokes our application by sending it a POST request with the notification. There is one caveat: Event Notifications has a response timeout of 60 seconds. To learn more, check the retry policy documentation.

Simply put, the app must scale up and process the request (that is, retrieve the secret from Secrets Manager and update it in the project) within 60 seconds. If you need to run a longer workload, you can use a Code Engine job instead. Refer to the Code Engine jobs documentation to learn more.


In this post, we built an automation tool for certificate renewal. If your certificates come from third-party vendors, refer to the documentation on connecting third-party certificate authorities to Secrets Manager.

Learn more about IBM Cloud Code Engine

Building on a year of focus to help IBM Power clients grow with hybrid cloud and AI

At the beginning of the year, we laid out a new strategy for IBM Power under the leadership of Ken King, who will be retiring by the end of 2023 after forty years with IBM. It is with immense gratitude that I thank Ken for his leadership not only across IBM Power, but for his service to IBM in various roles spanning IP, strategy and software during his distinguished IBM career.

I am excited to announce, therefore, that a few months ago I took on the role of IBM Power general manager. As Ken passes the baton, I want to take stock of the progress we’ve made — and point to where we are prioritizing — across four critical areas to help address our clients’ digital transformation imperatives:

  • Continuing to innovate key capabilities for core business workloads by strategically investing in three operating environments on IBM Power: AIX, IBM i and Linux
  • Driving growth with SAP HANA on Power on-premises and in the cloud
  • Supporting clients’ banking and industry modernization journey
  • Providing greater flexibility with subscription services and Power as a Service

Our value proposition is the ability to combine hybrid cloud and AI with clients’ trusted data on IBM Power to fuel business outcomes. Let’s dig in to some specifics.

Down to the core

We’ve continued to innovate and invest in operating environments on IBM Power to help ensure business continuity, reliability, availability, serviceability and security for clients. In the latest IBM i Technology Refresh — IBM i 7.5 TR3 and 7.4 TR9 — announced in October, we listened to feedback from our IBM i Advisory Councils and prioritized advancements in ease of use, productivity, and automation with enhancements to Navigator for i and new additions to SYSTOOLS for automating Db2 for i. 

We also have a new release of AIX — AIX 7.3 TL2 — building on Power10’s high availability leadership with performance and scale enhancements to Live Kernel Update (designed to give the ability to update AIX without unplanned downtime), optimized file system performance and enhancements designed to improve AIX encryption performance and audit event checking. You can learn more about this latest release on the AIX webcast on November 14.

We are expanding IBM Db2 Warehouse on Power with a new Base Rack Express at a 30% lower entry list price, adding to today’s S, M and L configurations, while still providing the same total-solution experience, including Db2 Warehouse’s connectivity to unlock the potential of data for analytics and AI.

Oracle will be releasing Oracle Database 23c on Power, as part of their next Long Term Release as reported in April 2023. Separately, in 2024, clients will be able to look forward to continued enhancements to the AIX, IBM i and Linux roadmaps.

Accelerating business transformation with SAP HANA on Power 

As the 2027 end of mainstream maintenance for SAP’s legacy ERP is approaching, our customers are all in different stages of their business transformation journey. SAP is accelerating this journey by offering the current ERP, S/4HANA, as a managed service offering with SAP RISE. IBM Power is supporting our customers in their business transformation journey by offering customer infrastructure solutions designed to meet customers where they are. Whether they need Power10 systems on-premises to upgrade their SAP landscapes, Power Virtual Server capacity to accelerate migration to S/4HANA, or SAP RISE in IBM Cloud on Power, we are providing solutions on Power infrastructure.

In addition, IBM is offering a hybrid cloud consumption model that will allow flexibility for both on-premises and cloud expenditures. Initially, this program will allow clients to leverage their investment in on-premises hardware and, with a commitment to IBM Power Virtual Server, receive cloud capacity credits for IBM Power Private Cloud.

With this hybrid cloud consumption program, clients can leverage the benefits of cloud while also nurturing their on-premises SAP on Power environments as they build out their long-term hybrid cloud strategy.

To continue our momentum on AI with SAP, we will deliver the first release of the SAP ABAP SDK for watsonx in 1Q24, as announced in September. It is intended to simplify and accelerate customers’ ability to consume watsonx services from their custom ABAP environments.

Driving industry modernization

Whether clients need to deploy large language models (LLMs), integrated with watsonx, close to their data and transactions, or integrate mission-critical data into their data fabric architecture, Power10’s powerful core can help embed AI-driven insights into business processes and safeguard AI workflows.

For instance, a Thai hospital chain faced a challenge with its pathology process: a prolonged workflow delayed diagnoses and patient management and limited how many patients it could support. By deploying an AI inference solution for both speech-to-text and image analysis on Power10, the pathology unit was able to increase sensitivity in detecting lesions and prioritize higher-probability cases. These are important steps toward its mission of better clinical outcomes, faster time to treatment for patients, and reduced pathologist workloads.

Later this month, clients will be able to take advantage of expanded data science services with the release of IBM Cloud Pak for Data V4.8, which will deliver the underpinnings for IBM’s next-generation AI studio. To further help our clients on their AI journeys, we continue to double down on hybrid cloud with Red Hat so that workloads can run in a best-fit environment. To that end:

  • Red Hat OpenShift 4.14 has just been released and is available to run natively on IBM Power, providing support for multi-architecture compute (MAC) worker nodes across Power, IBM Z, ARM, and x86 environments.
  • Red Hat Ansible Automation Platform components now run natively on IBM Power. Clients can consolidate their environments and run Ansible Automation Platform on the same Power servers where their business-critical workloads are already running, instead of having to run Ansible automation hub and automation controller on separate x86 processor-based servers to manage Power endpoints. Read more here.
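As a sketch of what running Ansible Automation Platform natively on Power enables, a minimal playbook run from a controller on the same Power estate might look like the following. This uses only the built-in `ansible.builtin.ping` module, and the inventory group name `power_lpars` is a hypothetical placeholder, not something defined by IBM or Red Hat:

```yaml
# Minimal connectivity check against AIX or IBM i LPAR endpoints,
# run from an Ansible automation controller on IBM Power.
# "power_lpars" is a hypothetical inventory group name.
- name: Verify Power endpoints are reachable
  hosts: power_lpars
  gather_facts: false
  tasks:
    - name: Confirm SSH connectivity and a usable Python interpreter
      ansible.builtin.ping:
```

In practice, clients would layer platform-specific collections and their own automation content on top of a baseline like this.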

A vibrant ecosystem enables a range of use cases for our clients running software on Power. Finacle is a leading digital banking suite from Infosys, and with Finacle solutions on Red Hat OpenShift on IBM Power, IBM and Infosys are expanding a collaboration of more than 20 years. I’m happy to share that clients will soon be able to leverage solutions from the Finacle Digital Banking Suite using Red Hat OpenShift on IBM Power to meet evolving customer demands, regulatory requirements, and market dynamics.

Power as a service

To meet client demand, we’re focusing throughout the year on transforming the proof-of-concept (PoC) experience for IBM Power Virtual Server. We’re simplifying the process, making network configuration easier, adding Power Edge Routers, and implementing a step-by-step automated modernization approach for IBM i, AIX and Linux that’s designed to be as straightforward as an on-premises migration from Power9 to Power10.

We’re also moving away from a “Do It Yourself” (DIY) model for High Availability/Disaster Recovery (HA/DR) solutions to a prescriptive and automated one. The goal is to provide clients with a clear path forward for business continuity, ensuring a smoother and more efficient process.

For more on our fourth quarter plans to meet clients’ expectations for running production workloads effectively on IBM Power Virtual Server, read here.

IBM Power backed by IBM Expert Care

We’re also making strides in our service offerings for IBM Power. IBM Power10 can be sold together with IBM Power Expert Care, a tiered support model that makes it easier for clients to choose the right level of support for their needs and budget at the time of sale. Earlier this year, IBM adjusted the IBM Power E1080 Expert Care Premium tier to align to client expectations for proactive support. IBM Power Expert Care Remote Support and Parts is also now available in many countries with no physical IBM presence.

Additionally, all IBM Power support contracts come with access to IBM Support Insights, which provides clients with actionable insights for multivendor IT infrastructures to proactively assess and remediate IT risks. The IBM Support Insights Pro subscription, announced on September 12, is designed to expand and strengthen the scope of security risk coverage to include community open source, provide prioritized actions by vendor and product family to speed IT lifecycle decision-making, and further address reliability with an extended case history and analysis to better learn from previous support issues.

What’s next for IBM Power

We’ve listened to our community and advisory councils, and we’re dedicated to creating solutions with partners and clients so we can continue striving to provide the most trusted and open computing platform for mission-critical, scalable transaction processing and data serving workloads. Our goals include making it easier for clients to run AI workloads closer to their data with on-chip AI acceleration; improving total cost of ownership and performance; increasing availability, with up to 8x9s (99.999999%) for mission-critical workloads and fewer outages compared to x86 servers; and enhancing security and sustainability features.
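To put the “8x9s” availability goal in concrete terms, the arithmetic below (a simple illustration, assuming a 365.25-day year; the constants and function names are ours, not IBM’s) converts an availability percentage into expected downtime per year:

```python
# Convert an availability fraction into expected downtime per year,
# assuming a 365.25-day year (~31.56 million seconds).
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def downtime_seconds_per_year(availability: float) -> float:
    """Return the expected downtime in seconds per year for a given
    availability expressed as a fraction (e.g. 0.99999 for five nines)."""
    return (1.0 - availability) * SECONDS_PER_YEAR

# "Five nines" (99.999%) allows roughly 5.3 minutes of downtime a year;
# "eight nines" (99.999999%) allows well under half a second.
print(f"five nines:  {downtime_seconds_per_year(0.99999):.1f} s/year")
print(f"eight nines: {downtime_seconds_per_year(0.99999999):.2f} s/year")
```

The thousandfold gap between five and eight nines is why availability at this level depends on the platform, not just on operational practice.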

I’m extremely excited for the road ahead. We’ll continue to meet our clients where they are in their digital journey and strive to make the path to success as simple as possible, whether it’s by making more aaS options available, increasing pathways for workloads to move across hybrid environments, or helping to extract even more value from SAP workloads on Power.

Reach out to your IBM Power representative or Business Partner to discuss how we can keep making progress together.

Book a meeting with our team of experts

Statements regarding IBM’s future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only.

Source: IBM Blockchain