Application modernization overview

Application modernization is the process of updating legacy applications with modern technologies, enhancing performance and adaptability to evolving business needs by infusing cloud-native principles such as DevOps and Infrastructure-as-Code (IaC). Application modernization starts with an assessment of the current legacy applications, data and infrastructure, followed by applying the right modernization strategy (rehost, re-platform, refactor or rebuild) to achieve the desired result.

While a rebuild delivers the maximum benefit, it requires a high degree of investment; a rehost, by contrast, moves applications and data to the cloud as-is without optimization, requiring less investment but delivering less value. Modernized applications are deployed, monitored and maintained, with ongoing iterations to keep pace with technology and business advancements. Typical benefits range from increased agility and cost-effectiveness to competitiveness, while challenges include complexity and resource demands. Many enterprises are realizing that moving to the cloud does not deliver the desired value or agility beyond basic platform-level automation. The real problem lies in how IT is organized, which is reflected in how their current applications and services are built and managed (see Conway's law). This, in turn, leads to the following challenges:

  • Duplicative or overlapping capabilities offered by multiple IT systems/components create sticky dependencies and proliferations, which impact productivity and speed to market.
  • Duplicative capabilities across applications and channels give rise to duplicative IT resources (e.g., skills and infrastructure).
  • Duplicative capabilities (including data), resulting in duplicated business rules and the like, give rise to inconsistent customer experiences.
  • Lack of alignment of IT capabilities to business capabilities impacts time to market and business-IT alignment. In addition, enterprises end up building several band-aids and architectural layers to support new business initiatives and innovations.

Hence, application modernization initiatives need to focus more on value to the business, and this involves a significant element of transforming applications into components and services aligned to business capabilities. The biggest challenge is the amount of investment needed: many CIOs and CTOs are hesitant to invest due to the cost and timelines involved in realizing value. Many are addressing this by building accelerators that can be customized for enterprise consumption and that speed up specific areas of modernization; one such example from IBM is IBM Consulting Cloud Accelerators. While attempting to drive acceleration and optimize the cost of modernization, Generative AI is becoming a critical enabler of change in how we accelerate modernization programs. We will explore key areas of acceleration with examples in this article.

A simplified lifecycle of application modernization programs (not meant to be exhaustive) is depicted below. Discovery focuses on understanding the legacy applications, infrastructure and data, the interactions between applications, services and data, and other aspects such as security. Planning breaks the complex portfolio of applications down into iterations to establish an iterative modernization roadmap, along with an execution plan to implement it.

Blueprint/Design phase activities change based on the modernization strategy, ranging from decomposing the application using domain-driven design to establishing a target architecture based on new technology and building executable designs. Subsequent phases are build, test and deploy to production. Let us explore the Generative AI possibilities across these lifecycle areas.

Discovery and design:

The ability to understand legacy applications with minimal SME involvement is a critical acceleration point. In general, SMEs are busy with keep-the-lights-on initiatives, and their knowledge may be limited by how long they have been supporting the systems. Collectively, discovery and design are where significant time is spent during modernization, whereas development is much easier once the team has decoded the legacy application's functionality, integration aspects, logic and data complexity.

Modernization teams perform code analysis and go through several documents (mostly dated); this is where their reliance on code analysis tools becomes important. Further, for rewrite initiatives, one needs to map functional capabilities to the legacy application context in order to perform effective domain-driven design and decomposition exercises. Generative AI is very handy here through its ability to correlate domain and functional capabilities to code and data, establishing a business-capabilities view connected to application code and data; of course, the models need to be tuned and contextualized for a given enterprise's domain model or functional capability map. The Generative AI-assisted API mapping called out in this article is a mini exemplar of this. While the above addresses application decomposition and design, event storming needs process maps, and this is where Generative AI assists in contextualizing and mapping extracts from process mining tools. Generative AI also helps generate use cases based on code insights and functional mapping. Overall, Generative AI helps de-risk modernization programs by ensuring adequate visibility into legacy applications and their dependencies.

Generative AI also helps generate target designs for a specific cloud service provider's framework by tuning the models on a set of standardized patterns (ingress/egress, application services, data services, composite patterns and so on). Likewise, there are several other Generative AI use cases, including generating target-framework-specific code patterns for security controls. Generative AI also helps generate detailed design specifications, for example user stories, user experience wireframes, API specifications (e.g., Swagger files), component relationship diagrams and component interaction diagrams.


One of the difficult tasks of a modernization program is establishing a macro roadmap while balancing parallel efforts against sequential dependencies and identifying co-existence scenarios to be addressed. While this is normally done as a one-time task, continuous realignment through Program Increment (PI) planning exercises that incorporate execution-level inputs is far more difficult. Generative AI comes in handy here: it can generate roadmaps from historical data (application-to-domain-area maps, effort and complexity factors, dependency patterns and so on) and apply them to the applications in scope of a modernization program for a given industry or domain.

The only way to address this complexity is to make the approach consumable via a suite of assets and accelerators. This is where Generative AI plays a significant role in correlating application portfolio details with discovered dependencies.

Build and test:

Generating code is one of the most widely known Generative AI use cases, but it is important to be able to generate a set of related code artifacts: IaC (Terraform or CloudFormation templates), pipeline code and configurations, embedded security design points (encryption, IAM integrations and so on), application code generated from Swagger files or other code insights (from the legacy system), and firewall configurations (as resource files based on the services instantiated). Generative AI helps generate each of the above through an orchestrated approach based on predefined application reference architectures built from patterns, while combining the outputs of design tools.
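As a minimal sketch of the orchestrated, pattern-driven idea (all names and prompt wording here are hypothetical; a real accelerator would feed these prompts to a code-generation model), related artifacts can be kept consistent by deriving one prompt per artifact type from the same service and reference-architecture pattern:

```python
# Hypothetical prompt catalog: one template per related artifact type.
ARTIFACT_TEMPLATES = {
    "iac": "Generate Terraform for a {pattern} pattern exposing service '{service}'.",
    "pipeline": "Generate a CI/CD pipeline config that builds and deploys '{service}'.",
    "security": "Generate IAM policy and encryption settings for '{service}'.",
}

def build_generation_plan(service: str, pattern: str) -> dict:
    """Return one prompt per artifact type, all referencing the same
    service name and architecture pattern so the generated artifacts
    stay consistent with each other."""
    return {
        artifact: template.format(pattern=pattern, service=service)
        for artifact, template in ARTIFACT_TEMPLATES.items()
    }

plan = build_generation_plan("payments-api", "ingress/egress")
for artifact, prompt in plan.items():
    print(f"{artifact}: {prompt}")
```

The point of the orchestration is that the IaC, pipeline and security prompts are generated from one source of truth rather than authored independently.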

Testing is another key area: Generative AI can generate the right set of test cases and test code, along with test data, to optimize the set of test cases being executed.


There are several last-mile activities that typically take days to weeks, depending on enterprise complexity. The ability to generate insights for security validation (from application and platform logs, design points, IaC and so on) is a key use case that helps accelerate security review and approval cycles. Generating configuration management inputs (for the CMDB) and change management inputs, based on release notes generated from the Agility tool work items completed per release, are other key areas of Generative AI leverage.

While the use cases described across modernization phases may appear to be a silver bullet, enterprise complexity necessitates contextual orchestration of many of these Generative AI-based accelerators to realize value, and we are far from establishing enterprise-contextual patterns that accelerate modernization programs out of the box. We have seen significant benefit in investing time and energy upfront (and on an ongoing basis) in customizing many of these Generative AI accelerators for certain patterns based on their potential repeatability.

Let us now examine two proven examples:

Example 1: Re-imagining API discovery with BIAN and AI for visibility into domain mapping and identification of duplicative API services

The Problem: A large global bank has more than 30,000 APIs (both internal and external) developed over time across various domains (e.g., retail banking, wholesale banking, open banking and corporate banking). There is a high likelihood of duplicate APIs existing across the domains, leading to a higher total cost of ownership for maintaining the large API portfolio and operational challenges in dealing with API duplication and overlap. A lack of API visibility and discovery leads API development teams to develop the same or similar APIs rather than finding relevant APIs for reuse. The inability to visualize the API portfolio from a banking industry model perspective prevents the business and IT teams from understanding which capabilities are already available and which new capabilities the bank needs.

Generative AI-based solution approach: The solution leverages a BERT large language model, a sentence transformer and a multiple negatives ranking loss function, along with domain rules, fine-tuned with BIAN Service Landscape knowledge to learn the bank's API portfolio and provide the ability to discover APIs with automatic mapping to BIAN. It maps each API endpoint method to level 4 of the BIAN Service Landscape hierarchy, that is, to BIAN service operations.

The core functions of the solution are the ability to:

  • Ingest Swagger specifications and other API documentation and understand the APIs, endpoints, operations and associated descriptions.
  • Ingest BIAN details and understand the BIAN Service Landscape.
  • Fine-tune on matched and unmatched mappings between API endpoint methods and the BIAN Service Landscape.
  • Provide a visual representation of the mapping and matching score, with BIAN hierarchical navigation and filters for BIAN levels, API category and matching score.
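The core mapping idea can be sketched as follows. This is a toy stand-in rather than the solution described above: `difflib` string similarity replaces the fine-tuned sentence-transformer embeddings, and the BIAN entries shown are illustrative, not the official Service Landscape.

```python
from difflib import SequenceMatcher

# Illustrative BIAN service operations (the real Service Landscape has
# thousands of entries across its hierarchy).
BIAN_SERVICE_OPERATIONS = [
    "Payment Order - Initiate payment transaction",
    "Customer Offer - Evaluate customer eligibility",
    "Card Authorization - Authorize card transaction",
]

def map_endpoint_to_bian(endpoint_description: str):
    """Return the best-matching BIAN service operation and its score.
    A production system would compare embedding vectors instead of
    raw strings."""
    def score(candidate: str) -> float:
        return SequenceMatcher(None, endpoint_description.lower(),
                               candidate.lower()).ratio()
    best = max(BIAN_SERVICE_OPERATIONS, key=score)
    return best, round(score(best), 2)

match, match_score = map_endpoint_to_bian(
    "POST /payments: initiate a payment transaction")
print(match, match_score)
```

With embeddings in place of string matching, semantically similar but lexically different descriptions (e.g., "settle an invoice") would still map to the right service operation, which is what the fine-tuning step enables.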

The overall logical view (open stack-based) is shown below:

User Interface for API Discovery with Industry Model:

Key Benefits: The solution helped developers easily find reusable APIs based on BIAN business domains, with multiple filter and search options to locate APIs. In addition, teams were able to identify key API categories for building the right operational resilience. The next revision of search will be based on natural language and will support a conversational use case.

The ability to identify duplicative APIs based on BIAN service domains helped establish a modernization strategy that addresses duplicative capabilities while rationalizing them.

This use case was realized within 6–8 weeks, whereas the bank would have taken about a year to achieve the same result (as there were several thousand APIs to be discovered).

Example 2: Automated modernization of MuleSoft API to Java Spring Boot API

The Problem: While the teams were already on a journey to modernize MuleSoft APIs to Java Spring Boot, the sheer volume of APIs, the lack of documentation and the complexity involved were impacting their speed.

Generative AI-based Solution Approach: The Mule API to Java Spring Boot modernization was significantly automated via a Generative AI-based accelerator we built. We began by establishing a deep understanding of the APIs, their components and the API logic, followed by finalizing response structures and code. We then built prompts using IBM's version of Sidekick AI to generate Spring Boot code that satisfies the API specifications from MuleSoft, along with unit test cases, a design document and a user interface.

Mule API components were fed into the tool one by one via prompts, and the tool generated the corresponding Spring Boot equivalents, which were subsequently wired together, with any errors that cropped up addressed along the way. The accelerator also generated a UI for the desired channel that could be integrated with the APIs, unit test cases and test data, and design documentation. The generated design documentation consists of sequence and class diagrams, request and response details, endpoint details, error codes and architecture considerations.
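The component-by-component workflow can be illustrated with a short sketch; the prompt wording and the sample component below are hypothetical, since the Sidekick AI prompts themselves are not public:

```python
def conversion_prompt(component_name: str, mule_xml: str) -> str:
    """Build one conversion prompt for a single Mule component.
    A real accelerator would send this to its code-generation model."""
    return (
        f"Convert the following MuleSoft component '{component_name}' to an "
        f"equivalent Java Spring Boot implementation. Preserve the API "
        f"contract and generate JUnit test cases.\n\n{mule_xml}"
    )

# Hypothetical component inventory, processed one component at a time.
components = {
    "get-order-flow": "<flow name='get-order-flow'><!-- Mule XML --></flow>",
}
prompts = {name: conversion_prompt(name, xml) for name, xml in components.items()}
print(prompts["get-order-flow"])
```

Feeding components individually keeps each prompt small and reviewable, and the generated pieces are wired together afterwards, as described above.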

Key Benefits: Sidekick AI augments application consultants' daily work by pairing multi-model Generative AI techniques with deep domain and technology knowledge. The key benefits are as follows:

  • Generates most of the Spring Boot code and test cases, optimized, clean and adhering to best practices; the key is repeatability.
  • Ease of integration of the APIs with channel front-end layers.
  • Easier comprehension of the code for developers, with enough insight for debugging it.

The accelerator PoC covered four different scenarios of code migration, unit test cases, design documentation and UI generation, completed in three sprints over six weeks.


Many CIOs and CTOs have had reservations about embarking on modernization initiatives due to the multitude of challenges called out at the beginning: the amount of SME time needed, the impact of change on the business, the operating model changes across security, change management and many other organizations, and so on. While Generative AI is not a silver bullet that solves all of these problems, it helps programs through acceleration, reduction in the cost of modernization and, more significantly, de-risking, by ensuring that no current functionality is missed. However, one needs to understand that it takes time and effort to bring LLM models and libraries up to enterprise environment needs; significant security and compliance reviews and scanning are required. It also requires focused effort to improve the quality of the data needed for tuning the models. While cohesive Generative AI-driven modernization accelerators are not yet available, with time we will start to see the emergence of integrated toolkits that accelerate certain modernization patterns, if not many.

Source: IBM Blockchain

Top 6 Kubernetes use cases

Kubernetes, the world’s most popular open-source container orchestration platform, is considered a major milestone in the history of cloud-native technologies. Developed internally at Google and released to the public in 2014, Kubernetes has enabled organizations to move away from traditional IT infrastructure and toward the automation of operational tasks tied to the deployment, scaling and managing of containerized applications (or microservices). While Kubernetes has become the de facto standard for container management, many companies also use the technology for a broader range of use cases.

Overview of Kubernetes

Containers—lightweight units of software that package code and all its dependencies to run in any environment—form the foundation of Kubernetes and are mission-critical for modern microservices, cloud-native software and DevOps workflows.

Docker was the first open-source software tool to popularize building, deploying and managing containerized applications. But Docker lacked an automated “orchestration” tool, which made it time-consuming and complex for data science teams to scale applications. Kubernetes, also referred to as K8s, was specifically created to address these challenges by automating the management of containerized applications.

In broad strokes, the Kubernetes orchestration platform runs containers via pods and nodes. A pod operates one or more Linux containers and can run in multiple replicas for scaling and failure resistance. Nodes run the pods and are usually grouped in a Kubernetes cluster, abstracting away the underlying physical hardware resources.
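To make the pod concept concrete, here is a minimal single-container Pod manifest expressed as a Python dict mirroring the YAML a cluster would accept (the name and image are illustrative):

```python
import json

# The smallest deployable unit Kubernetes schedules onto a node.
# Running several replicas of this pod provides scaling and failure
# resistance, as described above.
pod_manifest = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "web", "labels": {"app": "web"}},
    "spec": {
        "containers": [
            {"name": "web", "image": "nginx:1.25",
             "ports": [{"containerPort": 80}]}
        ]
    },
}

print(json.dumps(pod_manifest, indent=2))
```

In practice you would rarely create bare pods; higher-level objects such as Deployments declare how many replicas of a pod template should run, and the cluster keeps that count true.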

Kubernetes’s declarative, API-driven infrastructure has helped free DevOps and other teams from manually driven processes so they can work more independently and efficiently to achieve their goals. In 2015, Google donated Kubernetes as a seed technology to the Cloud Native Computing Foundation (CNCF), the open-source, vendor-neutral hub of cloud-native computing.


Today, Kubernetes is widely used in production to manage Docker and essentially any other type of container runtime. While Docker includes its own orchestration tool, called Docker Swarm, most developers choose Kubernetes container orchestration instead.

As an open-source system, Kubernetes services are supported by all the leading public cloud providers, including IBM, Amazon Web Services (AWS), Microsoft Azure and Google. Kubernetes can also run on bare metal servers and virtual machines (VMs) in private cloud, hybrid cloud and edge settings, provided the host OS is a version of Linux or Windows.

Six top Kubernetes use cases

Here’s a rundown of six top Kubernetes use cases that reveal how Kubernetes is transforming IT infrastructure.

1. Large-scale app deployment

Heavily trafficked websites and cloud computing applications receive millions of user requests each day. A key advantage of using Kubernetes for large-scale cloud app deployment is autoscaling, which allows applications to adjust to demand changes automatically, with speed, efficiency and minimal downtime. When demand fluctuates, Kubernetes enables applications to run continuously and respond to changes in web traffic patterns. This helps maintain the right amount of workload resources, without over- or under-provisioning.

Kubernetes employs horizontal pod autoscaling (HPA) to carry out load balancing (based on CPU usage or custom metrics) by scaling the number of pod replicas (clones that facilitate self-healing) for a specific deployment. This mitigates potential issues like traffic surges, hardware problems or network disruptions.

Note: HPA is not to be confused with Kubernetes vertical pod autoscaling (VPA), which assigns additional resources, such as memory or CPU, to the pods that are already running for the workload.
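The HPA scaling decision itself follows a simple rule documented by Kubernetes: the desired replica count is the current count scaled by the ratio of the observed metric to its target, rounded up.

```python
import math

def desired_replicas(current_replicas: int, current_metric: float,
                     desired_metric: float) -> int:
    """Kubernetes HPA rule:
    desiredReplicas = ceil(currentReplicas * currentMetric / desiredMetric)"""
    return math.ceil(current_replicas * current_metric / desired_metric)

# Four pods averaging 90% CPU against a 50% target scale out to eight pods.
print(desired_replicas(4, current_metric=90, desired_metric=50))  # → 8
```

The same rule scales workloads back in when the metric drops below target, subject to the stabilization windows the controller applies to avoid flapping.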

2. High-performance computing

Industries including government, science, finance and engineering rely heavily on high-performance computing (HPC), the technology that processes big data to perform complex calculations. HPC uses powerful processors at extremely high speeds to make instantaneous data-driven decisions. Real-world uses of HPC include automating stock trading, weather prediction, DNA sequencing and aircraft flight simulation.

HPC-heavy industries use Kubernetes to manage the distribution of HPC calculations across hybrid and multicloud environments. Kubernetes can also serve as a flexible tool to support batch job processing involved in high performance computing workloads, which enhances data and code portability.

3. AI and machine learning

Building and deploying artificial intelligence (AI) and machine learning (ML) systems requires huge volumes of data and complex processes like high performance computing and big data analysis. Deploying machine learning on Kubernetes makes it easier for organizations to automate the management and scaling of ML lifecycles and reduces the need for manual intervention.

For example, the Kubernetes containerized orchestration platform can automate portions of AI and ML predictive maintenance workflows, including health checks and resource planning. And Kubernetes can scale ML workloads up or down to meet user demands, adjust resource usage and control costs.

Machine learning relies on large language models to perform high-level natural language processing (NLP), like text classification, sentiment analysis and machine translation, and Kubernetes helps speed the deployment of large language models and automate the NLP process. As more and more organizations turn to generative AI capabilities, they are using Kubernetes to run and scale generative AI models, providing high availability and fault tolerance.

Overall, Kubernetes provides the flexibility, portability and scalability needed to train, test, schedule and deploy ML and generative AI models.

4. Microservices management

Microservices (or microservices architecture) offer a modern cloud-native architectural approach in which each application is composed of numerous loosely coupled and independently deployable smaller components, or services. For instance, a large retail e-commerce website consists of many microservices, typically including an order service, payment service, shipping service and customer service. Each service has its own REST API, which the other services use to communicate with it.

Kubernetes was designed to handle the complexity involved in managing all the independent components running simultaneously within a microservices architecture. For instance, Kubernetes' built-in high availability (HA) feature ensures continuous operations even in the event of failure, and its self-healing feature kicks in if a containerized app or an application component goes down. The self-healing feature can instantly redeploy the app or application component, matching the desired state, which helps to maintain uptime and reliability.
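The self-healing behavior boils down to desired-state reconciliation. Purely as an illustration (real controllers act through the Kubernetes API, not strings), the core loop can be sketched as:

```python
def reconcile(desired: int, running: int) -> list:
    """Compare the declared desired replica count with what is observed
    and return the corrective actions a controller would take."""
    if running < desired:
        return [f"start replica {i}" for i in range(running, desired)]
    if running > desired:
        return [f"stop replica {i}" for i in range(desired, running)]
    return []  # observed state already matches desired state

# One replica crashed out of three: the controller restores it.
print(reconcile(desired=3, running=2))
```

Kubernetes controllers run this compare-and-correct cycle continuously, which is why a crashed component is redeployed without any operator intervention.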

5. Hybrid and multicloud deployments

Kubernetes is built to be used anywhere, making it easier for organizations to migrate applications from on-premises to hybrid cloud and multicloud environments. Kubernetes standardizes migration by providing software developers with built-in commands for effective app deployment. Kubernetes can also roll out changes to apps and scale them up and down depending on environment requirements.

Kubernetes offers portability across on-premises and cloud environments since it abstracts away infrastructure details from applications. This eliminates the need for platform-specific app dependencies and makes it easy to move applications between different cloud providers or data centers with minimal effort.

6. Enterprise DevOps

For enterprise DevOps teams, being able to update and deploy applications rapidly is critical for business success. Kubernetes supports teams in both software system development and maintenance, improving overall agility, and the Kubernetes API interface allows software developers and other DevOps stakeholders to easily view, access, deploy, update and optimize their container ecosystems.

CI/CD—which stands for continuous integration (CI) and continuous delivery (CD)—has become a key aspect of software development. In DevOps, CI/CD streamlines application coding, testing and deployment by giving teams a single repository for storing work and automation tools to consistently combine and test the code and ensure it works. Kubernetes plays an important role in cloud-native CI/CD pipelines by automating container deployment across cloud infrastructure environments and ensuring efficient use of resources.

The future of Kubernetes

Kubernetes plays a critical IT infrastructure role, as can be seen in its many value-driven use cases that go beyond container orchestration, which is why so many businesses continue to adopt it. In the 2021 Cloud Native Survey conducted by the CNCF, Kubernetes usage reached its highest point ever, with 96% of organizations using or evaluating the technology. According to the same study, Kubernetes usage continues to rise in emerging technology regions such as Africa, where 73% of survey respondents use Kubernetes in production.

IBM and Kubernetes

Kubernetes schedules and automates tasks integral to managing container-based architectures, spanning container deployment, updates, service discovery, storage provisioning, load balancing, health monitoring and more. At IBM we are helping clients modernize their applications and optimize their IT infrastructure with Kubernetes and other cloud-native solutions.

Deploy secure, highly available clusters in a native Kubernetes experience with IBM Cloud® Kubernetes Service.

Explore IBM Cloud Kubernetes Service

Containerize and deploy Kubernetes clusters for containerized platforms using Red Hat® OpenShift® on IBM Cloud.

Explore Red Hat OpenShift on IBM Cloud
