Diplomats, academics, and activists from around the globe will gather yet again this week to try to find common ground on a plan for combating climate change. This year’s COP, as the event is known, marks the 28th annual meeting of the conference of the parties to the United Nations Framework Convention on Climate Change. More than 70,000 people are expected to descend on Dubai for the occasion.
That has led some to wonder: Have these annual gatherings outlived their usefulness?
To some, the yearly get-togethers continue to be a critical centerpiece for international climate action, and any tweaks they might need lie mostly around the edges. “They aren’t perfect,” said Tom Evans, a policy analyst for the nonprofit climate change think tank E3G. “[But] they are still important and useful.” While he sees room for improvements — such as greater continuity between COP summits and ensuring ministerial meetings are more substantive — he supports the overall format. “We need to try and find a way to kind of invigorate and revitalize without distracting from the negotiations, which are key.”
Others say the summits no longer sufficiently meet the moment. “The job in hand has changed over the years,” said Rachel Kyte, a climate diplomacy expert and dean emerita of the Fletcher School of Law and Diplomacy at Tufts University. She is among those who believe the annual COP needs to evolve. “Form should follow function,” she said. “And we are using an old form.”
Durwood Zaelke, co-founder and former president of the Center for International Environmental Law, was more blunt. “You can’t say that an agreement that lets a problem grow into an emergency is doing a good job,” he said. “It’s not.”
Established in 1992, the United Nations Framework Convention on Climate Change is an international treaty that aims to stabilize greenhouse gas emissions and avoid the worst effects of climate change. Some 198 countries have ratified the Convention, which has seen some significant wins.
Get caught up on COP28
What is COP28? Every year, climate negotiators from around the world gather under the auspices of the United Nations Framework Convention on Climate Change to assess countries’ progress toward reducing carbon emissions and limiting global temperature rise.
The 28th Conference of Parties, or COP28, is taking place in Dubai, United Arab Emirates, between November 30 and December 12 this year.
What happens at COP? Part trade show, part high-stakes negotiations, COPs are annual convenings where world leaders attempt to move the needle on climate change. While activists up the ante with disruptive protests and industry leaders hash out deals on the sidelines, the most consequential outcomes of the conference will largely be negotiated behind closed doors. Over two weeks, delegates will pore over language describing countries’ commitments to reduce carbon emissions, jostling over the precise wording that all 194 countries can agree to.
What are the key issues at COP28 this year?
Global stocktake: The landmark 2015 Paris Agreement marked the first time countries united behind a goal to limit global temperature increase. The international treaty consists of 29 articles with numerous targets, including reducing greenhouse gas emissions, increasing financial flows to developing countries, and setting up a carbon market. For the first time since then, countries at COP28 will conduct a “global stocktake” to measure how much progress they’ve made toward those goals and where they’re lagging.
Fossil fuel phase-out or phase-down: Countries have agreed to reduce carbon emissions at previous COPs, but until recently they had not explicitly acknowledged the role of fossil fuels in causing the climate crisis. This year, negotiators will be haggling over the exact phrasing that signals that the world needs to transition away from fossil fuels. They may decide that countries need to phase down or phase out fossil fuels, or come up with entirely new wording that conveys the need to ramp down fossil fuel use.
Loss and damage: Last year, countries agreed to set up a historic fund to help developing nations deal with the so-called loss and damage that they are currently facing as a result of climate change. At COP28, countries will agree on a number of nitty-gritty details about the fund’s operations, including which country will host the fund, who will pay into it and withdraw from it, as well as the makeup of the fund’s board.
The 1997 Kyoto Protocol marked the first major breakthrough, and helped propel international action toward reducing emissions — though only some of the commitments are binding, and the United States is notably absent from the list of signatories. The 2015 Paris Agreement laid out an even more robust roadmap for reducing greenhouse gas emissions, with a target of holding global temperature rise to “well below” 2 degrees Celsius (3.6 degrees Fahrenheit) above preindustrial levels, and “pursuing efforts” to limit the increase to 1.5 degrees C (2.7 degrees F).
Although the path to that future is narrowing, it is still within reach, according to the International Energy Agency. But, some experts say, relying primarily on once-a-year COP meetings to get there may no longer be the best approach.
“Multilateral engagement is not the issue anymore,” Christiana Figueres said at a conference earlier this year. She was the executive secretary of the Convention when the Paris agreement was reached, and said that while there are important issues that need to be ironed out at the international level, especially for developing countries, the hardest work must now be done domestically.
“We have to redesign the COPs…. Multilateral attention, frankly, is distracting governments from doing their homework at home,” she said. At another conference a month later, she added, “Honestly, I would prefer 90,000 people stay at home and do their job.”
Kyte agrees and thinks it’s time to take at least a step back from festival-like gatherings and toward more focused, year-round work on the crisis at hand. “The UN has to find a way to break us into working groups to get things done,” she said. “And then work us back together into less of a jamboree and more of a somber working event.”
The list of potential topics for working groups to tackle is long, from ensuring a just transition to reining in the use of coal. But one area that Zaelke points to as a possible exemplar for a sectoral approach is reducing emissions of methane, a greenhouse gas with more than 80 times the warming power of carbon dioxide in the first 20 years after it reaches the atmosphere.
“Methane is the blow torch that’s pushing us from global warming to global boiling,” he said. “It’s the single biggest and fastest way to turn down the heat.”
To tackle the methane problem, Zaelke points to another international agreement as a model: the Montreal Protocol. Adopted in 1987, that treaty was aimed at regulating chemicals that deplete the atmosphere’s ozone layer, and it has been a resounding success. The pollutants have been almost completely phased out and the ozone layer is on track to recover by the middle of the century. The compact was expanded in 2016 to include another class of chemicals, hydrofluorocarbons.
“It’s an under-appreciated treaty, and it’s an under-appreciated model,” said Zaelke, noting that it included legally binding measures that the Paris agreement does not. “You could easily come to the conclusion we need another sectoral agreement for methane.”
Zaelke could see this tactic applying to other sectors as well, such as shipping and agriculture. Some advocates — including at least eight governments and the World Health Organisation — have also called for a “Fossil Fuel Non-Proliferation Treaty”, said Harjeet Singh, the global engagement director for the initiative. Like Zaelke, Kyte, and others, he envisions such sectoral pushes as running complementary to the main Convention process — a framework that, while flawed, he believes can continue to play an important role.
“The amount of time we spend negotiating each and every paragraph, line, comma, semicolon is just unimaginable and a colossal waste of time,” he said of the annual events. But he adds the forum is still crucial, in part because every country enjoys an equal amount of voting power, no matter its size or clout.
“I don’t see any other space which is as powerful as this to deliver climate justice,” he said. “We need more tools and more processes, but we cannot lose the space.”
Overview: Dry conditions persisted across much of Canada, South America, Australia, northern China, and the Mediterranean region during October 2023, while beneficial precipitation fell across some of the drought areas on the other continents. Anomalously warm conditions continued to dominate all of the continents. It was a record-warm October for Asia and South America, while Africa, Europe, and North America each had their second-warmest October.
For the first time “in history” we decided to jump on the “Giving Tuesday” bandwagon in order to make you aware of the options you have to contribute to our work!
Skeptical Science is an all-volunteer organization, but our work is not without financial costs. Contributions from our readers and users to support our publication mechanisms are a critical part of improving the general public’s critical thinking skills about science, and climate science in particular. Your contribution is a solid investment in a better future: improving our ability to think productively leads to better decisions at all levels of our climate change challenge. Please visit our support page to contribute.
The Cranky Uncle game adopts an active inoculation approach, where a Cranky Uncle cartoon character mentors players to learn the techniques of science denial. Cranky Uncle is a free game available on smartphones for iPhone and Android as well as in web browsers. Even though the translations of the Cranky Uncle game are done by teams of volunteers, each language incurs programming costs to get it set up in the game. If you’d like to support Cranky Uncle “teaching” his science denial techniques in other languages, please use the dedicated form provided on this page to contribute.
Other options to contribute
Another very helpful way to support our work is to provide feedback on our rebuttals and especially the new at-a-glance sections in the basic-level rebuttals we are currently adding. And if you happen to be multi-lingual: we have a lot of content where translations could be updated or created!
Thanks for reading and any contribution you choose to make!
According to a Gartner® report, “By 2026, more than 80% of enterprises will have used generative AI APIs or models, and/or deployed GenAI-enabled applications in production environments, up from less than 5% in 2023.”* However, to be successful, enterprises need the flexibility to run generative AI in their existing cloud environments. That’s why we continue expanding the IBM and AWS collaboration, providing clients the flexibility to build and govern their AI projects using the watsonx AI and data platform with AI assistants on AWS.
With sprawling data underpinning these AI projects, enterprises are increasingly looking to data lakehouses to bring it all together in one place where they can access, cleanse and manage it. To that end, watsonx.data, a fit-for-purpose data store built on an open data lakehouse architecture, is already available as fully managed software-as-a-service (SaaS) on Red Hat OpenShift and Red Hat OpenShift Service on AWS (ROSA)—all accessible in the AWS Marketplace.
The watsonx.governance toolkit and the watsonx.ai next-generation studio for AI builders will follow in early 2024, making the full watsonx platform available on AWS. This gives clients a full stack of capabilities to train, tune and deploy AI models with trusted data, speed and governance, along with increased flexibility to run their AI workflows wherever they reside.
During AWS re:Invent, IBM will show how clients accessing Llama 2 from Amazon SageMaker will be able to use the watsonx.governance toolkit to govern both the training data and the AI, so they can operate and scale with trust and transparency. Watsonx.governance can also help manage these models against regulatory guidelines and risks tied to the model itself and the application using it.
We’ll also be unveiling several exciting pieces of news about our fast-growing partnership, and showcasing the following joint innovations:
IBM Security’s Program for Service Providers: A new program for Managed Security Service Providers (MSSPs) and Cloud System Integrators to accelerate their adoption of IBM security software delivered on AWS. This program helps security providers develop and deliver threat detection and data security services designed specifically for protecting SMB clients. It also enables service providers to deliver services that can be listed in the AWS Marketplace, leveraging IBM Security software that features built-in AWS integrations — significantly speeding and simplifying onboarding.
Apptio Cloudability and IBM Turbonomic Integration: Since IBM’s acquisition of Apptio closed in August, teams have been working on the integration of Apptio Cloudability, a cloud cost-management tool, and Turbonomic, an IT resource management tool for continuous hybrid cloud optimization. Today, key optimization metrics from Turbonomic can be visualized within the Cloudability interface, providing deeper cost analysis and savings for AWS Cloud environments.
Workload Modernization: We’re providing tools and services for deployment and support to simplify and automate the modernization and migration path from on-premises to as-a-service versions of IBM Planning Analytics, Db2 Warehouse and IBM Maximo Application Suite on AWS.
Growing Software Portfolio: We now have 25 SaaS products available on AWS, including watsonx.data, App Connect, Maximo Application Suite, IBM Turbonomic and three new SaaS editions of Guardium Insights. There are now more than 70 IBM listings in the AWS Marketplace. As part of an ongoing global expansion of our partnership, the IBM software and SaaS catalog (limited release) is now available for our clients in Denmark, France, Germany and the United Kingdom to procure via the AWS Marketplace.
In addition to these software capabilities, IBM is growing its generative AI capabilities and expertise with AWS—delivering new solutions to clients and training thousands of consultants on AWS generative AI services. IBM also launched an Innovation Lab in collaboration with AWS at the IBM Client Experience Center in Bangalore. This builds on IBM’s existing expertise with AWS generative AI services, including Amazon SageMaker, Amazon CodeWhisperer and Amazon Bedrock.
IBM is the only technology company with both AWS-specific consulting expertise and complementary technology spanning data and AI, automation, security and sustainability capabilities—all built on Red Hat OpenShift Service on AWS—that run cloud-native on AWS.
For more information about the IBM and AWS partnership, please visit www.ibm.com/aws. Visit us at AWS re:Invent in booth #930. Don’t miss these sessions from IBM experts exploring hybrid cloud and AI:
Hybrid by Design at USAA: 5:00 p.m., Tuesday, November 28, The Venetian, Murano 3306
Scale and Accelerate the Impact of Generative AI with watsonx: 4:30 p.m., Wednesday, November 29, Wynn Las Vegas, Cristal 7
*Gartner. Hype Cycle for Generative AI, 2023, 11 September 2023. Gartner and Hype Cycle are registered trademarks of Gartner, Inc. and/or its affiliates in the U.S. and internationally and are used herein with permission. All rights reserved.
Three weeks ago my wife and I spent the weekend with old friends in New York City. We had a wonderful time but came home feeling tired and very weak. Then we developed severe flu-like symptoms. We tested positive for Covid and started on medication to prevent the disease from getting worse. Unfortunately, the medication did not work as planned for my wife. She has been hospitalized for the past eight days. With luck she will be discharged tomorrow to a care facility. How long she will be there is unknown. I’m slowly getting better but am still weak […]
A chronological listing of news and opinion articles posted on the Skeptical Science Facebook Page during the past week: Sun, Nov 19, 2023 thru Sat, Nov 25, 2023.
Story of the Week
World stands on frontline of disaster at Cop28, says UN climate chief
Exclusive: Simon Stiell says leaders must ‘stop dawdling’ and act before crucial summit in Dubai
World leaders must “stop dawdling and start doing” on carbon emission cuts, as rapidly rising temperatures this year have put everyone on the frontline of disaster, the UN’s top climate official has warned.
No country could think itself immune from catastrophe, said Simon Stiell, who will oversee the crucial Cop28 climate summit that begins next week. Scores of world leaders will arrive in Dubai for tense talks on how to tackle the crisis.
“We’re used to talking about protecting people on the far-flung frontlines. We’re now at the point where we’re all on the frontline,” said Stiell, speaking exclusively to the Guardian before the summit. “Yet most governments are still strolling when they need to be sprinting.”
Global temperatures have broken new records in recent months, making this year the hottest on record, and perilously close to the threshold of 1.5C above pre-industrial levels that countries have agreed to hold to. Temperatures are now heading for a “hellish” 3C increase, unless urgent and drastic action is taken, but greenhouse gas emissions have continued to rise.
Stiell said it was still possible to cut greenhouse gas emissions enough to stay within the crucial limit, but that further delay would be dangerous.
“Every year of the baby steps we’ve been taking up to this point means that we need to be taking … bigger leaps with each following year if we are to stay in this race,” he said. “The science is absolutely clear.”
The fortnight-long Cop28 talks will start this Thursday in Dubai, hosted by the United Arab Emirates, a major oil and gas-producing country. Scores of world leaders, senior ministers and officials from 198 countries will be in attendance, along with an estimated 70,000 delegates, making it the biggest annual conference of the parties (Cop) yet held under the 1992 UN framework convention on climate change.
Click here to access the entire article as originally posted on The Guardian website.
Application modernization is the process of updating legacy applications with modern technologies, enhancing their performance and making them adaptable to evolving business needs by infusing cloud-native principles such as DevOps and infrastructure as code (IaC). Application modernization starts with an assessment of the current legacy applications, data and infrastructure, and then applies the right modernization strategy (rehost, re-platform, refactor or rebuild) to achieve the desired result.
While a rebuild delivers the most benefit, it requires a high degree of investment; a rehost, by contrast, moves applications and data to the cloud largely as they are, which requires less investment but also yields less value. Modernized applications are deployed, monitored and maintained, with ongoing iterations to keep pace with technology and business advancements. Typical benefits range from increased agility to cost-effectiveness and competitiveness, while challenges include complexity and resource demands. Many enterprises are realizing that moving to the cloud is not giving them the desired value or agility beyond basic platform-level automation. The real problem lies in how IT is organized, which is reflected in how their current applications and services are built and managed (see Conway’s law). This, in turn, leads to the following challenges:
Duplicative or overlapping capabilities offered by multiple IT systems/components create sticky dependencies and proliferation, which impact productivity and speed to market.
Duplicative capabilities across applications and channels give rise to duplicative IT resources (e.g., skills and infrastructure).
Duplicative capabilities (including data) result in duplicated business rules and the like, giving rise to an inconsistent customer experience.
Lack of alignment of IT capabilities to business capabilities impacts time to market and business-IT alignment. In addition, enterprises end up building several band-aids and architectural layers to support new business initiatives and innovations.
Hence, application modernization initiatives need to focus more on the value to the business, and this involves a significant element of transforming applications into components and services aligned to business capabilities. The biggest challenge with this is the amount of investment needed, and many CIOs/CTOs are hesitant to invest due to the cost and timelines involved in realizing value. Many are addressing this by building accelerators that can be customized for enterprise consumption and that speed up specific areas of modernization; one such example from IBM is IBM Consulting Cloud Accelerators. In the effort to drive acceleration and optimize the cost of modernization, Generative AI is becoming a critical enabler of change in how we run modernization programs. We will explore key areas of acceleration with examples in this article.
A simplified lifecycle of application modernization programs (not meant to be exhaustive) runs as follows. Discovery focuses on understanding the legacy applications, infrastructure and data, the interactions between applications, services and data, and other aspects such as security. Planning breaks the complex portfolio of applications down into iterations to be modernized, establishing an iterative roadmap and an execution plan to implement it.
Blueprint/design phase activities change based on the modernization strategy, from decomposing the application and leveraging domain-driven design to establishing a target architecture based on new technology and building executable designs. Subsequent phases are build and test, and deploy to production. Let us explore the Generative AI possibilities across these lifecycle areas.
Discovery and design:
The ability to understand legacy applications with minimal SME involvement is a critical acceleration point. This is because, in general, SMEs are busy with keep-the-lights-on work, and their knowledge may be limited depending on how long they have been supporting the systems. Collectively, discovery and design are where significant time is spent during modernization, whereas development is much easier once the team has decoded the legacy application’s functionality, integration aspects, logic and data complexity.
Modernization teams perform their code analysis and go through several documents (mostly dated); this is where their reliance on code analysis tools becomes important. Further, for rewrite initiatives, one needs to map functional capabilities to the legacy application context in order to perform effective domain-driven design and decomposition exercises. Generative AI becomes very handy here through its ability to correlate domain and functional capabilities to code and data, establishing a business-capabilities view connected to application code and data; of course, the models need to be tuned and contextualized for a given enterprise domain model or functional capability map. The Generative AI-assisted API mapping called out later in this article is a mini exemplar of this. While the above addresses application decomposition and design, event storming needs process maps, and this is where Generative AI assists in contextualizing and mapping extracts from process-mining tools. Generative AI also helps generate use cases based on code insights and functional mapping. Overall, Generative AI helps de-risk modernization programs by ensuring adequate visibility into legacy applications and their dependencies.
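To make the correlation idea concrete, here is a minimal sketch (not the article's tooling) that embeds short summaries of legacy modules and entries from a functional capability map with a general-purpose sentence-embedding model and matches them by cosine similarity; the module names, summaries and capabilities are invented for illustration.

```python
# Illustrative sketch only: map legacy code artifacts to business capabilities
# via sentence embeddings. Module names, summaries and capabilities are made up.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # general-purpose embedding model

# Summaries of legacy modules (e.g., produced by a code-analysis tool or an LLM)
code_summaries = {
    "ACCT-MOD-017": "Calculates overdraft fees and posts them to the customer ledger",
    "LOAN-MOD-042": "Validates collateral details before loan origination",
}

# Entries from the enterprise functional capability map
capabilities = ["Fee Management", "Loan Origination", "Customer Ledger", "Collateral Handling"]

code_emb = model.encode(list(code_summaries.values()), convert_to_tensor=True)
cap_emb = model.encode(capabilities, convert_to_tensor=True)
scores = util.cos_sim(code_emb, cap_emb)  # similarity matrix: modules x capabilities

for i, module in enumerate(code_summaries):
    best = int(scores[i].argmax())
    print(f"{module} -> {capabilities[best]} (score={scores[i][best].item():.2f})")
```

In practice such a mapping would still be reviewed by SMEs and refined with enterprise-specific tuning, as the article notes.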
Generative AI also helps generate target designs for a specific cloud service provider's framework by tuning the models on a set of standardized patterns (ingress/egress, application services, data services, composite patterns, etc.). Likewise, there are several other Generative AI use cases, including the generation of code patterns for security controls specific to the target technology framework. Generative AI also helps generate detailed design specifications, for example, user stories, user experience wireframes, API specifications (e.g., Swagger files), component relationship diagrams and component interaction diagrams.
Planning:
One of the difficult tasks of a modernization program is establishing a macro roadmap while balancing parallel efforts against sequential dependencies and identifying co-existence scenarios to be addressed. While this is normally done as a one-time task, continuous realignment through Program Increment (PI) planning exercises that incorporate execution-level inputs is far more difficult. Generative AI comes in handy here, generating roadmaps based on historical data (application-to-domain-area maps, effort and complexity factors, dependency patterns, etc.) and applying it to the applications in scope for a modernization program in a given industry or domain.
The only way to address this is to make it consumable via a suite of assets and accelerators that can address enterprise complexity. This is where Generative AI plays a significant role in correlating application portfolio details with discovered dependencies.
Build and test:
Generating code is one of the most widely known Generative AI use cases, but it is important to be able to generate a set of related code artifacts: IaC (Terraform or CloudFormation templates), pipeline code and configurations, embedded security design points (encryption, IAM integrations, etc.), application code generated from Swagger files or other code insights (from the legacy system), and firewall configurations (as resource files based on the services instantiated, etc.). Generative AI helps generate each of the above through an orchestrated approach based on predefined application reference architectures built from patterns, while combining the outputs of design tools.
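As a hedged sketch of what one generation step in such an orchestration might look like, the snippet below prompts a generic chat-completion client (OpenAI's SDK is used purely as a stand-in; the article does not prescribe a specific model or API) to draft Terraform from a reference-architecture pattern. The pattern text and model name are illustrative assumptions.

```python
# Illustrative only: prompt a chat-completion model to draft Terraform from a
# reference-architecture pattern. Replace the client and model with whatever
# enterprise LLM endpoint the accelerator actually uses.
from openai import OpenAI

client = OpenAI()  # assumes an API key in the environment

reference_architecture = """
Pattern: public-facing API service
- AWS ECS Fargate service behind an Application Load Balancer
- Secrets pulled from AWS Secrets Manager, traffic encrypted in transit
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "You generate Terraform (HCL) that follows the given reference "
                    "architecture and the organization's security design points."},
        {"role": "user",
         "content": f"Generate Terraform for this pattern:\n{reference_architecture}"},
    ],
)
print(response.choices[0].message.content)  # review before committing to the IaC repo
```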
Testing is another key area; Generative AI can generate the right set of test cases and test code along with test data so as to optimize the test cases being executed.
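In the same spirit, here is a minimal, assumed sketch of prompt-driven test generation: a small function from the modernized code base plus an instruction to emit pytest cases with inline test data. The function, prompt and model name are invented for illustration and are not the article's tooling.

```python
# Illustrative only: ask a chat-completion model for pytest cases and test data
# covering a function extracted from the modernized code base.
from openai import OpenAI

client = OpenAI()

source_under_test = '''
def overdraft_fee(balance: float, withdrawal: float) -> float:
    shortfall = withdrawal - balance
    return 0.0 if shortfall <= 0 else min(35.0, 5.0 + 0.1 * shortfall)
'''

prompt = (
    "Write pytest unit tests for the function below. Cover the no-fee path, the "
    "capped-fee path and boundary values, and include the test data inline:\n"
    + source_under_test
)

tests = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(tests.choices[0].message.content)  # review and add to the test suite
```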
Deploy:
There are several last-mile activities that typically take days to weeks, depending on enterprise complexity. The ability to generate insights for security validation (from application and platform logs, design points, IaC, etc.) is a key use case that helps accelerate security review and approval cycles. Generating configuration management inputs (for the CMDB) and change management inputs, based on release notes generated from the Agility-tool work items completed in each release, are other key areas where Generative AI can be leveraged.
While the above use cases across the modernization phases may appear to be a silver bullet, enterprise complexity necessitates contextual orchestration of many of these Generative AI-based accelerators to realize value, and we are far from establishing enterprise-contextual patterns that accelerate modernization programs. We have seen significant benefits from investing time and energy upfront (and on an ongoing basis) in customizing many of these Generative AI accelerators for certain patterns, based on their potential repeatability.
Let us now examine some proven examples:
Example 1: Re-imagining API Discovery with BIAN and AI for visibility of domain mapping and identification of duplicative API services
The Problem: A large global bank has more than 30,000 APIs (both internal and external) developed over time across various domains (e.g., retail banking, wholesale banking, open banking and corporate banking). There is huge potential for duplicate APIs across the domains, leading to a higher total cost of ownership for maintaining the large API portfolio and operational challenges in dealing with API duplication and overlap. A lack of visibility and discoverability leads API development teams to build the same or similar APIs rather than find relevant APIs for reuse. The inability to visualize the API portfolio from a banking industry model perspective prevents the business and IT teams from understanding which capabilities are already available and which new capabilities are needed for the bank.
Generative AI-based solution approach: The solution leverages a BERT large language model, Sentence Transformers, a Multiple Negatives Ranking Loss function and domain rules, fine-tuned with BIAN Service Landscape knowledge, to learn the bank’s API portfolio and provide the ability to discover APIs with auto-mapping to BIAN. It maps each API endpoint method to level 4 of the BIAN Service Landscape hierarchy, that is, BIAN Service Operations; a minimal sketch of this matching approach follows the list of core functions below.
The core functions of the solution are the ability to:
Ingest Swagger specifications and other API documentation and understand the API, its endpoints, operations and the associated descriptions.
Ingest BIAN details and understand the BIAN Service Landscape.
Fine-tune with matched and unmatched mappings between API endpoint methods and the BIAN Service Landscape.
Provide a visual representation of the mapping and matching score, with BIAN hierarchical navigation and filters for BIAN level, API category and matching score.
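The sketch below shows one way the described components could fit together, using the open-source sentence-transformers library with the MultipleNegativesRankingLoss named above. The endpoint descriptions, BIAN operation names and training pairs are invented for illustration; the real solution's domain rules and fine-tuning corpus are not shown.

```python
# Hedged sketch: fine-tune a BERT-based sentence encoder on (API description,
# BIAN service operation) pairs, then map an unseen endpoint to the closest
# BIAN service operation. All strings below are toy examples.
from sentence_transformers import SentenceTransformer, InputExample, losses, util
from torch.utils.data import DataLoader

model = SentenceTransformer("bert-base-uncased")  # wrapped with default mean pooling

# Matched pairs for fine-tuning (endpoint description -> BIAN service operation)
train_examples = [
    InputExample(texts=["POST /payments/initiate - start a credit transfer",
                        "Payment Order / Initiate"]),
    InputExample(texts=["GET /accounts/{id}/balance - retrieve current balance",
                        "Current Account / Retrieve"]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=2)
train_loss = losses.MultipleNegativesRankingLoss(model)
model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=10)

# Map an unseen endpoint to the closest BIAN service operation
bian_operations = ["Payment Order / Initiate", "Current Account / Retrieve",
                   "Customer Offer / Evaluate"]
endpoint = "PUT /payments/{id}/cancel - cancel a pending credit transfer"
scores = util.cos_sim(model.encode(endpoint), model.encode(bian_operations))[0]
best = int(scores.argmax())
print(f"{endpoint} -> {bian_operations[best]} (score={scores[best].item():.2f})")
```

In the real solution, matching scores of this kind would feed the visual navigation and filters described above, with domain rules adjusting or vetoing low-confidence mappings.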
Figure: Overall logical view (Open Stack based).
Figure: User interface for API discovery with the industry model.
Key Benefits: The solution helped developers easily find reusable APIs, based on BIAN business domains, with multiple filter and search options for locating APIs. In addition, teams were able to identify key API categories for building the right operational resilience. The next revision of search will be based on natural language and will support a conversational use case.
The ability to identify duplicative APIs based on BIAN service domains helped establish a modernization strategy that addresses duplicative capabilities while rationalizing them.
This use case was realized within 6–8 weeks, whereas the bank would have taken a year to achieve the same result (as there were several thousand APIs to be discovered).
Example 2: Automated modernization of MuleSoft API to Java Spring Boot API
The Problem: While the teams were on a journey to modernize MuleSoft APIs to Java Spring Boot, the sheer volume of APIs, a lack of documentation and the overall complexity were impacting speed.
Generative AI-based Solution Approach: The Mule API to Java Spring Boot modernization was significantly automated via a Generative AI-based accelerator we built. We began by establishing a deep understanding of the APIs, components and API logic, followed by finalizing response structures and code. We then built prompts using IBM’s version of Sidekick AI to generate Spring Boot code that satisfies the API specs from MuleSoft, along with unit test cases, a design document and a user interface.
Mule API components were fed into the tool one by one using prompts, and the corresponding Spring Boot equivalents were generated and subsequently wired together, addressing any errors that cropped up. The accelerator also generated a UI for the desired channel that could be integrated with the APIs, along with unit test cases, test data and design documentation. The generated design documentation consists of sequence and class diagrams, request and response structures, endpoint details, error codes and architecture considerations.
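To give a concrete feel for the component-by-component prompting, here is a hedged sketch that uses a generic chat-completion client as a stand-in for IBM's Sidekick AI (whose interface the article does not expose); the Mule flow, prompt and model name are toy examples, not the accelerator's actual prompts.

```python
# Illustrative only: convert one MuleSoft flow to Spring Boot artifacts via a prompt.
from openai import OpenAI

client = OpenAI()

mule_flow = """
<flow name="get-customer-flow">
  <http:listener path="/customers/{id}" method="GET" config-ref="httpListener"/>
  <db:select config-ref="customerDb">
    <db:sql>SELECT id, name, status FROM customers WHERE id = :id</db:sql>
  </db:select>
  <ee:transform><!-- map the result to a JSON response --></ee:transform>
</flow>
"""

prompt = (
    "Convert this MuleSoft flow into a Java Spring Boot REST controller and a Spring Data "
    "repository. Preserve the endpoint contract, add error handling for missing customers, "
    "and also produce JUnit test cases and a short design note:\n" + mule_flow
)

result = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(result.choices[0].message.content)  # generated code is reviewed and wired together manually
```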
Key Benefits: Sidekick AI augments application consultants’ daily work by pairing a multi-model Generative AI technical strategy with deep domain knowledge and technology context. The key benefits are as follows:
Generates most of the Spring Boot code and test cases, which are optimized, clean and adhere to best practices; the key is repeatability.
Ease of integration of APIs with channel front-end layers.
Ease of understanding of the code for developers, with enough insight to debug it.
The accelerator PoC was completed, covering four different scenarios of code migration, unit test cases, design documentation and UI generation, in three sprints over six weeks.
Conclusion
Many CIOs/CTOs have had reservations about embarking on modernization initiatives due to the multitude of challenges called out at the beginning: the amount of SME time needed, the impact of change on the business, operating model changes across security, change management and many other organizations, and so on. While Generative AI is not a silver bullet that solves all of these problems, it helps the program through acceleration, a reduction in the cost of modernization and, more significantly, de-risking by ensuring no current functionality is missed. However, one needs to understand that it takes time and effort to bring LLM models and libraries up to enterprise environment needs; significant security and compliance reviews and scanning are required. It also requires focused effort to improve the quality of the data needed for tuning the models. While cohesive Generative AI-driven modernization accelerators are not yet out there, with time we will start to see the emergence of integrated toolkits that help accelerate certain modernization patterns, if not many.