IBM watsonx AI and data platform, security solutions and consulting services for generative AI to be showcased at AWS re:Invent

According to a Gartner® report, “By 2026, more than 80% of enterprises will have used generative AI APIs or models, and/or deployed GenAI-enabled applications in production environments, up from less than 5% in 2023.”* However, to be successful, enterprises need the flexibility to run generative AI in their existing cloud environments. That’s why we continue expanding the IBM and AWS collaboration, providing clients flexibility to build and govern their AI projects using the watsonx AI and data platform with AI assistants on AWS.

With sprawling data underpinning these AI projects, enterprises are increasingly looking to data lakehouses to bring it all together in one place where they can access, cleanse and manage it. To that end, watsonx.data, a fit-for-purpose data store built on an open data lakehouse architecture, is already available as a fully managed software-as-a-service (SaaS) on Red Hat OpenShift and Red Hat OpenShift Services on AWS (ROSA)—all accessible in the AWS Marketplace.

The watsonx.governance toolkit and watsonx.ai next generation studio for AI builders will follow in early 2024, making the full watsonx platform available on AWS. This provides clients with a full stack of capabilities to train, tune and deploy AI models with trusted data, speed and governance, and with increased flexibility to run their AI workflows wherever they reside.

During AWS re:Invent, IBM will show how clients accessing Llama 2 from Amazon SageMaker will be able to use the watsonx.governance toolkit to govern both the training data and the AI so they can operate and scale with trust and transparency. Watsonx.governance can also help manage these models against regulatory guidelines and the risks tied to the model itself and the application using it.

We’ll also be unveiling several exciting pieces of news about our fast-growing partnership, and showcasing the following joint innovations:

  • IBM Security’s Program for Service Providers: A new program for Managed Security Service Providers (MSSPs) and Cloud System Integrators to accelerate their adoption of IBM Security software delivered on AWS. This program helps security providers develop and deliver threat detection and data security services designed specifically for protecting SMB clients. It also enables service providers to deliver services that can be listed in the AWS Marketplace, leveraging IBM Security software that features built-in AWS integrations, significantly speeding and simplifying onboarding.
  • Apptio Cloudability and IBM Turbonomic Integration: Since IBM’s acquisition of Apptio closed in August, teams have been working on the integration of Apptio Cloudability, a cloud cost-management tool, and Turbonomic, an IT resource management tool for continuous hybrid cloud optimization. Today, key optimization metrics from Turbonomic can be visualized within the Cloudability interface, providing deeper cost analysis and savings for AWS Cloud environments.
  • Workload Modernization: We’re providing tools and services for deployment and support to simplify and automate the modernization and migration path from on-premises to as-a-service versions of IBM Planning Analytics, Db2 Warehouse and IBM Maximo Application Suite on AWS.
  • Growing Software Portfolio: We now have 25 SaaS products available on AWS, including watsonx.data, App Connect, Maximo Application Suite, IBM Turbonomic and three new SaaS editions of Guardium Insights. There are now more than 70 IBM listings in the AWS Marketplace. As part of an ongoing global expansion of our partnership, the IBM software and SaaS catalog (limited release) is now available for our clients in Denmark, France, Germany and the United Kingdom to procure via the AWS Marketplace.

In addition to these software capabilities, IBM is growing its generative AI capabilities and expertise with AWS—delivering new solutions to clients and training thousands of consultants on AWS generative AI services. IBM also launched an Innovation Lab in collaboration with AWS at the IBM Client Experience Center in Bangalore. This builds on IBM’s existing expertise with AWS generative AI services, including Amazon SageMaker, Amazon CodeWhisperer and Amazon Bedrock.

IBM is the only technology company with both AWS-specific consulting expertise and complementary technology spanning data and AI, automation, security and sustainability capabilities—all built on Red Hat OpenShift Service on AWS—that run cloud-native on AWS.

For more information about the IBM and AWS partnership, please visit www.ibm.com/aws. Visit us at AWS re:Invent in booth #930. Don’t miss these sessions from IBM experts exploring hybrid cloud and AI:

  • Hybrid by Design at USAA: 5:00 p.m., Tuesday, November 28, The Venetian, Murano 3306
  • Scale and Accelerate the Impact of Generative AI with watsonx: 4:30 p.m., Wednesday, November 29, Wynn Las Vegas, Cristal 7

Learn more about the IBM and AWS partnership


*Gartner. Hype Cycle for Generative AI, 2023, 11 September 2023. Gartner and Hype Cycle are registered trademarks of Gartner, Inc. and/or its affiliates in the U.S. and internationally and are used herein with permission. All rights reserved.

Source: IBM Blockchain

Covid 2023

Three weeks ago my wife and I spent the weekend with old friends in New York City. We had a wonderful time but came home feeling tired and very weak. Then we developed severe flu-like symptoms. We tested positive for Covid and started on medication to prevent the disease from getting worse. Unfortunately, the medication did not work as planned for my wife. She has been hospitalized for the past eight days. With luck she will be discharged tomorrow to a care facility. How long she will be there is unknown. I’m slowly getting better but am still weak […]

The post Covid 2023 appeared first on Knowledge is Good.

Source: Knowledge is Good

Volunteer Training Program Sees Record Growth As it Marks 10 Years

IEEE depends on volunteer members for many things, including organizing conferences, coordinating regional and local activities, writing standards, and deciding on IEEE’s future.

But because the organization can be complex, many members don’t know what resources and roles are available to them, and they might need training on how to lead groups. That’s why in 2013, the IEEE Member and Geographic Activities board established its Volunteer Leadership Program. VoLT, an MGA program, provides members with resources and an overview of IEEE, including its culture and mission. The program also offers participants training to help them gain management and leadership skills. Each participant is paired with a mentor to provide guidance, advice, and support.

Program specialist for IEEE’s Volunteer User Experience Stephen Torpie and long-time volunteer and Life Member Marc Apter discuss the benefits of the VoLT program with visitors to the exhibit booth at IEEE Sections Congress.

VoLT, which is celebrating its 10th anniversary this year, has grown steadily since its launch. In its first year, the program had 49 applicants and 19 graduates. Now nearly 500 members from all 10 IEEE regions and 165 sections have completed the program. This year the program received 306 applications, and it accepted 70 students to participate in the next six-month session.

“When I first got on the Board of Directors, I didn’t realize all the complexities of the organization, so I thought it would be helpful to provide a broad background for others to help them understand IEEE’s larger objectives,” says Senior Member Loretta Arellano, the mastermind behind VoLT. “The program was developed so that volunteers can quickly learn the IEEE structure and obtain leadership skills unique to a volunteer organization.

“IEEE is such a large organization, and typically members get involved with just one aspect and are never exposed to the rest of IEEE. They don’t realize there are a whole lot of resources and people to help them.”

Soft skills training and mentorship

Before applying to VoLT, members are required to take 10 courses that provide them with a comprehensive introduction to IEEE. The free courses are available on the IEEE Center for Leadership Excellence website.

Along with their application, members must include a reference letter from an IEEE volunteer.

“The VoLT program taught me how expansive IEEE’s network and offerings are,” says Moriah Hargrove Anders, an IEEE graduate student member who participated in the program in 2017. “The knowledge [I gained] has guided the leadership I take back to my section.”

Participants attend 10 to 12 webinars on topics such as soft skills, leadership, and stress management. VoLT also trains them in IEEE Collabratec, IEEE vTools, IEEE Entrepreneurship, and other programs, plus the IEEE Code of Ethics.

Program mentors are active IEEE volunteers and have held leadership positions in the organization. Six of the 19 mentors from the program’s first year are still participating in VoLT. Of the 498 graduates, 205 have been a mentor at least once.

VoLT participants complete a team project, in which they identify a problem, a need, an opportunity, or an area of improvement within their local organizational unit or the global IEEE. Then they develop a business plan to address the concern. Each team presents a video highlighting its business plan to VoLT’s mentors, who evaluate the plans and select the three strongest. The three plans are sent to each individual’s IEEE region director and section leader to consider for implementation.

“The VoLT program helped me to reaffirm and expand my knowledge about IEEE,” Lizeth Vega Medina says. The IEEE senior member graduated from the program in 2019. “It also taught me how to manage situations as a volunteer.”

Each year, the program makes improvements based on feedback from students and the MGA board.

To mark its anniversary, VoLT hosted an exhibit booth in August at the IEEE Sections Congress in Ottawa. The event, held every three years, brings together IEEE leaders and volunteers from around the world. Recent VoLT graduates presented their team’s project. Videos of the sessions are available on IEEE.tv.

To stay updated on the program and its anniversary celebrations, follow VoLT on Facebook, Instagram, and LinkedIn.

Source: IEEE SPECTRUM NEWS

Application modernization overview

Application modernization is the process of updating legacy applications by leveraging modern technologies, enhancing performance and making them adaptable to evolving business needs by infusing cloud-native principles such as DevOps and Infrastructure as Code (IaC). Application modernization starts with an assessment of the current legacy applications, data and infrastructure, followed by applying the right modernization strategy (rehost, re-platform, refactor or rebuild) to achieve the desired result.

While a rebuild delivers the maximum benefit, it requires a high degree of investment; a rehost, by contrast, moves applications and data to the cloud as-is without optimization, which requires less investment but delivers less value. Modernized applications are deployed, monitored and maintained, with ongoing iterations to keep pace with technology and business advancements. Typical benefits range from increased agility and cost-effectiveness to competitiveness, while challenges include complexity and resource demands. Many enterprises are realizing that moving to the cloud is not giving them the desired value or agility beyond basic platform-level automation. The real problem lies in how IT is organized, which is reflected in how their current applications and services are built and managed (see Conway’s law). This, in turn, leads to the following challenges:

  • Duplicative or overlapping capabilities offered by multiple IT systems/components create sticky dependencies and proliferations, which impact productivity and speed to market.
  • Duplicative capabilities across applications and channels give rise to duplicative IT resources (e.g., skills and infrastructure)
  • Duplicative capabilities (including data) resulting in duplication of business rules and the like give rise to inconsistent customer experience.
  • Lack of alignment of IT capabilities to business capabilities impacts time to market and business-IT alignment. In addition, enterprises end up building several band-aids and architectural layers to support new business initiatives and innovations.

Hence, application modernization initiatives need to focus more on the value to the business, and this involves a significant element of transforming applications into components and services aligned to business capabilities. The biggest challenge with this is the amount of investment needed, and many CIOs/CTOs are hesitant to invest due to the cost and the timelines involved in realizing value. Many are addressing this by building accelerators that can be customized for enterprise consumption and that help accelerate specific areas of modernization; one such example from IBM is IBM Consulting Cloud Accelerators. While attempting to drive acceleration and optimize the cost of modernization, Generative AI is becoming a critical enabler in changing how we accelerate modernization programs. We will explore key areas of acceleration with examples in this article.

A simplified lifecycle of application modernization programs (not meant to be exhaustive) is depicted below. Discovery focuses on understanding the legacy applications, infrastructure and data, the interactions between applications, services and data, and other aspects such as security. Planning breaks the complex portfolio of applications down into iterations to be modernized, establishing an iterative roadmap and an execution plan to implement it.

Blueprint/design phase activities change based on the modernization strategy, ranging from decomposing the application and leveraging domain-driven design to establishing a target architecture based on new technology and building executable designs. Subsequent phases are build and test, and deploy to production. Let us explore the Generative AI possibilities across these lifecycle areas.

Discovery and design:

The ability to understand legacy applications with minimal SME involvement is a critical acceleration point. This is because SMEs are generally busy with keeping-the-lights-on work on existing systems, and their knowledge may be limited by how long they have been supporting those systems. Collectively, discovery and design are where significant time is spent during modernization, whereas development is much easier once the team has decoded the legacy application’s functionality, integration aspects, logic and data complexity.

Modernization teams perform their code analysis and go through several documents (mostly dated); this is where their reliance on code analysis tools becomes important. Further, for rewrite initiatives, one needs to map functional capabilities to the legacy application context in order to perform effective domain-driven design and decomposition exercises. Generative AI becomes very handy here through its ability to correlate domain and functional capabilities to code and data, establishing a business capability view connected to application code and data; of course, the models need to be tuned and contextualized for a given enterprise domain model or functional capability map. The Generative AI-assisted API mapping called out in this article is a mini exemplar of this. While the above applies to application decomposition and design, event storming needs process maps, and this is where Generative AI assists in contextualizing and mapping extracts from process mining tools. Generative AI also helps generate use cases based on code insights and functional mapping. Overall, Generative AI helps de-risk modernization programs by ensuring adequate visibility into legacy applications and their dependencies.

Generative AI also helps generate target designs for a specific cloud service provider’s framework by tuning the models on a set of standardized patterns (ingress/egress, application services, data services, composite patterns, etc.). Likewise, there are several other Generative AI use cases, including generating target technology framework-specific code patterns for security controls. Generative AI also helps generate detailed design specifications, for example user stories, user experience wireframes, API specifications (e.g., Swagger files), component relationship diagrams and component interaction diagrams.

Planning:

One of the difficult tasks of a modernization program is establishing a macro roadmap while balancing parallel efforts against sequential dependencies and identifying the co-existence scenarios to be addressed. While this is normally done as a one-time task, continuous realignment through Program Increment (PI) planning exercises that incorporate execution-level inputs is far more difficult. Generative AI comes in handy here, generating roadmaps from historical data (application-to-domain-area maps, effort and complexity factors, dependency patterns, etc.) and applying it to the applications in scope of a modernization program, for a given industry or domain.

The only way to address this is to make it consumable via a suite of assets and accelerators that can handle enterprise complexity. This is where Generative AI plays a significant role in correlating application portfolio details with discovered dependencies.

Build and test:

Generating code is one of the most widely known Generative AI use cases, but it is important to be able to generate a set of related code artifacts, ranging from IaC (Terraform or CloudFormation templates), pipeline code and configurations, and embedded security design points (encryption, IAM integrations, etc.) to application code generated from Swagger files or other code insights (from the legacy system) and firewall configurations (as resource files based on the services instantiated, etc.). Generative AI helps generate each of the above through an orchestrated approach based on predefined application reference architectures built from patterns, while combining the outputs of design tools.

Testing is another key area; Generative AI can generate the right set of test cases and test code along with test data so as to optimize the test cases being executed.

Deploy:

There are several last-mile activities that typically take days to weeks depending on enterprise complexity. The ability to generate insights for security validation (from application and platform logs, design points, IaC, etc.) is a key use case that helps accelerate security review and approval cycles. Generating configuration management inputs (for the CMDB) and change management inputs, based on release notes generated from the Agility tool work items completed per release, are other key areas where Generative AI can be leveraged.

While the use cases described across these modernization phases may appear to be a silver bullet, enterprise complexity will necessitate contextual orchestration of many of these Generative AI use case-based accelerators to realize value, and we are far from establishing enterprise-contextual patterns that help accelerate modernization programs. We have seen significant benefit in investing time and energy upfront (and on an ongoing basis) in customizing many of these Generative AI accelerators for certain patterns, based on their potential repeatability.

Let us now examine two proven examples:

Example 1: Re-imagining API Discovery with BIAN and AI for visibility of domain mapping and identification of duplicative API services

The Problem: A large global bank has more than 30,000 APIs (both internal and external) developed over time across various domains (e.g., retail banking, wholesale banking, open banking and corporate banking). There is huge potential for duplicate APIs existing across the domains, leading to a higher total cost of ownership for maintaining the large API portfolio and to the operational challenges of dealing with API duplication and overlap. A lack of visibility and discoverability leads API development teams to develop the same or similar APIs rather than finding relevant APIs for reuse. The inability to visualize the API portfolio from a banking industry model perspective constrains the business and IT teams from understanding which capabilities are already available and which new capabilities are needed for the bank.

Generative AI-based solution approach: The solution leverages a BERT large language model, Sentence Transformers and a Multiple Negatives Ranking Loss function along with domain rules, fine-tuned with BIAN Service Landscape knowledge, to learn the bank’s API portfolio and provide the ability to discover APIs with auto-mapping to BIAN. It maps each API endpoint method to level 4 of the BIAN Service Landscape hierarchy, that is, BIAN Service Operations.

The core functions of the solution are the ability to (a simplified sketch of the matching step follows the list):

  • Ingest Swagger specifications and other API documentation and understand the API, its endpoints, operations and the associated descriptions.
  • Ingest BIAN details and understand BIAN Service Landscape.
  • Fine-tune with matched and unmatched mapping between API Endpoint Method and BIAN Service Landscape.
  • Provide a visual representation of the mapping and matching score with BIAN Hierarchical navigation and filters for BIAN levels, API Category and matching score.
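
The heart of the auto-mapping is a semantic nearest-neighbor search: embed each API endpoint description and each BIAN service operation, then pick the operation with the highest similarity. The sketch below illustrates only that matching step, under the assumption that an embed() function backed by the fine-tuned model is available; the interface name and threshold-free selection logic are illustrative and not part of the bank’s actual solution.

import java.util.List;
import java.util.Map;

// Minimal sketch of the API-to-BIAN matching step only. The embed() call is
// assumed to wrap the fine-tuned sentence-transformer model; names and data
// here are illustrative.
public class BianApiMapper {

    interface EmbeddingService {
        double[] embed(String text); // assumption: returns a dense vector for any text
    }

    static double cosine(double[] a, double[] b) {
        double dot = 0, normA = 0, normB = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            normA += a[i] * a[i];
            normB += b[i] * b[i];
        }
        return dot / (Math.sqrt(normA) * Math.sqrt(normB));
    }

    // Returns the best-matching BIAN service operation for one endpoint
    // description, together with its similarity score.
    static Map.Entry<String, Double> bestMatch(EmbeddingService model,
                                               String endpointDescription,
                                               List<String> bianServiceOperations) {
        double[] endpointVector = model.embed(endpointDescription);
        String best = null;
        double bestScore = -1.0;
        for (String operation : bianServiceOperations) {
            double score = cosine(endpointVector, model.embed(operation));
            if (score > bestScore) {
                bestScore = score;
                best = operation;
            }
        }
        return Map.entry(best, bestScore); // a low score suggests a gap rather than a match
    }
}

Duplicate detection follows the same idea in reverse: endpoints whose descriptions map to the same BIAN service operation with high scores become candidates for consolidation.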

Overall logical view (Open Stack based) is as below:

User Interface for API Discovery with Industry Model:

Key Benefits: The solution helped developers easily find reusable APIs based on BIAN business domains; they had multiple filter/search options to locate APIs. In addition, teams were able to identify key API categories for building the right operational resilience. The next revision of search will be based on natural language and will be a conversational use case.

The ability to identify duplicative APIs based on BIAN service domains helped establish a modernization strategy that addresses duplicative capabilities while rationalizing them.

This use case was realized within 6–8 weeks, whereas the bank would have taken a year to achieve the same result (as there were several thousand APIs to be discovered).

Example 2: Automated modernization of MuleSoft API to Java Spring Boot API

The Problem: While the current teams were on a journey to modernize MuleSoft APIs to Java Spring Boot, the sheer volume of APIs, the lack of documentation and the complexity involved were impacting the speed.

Generative AI-based Solution Approach: The Mule API to Java Spring Boot modernization was significantly automated via a Generative AI-based accelerator we built. We began by establishing a deep understanding of the APIs, components and API logic, followed by finalizing response structures and code. This was followed by building prompts using IBM’s version of Sidekick AI to generate Spring Boot code that satisfies the API specs from MuleSoft, along with unit test cases, a design document and a user interface.

Mule API components were fed into the tool one by one using prompts, and the tool generated the corresponding Spring Boot equivalents, which were subsequently wired together, addressing any errors that cropped up. The accelerator also generated a UI for the desired channel that could be integrated with the APIs, unit test cases and test data, and design documentation. The generated design documentation consists of sequence and class diagrams, request and response details, endpoint details, error codes and architecture considerations.
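
To make the target concrete, here is a hypothetical example of the kind of Spring Boot controller such an accelerator produces from a Mule API specification; the resource name, fields and service wiring are invented for this sketch and are not taken from the client engagement.

import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

// Illustrative generated controller mirroring a Mule flow's HTTP listener
// and response structure; supporting types are declared inline so the
// sketch is self-contained.
@RestController
@RequestMapping("/api/v1/accounts")
public class AccountController {

    public record AccountResponse(String accountId, String status) {}

    public interface AccountService {
        AccountResponse findAccount(String accountId);
    }

    private final AccountService accountService;

    public AccountController(AccountService accountService) {
        this.accountService = accountService;
    }

    // Equivalent of the Mule GET /accounts/{accountId} flow
    @GetMapping("/{accountId}")
    public ResponseEntity<AccountResponse> getAccount(@PathVariable String accountId) {
        return ResponseEntity.ok(accountService.findAccount(accountId));
    }
}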

Key Benefits: Sidekick AI augments Application Consultants’ daily work by pairing multi-model Generative AI with a technical strategy contextualized through deep domain knowledge and technology. The key benefits are as follows:

  • Generates most of the Spring Boot code and test cases, which are optimized, clean and adhere to best practices; the key is repeatability.
  • Ease of integration of APIs with channel front-end layers.
  • Ease of understanding of the code for developers, with enough insight for debugging it.

The Accelerator PoC was completed with 4 different scenarios of code migration, unit test cases, design documentation and UI generation in 3 sprints over 6 weeks.

Conclusion

Many CIOs/CTOs have had their own reservations about embarking on modernization initiatives due to the multitude of challenges called out at the beginning: the amount of SME time needed, the impact to the business due to change, operating model changes across security, change management and many other organizations, and so on. While Generative AI is not a silver bullet that solves all of these problems, it helps programs through acceleration, a reduction in the cost of modernization and, more significantly, de-risking by ensuring that no current functionality is missed. However, one needs to understand that it takes time and effort to bring LLM models and libraries up to enterprise environment needs, with significant security and compliance reviews and scanning. It also requires focused effort to improve the quality of the data needed for tuning the models. While cohesive Generative AI-driven modernization accelerators are not out there yet, with time we will start seeing the emergence of integrated toolkits that help accelerate certain modernization patterns, if not many.

Source: IBM Blockchain

Winning the cloud game: Phoning the right friend to answer the cloud optimization question

Cloud optimization is essential as organizations look to accelerate business outcomes and unlock the value of their data. At its core, cloud optimization is the process of correctly selecting and assigning the right resources to a workload or application. But cloud optimization is also a lifecycle process that balances performance, compliance and cost to achieve efficiency. And getting it right is crucial. Gartner predicts that by 2025 more than 51% of enterprise IT spending in key markets will have shifted to public cloud, while Flexera’s 2023 State of the Cloud Report highlighted that managing cloud spend overtook security as the top challenge facing all organizations for the first time.

Research shows that 90% of enterprises have a multicloud strategy and 80% have a hybrid cloud strategy—a combination of public and private clouds. Only 7% of enterprises are using a single public cloud provider.

It’s easy to see the complexity of the cloud optimization problem given the use of multicloud. Many organizations have elected to deploy Cloud Centers of Excellence or FinOps practices with the goal of optimizing cloud spend. But building out a FinOps practice or Cloud Center of Excellence is easier said than done. It takes time and talent, and sometimes organizations are short on both. Cloud optimization goes well beyond simple cost reduction and workload placement; it is more about making sure your costs align with your business goals.

Remember the TV game show Who Wants to Be a Millionaire? There was a feature on the show called “Phone a Friend.” On the show, the contestant was connected with a friend over a phone line and was given 30 seconds to read the question and answers and solicit assistance. 

Of course, the contestant wants to call the RIGHT friend—the one that can help them with the correct answer and lead them to the money.

As it relates to cloud optimization, workload placement and app modernization, it feels like enterprises need a Phone a Friend feature. But they need to call the RIGHT friend too. 

Why should you phone a friend? Because you need help to answer the IT question and the clock is ticking. If an enterprise calls the wrong friend, they lose the chance to modernize their apps on time, optimize their costs and digitally transform. In short, by calling the WRONG friend, they lose the game.

Enterprises aren’t just looking for tools that manage resources and costs in multicloud and hybrid cloud environments. Tools are great but you need the right tools in the right order. 

Additionally, organizations need help to build out a roadmap or implement one, especially as they look to modernize legacy virtual environments. They want AI-powered automation capabilities that span from AIOps to application and infrastructure observability.

They need to win the game. They need the right friend with the right answers to help them win.

There is a strong synergy between digital transformation and IT modernization and it’s a long game. While transformation sets the vision and strategic direction of the organization, IT modernization is the practical implementation of that vision. These initiatives reshape and transform organizations. By embracing new technologies, organizations improve efficiency, enhance customer experience and remain competitive.

No matter where an organization is on this journey, phoning the “Right Friend” can move them along the game board from siloed systems to integrated platforms, from on-premises infrastructure to cloud computing and providing the answers around the move from monolithic applications to microservices. A solid strategy involves knowing which friend to call for guidance to help drive business growth and innovation.

Learn more about IBM and AWS today

Source: IBM Blockchain

The advantages and disadvantages of ERP systems

Enterprise resource planning (ERP) solutions offer organizations a one-stop-shop for managing daily operations. The business management software has gained popularity in the business world as organizations try to keep up with the changing landscape. As with most business solutions, there are advantages and disadvantages of ERP systems to consider.

It’s important to understand how enterprise resource planning can work for an organization and its capabilities at a granular level. Here are some key benefits an enterprise resource planning system can bring when managing all aspects of the business.

Advantages of ERP

Improve customer service

The business world is hyper-competitive and that’s no different when it comes to attracting and retaining customers. The customer service experience is a vital part of an organization, and an ERP solution can help advance customer relationship management. Since a new system like ERP software puts all customer information into one place, it can facilitate quicker customer service and a more personalized approach.

ERP stores contact information, order history, past support cases and more in one simplified system. Separately, since ERP tracks past orders and real-time inventory, the customer is much more likely to receive the correct items on time. If those factors are in place, it’s much more likely a customer leaves happy and will return for more down the road.

Customize reporting

Real-time data reporting is one of the highlights of an ERP solution and why it’s a serious advantage over other business management systems. With ERP reporting tools, organizations can customize reporting across many different functions, such as finance, inventory, procurement and human resources and be able to calculate it depending on what matters most to the organization. This tailor-made approach lets the business measure whichever KPIs they find most important and track performance of different business components.

The other advantage is ERP offers the latest data in real-time. This means if an employee is trying to assess an issue, they don’t have outdated data to analyze and instead have the most accurate and up to date numbers to refer to. The customized reporting can help an organization make informed decisions, which is critical when the business environment is ever-changing.

Expand collaborations

The way that ERP solutions are built make for excellent collaboration across different departments. With integrated applications and data storage all under one solution, teams get a clear picture into how each is functioning and contributing to the business.

With the enterprise resource planning system in place, teams across the organization can communicate freely as they aren’t functioning on separate platforms. The integration on the back end is extremely important and helps employees work as one. With access to all data, an employee on a seemingly unrelated team might be able to point out a malfunction or something that cuts down on duplicate work. This expanded collaboration can improve decision-making, while providing a single source of truth for all data entry.

Greater sustainability

The fast-paced, ever-changing business world has seen a big emphasis on sustainability. C-suites are facing pressure from boards, investors, customers and others to regulate the negative impact of their carbon emissions.

To find out how organizations use ERP implementation to attain sustainability goals, the IBM Institute for Business Value (IBV) and SAP, in collaboration with Oxford Economics, surveyed more than 2,125 senior executives involved in their organizations’ environmental sustainability strategies—around the world and across industries. The surprising result: those who outperform their competition in both environmental and financial outcomes also boast the most deeply engaged ERP implementation.

Improve transparency and insights

One of the benefits of ERP is that it offers full access to every business function and process in an organization all in one place. With the implementation of ERP, data from every department can be accessed by executive-level employees. The ERP solution monitors data daily and can provide day-to-day information, helping an organization be as precise as possible when it comes to factors such as inventory levels and business operations.

The complete visibility ERP provides gives organization leaders better functional business insights and more accurate business forecasting. As a result, this can streamline tasks and make clearer, more concise workflows. In addition, having accurate forecasting models is a competitive advantage, as they allow for improved data-driven strategy and decision-making. As ERP can monitor each department and keep all data in one place, there’s an opportunity for more efficient processes and improved cross-collaboration. In addition, ERP can improve business data security across the whole organization for both on-premises and cloud-based ERP systems.

An example of the success of an ERP implementation is Neste, a market leader in renewable diesel, sustainable aviation fuel, and renewable polymers and chemicals based in Espoo, Finland. The company took a joint-team approach when it came to implementing its new ERP system. Neste worked with IBM Consulting™ for SAP to roll out the SAP S/4HANA solution on the Microsoft Azure cloud across most of its operations, including its renewables supply chains. Neste’s new ERP platform is enabling supply chain process efficiency improvements and making its data more transparent. “Among the most far-reaching benefits,” notes Neste Head of Integrated ERP, Marko Mäki-Ullakko, “is the ability to spot and resolve process inefficiencies.”

“We’ve been able to use SAP’s process discovery capabilities to spot supply chain and production bottlenecks,” he explained. “In that way, integrated SAP has been and will be a critical tool for our process optimization efforts.”

Increase flexibility and scalability

One of the unique features of ERP software is the inclusion of applications or modules across many different business needs. ERP applications, such as procurement, supply chain management, inventory and project management, are all separate applications offered under ERP.

ERP applications can stand on their own but can also be integrated in the entirety of the ERP system, making for easier scalability and configuration in an organization. By being able to add or take away applications, ERP can help scale a business as it evolves over time.

Scalability will look different depending on which ERP solution your organization chooses to use. If a business plans to grow rapidly over time, a cloud-based ERP system is the best choice since cloud ERP systems run on remote servers.

Increase productivity

By automating different tasks, ERP software frees up employees to work on more pertinent tasks and increases efficiency. The ERP system boosts productivity in a range of different ways that all stem from the automation of basic tasks and making processes more straightforward. With the streamlined approach from an ERP system, less time is dedicated to digging up information, allowing employees to perform other tasks faster. Manual data entry is not necessary, making tasks such as inventory management much easier and metrics tracking much simpler.

With a lens into the entire organization, employees are no longer tasked with tracking down the right data set or the employee who knows how a certain process works and can instead focus on more important tasks and projects. ERP solutions offer these features using technology, such as artificial intelligence (AI), machine learning, robotic process automation and more. These technologies support the automation and intelligent suggestion features in ERP software applications.

Reduce ongoing costs

The way an ERP solution is structured makes it so data input only occurs one time but can serve multiple purposes across the organization. This can result in saving the business time and money as it streamlines redundant tasks. The upfront costs and cost savings will also depend on which type of ERP solution you choose.

Without a centralized ERP software solution, organizations rely on numerous systems to run the business. The more systems, the higher the potential IT costs. An ERP system could potentially reduce those costs. Separately, it could also reduce training requirements for the end user, who would only need to learn one system. This could result in more profitability and fewer disruptions.

Standardize business processes

The purpose of implementing an ERP solution is to highlight and build from an organization’s best practices and consistencies. This allows you to streamline operations and standardize workflows, ultimately to reduce manual labor and human error across your business. Platforms such as customer relationship management (CRM) can simply be integrated into the ERP system.

ERP software offers many advantages, but standardization is one of the most important. By relying on standardization and configuration, organizations could also see reduced project costs and better cross-team collaboration with less friction.   

Disadvantages of ERP

Increase complexity

ERP is an all-encompassing business management tool, and it can be quite complex. The software can be exciting. Organizations can get caught up in that excitement and risk failing to make a well-thought-out plan for ERP implementation.

Some organizations may find the ERP solution too large and not well-suited to their needs and processes. This can result in a poor ROI and should be avoided if possible. The best way to avoid these pitfalls is to build role-based user training and simplify your ERP software to fit your organization’s needs.

Add short-term costs

There are multiple factors to consider when thinking about switching to ERP software. One of them is cost: not only the cost of the software, but the cost of the time and resources needed to implement the system and train employees across all departments.

Another aspect of cost is the ongoing operational costs required of an ERP solution, specifically an on-premises ERP solution. The best way to avoid this ongoing cost is to utilize a cloud-based ERP system, which is a Software-as-a-Service (SaaS) solution that can be run from any location.

One other factor to consider is the change management that is required when implementing an ERP system. ERP implementation requires changes to business processes and workflows. These changes are major investments in time and resources. When selecting ERP software, consider these factors and select the system type that best fits your organization’s needs.

More time-consuming

Since ERP is customizable, and not one-size-fits-all software, it can become very time-consuming. Customization is a huge advantage of the ERP solution, but it can be a challenge as it needs to be built from the ground up.

An implementation process takes time; organizations must prepare for a lengthy process. The time it takes to transfer to the ERP system depends on which legacy system is being used. The best way to avoid this issue is, again, to have an ERP implementation plan in place that is clear, concise and includes an assigned implementation team.

IBM and ERP

The migration from a legacy system to ERP software can be a huge undertaking no matter the size of the organization. When considering an ERP solution, it’s important to bring in experts to help run a smooth and transparent implementation plan.

IBM Consulting® experts can help your organization successfully migrate legacy ERP applications to the cloud, redesign processes to leverage data, AI and automation, and transform finance into a competitive advantage within your business.

SAP managed services for applications and ERP can help manage an organization’s workloads, giving you more time to focus on innovation and new opportunities. Managed services for SAP applications enable agility and resource optimization by supporting and optimizing underlying operational functions. Areas like security and compliance reporting, application management, and service delivery to lines-of-business become more predictable from a pricing, resource and workload perspective.

Explore SAP consulting services

Source: IBM Blockchain

Your Black Friday observability checklist

Black Friday—and really, the entire Cyber Week—is a time when you want your applications running at peak performance without completely exhausting your operations teams.

Observability solutions can help you achieve this goal, whether you’re a small team with a single product or a large team operating complex ecommerce applications. But not all observability solutions (or tools) are alike, and if you are missing just one key capability, it could cause customer satisfaction issues, slower sales and even top- and bottom-line revenue catastrophes.

The observability market is full of vendors, with different descriptions, features and support capabilities. This can make it difficult to distinguish what’s critical from what is just nice to have in your observability solution.

Here’s a handy checklist to help you find and implement the best possible observability platform to keep all your applications running merry and bright:

  • Complete automation. You need automatic capture to achieve a comprehensive real-time view of your application. A full-stack tool that can automatically observe your environment will minimize mean time to detection (MTTD) and prevent potential outages.
  • High-fidelity data. The most powerful use of data is the ability to contextualize. Without context, your team has no idea how big or small your problem is. Contextualizing telemetry data by visualizing the relevant information or metadata enables teams to better understand and interpret the data. This combination of accuracy and context helps teams make more informed decisions and pinpoint the root causes of issues.
  • Real-time change detection. Monitoring your entire stack with a single platform (from mainframes to mobile) can contribute to your growth. How? You can now see how transactions are zipping around across the internet, keeping the wheels of your commerce well lubricated. Another advantage of real-time detection is the visibility you gain when you connect your application components with your underlying infrastructure. This is important to your IT team’s success, as they now have the visibility of your stack and services and can map them to your dependency.
  • Mobile and website digital experience management. End-user, mobile, website and synthetic monitoring all enable you to improve the end-user experience. You should use an observability tool with real-user monitoring to deliver an exceptional experience for users and accommodate growth. This allows you to track real users’ interactions with your applications, while end-user monitoring captures performance data from the user’s perspective. Synthetic monitoring creates simulated user interactions to proactively identify potential issues, ensuring your applications meet user expectations and performance standards (a minimal sketch of the synthetic-probe idea follows this list). All three capabilities combined can: provide real-time insights into server performance and website load times; capture user interactions and provide detailed insights into user behavior; and monitor server loads and traffic distribution. This can automatically adjust load balancing configurations to distribute traffic evenly, preventing server overloads and ensuring a smooth shopping experience.
  • Built-in AI and machine learning. Having AI-assisted root cause analysis in your observability platform is crucial if you want to diagnose the root causes of issues or anomalies within a system or application automatically. This capability is particularly valuable in complex and dynamic environments where manual analysis might be time consuming and less efficient.
  • Visibility deep and wide. The true advantage of full stack lies in connecting your application components with the underlying infrastructure. This is critical for IT success because it grants visibility into your stack and services and maps them to dependencies.
  • Ease of use. An automated and user-friendly installation procedure minimizes the complexity of deployment.
  • Broad platform support. The solution should monitor popular cloud platforms (AWS, GCP, Microsoft Azure, IBM Cloud®) for both Infrastructure as a Service and Platform as a Service, with simplified installation.
  • Continuous production profiling. Profiles code issues when they occur for various programming languages, offering visibility into code-based performance hot spots and bottlenecks.
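
To ground the synthetic monitoring item above, here is a minimal sketch of what a synthetic probe does at its simplest: issue a scripted request on a schedule and record status and latency. The URL, thresholds and alert handling are placeholders; a real observability platform schedules, distributes and correlates such probes and ties them back to traces and infrastructure.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

// Simplified synthetic check: probes a hypothetical endpoint once a minute
// and flags slow or failing responses.
public class SyntheticCheck {

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(5))
                .build();
        HttpRequest request = HttpRequest.newBuilder(URI.create("https://shop.example.com/health"))
                .timeout(Duration.ofSeconds(10))
                .GET()
                .build();

        while (true) {
            long start = System.nanoTime();
            HttpResponse<Void> response =
                    client.send(request, HttpResponse.BodyHandlers.discarding());
            long latencyMs = (System.nanoTime() - start) / 1_000_000;

            // Placeholder thresholds; a platform would raise an alert here
            if (response.statusCode() >= 500 || latencyMs > 2_000) {
                System.out.println("ALERT status=" + response.statusCode() + " latencyMs=" + latencyMs);
            } else {
                System.out.println("OK status=" + response.statusCode() + " latencyMs=" + latencyMs);
            }
            Thread.sleep(60_000);
        }
    }
}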

In a market with detection gaps, 10 seconds is too long. Let this checklist guide you as you build a real-time full-stack observability solution that keeps your business running smoothly for the entire holiday season.

Request a demo to learn more

Source: IBM Blockchain

Pioneer of Google’s Data Centers Dies at 59

Luiz André Barroso

Data center pioneer

Senior member, 59; died 16 September

An engineer at Google for more than 20 years, Barroso is credited with designing the company’s warehouse-size data centers. They house hundreds of thousands of computer servers and disk drives and have enabled cloud computing, more powerful search engines, and faster Internet service. He died unexpectedly of natural causes.

Barroso was born in Brazil and earned bachelor’s and master’s degrees in 1989 in electrical engineering from Pontifical Catholic University of Rio de Janeiro. He then moved to Los Angeles, where he earned a Ph.D. in computer engineering in 1996 from the University of Southern California.

In 1995 he joined the Digital Equipment Corp. Western Research Laboratory, in Palo Alto, Calif., as a researcher specializing in microprocessor design. While there, he investigated how to build hardware to run more modern business applications and Web services. Three years later, the company was acquired by Compaq and his project was terminated.

He left Compaq in 2001 to join Google in Mountain View, Calif., as a software engineer.

The company housed its servers at leased space in third-party data centers, which were basically cages in which a few racks of computing equipment were placed. As Google’s business expanded, its need for infrastructure increased. In 2004 Barroso was tasked with investigating ways to build more efficient data centers.

He devised a way to use low-cost components and energy-saving techniques to distribute Google’s programs across thousands of servers, instead of the traditional method of relying on a few powerful, expensive machines.

The company’s first data center designed by Barroso opened in 2006 in The Dalles, Ore. It implemented fault-tolerance software and hardware infrastructure to make the servers less prone to disruption. Google now has 35 data centers in 10 countries, all drawing from Barroso’s groundbreaking techniques.

In 2009 Barroso co-authored The Data Center as a Computer: An Introduction to the Design of Warehouse-Scale Machines, a seminal textbook.

He also led the team that designed Google’s AI chips, known as tensor processing units or TPUs, which accelerated machine-learning workloads. He helped integrate augmented reality and machine learning into Google Maps.

At the time of his death, Barroso was a Google Fellow, the company’s highest rank for technical staff.

He also was an executive sponsor of the company’s Hispanic and Latinx employee group and oversaw a program that awarded fellowships to doctoral students in Latin America.

For his contributions to computing architecture, he received the 2020 Eckert-Mauchly Award, an honor given jointly by IEEE and the Association for Computing Machinery.

He was a Fellow of the ACM and the American Association for the Advancement of Science.

He served on the board of Rainforest Trust, a nonprofit dedicated to protecting tropical lands and conserving threatened wildlife. Just weeks before he died, Barroso organized and led a weeklong trip to Brazil’s Pantanal wetlands.

Read The Institute’s 2020 profile of him to learn more about his career journey.

Calyampudi Radhakrishna Rao

Former director of the Indian Statistical Institute

Honorary member, 102; died 23 August

Rao was onetime director of the Indian Statistical Institute, in Kolkata. The pioneering mathematician and statistician spent more than four decades at the organization, where he discovered two seminal results: the Cramér–Rao bound and the Rao–Blackwell theorem. The two results, which concern estimators—rules for calculating an estimate of a given quantity based on observed data—provided the basis for much of modern statistics.

For his discoveries, Rao received the 2023 International Prize in Statistics. The award is presented every two years to an individual or team for “major achievements using statistics to advance science, technology, and human welfare.”

Rao began his career in 1943 as a technical apprentice at the Indian Statistical Institute. He was promoted the following year to superintending statistician. Two years later, he published a paper in the Bulletin of the Calcutta Mathematical Society, demonstrating two fundamental statistical concepts still heavily used in the field today. The Cramér-Rao bound helps statisticians determine the quality of any estimation method. The Rao-Blackwell theorem provides a means for optimizing estimates.

Rao’s work formed the basis of information geometry, an interdisciplinary field that applies the techniques of differential geometry to study probability theory and statistics.

Rao was a professor at the ISI’s research and training school before being promoted to director in 1964—a position he held for 12 years.

He moved to the United States in the 1980s to join the University of Pittsburgh as a professor of mathematics and statistics. He left Pittsburgh eight years later to teach at Pennsylvania State University in State College, where in 2001 he became director of its multivariate analysis center. Multivariate statistics are data analysis procedures that simultaneously consider more than two variables.

After nine years at Penn State he moved to New York, where he was a research professor at the University at Buffalo until shortly before he died.

Rao authored more than 14 books and 400 journal articles during his career. He received several awards for his lifetime contributions, including 38 honorary doctoral degrees from universities in 19 countries.

In 2010 he was honored with the India Science Award, the highest honor given by the government of India in the scientific sector. He received the 2002 U.S. National Medal of Science, the country’s highest award for lifetime achievement in scientific research.

He was nominated in 2013 for a Nobel Peace Prize for his contributions to the International Encyclopedia of Statistical Science. Last year he was named an honorary member of IEEE.

Rao received a master’s degree in mathematics in 1940 from Andhra University, in Visakhapatnam. Three years later he earned a master’s degree in statistics from the University of Calcutta. He went on to receive a Ph.D. in statistics from King’s College Cambridge in 1945 and a doctor of science degree from the University of Cambridge in 1965.

Herbert William Zwack

Former U.S. Naval Research Laboratory associate superintendent

Life member, 88; died 14 March

Zwack led electronic warfare research programs at the U.S. Naval Research Laboratory, in Washington, D.C., where he worked for more than two decades.

After receiving a bachelor’s degree in electrical engineering in 1955 from the Polytechnic Institute of Brooklyn (now the New York University Tandon School of Engineering), in New York City, he joined Hazeltine (now BAE Systems). At the defense electronics company, located in Greenlawn, N.Y., he helped develop the Semi-Automatic Ground Environment (SAGE), the first U.S. air defense system. He also created the Mark XII IFF, a radar system designed to detect enemy aircraft.

In 1958 he left to join Airborne Instruments Laboratory, a defense contractor in Mineola, N.Y. At AIL, he was involved in electronic warfare systems R&D. He later was promoted to head of the analysis receiver department, and he led the development of UHF and microwave intercept analysis receivers for the U.S. Army.

He accepted a new position in 1970 as head of the advanced development department in the Amecom Division of Litton Industries, a defense contractor in College Park, Md. He helped develop technology at Litton to intercept and analyze radar signals, including the AN/ALR-59 (later the AN/ALR-73) passive detection system for the U.S. Navy E-2 Hawkeye aircraft.

Two years later he left to join the Tactical Electronic Warfare Division of the Naval Research Laboratory, in Washington, D.C., as head of its remote-sensor department. He was responsible for hiring new technical staff and securing research funding.

By 1974, he was promoted to head of the laboratory’s electronic warfare systems branch, leading research in areas including advanced miniature antenna and receiver programs, intelligence collection and processing systems, and high-speed signal sorting.

In 1987 he was promoted to associate superintendent of the Tactical Electronic Warfare Division, a position he held until he retired in 1995.

Randall W. Pack

Nuclear and computer engineer

Life member, 82; died 2 December 2022

Pack was a nuclear power engineer until the late 1990s, when he shifted his focus to computer engineering.

He served in the U.S. Navy for eight years after receiving a bachelor’s degree in engineering in 1961 from Vanderbilt University, in Nashville. While enlisted, he studied at the U.S. Naval Nuclear Power Training Command, in Goose Creek, S.C., and the U.S. Naval Submarine School, in Pensacola, Fla. After completing his studies in 1964, he served as chief engineer on two Navy nuclear submarines including the USS Sam Rayburn.

He left the Navy and earned master’s and doctoral degrees in nuclear engineering from the University of California, Berkeley. In 1974 he joined the Electric Power Research Institute, in Palo Alto, Calif., as a technical expert in nuclear reactor design, testing, operations, maintenance, instrumentation, and safety.

In 1980 he began work as a researcher at the Institute of Nuclear Power Operations, in Atlanta. Seven years later he joined the General Physics Corp. (now GP Strategies), in Columbia, Md., where he worked for 10 years.

Pack decided to switch careers and at night took graduate courses at Johns Hopkins University, in Baltimore. After graduating in 1997 with a master’s degree in computer science, he left General Physics and became a computer science consultant. He retired in 2008.

From 2008 to 2022, he served as an adjunct professor at Anne Arundel Community College, in Arnold, Md., where he taught courses for the school’s Peer Learning Partnership, an enrichment program for older adults.

Source: IEEE SPECTRUM NEWS

Level up your Kafka applications with schemas

Apache Kafka is a well-known open-source event store and stream processing platform and has grown to become the de facto standard for data streaming. In this article, developer Michael Burgess provides an insight into the concept of schemas and schema management as a way to add value to your event-driven applications on the fully managed Kafka service, IBM Event Streams on IBM Cloud®.

What is a schema?

A schema describes the structure of data.

For example:

A simple Java class modelling an order of some product from an online store might start with fields like:

public class Order {

    private String productName;

    private String productCode;

    private int quantity;

    […]

}

If order objects were being created using this class, and sent to a topic in Kafka, we could describe the structure of those records using a schema such as this Avro schema:

{
"type": "record",
"name": “Order”,
"fields": [
{"name": "productName", "type": "string"},
{"name": "productCode", "type": "string"},
{"name": "quantity", "type": "int"}
]
}

Why should you use a schema?

Apache Kafka transfers data without validating the information in the messages. It has no visibility into what kind of data is being sent and received, or what data types the messages might contain. Kafka does not examine the metadata of your messages.

One of the functions of Kafka is to decouple consuming and producing applications, so that they communicate via a Kafka topic rather than directly. This allows them to each work at their own speed, but they still need to agree upon the same data structure; otherwise, the consuming applications have no way to deserialize the data they receive back into something with meaning. The applications all need to share the same assumptions about the structure of the data.

In the scope of Kafka, a schema describes the structure of the data in a message. It defines the fields that need to be present in each message and the types of each field.

This means a schema forms a well-defined contract between a producing application and a consuming application, allowing consuming applications to parse and interpret the data in the messages they receive correctly.
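
As a minimal illustration of that contract, the sketch below uses the Apache Avro library directly (a registry-aware serializer would normally handle this for you) to show a producer-side serialization and a consumer-side deserialization that both depend on the same Order schema; the product values are made up for the example.

import java.io.ByteArrayOutputStream;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.BinaryDecoder;
import org.apache.avro.io.BinaryEncoder;
import org.apache.avro.io.DecoderFactory;
import org.apache.avro.io.EncoderFactory;

public class OrderSchemaDemo {

    static final String ORDER_SCHEMA = """
        {"type": "record", "name": "Order", "fields": [
          {"name": "productName", "type": "string"},
          {"name": "productCode", "type": "string"},
          {"name": "quantity", "type": "int"}
        ]}""";

    public static void main(String[] args) throws Exception {
        Schema schema = new Schema.Parser().parse(ORDER_SCHEMA);

        // Producer side: build a record that conforms to the schema and serialize it
        GenericRecord order = new GenericData.Record(schema);
        order.put("productName", "Widget");
        order.put("productCode", "W-100");
        order.put("quantity", 3);

        ByteArrayOutputStream out = new ByteArrayOutputStream();
        BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(out, null);
        new GenericDatumWriter<GenericRecord>(schema).write(order, encoder);
        encoder.flush();
        byte[] messageValue = out.toByteArray(); // this is what would be sent to the topic

        // Consumer side: the same schema is needed to make sense of the bytes
        BinaryDecoder decoder = DecoderFactory.get().binaryDecoder(messageValue, null);
        GenericRecord received = new GenericDatumReader<GenericRecord>(schema).read(null, decoder);
        System.out.println(received.get("productName") + " x " + received.get("quantity"));
    }
}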

What is a schema registry?

A schema registry supports your Kafka cluster by providing a repository for managing and validating schemas within that cluster. It acts as a database for storing your schemas and provides an interface for managing the schema lifecycle and retrieving schemas. A schema registry also validates evolution of schemas.

Optimize your Kafka environment by using a schema registry.

A schema registry is essentially an agreement on the structure of your data within your Kafka environment. By having a consistent store of the data formats in your applications, you avoid common mistakes that can occur when building applications, such as poor data quality and inconsistencies between your producing and consuming applications that may eventually lead to data corruption. Having a well-managed schema registry is not just a technical necessity; it also contributes to the strategic goal of treating data as a valuable product and helps tremendously on your data-as-a-product journey.

Using a schema registry increases the quality of your data and ensures data remains consistent, by enforcing rules for schema evolution. So as well as ensuring data consistency between produced and consumed messages, a schema registry ensures that your messages will remain compatible as schema versions change over time. Over the lifetime of a business, it is very likely that the format of the messages exchanged by the applications supporting the business will need to change. For example, the Order class in the example schema we used earlier might gain a new status field, or the product code field might be replaced by a combination of department number and product number, or other similar changes might be made. The result is that the schema of the objects in our business domain is continually evolving, so you need to be able to ensure agreement on the schema of messages in any particular topic at any given time.

There are various patterns for schema evolution:

  • Forward Compatibility: where the producing applications can be updated to a new version of the schema, and all consuming applications will be able to continue to consume messages while waiting to be migrated to the new version.
  • Backward Compatibility: where consuming applications can be migrated to a new version of the schema first, and are able to continue to consume messages produced in the old format while producing applications are migrated.
  • Full Compatibility: when schemas are both forward and backward compatible.

A schema registry is able to enforce rules for schema evolution, allowing you to guarantee either forward, backward or full compatibility of new schema versions and preventing incompatible schema versions from being introduced.
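
To make the backward-compatible case concrete, here is a minimal sketch using plain Apache Avro schema resolution (the added status field and its default are illustrative, not a prescribed design): a consumer that has already migrated to the newer schema can still read a message produced with the original one, because the missing field is filled in from its default.

import java.io.ByteArrayOutputStream;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.BinaryDecoder;
import org.apache.avro.io.BinaryEncoder;
import org.apache.avro.io.DecoderFactory;
import org.apache.avro.io.EncoderFactory;

public class BackwardCompatibilityDemo {

    static final String V1 = """
        {"type": "record", "name": "Order", "fields": [
          {"name": "productName", "type": "string"},
          {"name": "productCode", "type": "string"},
          {"name": "quantity", "type": "int"}
        ]}""";

    // v2 adds a field with a default, so v2 consumers can still read v1 messages
    static final String V2 = """
        {"type": "record", "name": "Order", "fields": [
          {"name": "productName", "type": "string"},
          {"name": "productCode", "type": "string"},
          {"name": "quantity", "type": "int"},
          {"name": "status", "type": "string", "default": "NEW"}
        ]}""";

    public static void main(String[] args) throws Exception {
        Schema writerSchema = new Schema.Parser().parse(V1);
        Schema readerSchema = new Schema.Parser().parse(V2);

        // A message produced in the old (v1) format
        GenericRecord v1Order = new GenericData.Record(writerSchema);
        v1Order.put("productName", "Widget");
        v1Order.put("productCode", "W-100");
        v1Order.put("quantity", 3);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(out, null);
        new GenericDatumWriter<GenericRecord>(writerSchema).write(v1Order, encoder);
        encoder.flush();

        // A consumer already migrated to v2 resolves the old bytes against the new schema
        BinaryDecoder decoder = DecoderFactory.get().binaryDecoder(out.toByteArray(), null);
        GenericRecord resolved =
                new GenericDatumReader<GenericRecord>(writerSchema, readerSchema).read(null, decoder);
        System.out.println(resolved.get("status")); // prints the default: NEW
    }
}

A registry configured for backward compatibility would accept this second version, but it would reject a change such as adding a required field without a default, because messages in the old format could no longer be resolved against the new schema.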

By providing a repository of versions of schemas used within a Kafka cluster, past and present, a schema registry simplifies adherence to data governance and data quality policies, since it provides a convenient way to track and audit changes to your topic data formats.

What’s next?

In summary, a schema registry plays a crucial role in managing schema evolution, versioning and the consistency of data in distributed systems, ultimately supporting interoperability between different components. Event Streams on IBM Cloud provides a Schema Registry as part of its Enterprise plan. Ensure your environment is optimized by utilizing this feature on the fully managed Kafka offering on IBM Cloud to build intelligent and responsive applications that react to events in real time.

  • Provision an instance of Event Streams on IBM Cloud here.
  • Learn how to use the Event Streams Schema Registry here.
  • Learn more about Kafka and its use cases here.
  • For any challenges in set up, see our Getting Started Guide and FAQs.

Source: IBM Blockchain