Many organizations look for platform solutions to apply similar levels of standardization and control to AI implementations that they have previously required for application development and lifecycle management. Additionally, enterprises with strict cybersecurity concerns need to develop and deploy AI and ML with the necessary governance and management tools that enable safe but scalable implementations.
Red Hat applies its stack — from Red Hat Enterprise Linux to OpenShift to Red Hat’s AI inference, agentic, and fine-tuning capabilities — to support predictive and generative AI (genAI) development and deployment. Red Hat AI is a portfolio that centralizes AI monitoring, management, and tooling across the entire model lifecycle, from data ingest to training to ongoing management. Red Hat AI includes Red Hat AI enterprise for organizations looking to deploy and scale efficiently anywhere, Red Hat AI Inference Server for optimized inference of LLMs, Red Hat OpenShift AI for distributed Kubernetes platform environments, and Red Hat Enterprise Linux AI for individual Linux server environments.
Red Hat commissioned Forrester Consulting to conduct a Total Economic Impact™ (TEI) study and examine the potential return on investment (ROI) enterprises may realize by deploying Red Hat AI.1 The purpose of this study is to provide readers with a framework to evaluate the potential financial impact of Red Hat AI on their organizations.
233%
Return on investment (ROI)
$4.4M
Net present value (NPV)
To better understand the benefits, costs, and risks associated with this investment, Forrester interviewed four decision-makers with experience using Red Hat AI. For the purposes of this study, Forrester aggregated the experiences of the interviewees and combined the results into a single composite organization, which is an industry-agnostic organization that places security concerns and operational efficiencies as top priorities in its approach to on-premises AI model development and deployment.
Interviewees said that prior to using Red Hat AI, their organizations had AI tools in place (including hyperscalers) that either could not deploy AI workloads on-premises or did not offer comprehensive functionality and security to deploy AI on-premises. Additionally, prior tools were not standardized or centralized to enable AI deployments at an enterprise scale. These limitations led to version control issues, security concerns, and restrictions on both compute resources and power.
After the investment in Red Hat AI, the interviewees standardized their approach to AI development and deployment to recoup key operational efficiencies, realize cost savings, and scale deployments to generate business impacts. Key results from the investment include infrastructure savings from optimizing GPU utilization, time savings for data scientists and AI engineers from using more accurate AI models that required less training and rework, and accelerated AI development timelines through more efficient tool and environment provisioning and setup. The resulting time and cost savings enabled organizations to scale AI deployments to meet business demands while maintaining highly secure and protected environments and ensuring performance standards.
Key Findings
Quantified benefits. Three-year, risk-adjusted present value (PV) quantified benefits for the composite organization include:
GPU utilization improves from 30% to 80%, saving $3 million by Year 3. IT operations teams benefit from more transparency into resource capacity and utilization from the centralized view of AI infrastructure and operations Red Hat AI provides. The composite organization makes more informed decisions regarding GPU allocation to better utilize existing GPU and scale incrementally to continue meeting business objectives. Over three years, GPU utilization improvement avoids $3 million in total additional GPU cost for the organization.
Model training and customization time decreases by 60%, unlocking $2.5 million in productivity by Year 3. Data scientists and other Red Hat AI users benefit from self-service access to the latest models and resources to drive more efficient AI model development. Additionally, models built with Red Hat AI perform better in response rates and accuracy levels. As such, users spend less time on model training and rework, with the time savings compounding year over year as model accuracy continues to improve. Over three years and for the 80 data scientists that make up the Red Hat AI users, the time savings are worth more than $2.5 million to the composite organization.
MLOps provisioning time decreases by 75%, accelerating 400 AI projects by Year 3. Red Hat AI standardizes the approach to tool provisioning and environment preparation for AI model development, reducing time spent by MLOps teams and arming users with the necessary tools to meet business needs faster. The composite organization preapproves the included tools, and new automations ensure it deploys models in a secure and repeatable manner. Eliminating approval processes and increasing automation reduces MLOps team time spent on provisioning by 75%, accelerating time to development by more than two days per project. Over three years and across a cumulative total of 400 AI projects, the MLOps team time savings are worth more than $407,000 to the composite organization.
Accelerating customer-facing, AI-enabled capability productization drives up to 2% in annual profit growth by Year 3. In addition to the MLOps team’s efficiencies, the accelerated development timelines enable the composite organization to deploy more AI models to production to drive business value. From an infrastructure perspective, the organization also sees better utilization of GPU to ensure AI models are high-performing, especially those deployed to external or customer-facing use cases. Over three years, the 0.5% to 2% improvement to the top line results in more than $317,000 of additional revenue for the composite organization.
Unquantified benefits. Benefits that provide value for the composite organization but are not quantified for this study include:
Consolidating application and AI infrastructure reduces management time by up to 60%. Although the composite organization is a previous OpenShift customer, new customers have an opportunity to further reduce infrastructure management time by consolidating more of their infrastructure stack to Red Hat. Organizations can redistribute time savings on environment setup and monitoring to manage the larger, more complex environment that now includes AI development platforms and deployment tools with the same or fewer resources.
Delivering timely access to AI tools boosts productivity and mitigates the risk of shadow IT. Providing AI development and deployment tools to data scientists in a timely manner reduces the risk of those users turning to unsanctioned tools and personal environments.
Enhancing model security and facilitating governance improves transparency. Red Hat AI consolidates the view of AI infrastructure and operations to increase transparency, consistency, and control over the tools, resources, and models used to facilitate governance efforts. Additionally, increased transparency makes it easier to address current internal audit requirements and prepare for future AI regulations. Finally, Red Hat AI makes it possible to develop and deploy AI on-premises to meet enterprise security and data management compliance requirements without restricting data access for data scientists, developers, and AI engineers.
Self-service capabilities improve user experience and stakeholder relationships. Self-service capabilities and a more democratic approach to resource utilization improve the relationship between users and the MLOps and IT ops teams that support them. This also establishes better relationships between data scientists or AI engineers and business stakeholders, as data scientists and AI engineers have what they need to deploy even high-complexity AI models at scale and better meet business demands.
Costs. Three-year, risk-adjusted PV costs for the composite organization include:
Total fees paid to Red Hat of $1.3 million for upgrading to Red Hat AI. As a prior OpenShift customer, the composite organization upgrades both its infrastructure and licensing to include Red Hat AI. The upgrade costs the organization $160,000 annually in additional licenses and infrastructure. Red Hat deploys engineers to assist in the upgrade from the OpenShift platform to Red Hat AI. Dedicated support, infrastructure design, and implementation work cost the organization $500,000 for the three-month project. Additionally, the organization pays Red Hat for initial training of a small subset of users. The fees paid to Red Hat for the investment total $1.3 million over three years.
Internal resource time spent on implementation and ongoing management. Internal resources, including an eight-person MLOps team, work with Red Hat engineers to design and implement Red Hat AI. The MLOps team joins a weekly standing meeting with the Red Hat team to discuss the Red Hat AI product roadmap and the organization’s internal AI strategy. Data scientists or AI engineers dedicate a small amount of time to training on Red Hat AI, but the time is moderate as the users are familiar with the OpenShift interface and Red Hat infrastructure.
The financial analysis, which is based on the interviews, found that the composite organization experiences benefits of $6.2 million over three years versus costs of $1.9 million, adding up to a net present value (NPV) of $4.4 million and an ROI of 233%.
Avoided additional GPU cost from better utilization of existing resources over three years
$3 million
“Shift left is our first philosophy and [Red Hat AI] aligns with that philosophy because I can easily manage everything, including both infrastructure and AI, through software development practices. The mentality is simple, but the value is strong.”
Head of DevOps, financial services
“I wanted to show very quickly that we were able to build and deploy AI at scale, so I needed a solution that was pragmatic and that worked. But at the same time, I wanted it to be a solution that would last for many years.”
AI innovation manager, manufacturing
Key Statistics
233%
Return on investment (ROI)
$6.2M
Benefits PV
$4.4M
Net present value (NPV)
13 months
Payback
Benefits (Three-Year)
[Chart: three-year benefits by category — Infrastructure management: Cost savings from improved GPU utilization; Data scientist time savings; MLOps efficiencies; Business value: Top-line impact]
Red Hat AI Use Cases
How Customers Scaled AI Deployments To Meet Business Needs With Red Hat AI
The interviewees’ organizations implemented Red Hat AI to meet their AI strategy goals safely by standardizing the development approach and consolidating their view of related infrastructure, tools, and models. As such, the interviewees could scale AI projects, introduce more complex AI and ML capabilities, and deploy them to production without sacrificing performance. AI project volumes and use cases varied across interviewees’ organizations. Examples include:
The head of DevOps at a financial services organization indicated that their organization had deployed more than 100 models in production since implementing Red Hat AI more than one year ago. The models included both predictive and generative AI. The most prominent example was an AI application built to support the loan approval process with a focus on mitigating risk associated with faulty decision-making. This interviewee stated: “Our goal is to use AI to support decisions, not to make decisions. We still want and need the human resource to make the decision.”
The AI innovation manager at a manufacturing organization indicated that their organization’s AI strategy “really focused on AI for business efficiency rather than for engineering.” They plan to bring internal product and engineering use cases to Red Hat AI eventually, but the current priority is building models to support business processes. To that end, in the one year since go-live with Red Hat AI, the organization developed 100 AI projects and deployed about five or six to production. The use cases at the manufacturing organization span both quality control and testing. It had a standout model meant to reduce testing lead time from several months to mere minutes, with the goal of identifying issues more quickly and improving overall safety. In the same vein, the organization deployed a model to the manufacturing process to measure nonconformities, with the goal of detecting root causes by comparing similarities across those nonconformities. The business objective for this model was to save money on replacement parts and other costs associated with maintenance cycles.
The CIO at a government organization indicated that they had 150 AI projects in progress, about 30% of which were in production. Those in production included a forecasting model that pinpointed better resource allocation at various regional levels; a model that referenced metadata and internal claims and case data to identify instances of waste, fraud, and abuse of materials; a predictive and prescriptive model that measured data from various sources to optimize operations; and a research journal repository model that utilized AI to search for, quantify, and summarize research.
The CEO at a telecommunications organization had thousands of AI models in production, including a critical customer tracking system that pulled together various data sources to reduce incidents and customer service failures.
The Red Hat AI Customer Journey
Drivers leading to the AI investment
Interviews

| Role | Industry | Region | Annual revenue (USD) | Employees | Red Hat AI users | Prior Red Hat OS customer? | Deployment and scale |
|------|----------|--------|----------------------|-----------|------------------|----------------------------|----------------------|
| Head of DevOps | Financial services | EMEA | $1.3B | 14,000 | 120 | No | On-premises |
| AI innovation manager | Manufacturing | EMEA | $81B | 156,000 | 70 | Yes | On-premises; six GPU with plans to double within the year |
| CEO | Telecommunications | Latin America | $130M | 600 | 80 | Yes | On-premises; one GPU |
| CIO | Government | North America | N/A | 80,000 | 50 | Yes | Hybrid; two GPU with plans to quadruple within the year |
Key Challenges
Prior to Red Hat AI, interviewees noted that their organizations lacked standard platform approaches to AI model development and deployment. As a result, they struggled with common challenges, including:
Infrastructure restrictions limited AI at scale. Prior to the investment in Red Hat AI, interviewees’ organizations operated without centralized views of their AI infrastructure. IT ops spent time running reports to understand model contingencies and versioning as well as monitoring resource capacity and utilization. Infrastructure obscurity meant that organizations could not plan for or appropriately assign resources including employees and GPUs, which increased the cost to build AI models and inhibited performance. Governance also suffered as frustrated data scientists sought their own, unapproved AI tools and used other internal environments to meet business stakeholder demands. These activities increased tool sprawl and added further complexities to the infrastructure, making it even more difficult to manage.
Extended AI development timelines impeded time to value. In addition to infrastructure restrictions, AI tool provisioning also delayed development timelines prior to the investment in Red Hat AI. Without standardized approaches to AI development, the interviewees’ organizations lacked unified deployment processes for AI models including pipelines, tools, and workbenches. As such, MLOps teams struggled to keep up with data scientist demands, and organizations saw provisioning timelines extend to multiple days. This, in turn, further pushed data scientists toward shadow IT and AI.
Security considerations required on-premises deployments. Many of the interviewees’ organizations operated in industries with strict data privacy and protection regulations. As such, these organizations required the ability to deploy AI models on-premises to meet security requirements while also providing data scientists and developers with access to critical data sources during AI model development. Many of the AI development tools used before Red Hat AI lacked comprehensive functionality for on-premises deployments or simply could not be deployed at all. Although security concerns drove the immediate need for on-premises deployments, many organizations sought platforms that could be provisioned for more flexible deployment options in the future, including cloud and edge.
Investment Objectives
The interviewees searched for a solution that could:
Standardize AI development to deploy at scale.
Centralize monitoring and management to improve governance.
Improve utilization of existing resources, including compute and GPU capacity, to meet cost requirements.
Deploy AI on-premises to meet compliance restrictions.
After evaluating multiple vendors, the interviewees’ organizations chose Red Hat AI due to:
Flexible deployment options to meet current on-premises requirements and provision for cloud deployments and a hybrid future.
Infrastructure that meets data privacy and cybersecurity requirements.
Standard approach to AI development and deployment to improve repeatability and better meet data scientist needs.
Transparency into infrastructure and operations to improve efficiencies as well as utilization and capacity management, which resulted in slower rates of GPU investment due to more efficient inference.
Centralized view of AI infrastructure and audit capabilities to improve governance.
“[As an organization], we must manage sensitive data with different classifications. It is a major trigger and a major driver of having an on-prem platform [for AI development]. We cannot go on cloud with most of our documentation and most of our data sources. So we had to go on-prem and have it strictly governed so that we know what data comes in and goes out.”
AI innovation manager, manufacturing
“I would argue Red Hat provided more of a unified stack for AI development instead of what we had before, which was a lot of disjointed stacks and a separate infrastructure for workloads including different pipelines and other tools. Basically, it resulted in slow model development.”
CIO, government
Composite Organization
Based on the interviews, Forrester constructed a TEI framework, a composite company, and an ROI analysis that illustrates the areas financially affected. The composite organization is representative of the interviewees’ organizations, and it is used to present the aggregate financial analysis in the next section. The composite organization has the following characteristics:
Description of composite. The global, hundred-million-dollar, industry-agnostic organization puts security concerns and operational efficiencies at the top of its AI development and deployment priorities. The organization has 3,000 total employees; 80 are data scientists or AI engineers who become Red Hat AI users, and a team of eight MLOps resources manages the Red Hat implementation and ongoing user provisioning of AI tools and workbenches.
Deployment characteristics. The organization is a prior OpenShift customer that upgrades to Red Hat AI to deploy AI models on-premises. The organization begins in Year 1 with four GPU server nodes, with two GPU per node to total eight GPU. It builds 100 different AI projects, 20% of which go live in a production environment. The organization continues to scale AI development and deployment and doubles GPU capacity and total supported AI projects year over year.
KEY ASSUMPTIONS
$100 million revenue
3,000 employees
80 Red Hat AI users
Eight GPU in Year 1
100 AI projects in Year 1
Analysis Of Benefits
Quantified benefit data as applied to the composite
Total Benefits

| Ref. | Benefit | Year 1 | Year 2 | Year 3 | Total | Present Value |
|------|---------|--------|--------|--------|-------|---------------|
| Atr | Infrastructure management: Cost savings from improved GPU utilization | $540,000 | $1,080,000 | $2,160,000 | $3,780,000 | $3,006,311 |
| Btr | Data scientist time savings | $816,000 | $1,020,000 | $1,224,000 | $3,060,000 | $2,504,403 |
| Ctr | MLOps efficiencies | $125,843 | $125,843 | $251,685 | $503,370 | $407,499 |
| Dtr | Business value: Top-line impact | $57,000 | $114,000 | $228,000 | $399,000 | $317,333 |
|  | Total benefits (risk-adjusted) | $1,538,843 | $2,339,843 | $3,863,685 | $7,742,370 | $6,235,546 |
Infrastructure Management: Cost Savings From Improved GPU Utilization
Evidence and data. Prior to the investment in Red Hat AI, the interviewees’ organizations lacked standard approaches to AI development and infrastructure platforms to manage the necessary resources. Interviewees said their organizations sought to scale the complexity and volumes of AI models in production environments and required more GPUs each year to do so. However, their legacy infrastructures provided limited ability to monitor and manage GPU usage, resulting in low GPU utilization rates despite additional spending. With Red Hat AI, their organizations gained better views of where and how GPUs were being used to make smarter allocation decisions. This visibility improved GPU utilization and enabled their organizations to develop and deploy more AI models with existing resources before purchasing additional GPUs.
In addition to cost savings, more transparency into GPU utilization meant that interviewees’ organizations were better able to meet business needs as they changed over time. The head of DevOps at a financial services organization stated: “[Flexible infrastructure resources are] not only a cost saver but also a way to do more with the power we have. This gives us flexibility to change resources depending on our priorities. We don’t have unlimited power; we must use limited on-prem resources. Infrastructure management therefore becomes much more meaningful to meet the business objectives of the financial institution and deliver value.”
This same interviewee also described how Red Hat AI provided more transparency to optimize existing GPUs. They said: “Because OpenShift uses Kubernetes natively, it brings us the ability to manage our resources easily. For example, if you are working on a single model for one month and you need data access for the month, I can change the resources and I can assign the necessary internal resources and GPU to complete the development.”
The CIO at a government organization stated that prior to Red Hat AI, their average utilization rate per GPU was around 30% to 40%. With Red Hat AI, that utilization increased to between 70% and 80%. Since each GPU [server] cost upward of $400,000, the increase resulted in significant cost savings for their organization.
The same interviewee noted that GPU utilization did not come at the price of model performance and pointed to model latency improvements of 10% to 20% in parallel with making their GPU utilization more efficient. They stated: “With Red Hat AI, you can visualize latency trends and see how models are responding to maintain target ranges. And I can reuse all my existing OpenShift governance and monitoring stacks, so depending on what I want to bring in [in terms of data sources and performance targets], I end up with lower latency, no egress costs.”
Modeling and assumptions. Based on the interviews, Forrester assumes the following about the composite organization:
The composite organization calculates that it needs four GPU AI server nodes (two GPU per node to total eight GPU) in the first year of the investment to meet business goals. It also anticipates that the volume of GPUs required will double in each year of the investment as it builds more complex models and deploys them to production.
Prior to Red Hat AI, the organization has a GPU utilization rate of 30%, which means it must purchase more GPU servers to meet the scale and performance expectations set by the business.
Each GPU server node costs $300,000.
With Red Hat AI, the organization improves GPU utilization to 70% in Year 1 and up to 80% by Year 3.
As a result, the organization purchases fewer GPU servers each year to meet business and AI development goals.
Risks. Infrastructure cost savings will vary depending on the following:
The business goals and associated AI strategy set at the organization will determine the volume of GPU servers expected annually.
The AI development maturity and prior environment will determine the starting utilization rate for GPU servers.
GPU costs may vary depending on market fluctuations and providers.
Results. To account for these risks, Forrester adjusted this benefit by 10%, yielding a three-year, risk-adjusted total PV (discounted at 10%) of $3.0 million.
80%
GPU utilization rate in Year 3
Infrastructure Management: Cost Savings From Improved GPU Utilization

| Ref. | Metric | Source | Year 1 | Year 2 | Year 3 |
|------|--------|--------|--------|--------|--------|
| A1 | GPU servers calculated annually to meet business demands | Composite | 4 | 8 | 16 |
| A2 | GPU utilization rate before Red Hat AI | Interviews | 30% | 30% | 30% |
| A3 | Additional GPU servers required with prior utilization rate (rounded) | A1*(1-A2) | 3 | 6 | 11 |
| A4 | Cost per GPU server annually | Composite | $300,000 | $300,000 | $300,000 |
| A5 | Additional cost for GPUs with prior utilization rate | A3*A4 | $900,000 | $1,800,000 | $3,300,000 |
| A6 | GPU utilization rate after Red Hat AI | Interviews | 70% | 75% | 80% |
| A7 | Additional GPU servers required with current utilization rate (rounded) | A1*(1-A6) | 1 | 2 | 3 |
| A8 | Additional cost for GPUs with current utilization rate | A4*A7 | $300,000 | $600,000 | $900,000 |
| At | Infrastructure management: Cost savings from improved GPU utilization | A5-A8 | $600,000 | $1,200,000 | $2,400,000 |
|  | Risk adjustment | ↓10% |  |  |  |
| Atr | Infrastructure management: Cost savings from improved GPU utilization (risk-adjusted) |  | $540,000 | $1,080,000 | $2,160,000 |

Three-year total: $3,780,000

Three-year present value: $3,006,311
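The year-by-year arithmetic behind this benefit can be reproduced in a minimal Python sketch, assuming server counts round to the nearest whole unit and cash flows are discounted at 10% per year:

```python
# A1: GPU server nodes needed annually; A4: cost per server node.
servers_needed = [4, 8, 16]
cost_per_server = 300_000
util_before = 0.30                 # A2: utilization before Red Hat AI
util_after = [0.70, 0.75, 0.80]    # A6: utilization after Red Hat AI

savings = []
for year, n in enumerate(servers_needed):
    extra_before = round(n * (1 - util_before))      # A3: extra servers at 30% utilization
    extra_after = round(n * (1 - util_after[year]))  # A7: extra servers at improved utilization
    avoided = (extra_before - extra_after) * cost_per_server  # At: avoided GPU cost
    savings.append(avoided * 0.90)                   # risk adjustment, down 10%

print(savings)  # [540000.0, 1080000.0, 2160000.0]

# Present value at a 10% annual discount rate.
pv = sum(s / 1.10 ** (year + 1) for year, s in enumerate(savings))
print(round(pv))  # 3006311
```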
Data Scientist Time Savings
Evidence and data. With Red Hat AI, users at the interviewees’ organizations benefited from self-service access to the latest models and resources. Additionally, the AI development platform provided access to predictive and generative AI-specific capabilities, including frameworks. Therefore, platform users accelerated time to begin development and benefited from building better, more accurate models. As a result, users at interviewees’ organizations spent less time on model training and rework. This time savings compounded year over year as the quality of the models continued to improve. The interviewees’ organizations then redistributed the time savings to building more complex AI models.
The head of DevOps at a financial services organization indicated that data scientists were responsible for more than just enabling model creation. Rather than spending time creating the workbench and storing the necessary data, they wanted to focus on more interesting modeling and development work. The DevOps leader said: “In 10 minutes, if you want to develop a new model, your repository is ready, your workbench is ready, and your infrastructure resources are available. Your working environment is ready and totally isolated. The data scientists reallocate that time to develop more accurate AI and ML models. They can start more projects, or they can take more time for themselves to upskill or hone existing skills.”
The CIO at a government organization estimated that their users saved 40% to 50% of their time with Red Hat AI by having access to a familiar set of tools and environments that required less of a learning curve. They stated: “Efficiencies for data scientists come from having full visibility and control and that they can leverage all existing Kubernetes and OpenShift skills. So with other [AI tools], the data scientists would have to spend time learning different cloud UIs, software development kits, etc.”
The same interviewee indicated that their data scientists redistributed their time savings to building more accurate models that required less human correction or retraining: “[With the time savings], data scientists are able to target items more effectively and enable better timing of actions with a higher precision so that there are fewer false positives. I also automate my retraining to include new corrected data and to tighten guidance.”
The CIO at a government organization also indicated that having more accurate AI models compounded the time savings. They stated: “Even if we have a 10% to 20% accuracy gain — and we saw a 90% accuracy improvement in some areas — it means less rework on the IT side. This time savings compounds as we build more trust with the models, which also increases adoption of the models we build.”
Modeling and assumptions. Based on the interviews, Forrester assumes the following about the composite organization:
The organization has 80 data scientists who are Red Hat AI users. At other organizations, users could also include AI engineer resources.
Data scientists spent 15% of their time on average on model training and rework prior to Red Hat AI.
With Red Hat AI, data scientists save 40% of their time in Year 1 and up to 60% of their time by Year 3 on those tasks due to improved model accuracy.
The average fully burdened annual salary for a data scientist is $200,000.
Risks. Data scientist time savings may vary depending on the following:
The volume of users, the resource types expected to use Red Hat AI, and their fully burdened salary ranges.
The AI development maturity and environment prior to Red Hat AI will impact the time spent on model training and rework.
Results. To account for these risks, Forrester adjusted this benefit downward by 15%, yielding a three-year, risk-adjusted total PV (discounted at 10%) of $2.5 million.
60%
Data scientist time savings on model training and rework by Year 3
Data Scientist Time Savings

| Ref. | Metric | Source | Year 1 | Year 2 | Year 3 |
|------|--------|--------|--------|--------|--------|
| B1 | Red Hat AI users (data scientists) | Composite | 80 | 80 | 80 |
| B2 | Time spent on AI model training and rework before Red Hat AI | Composite | 15% | 15% | 15% |
| B3 | Time savings on model training and rework with Red Hat AI | Interviews | 40% | 50% | 60% |
| B4 | Fully burdened annual salary for a data scientist | Composite | $200,000 | $200,000 | $200,000 |
| Bt | Data scientist time savings | B1*B2*B3*B4 | $960,000 | $1,200,000 | $1,440,000 |
|  | Risk adjustment | ↓15% |  |  |  |
| Btr | Data scientist time savings (risk-adjusted) |  | $816,000 | $1,020,000 | $1,224,000 |

Three-year total: $3,060,000

Three-year present value: $2,504,403
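The B-row calculation can be sketched as follows; a minimal reconstruction using the modeling assumptions above, with cash flows discounted at 10% per year:

```python
users = 80                          # B1: data scientists using Red Hat AI
training_share = 0.15               # B2: share of time on training and rework
savings_rate = [0.40, 0.50, 0.60]   # B3: time savings with Red Hat AI
salary = 200_000                    # B4: fully burdened annual salary

bt = [round(users * training_share * rate * salary) for rate in savings_rate]
print(bt)   # [960000, 1200000, 1440000]

btr = [round(v * 0.85) for v in bt]  # risk adjustment, down 15%
print(btr)  # [816000, 1020000, 1224000]

# Present value at a 10% annual discount rate.
pv = sum(v / 1.10 ** (year + 1) for year, v in enumerate(btr))
print(round(pv))  # 2504403
```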
MLOps Efficiencies
Evidence and data. For the interviewees’ organizations, MLOps teams were most often responsible for environment provisioning and preparation to support AI model development. With Red Hat AI, MLOps teams easily reproduced environments to improve development scale and accelerate timelines. The time savings enabled MLOps resources to support more users with their AI projects more quickly, which in turn, accelerated time to business value for their organizations.
The CIO at a government organization agreed that due to MLOps efficiencies, their organization reduced its preparation and provisioning stage by 25% to 30%. They noted: “I would say on the provisioning side that there is a lot of time saved there, the GPU driver setup, model serving setup, compliance reviews, debugging, monitoring setup. Sometimes you might spend a couple of hours or even days per model on the model serving setup. On average, there is a 25% to 30% reduction in different projects.”
The CEO at a telecommunications organization said that time savings across environment provisioning tasks for AI model development ultimately helped accelerate development timelines from up to five days per project to 10 minutes in some cases. The head of DevOps at a financial services organization also agreed that reducing the time spent on operational tasks required to begin AI model development reduced overall timelines from one week to 10 minutes.
The CIO at a government organization saw that time savings for MLOps teams also translated into more control that helped reduce compliance issues. They said: “I have the full pipeline control from an MLOps stance. I have more automation and portability, and I don’t have as much lock-in. So I can do MLflow, GitOps, in an integrated continuous integration/continuous delivery.”
Modeling and assumptions. Based on the interviews, Forrester assumes the following about the composite organization:
The composite organization supports 100 AI projects with Red Hat AI in Year 1 and doubles its cumulative project volume each year, adding 100 new projects in Year 2 and 200 new projects in Year 3.
Prior to Red Hat AI, the provisioning and preparation phase of each project took three days on average. The composite assigns a single MLOps resource to complete these tasks per project.
The organization sees a 75% reduction in time spent on pre-AI model development work, which impacts the internal MLOps teams responsible for completing provisioning tasks and accelerates overall development timelines.
MLOps resources have fully burdened annual salaries of $175,000, which is divided by 266 working days to yield an MLOps resource cost of approximately $658 per day.
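As a sanity check, the MLOps efficiency figures implied by these assumptions can be reproduced in a few lines (a sketch using only the study's published inputs; the variable names are illustrative):

```python
# Reproduce the MLOps efficiency benefit from the modeling assumptions.
COST_PER_DAY = 658        # $175,000 / 266 working days, rounded as in the table
DAYS_PER_PROJECT = 3      # provisioning and preparation time before Red Hat AI
TIME_SAVINGS = 0.75       # reduction with Red Hat AI
RISK_ADJ = 0.85           # 15% downward risk adjustment
DISCOUNT = 0.10           # annual discount rate

new_projects = [100, 100, 200]  # Years 1 through 3

benefit = [n * DAYS_PER_PROJECT * TIME_SAVINGS * COST_PER_DAY * RISK_ADJ
           for n in new_projects]
pv = sum(b / (1 + DISCOUNT) ** (t + 1) for t, b in enumerate(benefit))

print(round(sum(benefit)))  # -> 503370 (three-year total)
print(round(pv))            # -> 407499 (three-year present value)
```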
Risks. MLOps efficiencies will vary depending on the following:
AI model development maturity and scale at the organization in terms of the volume of development projects completed each year.
The resource types and counts involved in provisioning and preparation tasks for AI model development as well as their fully burdened annual salaries.
The average number of days spent on predevelopment operations for AI model development prior to the Red Hat AI investment.
Results. To account for these risks, Forrester adjusted this benefit downward by 15%, yielding a three-year, risk-adjusted total PV (discounted at 10%) of $407,000.
75%
MLOps resource efficiencies on AI project provisioning and preparation tasks
MLOps Efficiencies
Ref.
Metric
Source
Year 1
Year 2
Year 3
C1
Red Hat AI projects (new annually)
Composite
100
100
200
C2
Time per project spent on provisioning and preparation before Red Hat AI (days)
Interviews
3
3
3
C3
Time savings on preparation with Red Hat AI
Interviews
75%
75%
75%
C4
MLOps resource cost per day (rounded)
Composite
$658
$658
$658
Ct
MLOps efficiencies
C1*C2*C3*C4
$148,050
$148,050
$296,100
Risk adjustment
↓15%
Ctr
MLOps efficiencies (risk-adjusted)
$125,843
$125,843
$251,685
Three-year total: $503,370
Three-year present value: $407,499
Business Value: Top-Line Impact
Evidence and data. With Red Hat AI, the interviewees’ organizations leveraged internal efficiencies for users and their supporting teams to accelerate AI development timelines. Additionally, with more transparency and control over the infrastructure, they built more complex AI models, such as those that use genAI, with the necessary power to ensure performance standards. With data and cybersecurity demands met, the interviewees’ organizations deployed AI on-premises at scale and more models to production environments. Altogether, the organizations benefited from AI models deployed for external and internal use cases that drove business value.
The head of DevOps at a financial services organization built a loan approval model that enabled employees to make fast decisions in an environment with rapidly changing requirements. They indicated that this model would not have been possible to build with other tools, as they lacked the level of control enabled with Red Hat AI.
The AI innovation manager at a manufacturing organization built multiple AI models for customer-facing use cases that together generated more than $500,000 in additional revenue for their organization. In addition to external use cases, the interviewee discussed an internal use case that included a natural language processing model that performed root cause analysis to reduce nonconformities in the manufacturing lifecycle, saving their organization more than $1 million annually on maintenance and replacement part costs.
The CIO at a government organization set aggressive performance targets for the models built with Red Hat AI, such as a 95% source quality score, a less than 3% hallucination rate, and a time to answer of 3 seconds or less for short queries. The same interviewee discussed the security component involved in building models with sensitive data. They said, “People are wanting to combine a variety of different data sources and some of it was a challenge from a security perspective.” The interviewee noted that with Red Hat AI, they were able to utilize more data to better tune AI models and achieve better outcomes: “Today, I’m able to give a better fine gradient. So I’m giving better warnings, making better operational decisions, and seeing better outcomes, including more simplified processes.”
The CEO at a telecommunications organization built an externally facing model for their customer service function that reduced incident response times by 50%. They theorized that this improvement would impact the top line of the organization through better customer experience scores and related KPIs.
Modeling and assumptions. Based on the interviews, Forrester assumes the following about the composite organization:
The composite organization has $100 million annual revenue.
Given the AI models in production, the organization improves the top line by 0.5% in Year 1, 1% in Year 2, and 2% in Year 3.
With a 12% operating margin, the top-line improvement leaves the organization with an additional $60,000 to $240,000 in profit each year.
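The same arithmetic applies to the top-line benefit (a sketch using the study's published inputs; variable names are illustrative):

```python
# Reproduce the top-line impact benefit from the modeling assumptions.
REVENUE = 100_000_000           # composite annual revenue
UPLIFT = [0.005, 0.010, 0.020]  # top-line improvement, Years 1 through 3
MARGIN = 0.12                   # operating margin
RISK_ADJ = 0.95                 # 5% downward risk adjustment
DISCOUNT = 0.10                 # annual discount rate

benefit = [REVENUE * u * MARGIN * RISK_ADJ for u in UPLIFT]
pv = sum(b / (1 + DISCOUNT) ** (t + 1) for t, b in enumerate(benefit))

print([round(b) for b in benefit])  # -> [57000, 114000, 228000]
print(round(pv))                    # -> 317333 (three-year present value)
```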
Risks. Business value from top-line impact may vary depending on the following:
The size and industry that the organization operates within.
The volume of AI model projects geared toward revenue-generating use cases.
Results. To account for these risks, Forrester adjusted this benefit downward by 5%, yielding a three-year, risk-adjusted total PV (discounted at 10%) of $317,000.
2%
Top-line improvement from AI models built with Red Hat AI by Year 3
Business Value: Top-Line Impact
Ref.
Metric
Source
Year 1
Year 2
Year 3
D1
Annual revenue
Composite
$100,000,000
$100,000,000
$100,000,000
D2
Top-line improvement with Red Hat AI
Interviews
0.500%
1.000%
2.000%
D3
Operating margin
Composite
12%
12%
12%
Dt
Business value: Top-line impact
D1*D2*D3
$60,000
$120,000
$240,000
Risk adjustment
↓5%
Dtr
Business value: Top-line impact (risk-adjusted)
$57,000
$114,000
$228,000
Three-year total: $399,000
Three-year present value: $317,333
Unquantified Benefits
Interviewees mentioned the following additional benefits that their organizations experienced but were not able to quantify:
Additional infrastructure team time savings. Interviewees whose organizations were not previously OpenShift users mentioned additional IT ops team time savings from more transparency and standardization after consolidating to Red Hat AI. The head of DevOps at a financial services organization saw less need for ancillary reports to monitor activities across the infrastructure. They stated, “[With Red Hat AI], I can see who is managing models, which models use which libraries, model versions, etc. through a simple query.” The same interviewee described how the additional insights were put into action, stating: “I know which model was built when and in what version, which code and where it came from. I can report against it, identify who can change it, and log changes that have already happened and who approved of those changes.” Finally, this interviewee tied together infrastructure time savings with the ability to scale AI development, stating: “Being on Red Hat AI means we have access to tested and proven versions of approved frameworks and models, which makes it easier to scale up and scale down and manage resources, which saves time on the infrastructure side.” The CIO at a government organization agreed that infrastructure time and cost savings were key factors in enabling AI development at scale, and stated, “[If we had used other AI development tools for our AI projects], we could have theoretically seen 50% to 60% more work to get the tool approved, to get that application built, to get it even into development, let alone production environments.”
Reduced risk of shadow IT/AI. Streamlining AI development and deployment tools to a single platform meant that interviewees’ organizations benefited from fewer delayed tool approvals and saw faster tool and environment provisioning. By having the approved tools and infrastructure in place, data scientists and developers accelerated time to start projects, which mitigated the risk of those resources obtaining their own, nonapproved tools to meet business objectives.
Heightened model security and facilitated governance. The head of DevOps at a financial services organization contextualized the importance of governance for AI development at large enterprises, and stated: “If you’re an enterprise and are working with more than 2,000 developers and more than 100 data scientists, you have to control the AI and report against it. You have to give that level of proof of accountability to the bank and the financial institution you are serving.” The same interviewee noted that although internal governance drove their current governance, external regulations were on the horizon: “External regulations [regarding AI/ML models] might come into play in the future and the same reports and transparency that we benefit from today for internal purposes will be critical in meeting those requirements. The approach we’ve taken with Red Hat AI is our safeguard for the future.” The AI innovation manager at a manufacturing organization pointed to emerging global regulations such as the EU AI Act as a sign that more AI regulation was imminent and agreed that Red Hat AI would enable their organization to meet the demands of such external regulations in the future.
Better employee experiences for data scientists and the business stakeholders they serve. Internally, there was less political friction regarding resource consumption for AI development efforts. The head of DevOps at a financial services organization stated, “There is transparency into resource assignments — [we] can see who is using what and for how long, and everyone can see it, which means less politics. Before, there was no governance, so whoever got to work earliest would have access to the resources.” Data scientists were then able to better meet business stakeholder expectations and accelerate time to model deployment.
“The business doesn’t care what technology we use; they are just looking for output. They are focusing on how we can get more out of and accelerate the development lifecycle, [and] how we can easily deploy our models when we finalize.”
Head of DevOps, financial services
Flexibility
The value of flexibility is unique to each customer. There are multiple scenarios in which a customer might implement Red Hat AI and later realize additional uses and business opportunities, including:
Encouraging more innovation. Interviewees’ organizations took advantage of Red Hat AI improvements as part of Red Hat’s product roadmap, ongoing updates to the platform, and better control of existing capabilities to arm users with more tools and more data to encourage AI and ML projects. Externally, the organizations scaled AI projects to cover more use cases. They also added more complex AI (including genAI models) and GPUs to fuel more end-user-focused model development. The head of DevOps at a financial services organization said: “We are focusing on scaling up to include other business cases, especially genAI use cases, at the bank. We want to enhance the business cases.” The CEO at a telecommunications organization agreed, stating, “Our organization plans to scale the AI solution to other critical areas, such as satellite management, and eventually turn the implementation into a commercial product for other telecom service providers.” Additionally, many organizations were moving toward enabling data scientists with access to GPUs, which would enable more flexibility to further accelerate innovation. The AI innovation manager at a manufacturing organization stated: “We hope to give developers direct access to GPUs. Right now, the AI team is in between that process to determine the business value and allocate the GPUs.”
Continued AI development and deployment flexibility. Many of the interviewees’ organizations sought Red Hat AI as their platform approach to AI development specifically to meet on-premises deployment demands. However, the interviewees recognized the need for a flexible future of deployment methods, including planning for hybrid AI model deployments. As such, they were especially keen to avoid vendor lock-in and increase the collaboration and partnership with their selected vendors. The AI innovation manager at a manufacturing organization stated: “I’m looking forward to what Red Hat AI can bring for hybrid cloud AI deployments, with edge cases especially. We have demonstrated with proof of concept (POC) this year that the platform is capable of this, we are just waiting for the business to be ready.”
Flexibility would also be quantified when evaluated as part of a specific project (described in more detail in Total Economic Impact Approach).
“As a final reflection, I would say that the possibility of working with teams that speak the same language gives you the sense of being part of an open-source community, which lends our organization continuity and the possibility of predictability.”
CEO, telecommunications
Analysis Of Costs
Quantified cost data as applied to the composite
Total Costs
Ref.
Cost
Initial
Year 1
Year 2
Year 3
Total
Present Value
Etr
Cost of Red Hat AI professional services and support
$845,900
$176,000
$176,000
$176,000
$1,373,900
$1,283,586
Ftr
Cost of internal time for implementation and ongoing management
$501,769
$35,539
$35,539
$35,539
$608,386
$590,149
Total costs (risk-adjusted)
$1,347,669
$211,539
$211,539
$211,539
$1,982,286
$1,873,735
Cost Of Red Hat Professional Services And Support
Evidence and data. Most of the interviewees’ organizations were prior OpenShift customers and chose to upgrade their licensing and infrastructure to accommodate Red Hat AI functionality. Although familiarity with the Red Hat infrastructure and the OpenShift interface was a bonus, many organizations were still early in their AI development journeys. As such, many of the interviewees’ organizations opted to engage Red Hat consulting teams to run AI assessment workshops and AI incubators and to assist with the updated platform deployment. Additionally, small subsets of total users leveraged Red Hat training resources on AI development and deployment features and functionality. These users planned to launch internal training programs to further disperse the learnings. Pricing for these components may vary. Contact Red Hat for additional details.
The AI innovation manager at a manufacturing organization estimated that their organization spent $400,000 in professional services and consulting support with Red Hat during the implementation process.
The CIO at the government organization noted that they also enlisted professional services support from Red Hat as part of their implementation process.
Modeling and assumptions. Based on the interviews, Forrester assumes the following about the composite organization:
Given the 80 users and the eight GPUs, the licensing and infrastructure annual fees total $160,000 for the composite organization.
The composite engages with Red Hat engineers on the consulting team to conduct an AI assessment workshop and an AI incubator and for assistance in the platform upgrade deployment.
It selects 20% of total users to attend Red Hat training sessions on the new platform functionality for AI development and deployment. Remaining users will learn through an internal “train the trainer” program.
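The cost rows in the table below follow directly from these assumptions. In this sketch, the per-trainee fee of $7,000 is inferred from the E5 formula (F7*7K) rather than stated directly in the text:

```python
# Reproduce the Red Hat AI services-and-support cost from the assumptions.
LICENSING = 160_000        # annual licensing/infrastructure fees (80 users, 8 GPUs)
WORKSHOP = 47_000          # consulting: AI assessment workshop
INCUBATOR = 200_000        # consulting: AI incubator
DEPLOYMENT = 250_000       # consulting: platform deployment
TRAINEES = 16              # 20% of 80 total users
TRAINING_PER_USER = 7_000  # assumed per-trainee fee implied by the E5 formula
RISK_ADJ = 1.10            # 10% upward risk adjustment
DISCOUNT = 0.10            # annual discount rate

initial = (LICENSING + WORKSHOP + INCUBATOR + DEPLOYMENT
           + TRAINEES * TRAINING_PER_USER) * RISK_ADJ
annual = LICENSING * RISK_ADJ
pv = initial + sum(annual / (1 + DISCOUNT) ** t for t in (1, 2, 3))

print(round(initial))  # -> 845900 (risk-adjusted initial cost)
print(round(pv))       # -> 1283586 (three-year present value)
```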
Risks. Fees paid to Red Hat for Red Hat AI may vary depending on the following:
Prior relationship with Red Hat in terms of products and platforms in place.
AI development and deployment maturity.
Volume of intended users and GPU capacity.
Results. To account for these risks, Forrester adjusted this cost upward by 10%, yielding a three-year, risk-adjusted total PV (discounted at 10%) of $1.3 million.
Cost Of Red Hat AI Professional Services And Support
Ref.
Metric
Source
Initial
Year 1
Year 2
Year 3
E1
Licensing and infrastructure annual fees
Composite
$160,000
$160,000
$160,000
$160,000
E2
Consulting fees: AI assessment workshop
Composite
$47,000
E3
Consulting fees: AI incubator
Composite
$200,000
E4
Consulting fees: Platform deployment
Composite
$250,000
E5
Training cost for Red Hat
F7*$7,000
$112,000
Et
Cost of Red Hat AI professional services and support
E1+E2+E3+E4+E5
$769,000
$160,000
$160,000
$160,000
Risk adjustment
↑10%
Etr
Cost of Red Hat AI professional services and support (risk-adjusted)
$845,900
$176,000
$176,000
$176,000
Three-year total: $1,373,900
Three-year present value: $1,283,586
Cost Of Internal Time For Implementation And Ongoing Management
Evidence and data. For the interviewees, prior familiarity with the platform interface and Red Hat infrastructure was helpful in reducing the need for additional internal resources to support and manage the new environment. However, internal teams were still responsible for assisting in the platform upgrade deployment. Additionally, as the Red Hat infrastructure expanded to include AI development and deployment functionality, existing resources spent additional hours on ongoing management activities, such as extending weekly meetings with Red Hat to discuss the Red Hat AI product roadmap and platform updates as well as on training for new functionalities.
The CEO at a telecommunications organization indicated that their Red Hat contacts added an additional 1-hour meeting per week to discuss Red Hat AI specifically.
The CIO at a government organization estimated that the pilot process as part of the implementation project required a few hundred hours from their MLOps team. Altogether, resource time spent on implementation was the equivalent of seven or eight FTEs working over several months.
The head of DevOps at a financial services organization dedicated their MLOps team (which comprised six FTEs) to the implementation process for the first two months. From there, the work shifted to Red Hat engineers to implement the architecture.
The AI innovation manager at a manufacturing organization specifically noted that their organization performed a POC with a minimum viable product for two separate business cases as part of their initial implementation program. They said: “Red Hat delivered. Our platform team was very impressed and so were my data scientists. This POC illuminated the potential of Red Hat AI, which is why we ended up moving forward with the investment.”
Modeling and assumptions. Based on the interviews, Forrester assumes the following about the composite organization:
The composite organization is a prior Red Hat OpenShift customer. Therefore, fewer resources are dedicated to deployment, implementation, and ongoing management than if it had purchased OpenShift at the same time as Red Hat AI.
The team of eight MLOps FTEs is dedicated to the three-month deployment and implementation project for Red Hat AI.
The team spends 4 additional hours per month on relationship management and ongoing investment management for Red Hat AI on top of existing management of the OpenShift platform.
MLOps FTEs have fully burdened annual salaries of $175,000.
Twenty percent of total users attend a Red Hat-sponsored training program to become platform super users and are expected to disseminate that knowledge to the remaining platform users.
Platform users are primarily data scientists, who have fully burdened annual salaries of $200,000.
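These assumptions drive the internal-time cost rows in the table below. A short sketch (variable names are illustrative; the study's published totals differ by about $1 in present value because it rounds intermediate values):

```python
# Reproduce the internal-time cost from the modeling assumptions.
MLOPS_FTES = 8
MLOPS_SALARY = 175_000        # fully burdened annual MLOps salary
IMPL_MONTHS = 3               # implementation project duration
MGMT_HOURS_PER_MONTH = 4      # ongoing platform/relationship management
SUPER_USERS = 16              # 20% of 80 platform users
DS_SALARY = 200_000           # fully burdened data scientist salary
TRAINING_HOURS_PER_MONTH = 4  # training time per super user
RISK_ADJ = 1.10               # 10% upward risk adjustment
DISCOUNT = 0.10               # annual discount rate
WORK_HOURS = 2_080            # working hours per year

implementation = MLOPS_FTES * (MLOPS_SALARY / 12) * IMPL_MONTHS
mgmt = MLOPS_FTES * (MLOPS_SALARY / WORK_HOURS) * MGMT_HOURS_PER_MONTH * 12
training = SUPER_USERS * TRAINING_HOURS_PER_MONTH * 12 * (DS_SALARY / WORK_HOURS)

initial = (implementation + mgmt + training) * RISK_ADJ
annual = mgmt * RISK_ADJ
pv = initial + sum(annual / (1 + DISCOUNT) ** t for t in (1, 2, 3))

print(round(implementation))  # -> 350000
print(round(initial))         # -> 501769 (risk-adjusted initial cost)
print(round(pv))              # -> 590148 (study reports $590,149 after rounding)
```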
Risks. Internal time spent on the investment may vary depending on the following:
An organization’s Red Hat products before upgrading to Red Hat AI. Organizations that do not have OpenShift before upgrading to Red Hat AI may require higher levels of involvement from internal resources for initial implementation and ongoing relationship and platform management. This involvement can range from 40 hours per month up to 40 hours per week due to OpenShift version upgrades, operator driver updates, user access management, and pipeline debugging.
The scope and scale of Red Hat AI deployment and implementation.
The resource volumes and types involved in both platform implementation activities as well as ongoing investment and relationship management functions.
The volume of platform users and the organization’s approach to training.
Results. To account for these risks, Forrester adjusted this cost upward by 10%, yielding a three-year, risk-adjusted total PV (discounted at 10%) of $590,000.
Cost Of Internal Time For Implementation And Ongoing Management
Ref.
Metric
Source
Initial
Year 1
Year 2
Year 3
F1
MLOps resources (FTEs)
Composite
8
8
8
8
F2
Time dedicated to implementation (months)
Composite
3
0
0
0
F3
Fully burdened annual salary for an MLOps resource
Composite
$175,000
$175,000
$175,000
$175,000
F4
Cost of MLOps time spent on implementation
F1*(F3/12)*F2
$350,000
$0
$0
$0
F5
Time dedicated to ongoing platform management and Red Hat relationship per month (hours)
Composite
4
4
4
4
F6
Cost of MLOps time spent on ongoing platform management
F1*(F3/2,080)*F5*12
$32,308
$32,308
$32,308
$32,308
F7
Resources that required training from Red Hat (super users)
20% of total users
16
F8
Time spent on training for super users per month (hours)
Composite
4
4
4
4
F9
Cost of internal resource time spent on training
F7*F8*12*(B4/2,080)
$73,846
$0
$0
$0
Ft
Cost of internal time for implementation and ongoing management
F4+F6+F9
$456,154
$32,308
$32,308
$32,308
Risk adjustment
↑10%
Ftr
Cost of internal time for implementation and ongoing management (risk-adjusted)
$501,769
$35,539
$35,539
$35,539
Three-year total: $608,386
Three-year present value: $590,149
Financial Summary
Consolidated Three-Year, Risk-Adjusted Metrics
Cash Flow Chart (Risk-Adjusted)
[Chart: total costs, total benefits, and cumulative net benefits across the Initial period and Years 1 through 3]
Cash Flow Analysis (Risk-Adjusted)
Initial
Year 1
Year 2
Year 3
Total
Present Value
Total costs
($1,347,669)
($211,539)
($211,539)
($211,539)
($1,982,286)
($1,873,735)
Total benefits
$0
$1,538,843
$2,339,843
$3,863,685
$7,742,370
$6,235,546
Net benefits
($1,347,669)
$1,327,304
$2,128,304
$3,652,146
$5,760,084
$4,361,811
ROI
233%
Payback
13 months
Please Note
The financial results calculated in the Benefits and Costs sections can be used to determine the ROI, NPV, and payback period for the composite organization’s investment. Forrester assumes a yearly discount rate of 10% for this analysis.
These risk-adjusted ROI, NPV, and payback period values are determined by applying risk-adjustment factors to the unadjusted results in each Benefit and Cost section.
The initial investment column contains costs incurred at “time 0” or at the beginning of Year 1 that are not discounted. All other cash flows are discounted using the discount rate at the end of the year. PV amounts are calculated for each total cost and benefit estimate. NPV calculations in the summary tables are the sum of the initial investment and the discounted cash flows in each year. Sums and present value calculations of the Total Benefits, Total Costs, and Cash Flow tables may not exactly add up, as some rounding may occur.
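The summary metrics follow mechanically from the risk-adjusted cash flows above (a sketch using the study's published totals):

```python
# Reproduce the summary ROI and NPV from the risk-adjusted cash flows.
DISCOUNT = 0.10
costs    = [1_347_669, 211_539, 211_539, 211_539]  # Initial, Years 1-3
benefits = [0, 1_538_843, 2_339_843, 3_863_685]    # Initial, Years 1-3

def pv(flows, r=DISCOUNT):
    # The initial flow (t = 0) is undiscounted; Year t is divided by (1 + r)^t.
    return sum(f / (1 + r) ** t for t, f in enumerate(flows))

pv_costs = pv(costs)
pv_benefits = pv(benefits)
npv = pv_benefits - pv_costs
roi = npv / pv_costs

print(round(pv_costs))     # -> 1873735
print(round(pv_benefits))  # -> 6235546
print(round(npv))          # -> 4361811
print(f"{roi:.0%}")        # -> 233%
```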
From the information provided in the interviews, Forrester constructed a Total Economic Impact™ framework for those organizations considering an investment in Red Hat AI.
The objective of the framework is to identify the cost, benefit, flexibility, and risk factors that affect the investment decision. Forrester took a multistep approach to evaluate the impact that Red Hat AI can have on an organization.
Due Diligence
Interviewed Red Hat stakeholders and Forrester analysts to gather data relative to AI.
Interviews
Interviewed four decision-makers at organizations using AI to obtain data about costs, benefits, and risks.
Composite Organization
Designed a composite organization based on characteristics of the interviewees’ organizations.
Financial Model Framework
Constructed a financial model representative of the interviews using the TEI methodology and risk-adjusted the financial model based on issues and concerns of the interviewees.
Case Study
Employed four fundamental elements of TEI in modeling the investment impact: benefits, costs, flexibility, and risks. Given the increasing sophistication of ROI analyses related to IT investments, Forrester’s TEI methodology provides a complete picture of the total economic impact of purchase decisions. Please see Appendix A for additional information on the TEI methodology.
Total Economic Impact Approach
Benefits
Benefits represent the value the solution delivers to the business. The TEI methodology places equal weight on the measure of benefits and costs, allowing for a full examination of the solution’s effect on the entire organization.
Costs
Costs comprise all expenses necessary to deliver the proposed value, or benefits, of the solution. The methodology captures implementation and ongoing costs associated with the solution.
Flexibility
Flexibility represents the strategic value that can be obtained for some future additional investment building on top of the initial investment already made. The ability to capture that benefit has a PV that can be estimated.
Risks
Risks measure the uncertainty of benefit and cost estimates given: 1) the likelihood that estimates will meet original projections and 2) the likelihood that estimates will be tracked over time. TEI risk factors are based on “triangular distribution.”
Financial Terminology
Present value (PV)
The present or current value of (discounted) cost and benefit estimates given at an interest rate (the discount rate). The PVs of costs and benefits feed into the total NPV of cash flows.
Net present value (NPV)
The present or current value of (discounted) future net cash flows given an interest rate (the discount rate). A positive project NPV normally indicates that the investment should be made unless other projects have higher NPVs.
Return on investment (ROI)
A project’s expected return in percentage terms. ROI is calculated by dividing net benefits (benefits less costs) by costs.
Discount rate
The interest rate used in cash flow analysis to take into account the time value of money. Organizations typically use discount rates between 8% and 16%.
Payback
The breakeven point for an investment. This is the point in time at which net benefits (benefits minus costs) equal initial investment or cost.
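The terms above combine into the standard formulas applied throughout this analysis (here with the study's three-year horizon and 10% discount rate r):

```latex
\[
\mathrm{PV}(X) \;=\; X_0 \;+\; \sum_{t=1}^{3} \frac{X_t}{(1+r)^t}
\qquad \text{(the initial flow } X_0 \text{ is not discounted)}
\]
\[
\mathrm{NPV} \;=\; \mathrm{PV}(\text{benefits}) - \mathrm{PV}(\text{costs}),
\qquad
\mathrm{ROI} \;=\; \frac{\mathrm{NPV}}{\mathrm{PV}(\text{costs})}
\]
```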
Appendix A
Total Economic Impact
Total Economic Impact is a methodology developed by Forrester Research that enhances a company’s technology decision-making processes and assists solution providers in communicating their value proposition to clients. The TEI methodology helps companies demonstrate, justify, and realize the tangible value of business and technology initiatives to both senior management and other key stakeholders.
Appendix B
Endnotes
1 Total Economic Impact is a methodology developed by Forrester Research that enhances a company’s technology decision-making processes and assists solution providers in communicating their value proposition to clients. The TEI methodology helps companies demonstrate, justify, and realize the tangible value of business and technology initiatives to both senior management and other key stakeholders.
Disclosures
Readers should be aware of the following:
This study is commissioned by Red Hat and delivered by Forrester Consulting. It is not meant to be used as a competitive analysis.
Forrester makes no assumptions as to the potential ROI that other organizations will receive. Forrester strongly advises that readers use their own estimates within the framework provided in the study to determine the appropriateness of an investment in Red Hat AI. For any interactive functionality, the intent is for the questions to solicit inputs specific to a prospect's business. Forrester believes that this analysis is representative of what companies may achieve with Red Hat AI based on the inputs provided and any assumptions made. Forrester does not endorse Red Hat or its offerings. Although great care has been taken to ensure the accuracy and completeness of this model, Red Hat and Forrester Research are unable to accept any legal responsibility for any actions taken on the basis of the information contained herein. The interactive tool is provided ‘AS IS,’ and Forrester and Red Hat make no warranties of any kind.
Red Hat reviewed and provided feedback to Forrester, but Forrester maintains editorial control over the study and its findings and does not accept changes to the study that contradict Forrester’s findings or obscure the meaning of the study.
Red Hat provided the customer names for the interviews but did not participate in the interviews.