In this era of accelerating technological change, artificial intelligence (AI) is transforming enterprises' core operating models at an unprecedented pace. This report explores six themes in depth: spatial computing, the future of AI, intelligent hardware, IT upgrades, quantum computing, and the intelligent core. Whether you are an enterprise decision-maker or a technology manager, you will find strategic insights here to help prepare for coming technology upgrades and digital transformation.
Spatial Computing Takes Center Stage
What is the future of spatial computing?
With real-time simulations as just the start, new, exciting use cases can reshape industries ranging from health care to entertainment.
Kelly Raskovich, Bill Briggs, Mike Bechtel, and Ed Burns
Today’s ways of working demand deep expertise in narrow skill sets. Being informed about projects often requires significant specialized training and understanding of context, which can burden workers and keep information siloed.
This has historically been especially true for workflows involving a physical component. Specialized tasks demanded narrow training in a variety of unique systems, which made it hard to work across disciplines.
One example is computer-aided design (CAD) software. An experienced designer or engineer can view a CAD file and glean much information about the project.
But those outside of the design and engineering realm—whether they’re in marketing, finance, supply chain, project management, or any other role that needs to be up to speed on the details of the work—will likely struggle to understand the file, which keeps essential technical details buried.
Spatial computing is one approach that can aid this type of collaboration. As discussed in Tech Trends 2024, spatial computing offers new ways to contextualize business data, engage customers and workers, and interact with digital systems.
It more seamlessly blends the physical and digital, creating an immersive technology ecosystem for humans to more naturally interact with the world.
For example, a visual interaction layer that pulls together contextual data from business software can allow supply chain workers to identify parts that need to be ordered and enable marketers to grasp a product’s overall aesthetics to help them build campaigns.
Employees across the organization can make sense of detailed project information, and in turn make decisions with it, in a form anyone can understand.
If eye-catching virtual reality (VR) headsets are the first thing that come to mind when you think about spatial computing, you’re not alone.
But spatial computing is about more than providing a visual experience via a pair of goggles.
It also involves blending standard business sensor data with the Internet of Things, drone, light detection and ranging (LIDAR), image, video, and other three-dimensional data types to create digital representations of business operations that mirror the real world.
These models can be rendered across a range of interaction media, whether a traditional two-dimensional screen, lightweight augmented reality glasses, or full-on immersive VR environments.
Spatial computing senses real-world, physical components; uses bridging technology to connect physical and digital inputs; and overlays digital outputs onto a blended interface (figure 1).
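To make those three layers concrete, here is a minimal Python sketch of a sense-bridge-overlay flow. Everything in it is illustrative: the classes, field names, and the proximity join stand in for real sensor SDKs, integration middleware, and rendering engines.

```python
from dataclasses import dataclass

# Hypothetical reading from a physical sensor (LIDAR point, camera frame, IoT gauge).
@dataclass
class SensorReading:
    source: str       # e.g., "lidar", "camera", "pressure_gauge"
    position: tuple   # (x, y, z) in the facility's coordinate frame
    value: float

# Hypothetical record from a business system (ERP, maintenance log).
@dataclass
class BusinessRecord:
    asset_id: str
    position: tuple
    status: str

def bridge(readings, records, radius=1.0):
    """Join physical readings to business records by spatial proximity."""
    def close(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5 <= radius
    return [
        {"asset": rec.asset_id, "status": rec.status,
         "sensor": r.source, "reading": r.value}
        for rec in records for r in readings if close(rec.position, r.position)
    ]

def overlay(joined):
    """Stand-in for rendering annotations on a screen, AR glasses, or VR scene."""
    for item in joined:
        print(f"[{item['asset']}] {item['status']} | {item['sensor']}={item['reading']}")

readings = [SensorReading("lidar", (1.0, 2.0, 0.0), 0.93)]
records = [BusinessRecord("pump-07", (1.2, 2.1, 0.0), "maintenance due")]
overlay(bridge(readings, records))
```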
Spatial computing’s current applications are as diverse as they are transformative.
Real-time simulations have emerged as the technology’s primary use case.
Looking ahead, advancements will continue to drive new and exciting use cases, reshaping industries such as health care, manufacturing, logistics, and entertainment, which is why the market is projected to grow at an annual rate of 18.2% between 2022 and 2033.
The journey from the present to the future of human-computer interaction promises to fundamentally alter how we perceive and interact with the digital and physical worlds.
Now: Filled to the Rim with Sims
At its heart, spatial computing brings the digital world closer to lived reality. Many business processes have a physical component, particularly in asset-heavy industries, but, too often, information about those processes is abstracted, and the essence (and insight) is lost.
Businesses can learn much about their operations from well-organized, structured business data, but adding physical data can help them understand those operations more deeply. That’s where spatial computing comes in.
“This idea of being served the right information at the right time with the right view is the promise of spatial computing,” says David Randle, global head of go-to-market for spatial computing at Amazon Web Services (AWS). “We believe spatial computing enables more natural understanding and awareness of physical and virtual worlds.”
Advanced Simulations: A Primary Application
One of the primary applications unlocked by spatial computing is advanced simulations. Think digital twins, but rather than virtual representations that monitor physical assets, these simulations allow organizations to test different scenarios to see how various conditions will impact their operations.
Imagine:
• A manufacturing company where designers, engineers, and supply chain teams can seamlessly work from a single 3D model to craft, build, and procure all the parts they need.
• Doctors who can view true-to-life simulations of their patients’ bodies through augmented reality displays.
• An oil and gas company that can layer detailed engineering models on top of 2D maps.
The possibilities are as vast as our physical world is varied.
The Portuguese soccer club Benfica’s sports data science team, for example, uses cameras and computer vision to track players throughout matches and develop full-scale 3D models of every move its players make.
The cameras collect 2,000 data points from each player, and AI helps identify specific players, the direction they were facing, and critical factors that fed into their decision-making.
The data essentially creates a digital twin of each player, allowing the team to run simulations of how plays would have worked if a player were in a different position. X's and O's on a chalkboard are now three-dimensional models that coaches can experiment with.
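As a toy illustration of this kind of counterfactual simulation, the sketch below replays a pass against tracked positions and then re-runs it with a player repositioned. The coordinates, names, and crude passing-lane heuristic are all invented; they are not Benfica's models.

```python
import math

# Toy "digital twin" frame: per-player pitch positions distilled from
# camera and computer-vision tracking (values invented for illustration).
frame = {"winger": (70, 10), "striker": (85, 30), "defender": (78, 21)}

def passing_lane_open(passer, receiver, opponents, margin=2.5):
    """Crude check: is any opponent within `margin` meters of the pass line?"""
    (x1, y1), (x2, y2) = passer, receiver
    dx, dy = x2 - x1, y2 - y1
    for ox, oy in opponents:
        # Distance from opponent to the pass segment, via projection.
        t = max(0.0, min(1.0, ((ox - x1) * dx + (oy - y1) * dy) / (dx * dx + dy * dy)))
        if math.hypot(ox - (x1 + t * dx), oy - (y1 + t * dy)) < margin:
            return False
    return True

# Replay the actual moment: the defender cuts the lane (prints False) ...
print(passing_lane_open(frame["winger"], frame["striker"], [frame["defender"]]))
# ... then re-run the "what if" with the striker moved higher (prints True).
print(passing_lane_open(frame["winger"], (85, 45), [frame["defender"]]))
```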
“There’s been a huge evolution in AI pushing these models forward, and now we can use them in decision-making,” says Joao Copeto, chief information and technology officer at Sport Lisboa e Benfica.
This isn’t only about wins and losses—it’s also about dollars and cents. Benfica has turned player development into a profitable business by leveraging data and AI.
Over the past 10 years, the team has generated some of the highest player-transfer deals in Europe. Similar approaches could also pay dividends in warehouse operations, supply chain and logistics, or any other resource planning process.
Simulations in Medicine
Advanced simulations are also showing up in medical settings.
For instance:
• Virtual patient scenarios can be simulated as a training supplement for nurses or doctors in a more dynamic, self-paced environment than textbooks would allow.
• Fraser Health Authority in Canada has pioneered the use of simulation models to improve care by creating a system-wide digital twin.
This public health authority in British Columbia generated powerful visualizations of patient movement through different care settings and simulations to determine the impact of deploying different care models on patient access.
Although the work is ongoing, Fraser expects improvement in appropriate, need-based access to care through increased patient awareness of available services.
New: Data is the Differentiator
Enterprise IT teams will likely need to overcome significant hurdles to develop altogether-new spatial computing applications, hurdles they haven't faced when implementing more conventional software-based projects.
For one thing, data isn’t always interoperable between systems, which limits the ability to blend data from different sources.
Furthermore, the spaghetti diagrams mapping out the path that data travels in most organizations are circuitous at best, and building the data pipelines to get the correct spatial data into visual systems is a thorny engineering challenge.
Ensuring that data is of high quality and faithfully mirrors real-world conditions may be one of the most significant barriers to using spatial computing effectively.
Rethinking Spatial Data Management
David Randle of AWS notes that spatial data has not historically been well managed at most organizations, even though it represents some of a business’s most valuable information.
“This information, because it’s quite new and diverse, has few standards around it and much of it sits in silos, some of it’s in the cloud, most of it’s not,” says Randle. “This data landscape encompassing physical and digital assets is extremely scattered and not well managed. Our customers’ first problem is managing their spatial data.”
Taking a more systematic approach to ingesting, organizing, and storing this data, in turn, makes it more available to modern AI tools, and that’s where the real learnings begin.
Data Pipelines: The Fuel for Business
We’ve often heard that data is the new oil, but for an American oil and gas company, the metaphor is becoming reality thanks to significant effort in replumbing some of its data pipelines.
The energy company uses drones to conduct 3D scans of equipment in the field and its facilities, and then applies computer vision to the data to ensure its assets operate within predefined tolerances.
It’s also creating high-fidelity digital twins of assets based on data pulled from engineering, operational, and enterprise resource planning systems.
The critical piece? Data integration.
The energy giant built a spatial storage layer, using application programming interfaces (APIs) to connect to disparate data sources and file types, including machine, drone, business, and image and video data.
Few organizations today have invested in this type of systematic approach to ingesting and storing spatial data. Still, it’s a key factor driving spatial computing capabilities and an essential first step for delivering impactful use cases.
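As a rough sketch of what such a spatial storage layer might look like, the code below normalizes records from different source systems into one schema and indexes them by asset. The source names and fields are hypothetical, not the company's actual design.

```python
import json
from datetime import datetime, timezone

def normalize(source, payload):
    """Map very different inputs (drone scan, ERP record, video annotation)
    onto one illustrative schema keyed by asset and location."""
    return {
        "source": source,                     # "drone", "erp", "video", ...
        "asset_id": payload.get("asset_id"),
        "location": payload.get("location"),  # (lat, lon, elevation)
        "attributes": {k: v for k, v in payload.items()
                       if k not in ("asset_id", "location")},
        "ingested_at": datetime.now(timezone.utc).isoformat(),
    }

class SpatialStore:
    """Toy storage layer: an append-only log plus an index by asset."""
    def __init__(self):
        self.log, self.by_asset = [], {}

    def ingest(self, source, payload):
        rec = normalize(source, payload)
        self.log.append(rec)
        self.by_asset.setdefault(rec["asset_id"], []).append(rec)

store = SpatialStore()
store.ingest("drone", {"asset_id": "valve-12", "location": (29.7, -95.3, 4.0),
                       "vibration_mm_s": 7.1})
store.ingest("erp",   {"asset_id": "valve-12", "location": (29.7, -95.3, 4.0),
                       "last_service": "2024-11-02"})
print(json.dumps(store.by_asset["valve-12"], indent=2))
```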
Multimodal AI Creates the Context
In the past, businesses couldn’t merge spatial and business data into one visualization, but that too is changing. As discussed in “What’s next for AI?”, multimodal AI tools can take virtually any data type as a prompt, whether text, image, audio, spatial, or structured data, and return outputs in multiple formats.
This capability will allow AI to serve as a bridge between different data sources, and interpret and add context between spatial and business data. AI can reach into disparate data systems and extract relevant insights.
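A minimal sketch of that bridging role follows, assuming a hypothetical multimodal endpoint: inputs of different modalities are normalized into tagged parts of a single prompt. `call_multimodal_model` is a placeholder, not a real vendor API.

```python
import base64
from pathlib import Path

def call_multimodal_model(parts):
    """Placeholder for a real multimodal endpoint; vendor APIs differ."""
    raise NotImplementedError

def as_part(item):
    """Normalize text, images, and structured rows into tagged prompt parts."""
    if isinstance(item, str) and item.endswith((".png", ".jpg")):
        data = base64.b64encode(Path(item).read_bytes()).decode()
        return {"type": "image", "data": data}
    if isinstance(item, dict):  # a structured business record
        return {"type": "table_row", "data": item}
    return {"type": "text", "data": str(item)}

def ask(question, *context):
    """Bundle a question plus mixed-modality context into one model call."""
    return call_multimodal_model([as_part(question)] + [as_part(c) for c in context])

# Blend an image derived from spatial data with an ERP record in one question:
# ask("Which parts shown here need reordering?",
#     "scans/rack_b.png", {"sku": "A-113", "on_hand": 2, "reorder_at": 5})
```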
This isn’t to say multimodal AI eliminates all barriers. Organizations still need to manage and govern their data effectively. The old saying “garbage in, garbage out” has never been more prescient. Training AI tools on disorganized and unrepresentative data is a recipe for disaster, as AI has the power to scale errors far beyond what we’ve seen with other types of software.
Enterprises should focus on implementing open data standards and working with vendors to standardize data types.
But once they’ve addressed these concerns, IT teams can open new doors to exciting applications. “You can shape this technology in new and creative ways,” says Johan Eerenstein, executive vice president of workforce enablement at Paramount.
Next: AI Is the New UI
Many of the aforementioned challenges in spatial computing are related to integration. Enterprises struggle to pull disparate data sources into a visualization platform and render that data in a way that provides value to the user in their day-to-day work. But soon, AI stands to lower those hurdles.
As mentioned above, multimodal AI can take a variety of inputs and make sense of them in one platform, but that could be only the beginning. As AI is integrated into more applications and interaction layers, it allows services to act in concert. As mentioned in “What’s next for AI?” this is already giving way to agentic systems that are context-aware and capable of executing functions proactively based on user preferences.
These autonomous agents could soon support the roles of supply chain manager, software developer, financial analyst, and more.
What will separate tomorrow’s agents from today’s bots will be their ability to plan ahead and anticipate what the user needs without even having to ask. Based on user preferences and historical actions, they will know how to serve the right content or take the right action at the right time.
When AI agents and spatial computing converge, users won't have to think about whether their data comes from a spatial system such as LIDAR or cameras, or account for the capabilities of specific applications (with the important caveat that the AI systems must be trained on high-quality, well-managed, interoperable data in the first place).
With intelligent agents, AI becomes the interface, and all that’s necessary is to express a preference rather than explicitly program or prompt an application.
Imagine a bot that automatically alerts financial analysts to changing market conditions or one that crafts daily reports for the C-suite about changes in the business environment or team morale.
All the many devices we interact with today, be they phone, tablet, computer, or smart speaker, will feel downright cumbersome in a future where all we have to do is gesture toward a preference and let context-aware, AI-powered systems execute our command. Eventually, once these systems have learned our preferences, we may not even need to gesture at all.
The Full Impact
The full impact of agentic AI systems on spatial computing may be many years out, but businesses can still work toward reaping the benefits of spatial computing. Building the data pipelines may be one of the heaviest lifts, but once built, they open up myriad use cases.
Autonomous asset inspection, smoother supply chains, true-to-life simulations, and immersive virtual environments are just a few ways leading enterprises are making their operations more spatially aware.
As AI continues to intersect with spatial systems, we’ll see the emergence of revolutionary new digital frontiers, the contours of which we’re only beginning to map out.
What’s Next for AI?
While large language models continue to advance, new models and agents are proving to be more effective at discrete tasks. AI needs different horses for different courses.
The Speed of AI’s Advancement
Blink and you’ll miss it: The speed of artificial intelligence’s advancement is outpacing expectations.
Last year, as organizations scrambled to understand how to adopt generative AI, we cautioned Tech Trends 2024 readers to lead with need as they differentiate themselves from competitors and adopt a strategic approach to scaling their use of large language models (LLMs).
Today, LLMs have taken root, with up to 70% of organizations, by some estimates, actively exploring or implementing LLM use cases.¹
Leading organizations are already considering AI’s next chapter. Instead of relying on foundation models built by large players in AI, which may be more powerful and built on more data than needed, enterprises are now thinking about implementing multiple, smaller models that can be more efficient for business requirements.²
LLMs will continue to advance and be the best option for certain use cases, like general-purpose chatbots or simulations for scientific research, but the chatbot that peruses your financial data to think through missed revenue opportunities doesn’t need to be the same model that replies to customer inquiries.
Put simply, we’re likely to see a proliferation of different horses for different courses.
A series of smaller models working in concert may end up serving different use cases than current LLM approaches. New open-source options and multimodal outputs (as opposed to just text) are enabling organizations to unlock entirely new offerings.³
In the years to come, the progress toward a growing number of smaller, more specialized models could once again move the goalposts of AI in the enterprise.
From Knowledge to Execution
Organizations may witness a fundamental shift in AI from augmenting knowledge to augmenting execution.
Investments being made today in agentic AI could upend the way we work and live by arming consumers and businesses with armies of silicon-based assistants.
Imagine AI agents that can carry out discrete tasks, like delivering a financial report in a board meeting or applying for a grant.
“There’s an app for that” could well become “There’s an agent for that.”
Now: Getting the Fundamentals Right
LLMs are undoubtedly exciting but require a great deal of groundwork.
Instead of building models themselves, many enterprises are partnering with companies like Anthropic or OpenAI, or accessing AI models through hyperscalers.⁴
According to Gartner, AI servers will account for close to 60% of hyperscalers’ total server spending.⁵
While some enterprises have found immediate business value in using LLMs, others remain wary about the accuracy and applicability of LLMs trained on external data.⁶
On an enterprise time scale, AI advancements are still in a nascent phase (crawling or walking, as noted last year). According to recent surveys by Deloitte, Fivetran, and Vanson Bourne, fewer than one-third of generative AI experiments have moved into production, often because organizations struggle to access or cleanse the data needed to run AI programs.
Data as the Foundation
According to Deloitte’s 2024 State of Generative AI in the Enterprise Q3 report, 75% of surveyed organizations have increased their investments in data lifecycle management due to generative AI.⁸
• Data is foundational to LLMs, because bad inputs lead to worse outputs (“garbage in, garbage squared”).
• Data labeling costs can drive significant AI investments.⁹
While some AI companies scrape the internet to build the largest models possible, savvy enterprises create the smartest models possible, training them on better domain-specific data.
Example:
LIFT Impact Partners, a Vancouver-based organization that provides resources to nonprofits, is fine-tuning its AI-enabled virtual assistants to help new Canadian immigrants process paperwork.
“When you train it on your organization’s unique persona, data, and culture, it becomes significantly more relevant and effective,” says Bruce Dewar, president and CEO of LIFT Impact Partners. The assistant, he adds, becomes an extension and ambassador of the organization rather than just a tool.
Challenges with Data Enablement
Organizations surveyed by Deloitte noted challenges with:
• Scaling AI pilots
• Unclear regulations around sensitive data
• Questions about third-party licensed data usage
55% of organizations avoided certain AI use cases due to data-related issues, and an equal proportion are working to enhance data security.
Differentiation:
While out-of-the-box models offered by vendors can help, differentiated AI impact will likely require differentiated enterprise data.
Real-World Value
Two-thirds of organizations surveyed are increasing investments in generative AI after seeing strong value across industries, from:
• Insurance claims review
• Telecom troubleshooting
• Consumer segmentation tools
LLMs are also creating value in specialized use cases like space repairs, nuclear modeling, and material design.
New: Different Horses for Different Courses
While LLMs have vast use cases, they are not always the most efficient choice.
• LLMs require massive resources, focus primarily on text, and augment human intelligence rather than execute discrete tasks.
• Smaller, purpose-built models can better address specific needs.
Future:
In the next 18–24 months, enterprises will likely rely on a toolkit of AI models, including:
1. Small language models (SLMs)
2. Multimodal models
3. Agentic AI systems
These models will help organizations optimize specific tasks without relying on massive, general-purpose LLMs.
Example:
An SLM trained on inventory data could let employees retrieve insights quickly, avoiding manual processing of large datasets that can take weeks.
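A minimal sketch of that inventory example, assuming a local SLM runtime: retrieve the few relevant rows, then hand them to the model. `slm_generate` is a stand-in for whatever small-model runtime an enterprise actually deploys, and the crude keyword retrieval would be an embedding search in practice.

```python
# Toy inventory table; a real system would query the ERP or warehouse database.
INVENTORY = [
    {"sku": "BRKT-09", "site": "Plant 2", "on_hand": 14, "weekly_usage": 40},
    {"sku": "BOLT-M8", "site": "Plant 2", "on_hand": 900, "weekly_usage": 120},
]

def retrieve(question, rows, k=5):
    """Crude keyword retrieval; production systems would use embeddings."""
    terms = question.lower().split()
    scored = [(sum(t in str(r).lower() for t in terms), r) for r in rows]
    return [r for score, r in sorted(scored, key=lambda s: -s[0])[:k] if score]

def slm_generate(prompt):
    raise NotImplementedError("stand-in for a local small-language-model call")

def answer(question):
    context = retrieve(question, INVENTORY)
    prompt = f"Inventory rows: {context}\nQuestion: {question}\nAnswer briefly."
    return slm_generate(prompt)

# answer("Which Plant 2 parts run out within a week at current usage?")
```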
Naveen Rao, vice president of AI at Databricks, believes more organizations will take this systems approach with AI:
“A magic computer that understands everything is a sci-fi fantasy. Rather, in the same way we organize humans in the workplace, we should break apart our problems. Domain-specific and customized models can then address specific tasks, tools can run deterministic calculations, and databases can pull in relevant data. These AI systems deliver the solution better than any one component could do alone.”
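A hedged sketch of the systems approach Rao describes: a router dispatches each request to the narrowest component that can satisfy it, whether a deterministic calculator, a database lookup, or a domain model. All names and the routing rule are illustrative.

```python
def tax_calculator(amount, rate):
    """Deterministic tool: exact arithmetic, no model involved."""
    return round(amount * rate, 2)

CUSTOMER_DB = {"acme": {"tier": "gold", "arr": 120_000}}  # toy database

def domain_model(prompt):
    """Stand-in for a call to a fine-tuned, domain-specific model."""
    raise NotImplementedError

def route(request):
    """Dispatch to the narrowest component that can satisfy the request."""
    kind = request["kind"]
    if kind == "calc":
        return tax_calculator(request["amount"], request["rate"])
    if kind == "lookup":
        return CUSTOMER_DB.get(request["key"])
    return domain_model(request["prompt"])  # everything else goes to the model

print(route({"kind": "calc", "amount": 1999.0, "rate": 0.0825}))
print(route({"kind": "lookup", "key": "acme"}))
```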
An added benefit of smaller models is that they can be run on-device and trained by enterprises on smaller, highly curated data sets to solve more specific problems, rather than general queries, as discussed in “Hardware is eating the world.”
Companies like Microsoft and Mistral are currently working to distill such SLMs, built on fewer parameters, from their larger AI offerings, and Meta offers multiple options across smaller models and frontier models.
Finally, much of the progress happening in SLMs is through open-source models offered by companies like Hugging Face or Arcee.AI. Such models are ripe for enterprise use since they can be customized for any number of needs, as long as IT teams have the internal AI talent to fine-tune them.
In fact, a recent Databricks report indicates that over 75% of organizations are choosing smaller open-source models and customizing them for specific use cases. Since open-source models are constantly improving thanks to the contributions of a diverse programming community, the size and efficiency of these models are likely to improve at a rapid clip.
The Rise of Multimodal Models
Humans interact through a variety of mediums: text, body language, voice, and videos, among others. Machines are now hoping to catch up.
Given that business needs are not contained to text, it’s no surprise that companies are looking forward to AI that can take in and produce multiple mediums.
In some ways, we’re already accustomed to multimodal AI, such as when we speak to digital assistants and receive text or images in return, or when we ride in cars that use a mix of computer vision and audio cues to provide driver assistance.
Multimodal generative AI, on the other hand, is in its early stages. The first major models, Google’s Project Astra and OpenAI’s GPT-4 Omni, were showcased in May 2024, and Amazon Web Services’ Titan offering has similar capabilities.
Progress in multimodal generative AI may be slow because it requires significantly higher amounts of data, resources, and hardware. In addition, the existing issues of hallucination and bias that plague text-based models may be exacerbated by multimodal generation.
Still, the enterprise use cases are promising:
The notion of “train once, run anywhere (or any way)” promises a model that could be trained on text, but deliver answers in pictures, video, or sound, depending on the use case and the user’s preference, which improves digital inclusion.
Example Applications:
• Companies like AMD aim to use the fledgling technology to quickly translate marketing materials from English to other languages or to generate content.
• For supply chain optimization, multimodal generative AI can be trained on sensor data, maintenance logs, and warehouse images to recommend ideal stock quantities.
This also leads to new opportunities with spatial computing, as written about in “Spatial computing takes center stage.”
As the technology progresses and model architecture becomes more efficient, we can expect to see even more use cases in the next 18 to 24 months.
Agentic AI
The third new pillar of AI may pave the way for changes to our ways of working over the next decade.
Large (or small) action models go beyond the question-and-answer capabilities of LLMs and complete discrete tasks in the real world.
Examples:
• Booking a flight based on your travel preferences.
• Providing automated customer support that can access databases and execute needed tasks—likely without the need for highly specialized prompts.
The proliferation of such action models, working as autonomous digital agents, heralds the beginnings of agentic AI, and enterprise software vendors like Salesforce and ServiceNow are already touting these possibilities.
Enterprise Use Case: ServiceNow’s Xanadu Platform
Chris Bedi, chief customer officer at ServiceNow, believes that domain- or industry-specific agentic AI can change the game for human and machine interaction in enterprises.
For instance, in the company’s Xanadu platform, one AI agent can:
1. Scan incoming customer issues against a history of incidents to recommend next steps.
2. Communicate with another autonomous agent that executes those recommendations.
A human reviewer oversees the agent-to-agent communication to approve the hypotheses, as sketched below.
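The pattern reads roughly like this sketch: a triage agent proposes a next step from incident history, a human approves, and an execution agent acts. This illustrates the general pattern only; it is not ServiceNow's implementation.

```python
# Toy incident history; a real system would query past tickets and resolutions.
INCIDENT_HISTORY = {
    "login failure": "reset SSO token and notify the identity team",
    "slow checkout": "scale the payment service and flush the cache",
}

def triage_agent(issue):
    """Recommend a next step by matching against historical incidents."""
    for known, fix in INCIDENT_HISTORY.items():
        if known in issue.lower():
            return fix
    return "escalate to a human engineer"

def execution_agent(action):
    print(f"executing: {action}")  # would call real systems in practice

def human_approves(action):
    """The human-in-the-loop gate between recommendation and execution."""
    return input(f"approve '{action}'? [y/n] ").strip().lower() == "y"

def handle(issue):
    proposal = triage_agent(issue)
    if human_approves(proposal):
        execution_agent(proposal)
    else:
        print("proposal rejected; routing to a person")

# handle("Customers report login failure after the 2 a.m. deploy")
```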
Other Use Cases:
• One agent could manage workloads in the cloud.
• Another agent could handle customer orders.
“Agentic AI cannot completely take the place of a human,” says Bedi, “but what it can do is work alongside your teams, handling repetitive tasks, seeking out information and resources, doing work in the background 24/7, 365 days a year.”
Liquid Neural Networks: A New AI Frontier
Aside from the categories of AI models noted above, advancements in AI design and execution are also impacting enterprise adoption—namely, the advent of liquid neural networks.
What are liquid neural networks?
• This cutting-edge technology offers greater flexibility by mimicking the human brain’s structure.
• Unlike traditional neural networks, which might require 100,000 nodes, liquid networks can accomplish tasks with just a couple dozen nodes.
Liquid neural networks are designed to run on less computing power with more transparency. This opens up possibilities for embedding AI into edge devices, robotics, and safety-critical systems.
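For intuition about why such networks can be so small, here is a toy, Euler-integrated continuous-time layer loosely inspired by liquid time-constant networks. It is a teaching sketch with made-up weights, not the published architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
N_IN, N_HIDDEN = 3, 8                        # "a couple dozen nodes" or fewer

W_in = 0.5 * rng.normal(size=(N_HIDDEN, N_IN))
W_rec = 0.2 * rng.normal(size=(N_HIDDEN, N_HIDDEN))
tau = rng.uniform(0.5, 2.0, size=N_HIDDEN)   # per-neuron time constants

def step(x, u, dt=0.05):
    """One Euler step of dx/dt = -x / tau + tanh(W_rec @ x + W_in @ u)."""
    return x + dt * (-x / tau + np.tanh(W_rec @ x + W_in @ u))

x = np.zeros(N_HIDDEN)
for t in range(100):                          # drive the layer with a slow sine
    u = np.array([np.sin(0.1 * t), 0.0, 1.0])
    x = step(x, u)
print(np.round(x, 3))                         # hidden state doubles as readout
```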
In other words, it’s not just the applications of AI but also its underlying mechanisms that are ripe for improvement and disruption in the coming years.
Next: There’s an Agent for That
In the next decade, AI could be wholly focused on execution instead of human augmentation.
A future employee could make a plain-language request to an AI agent, for example:
“Close the books for Q2 and generate a report on EBITDA.”
As in an enterprise hierarchy, the primary agent would then delegate the needed tasks to agents with discrete roles that cascade across different productivity suites to take action.
As with humans, teamwork could be the missing ingredient that enables the machines to improve their capabilities.
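In code, that delegation might look like the toy orchestrator below, which splits a plain-language request into role-specific subtasks. The roles, keyword routing, and worker functions are all invented for illustration.

```python
def ledger_agent(task):
    """Worker agent: closes the ledger for the requested period."""
    return f"ledger closed for {task['period']}"

def reporting_agent(task):
    """Worker agent: drafts the requested financial report."""
    return f"EBITDA report drafted for {task['period']}"

WORKERS = {"close_books": ledger_agent, "report": reporting_agent}

def primary_agent(request):
    """Parse a plain-language request and delegate role-specific subtasks."""
    text = request.lower()
    period = "Q2" if "q2" in text else "current period"
    plan = []
    if "close the books" in text:
        plan.append({"role": "close_books", "period": period})
    if "ebitda" in text:
        plan.append({"role": "report", "period": period})
    return [WORKERS[t["role"]](t) for t in plan]  # delegate and collect results

print(primary_agent("Close the books for Q2 and generate a report on EBITDA"))
```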
This leads to a few key considerations for the years to come (figure 2):
1. AI-to-AI Communication
Agents will likely have a more efficient way of communicating with each other than human language, as we don’t need human-imitating chatbots talking to each other.
Better AI-to-AI communication can enhance outcomes, as fewer people will need to become experts to benefit from AI. Rather, AI can adapt to each person’s communication style.
2. Job Displacement and Creation
Some claim that roles such as prompt engineer could become obsolete.
However, the AI expertise of those employees will remain pertinent as they focus on managing, training, and collaborating with AI agents as they do with LLMs today.
For example:
A lean IT team with AI experts might build the agents it needs in a sort of “AI factory” for the enterprise.
The significant shift in the remaining workforce’s skills and education may ultimately reward more human skills like creativity and design, as mentioned in previous Tech Trends.
3. Privacy and Security
The proliferation of agents with system access is likely to raise broad concerns about cybersecurity.
This will only become more important as time progresses and more of our data is accessed by AI systems.
New paradigms for risk and trust will be required to make the most out of applying AI agents.
4. Energy and Resources
AI’s energy consumption is a growing concern.
To mitigate environmental impacts, future AI development will need to balance performance with sustainability.
It will need to take advantage of improvements in liquid neural networks or other efficient forms of training AI—not to mention the hardware needed to make all of this work, as we discuss in “Hardware is Eating the World.”
5. Leadership for the Future
AI has transformative potential, as everyone has heard plenty over the last year, but only insofar as leadership allows.
Applying AI as a faster way of doing things the way they’ve always been done will result in:
• At best: Missed potential
• At worst: Amplified biases
Imaginative, courageous leaders should dare to take AI from calcified best practices to the creation of “next practices,” where we find new ways of organizing ourselves and our data toward an AI-enabled world.
Future Considerations: Data, Data, and More Data
When it comes to AI, enterprises will likely have the same considerations in the future that they do today:
Data, data, and data.
Until AI systems can reach artificial general intelligence or learn as efficiently as the human brain, they will remain hungry for more data and inputs to help them be more powerful and accurate.
Steps taken today to organize, streamline, and protect enterprise data could pay dividends for years to come, as data debt could one day become the biggest portion of technical debt.
Such groundwork should also help enterprises prepare for the litany of regulatory challenges and ethical uncertainties (such as data collection and use limitations, fairness concerns, and lack of transparency) that come with shepherding this new, powerful technology into the future.
The stakes of “garbage in, garbage out” are only going to grow:
It would be much better to opt for genius in, genius squared.
Hardware is Eating the World
After years of “software eating the world,” it’s hardware’s turn to feast.
We previewed in the computation chapter of Tech Trends 2024 that as Moore’s Law comes to its supposed end, the promise of the AI revolution increasingly depends on access to the appropriate hardware.
Case in point: NVIDIA is now one of the world’s most valuable (and watched) companies, as specialized chips become an invaluable resource for AI computation workloads.¹
According to Deloitte research based on a World Semiconductor Trade Statistics forecast, the market for chips used only for generative AI is projected to reach over US$50 billion this year.²
A Critical Hardware Use Case: AI-Embedded End-User and Edge Devices
Take personal computers (PCs), for instance. For years, enterprise laptops have been commodified. But now, we may be on the cusp of a significant shift in computing, thanks to AI-embedded PCs.
Companies like AMD, Dell, and HP are already touting the potential for AI PCs to:
• “Future-proof” technology infrastructure
• Reduce cloud computing costs
• Enhance data privacy
With access to offline AI models for image generation, text analysis, and speedy data retrieval, knowledge workers could be supercharged by faster, more accurate AI.
That being said, enterprises should be strategic about refreshing end-user computation on a large scale—there’s no use wasting AI resources that are limited in supply.
The Cost of Advancements: Sustainability in Data Centers
Of course, all of these advancements come at a cost.
Data centers are a new focus of sustainability as the energy demands of large AI models continue to grow.⁴
The International Energy Agency has suggested that AI will significantly increase data centers' electricity consumption by 2026, adding demand equivalent to Sweden's or Germany's annual consumption.⁵
A recent Deloitte study on powering AI estimates that global data center electricity consumption may triple in the coming decade, largely due to AI demand.⁶
Innovations in energy sources and efficiency are needed to make AI hardware more accessible and sustainable, even as it proliferates and finds its way into everyday consumer and enterprise devices.
Consider this: Unit 1 of the Three Mile Island nuclear plant, which was shut down five years ago for economic reasons, is slated to reopen by 2028 to power data centers with carbon-free electricity.⁷
Looking Forward: AI Hardware in IoT
AI hardware is poised to step beyond IT and into the Internet of Things (IoT).
An increasing number of smart devices could become even more intelligent as AI enables them to analyze their usage and take on new tasks (as agentic AI, mentioned in “What’s next for AI?” advances).
Today: Benign use cases, like AI in toothbrushes.
Tomorrow: Robust potential, like AI in lifesaving medical devices.
The true power of hardware could be unlocked when smarter devices bring about a step change in our relationship with robotics.
Now: Chips Ahoy!
A generation of technologists has been taught to believe software is the key to return on investment, given its scalability, ease of updates, and intellectual property protections.⁹
But now, hardware investment is surging as computers evolve from calculators to cogitators.¹⁰
We wrote last year that specialized chips like graphics-processing units (GPUs) were becoming the go-to resources for training AI models.
In its 2024 TMT Predictions report, Deloitte estimated that total AI chip sales in 2024 would be 11% of the predicted global chip market of US$576 billion.¹¹
Growing from roughly US$50 billion today, the AI chip market is forecasted to reach up to US$400 billion by 2027, though a more conservative estimate is US$110 billion (figure 1).
Large Tech Companies and the Growing Demand for AI Hardware
Large tech companies are driving a portion of this demand, as they may build their own AI models and deploy specialized chips on-premises. However, enterprises across industries are seeking compute power to meet their IT goals.
For instance, according to a Databricks report, the financial services industry has had the highest growth in GPU usage, at 88% over the past six months, in running large language models (LLMs) that tackle fraud detection and wealth management.
All of this demand for GPUs has outpaced capacity. In today’s iteration of the Gold Rush, the companies providing “picks and shovels,” or the tools for today’s tech transformation, are winning big.
NVIDIA’s CEO Jensen Huang has noted that cloud GPU capacity is mostly filled, but the company is also rolling out new chips that are significantly more energy-efficient than previous iterations. Hyperscalers are buying up GPUs as they roll off the production line, spending almost US$1 trillion on data center infrastructure to accommodate the demand from clients who rent GPU usage. All the while, the energy consumption of existing data centers is pushing aging power grids to the brink globally.
New Chips for a New Era: Neural Processing Units (NPUs)
Understandably, enterprises are looking for new solutions. While GPUs are crucial for handling the high workloads of LLMs or content generation, and central processing units are still table stakes, neural processing units (NPUs) are now in vogue.
NPUs, which mimic the brain’s neural network, can accelerate smaller AI workloads with greater efficiency and lower power demands. These chips enable enterprises to:
• Shift AI applications away from the cloud
• Apply AI locally to sensitive data that can’t be hosted externally
This new breed of chip is a crucial part of the future of embedded AI.
Vivek Mohindra, senior vice president of corporate strategy at Dell Technologies, notes:
“Of the 1.5 billion PCs in use today, 30% are four years old or more. None of these older PCs have NPUs to take advantage of the latest AI PC advancements.”
A major refresh of enterprise hardware may be on the horizon.
As NPUs enable end-user devices to run AI offline and allow models to become smaller to target specific use cases, hardware may once again become a differentiator for enterprise performance.
AI’s Transformative Potential
In a recent Deloitte study:
• 72% of respondents believe generative AI’s impact on their industry will be “high to transformative.”
Once AI becomes mainstream thanks to advancements in hardware, that number may edge closer to 100%.
New: Infrastructure is Strategic Again
The heady cloud-computing highs of assumed unlimited access are giving way to a resource-constrained era.
After being relegated to a utility for years, enterprise infrastructure (e.g., PCs) is once again strategic.
Specifically, specialized hardware will likely be crucial to three significant areas of AI growth:
1. AI-embedded devices and the Internet of Things (IoT)
2. Data centers
3. Advanced physical robotics
While the impact on robotics may occur over the next few years, enterprises will likely face decisions about the first two areas in the next 18 to 24 months.
1. Edge Computing Footprint
By 2025, more than 50% of data could be generated by edge devices.
As NPUs proliferate, more devices could run AI models without relying on the cloud. This trend is especially relevant as generative AI model providers focus on creating smaller, more efficient models for specific tasks.
With quicker response times, decreased costs, and greater privacy controls, hybrid computing (a mix of cloud and on-device AI workloads) may become essential for many enterprises. Hardware manufacturers are betting on it.
According to Dell Technologies’ Mohindra:
“Processing AI at the edge is one of the best ways to handle the vast amounts of data required. When you consider latency, network resources, and just sheer volume, moving data to a centralized compute location is inefficient, ineffective, and not secure. It’s better to bring AI to the data, rather than bring the data to AI.”
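A sketch of the hybrid-computing decision follows: keep a workload on-device when privacy or latency demands it and the local NPU can fit the model, otherwise fall back to the cloud. The thresholds and fields are illustrative assumptions, not any vendor's policy.

```python
from dataclasses import dataclass

@dataclass
class Workload:
    model_size_gb: float   # memory footprint of the model to run
    sensitive_data: bool   # data that cannot leave the device
    max_latency_ms: int    # response-time budget

NPU_MEMORY_GB = 8          # illustrative capacity of an AI PC's NPU
CLOUD_ROUND_TRIP_MS = 150  # illustrative network round-trip time

def place(w: Workload) -> str:
    """Prefer the edge for private or latency-critical work that fits locally."""
    fits_locally = w.model_size_gb <= NPU_MEMORY_GB
    must_stay_local = w.sensitive_data or w.max_latency_ms < CLOUD_ROUND_TRIP_MS
    if must_stay_local and fits_locally:
        return "edge"
    if must_stay_local:
        return "edge (use a smaller, distilled model)"
    return "cloud"

print(place(Workload(3.0, sensitive_data=True, max_latency_ms=50)))    # edge
print(place(Workload(40.0, sensitive_data=False, max_latency_ms=500))) # cloud
```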
2. The Hardware Refresh is Coming
One major bank predicts that AI PCs will account for more than 40% of PC shipments by 2026.
Similarly, nearly 15% of 2024 smartphone shipments are expected to be capable of running LLMs or image-generation models.
Alex Thatcher, senior director of AI PC experience and cloud clients at HP, compares this hardware refresh to the major transition from command-line inputs to graphical user interfaces in the 1990s:
“The software has fundamentally changed, replete with different tools and ways of collaborating. You need hardware that can accelerate that change and make it easier for enterprises to create and deliver AI solutions.”
Apple and Microsoft have also fueled this impending hardware refresh by embedding AI into their devices this year.
Strategic Hardware Adoption
As hardware choices proliferate, good governance will be crucial. Enterprises need to answer key questions:
• How many employees need next-generation devices?
• Which areas of the business will benefit most from these advancements?
Chip manufacturers are racing to improve AI horsepower, but enterprises can’t afford to refresh their entire edge footprint with every new advancement.
Instead, businesses should adopt a tiered strategy to ensure these devices are deployed where they can have the greatest impact.
IT, amplified: AI elevates the reach (and remit) of the tech function
As the tech function shifts from leading digital transformation to leading AI transformation, forward-thinking leaders are using this as an opportunity to redefine the future of IT.
Much has been said, including within the pages of Tech Trends, about the potential for artificial intelligence to revolutionize business use cases and outcomes. Nowhere is this more true than in the end-to-end life cycle of software engineering and the broader business of information technology, given generative AI’s ability to write code, test software, and augment tech talent in general.
Deloitte research has shown that tech companies at the forefront of this organizational change are ready to realize the benefits: They are twice as likely as their more conservative peers to say generative AI is transforming their organization now or will within the next year.
We wrote in a Tech Trends 2024 article that enterprises need to reorganize their developer experiences to help IT teams achieve the best results. Now, the AI hype cycle has placed an even greater focus on the tech function’s ways of working. IT has long been the lighthouse of digital transformation in the enterprise, but it must now take on AI transformation. Forward-thinking IT leaders are using the current moment as a once-in-a-generation opportunity to redefine roles and responsibilities, set investment priorities, and communicate value expectations.
More importantly, by playing this pioneering role, chief information officers can help inspire other technology leaders to put AI transformation into practice.
After years of enterprises pursuing lean IT and everything-as-a-service offerings, AI is sparking a shift away from virtualization and austere budgets. Gartner predicts that “worldwide IT spending is expected to total $5.26 trillion in 2024, an increase of 7.5% from 2023.”
As we discuss in “Hardware is eating the world,” hardware and infrastructure are having a moment, and enterprise IT spending and operations may shift accordingly. As both traditional AI and generative AI become more capable and ubiquitous, each of the phases of tech delivery may see a shift from human in charge to human in the loop. Organizations need a clear strategy in place before that occurs.
Based on Deloitte analysis, over the next 18 to 24 months, IT leaders should plan for AI transformation across five key pillars:
1. Engineering
2. Talent
3. Cloud financial operations (FinOps)
4. Infrastructure
5. Cyber risk
This trend may usher in a new type of lean IT
If commercial functions see an increased number of citizen developers or digital agents that can spin up applications on a whim, the role of the IT function may shift from building and maintaining to orchestrating and innovating.
In that case, AI may not only be undercover, as we indicate in the introduction to this year’s report, but may also be overtly in the boardroom, overseeing tech operations in line with human needs.
Now: Spotlight—and higher spending—on IT
For years, IT has been under pressure to streamline sprawling cloud spend and curb costs. Since 2020, however, investments in tech have been on the rise thanks to pent-up demand for collaboration tools and the pandemic-era emphasis on digitalization.
According to Deloitte research:
• From 2020 to 2022, the global average technology budget as a percentage of revenue jumped from 4.25% to 5.49%, roughly double the increase seen from 2018 to 2020.
• In 2024, US companies’ average budget for digital transformation as a percentage of revenue is 7.5%, with 5.4% coming from the IT budget.
As demand for AI sparks another increase in spending, the finding from Deloitte’s 2023 Global Technology Leadership Study continues to ring true: Technology is the business, and tech spend is increasing as a result.
Today, enterprises are grappling with the new relevance of hardware, data management, and digitization in ramping up their usage of AI and realizing its value potential.
In Deloitte’s Q2 State of Generative AI in the Enterprise report, businesses that rated themselves as having “very high” levels of expertise in generative AI were increasing their investment in hardware and cloud consumption much more than the average enterprise.
Overall, 75% of organizations surveyed have increased their investments around data-life-cycle management due to generative AI.
Tech investment strategies are critical
These figures point to a common theme: To realize the highest impact from gen AI, enterprises likely need to accelerate their cloud and data modernization efforts.
AI has the potential to deliver efficiencies in cost, innovation, and a host of other areas, but the first step to accruing these benefits is for businesses to focus on making the right tech investments.
Because of these crucial investment strategies, the spotlight is on tech leaders who are paving the way.
According to Deloitte research:
• Over 60% of US-based technology leaders now report directly to their chief executives, an increase of more than 10 percentage points since 2020.
This is a testament to the tech leader’s increased importance in setting the AI strategy rather than simply enabling it.
Far from a cost center, IT is increasingly being seen as a differentiator in the AI age, as CEOs, following market trends, are keen on staying abreast of AI’s adoption in their enterprise.
The future of IT: Leaner, more integrated, and faster
John Marcante, former global CIO of Vanguard and US CIO-in-residence at Deloitte, believes AI will fundamentally change the role of IT.
He says:
“The technology organization will be leaner, but have a wider purview. It will be more integrated with the business than ever. AI is moving fast, and centralization is a good way to ensure organizational speed and focus.”
IT is gearing up for transformation
As IT gears up for the opportunity presented by AI—perhaps the opportunity that many tech leaders and employees have waited for—changes are already underway in how the technology function organizes itself and executes work.
The stakes are high, and IT is due for a makeover.
四、IT能力大升级
随着技术职能从引领数字化转型转向引领AI转型,前瞻性的领导者正在利用这一机会重新定义IT的未来。
AI对IT的全面影响:软件工程与技术职能
关于人工智能如何彻底改变业务场景和结果,业界已有许多讨论。《科技趋势》多次提到这一点,而在软件工程全生命周期和信息技术业务中,这一点尤为真实。
生成式AI能够编写代码、测试软件并全面增强技术团队的能力,这些优势正在改变IT的工作方式。
根据德勤的研究,走在这一组织变革前沿的科技公司,已经准备好享受这一红利:
它们比更保守的同行企业更有可能表示生成式AI正在或即将在一年内改变其组织。
我们在《科技趋势2024》中提到,企业需要重新组织开发者的工作体验,帮助IT团队取得更好的成果。如今,AI的热潮更进一步,将焦点聚集在IT职能的工作方式上。
IT长期以来一直是企业数字化转型的灯塔,但现在,它必须承担起AI转型的责任。
前瞻性的IT领导者正在将这一时刻视为百年难得的机会,通过重新定义角色与职责、设定投资优先级和传递价值预期,全面推动组织变革。更重要的是,通过扮演这一先锋角色,**首席信息官(CIO)**可以激励其他技术领导者将AI转型付诸实践。
Tech spending trends in the AI era
Against enterprises' long-running pursuit of lean IT and everything-as-a-service, AI is triggering a shift away from virtualization and shrinking budgets toward new investment.
Gartner predicts that global IT spending will reach US$5.26 trillion in 2024, up 7.5% from 2023.
As we discuss in "Hardware is eating the world," hardware and infrastructure are moving into the spotlight, and enterprise IT spending and operations are likely to change accordingly. As traditional and generative AI become more powerful and pervasive, each stage of technology delivery may shift from human-led to human-in-the-loop.
Enterprises need a clear strategy in place before that shift arrives. Based on Deloitte's analysis, IT leaders should build their AI transformation plans for the next 18 to 24 months around five core pillars:
1. Engineering
2. Talent
3. Cloud financial operations (FinOps)
4. Infrastructure
5. Cyber risk
The future of IT: From builders to innovators
Over the next decade, this trend could give rise to a new lean IT model. If more citizen developers emerge within business functions, or digital agents can generate applications on demand, the IT function's role may shift from building and maintaining to orchestrating and innovating.
In that world, AI is not merely an enabler humming in the background; it may even take part in board-level strategic decisions, aligning with human needs and overseeing technology operations.
New: An AI boost for IT
Over the next 18 to 24 months, the nature of the IT function is likely to change as enterprises increasingly employ generative AI. Deloitte’s foresight analysis suggests that, by 2027, even in the most conservative scenario, gen AI will be embedded into every company’s digital product or software footprint (figure 1), as we discuss across five key pillars.
Engineering
In the traditional software development life cycle, manual testing, inexperienced developers, and disparate tool environments can lead to inefficiencies, as we've discussed in prior Tech Trends. Fortunately, AI is already having an impact on these areas. AI-assisted code generation, automated testing, and rapid data analytics all free up developers' time for innovation and feature development. The productivity gains from AI-assisted coding are estimated to be worth US$12 billion in the United States alone.
At Google, AI tools are being rolled out internally to developers. In a recent earnings call, CEO Sundar Pichai said that around 25 percent of the new code at the technology giant is developed using AI. Shivani Govil, senior director of product management for developer products, believes that “AI can transform how engineering teams work, leading to more capacity to innovate, less toil, and higher developer satisfaction. Google’s approach is to bring AI to our users and meet them where they are—by bringing the technology into products and tools that developers use every day to support them in their work. Over time, we can create even tighter alignment between the code and business requirements, allowing faster feedback loops, improved product market fit, and better alignment to the business outcomes.”
In another example, a health care company used an AI-powered COBOL code assistant to enable a junior developer with no experience in the programming language to generate an explanation file with 95% accuracy.
As Deloitte recently stated in a piece on engineering in the age of gen AI, the developer role is likely to shift from writing code to defining the architecture, reviewing code, and orchestrating functionality through contextualized prompt engineering. Tech leaders should anticipate human-in-the-loop code generation and review to be the standard over the next few years of AI adoption.
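To make that shift concrete, here is a minimal sketch of the human-in-the-loop gating described above: an AI-drafted change must pass automated screening before a human reviewer makes the final call. All three steps are hypothetical stubs standing in for whatever model API, test runner, and review tooling an organization actually uses.

```python
# Minimal sketch of human-in-the-loop AI code generation.
# Every step is a hypothetical stub; the point is the gating order.

def generate(task: str) -> str:
    # Stand-in for a real code-generation model call.
    return f"def solve():\n    # TODO: implement {task!r}\n    pass\n"

def tests_pass(code: str) -> bool:
    # Stand-in for applying the change and running the test suite.
    return "def " in code  # placeholder check only

def human_approves(code: str) -> bool:
    # Stand-in for a code review: a human sees the draft and decides.
    print(code)
    return input("Approve this change? [y/N] ").strip().lower() == "y"

def pipeline(task: str) -> str | None:
    code = generate(task)        # 1. AI drafts the change
    if not tests_pass(code):     # 2. automation screens it first
        return None
    return code if human_approves(code) else None  # 3. human decides

if __name__ == "__main__":
    pipeline("parse supplier invoices")
```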
Talent
Technology executives surveyed by Deloitte last year noted that they struggle to hire workers with critical IT backgrounds in security, machine learning, and software architecture, and that they are forced to delay funded projects due to a shortage of appropriately skilled talent. As AI becomes the newest in-demand skill, many companies may not be able to find all the talent they need, leading to a hiring gap in which nearly 50% of AI-related positions cannot be filled.
As a result, tech leaders should focus on upskilling their own talent, another area where AI can help. Consider the potential benefits of:
• AI-powered skills gap analyses and recommendations
• Personalized learning paths
• Virtual tutors for on-demand learning
For example, Bayer, the life sciences company, has used generative AI to summarize procedural documents and generate rich media such as animations for e-learning. Similarly, AI could generate documentation to help a new developer understand legacy technology and then create an associated learning podcast and exam for the same developer.
At Google, developers thrive on hands-on experience and problem-solving, so leaders are keen to provide AI learning and tools (like coding assistants) that meet developers where they are on their learning journey. “We can use AI to enhance learning, in context with emerging technologies, in ways that anticipate and support the rapidly changing skills and knowledge required to adapt to them,” says Sara Ortloff, senior director of developer experience at Google.
As automation increases, tech talent will take on more of an oversight role, with greater capacity to focus on innovation that improves the bottom line. This could also help attract talent: According to Deloitte research, the biggest incentive drawing tech talent to new opportunities is the work they would do in the role.
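As a rough illustration of the first two bullets above, the following sketch reduces an AI-powered skills-gap analysis to its core data step: compare a role's required skills with a worker's current ones, then map the gap to a personalized learning path. The role profile and course catalog are invented for illustration; a production system would draw both from HR data and an AI recommender.

```python
# Minimal sketch of a skills-gap analysis feeding a learning path.
# Role profiles and the course catalog are illustrative only.
ROLE_SKILLS = {
    "ml_engineer": {"python", "mlops", "prompt engineering", "security"},
}
COURSES = {  # skill -> course, standing in for an AI recommender
    "mlops": "MLOps Fundamentals",
    "prompt engineering": "Prompting for Engineers",
    "security": "Secure-by-Design Basics",
}

def learning_path(role: str, current_skills: set[str]) -> list[str]:
    gap = ROLE_SKILLS[role] - current_skills  # the skills gap itself
    return sorted(COURSES[skill] for skill in gap if skill in COURSES)

print(learning_path("ml_engineer", {"python", "security"}))
# -> ['MLOps Fundamentals', 'Prompting for Engineers']
```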
Cloud Financial Operations
Runaway spending became a common problem in the cloud era when resources could be provisioned with a click. Hyperscalers have offered data and tooling for finance teams and CIOs to keep better track of their team’s cloud usage, but many of these FinOps tools still require manual budgeting and offer limited visibility across disparate systems.
The power of AI enables organizations to be more informed, proactive, and effective with their financial management. For example:
• Real-time cost analysis
• Robust pattern detection
• Resource allocation across systems
AI can help enterprises identify more cost-saving opportunities through better predictions and tracking.
As AI demand increases in the coming years, enterprises are likely to see higher cloud costs. However, applying AI to FinOps can help justify those AI investments and optimize costs elsewhere.
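As a minimal sketch of what that pattern detection can look like in practice, the following flags days whose cloud spend deviates sharply from the period's average. The threshold and figures are illustrative; production FinOps tooling would use richer models and live billing data.

```python
# Flag days whose spend deviates sharply from the period's mean.
# Data and threshold are illustrative only.
from statistics import mean, stdev

def flag_spend_anomalies(daily_spend: list[float], z: float = 3.0) -> list[int]:
    mu, sigma = mean(daily_spend), stdev(daily_spend)
    if sigma == 0:
        return []
    return [i for i, x in enumerate(daily_spend) if abs(x - mu) / sigma > z]

spend = [102, 98, 101, 97, 103, 99, 340]   # day 6 is a runaway job
print(flag_spend_anomalies(spend, z=2.0))  # -> [6]
```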
Infrastructure
Across the broad scope of IT infrastructure—from toolchains to service management—organizations haven’t seen as much automation as they’d like. Just a few years ago, studies estimated that nearly half of large enterprises were handling key tasks like security, compliance, and service management manually.
The missing ingredient?
Automation that can learn, improve, and react to the changing demands of a business.
Now, this is becoming possible. Automated resource allocation, predictive maintenance, and anomaly detection could all be implemented in systems that natively understand their own real-time status and can take action accordingly.
This emerging view of IT is referred to as “autonomic” IT, inspired by the autonomic nervous system in the human body that adjusts dynamically to internal and external stimuli. In such a system, infrastructure takes care of itself, surfacing only issues that require human intervention.
For instance, eBay is already leveraging generative AI to scale infrastructure and analyze massive amounts of customer data, enabling impactful changes to its platform.
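The shape of such an autonomic system can be sketched in a few lines: a monitor-analyze-act loop that resolves routine deviations itself and surfaces only the exceptions that need a person. The telemetry source, thresholds, and scaling action below are all illustrative stand-ins.

```python
# Minimal sketch of an autonomic monitor-analyze-act loop.
import random
import time

def read_cpu_utilization() -> float:
    return random.uniform(0.2, 1.0)  # stand-in for real telemetry

def scale_out() -> None:
    print("autonomic action: adding capacity")

def control_loop(iterations: int = 5) -> None:
    for _ in range(iterations):
        load = read_cpu_utilization()  # monitor
        if load > 0.95:                # analyze: beyond self-healing
            print(f"escalate to human: sustained load at {load:.0%}")
        elif load > 0.80:              # act: routine, handled autonomously
            scale_out()
        time.sleep(0.1)                # real loops run continuously

control_loop()
```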
Cybersecurity
Although AI simplifies and enhances many IT processes, it also introduces greater complexity in cyber risks. As we discussed last year, generative AI and synthetic media open up new attack surfaces, including:
• Phishing
• Deepfakes
• Prompt injection attacks
As AI proliferates and digital agents become the newest B2B representatives, these risks may worsen.
How enterprises can respond:
• Data authentication: For example, SWEAR, a security company, has pioneered a way to verify digital media using blockchain.
• Data masking
• Incident response
• Automated policy generation
Generative AI can optimize cybersecurity responses and strengthen defenses against attacks.
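Taking the data-masking item above as one concrete illustration, the sketch below redacts obvious personally identifiable information before text ever reaches an external model. Real deployments rely on dedicated classifiers and policy engines rather than two regular expressions; the patterns here are illustrative.

```python
# Redact obvious PII before text reaches an external AI model.
# Two illustrative patterns only; real tooling goes much further.
import re

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def mask(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(mask("Contact jane.doe@example.com, SSN 123-45-6789."))
# -> Contact [EMAIL], SSN [SSN].
```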
Rethinking IT Resources
As technology teams adapt to these changes and challenges, many will shift their focus to innovation, agility, and growth enabled by AI. Teams can:
• Streamline IT workflows
• Reduce the need for manual intervention or offshoring
• Focus on higher-value activities
This could lead to a reallocation of IT resources across the board.
As Ian Cairns, CEO of Freeplay, notes:
“As with any major platform shift, the businesses that succeed will be the ones that can rethink and adapt how they work and build software for a new era.”
The new math: Solving cryptography in an age of quantum
Quantum computers are likely to pose a severe threat to today’s encryption practices. Updating encryption has never been more urgent.
Cybersecurity professionals already have a lot on their minds. From run-of-the-mill social engineering hacks to emerging threats from AI-generated content, there’s no shortage of immediate concerns. But while focusing on the urgent, they could be overlooking an important threat vector: the potential risk that a cryptographically relevant quantum computer (CRQC) will someday be able to break much of the current public-key cryptography that businesses rely upon. Once that cryptography is broken, it will undermine the processes that establish online sessions, verify transactions, and assure user identity.
Let’s contrast this risk with the historical response to Y2K, where businesses saw a looming risk and addressed it over time, working backward from a specific time to avert a more significant impact.¹ The potential risk of a CRQC is essentially the inverse case: The effect is expected to be even more sweeping, but the date at which such a cryptographically relevant quantum computer will become available is unknown. Preparing for CRQCs is generally acknowledged to be highly important but is often low on the urgency scale because of the unknown timescale. This has created a tendency for organizations to defer the activities necessary to prepare their cybersecurity posture for the arrival of quantum computers.
“Unless it’s here, people are saying, ‘Yeah, we’ll get to it, or the vendors will do it for me. I have too many things to do and too little budget,’” says Mike Redding, chief technology officer at cybersecurity company Quantropi.² “Quantum may be the most important thing ever, but it doesn’t feel urgent to most people. They’re just kicking the can down the road.”
This complacent mindset could breed disaster because the question isn’t if quantum computers are coming—it’s when. Most experts consider the exact time horizon for the advent of a CRQC to be irrelevant when it comes to encryption. The consensus is that one will likely emerge in the next five to 10 years, but how long will it take organizations to update their infrastructures and third-party dependencies? Eight years? Ten years? Twelve?
Given how long it took to complete prior cryptographic upgrades, such as migrating from cryptographic hashing algorithms SHA1 to SHA2, it is prudent to start now.
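The mechanical step in such a migration is usually the easy part, as this small example using Python's standard hashlib module shows; the years go into finding and coordinating every system that still depends on the old algorithm.

```python
# The SHA-1 to SHA-2 swap in miniature, via the standard library.
import hashlib

payload = b"transaction record"
legacy = hashlib.sha1(payload).hexdigest()      # SHA-1: deprecated
upgraded = hashlib.sha256(payload).hexdigest()  # SHA-2 replacement
print(legacy)
print(upgraded)
```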
In a recent report, the US Office of Management and Budget said, “It is likely that a CRQC will be able to break some forms of cryptography that are now commonly used throughout government and the private sector. A CRQC is not yet known to exist; however, steady advancements in the quantum computing field may yield a CRQC in the coming decade. Accordingly … federal agencies must bolster the defense of their existing information systems by migrating to the use of quantum-resistant public-key cryptographic systems.”³
The scale of the problem is potentially massive, but fortunately, tools and expertise exist today to help enterprises address it. Recently released postquantum cryptography (PQC) algorithm standards from the US National Institute of Standards and Technology (NIST) could help to neutralize the problem before it becomes costly,⁴ and many other governments around the world are also working on this issue.⁵
Furthermore, a reinvigorated cyber mindset could set enterprises on the road to better security.
Now: Cryptography everywhere
Two of the primary concerns for cybersecurity teams are technology integrity and operational disruption. Undermining the digital signatures and cryptographic key exchanges that enable data encryption is at the heart of those fears. Losing the type of cryptography that can guarantee digital signatures are authentic and unaltered would likely deal a major blow to the integrity of communications and transactions. Additionally, losing the ability to transmit information securely could potentially upend most organizational processes.
Enterprises are starting to become aware of the risks quantum computing poses to their cybersecurity. According to Deloitte's Global Future of Cyber survey, 52% of organizations are currently assessing their exposure and developing quantum-related risk strategies. Another 30% say they are already taking decisive action to implement solutions to these risks.
“The scale of this problem is sizeable, and its impact in the future is imminent. There may still be time when it hits us, but proactive measures now will help avoid a crisis later. That is the direction we need to take,” says Gomeet Pant, group vice president of security technologies for the India-based division of a large industrial products firm.
Cryptography is now so pervasive that many organizations may need help identifying all the places it appears. It’s in applications they own and manage, and in their partner and vendor systems. Understanding the full scope of the organizational risk that a CRQC would pose to cryptography (figure 1) requires action across a wide range of infrastructures, supply chains, and applications. Cryptography used for data confidentiality and digital signatures to maintain the integrity of emails, macros, electronic documents, and user authentication would all be threatened, undermining the integrity and authenticity of digital communications.
To make matters worse, enterprises’ data may already be at risk, even though there is no CRQC yet. There’s some indication that bad actors are engaging in what’s known as “harvest now, decrypt later” attacks—stealing encrypted data with the notion of unlocking it whenever more mature quantum computers arrive. Organizations’ data will likely continue to be under threat until they upgrade to quantum-resistant cryptographic systems.
“We identified the potential threat to customer data and the financial sector early on, which has driven our groundbreaking work toward quantum-readiness,” said Yassir Nawaz, director of the emerging technology security organization at JP Morgan. “Our initiative began with a comprehensive cryptography inventory and extends to developing PQC solutions that modernize our security through crypto-agile processes.”
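An inventory like the one Nawaz describes typically begins with a crude first pass over source code. The sketch below walks a Python source tree and records where well-known algorithm names appear; the pattern list is illustrative, and a real inventory would also cover certificates, protocol configurations, libraries, and vendor systems.

```python
# First-pass cryptography inventory: find algorithm names in source.
# The pattern list is illustrative, not exhaustive.
import pathlib
import re

ALGORITHMS = re.compile(r"\b(RSA|ECDSA|DSA|SHA-?1|MD5|3DES)\b", re.IGNORECASE)

def inventory(root: str) -> dict[str, list[str]]:
    hits: dict[str, list[str]] = {}
    for path in pathlib.Path(root).rglob("*.py"):  # extend to configs, certs
        lines = path.read_text(errors="ignore").splitlines()
        for lineno, line in enumerate(lines, start=1):
            for algo in ALGORITHMS.findall(line):
                hits.setdefault(algo.upper(), []).append(f"{path}:{lineno}")
    return hits

print(inventory("."))  # point at the repository root to be scanned
```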
Given the scale of the issues, upgrading to quantum-safe cryptography could take years, maybe even a decade or more, and we’re likely to see cryptographically relevant quantum computers sometime within that range. The potential threat posed by quantum to cryptography may feel over the horizon, but the time to start addressing it is now (figure 2).
“It is important that organizations start preparing now for the potential threat that quantum computing presents,” said Matt Scholl, computer security division chief at NIST. “The journey to transition to the new postquantum-encryption standards will be long and will require global collaboration along the way. NIST will continue to develop new post-quantum cryptography standards and work with industry and government to encourage their adoption.”
The intelligent core: AI changes everything for core modernization
For years, core and enterprise resource planning systems have been the single source of truth for enterprises’ systems of records. AI is fundamentally challenging that model.
Many core systems providers have gone all in on artificial intelligence and are rebuilding their offerings and capabilities around an AI-first model. The integration of AI into core enterprise systems represents a significant shift in how businesses operate and leverage technology for competitive advantage.
It’s hard to overstate AI’s transformative impact on core systems. For years, the core and the enterprise resource planning tools that sit on top of it were most businesses’ systems of record—the single source of truth. If someone had a question about any aspect of operations, from suppliers to customers, the core had the answer.
AI is not simply augmenting this model; it’s fundamentally challenging it. AI tools have the ability to reach into core systems and learn about an enterprise’s operations, understand its process, replicate its business logic, and so much more. This means that users don’t necessarily have to go directly to core systems for answers to their operational questions, but rather can use whatever AI-infused tool they’re most familiar with. Thus, this transformation goes beyond automating routine tasks to fundamentally rethinking and redesigning processes to be more intelligent, efficient, and predictive. It has the potential to unleash new ways of doing business by arming workers with the power of AI along with information from across the enterprise.
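A minimal sketch of that pattern, with invented data and a single invented tool function: the AI-infused layer answers an operational question through a narrow interface over core-system data, so the user never has to query the ERP directly.

```python
# Sketch: an AI front end routes a question to a narrow "tool" over
# core-system data. The records and tool are invented for illustration.
CORE_SYSTEM = {  # stand-in for the ERP system of record
    "inventory": {"valve-a100": 42, "valve-a200": 0},
    "suppliers": {"valve-a200": "Acme Industrial"},
}

def stock_level(part: str) -> str:
    """The kind of governed tool an AI assistant would be allowed to call."""
    qty = CORE_SYSTEM["inventory"].get(part)
    if qty is None:
        return f"No record for {part}."
    if qty == 0:
        supplier = CORE_SYSTEM["suppliers"].get(part, "unknown")
        return f"{part} is out of stock; supplier: {supplier}."
    return f"{part}: {qty} on hand."

# An AI layer would map "Are we out of A200 valves?" to this call:
print(stock_level("valve-a200"))
# -> valve-a200 is out of stock; supplier: Acme Industrial.
```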
No doubt, there will be integration and change management challenges along the way. IT teams will need to invest in the right technology and skills, and build robust data governance frameworks to protect sensitive data. The more AI is integrated with core systems, the more complicated architectures become, and this complexity will need to be managed. Furthermore, teams will need to address issues of trust to help ensure AI systems are handling critical core operations effectively and responsibly.
But tackling these challenges could lead to major gains. Eventually, we expect AI to progress beyond being the new system of record to become a series of agents that not only run analyses and make recommendations but also take action. The ultimate endpoint is autonomous decision-making, enabling enterprises to operate at a pace well beyond what they manage today.
Now: Businesses need more from systems of record
Core systems and, in particular, enterprise resource planning (ERP) platforms are increasingly seen as critical assets for the enterprise. There’s a clear recognition of the value that comes from having one system hold all the information that describes how the business operates. For this reason, the global ERP market is projected to grow at a rate of 11% from 2023 through 2030. This growth is driven by a desire for both greater efficiency and more data-driven decision-making.¹
The challenge is that relatively few organizations are realizing the benefits they expect from these tools. Despite an acknowledgment that a centralized single source of truth is key to achieving greater operational efficiency, many ERP projects don’t deliver. According to Gartner research, by 2027, more than 70% of recently implemented ERP initiatives will fail to fully meet their original business case goals.²
Part of the reason ERP projects fail to align with business goals is that the systems tend to be one-size-fits-all. Businesses have had to mirror their operations to the ERP system's model, and applications across the organization were expected to integrate with it. Because the ERP was the system of record, holding all business data and business logic, organizations acquiesced to these demands, even when they were hard to meet. The result was a certain level of disconnect between the business and the ERP system.
AI is breaking this model. Some enterprises are looking to reduce their reliance on monolithic ERP implementations, and AI is likely to be the tool that lets them do so by opening up data sets and enabling new ways of working.
New: AI augments the core
With some evolution, ERP systems will likely maintain their current position as systems of record. In most large enterprises, they still hold virtually all the business data, and organizations that have spent the last several years implementing ERP systems will likely be reluctant to move on from them.
Orchestrating the platform approach
In this model, today’s core systems become a platform upon which AI innovations are built. However, this prospect raises multiple questions around AI orchestration that IT and business leaders will have to answer. Do they use the modules provided by vendors, use third-party tools, or, in the case of more tech-capable teams, develop their own models? Relying on vendors means waiting for functionality but may come with greater assurance of easy integration.
Another question is how much data to expose to AI. One of the benefits of generative AI is its ability to read and interpret data across different systems and file types. This is where opportunities for new learnings and automation come from, but it could also present privacy and security challenges. In the case of core systems, we’re talking about highly sensitive HR, finance, supplier, and customer information. Feeding this data into AI models without attention to governance could create new risks.
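One way to make that exposure decision explicit is a per-domain allowlist, so sensitive fields never leave the system of record regardless of what the model asks for. The domains and field names below are illustrative assumptions, not any vendor's schema.

```python
# Governing what core-system data an AI model may see: an explicit
# per-domain allowlist. Field names are illustrative only.
ALLOWED_FIELDS = {
    "supplier": {"name", "lead_time_days", "open_orders"},
    "employee": {"role", "department"},  # no pay or personal data
}

def expose(domain: str, record: dict) -> dict:
    allowed = ALLOWED_FIELDS.get(domain, set())
    return {key: value for key, value in record.items() if key in allowed}

record = {"name": "Acme", "lead_time_days": 12, "bank_account": "DE89 3704"}
print(expose("supplier", record))
# -> {'name': 'Acme', 'lead_time_days': 12}
```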
There’s also the question of who should own initiatives to bring AI to the core. This is a highly technical process that demands the skills of IT—but it also supports critical operational functions that the business should be able to put its fingerprints on.
The answer to these questions will likely look different from use case to use case and even enterprise to enterprise. But teams should think about them and develop clear answers before going all in on AI in the core. These answers form the foundation upon which rests the larger benefits of the technology.
“To get the most out of AI, companies should develop a clear strategy anchored in their business goals,” says Eric van Rossum, chief marketing officer for cloud ERP and industries at SAP. “AI shouldn’t be considered as a stand-alone functionality, but rather as an integral, embedded capability in all business processes to support a company’s digital transformation.”
AI enables new ways of working
Forward-looking enterprises are already answering these orchestration questions. Graybar, a wholesale distributor of electrical, industrial, and data communications solutions, is in the middle of a multiyear process of modernizing a 20-year-old core system implementation, which started with upgrades to its HR management tools and is now shifting to ERP modernization. It’s leaning on the best modules available from its core systems vendors when it makes sense, while also layering on third-party integrations and homegrown tools when there’s an opportunity to differentiate its products and services.⁴
The growth of AI presented leaders at the company with an opportunity to not only upgrade its tech stack, but also to think about how to reshape processes to drive new efficiencies and revenue growth. Trust has been a key part of the modernization efforts. The company is rolling out AI in narrowly tailored use cases where tools are applied to ensure reliability, security, and clear business value.