Companies have been racing to deploy generative AI technology into their work since the launch of ChatGPT in 2022.
According to Microsoft and LinkedIn's 2024 Work Trends report, which surveyed 31,000 full-time workers between February and March, close to four in five business leaders believe their company needs to adopt the technology to stay competitive.
But adopting AI in the workplace also presents risks, including reputational, financial, and legal harm. The challenge in combating these risks is that they are ambiguous, and many companies are still trying to understand how to identify and measure them.
Responsibly run AI programs should include strategies for governance, data privacy, ethics, and trust and safety, but experts who study risk say these programs haven't kept up with the pace of innovation.
Efforts to use AI responsibly in the workplace are moving "nowhere near as fast as they should be," said Tad Roselund, a managing director and senior partner at Boston Consulting Group. These programs often require a considerable amount of investment and a minimum of two years to implement, according to BCG.
Investors need to play a more critical role in funding the tools and resources for these programs, according to Navrina Singh, the founder of Credo AI, a governance platform that helps companies comply with AI regulations. Funding for generative AI startups hit $25.2 billion in 2023, according to a report from Stanford's Institute for Human-Centered Artificial Intelligence, but it's unclear how much went to companies that focus on responsible AI.
"The venture capital environment also reflects a disproportionate focus on AI innovation over AI governance," Singh told Business Insider by email. "To adopt AI at scale and speed responsibly, equal emphasis must be placed on ethical frameworks, infrastructure, and tooling to ensure sustainable and responsible AI integration across all sectors."
Legislative efforts have been underway to fill that gap. In March, the EU approved the Artificial Intelligence Act, which sorts AI applications into risk categories and bans those posing unacceptable risks. Meanwhile, the Biden administration signed a sweeping executive order in October demanding greater transparency from major tech companies developing artificial intelligence models.