Reading Comprehension Passage
Imagine this: you’ve carved out an evening to unwind and decide to make a homemade pizza. You assemble your pie, throw it in the oven, and are excited to start eating. But once you get ready to take a bite of your oily creation, you run into a problem: the cheese falls right off. Frustrated, you turn to Google for a solution. “Add some glue,” Google answers. “Mix about 1/8 cup of Elmer’s glue in with the sauce. Non-toxic glue will work.”
So, yeah, don’t do that. As of writing this, though, that’s what Google’s new AI Overviews feature will tell you to do. The feature, while not triggered for every query, scans the web and drums up an AI-generated response. The answer received for the pizza glue query appears to be based on a comment from a user named “fucksmith” in a more than decade-old Reddit thread, and they’re clearly joking.
This is just one of many mistakes cropping up in the new feature that Google rolled out broadly this month. It also claims that former US President James Madison graduated from the University of Wisconsin not once but 21 times, that a dog has played in the NBA, NFL, and NHL, and that Batman is a cop.
Look, Google didn’t promise this would be perfect, and it even slaps a “Generative AI is experimental” label at the bottom of the AI answers. But it’s clear these tools aren’t ready to accurately provide information at scale.
Reading Comprehension Questions
1. What is the new Google feature mentioned in the article?
A. AI assistant
B. AI Overviews
C. AI search
D. AI recipes
2. Where does the answer to the “pizza glue query” mentioned in the article come from?
A. A recent scientific study
B. Google’s official recommendations
C. A user comment on Reddit from more than ten years ago
D. A cookbook
3. Which of the following statements is correct?
A. Google’s new feature is completely accurate.
B. Google’s AI Overviews feature generates answers for every query.
C. Answers generated by the AI Overviews feature are labeled “Generative AI is experimental.”
D. Google recommends using glue to make pizza.
4. According to the article, which of the following incorrect answers did not originate with the AI Overviews feature itself?
A. Glue should be added when making pizza.
B. Former U.S. President James Madison graduated from the University of Wisconsin 21 times.
C. There is a dog that has played in the NBA, NFL, and NHL.
D. Batman is a police officer.
5. Which of the following can be inferred from the article?
A. Google has perfectly solved the problem of AI providing accurate information.
B. AI technology is fully mature in terms of information provision.
C. Google’s AI Overviews feature is experimental and may produce inaccurate answers.
D. Users should rely solely on Google’s AI Overviews feature for information.
Chinese Translation (中文翻譯)
想像一下:你抽出一個晚上來放鬆,並決定自製披薩。你把披薩組裝好,扔進(jìn)烤箱,滿心期待地準(zhǔn)備開吃。但當(dāng)你準(zhǔn)備咬一口你那油膩的作品時,你就會遇到一個問題:奶酪直接掉了下來。沮喪之餘,你向 Google 尋求解決方案?!疤砑右恍┠z水,”Google 回答道,“將大約 1/8 杯的埃爾默牌(Elmer’s)膠水與醬汁混合。無毒膠水就可以了。”
所以,是的,不要這樣做。不過,在撰寫本文時,這就是谷歌新的人工智能概述(AI Overviews)功能會告訴你去做的事情。該功能雖然不會針對每個查詢都觸發(fā),但會掃描網(wǎng)絡(luò)並生成一段由人工智能撰寫的回應(yīng)。關(guān)於披薩膠水查詢所得到的答案,似乎是基於一位名叫“fucksmith”的用戶在十多年前的 Reddit 帖子中的評論,而對方顯然只是在開玩笑。
這只是谷歌本月廣泛推出的這項(xiàng)新功能中出現(xiàn)的眾多錯誤之一。它還聲稱美國前總統(tǒng)詹姆斯·麥迪遜從威斯康星大學(xué)畢業(yè)過不止一次,而是21次;一隻狗曾在NBA、NFL和NHL打過球;蝙蝠俠是一名警察。
你看,谷歌並沒有承諾這會是完美的,它甚至在人工智能答案的底部貼上了“生成式人工智能是實(shí)驗(yàn)性的”標(biāo)籤。但很明顯,這些工具還沒有準(zhǔn)備好大規(guī)模地準(zhǔn)確提供信息。
Answer Key and Explanations
1. Answer: B
Explanation: The passage states, “As of writing this, though, that’s what Google’s new AI Overviews feature will tell you to do,” so the new feature discussed is AI Overviews.
2. Answer: C
Explanation: The passage says the pizza glue answer “appears to be based on a comment from a user named ‘fucksmith’ in a more than decade-old Reddit thread,” that is, a user comment on Reddit from more than ten years ago.
3. Answer: C
Explanation: A is wrong because “Google didn’t promise this would be perfect.” B is wrong because the feature is “not triggered for every query.” C is correct: the passage says Google “even slaps a ‘Generative AI is experimental’ label at the bottom of the AI answers.” D is wrong because the passage explicitly warns readers not to follow the glue suggestion.
4. Answer: A
Explanation: The pizza glue suggestion (A) appears to be based on a joke comment from the Reddit user “fucksmith” in a thread more than a decade old, so it did not originate with AI Overviews itself. B, C, and D are presented only as claims made by the feature, with no outside source given, which makes A the correct choice.
5. Answer: C
Explanation: The passage repeatedly stresses that the feature is experimental and error-prone, noting that “Google didn’t promise this would be perfect” and that it “even slaps a ‘Generative AI is experimental’ label at the bottom of the AI answers,” so C can be inferred. A and B contradict the passage, and D is an absolute claim the passage does not support.