
Talk MP3 + transcript: How does misinformation on social media disrupt democracy, the economy, and the public?

https://online2.tingclass.net/lesson/shi0529/10000/10387/tedyp230.mp3

The TED Audio channel at Tingclass provides MP3 audio of TED talks together with transcripts for English learners. This lesson is the talk MP3 and transcript for "How does misinformation on social media disrupt democracy, the economy, and the public?" We hope you enjoy it!

[Speaker] Sinan Aral

Data scientist, entrepreneur, and investor Sinan Aral reveals how social media disrupts our democracy, our economy, and the public.

[Topic] With misinformation running rampant, how can we protect the truth?

[Transcript]

Translated by Ivana Korom; reviewed by Krystian Aparta

00:14

So, on April 23 of 2013, the Associated Press put out the following tweet on Twitter. It said, "Breaking news: Two explosions at the White House and Barack Obama has been injured." This tweet was retweeted 4,000 times in less than five minutes, and it went viral thereafter.


00:41

Now, this tweet wasn't real news put out by the Associated Press. In fact it was false news, or fake news, that was propagated by Syrian hackers that had infiltrated the Associated Press Twitter handle. Their purpose was to disrupt society, but they disrupted much more. Because automated trading algorithms immediately seized on the sentiment on this tweet, and began trading based on the potential that the president of the United States had been injured or killed in this explosion. And as they started tweeting, they immediately sent the stock market crashing, wiping out 140 billion dollars in equity value in a single day.

[Translator's note: "sentiment" here refers to sentiment analysis, the machine-learning task of gauging the emotion of subjective text.]
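The mechanics of that crash are worth pausing on. The talk doesn't detail the trading systems themselves, but the failure mode is easy to sketch: an algorithm scores headlines from trusted handles for sentiment and fires orders on catastrophic ones. Below is a minimal, hypothetical illustration; the keyword weights, threshold, and place_order hook are all invented for this sketch.

```python
# Hypothetical sketch of a headline-driven trading trigger.
# The keyword weights and place_order() hook are illustrative,
# not any real trading system's logic.

NEGATIVE_KEYWORDS = {"explosion": -0.9, "injured": -0.7, "killed": -1.0}
TRUSTED_SOURCES = {"AP", "Reuters"}  # handles the algorithm treats as authoritative

def headline_sentiment(text: str) -> float:
    """Crude lexicon score: sum the weights of negative keywords present."""
    t = text.lower()
    return sum(w for kw, w in NEGATIVE_KEYWORDS.items() if kw in t)

def place_order(side: str, instrument: str, size: int) -> None:
    print(f"{side} {size} {instrument}")  # stand-in for a broker API call

def on_tweet(source: str, text: str) -> None:
    # Only act on trusted sources -- which is exactly why a hijacked
    # @AP account was so damaging: the trust check itself was fooled.
    if source in TRUSTED_SOURCES and headline_sentiment(text) <= -1.5:
        place_order(side="SELL", instrument="SPX_FUTURES", size=1000)

on_tweet("AP", "Breaking: Two Explosions in the White House and Barack Obama is injured")
```

The point of the sketch is the trust assumption: the algorithm never questions the source, so compromising a single verified account is enough to move markets.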

01:25

Robert Mueller, special counsel prosecutor in the United States, issued indictments against three Russian companies and 13 Russian individuals on a conspiracy to defraud the United States by meddling in the 2016 presidential election. And what this indictment tells us as a story is the story of the Internet Research Agency, the shadowy arm of the Kremlin on social media. During the presidential election alone, the Internet Research Agency's efforts reached 126 million people on Facebook in the United States, issued three million individual tweets and 43 hours' worth of YouTube content. All of which was fake -- misinformation designed to sow discord in the US presidential election.


02:21

A recent study by Oxford University showed that in the recent Swedish elections, one third of all of the information spreading on social media about the election was fake or misinformation.


02:35

In addition, these types of social-media misinformation campaigns can spread what has been called "genocidal propaganda," for instance against the Rohingya in Burma, triggering mob killings in India.


02:50

We studied fake news and began studying it before it was a popular term. And we recently published the largest-ever longitudinal study of the spread of fake news online on the cover of "Science" in March of this year. We studied all of the verified true and false news stories that ever spread on Twitter, from its inception in 2006 to 2017. And when we studied this information, we studied verified news stories that were verified by six independent fact-checking organizations. So we knew which stories were true and which stories were false. We can measure their diffusion, the speed of their diffusion, the depth and breadth of their diffusion, how many people become entangled in this information cascade and so on. And what we did in this paper was we compared the spread of true news to the spread of false news. And here's what we found.

[Translator's note: an "information cascade" is the chain formed as people successively pass a piece of information along.]
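A cascade here is the retweet tree a story traces through the network, and the study's measures of size, depth, and breadth are properties of that tree. A rough sketch of the bookkeeping, on a made-up cascade rather than the study's data:

```python
# Sketch: measuring one retweet cascade as a tree.
# Each key is a user; values are the users who retweeted from them.
from collections import deque

cascade = {
    "origin": ["a", "b"],
    "a": ["c", "d", "e"],
    "b": [],
    "c": [], "d": ["f"], "e": [], "f": [],
}

def cascade_metrics(tree, root="origin"):
    size, depth = 0, 0
    breadth_per_level = {}
    queue = deque([(root, 0)])
    while queue:
        node, level = queue.popleft()
        size += 1                              # total users entangled
        depth = max(depth, level)              # longest retweet chain
        breadth_per_level[level] = breadth_per_level.get(level, 0) + 1
        for child in tree.get(node, []):
            queue.append((child, level + 1))
    # breadth = the widest single level of the cascade
    return size, depth, max(breadth_per_level.values())

print(cascade_metrics(cascade))  # (7, 3, 3): 7 users, 3 hops deep, 3 wide
```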

03:48

We found that false news diffused further, faster, deeper and more broadly than the truth in every category of information that we studied, sometimes by an order of magnitude. And in fact, false political news was the most viral. It diffused further, faster, deeper and more broadly than any other type of false news. When we saw this, we were at once worried but also curious. Why? Why does false news travel so much further, faster, deeper and more broadly than the truth?


04:21

The first hypothesis that we came up with was, "Well, maybe people who spread false news have more followers or follow more people, or tweet more often, or maybe they're more often 'verified' users of Twitter, with more credibility, or maybe they've been on Twitter longer." So we checked each one of these in turn. And what we found was exactly the opposite. False-news spreaders had fewer followers, followed fewer people, were less active, less often "verified" and had been on Twitter for a shorter period of time. And yet, false news was 70 percent more likely to be retweeted than the truth, controlling for all of these and many other factors.

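The "70 percent more likely" figure is an odds ratio from a regression that predicts retweeting from a story's veracity while holding follower counts, account age, and the other factors fixed. A toy version of that setup on synthetic data, assuming statsmodels is installed:

```python
# Sketch: estimating how veracity affects retweet odds while
# controlling for account features. Data is synthetic for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
is_false = rng.integers(0, 2, n)        # 1 = false story
followers = rng.lognormal(5, 1, n)      # control variable
account_age = rng.uniform(0, 10, n)     # control variable

# Simulate a world where false news is retweeted more, all else equal.
logit = -1.0 + 0.53 * is_false + 0.0005 * followers + 0.02 * account_age
retweeted = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([is_false, followers, account_age]))
model = sm.Logit(retweeted, X).fit(disp=0)
odds_ratio = np.exp(model.params[1])    # coefficient on is_false
print(f"odds ratio for falsity: {odds_ratio:.2f}")  # ~1.7 => ~70% higher odds
```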

05:01

So we had to come up with other explanations. And we devised what we called a "novelty hypothesis." So if you read the literature, it is well known that human attention is drawn to novelty, things that are new in the environment. And if you read the sociology literature, you know that we like to share novel information. It makes us seem like we have access to inside information, and we gain in status by spreading this kind of information.


05:30

So what we did was we measured the novelty of an incoming true or false tweet, compared to the corpus of what that individual had seen in the 60 days prior on Twitter. But that wasn't enough, because we thought to ourselves, "Well, maybe false news is more novel in an information-theoretic sense, but maybe people don't perceive it as more novel."

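Concretely, an information-theoretic novelty score compares the topic distribution of the incoming tweet with the topic distribution of the user's recent feed; the further apart they are, the more novel the tweet. A minimal sketch using KL divergence, with invented topic mixtures:

```python
# Sketch: information-theoretic novelty as KL divergence between a new
# tweet's topic distribution and the user's 60-day history.
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    p, q = np.asarray(p) + eps, np.asarray(q) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

# Hypothetical mixtures over four topics.
history = [0.40, 0.30, 0.20, 0.10]        # what the user has been seeing
familiar_tweet = [0.35, 0.35, 0.20, 0.10]
novel_tweet = [0.02, 0.03, 0.05, 0.90]

print(kl_divergence(familiar_tweet, history))  # small: low novelty
print(kl_divergence(novel_tweet, history))     # large: high novelty
```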

05:54

So to understand people's perceptions of false news, we looked at the information and the sentiment contained in the replies to true and false tweets. And what we found was that across a bunch of different measures of sentiment -- surprise, disgust, fear, sadness, anticipation, joy and trust -- false news exhibited significantly more surprise and disgust in the replies to false tweets. And true news exhibited significantly more anticipation, joy and trust in reply to true tweets. The surprise corroborates our novelty hypothesis. This is new and surprising, and so we're more likely to share it.

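Reply sentiment of this kind is typically scored by counting words against an emotion lexicon that maps terms to categories like surprise, disgust, or trust. A crude sketch with a tiny invented lexicon (a real study would use a full lexicon of thousands of words):

```python
# Sketch: scoring replies for emotion categories with a word lexicon.
from collections import Counter

LEXICON = {
    "wow": "surprise", "unbelievable": "surprise", "shocking": "surprise",
    "gross": "disgust", "awful": "disgust",
    "expected": "anticipation", "finally": "joy", "reliable": "trust",
}

def emotion_profile(replies):
    counts = Counter()
    for reply in replies:
        for word in reply.lower().split():
            if word in LEXICON:
                counts[LEXICON[word]] += 1
    total = sum(counts.values()) or 1
    return {emotion: c / total for emotion, c in counts.items()}

replies_to_false = ["wow unbelievable", "shocking", "gross"]
replies_to_true = ["finally", "expected this", "reliable source"]
print(emotion_profile(replies_to_false))  # skews surprise/disgust
print(emotion_profile(replies_to_true))   # skews anticipation/joy/trust
```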

06:43

At the same time, there was congressional testimony in front of both houses of Congress in the United States, looking at the role of bots in the spread of misinformation. So we looked at this too -- we used multiple sophisticated bot-detection algorithms to find the bots in our data and to pull them out. So we pulled them out, we put them back in and we compared what happens to our measurement. And what we found was that, yes indeed, bots were accelerating the spread of false news online, but they were accelerating the spread of true news at approximately the same rate. Which means bots are not responsible for the differential diffusion of truth and falsity online. We can't abdicate that responsibility, because we, humans, are responsible for that spread.

[Translator's note: bots are accounts driven by software scripts that automate large numbers of simple tasks.]
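The bot analysis is an ablation: detect the bot accounts, recompute the diffusion measures with and without them, and compare. A schematic of that comparison, where the is_bot flag stands in for whatever the detection algorithms output:

```python
# Sketch: the bot ablation. Each record is one retweet event;
# is_bot stands in for a bot-detection algorithm's verdict.
events = [
    {"story": "false", "is_bot": True},  {"story": "false", "is_bot": False},
    {"story": "false", "is_bot": False}, {"story": "true",  "is_bot": True},
    {"story": "true",  "is_bot": False},
]

def spread_by_veracity(events):
    counts = {"true": 0, "false": 0}
    for e in events:
        counts[e["story"]] += 1
    return counts

with_bots = spread_by_veracity(events)
humans_only = spread_by_veracity([e for e in events if not e["is_bot"]])
print("with bots:  ", with_bots)    # both categories inflated by bots
print("humans only:", humans_only)  # the false > true gap persists
```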

07:35

Now, everything that I have told you so far, unfortunately for all of us, is the good news.


07:43

The reason is because it's about to get a whole lot worse. And two specific technologies are going to make it worse. We are going to see the rise of a tremendous wave of synthetic media. Fake video, fake audio that is very convincing to the human eye. And this will be powered by two technologies.


08:06

The first of these is known as "generative adversarial networks." This is a machine-learning model with two networks: a discriminator, whose job it is to determine whether something is true or false, and a generator, whose job it is to generate synthetic media. So the synthetic generator generates synthetic video or audio, and the discriminator tries to tell, "Is this real or is this fake?" And in fact, it is the job of the generator to maximize the likelihood that it will fool the discriminator into thinking the synthetic video and audio that it is creating is actually true. Imagine a machine in a hyperloop, trying to get better and better at fooling us.

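That adversarial game is simple enough to write down. Here is a compressed PyTorch sketch of a GAN training loop, on toy one-dimensional data rather than media, and assuming PyTorch is available:

```python
# Compressed GAN sketch: generator G maps noise to samples; the
# discriminator D scores samples as real (1) or fake (0). Toy 1-D data.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # stand-in for "real media"
    fake = G(torch.randn(64, 8))            # synthetic samples from noise

    # Discriminator step: learn to separate real from fake.
    d_loss = (bce(D(real), torch.ones(64, 1))
              + bce(D(fake.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: maximize the chance D mistakes fakes for real.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# After training, generated samples should drift toward the real mean (3.0).
print(G(torch.randn(5, 8)).detach().squeeze())
```

The generator never sees real data directly; it improves only through the gradient of the discriminator's judgment, which is what makes the arms race self-reinforcing.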

08:51

This, combined with the second technology, which is essentially the democratization of artificial intelligence to the people, the ability for anyone, without any background in artificial intelligence or machine learning, to deploy these kinds of algorithms to generate synthetic media makes it ultimately so much easier to create videos.


09:15

The White House issued a false, doctored video of a journalist interacting with an intern who was trying to take his microphone. They removed frames from this video in order to make his actions seem more punchy. And when videographers and stuntmen and women were interviewed about this type of technique, they said, "Yes, we use this in the movies all the time to make our punches and kicks look more choppy and more aggressive." They then put out this video and partly used it as justification to revoke the White House press pass of Jim Acosta, the reporter. And CNN had to sue to have that press pass reinstated.


10:01

There are about five different paths that I can think of that we can follow to try and address some of these very difficult problems today. Each one of them has promise, but each one of them has its own challenges. The first one is labeling. Think about it this way: when you go to the grocery store to buy food to consume, it's extensively labeled. You know how many calories it has, how much fat it contains -- and yet when we consume information, we have no labels whatsoever. What is contained in this information? Is the source credible? Where is this information gathered from? We have none of that information when we are consuming information. That is a potential avenue, but it comes with its challenges. For instance, who gets to decide, in society, what's true and what's false? Is it the governments? Is it Facebook? Is it an independent consortium of fact-checkers? And who's checking the fact-checkers?

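What might such a label contain? No standard exists, so the schema below is purely hypothetical; its fields simply mirror the questions in the paragraph above (source, credibility, provenance, fact-checks):

```python
# Hypothetical "nutrition label" for a news item. The field names are
# invented for illustration; no such standard actually exists.
from dataclasses import dataclass, field

@dataclass
class ContentLabel:
    source: str                       # who published it
    source_verified: bool             # is the source credible/verified?
    gathered_from: list[str]          # where the information was collected
    fact_checks: dict[str, str] = field(default_factory=dict)  # org -> verdict

label = ContentLabel(
    source="@AP",
    source_verified=True,
    gathered_from=["on-the-record statement", "court filing"],
    fact_checks={"Snopes": "true", "PolitiFact": "mostly true"},
)
print(label)
```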

11:03

Another potential avenue is incentives. We know that during the US presidential election there was a wave of misinformation that came from Macedonia that didn't have any political motive but instead had an economic motive. And this economic motive existed, because false news travels so much farther, faster and more deeply than the truth, and you can earn advertising dollars as you garner eyeballs and attention with this type of information. But if we can depress the spread of this information, perhaps it would reduce the economic incentive to produce it at all in the first place.

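The underlying arithmetic is blunt: ad revenue scales with impressions, so cutting the spread cuts the payoff. A back-of-the-envelope illustration with invented numbers:

```python
# Back-of-the-envelope: why dampening spread removes the incentive.
# Every number below is invented for illustration.
rpm = 2.0                      # ad revenue per 1000 impressions, USD
cost_per_story = 5.0           # cost to fabricate and post one story

def profit(impressions: int) -> float:
    return impressions / 1000 * rpm - cost_per_story

print(profit(100_000))  # viral spread:    $195 profit per story
print(profit(2_000))    # dampened spread: -$1, no longer worth writing
```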

11:41

Third, we can think about regulation, and certainly, we should think about this option. In the United States, currently, we are exploring what might happen if Facebook and others are regulated. While we should consider things like regulating political speech, labeling the fact that it's political speech, making sure foreign actors can't fund political speech, it also has its own dangers. For instance, Malaysia just instituted a six-year prison sentence for anyone found spreading misinformation. And in authoritarian regimes, these kinds of policies can be used to suppress minority opinions and to continue to extend repression.


12:25

The fourth possible option is transparency. We want to know how do Facebook's algorithms work. How does the data combine with the algorithms to produce the outcomes that we see? We want them to open the kimono and show us exactly the inner workings of how Facebook is working. And if we want to know social media's effect on society, we need scientists, researchers and others to have access to this kind of information. But at the same time, we are asking Facebook to lock everything down, to keep all of the data secure.


13:01

So, Facebook and the other social media platforms are facing what I call a transparency paradox. We are asking them, at the same time, to be open and transparent and, simultaneously, secure. This is a very difficult needle to thread, but they will need to thread this needle if we are to achieve the promise of social technologies while avoiding their peril.

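The talk names no specific mechanism for threading that needle, but one family of techniques often proposed for exactly this tension is differential privacy: researchers get aggregate answers with calibrated noise, never raw records. A minimal sketch of the Laplace mechanism, offered as an illustration rather than anything the speaker endorses:

```python
# Sketch: answering a researcher's aggregate query with the Laplace
# mechanism, so no single user's record is exposed. Differential
# privacy is our illustration here; the talk names no specific tool.
import numpy as np

rng = np.random.default_rng(42)
shared_fake_story = rng.integers(0, 2, 10_000)  # private per-user flags

def private_count(data, epsilon=0.5):
    """Release a count plus Laplace noise of scale sensitivity/epsilon
    (a counting query has sensitivity 1)."""
    true_count = int(data.sum())
    return true_count + rng.laplace(scale=1.0 / epsilon)

print("true count:    ", shared_fake_story.sum())
print("released count:", round(private_count(shared_fake_story)))
```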

13:25

The final thing that we could think about is algorithms and machine learning. Technology devised to root out and understand fake news, how it spreads, and to try and dampen its flow. Humans have to be in the loop of this technology, because we can never escape that underlying any technological solution or approach is a fundamental ethical and philosophical question about how do we define truth and falsity, to whom do we give the power to define truth and falsity and which opinions are legitimate, which type of speech should be allowed and so on. Technology is not a solution for that. Ethics and philosophy is a solution for that.

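"Humans in the loop" has a concrete reading: let the model act only on its confident calls and route the uncertain middle band to human reviewers. A sketch with a TF-IDF text classifier on toy data, assuming scikit-learn is installed; the thresholds are arbitrary:

```python
# Sketch: a fake-news classifier that defers to humans when unsure.
# Training data is toy; a real system would train on labeled corpora.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "shocking secret cure doctors hate", "you won't believe this miracle",
    "senate passes budget bill", "quarterly earnings match forecasts",
]
train_labels = [1, 1, 0, 0]  # 1 = fake, 0 = true

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(train_texts, train_labels)

def triage(text, low=0.3, high=0.7):
    p_fake = clf.predict_proba([text])[0][1]
    if p_fake >= high:
        return "downrank"            # machine is confident it's fake
    if p_fake <= low:
        return "allow"               # machine is confident it's fine
    return "send to human reviewer"  # the loop the talk insists on

print(triage("miracle cure doctors hate"))
print(triage("senate passes quarterly bill"))
```

The deferral band is where the ethical questions the speaker raises live: someone still has to decide what the humans reviewing that band count as true.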

14:11

Nearly every theory of human decision making, human cooperation and human coordination has some sense of the truth at its core. But with the rise of fake news, the rise of fake video, the rise of fake audio, we are teetering on the brink of the end of reality, where we cannot tell what is real from what is fake. And that's potentially incredibly dangerous.


14:39

We have to be vigilant in defending the truth against misinformation. With our technologies, with our policies and, perhaps most importantly, with our own individual responsibilities, decisions, behaviors and actions.


14:58

Thank you very much.


14:59

(Applause)

