Are We Living in the Age of “Info-Determinism”?
2024-09-01 08:34

In the early two-thousands, Martin Gurri, a media analyst at the Central Intelligence Agency, began considering the political implications of the Internet. Gurri worked in the C.I.A.’s Open Source Center, which was tasked with analyzing publicly available information such as newspapers, magazines, and reports. With the advent of the Internet, focusing exclusively on such sources began to feel old-fashioned. Vast numbers of people were writing online, and the views they shared could tank stocks, sway elections, or even spark revolutions. Gurri later wrote: “I realized that I couldn’t restrict my search for evidence to the familiar authoritative sources without ignoring a near-infinite number of new sources. I was left in a state of uncertainty—a permanent condition for analysis under the new dispensation.”
In 2014, Gurri described the consequences of this uncertainty in a self-published book, “The Revolt of the Public and the Crisis of Authority.” In the old days, he argued, it had been possible to read a newspaper or watch a newscast and feel that you had a good grasp of “the news.” The Internet, however, created the sense that there was always more to know, and this was “an acid, corrosive to authority.”
Because everyone could read only a slice of the Internet, the traditional mass audience was splitting into “vital communities”: groups large and small, gathered organically around a shared interest or theme. These communities, Gurri thought, had a characteristic mood: they revelled in the destruction of received opinion and the disassembly of arguments from authority. “Every expert is surrounded by a horde of amateurs eager to pounce on every mistake and mock every unsuccessful prediction or policy,” Gurri wrote. And yet, “the public opposes, but does not propose.”
A.I. Puts Humans in “Formation”
When Gurri published his book a decade ago, the salient change in the world of information was a dramatic increase in the number of people empowered to speak. In his new book, “Nexus: A Brief History of Information Networks from the Stone Age to AI,” Yuval Noah Harari looks forward to the next few decades, when many of the voices we encounter online may come from bots. “What we are talking about is potentially the end of human history,” he writes. “Not the end of history, but the end of its human-dominated part.” A.I. systems could quickly “eat the whole of human culture—everything we have created over thousands of years—digest it, and begin to gush out a flood of new cultural artifacts.”
For us to grasp where these developments might take us, Harari believes, it helps to adopt a new definition of “information.” We’re used to thinking of information as representational: a piece of information represents reality, and might be true or false. But another way to look at information is to see it as a “social nexus” capable of putting people into “formation.” From this perspective, it doesn’t matter whether the information is true. The Bible has shaped the course of history because its stories have persuaded countless people to cooperate. Official records describe only limited aspects of our lives, but they create relationships between governments and citizens. Taylor Swift’s songs have created “Swifties.” When new information appears, new social relationships spring up.
What will happen when A.I. systems begin pulling people into formation? For a glimpse of the possible consequences, we can look at the Internet as it was before A.I. Harari cites a 2022 study by the digital-intelligence firm Similarweb, which showed that twenty to thirty per cent of the content on Twitter was posted by bots, even though those bot accounts made up only five per cent of the platform’s users. It is no exaggeration to say that a platform like Twitter is itself a kind of bot; its algorithms automatically decide what users see. On such a platform, then, swarms of bots interact with a mega-bot, while humans read and respond.
If this phenomenon were amplified, and if the bots and algorithms could hold intelligent conversations, the likely outcome, in Harari’s words, is “digital anarchy.” Conversations among machines will shape conversations about humans. He writes: “The public sphere will be flooded by computer-generated fake news, citizens will not be able to tell whether they are having a debate with a human friend or a manipulative machine, and no consensus will remain about the most basic rules of discussion or the most basic facts.”
The Challenge of a World of “Unlimited Information”
To cope with this possible world, Harari advocates developing a robust “computer politics,” through which democratic societies might safeguard their public spheres. He argues that we should ban computers from impersonating humans and require A.I. systems to exercise a fiduciary duty toward their users. Regulatory agencies should be charged with evaluating the most important algorithms, and individuals should have a “right to an explanation” when A.I. systems make decisions that shape their lives. Yet he admits that, even if such reforms are put in place, there will be reasons to doubt whether democracy is compatible with the structure of twenty-first-century information networks. A form of government that flourished in one information epoch may not thrive in the next.
Call this info-determinism: the belief that the ways information flows through the world actually form a kind of web in which we are ensnared and from which we cannot escape. One reason to take this view seriously is that it is not actually new. In 1999, in the novel “All Tomorrow’s Parties,” the novelist William Gibson had a character reflect on the fluidity of things in a world of unlimited information:
He had been taught, of course, that history, along with geography, was dead. That history in the older sense was an historical concept. History in the older sense was narrative, stories we told ourselves about where we’d come from and what it had been like, and those narratives were revised by each new generation, and indeed always had been. History was plastic, was a matter of interpretation. The digital had not so much changed that as made it too obvious to ignore.
The key step is the last one. As the density, speed, and fluidity of information keep increasing, we have become increasingly conscious of the role it plays in our lives, and increasingly suspicious of it.
The science-fiction novel “All Tomorrow’s Parties” was the last book of a trilogy whose story begins sometime around 2006. In real life, 2006 was the year Twitter launched; the year Facebook opened itself to people who weren’t college students and started its news feed; and the year Google bought YouTube and Time magazine’s “Person of the Year” was “You”: online individuals who, massed together, made for “the many wresting power from the few.” “We are so ready for it,” the novelist Lev Grossman wrote in that issue.
Back then, info-determinism was exciting. Today, it feels like a challenge we must surmount, or else.

Are We Living in the Age of Info-Determinism?

Increasingly, our networks seem to be steering our history in ways we don’t like and can’t control.

By Joshua Rothman

In the early two-thousands, Martin Gurri, a media analyst at the
Central Intelligence Agency, began considering the political
implications of the Internet. Gurri worked in the Open Source
Center, a part of the C.I.A. tasked with analyzing publicly
available information, such as newspapers, magazines, and reports.
With the advent of the Web, focussing exclusively on such sources
had begun to feel old-fashioned. Vast numbers of people were
writing online, and the ideas that they shared could tank stocks,
sway elections, or spark revolutions. “I realized that I couldn’t
restrict my search for evidence to the familiar authoritative
sources without ignoring a near-infinite number of new sources,”
Gurri later wrote. “I was left in a state of uncertainty—a
permanent condition for analysis under the new dispensation.”

In 2014, Gurri described the consequences of this uncertainty in a
self-published book called “The Revolt of the Public and the Crisis
of Authority.” (An updated edition appeared in 2018.) In the old
days, he argued, it had been possible to read a newspaper or watch
a newscast and feel that you’d got a good grasp of “the news.” The
Internet, however, created the sense that there was always more to
know—and this was “an acid, corrosive to authority.” Now “every
presidential statement, every CIA assessment, every investigative
report by a great newspaper, suddenly acquired an arbitrary aspect,
and seemed grounded in moral predilection rather than intellectual
rigor.”

Meanwhile, because everyone could read only a slice of the
Internet, the traditional mass audience was splitting into “vital
communities”—“groups of wildly disparate size gathered organically
around a shared interest or theme.” These communities, Gurri
thought, had a characteristic mood: they revelled in the
destruction of received opinion and the disassembly of arguments
from authority. “Every expert is surrounded by a horde of amateurs
eager to pounce on every mistake and mock every unsuccessful
prediction or policy,” Gurri wrote. And yet, “the public opposes,
but does not propose.” Demolishing ideas is easy in a subreddit;
crafting new ones there is mostly beside the point.

The way those in power responded to these dynamics was troubling.
Their general strategy, Gurri thought, was to wish that the
Internet and its “unruly public” would go away, and that the
halcyon days of authoritative hierarchy would return. Leaders
lectured Internet users about media literacy and pushed for the
tweaking of algorithms. Internet users, for their part, grew
increasingly uninterested in taking leaders, institutions, and
experts seriously. For more and more people, a random YouTuber
seemed preferable to a credentialled expert; anyone representing
“the system” was intrinsically untrustworthy. As the powerful and
the public came to regard one another with contempt, they created
“a perpetual feedback loop of failure and negation,” Gurri wrote.
Nihilism—“the belief that the status quo is so abhorrent that
destruction will be a form of progress”—became widespread. It could
be expressed substantively (say, by rioting in the Capitol) or
discursively, by asserting your right to say and believe anything
you want, no matter how absurd.

I first read “The Revolt of the Public” in 2016, after Donald Trump
won the Presidency, because many bloggers I followed described it
as prescient. I disagreed with Gurri, who has a libertarian
sensibility, on many points, including the character of the Obama
Presidency and the nature of the Occupy movement, and felt that the
book downplayed the degree to which the American left has remained
largely allied with its institutions while the right has not. But I
also found its analysis illuminating, and I’ve thought about the
book with drumbeat regularity ever since. Recently, a friend told
me about a relative of his who maintained that some public schools
had installed “human litter boxes” for the convenience of students
who “identify as cats.” “How could he seriously believe something
like that?” my friend asked. Remembering Gurri, I wondered if
“believing” was the wrong concept to apply to such a case. Saying
that you believe in human litter boxes might be better seen as a
way of signalling your rejection of discursive authority. It’s like
saying, “No one can tell me what to think.”

How can a society function when the rejection of knowledge becomes
a political act? Gurri offers a few suggestions, most aimed at
healing the breach between institutions and the public: government
agencies might use technology to become more transparent, for
example, and disillusioned voters might adopt more realistic
expectations about how much leaders can improve their lives. Yet
the main goal of his book isn’t to fix the problem (it may not be
fixable); he just wants to describe it. Short of some new and vast
transformation, it’s hard to see the Internet becoming a venue for
consensus-building; similarly, it’s difficult to imagine a world in
which citizens become reënchanted with the media and return to
trusting authority figures. “All things equal, the system will
continue to bleed away legitimacy,” he concludes. “The mass
extinction of stories of legitimacy leaves no margin for error, no
residual store of public good will. Any spark can blow up any
political system at any time, anywhere.”

A decade ago, when Gurri published “The Revolt of the Public,” the
salient change in the world of information was a startling increase
in the number of human voices empowered to speak at once. Yuval
Noah Harari, in his new book “Nexus: A Brief History of Information
Networks from the Stone Age to AI,” looks forward to the next few
decades, when many of the voices we encounter online may be
automated. “What we are talking about is potentially the end of
human history,” he writes. “Not the end of history, but the end of
its human-dominated part.” A.I. systems could quickly “eat the
whole of human culture—everything we have created over thousands of
years—digest it, and begin to gush out a flood of new cultural
artifacts.” He goes on:

We live cocooned by culture, experiencing reality through a
cultural prism. Our political views are shaped by the reports of
journalists and the opinions of friends. Our sexual habits are
influenced by what we hear in fairy tales and see in movies. Even
the way we walk and breathe is nudged by cultural traditions, such
as the military discipline of soldiers and the meditative exercises
of monks. Until very recently, the cultural cocoon we lived in was
woven by other humans. Going forward, it will be increasingly
designed by computers.

For us to grasp where these developments might take us, Harari
believes it’s helpful to adopt a novel definition of “information.”
We’re used to thinking of information as being
representational—that is, a piece of information represents
reality, and might be true or false. But another way to look at
information is to see it as a “social nexus” capable of putting
people into “formation.” From this perspective, it doesn’t matter
whether information is true or not. The Bible has shaped the course
of history because the stories it contains have persuaded billions
of people to coöperate. Bureaucratic records describe only limited
aspects of our lives, but they have created relationships between
governments and citizens. Taylor Swift’s songs have conjured
Swifties out of the mass of humanity. When new information becomes
available, new social relationships spring up.

What will happen when A.I. systems begin pulling people into
formation? We can get a glimpse of the possible consequences by
looking at what’s already happened on the pre-A.I. Internet. Harari
cites a 2022 study, conducted by the digital-intelligence firm
Similarweb, which showed that between twenty and thirty per cent of
the content on Twitter was posted by bots, which in turn
constituted only five per cent of that platform’s user base. It’s
no stretch to say that a platform like Twitter is in itself a kind
of bot; its algorithms decide, in an automated fashion, what users
should see. On such a platform, therefore, swarms of bots interact
with a mega-bot, while human beings read and respond alongside. If
this phenomenon were amplified—and if the bots and algorithms were
capable of holding intelligent conversations—the likely outcome is
“digital anarchy,” as Harari puts it. Conversations among machines
will shape conversations about humans. “The public sphere will be
flooded by computer-generated fake news, citizens will not be able
to tell whether they are having a debate with a human friend or a
manipulative machine, and no consensus will remain about the most
basic rules of discussion or the most basic facts.”

To prepare for this possible world, Harari advocates the
development of a robust “computer politics,” through which
democratic societies might safeguard their public spheres. Among
other things, he argues, we should ban the impersonation of people
by computers and require A.I. systems to exercise a fiduciary duty
toward their users. Regulatory agencies should be charged with
evaluating the most important algorithms, and individuals should
have a “right to an explanation” when A.I. systems make decisions
that shape their lives. And yet he admits that, even if such
reforms are put into place, there will be reasons to doubt “the
compatibility of democracy with the structure of
twenty-first-century information networks.” Democracy on a small
scale is easy; it’s no problem for the members of a club or the
residents of a small town to elect a new leader or mayor. But
democracy on a mass scale depends on mass institutions—mass media,
mass education, mass culture—that seem likely to fracture or mutate
with the arrival of A.I. The forms of government that flourished in
one info-epoch may not thrive in the next.

Call it info-determinism: the belief that the ways that information
flows through the world are actually a kind of web in which we’re
ensnared. One reason to take this view seriously is that it’s
actually pretty old. In 1999, in a novel called “All Tomorrow’s
Parties,” the novelist William Gibson imagined a character
reflecting on the fluidity of things in a world of unlimited
information:

He had been taught, of course, that history, along with geography,
was dead. That history in the older sense was an historical
concept. History in the older sense was narrative, stories we told
ourselves about where we’d come from and what it had been like, and
those narratives were revised by each new generation, and indeed
always had been. History was plastic, was a matter of
interpretation. The digital had not so much changed that as made it
too obvious to ignore.

The key step is the last one. As the density, pace, and fluidity of
information have increased, we’ve become more conscious of the role
it plays in our lives—and more suspicious of it.

“All Tomorrow’s Parties” was near-future science fiction: the
trilogy of novels it concluded begins sometime around 2006. In real
life, 2006 was the year that Twitter launched, and in which
Facebook opened itself to people who weren’t college students and
created its news feed; it was also the year in which Google bought
YouTube, and in which Time magazine’s “Person of the Year” was
“You”—the online individual, which, massed together, made for “the
many wresting power from the few.” “We are so ready for it,” the
novelist Lev Grossman wrote, in that issue. “We’re ready to balance
our diet of predigested news with raw feeds from Baghdad and Boston
and Beijing. You can learn more about how Americans live just by
looking at the backgrounds of YouTube videos—those rumpled bedrooms
and toy-strewn basement rec rooms—than you could from 1,000 hours
of network television.”

Back then, info-determinism was exciting.
Today, it feels like a challenge which we must surmount, or
else.