
[Nobel Laureate Wilczek's Popular Science Column] The Error in Error Correction

KouShare 蔻享学术 2023-03-06






Statement: The print edition of this column appears monthly in the magazine 《环球科学》; with the author's authorization, the online electronic edition is first published by KouShare (蔻享学术) on its WeChat public account.


Frank Wilczek

Frank Wilczek is a professor of physics at the Massachusetts Institute of Technology and one of the founders of quantum chromodynamics. He was awarded the 2004 Nobel Prize in Physics for the discovery of asymptotic freedom in quantum chromodynamics.


Author | Frank Wilczek   Translation | 胡风、梁丁当

In the early stages of development, we often had to solve problems caused by component failures. Technological progress has reduced how often failures occur, but how we cope with faults and errors still matters.

In the days when top-of-the-line computers contained rooms full of vacuum tubes, their designers had to keep the tubes' limitations carefully in mind. They were prone to degrade over time, or suddenly fail altogether. Partly inspired by this problem, John von Neumann and others launched a new field of investigation, epitomized by von Neumann's 1956 paper "Probabilistic Logics and the Synthesis of Reliable Organisms from Unreliable Components." In it, he wrote, "Our present treatment of error is unsatisfactory and ad hoc. It is the author's conviction, voiced over many years, that error should be treated by thermodynamical methods." He added, "The present treatment falls far short of achieving this."

Thermodynamics and statistical mechanics are the powerful methods physics has developed to derive the properties of bodies, such as their temperature and pressure, from the basic behavior of their atoms and molecules, using probability. Von Neumann hoped to do something comparable for the complex basic units, analogous to atoms, that process information.
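As a rough numerical illustration of that statistical idea (a toy model invented here, not anything from the column), the sketch below averages many random microscopic "kicks" into a single macroscopic number. Each kick is unpredictable, yet the average is sharply defined, and its fluctuations shrink as the number of contributions grows.

```python
import random

def pressure_estimate(n_molecules, rng):
    """Toy picture of statistical mechanics: each molecule delivers a random
    momentum kick to the container wall (arbitrary units); the macroscopic
    'pressure' is just the average kick."""
    return sum(rng.gauss(1.0, 0.5) for _ in range(n_molecules)) / n_molecules

rng = random.Random(42)
for n in (100, 10_000, 1_000_000):
    print(f"{n:>9} molecules -> pressure ~ {pressure_estimate(n, rng):.4f}")
# Individual kicks are unpredictable, but the average settles near 1.0,
# with relative fluctuations shrinking roughly like 1 / sqrt(n).
```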

The emerging theory was short-circuited, so to speak, by developments in semiconductor technology and molecular biology. Solid-state transistors, printed circuits and chips are models of reliability when assembled meticulously enough. With their emergence, the focus of engineers turned from coping with error to avoiding it. The most basic processes of biology share that focus: As cells read out the information stored in DNA, they do rigorous proofreading and error correction, to nip potential errors in the bud.

But the old issues are making a comeback as scientists push the boundaries of technology and ask more ambitious questions. We can make transistors smaller and faster, and relax manufacturing requirements, if we can compromise on their reliability. And we will only understand the larger-scale, sloppier processes of biology, such as assembling brains as opposed to assembling proteins, if we take on von Neumann's challenge.

A lot of progress in overcoming processing errors has been made since 1956. The internet is designed to work around nodes that malfunction or go offline. (Early research aimed to ensure the survival of communication networks following a nuclear exchange.) Artificial neural nets can do impressive calculations smoothly despite imprecision in their parts, using a sort of probabilistic logic in which each unit takes averages over inputs from many others. 
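The routing idea can be made concrete with a minimal sketch (the six-node topology below is invented for illustration, not a description of any real network): a breadth-first search that simply skips failed nodes still finds a path whenever one exists.

```python
from collections import deque

# Invented toy network: node -> neighbors
LINKS = {
    "A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D", "E"],
    "D": ["B", "C", "F"], "E": ["C", "F"], "F": ["D", "E"],
}

def route(src, dst, failed=frozenset()):
    """Breadth-first search that ignores failed nodes."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in LINKS[path[-1]]:
            if nxt not in seen and nxt not in failed:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # destination unreachable

print(route("A", "F"))                     # ['A', 'B', 'D', 'F']
print(route("A", "F", failed={"B", "D"}))  # reroutes: ['A', 'C', 'E', 'F']
```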

We've also come to understand a lot more about how human brains get wired up and process information: They are made from vast numbers of biological cells that can get miswired, die, or malfunction in different ways, but usually still manage to function impressively well. Blockchains and (so far, mostly notional) topological quantum computers systematically distribute information within a resilient web of possibly weak components. The contribution of failed components can be filled in, similar to how our visual perception "fills in" the retina's famous blind spot. 
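A minimal sketch of that "filling in", assuming nothing fancier than a repetition code with majority voting (the 20% failure rate and the copy counts below are arbitrary): one bit spread across many unreliable components can be recovered almost perfectly, even though each individual component is wrong fairly often.

```python
import random

def store_and_recover(bit, n_copies, p_fail, rng):
    """Spread one bit across n unreliable components; each copy flips with
    probability p_fail. Recover the stored bit by majority vote."""
    copies = [bit ^ (rng.random() < p_fail) for _ in range(n_copies)]
    return int(sum(copies) * 2 > n_copies)

rng = random.Random(0)
trials = 10_000
for n in (1, 3, 9, 21):
    ok = sum(store_and_recover(1, n, 0.2, rng) == 1 for _ in range(trials))
    print(f"{n:>2} copies: bit recovered correctly in {ok / trials:.1%} of trials")
# One copy is wrong about 20% of the time; a majority vote over 21 copies
# is almost never wrong.
```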

Von Neumann's concern with unreliable parts fits within his vision of self-replicating machines. To reproduce themselves, such automatons, like the biological cells and organisms that inspired them, would need to tap into an unpredictable, unreliable environment for their building material. This is the engineering of the future. Plausibly, it is the path to some of science fiction's wildest visions: terraforming of planets, giant brains, and more.

You won't get there without making, and working around, lots of mistakes. Ironically, if semiconductor technology hadn’t been so good, we might have taken on that issue sooner, and be further along today.


Editor: 黄琦




Copyright notice: Unauthorized reproduction or excerpting in any form of media is strictly prohibited, as is reposting to any platform outside WeChat.


This original article is first published on KouShare and represents only the author's views, not the position of KouShare.

For reprint permission, please leave a message in the backend of the 'KouShare' (蔻享学术) WeChat public account.

