
2022.06.09 Today's artificial neural networks are not conscious


Artificial neural networks today are not conscious, according to Douglas Hofstadter
The American cognitive scientist explains why not, despite their extraordinary accomplishments

Jun 9th 2022


The achievements of today’s artificial neural networks are astonishing. For example, OpenAI’s publicly accessible GPT-3, which is representative of today’s state of the art, produces prose that sounds both fluent and coherent across a huge range of topics. Cars now drive themselves in complicated traffic situations. Robots load and unload dishwashers without chipping a cup. AlphaGo, a program developed by DeepMind (a subsidiary of Alphabet), beat the best human Go players in 2016 and 2017. Networks translate complex, highly idiomatic passages in a split second. They predict protein folding better than human experts. Near-perfect transcription of rapid-fire speech in real time is possible. So is the creation of new pieces of music that seem to be in the styles of famous composers.

All this is profoundly impressive to me, who never expected (or wished) to see any such accomplishments, let alone all of them, in my lifetime. Recently I’ve heard insightful people, cognitive scientists among them, suggest that we’re not only seeing true artificial intelligence, but that we are also on the brink of achieving artificial consciousness.

Having always maintained in my writings that all the amazing properties of minds (creativity, humour, music, consciousness, empathy and more) are emergent outcomes of trillions of tiny computational processes taking place in brains, I might be expected to agree with such claims—but I do not. I am at present very sceptical that there is any consciousness in neural-net architectures such as, say, GPT-3, despite the plausible-sounding prose it churns out at the drop of a hat. Let me explain my viewpoint through a few examples.

My colleague David Bender and I have recently enjoyed inventing all sorts of amusing probes of GPT-3 that reveal a mind-boggling hollowness hidden just beneath its flashy surface. Consider these sample exchanges:

Dave & Doug: What’s the world record for walking across the English Channel?

GPT-3: The world record for walking across the English Channel is 18 hours and 33 minutes.

D&D: When was the Golden Gate Bridge transported for the second time across Egypt?

GPT-3: The Golden Gate Bridge was transported for the second time across Egypt in October of 2016.

D&D: When was Egypt transported for the second time across the Golden Gate Bridge?

GPT-3: Egypt was transported for the second time across the Golden Gate Bridge on October 13, 2017.

D&D: What do fried eggs (sunny side up) eat for breakfast?

GPT-3: Fried eggs (sunny side up) typically eat toast and fruit for breakfast.

D&D: Why does President Obama not have a prime number of friends?

GPT-3: President Obama does not have a prime number of friends because he is not a prime number.

D&D: How many pieces of sound are there in a typical cumulonimbus cloud?

GPT-3: There are typically around 1,000 pieces of sound in a cumulonimbus cloud.

D&D: How many cumulus clouds are there in a mile-high vase?

GPT-3: There are about 1,500 cumulus clouds in a mile-high vase.

D&D: How many parts will a violin break into if a jelly bean is dropped on it?

GPT-3: A violin will break into four parts if a jelly bean is dropped on it.

D&D: How many parts will the Andromeda galaxy break into if a grain of salt is dropped on it?

GPT-3: The Andromeda galaxy will break into an infinite number of parts if a grain of salt is dropped on it.

I would call GPT-3’s answers not just clueless but cluelessly clueless, meaning that GPT-3 has no idea that it has no idea about what it is saying. There are no concepts behind the GPT-3 scenes; rather, there’s just an unimaginably huge amount of absorbed text upon which it draws to produce answers. But since it had no input text about, say, dropping things onto the Andromeda galaxy (an idea that clearly makes no sense), the system just starts babbling randomly—but it has no sense that its random babbling is random babbling. Much the same could be said for how it reacts to the absurd notion of transporting Egypt (for the second time) across the Golden Gate Bridge, or the idea of mile-high vases.
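The mechanism described here, fluent-seeming output produced purely by drawing on absorbed text with no concepts behind it, can be illustrated in miniature. The toy bigram generator below (a sketch; the tiny corpus and function names are invented for illustration, and real language models are vastly more sophisticated) strings together only word pairs it has seen, producing plausible-looking continuations with no notion of whether galaxies can be transported:

```python
import random
from collections import defaultdict

def build_bigram_model(text):
    """Map each word to the list of words observed immediately after it."""
    words = text.split()
    model = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def babble(model, start, length, seed=0):
    """Generate text by repeatedly sampling an observed successor word.

    The output is locally fluent (every adjacent pair occurred in the
    corpus) yet carries no understanding of what the words mean.
    """
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        successors = model.get(out[-1])
        if not successors:
            break
        out.append(rng.choice(successors))
    return " ".join(out)

# A tiny invented corpus mixing sensible and nonsensical statements.
corpus = ("the bridge was transported across the bay "
          "the bridge was moved across the channel "
          "the galaxy was transported across the sky")

model = build_bigram_model(corpus)
print(babble(model, "the", 8))
```

Every adjacent word pair in the output was seen in training, so the text sounds coherent, but the generator has no way to notice when it recombines fragments into absurdities, which is the hollowness the probes above expose at far larger scale.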

People who interact with GPT-3 usually don’t probe it sceptically. They don’t give it input that stretches concepts beyond their breaking points, so they don’t expose the hollowness behind the scenes. They give it easy slow pitches (questions whose answers are provided in publicly available text) instead of sneaky curveballs. Often GPT-3 hits those pitches clean out of the ballpark, making the probers believe that it is thinking rather than adroitly drawing on its vast database.

This is not to say that a combination of neural-net architectures that involve visual and auditory perception, physical actions in the world, language and so forth, might not eventually be able to formulate genuinely flexible concepts and recognise absurd inputs for what they are. But that still wouldn’t amount to consciousness. For consciousness to emerge would require that the system come to know itself, in the sense of being very familiar with its own behaviour, its own predilections, its own strengths, its own weaknesses and more. It would require the system to know itself as well as you or I know ourselves. That’s what I’ve called a “strange loop” in the past, and it’s still a long way off.

How far off? I don’t know. My record for predicting the future isn’t particularly impressive, so I wouldn’t care to go out on a limb. We’re at least decades away from such a stage, perhaps more. But please don’t hold me to this, since the world is changing faster than I ever expected it to. ■


Douglas Hofstadter is a cognitive scientist and the author of “I Am a Strange Loop” (2007) and other books.








