
Why Elon Musk Fears Artificial Intelligence




Elon Musk is usually far from a technological pessimist. From electric cars to Mars colonies, he’s made his name by insisting that the future can get here faster.


But when it comes to artificial intelligence, he sounds very different. Speaking at MIT in 2014, he called AI humanity’s “biggest existential threat” and compared it to “summoning the demon.”

Comment from Joseph Paul (Mar 26, 2019, 9:36 PM): Musk made a very interesting connection between advancement in AI and summoning the demon. The more we push, the deeper we wade into something we don’t quite fully understand.

He reiterated those fears in an interview published Friday with Recode’s Kara Swisher, though with a little less apocalyptic rhetoric. “As AI gets probably much smarter than humans, the relative intelligence ratio is probably similar to that between a person and a cat, maybe bigger,” Musk told Swisher. “I do think we need to be very careful about the advancement of AI.”


To many people — even many machine learning researchers — an AI that surpasses humans by as much as we surpass cats sounds like a distant dream. We’re still struggling to solve even simple-seeming problems with machine learning. Self-driving cars have an extremely hard time under unusual conditions because many things that come instinctively to humans — anticipating the movements of a biker, identifying a plastic bag flapping in the wind on the road — are very difficult to teach a computer. Greater-than-human capabilities seem a long way away.


Musk is hardly alone in sounding the alarm, though. AI scientists at Oxford and at UC Berkeley, luminaries like Stephen Hawking, and many of the researchers publishing groundbreaking results agree with Musk that AI could be very dangerous. They are concerned that we’re eagerly working toward deploying powerful AI systems, and that we might do so under conditions that are ripe for dangerous mistakes.


If we take these concerns seriously, what should we be doing? People concerned with AI risk vary enormously in the details of their approaches, but agree on one thing: We should be doing more research.


Musk wants the US government to spend a year or two understanding the problem before considering how to solve it. He expanded on this idea in the interview with Swisher:


Musk: My recommendation for the longest time has been consistent. I think we ought to have a government committee that starts off with insight, gaining insight. Spends a year or two gaining insight about AI or other technologies that are maybe dangerous, but especially AI. And then, based on that insight, comes up with rules in consultation with industry that give the highest probability for a safe advent of AI.


Swisher: You think that — do you see that happening?


Musk: I do not.


Swisher: You do not. And do you then continue to think that Google —


Musk: No, to the best of my knowledge, this is not occurring.


Swisher: Do you think that Google and Facebook continue to have too much power in this? That’s why you started OpenAI and other things.


Musk: Yeah, OpenAI was about the democratization of AI power. So that’s why OpenAI was created as a nonprofit foundation, to ensure that AI power ... or to reduce the probability that AI power would be monopolized.


Swisher: Which it’s being?


Musk: There is a very strong concentration of AI power, and especially at Google/DeepMind. And I have very high regard for Larry Page and Demis Hassabis, but I do think that there’s value to some independent oversight.


From Musk’s perspective, here’s what is going on: Researchers — especially at Alphabet’s Google DeepMind, the AI research organization that developed AlphaGo and AlphaZero — are eagerly working toward complex and powerful AI systems. And since some people aren’t convinced that AI is dangerous, the organizations working on it aren’t being held to high enough standards of accountability and caution.


“We don’t want to learn from our mistakes” with AI

Comment from Joseph Paul (Mar 26, 2019, 9:38 PM): There are a lot of things we can do, fail at, and then learn from our mistakes. The difference between those things and AI is that with them, we know the possible consequences going in. With AI, we don’t know the consequences.

Max Tegmark, a physics professor at MIT, expressed many of the same sentiments in a conversation last year with journalist Maureen Dowd for Vanity Fair: “When we got fire and messed up with it, we invented the fire extinguisher. When we got cars and messed up, we invented the seat belt, airbag, and traffic light. But with nuclear weapons and A.I., we don’t want to learn from our mistakes. We want to plan ahead.”


In fact, if AI is powerful enough, we might need to plan ahead. Nick Bostrom, at Oxford, made the case in his 2014 book Superintelligence that a badly designed AI system would be impossible to correct once deployed: “once unfriendly superintelligence exists, it would prevent us from replacing it or changing its preferences. Our fate would be sealed.”


In that respect, AI deployment is like a rocket launch: Everything has to be done exactly right before we hit “go,” as we can’t rely on our ability to make even tiny corrections later. Bostrom makes the case in Superintelligence that AI systems could rapidly develop unexpected capabilities — for example, an AI system that is as good as a human at inventing new machine-learning algorithms and automating the process of machine-learning work could quickly become much better than a human.


That has many people in the AI field thinking that the stakes could be enormous. In a conversation with Musk and Dowd for Vanity Fair, Y Combinator’s Sam Altman said, “In the next few decades we are either going to head toward self-destruction or toward human descendants eventually colonizing the universe.”

Comment from Joseph Paul (Mar 26, 2019, 9:44 PM): There is the possibility that the outcome of AI will be enormously positive, but it could also be extremely negative. Even those in the field have taken note of the stakes when it comes to AI.

“Right,” Musk concurred.


In context, then, Musk’s AI concerns are not an out-of-character streak of technological pessimism. They stem from optimism — a belief in the exceptional transformative potential of AI. It’s precisely the people who expect AI to make the biggest splash who’ve concluded that working to get ahead of it should be one of our urgent priorities.


DMU Timestamp: March 07, 2019 02:52
