Do You Trust this Computer? (2018)

What we're on the brink of is
a world of increasingly intense,
sophisticated
artificial intelligence.
Man: Technology is evolving
so much faster than our society
has the ability
to protect us as citizens.
The robots are coming, and they
will destroy our livelihoods.
You have a networked
intelligence that watches us,
knows everything about us,
and begins to try to change us.
Man #2: Twitter has become the
world's number-one news site.
Man #3:
Technology is never good or bad.
It's what we do
with the technology.
Eventually, millions of people
are gonna be thrown out of jobs
because their skills
are going to be obsolete.
Woman: Mass unemployment...
greater inequalities,
even social unrest.
Man #4: Regardless of whether
to be afraid or not afraid,
the change is coming,
and nobody can stop it.
Man #5: We've invested
huge amounts of money,
and so it stands to reason
that the military,
with their own desires,
are gonna start to use
these technologies.
Man #6:
Autonomous weapons systems
could lead to a global arms race
to rival the Nuclear Era.
Man #7:
We know what the answer is.
They'll eventually
be killing us.
Man #8:
These technology leaps
are gonna yield
incredible miracles...
and incredible horrors.
Man #9: We created it,
so I think, as we move forward,
this intelligence
will contain parts of us.
And I think the question is --
Will it contain
the good parts...
or the bad parts?
Sarah: The survivors
called the war "Judgment Day."
They lived only to face
a new nightmare --
the war against the machines.
Aah!
Nolan: I think
we've completely fucked this up.
I think Hollywood has managed
to inoculate the general public
against this question --
the idea of machines
that will take over the world.
Open the pod bay doors, HAL.
I'm sorry, Dave.
I'm afraid I can't do that.
HAL?
Nolan:
We've cried wolf enough times...
HAL?
...that the public
has stopped paying attention,
because it feels like
science fiction.
Even sitting here talking
about it right now,
it feels a little bit silly,
a little bit like,
"Oh, this is an artifact
of some cheeseball movie."
The WOPR spends all its time
thinking about World War III.
But it's not.
The general public is about
to get blindsided by this.
As a society and as individuals,
we're increasingly surrounded
by machine intelligence.
We carry this pocket device
in the palm of our hand
that we use to make
a striking array
of life decisions right now,
aided by a set
of distant algorithms
that we have no understanding of.
We're already pretty jaded
about the idea
that we can talk to our phone,
and it mostly understands us.
Woman: I found quite a number
of action films.
Five years ago -- no way.
Markoff: Robotics.
Machines that see and speak...
Woman: Hi, there.
Markoff: ...and listen.
All that's real now.
And these technologies
are gonna fundamentally
change our society.
Thrun: Now we have this great
movement of self-driving cars.
Driving a car autonomously
can move people's lives
into a better place.
Horvitz: I've lost
a number of family members,
including my mother,
my brother and sister-in-law
and their kids,
to automobile accidents.
It's pretty clear we could
almost eliminate car accidents
with automation.
30,000 lives in the U.S. alone.
About a million around the world
per year.
Ferrucci:
In healthcare, early indicators
are the name of the game
in that space,
so that's another place where
it can save somebody's life.
Dr. Herman: Here in
the breast-cancer center,
all the things that
the radiologist's brain
does in two minutes, the
computer does instantaneously.
The computer has looked
at 1 billion mammograms,
and it takes that data
and applies it
to this image instantaneously,
so the medical application
is profound.
Zilis:
Another really exciting area
that we're seeing
a lot of development in
is actually understanding
our genetic code
and using that
to both diagnose disease
and create
personalized treatments.
Kurzweil:
The primary application
of all these machines
will be to extend
our own intelligence.
We'll be able to make
ourselves smarter,
and we'll be better
at solving problems.
We don't have to age.
We'll actually understand aging.
We'll be able to stop it.
Man: There's really no limit
to what intelligent machines
can do for the human race.
How could a smarter machine
not be a better machine?
It's hard to say exactly
when I began to think
that that was a bit naive.
Stuart Russell,
he's basically a god
in the field
of artificial intelligence.
He wrote the book that almost
every university uses.
Russell: I used to say it's the
best-selling AI textbook.
Now I just say "It's the PDF
that's stolen most often."
Artificial intelligence is
about making computers smart,
and from the point
of view of the public,
what counts as AI
is just something
that's surprisingly intelligent
compared to what
we thought computers
would typically be able to do.
AI is a field of research
to try to basically simulate
all kinds of human capabilities.
We're in the AI era.
Silicon Valley
has the ability to focus
on one bright, shiny thing.
It was social networking
and social media
over the last decade,
and it's pretty clear
that the bit has flipped.
And it starts
with machine learning.
Nolan: When we look back at this
moment, what was the first AI?
It's not sexy,
and it isn't the thing
we could see at the movies,
but you'd make a great case
that Google created,
not a search engine,
but a godhead.
A way for people to ask
any question they wanted
and get the answer they needed.
Russell: Most people are not
aware that what Google is doing
is actually a form of
artificial intelligence.
They just go there,
they type in a thing.
Google gives them the answer.
Musk: With each search,
we train it to be better.
Sometimes we're typing a search,
and it tells us the answer
before we've finished
asking the question.
You know, who is the president
of Kazakhstan?
And it'll just tell you.
You don't have to go to the
Kazakhstan national website
to find out.
You didn't use to be
able to do that.
Nolan:
That is artificial intelligence.
Years from now when we try
to understand, we will say,
"How did we miss it?"
Markoff: It's one of
the striking contradictions
that we're facing.
Google and Facebook, et al,
have built businesses
on giving us,
as a society, free stuff.
But it's a Faustian bargain.
They're extracting something
from us in exchange,
but we don't know
what code is running
on the other side and why.
We have no idea.
It does strike
right at the issue
of how much we should
trust these machines.
I use computers
literally for everything.
There are so many
computer advancements now,
and it's become such
a big part of our lives.
It's just incredible
what a computer can do.
You can actually carry
a computer in your purse.
I mean, how awesome is that?
I think most technology is meant
to make things easier
and simpler for all of us,
so hopefully that just
remains the focus.
I think everybody loves
their computers.
People don't realize
they are constantly
being negotiated with
by machines,
whether that's the price
of products in your Amazon cart,
whether you can get
on a particular flight,
whether you can reserve
a room at a particular hotel.
What you're experiencing
are machine-learning algorithms
that have determined
that a person like you
is willing to pay 2 cents more
and is changing the price.
Kosinski: Now, a computer looks
at millions of people
simultaneously for
very subtle patterns.
You can take seemingly
innocent digital footprints,
such as someone's playlist
on Spotify,
or stuff that they
bought on Amazon,
and then use algorithms
to translate this
into a very detailed and a
very accurate, intimate profile.
Kaplan: There is a dossier on
each of us that is so extensive
it would be possibly
accurate to say
that they know more about you
than your mother does.
Tegmark: The major cause
of the recent AI breakthrough
isn't just that some dude
had a brilliant insight
all of a sudden,
but simply that we have
much bigger data
to train them on
and vastly better computers.
el Kaliouby:
The magic is in the data.
It's a ton of data.
I mean, it's data
that's never existed before.
We've never had
this data before.
We've created technologies
that allow us to capture
vast amounts of information.
If you think of a billion
cellphones on the planet
with gyroscopes
and accelerometers
and fingerprint readers...
couple that with the GPS
and the photos they take
and the tweets that you send,
we're all giving off huge
amounts of data individually.
Cars that drive while the cameras
on them suck up information
about the world around them.
The satellites that are now
in orbit the size of a toaster.
The infrared about
the vegetation on the planet.
The buoys that are out
in the oceans
to feed into the climate models.
And the NSA, the CIA,
as they collect information
about the
geopolitical situations.
The world today is literally
swimming in this data.
Kosinski: Back in 2012,
IBM estimated
that an average human being
leaves 500 megabytes
of digital footprints every day.
If you wanted to take
one day's worth of data
that humanity produces
and print it out
on letter-sized paper,
double-sided, font size 12,
and you stack it up,
it would reach from
the surface of the Earth
to the sun four times over.
That's every day.
Kaplan: The data itself
is not good or evil.
It's how it's used.
We're relying, really,
on the goodwill of these people
and on the policies
of these companies.
There is no legal requirement
for how they can
and should use
that kind of data.
That, to me, is at the heart
of the trust issue.
Barrat: Right now there's a
giant race for creating machines
that are as smart as humans.
Google -- They're working on
what's really the kind of
Manhattan Project
of artificial intelligence.
They've got the most money.
They've got the most talent.
They're buying up AI companies
and robotics companies.
Urban: People still think
of Google as a search engine
and their e-mail provider
and a lot of other things
that we use on a daily basis,
but behind that search box
are 10 million servers.
That makes Google the most
powerful computing platform
in the world.
Google is now working
on an AI computing platform
that will have
100 million servers.
So when you're interacting
with Google,
we're just seeing
the toenail of something
that is a giant beast
in the making.
And the truth is,
I'm not even sure
that Google knows
what it's becoming.
Phoenix: If you look inside of
what algorithms are being used
at Google,
it's technology
largely from the '80s.
So these are models that you
train by showing them a 1, a 2,
and a 3, and it learns not
what a 1 is or what a 2 is --
It learns what the difference
between a 1 and a 2 is.
It's just a computation.
In the last half decade, where
we've made this rapid progress,
it has all been
in pattern recognition.
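[Editor's note: a minimal sketch of the kind of pattern recognition Phoenix describes -- a model shown many 1s and 2s that learns only the boundary between them, not what a "1" or a "2" is. This is illustrative only and assumes scikit-learn is available; it is not the software used at Google.]

```python
# Train a classifier to tell 1s from 2s; it learns the difference,
# not the concepts. Assumes scikit-learn is installed.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()  # 8x8 grayscale images of handwritten digits
mask = (digits.target == 1) | (digits.target == 2)
X, y = digits.data[mask], digits.target[mask]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# The model fits a decision boundary between the two classes.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("accuracy on unseen digits:", clf.score(X_test, y_test))
```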
Tegmark: Most of
the good, old-fashioned AI
was when we would tell
our computers
how to play a game like chess...
from the old paradigm where
you just tell the computer
exactly what to do.
Announcer:
This is "Jeopardy!"
"The IBM Challenge"!
Ferrucci: No one at the time
had thought that a machine
could have the precision
and the confidence
and the speed
to play "Jeopardy!"
well enough against
the best humans.
Let's play "Jeopardy!"
Watson.Watson: What is "shoe"?
You are right.
You get to pick.
Literary Character APB
for $800.
Answer --
the Daily Double.
Watson actually got its
knowledge by reading Wikipedia
and 200 million pages
of natural-language documents.
Ferrucci:
You can't program every line
of how the world works.
The machine has to learn
by reading.
Now we come to Watson.
"Who is Bram Stoker?"
And the wager?
Hello! $17,973.
$41,413.
And a two-day total
of $77--
Phoenix: Watson's trained
on huge amounts of text,
but it's not like it
understands what it's saying.
It doesn't know that water makes
things wet by touching water
and by seeing the way
things behave in the world
the way you and I do.
A lot of language AI today
is not building logical models
of how the world works.
Rather, it's looking at
how the words appear
in the context of other words.
Barrat: David Ferrucci
developed IBM's Watson,
and somebody asked him,
"Does Watson think?"
And he said,
"Does a submarine swim?"
And what he meant was,
when they developed submarines,
they borrowed basic principles
of swimming from fish.
But a submarine swims
farther and faster than fish
and can carry a huge payload.
It out-swims fish.
Ng: Watson winning the game
of "Jeopardy!"
will go down
in the history of AI
as a significant milestone.
We tend to be amazed
when the machine does so well.
I'm even more amazed when the
computer beats humans at things
that humans are
naturally good at.
This is how we make progress.
In the early days of
the Google Brain project,
I gave the team a very
simple instruction,
which was, "Build the biggest
neural network possible,
like 1,000 computers."
Musk: A neural net is
something very close
to a simulation
of how the brain works.
It's very probabilistic,
but with contextual relevance.
Urban: In your brain,
you have long neurons
that connect to thousands
of other neurons,
and you have these pathways
that are formed and forged
based on what
the brain needs to do.
When a baby tries something and
it succeeds, there's a reward,
and that pathway that created
the success is strengthened.
If it fails at something,
the pathway is weakened,
and so, over time,
the brain becomes honed
to be good at
the environment around it.
Ng: Really, it's just getting
machines to learn by themselves.
This is called "deep learning,"
and "deep learning"
and "neural networks"
mean roughly the same thing.
Tegmark: Deep learning
is a totally different approach
where the computer learns
more like a toddler,
by just getting a lot of data
and eventually
figuring stuff out.
The computer just gets
smarter and smarter
as it has more experiences.
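[Editor's note: a toy sketch of the strengthen-on-success, weaken-on-failure idea described above -- a single artificial neuron trained with the classic perceptron rule. The task and numbers are invented for illustration.]

```python
# One "neuron" with two input pathways. Correct outputs leave the
# weights alone; errors strengthen or weaken each pathway slightly.
import random

weights = [0.0, 0.0]
bias = 0.0
lr = 0.1

def act(x):
    s = bias + sum(w * xi for w, xi in zip(weights, x))
    return 1 if s > 0 else 0

# Task: fire exactly when the first input is on.
for _ in range(1000):
    x = [random.randint(0, 1), random.randint(0, 1)]
    target = x[0]
    error = target - act(x)   # +1 too weak, -1 too strong, 0 correct
    for i in range(2):
        weights[i] += lr * error * x[i]   # adjust each pathway
    bias += lr * error

print("pathway strengths:", weights, "bias:", bias)
```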
Ng: So, imagine, if you will,
a neural network, you know,
like 1,000 computers.
And it wakes up
not knowing anything.
And we made it watch YouTube
for a week.
Oppan Gangnam style
Ow!
Charlie!
That really hurt!
Gangnam style
Op, op, op, op
Oppan Gangnam style
Ng: And so, after watching
YouTube for a week,
what would it learn?
We had a hypothesis that
it would learn to detect
commonly occurring objects
in videos.
And so, we know that human faces
appear a lot in videos,
so we looked,
and, lo and behold,
there was a neuron that had
learned to detect human faces.
Leave Britney alone!
Well, what else
appears in videos a lot?
So, we looked,
and to our surprise,
there was actually a neuron
that had learned to detect cats.
I still remember
seeing recognition.
"Wow, that's a cat.
Okay, cool.
Great."
Barrat:
It's all pretty innocuous
when you're thinking
about the future.
It all seems kind of
harmless and benign.
But we're making
cognitive architectures
that will fly farther
and faster than us
and carry a bigger payload,
and they won't be
warm and fuzzy.
Ferrucci: I think that,
in three to five years,
you will see a computer system
that will be able
to autonomously learn
how to understand,
how to build understanding,
not unlike the way
the human mind works.
Whatever that lunch was,
it was certainly delicious.
Simply some of
Robby's synthetics.
He's your cook, too?
Even manufactures
the raw materials.
Come around here, Robby.
I'll show you
how this works.
One introduces
a sample of human food
through this aperture.
Down here there's a small
built-in chemical laboratory,
where he analyzes it.
Later, he can reproduce
identical molecules
in any shape or quantity.
Why, it's
a housewife's dream.
Announcer: Meet Baxter,
a revolutionary
new category of robots,
with common sense.
Baxter...
Barrat: Baxter is
a really good example
of the kind of competition
we face from machines.
Baxter can do almost anything
we can do with our hands.
Baxter costs about
what a minimum-wage worker
makes in a year.
But Baxter won't be
taking the place
of one minimum-wage worker --
He'll be taking
the place of three,
because they never get tired,
they never take breaks.
Gourley: That's probably the
first thing we're gonna see --
displacement of jobs.
They're gonna be done
quicker, faster, cheaper
by machines.
Our ability to even stay current
is so insanely limited
compared to
the machines we build.
For example, now we have this
great movement of Uber and Lyft
kind of making
transportation cheaper
and democratizing
transportation,
which is great.
The next step is gonna be
that they're all gonna be
replaced by driverless cars,
and then all the Uber
and Lyft drivers
have to find
something new to do.
Barrat: There are
4 million professional drivers
in the United States.
They're unemployed soon.
7 million people
that do data entry.
Those people
are gonna be jobless.
A job isn't just about money,
right?
On a biological level,
it serves a purpose.
It becomes a defining thing.
When the jobs go away
in any given civilization,
it doesn't take long
until that turns into violence.
We face a giant divide
between rich and poor,
because that's what automation
and AI will provoke --
a greater divide between
the haves and the have-nots.
Right now, it's working
into the middle class,
into white-collar jobs.
IBM's Watson does
business analytics
that we used to pay a business
analyst $300 an hour to do.
Gourley: Today, you're going
to college to be a doctor,
to be an accountant,
to be a journalist.
It's unclear that there's
gonna be jobs there for you.
Ng: If someone's planning for
a 40-year career in radiology,
just reading images,
I think that could be
a challenge
to the new graduates of today.
Dr. Herman: The da Vinci robot
is currently utilized
by a variety of surgeons
for its accuracy and its ability
to avoid the inevitable
fluctuations of the human hand.
Anybody who watches this
feels the amazingness of it.
You look through the scope,
and you're seeing the claw hand
holding that woman's ovary.
Humanity was resting right here
in the hands of this robot.
People say it's the future,
but it's not the future --
It's the present.
Zilis: If you think about
a surgical robot,
there's often not a lot
of intelligence in these things,
but over time, as we put
more and more intelligence
into these systems,
the surgical robots can actually
learn from each robot surgery.
They're tracking the movements,
they're understanding
what worked
and what didn't work.
And eventually, the robot
for routine surgeries
is going to be able to perform
that entirely by itself...
or with human supervision.
Dr. Herman: It seems that we're
feeding it and creating it,
but, in a way, we are a slave
to the technology,
because we can't go back.
Gourley: The machines are taking
bigger and bigger bites
out of our skill set
at an ever-increasing speed.
And so we've got to run
faster and faster
to keep ahead of the machines.
How do I look?
Good.
Are you attracted to me?
What?
Are you attracted to me?
You give me indications
that you are.
I do?
Yes.
Nolan: This is the future
we're headed into.
We want to design
our companions.
We're gonna like to see
a human face on AI.
Therefore, gaming our emotions
will be depressingly easy.
We're not that complicated.
We're simple.
Stimulus-response.
I can make you like me basically
by smiling at you a lot.
AIs are gonna be fantastic
at manipulating us.
So, you've developed
a technology
that can sense
what people are feeling.
Right.
We've developed technology
that can read
your facial expressions
and map that to a number
of emotional states.
el Kaliouby: 15 years ago,
I had just finished
my undergraduate studies
in computer science,
and it struck me that I was
spending a lot of time
interacting with my laptops
and my devices,
yet these devices had absolutely
no clue how I was feeling.
I started thinking, "What if
this device could sense
that I was stressed
or I was having a bad day?
What would that open up?"
Hi, first-graders!
How are you?
Can I get a hug?
We had kids interact
with the technology.
A lot of it
is still in development,
but it was just amazing.
Who likes robots?
Me!
Who wants to have a robot
in their house?
What would you use
a robot for, Jack?
I would use it to ask my mom
very hard math questions.
Okay.
What about you, Theo?
I would use it
for scaring people.
All right.
So, start by smiling.
Nice.
Brow furrow.
Nice one.
Eyebrow raise.
This generation, technology
is just surrounding them
all the time.
It's almost like they expect
to have robots in their homes,
and they expect these robots
to be socially intelligent.
What makes robots smart?
Put them in, like, a math
or biology class.
I think you would
have to train it.
All right.
Let's walk over here.
So, if you smile and you
raise your eyebrows,
it's gonna run over to you.
Woman: It's coming over!
It's coming over! Look.
But if you look angry,
it's gonna run away.
-Awesome!
-Oh, that was good.
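[Editor's note: a minimal sketch of the demo's control rule, assuming an upstream model -- like the one el Kaliouby describes -- has already scored each expression from 0 to 1. The names and thresholds here are illustrative, not a real API.]

```python
# Stimulus-response: approach a happy face, retreat from an angry one.
def robot_action(expressions):
    if expressions.get("smile", 0) > 0.5 and expressions.get("brow_raise", 0) > 0.5:
        return "approach"   # smile plus raised eyebrows: run over
    if expressions.get("anger", 0) > 0.5:
        return "retreat"    # angry face: run away
    return "idle"

print(robot_action({"smile": 0.9, "brow_raise": 0.7}))  # approach
print(robot_action({"anger": 0.8}))                     # retreat
```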
We're training computers to read
and recognize emotions.
Ready? Set? Go!
And the response so far
has been really amazing.
People are integrating this
into health apps,
meditation apps, robots, cars.
We're gonna see
how this unfolds.
Zilis:
Robots can contain AI,
but the robot is just
a physical instantiation,
and the artificial
intelligence is the brain.
And so brains can exist purely
in software-based systems.
They don't need to have
a physical form.
Robots can exist without
any artificial intelligence.
We have a lot of
dumb robots out there.
But a dumb robot can be
a smart robot overnight,
given the right software,
given the right sensors.
Barrat: We can't help but impute
motive into inanimate objects.
We do it with machines.
We'll treat them like children.
We'll treat them
like surrogates.
-Goodbye!
-Goodbye!
And we'll pay the price.
Okay, welcome to ATR.
Konnichiwa.
Gourley: We build
artificial intelligence,
and the very first thing
we want to do is replicate us.
I think the key point will come
when all the major senses
are replicated --
sight...
touch...
smell.
When we replicate our senses,
is that when it becomes alive?
Nolan:
So many of our machines
are being built
to understand us.
But what happens when
an anthropomorphic creature
discovers that they can
adjust their loyalty,
adjust their courage,
adjust their avarice,
adjust their cunning?
Musk: The average person,
they don't see killer robots
going down the streets.
They're like, "What are
you talking about?"
Man, we want to make sure
that we don't have killer robots
going down the street.
Once they're going down
the street, it is too late.
Russell: The thing
that worries me right now,
that keeps me awake,
is the development
of autonomous weapons.
Up to now, people have expressed
unease about drones,
which are remotely
piloted aircraft.
If you take a drone's camera
and feed it into the AI system,
it's a very easy step from here
to fully autonomous weapons
that choose their own targets
and release their own missiles.
The expected life-span
of a human being
in that kind of
battle environment
would be measured in seconds.
Singer: At one point,
drones were science fiction,
and now they've become
the normal thing in war.
There's over 10,000 in
U.S. military inventory alone.
But they're not
just a U.S. phenomenon.
There's more than 80 countries
that operate them.
Gourley: It stands to reason
that people making some
of the most important and
difficult decisions in the world
are gonna start to use
and implement
artificial intelligence.
The Air Force just designed
a $400-billion jet program
to put pilots in the sky,
and a $500 AI, designed by
a couple of graduate students,
is beating the best human pilots
with a relatively
simple algorithm.
AI will have as big an impact
on the military
as the combustion engine
had at the turn of the century.
It will literally touch
everything
that the military does,
from driverless convoys
delivering logistical supplies,
to unmanned drones
delivering medical aid,
to computational propaganda,
trying to win the hearts
and minds of a population.
And so it stands to reason
that whoever has the best AI
will probably achieve
dominance on this planet.
At some point in
the early 21st century,
all of mankind was
united in celebration.
We marveled
at our own magnificence
as we gave birth to AI.
AI?
You mean
artificial intelligence?
A singular consciousness
that spawned
an entire race of machines.
We don't know
who struck first -- us or them,
but we know that it was us
that scorched the sky.
Singer: There's a long history
of science fiction,
not just predicting the future,
but shaping the future.
Arthur Conan Doyle
writing before World War I
on the danger of how
submarines might be used
to carry out civilian blockades.
At the time
he's writing this fiction,
the Royal Navy made fun
of Arthur Conan Doyle
for this absurd idea
that submarines
could be useful in war.
One of the things
we've seen in history
is that our attitude
towards technology,
but also ethics,
are very context-dependent.
For example, the submarine...
nations like Great Britain
and even the United States
found it horrifying
to use the submarine.
In fact, the German use of the
submarine to carry out attacks
was the reason why the United
States joined World War I.
But move the timeline forward.
Man: The United States
of America was suddenly
and deliberately attacked
by the empire of Japan.
Five hours after Pearl Harbor,
the order goes out
to commit unrestricted
submarine warfare against Japan.
So Arthur Conan Doyle
turned out to be right.
Nolan: That's the great old line
about science fiction --
It's a lie that tells the truth.
Fellow executives,
it gives me great pleasure
to introduce you to the future
of law enforcement...
ED-209.
This isn't just a question
of science fiction.
This is about what's next, about
what's happening right now.
The role of intelligent systems
is growing very rapidly
in warfare.
Everyone is pushing
in the unmanned realm.
Gourley: Today, the Secretary of
Defense is very, very clear --
We will not create fully
autonomous attacking vehicles.
Not everyone
is gonna hold themselves
to that same set of values.
And when China and Russia start
deploying autonomous vehicles
that can attack and kill, what's
the move that we're gonna make?
Russell: You can't say,
"Well, we're gonna use
autonomous weapons
for our military dominance,
but no one else
is gonna use them."
If you make these weapons,
they're gonna be used to attack
human populations
in large numbers.
Autonomous weapons are,
by their nature,
weapons of mass destruction,
because it doesn't need a human
being to guide it or carry it.
You only need one person
to, you know,
write a little program.
It just captures
the complexity of this field.
It is cool.
It is important.
It is amazing.
It is also frightening.
And it's all about trust.
It's an open letter about
artificial intelligence,
signed by some of
the biggest names in science.
What do they want?
Ban the use of
autonomous weapons.
Woman: The author stated,
"Autonomous weapons
have been described
as the third revolution
in warfare."
Woman #2: ...thousand
artificial-intelligence
specialists
calling for a global ban
on killer robots.
Tegmark:
This open letter basically says
that we should redefine the goal
of the field of
artificial intelligence
away from just creating pure,
undirected intelligence,
towards creating
beneficial intelligence.
The development of AI
is not going to stop.
It is going to continue
and get better.
If the international community
isn't putting
certain controls on this,
people will develop things
that can do anything.
Woman: The letter says
that we are years, not decades,
away from these weapons
being deployed.
So first of all...
We had 6,000 signatories
of that letter,
including many of
the major figures in the field.
I'm getting a lot of visits
from high-ranking officials
who wish to emphasize that
American military dominance
is very important,
and autonomous weapons
may be part of
the Defense Department's plan.
That's very, very scary,
because a value system
of military developers
of technology
is not the same as a value
system of the human race.
Markoff: Out of the concerns
about the possibility
that this technology might be
a threat to human existence,
a number of the technologists
have funded
the Future of Life Institute
to try to grapple
with these problems.
All of these guys are secretive,
and so it's interesting
to me to see them,
you know, all together.
Everything we have is a result
of our intelligence.
It's not the result
of our big, scary teeth
or our large claws
or our enormous muscles.
It's because we're actually
relatively intelligent.
And among my generation,
we're all having
what we call "holy cow,"
or "holy something else"
moments,
because we see
that the technology
is accelerating faster
than we expected.
I remember sitting
around the table there
with some of the best and
the smartest minds in the world,
and what really
struck me was,
maybe the human brain
is not able to fully grasp
the complexity of the world
that we're confronted with.
Russell:
As it's currently constructed,
the road that AI is following
heads off a cliff,
and we need to change
the direction that we're going
so that we don't take
the human race off the cliff.
Musk: Google acquired DeepMind
several years ago.
DeepMind operates
as a semi-independent
subsidiary of Google.
The thing that makes
DeepMind unique
is that DeepMind
is absolutely focused
on creating digital
superintelligence --
an AI that is vastly smarter
than any human on Earth
and ultimately smarter than
all humans on Earth combined.
This is from the DeepMind
reinforcement learning system.
Basically wakes up
like a newborn baby
and is shown the screen
of an Atari video game
and then has to learn
to play the video game.
It knows nothing about objects,
about motion, about time.
It only knows that there's
an image on the screen
and there's a score.
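[Editor's note: DeepMind's Atari system was a deep Q-network; this is the same reinforcement-learning idea in its smallest form -- an agent that knows only its state and a score, learning by trial and error on a one-dimensional walk. A toy sketch, not DeepMind's code.]

```python
# Tabular Q-learning: the score is the only feedback the agent gets.
import random

n_states, goal = 6, 5
Q = [[0.0, 0.0] for _ in range(n_states)]   # Q[state][action]: 0=left, 1=right

def choose(s, eps=0.3):
    # Explore sometimes; otherwise take the best-known action.
    if random.random() < eps or Q[s][0] == Q[s][1]:
        return random.randrange(2)
    return 0 if Q[s][0] > Q[s][1] else 1

for _ in range(300):                        # episodes of trial and error
    s = 0
    for _ in range(100):
        a = choose(s)
        s2 = max(0, s - 1) if a == 0 else min(goal, s + 1)
        r = 1.0 if s2 == goal else 0.0      # reward only at the goal
        Q[s][a] += 0.5 * (r + 0.9 * max(Q[s2]) - Q[s][a])
        s = s2
        if s == goal:
            break

print("prefers moving right everywhere:", all(q[1] > q[0] for q in Q[:goal]))
```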
So, if your baby woke up
the day it was born
and, by late afternoon,
was playing
40 different Atari video games
at a superhuman level,
you would be terrified.
You would say, "My baby
is possessed. Send it back."
Musk: The DeepMind system
can win at any game.
It can already beat all
the original Atari games.
It is superhuman.
It plays the games at superspeed
in less than a minute.
DeepMind turned
to another challenge,
and the challenge
was the game of Go,
which people
have generally argued
has been beyond
the power of computers
to play with
the best human Go players.
First, they challenged
a European Go champion.
Then they challenged
a Korean Go champion.
Man:
Please start the game.
And they were able
to win both times
in kind of striking fashion.
Nolan: You were reading articles
in The New York Times years ago
talking about how Go would take
100 years for us to solve.
Urban:
People said, "Well, you know,
but that's still just a board.
Poker is an art.
Poker involves reading people.
Poker involves lying
and bluffing.
It's not an exact thing.
That will never be,
you know, a computer.
You can't do that."
They took the best
poker players in the world,
and it took seven days
for the computer
to start demolishing the humans.
So it's the best poker player
in the world,
it's the best Go player in the
world, and the pattern here
is that AI might take
a little while
to wrap its tentacles
around a new skill,
but when it does, when it
gets it, it is unstoppable.
DeepMind's AI has
administrator-level access
to Google's servers
to optimize energy usage
at the data centers.
However, this could be
an unintentional Trojan horse.
DeepMind has to have complete
control of the data centers,
so with a little
software update,
that AI could take
complete control
of the whole Google system,
which means
they can do anything.
They could look
at all your data.
They could do anything.
We're rapidly heading towards
digital superintelligence
that far exceeds any human.
I think it's very obvious.
Barrat:
The problem is, we're not gonna
suddenly hit
human-level intelligence
and say,
"Okay, let's stop research."
It's gonna go beyond
human-level intelligence
into what's called
"superintelligence,"
and that's anything
smarter than us.
Tegmark:
AI at the superhuman level,
if we succeed with that,
will be
by far the most powerful
invention we've ever made
and the last invention
we ever have to make.
And if we create AI
that's smarter than us,
we have to be open
to the possibility
that we might actually
lose control to them.
Russell: Let's say
you give it some objective,
like curing cancer,
and then you discover
that the way
it chooses to go about that
is actually in conflict
with a lot of other things
you care about.
Musk: AI doesn't have to be evil
to destroy humanity.
If AI has a goal, and humanity
just happens to be in the way,
it will destroy humanity
as a matter of course,
without even thinking about it.
No hard feelings.
It's just like
if we're building a road
and an anthill happens
to be in the way...
We don't hate ants.
We're just building a road.
And so goodbye, anthill.
It's tempting
to dismiss these concerns,
'cause it's, like,
something that might happen
in a few decades or 100 years,
so why worry?
Russell: But if you go back
to September 11, 1933,
Ernest Rutherford,
who was the most well-known
nuclear physicist of his time,
said that the possibility
of ever extracting
useful amounts of energy
from the transmutation
of atoms, as he called it,
was moonshine.
The next morning, Leo Szilard,
who was a much
younger physicist,
read this and got really annoyed
and figured out
how to make
a nuclear chain reaction
just a few months later.
We have spent more
than $2 billion
on the greatest
scientific gamble in history.
Russell: So when people say
that, "Oh, this is so far off
in the future, we don't have
to worry about it,"
it might only be three, four
breakthroughs of that magnitude
that will get us from here
to superintelligent machines.
Tegmark: If it's gonna take
20 years to figure out
how to keep AI beneficial,
then we should start today,
not at the last second
when some dudes
drinking Red Bull
decide to flip the switch
and test the thing.
Musk:
We have five years.
I think
digital superintelligence
will happen in my lifetime.
100%.
Barrat: When this happens,
it will be surrounded
by a bunch of people
who are really just excited
about the technology.
They want to see it succeed,
but they're not anticipating
that it can get out of control.
Oh, my God, I trust
my computer so much.
That's an amazing question.
I don't trust
my computer.
If it's on,
I take it off.
Like, even when it's off,
I still think it's on.
Like, you know?
Like, you really cannot tru--
Like, the webcams,
you don't know if, like,
someone might turn it...
You don't know, like.
I don't trust my computer.
Like, in my phone,
every time they ask me
"Can we send your
information to Apple?"
every time, I...
So, I don't trust my phone.
Okay. So, part of it is,
yes, I do trust it,
because it would be really
hard to get through the day
in the way our world is
set up without computers.
Dr. Herman: Trust is
such a human experience.
I have a patient coming in
with an intracranial aneurysm.
They want to look
in my eyes and know
that they can trust
this person with their life.
I'm not horribly concerned
about anything.
Good.
Part of that
is because
I have confidence in you.
This procedure
we're doing today
20 years ago
was essentially impossible.
We just didn't have the
materials and the technologies.
So, the coil is barely
in there right now.
It's just a feather
holding it in.
It's nervous time.
We're just in purgatory,
intellectual,
humanistic purgatory,
and AI might know
exactly what to do here.
We've got the coil
into the aneurysm.
But it wasn't in
so tremendously well
that I knew it would stay,
so with a maybe 20% risk
of a very bad situation,
I elected
to just bring her back.
Because of my relationship
with her
and knowing the difficulties
of coming in
and having the procedure,
I consider things,
when I should only consider
the safest possible route
to achieve success.
But I had to stand there for
10 minutes agonizing about it.
The computer feels nothing.
The computer just does
what it's supposed to do,
better and better.
I want to be AI in this case.
But can AI be compassionate?
I mean, it's everybody's
question about AI.
We are the sole
embodiment of humanity,
and it's a stretch for us
to accept that a machine
can be compassionate
and loving in that way.
Part of me
doesn't believe in magic,
but part of me has faith
that there is something
beyond the sum of the parts,
that there is at least a oneness
in our shared ancestry,
our shared biology,
our shared history.
Some connection there
beyond machine.
So, then, you have
the other side of that, is,
does the computer
know it's conscious,
or can it be conscious,
or does it care?
Does it need to be conscious?
Does it need to be aware?
I do not think that a robot
could ever be conscious.
Unless they programmed it
that way.
Conscious? No.
No.
No.
I mean, I think a robot could be
programmed to be conscious.
How are they programmed
to do everything else?
That's another big part
of artificial intelligence,
is to make them conscious
and make them feel.
Lipson: Back in 2005, we started
trying to build machines
with self-awareness.
This robot, to begin with,
didn't know what it was.
All it knew was that it needed
to do something like walk.
Through trial and error,
it figured out how to walk
using its imagination,
and then it walked away.
And then we did
something very cruel.
We chopped off a leg
and watched what happened.
At the beginning, it didn't
quite know what had happened.
But over about a period
of a day, it then began to limp.
And then, a year ago,
we were training an AI system
for a live demonstration.
We wanted to show how we wave
all these objects
in front of the camera
and the AI could
recognize the objects.
And so, we're preparing
this demo,
and we had on a side screen
this ability
to watch what certain
neurons were responding to.
And suddenly we noticed
that one of the neurons
was tracking faces.
It was tracking our faces
as we were moving around.
Now, the spooky thing about this
is that we never trained
the system
to recognize human faces,
and yet, somehow,
it learned to do that.
Even though these robots
are very simple,
we can see there's
something else going on there.
It's not just programming.
So, this is just the beginning.
Horvitz: I often think about
that beach in Kitty Hawk,
the 1903 flight
by Orville and Wilbur Wright.
It was kind of a canvas plane,
and it's wood and iron,
and it gets off the ground for,
what, a minute and 20 seconds,
on this windy day
before touching back down again.
And it was
just around 65 summers or so
after that moment that you have
a 747 taking off from JFK...
...where a major concern
of someone on the airplane
might be whether or not
their salt-free diet meal
is gonna be coming to them
or not.
We have a whole infrastructure,
with travel agents
and tower control,
and it's all casual,
and it's all part of the world.
Right now, as far
as we've come with machines
that think and solve problems,
we're at Kitty Hawk now.
We're in the wind.
We have our tattered-canvas
planes up in the air.
But what happens
in 65 summers or so?
We will have machines
that are beyond human control.
Should we worry about that?
I'm not sure it's going to help.
Kaplan: Nobody has any idea
today what it means for a robot
to be conscious.
There is no such thing.
There are a lot of smart people,
and I have a great deal
of respect for them,
but the truth is, machines
are natural psychopaths.
Man:
Fear came back into the market.
Man #2: Went down 800,
nearly 1,000, in a heartbeat.
I mean,
it is classic capitulation.
There are some people
who are proposing
it was some kind
of fat-finger error.
Take the Flash Crash of 2010.
In a matter of minutes,
$1 trillion in value
was lost in the stock market.
Woman: The Dow dropped nearly
1,000 points in a half-hour.
Kaplan:
So, what went wrong?
By that point in time,
more than 60% of all the trades
that took place
on the stock exchange
were actually being
initiated by computers.
Man:
Panic selling on the way down,
and all of a sudden
it stopped on a dime.
Man #2: This is all happening
in real time, folks.
Wisz: The short story of what
happened in the Flash Crash
is that algorithms
responded to algorithms,
and it compounded upon itself
over and over and over again
in a matter of minutes.
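[Editor's note: a toy feedback loop in the spirit of Wisz's description -- each algorithm sells in proportion to the drop the others just caused, so the reactions compound. The numbers are invented; this is not a model of the actual 2010 event.]

```python
# Algorithms responding to algorithms: a self-amplifying sell-off.
price, prev = 100.0, 100.0
price -= 1.0                      # a small initial shock

for minute in range(1, 8):
    drop = 2.0 * (prev - price)   # each algorithm reacts to the latest decline
    prev, price = price, max(price - drop, 0.0)
    print(f"minute {minute}: price {price:.2f}")
```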
Man: At one point, the market
fell as if down a well.
There is no regulatory body
that can adapt quickly enough
to prevent potentially
disastrous consequences
of AI operating
in our financial systems.
They are so prime
for manipulation.
Let's talk about the speed
with which
we are watching
this market deteriorate.
That's the type of AI-run-amuck
that scares people.
Kaplan:
When you give them a goal,
they will relentlessly
pursue that goal.
How many computer programs
are there like this?
Nobody knows.
Kosinski: One of the fascinating
aspects about AI in general
is that no one really
understands how it works.
Even the people who create AI
don't really fully understand.
Because it has millions
of elements,
it becomes completely impossible
for a human being
to understand what's going on.
Grassegger: Microsoft had set up
this artificial intelligence
called Tay on Twitter,
which was a chatbot.
They started out in the morning,
and Tay was starting to tweet
and learning from stuff
that was being sent to him
from other Twitter people.
Because some people,
like trolls, attacked him,
within 24 hours, the Microsoft
bot became a terrible person.
They had to literally
pull Tay off the Net
because he had turned
into a monster.
A misanthropic, racist, horrible
person you'd never want to meet.
And nobody had foreseen this.
The whole idea of AI is that
we are not telling it exactly
how to achieve a given
outcome or a goal.
AI develops on its own.
Nolan: We're worried about
superintelligent AI,
the master chess player
that will outmaneuver us,
but AI won't have to
actually be that smart
to have massively disruptive
effects on human civilization.
We've seen over the last century
it doesn't necessarily take
a genius to knock history off
in a particular direction,
and it won't take a genius AI
to do the same thing.
Bogus election news stories
generated more engagement
on Facebook
than top real stories.
Facebook really is
the elephant in the room.
Kosinski:
AI running Facebook news feed --
The task for AI
is keeping users engaged,
but no one really understands
exactly how this AI
is achieving this goal.
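[Editor's note: a toy sketch of "keep users engaged" posed as a learning problem -- an epsilon-greedy bandit that learns which kind of story gets clicks. The click probabilities are invented; this is not Facebook's system, only an illustration of how a goal can be achieved without anyone specifying how.]

```python
# The feed learns, by feedback alone, what people click on most.
import random

kinds = ["news", "friends", "outrage"]
true_click_rate = {"news": 0.05, "friends": 0.10, "outrage": 0.30}
shows = {k: 0 for k in kinds}
clicks = {k: 0 for k in kinds}

def pick():
    if random.random() < 0.1 or not any(shows.values()):
        return random.choice(kinds)   # explore occasionally
    return max(kinds, key=lambda k: clicks[k] / shows[k] if shows[k] else 0.0)

for _ in range(10000):
    k = pick()
    shows[k] += 1
    clicks[k] += random.random() < true_click_rate[k]

print(shows)   # the feed drifts toward whatever maximizes engagement
```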
Nolan: Facebook is building an
elegant mirrored wall around us.
A mirror that we can ask,
"Who's the fairest of them all?"
and it will answer, "You, you,"
time and again
and slowly begin
to warp our sense of reality,
warp our sense of politics,
history, global events,
until determining what's true
and what's not true,
is virtually impossible.
The problem is that AI
doesn't understand that.
AI just had a mission --
maximize user engagement,
and it achieved that.
Nearly 2 billion people
spend nearly one hour
on average a day
basically interacting with AI
that is shaping
their experience.
Even Facebook engineers,
they don't like fake news.
It's very bad business.
They want to get rid
of fake news.
It's just very difficult
to do because,
how do you recognize news
as fake
if you cannot read
all of that news personally?
There's so much
active misinformation
and it's packaged very well,
and it looks the same when
you see it on a Facebook page
or you turn on your television.
Nolan:
It's not terribly sophisticated,
but it is terribly powerful.
And what it means is
that your view of the world,
which, 20 years ago,
was determined,
if you watched the nightly news,
by three different networks,
the three anchors who endeavored
to get it right.
They might have had a little bias
one way or the other,
but, largely speaking,
we could all agree
on an objective reality.
Well, that objectivity is gone,
and Facebook has
completely annihilated it.
If most of your understanding
of how the world works
is derived from Facebook,
facilitated
by algorithmic software
that tries to show you
the news you want to see,
that's a terribly
dangerous thing.
And the idea that we have not
only set that in motion,
but allowed bad-faith actors
access to that information...
I mean, this is a recipe
for disaster.
Urban: I think that there will
definitely be lots of bad actors
trying to manipulate the world
with AI.
2016 was a perfect example
of an election
where there was lots of AI
producing lots of fake news
and distributing it
for a purpose, for a result.
Ladies and gentlemen,
honorable colleagues...
it's my privilege
to speak to you today
about the power of big data
and psychographics
in the electoral process
and, specifically,
to talk about the work
that we contributed
to Senator Cruz's
presidential primary campaign.
Nolan: Cambridge Analytica
emerged quietly as a company
that, according to its own hype,
has the ability to use
this tremendous amount of data
in order
to effect societal change.
In 2016, they had
three major clients.
Ted Cruz was one of them.
It's easy to forget
that, only 18 months ago,
Senator Cruz was one of
the less popular candidates
seeking nomination.
So, what was not possible maybe,
like, 10 or 15 years ago,
was that you can send fake news
to exactly the people
that you want to send it to.
And then you could actually see
how he or she reacts on Facebook
and then adjust that information
according to the feedback
that you got.
So you can start developing
kind of a real-time management
of a population.
In this case, we've zoned in
on a group
we've called "Persuasion."
These are people who are
definitely going to vote,
to caucus, but they need
moving from the center
a little bit more
towards the right
in order to support Cruz.
They need a persuasion message.
"Gun rights," I've selected.
That narrows the field
slightly more.
And now we know that we need
a message on gun rights,
it needs to be
a persuasion message,
and it needs to be nuanced
according to
the certain personality
that we're interested in.
Through social media, there's an
infinite amount of information
that you can gather
about a person.
We have somewhere close
to 4,000 or 5,000 data points
on every adult
in the United States.
Grassegger: It's about targeting
the individual.
It's like a weapon,
which can be used
in the totally wrong direction.
That's the problem
with all of this data.
It's almost as if we built the
bullet before we built the gun.
Ted Cruz employed our data,
our behavioral insights.
He started from a base
of less than 5%
and had a very slow-and-steady-
but-firm rise to above 35%,
making him, obviously,
the second most threatening
contender in the race.
Now, clearly, the Cruz
campaign is over now,
but what I can tell you
is that of the two candidates
left in this election,
one of them is using
these technologies.
I, Donald John Trump,
do solemnly swear
that I will faithfully execute
the office of President
of the United States.
Nolan: Elections are
a marginal exercise.
It doesn't take
a very sophisticated AI
in order to have
a disproportionate impact.
Before Trump, Brexit was
another supposed client.
Well, at 20 minutes to 5:00,
we can now say
the decision taken in 1975
by this country to join
the common market
has been reversed by this
referendum to leave the EU.
Nolan: Cambridge Analytica
allegedly uses AI
to push through two of
the most ground-shaking pieces
of political change
in the last 50 years.
These are epochal events,
and if we believe the hype,
they are connected directly
to a piece of software,
essentially, created
by a professor at Stanford.
Kosinski:
Back in 2013, I described
that what they are doing
is possible
and warned against this
happening in the future.
Grassegger:
At the time, Michal Kosinski
was a young Polish researcher
working at the
Psychometrics Centre.
So, what Michal had done was to
gather the largest-ever data set
of how people
behave on Facebook.
Kosinski:
Psychometrics is trying
to measure psychological traits,
such as personality,
intelligence,
political views, and so on.
Now, traditionally,
those traits were measured
using tests and questions.
Nolan: Personality test --
the most benign thing
you could possibly think of.
Something that doesn't
necessarily have
a lot of utility, right?
Kosinski: Our idea was that
instead of tests and questions,
we could simply look at the
digital footprints of behaviors
that we are all leaving behind
to understand openness,
conscientiousness,
neuroticism.
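[Editor's note: a hedged sketch of the psychometrics idea Kosinski describes -- predicting a trait from digital footprints, here synthetic "likes" and a plain linear model. The data and the relationship are invented for illustration; this is not his data set or code.]

```python
# Predict a personality trait from which items a person has "liked."
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_people, n_items = 2000, 50
likes = rng.integers(0, 2, size=(n_people, n_items))      # who liked what
true_w = rng.normal(size=n_items)                         # hidden relation
trait = (likes @ true_w + rng.normal(size=n_people)) > 0  # e.g. "extravert"

X_train, X_test, y_train, y_test = train_test_split(likes, trait, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("trait predicted from likes alone:", model.score(X_test, y_test))
```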
Grassegger: You can easily buy
personal data,
such as where you live, what
club memberships you've tried,
which gym you go to.
There are actually marketplaces
for personal data.
Nolan: It turns out, we can
discover an awful lot
about what you're gonna do
based on a very, very tiny
set of information.
Kosinski: We are training
deep-learning networks
to infer intimate traits,
people's political views,
personality,
intelligence,
sexual orientation
just from an image
from someone's face.
Now think about countries which
are not so free and open-minded.
If you can reveal people's
religious views
or political views
or sexual orientation
based on only profile pictures,
this could be literally
an issue of life and death.
I think there's no going back.
Do you know what
the Turing test is?
It's when a human interacts
with a computer,
and if the human doesn't know
they're interacting
with a computer,
the test is passed.
And over the next few days,
you're gonna be the human
component in a Turing test.
Holy shit.
Yeah, that's right, Caleb.
You got it.
'Cause if that test
is passed,
you are dead center of
the greatest scientific event
in the history of man.
If you've created
a conscious machine,
it's not the history
of man--
That's the history
of gods.
Nolan: It's almost like
technology is a god
in and of itself.
Like the weather.
We can't impact it.
We can't slow it down.
We can't stop it.
We feel powerless.
Kurzweil:
If we think of God
as an unlimited amount
of intelligence,
the closest we can get to that
is by evolving
our own intelligence
by merging with the artificial
intelligence we're creating.
Musk:
Today, our computers, phones,
applications give us
superhuman capability.
So, as the old maxim says,
if you can't beat 'em, join 'em.
el Kaliouby: It's about
a human-machine partnership.
I mean, we already see
how, you know,
our phones, for example, act
as a memory prosthesis, right?
I don't have to remember
your phone number anymore
'cause it's on my phone.
It's about machines
augmenting our human abilities,
as opposed to, like,
completely displacing them.
Nolan: If you look at all the
objects that have made the leap
from analog to digital
over the last 20 years...
it's a lot.
We're the last analog object
in a digital universe.
And the problem with that,
of course,
is that the data input/output
is very limited.
It's this.
It's these.
Zilis:
Our eyes are pretty good.
We're able to take in a lot
of visual information.
But our information output
is very, very, very low.
The reason this is important --
If we envision a scenario
where AI's playing a more
prominent role in societies,
we want good ways to interact
with this technology
so that it ends up
augmenting us.
Musk: I think
it's incredibly important
that AI not be "other."
It must be us.
And I could be wrong
about what I'm saying.
I'm certainly open to ideas
if anybody can suggest
a path that's better.
But I think we're gonna really
have to either merge with AI
or be left behind.
Gourley: It's hard to kind of
think of unplugging a system
that's distributed
everywhere on the planet,
that's distributed now
across the solar system.
You can't just, you know,
shut that off.
Nolan:
We've opened Pandora's box.
We've unleashed forces that
we can't control, we can't stop.
We're in the midst
of essentially creating
a new life-form on Earth.
Russell:
We don't know what happens next.
We don't know what shape
the intellect of a machine
will be when that intellect is
far beyond human capabilities.
It's just not something
that's possible.
The least scary future
I can think of is one
where we have at least
democratized AI.
Because if one company
or small group of people
manages to develop godlike
digital superintelligence,
they can take over the world.
At least when there's
an evil dictator,
that human is going to die,
but, for an AI,
there would be no death.
It would live forever.
And then you have
an immortal dictator
from which we can never escape.
Woman on P.A.:
Alan. Macchiato.
Woman:
Hello?
Yeah, yeah
Yeah, yeah
Yeah, yeah
Yeah, yeah
Hello?