Is Stephen Hawking Right About Artificial Intelligence?


Stephen Hawking, the popular theoretical physicist, ignited a debate about whether humanity’s search for advanced artificial intelligence will one day lead to thinking machines that take over from their creators. He continued to make path-breaking contributions to science until the age of 76, despite being told in 1963 that he had only two more years to live.

The British scientist made the argument during a wide-ranging interview with the BBC. Hawking had the motor neuron disease amyotrophic lateral sclerosis (ALS), and the interview touched on the technology he used to communicate. Get an artificial intelligence certification and become a Certified Artificial Intelligence Expert.

Blog Contents

  • Hawking’s Remark on AI
  • Hawking’s Experience with AI
  • AI as a Threat to Humanity
  • Could Thinking Machines Take Over?
  • Machines Already Taking Over
  • Conclusion

Let’s dig deeper into the roots of AI and ask whether Stephen Hawking is right about artificial intelligence.

Hawking’s Remark on AI

Stephen Hawking was regarded as one of the most brilliant theoretical physicists since Albert Einstein, known for his work on black holes and relativity. In an interview with the BBC, he stated: “Once humans develop artificial intelligence, it would take off on its own and re-design itself at an ever-increasing rate.”
The rudimentary forms of artificial intelligence produced so far have already proven to be very useful, said Prof Hawking, but he feared the implications of developing something that could equal or outperform humans. “The development of full artificial intelligence could spell the end of the human race,” he added.

Hawking’s Experience with AI

Stephen Hawking’s own experience with a basic form of AI illustrates how non-superhuman AI can indeed change people’s lives for the better. Predictive text software helped him communicate despite a catastrophic neurological illness. Other AI-based systems are already helping to prevent, combat, and reduce the burden of disease.

For example, AI can analyze medical sensors and other health data to predict how likely a patient is to develop a serious blood infection. In studies, such systems were significantly more accurate than other techniques and gave much earlier warning.
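To make the idea concrete, here is a minimal, purely illustrative sketch of how vital signs might be mapped to a risk score. The weights and the `sepsis_risk` function are invented for demonstration; a real clinical system would learn its parameters from large volumes of patient data and use far richer inputs.

```python
import math

def sepsis_risk(heart_rate, temp_c, resp_rate):
    """Toy logistic model: map a few vital signs to a 0-1 risk score.

    The weights below are hypothetical, chosen only to show the shape of
    such a model; they are not clinically validated.
    """
    # Deviations from rough "normal" baselines, combined linearly.
    z = (0.05 * (heart_rate - 80)
         + 0.9 * (temp_c - 37.0)
         + 0.1 * (resp_rate - 16))
    # Logistic (sigmoid) function squashes the score into (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

# A patient with normal vitals scores low; one with tachycardia,
# fever, and rapid breathing scores high.
print(round(sepsis_risk(75, 36.8, 14), 2))
print(round(sepsis_risk(130, 39.5, 30), 2))
```

The appeal of such models in practice is that they can re-score a patient continuously as new sensor readings arrive, flagging deterioration earlier than periodic manual checks.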

Another group of researchers created an AI program to sift through the electronic health records of 700,000 patients. The program, called “Deep Patient,” uncovered connections that were not apparent to doctors, identifying new risk patterns for certain cancers, diabetes, and psychiatric disorders.

AI as a Threat to Humanity?

AI can also be put to horrible uses. Scholars and analysts already worry that self-flying drones could be precursors to deadly autonomous robots. Even today’s early-stage AI poses many legal and practical questions. AI systems often rely on opaque algorithms whose decisions cannot be explained even by their own designers. The underlying mathematical models can be biased, and computational errors may arise. AI could eventually displace human abilities and increase unemployment. And restricted access to AI could deepen inequality globally. The Artificial Intelligence Report, launched by a renowned university, highlighted some of these problems. But no evidence has been found so far that AI poses any “imminent threat to humanity,” as Hawking feared.

Yet Hawking also encouraged AI research, in a limited form. He consistently called for further study of AI’s advantages and dangers. Even non-superhuman AI systems, he claimed, could help eliminate conflict, poverty, and disease.

Could Thinking Machines Take Over?

Hawking warned against an extreme form of AI, in which thinking machines would “take off” on their own, alter themselves, and independently design and create ever more capable systems. Such machines, he warned, would tragically outwit humans, who are constrained by the sluggish pace of biological evolution.

The issue of machine intelligence goes back at least to 1950, when Alan Turing, the British code-breaker and father of computer science, asked the question: “Can machines think?”
The prospect of intelligent machines taking over has been explored in one way or another across popular media and culture. Think of the films Colossus: The Forbin Project (1970) and Westworld (1973), and, more recently, Skynet in the 1984 film The Terminator and its sequels.
Common to all of these is the matter of delegating responsibility to machines. The concept of the technological singularity (or machine super-intelligence) goes back at least to artificial intelligence pioneer Ray Solomonoff, who warned in 1967:

“Although in the near future there is no prospect of very smart machines, the dangers posed are very serious and the problems very difficult. It would be good for a large number of intelligent people to think a lot about these problems before they arise.

It is my feeling that the realization of artificial intelligence will be a sudden occurrence. At some point in the development of the research, we will have had no practical experience with machine intelligence of any serious level: a month or so later, we will have a very intelligent machine and all the problems and dangers associated with our inexperience.”

Machines Already Taking Over

In the meantime, we see increasing amounts of responsibility being delegated to machines. On the one hand, these could be handheld calculators performing routine mathematical calculations, or global positioning systems (GPSs). On the other hand, they could be air traffic control systems, guided missiles, driverless trucks at mine sites, or the driverless cars recently trialled on our roads.

Humans delegate responsibility to machines for reasons that include saving time, reducing cost, and improving accuracy. But the nightmare scenarios, such as damage caused by a driverless vehicle, raise difficult questions of insurance and legal liability.

It is argued that computers might take over once their intelligence surpasses that of humans.


Conclusion

The issue raised by Stephen Hawking has been echoed many times by renowned scientists around the world. Machines outwitting humans has been the theme of countless movies and books in popular culture. The question deserves serious thought before humans become slaves of the machines they invented. Enroll for the best artificial intelligence course on Global Tech Council.