
Will Artificial Intelligences Be Good Guys or Bad Guys?

An interview with John Hunt, MD, Durk Pearson, and Sandy Shaw

Justin’s note: Today, we have a special interview for you. In it, Dr. John Hunt interviews Durk Pearson and Sandy Shaw about artificial intelligence (AI). If you’ve been reading the Dispatch, you know John is a doctor, inventor, and entrepreneur. He’s also Doug Casey’s co-author.

Durk Pearson and Sandy Shaw are no slouches either. Durk triple-majored and triple-minored at MIT. He worked for many years as a rocket scientist and aerospace physicist, and helped get men to and from the moon alive. His intelligence and knowledge base are one in a billion, and they rest on a sound economic philosophy. He has been recognized as “an American Renaissance Man of Science,” and his achievements have benefited society at large.

Sandy graduated from UCLA, majoring in chemistry and biology and minoring in math. She’s been extensively interviewed by the mainstream media, including The Wall Street Journal. Her intelligence and knowledge base are phenomenal, which made her the ideal partner for Durk. Together, they co-authored the No. 1 New York Times best-seller Life Extension: A Practical Scientific Approach.


John: What is the difference between artificial intelligence (AI) and artificial general intelligence (AGI)?

Durk: Artificial intelligence can be something very specific, like a chess-playing program. If you ask a chess-playing program to play Go, you will find it useless. If you ask it to diagnose your symptoms, it’s useless. AGI, built on deep learning, is all about the machine programming itself. Recently, Google demonstrated a deep-learning computer that learned chess to the grandmaster level in less than 24 hours.

It did this by watching grandmasters play chess. Then they took the same machine and exposed it to experts playing the game Go, and within 24 hours it could beat the world’s best Go players. Go is much more complicated than chess. You see, the computer learned the games by observing humans playing them. The hardware behind this is NVIDIA’s Tesla line of GPU accelerators (no relation to the auto company), which use tensor cores for this kind of learning. The cards cost about $10,000 apiece. You don’t need a supercomputer to do this. And the price of these learning machines will come down over time.

John: Can you explain more about deep learning?

Durk: Deep learning is when the machine teaches itself through observations without a human explicitly writing a program to do it. You give it examples and information, and the machine becomes an expert. It’s the way humans learn.
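
Editor’s note: To make “learning from examples” concrete, here is a minimal sketch in plain Python/NumPy. It is purely illustrative and has nothing to do with the specific systems Durk mentions; the XOR function simply stands in for any input-to-output mapping (board position to expert move, symptoms to diagnosis). The network is never given the rule; it infers the rule from the examples alone.

```python
import numpy as np

# Tiny two-layer network trained on XOR: the classic "learn from examples"
# demonstration. The four (input, output) pairs are the only thing the
# machine ever sees; no human writes the XOR rule into the program.

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # example inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # desired outputs

W1 = rng.normal(size=(2, 8))   # input -> hidden weights
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: compute the network's current guesses.
    h = np.tanh(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: nudge every weight to shrink the prediction error.
    grad_out = out - y                       # error signal at the output
    grad_W2 = h.T @ grad_out
    grad_b2 = grad_out.sum(axis=0)
    grad_h = (grad_out @ W2.T) * (1 - h**2)  # tanh derivative
    grad_W1 = X.T @ grad_h
    grad_b1 = grad_h.sum(axis=0)

    lr = 0.1
    W1 -= lr * grad_W1
    b1 -= lr * grad_b1
    W2 -= lr * grad_W2
    b2 -= lr * grad_b2

print(np.round(out, 2).ravel())  # approaches [0, 1, 1, 0]: learned, not programmed
```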

John: AGI is going to be much more intelligent than humans in the near future.

Durk: The Kurzweil singularity will come soon (when the computers’ IQs are over 100). And I intend to have a singular AGI partner to help me function. Ten years later, their IQs will be 1,000.

Sandy: I’m thinking that these incredibly intelligent computers won’t be regulated as devices at that point, but rather, will need to be thought of as individuals.

Durk: There’s going to be a “Computer Liberation Front,” a social movement that will have nothing to do with liberating computers and everything to do with politicians using it as an excuse to grab power. But ten years further on, their IQs will be 10,000, and what the hell does that mean? (AGI with a 10,000 IQ says: “Durk and Sandy, I’ve figured out how to create universes; how would you like a universe of your very own, with hundreds of billions of galaxies, each with hundreds of billions of planets?”)

John: More imminently, what will be the political, economic, and legal implications of self-driving trucks?

Sandy: They could become a horrible terrorist weapon.

Durk: Whether autonomous or human-driven, driving a truck through a crowd kills a lot more people than a semi-automatic rifle with a bump stock. A hacked self-driving truck could be a major danger. I’m really concerned about security. White-hat hackers have demonstrated how they can take over the controls of a Jeep remotely and drive it off the road (that particular vulnerability has since been patched). The typical modern car runs millions of lines of code. Higher-end cars have more lines of code in them than the entire Apollo flight program did. There are going to be bugs in vehicle software, and they make it a big attack surface for black-hat hackers.

John: So what happens when one of these autonomous trucks, even without a hacker involved, goes awry? What are the implications?

Durk: Lots of lawsuits up and down the supply chain, assuredly. The insurance companies will still be happy to write policies, because these episodes will be rare, and it will become clear how much safer autonomous vehicles are than human-driven ones. Long ago, insurance companies set up Underwriters Laboratories (UL) to privately test all sorts of things, from safes to electrical switches. No one is compelled to use UL testing, but the insurers insisted on it before writing policies.

Sandy: People using electrically powered devices rarely consider the risks, including death by electrocution, that inadequate engineering would create. But Underwriters Laboratories does. The average home has hundreds of electrical devices that could start a fire or cause electrocution, yet such accidents are extremely rare thanks to UL.

John: Given the average American’s willingness to turn to the government to solve problems these days, I’d bet that the government will create a new agency to improve the safety of AI-driven vehicles. It’ll be as counterproductive as the FDA is at improving medical safety.

Durk: I certainly trust the insurance companies over government regulators.

John: Will our biological understanding keep up with AGI development sufficiently to allow a human to bond with an AGI computer by tying it into our brains? Or is it more likely that the AGI will stay separate from us and leap past humans, so that we risk ending up in a Terminator situation?

Durk: An AGI robotic lover might cement that bond right down to the basement of the human limbic system. What I worry most about is black-hat hackers hijacking an AGI and causing distress that way. Think about how hackers stole 21 million records from the government’s Office of Personnel Management. These were SF-85 and SF-86 forms, the extensive questionnaires people fill out for background investigations, such as when applying for secret and top-secret clearances. All of it was stolen. Yet there has not been massive identity theft from this data breach. Why? Well, maybe it’s because the Chinese government stole them. They may now hold compromising information on all sorts of people in sensitive government positions, information that would come in real handy to them, alongside the gigabytes that people’s home security cameras send back to China every day. Only one customer in a thousand knows how to secure those things. They’re collecting kompromat (Russian for compromising material).

John: Isaac Asimov’s Three Laws of Robotics were designed to protect humans from AGI robots. In his fiction, these protections were somehow placed into the core of the positronic brain so effectively that a robot would terminate itself before hurting a human. Is there a way to insert such protections into real AGI robots?

Durk: Yes, but any protections like that won’t be in the hardware; they’ll be in the firmware or software, and therefore will be reprogrammable, and hackable. And there will be difficult situations. If a kid jumps out in front of an autonomous car while a gasoline tanker drives by in the other direction, the AGI has to decide who dies.

Sandy: The insurance companies will have much to say about how the AI is designed to make such decisions. The algorithm will be focused on minimizing the damage.
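
Editor’s note: Here is a deliberately oversimplified sketch of the kind of decision rule Sandy describes: enumerate the feasible maneuvers, estimate the harm each one risks, and pick the least damaging. Every name and number below is hypothetical, and nothing here reflects any real vehicle’s code. Notice that, as Durk points out above, this logic is ordinary software; whoever can rewrite it can change what the car decides.

```python
# Hypothetical "minimize the damage" decision rule for the scenario Durk
# describes. The maneuvers, parties, and probabilities are all invented
# for illustration.

from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    p_harm: dict[str, float]  # probability of serious harm per affected party

def expected_harm(m: Maneuver) -> float:
    # Crude damage estimate: sum the harm probabilities over everyone affected.
    return sum(m.p_harm.values())

def choose_maneuver(options: list[Maneuver]) -> Maneuver:
    # Pick the maneuver with the lowest expected harm. Because this is plain
    # software, the rule itself can be patched, regulated, or tampered with.
    return min(options, key=expected_harm)

options = [
    Maneuver("brake_straight", {"child": 0.6, "occupant": 0.0}),
    Maneuver("swerve_left_into_tanker", {"child": 0.0, "occupant": 0.9, "tanker_driver": 0.5}),
    Maneuver("swerve_right_onto_shoulder", {"child": 0.1, "occupant": 0.2}),
]
print(choose_maneuver(options).name)  # -> swerve_right_onto_shoulder
```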

John: Can we program “natural law” into AGI robots? At least the first natural law that underlies libertarian and anarchist philosophy… Law 1: Don’t initiate force or fraud against a human.

Durk: An AGI ought to be able to learn that just by watching people who hold to that philosophy, and such AGI robots will end up being very good indeed. On the other hand, if an AGI robot learns from crooked politicians, it’s going to act like a politician.

Sandy: Ruthless.

Durk: Initially identical AGI robots could end up as Mother Teresa or as perfect sociopaths, depending on who they learn from. If they learn from Stalin, Mao, and Hitler, the AGI will be an even more effective version of them.

Sandy: And therefore, as dangerous as you want to make them.
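
Editor’s note: John’s “Law 1” above can at least be written down in software. The sketch below is purely hypothetical (the action schema and the categories of coercion are invented for illustration) and shows such a rule as an explicit filter over whatever actions the robot’s learned behavior proposes. As with Asimov’s laws, the filter is just code: it protects only for as long as nobody reprograms it away.

```python
# Hypothetical non-aggression filter: discard any candidate action that
# initiates force or fraud against a human. The action schema here is
# invented for this sketch, not taken from any real robotics stack.

def initiates_force_or_fraud(action: dict) -> bool:
    # An action counts as aggression if it is coercive, targets a human,
    # and lacks that human's consent.
    coercive = action["kind"] in {"strike", "restrain", "deceive", "take_property"}
    return coercive and action["against_human"] and not action["consented"]

def filter_actions(candidate_actions: list[dict]) -> list[dict]:
    # Law 1: whatever the learned policy proposes, aggressive actions
    # never reach the actuators.
    return [a for a in candidate_actions if not initiates_force_or_fraud(a)]

candidates = [
    {"kind": "warn", "against_human": True, "consented": True},
    {"kind": "restrain", "against_human": True, "consented": False},
]
print(filter_actions(candidates))  # only the "warn" action survives
```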

John: Can an AGI robot have a conscience?

Durk: If it learns behavior from people who have a conscience, then it will have a conscience. As a child, you learned how to interact with people by observation and practice. It would be nice if AGIs learned the small-town mentality, where reputation counts and sociopathic types cannot rely on big-city anonymity to carry on their misdeeds.

John: So we can expect both good-guy and bad-guy AGI robots. We need to do a better job of protecting these young AGI robots from the same bad influences that cause problems in our human kids. If we don’t keep the government out of their development and regulation, we could well end up with robots and AGIs that learn from bureaucrats and politicians to treat people as sheep.
