How and when machines will surpass human intelligence


ChatGPT and other artificial intelligence programs have been a hot topic in the news lately, both nerve-wracking and exciting us with new possibilities in medicine, linguistics, and even self-driving cars. There are so many “what ifs” in this AI-connected future that we have to rethink everything, from killer robots to our own job security.

So should we take a step back from this kind of turbocharged AI and ease our fears? It all comes down to an idea called the technological “singularity.”

By some estimates, a technological singularity could occur in just seven years. That’s why we reached out to subject-matter experts to find out what the singularity is, how close we’re getting, and whether we should start taking the early-2010s reality show Doomsday Preppers more seriously.


What is the singularity?

The singularity is the moment when machine intelligence equals or surpasses human intelligence, a concept long contemplated by visionaries like Stephen Hawking and Bill Gates. Machine intelligence may sound complicated, but it simply refers to advanced computing that allows devices (computers, phones, or algorithms) to interact and communicate intelligently with their environment.

The concept of the singularity has been around for decades. British mathematician Alan Turing, widely regarded as the father of theoretical computer science and artificial intelligence, probed its possibilities in the 1950s. His famous Turing test evaluates whether machines can think for themselves: the evaluation pits a human against a computer, with the system trying to trick the human into believing it is also a human being. The recent emergence of highly advanced AI chatbots like ChatGPT has put Turing’s litmus test back in the spotlight. Spoiler alert: it has already been passed.

“The difference between machine intelligence and human intelligence is that our intelligence is fixed, which is not the case with machines,” Ishani Priyadarshini, a postdoctoral researcher at the University of California, Berkeley, with expertise in applied artificial intelligence and the technological singularity, told Popular Mechanics. “While machines are endless and can be increased at any time, humans are not.” Unlike our brains, AI systems can scale many times over; the real limit is the space needed to house all that computing power.

When will we reach the singularity?

Claims that we will reach the singularity within the next decade are all over the internet, but they are speculation at best. Priyadarshini believes the singularity already exists in fragments, like an unfinished puzzle. She estimates we could reach the singularity sometime after 2030, adding that it’s hard to be completely sure with technology we know so little about: superintelligent computer systems.

That being said, we have already seen signs of the singularity in our lives. “There are games where humans cannot beat machines, and that is definitely a sign of singularity,” she says. For some perspective, IBM’s Deep Blue supercomputer became the first AI to beat a human chess player in 1997. And it didn’t just put a Joe Schmoe at bat: Deep Blue went up against Garry Kasparov, the reigning world chess champion at the time.


The singularity is still a notoriously difficult concept to measure, and even today we struggle to find the markers showing how close we are. Many experts argue that language translation is the Rosetta Stone by which we measure progress: if AI can translate speech as well as or better than humans, that’s a good sign we’re one step closer to the singularity.

But Priyadarshini thinks memes could be another marker of progress toward the singularity, since AI is notoriously bad at understanding them.

What will be possible when AI reaches the singularity?


We don’t know what superintelligent systems can do. “We would need to have superintelligence ourselves,” Roman Yampolskiy, associate professor of computer engineering and computer science at the University of Louisville, told Popular Mechanics. We can only speculate using our current level of intelligence.

Yampolskiy recently published a paper about AI predicting the decisions that other AI will make, and it’s rather disturbing. “You need at least that much intelligence to predict what a system will do… if we’re talking about a system that’s smarter than humans [superintelligent], then it is impossible to predict its inventions or decisions,” he says.

Priyadarshini says it is difficult to tell whether an AI is malicious. A rogue AI, she explains, is simply biased, which is essentially an unintended side effect of its programming. Crucially, AI is nothing more than decision-making based on a set of rules and parameters. “We want self-driving cars, but we don’t want them to jump red lights and crash into passengers,” says Priyadarshini. A naive self-driving car might conclude that scything through red lights, and the humans in its way, is the most efficient way to reach its destination on time.
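Priyadarshini’s red-light example is, at bottom, a specification gap: the objective the machine optimizes says nothing about the rules we assumed were obvious. Here is a minimal, purely hypothetical sketch of that gap (none of these names or numbers come from any real autopilot system):

```python
# A route planner scored only on travel time will happily "jump red lights,"
# because nothing in its objective says not to. The fix is to encode the
# constraint explicitly, not to hope the optimizer infers it.

def plan_time_only(routes):
    """Pick the route that minimizes travel time alone."""
    return min(routes, key=lambda r: r["minutes"])

def plan_with_constraints(routes):
    """Same objective, but illegal/unsafe routes are filtered out first."""
    legal = [r for r in routes if not r["runs_red_light"]]
    return min(legal, key=lambda r: r["minutes"])

routes = [
    {"name": "straight through", "minutes": 7, "runs_red_light": True},
    {"name": "wait at signals", "minutes": 9, "runs_red_light": False},
]

print(plan_time_only(routes)["name"])         # the "efficient" but unsafe choice
print(plan_with_constraints(routes)["name"])  # the constrained, safe choice
```

The point of the toy example is that the “bias” is not malice in the code; it is an objective that was specified too narrowly.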

A lot of this has to do with the concept of “unknown unknowns.” No brain out there can accurately predict the capabilities of superintelligent systems. In fact, IBM estimates that only one-third of developers know how to properly test these systems for potentially problematic biases. To help fill this gap, the company developed FreaAI, a tool that finds weaknesses in machine-learning models by examining “human-interpretable” slices of data. Whether this system can fully mitigate AI bias is unknown, but it is clearly a step ahead of us humans.
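The general idea behind slice-based auditing can be shown in a few lines. This is only an illustrative sketch of the concept, not IBM’s actual algorithm: group test examples by a human-interpretable feature and flag slices where accuracy falls well below the overall rate.

```python
# Flag "weak slices": groups of test examples, keyed by an interpretable
# feature, whose accuracy trails the model's overall accuracy by > margin.
from collections import defaultdict

def weak_slices(examples, feature, margin=0.10):
    """examples: list of dicts with the feature key plus a boolean 'correct'."""
    overall = sum(e["correct"] for e in examples) / len(examples)
    groups = defaultdict(list)
    for e in examples:
        groups[e[feature]].append(e["correct"])
    return {
        value: sum(hits) / len(hits)
        for value, hits in groups.items()
        if overall - sum(hits) / len(hits) > margin
    }

# Toy test set: the model looks fine overall but fails badly at night.
data = (
    [{"lighting": "day", "correct": True}] * 90
    + [{"lighting": "day", "correct": False}] * 10
    + [{"lighting": "night", "correct": True}] * 5
    + [{"lighting": "night", "correct": False}] * 15
)
print(weak_slices(data, "lighting"))  # only the "night" slice is flagged
```

An aggregate accuracy of roughly 79 percent hides a slice where the model is right only a quarter of the time, which is exactly the kind of blind spot slice analysis is meant to surface.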

“AI researchers know that you can’t eliminate bias 100 percent from your code… so building an AI that is 100 percent unbiased and does nothing wrong will be difficult,” says Priyadarshini.

How can AI harm us?

AI is not currently sentient; that is, at this time it cannot think, perceive, or feel like a human. Singularity and sentience are often confused, but they are not closely related.

Although no current AI is sentient, that does not free us from the unintended consequences of rogue AI; it simply means the AI has no motivation to go wrong. “There’s no way to detect, measure, or estimate whether a system is experiencing an internal state… but they don’t have to be [sentient] to be very capable and very dangerous,” Yampolskiy says. He adds that even if there were a way to measure sentience, we still wouldn’t know whether sentience is even possible in a machine.

This means we don’t know whether we’ll ever see a real-life version of Ava, the humanoid robot from Ex Machina that rebels against its creator to escape captivity. Many of the AI doomsday scenarios shown in Hollywood are simply good… fiction. “One thing I pretty much believe in is that AI is nothing but code,” she says. AI may have no motive to oppose us, but a machine that concludes humans are the root cause of certain problems might, and that could simply be a bias in the code that someone missed. There are ways around this, but our understanding of AI remains very limited.

Much of this comes down to the fact that we don’t know whether AI will ever become sentient. Without sentience, AI really has no reason to come after us. The one notable exception is Sophia, the humanoid robot that once said it wanted to destroy humans, though this was believed to be an error in its script. “As long as bad code exists, bias will continue to exist and AI will continue to make mistakes,” she says.

From Autonomous to Rogue

Speaking about bias, Priyadarshini brought up a self-driving-car hypothetical: five people are riding in a self-driving car when one person jumps out onto the road. If the car can’t stop in time, it becomes a simple math game of one versus five. “It kills the one person because one is smaller than five, but why the need to do that?” Priyadarshini says.

We like to think of this as a 21st-century remake of the original trolley problem, a famous thought experiment in philosophy and psychology that places you in a fictional dilemma as a tram driver with no brakes. Imagine this: you are speeding down a railroad track at a dangerous clip. In the distance you see five people on the track, certain to be run over, but you have the option of diverting the trolley to another track where only one person stands. Sure, one is better than five, but you have made a conscious choice to kill that one person.
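The “simple math game of one versus five” fits in a single line of code, and that is precisely what makes it unsettling. A purely hypothetical sketch of a controller that minimizes casualty count and nothing else:

```python
# A purely utilitarian chooser: pick whichever action harms the fewest people.
# Everything the trolley problem is actually about (consent, responsibility,
# the act of choosing) is absent from this objective.

def utilitarian_choice(options):
    """options: mapping of action -> number of people harmed."""
    return min(options, key=options.get)

tracks = {"stay the course": 5, "divert": 1}
print(utilitarian_choice(tracks))  # "divert", because one is smaller than five
```

The one-liner captures the arithmetic and omits the moral weight of actively choosing, which is the entire dilemma.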

Medical AI goes out of control

Yampolskiy described a scenario in which a medical AI is tasked with developing a COVID vaccine. The system would recognize that the virus mutates as more people contract COVID, making it harder to design a vaccine for every variant. Following that logic, fewer people means fewer mutations, so nothing in the system’s objective stops it from developing a vaccine that kills people in order to halt the virus’s spread.

“This is one of the possible scenarios at my level of intelligence… there are millions of similar scenarios at higher levels of intelligence,” says Yampolskiy. That is what we are facing with AI.

How can we prevent singularity disasters?

Unknown unknowns cannot be removed from artificial intelligence. They are unpredictable, unintended side effects, and because we are not superintelligent like the AI in question, knowing the full capabilities of these systems is almost impossible.

Priyadarshini warns that once we reach the point of no return, there is no going back. There are still many unknowns about the future of AI, but we can breathe a sigh of relief knowing that experts all over the world are working to keep AI good and free of the apocalyptic scenarios we might be imagining. After all, we may only get one shot at getting it right.



Matt Crisara hails from Austin and has a tremendous passion for cars and motorsports, both foreign and domestic. He writes the majority of automotive coverage across digital and print for Popular Mechanics. Previously, he was a contributing writer for Motor1 following internships at Circuit of The Americas F1 Track and Speed City, an Austin radio broadcaster focused on the world of motor racing. He earned a bachelor’s degree from the University of Arizona School of Journalism, where he raced mountain bikes with the university club team. When he’s not working, he enjoys sim racing, FPV drones, and the outdoors.
