AI has the capacity to become smarter than humans


While some experts say artificial intelligence (AI) has the capacity to become smarter than humans, others argue that humans still have the upper hand on some fronts. For example, humans make up for their relatively poor memory and slow processing speed with common sense and logic: they can quickly and easily learn how the world works, and use this knowledge to predict the likelihood of events. AI still struggles with these qualities, although researchers are working continuously to close the gap.

AI is advancing at a pace that is already generating huge interest, and even concern, and many people within the sector agree it will soon be able to compete with human intelligence. This is the point many fear: that AI may achieve human-like intelligence and render humans obsolete in the process.

Geoffrey Hinton, a leading AI scientist and winner of the 2018 Turing Award, now describes artificial intelligence as a new form of intelligence: one that poses unique risks and will therefore require unique solutions.

Hinton’s argument is nuanced. While he does think AI has the capacity to become smarter than humans, he also proposes it should be thought of as an altogether different form of intelligence from our own.

Although experts have been raising red flags for months, Hinton’s decision to voice his concerns is significant.

Dubbed the “godfather of AI”, he has helped pioneer many of the methods underlying the modern AI systems we see today. His early work on neural networks led to him being one of three individuals awarded the 2018 Turing Award. And one of his students, Ilya Sutskever, went on to become co-founder of OpenAI, the organization behind ChatGPT.

When Hinton speaks, the AI world listens. And if we’re to seriously consider his framing of AI as an intelligent non-human entity, one could argue we’ve been thinking about it all wrong.

On one hand, large language model-based tools such as ChatGPT produce text that’s very similar to what humans write. ChatGPT even makes stuff up, or “hallucinates”, which Hinton points out is something humans do as well. But we risk being reductive when we consider such similarities a basis for comparing AI intelligence with human intelligence.

We can find a useful analogy in the invention of artificial flight. For thousands of years, humans tried to fly by imitating birds: flapping their arms with some contraption mimicking feathers. This didn’t work. Eventually, we realized that fixed wings create lift through a different principle, and this insight heralded the invention of flight.

Planes are no better or worse than birds; they are different. They do different things and face different risks.

AI (and computation, for that matter) is a similar story. Large language models such as GPT-3 are comparable to human intelligence in many ways, but work differently. ChatGPT crunches vast swathes of text to predict the next word in a sentence. Humans take a different approach to forming sentences. Both are impressive.
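
To make the “predict the next word” idea concrete, here is a minimal sketch of next-token prediction using the openly available GPT-2 model via the Hugging Face transformers library. This is an illustration of the general technique, not how ChatGPT itself is built or served; the prompt, the choice of GPT-2, and the top-5 display are assumptions made for the example.

```python
# A minimal sketch of next-word prediction with a small GPT-style model.
# Assumes the Hugging Face `transformers` and `torch` packages are installed;
# GPT-2 stands in here for far larger models such as GPT-3.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The quick brown fox jumps over the"  # example prompt (an assumption)
input_ids = tokenizer.encode(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(input_ids).logits  # shape: (1, sequence_length, vocab_size)

# The logits at the final position score every token in the vocabulary
# as a candidate continuation; the top entries are the model's best guesses.
next_token_logits = logits[0, -1]
top = torch.topk(next_token_logits, k=5)
for score, token_id in zip(top.values, top.indices):
    print(repr(tokenizer.decode([int(token_id)])), float(score))
```

Picking one of these candidates, appending it to the prompt, and repeating the process is, at heart, how such models produce whole passages of text.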
