"Appearance / Menu" section. Location - "Header home page".
Dark Mode Light Mode

Keep Up to Date with the Most Important News

By pressing the Subscribe button, you confirm that you have read and are agreeing to our Privacy Policy and Terms of Use
Follow Us
Follow Us
Buy niketn Buy niketn

Keep Up to Date with the Most Important News

By pressing the Subscribe button, you confirm that you have read and are agreeing to our Privacy Policy and Terms of Use

Startups and academics clash over whether superhuman AI is really "coming into view"


Hype is growing from the leaders of major AI companies that "strong" computer intelligence will imminently outstrip humans, but many researchers in the field see the claims as marketing spin.

The belief that human-level or better intelligence, often called "artificial general intelligence" (AGI), will emerge from current machine-learning techniques fuels hypotheses about the future ranging from machine-delivered abundance to human extinction.

"Systems that start to point to AGI are coming into view," OpenAI chief Sam Altman wrote in a blog post last month. Anthropic's Dario Amodei has said the milestone "could come as early as 2026."

Such predictions help justify the hundreds of billions of dollars being poured into computing hardware and the energy supplies to run it.

Others, though, are more skeptical.

Meta's chief AI scientist Yann LeCun told AFP last month that "we are not going to get to human-level AI by just scaling up LLMs", the large language models behind current systems such as ChatGPT or Claude.

LeCun's view appears to be backed by a majority of academics in the field.

More than three-quarters of respondents to a recent survey by the US-based Association for the Advancement of Artificial Intelligence (AAAI) agreed that "scaling up current approaches" was unlikely to produce AGI.

"Genie out of the bottle"

Some scientists believe that many of the companies' claims, which their chiefs sometimes flank with warnings about AGI's dangers to mankind, are a strategy to capture attention.

Businesses have "made these big investments, and they have to pay off," said Kristian Kersting, a leading researcher at the Technical University of Darmstadt in Germany and an AAAI Fellow honored for his achievements in the field.

"They just say, 'It's so dangerous that only I can operate it, in fact I myself am afraid, but we've already let the genie out of the bottle, so I'm going to sacrifice myself on your behalf, but then you depend on me.'"

Skepticism among academic researchers is not total, with prominent figures such as Nobel-winning physicist Geoffrey Hinton and 2018 Turing Award winner Yoshua Bengio warning about the dangers of powerful AI.

"It's a bit like Goethe's 'The Sorcerer's Apprentice', you have something you suddenly can't control any more," Kersting said, referring to the poem in which a would-be sorcerer loses control of a broom he has enchanted to do his chores.

A similar, more recent thought experiment is the "paperclip maximiser".

This imagined AI would pursue its goal of making paperclips so single-mindedly that it would turn the Earth, and ultimately all matter in the universe, into paperclips or paperclip-making machines, having at an early stage got rid of the human beings who might hinder its progress by switching it off.

While not "evil" as such, the maximiser would be fatally lacking in what thinkers in the field call "alignment" with human goals and values.

Kersting said he "can understand" such fears, while suggesting that "human intelligence, its diversity and quality, is so outstanding that it will take a long time, if ever" for computers to match it.

He is far more concerned with nearer-term harms from already-existing AI, such as discrimination in cases where it interacts with humans.

"The biggest thing"

The seemingly stark divide in outlook between academics and industry leaders may simply reflect people's attitudes as they pick a career path, suggested Sean O hEigeartaigh, director of the AI: Futures and Responsibility programme at the University of Cambridge.

"If you are very optimistic about how powerful these techniques are, you're probably more likely to go and work at one of the companies putting a lot of resources into trying to make it happen," he said.

Even if Altman and Amodei prove "quite optimistic" about rapid timescales and AGI arrives much later, "we should be thinking about this and taking it seriously, because it would be the biggest thing that would ever happen," O hEigeartaigh added.

"If it were anything else … the chance that aliens would arrive by 2030, or that there'd be another giant pandemic or something, we'd put some time into planning for it."

The challenge may lie in communicating these ideas to politicians and the public.

Talk of super-AI "instantly creates this sort of immune reaction … it sounds like science fiction," O hEigeartaigh said.

This story was originally featured on Fortune.com.


2025-03-28 05:00:00
Tom Barfield, AFP
