
AI’s ‘Oppenheimer Moment’: Why New Thinking on Disarmament Is Needed

Engagement with the technical community is not a “nice to have” – it is “absolutely necessary to engage this community from the beginning of design, development and use,” said the UN Institute for Disarmament Research (UNIDIR).

Speaking recently at the Global Conference on AI Security and Ethics hosted by UNIDIR in Geneva, the institute emphasised the importance of building effective guardrails as the world navigates what is often called AI’s “Oppenheimer moment” – a reference to J. Robert Oppenheimer, the American nuclear physicist best known for his role in creating the atomic bomb.

Oversight is needed so that AI applications respect human rights, international law and ethics – especially in the field of AI-guided weapons – to guarantee that these powerful technologies develop in a controlled, responsible way, the UNIDIR official said.

Dual-use tech

AI has already created a security dilemma for governments and militaries around the world.

The dual-use nature of AI technologies – which can be deployed in both civilian and military settings – means that developers risk losing touch with the realities of the battlefield, where their programming could cost lives, warned the head of public affairs at Comand AI.

These tools are still in their infancy, but they have long fuelled fears that they could be used to make life-or-death decisions in a war setting, removing the need for human decision-making and responsibility. Hence the growing calls for regulation, to ensure that mistakes that could lead to catastrophic consequences are avoided.

“We see these systems fail all the time,” said David Sully, CEO of the London-based company Advai, adding that the technologies remain “very unrobust.”

“So, making them go wrong is not as hard as people sometimes think,” he noted.

Shared responsibility

Teams at Microsoft focus on the core principles of safety, security, inclusiveness, fairness and accountability, said Michael Karimian, the company’s Director of Digital Diplomacy.

The American giant founded by Bill Gates places restrictions on real-time facial recognition technology used by law enforcement that could cause mental or physical harm, Mr. Karimian explained.

Mandatory safeguards must be put in place, and firms must collaborate to break down silos, he told the UN event in Geneva.

“Innovation isn’t something that happens within one organisation. There is a responsibility to share,” said Mr. Karimian, whose company partners with UNIDIR to ensure that AI complies with international human rights.

Oversight paradox

Part of the equation is that the technologies are evolving at such a rapid pace that countries are struggling to keep up.

“AI development is outpacing our ability to manage its many risks,” said Silyna Nur Abdullah, head of strategic planning and Special Advisor to the Secretary-General at the International Telecommunication Union (ITU).

“We need to address the AI governance paradox, recognising that regulation sometimes lags behind the technology, which makes ongoing dialogue between policy experts and technical experts essential,” said Ms. Abdullah, adding that developing countries must also get a seat at the table.

Accountability gaps

More than a decade ago, in 2013, renowned human rights expert Christof Heyns warned in a UN report on lethal autonomous robotics (LARs) that “taking humans out of the loop also risks taking humanity out of the loop.”

Today, it is no less difficult to translate legal judgments into context-dependent decisions, and it remains crucial that “life and death” calls are made by humans and not robots, insisted Peggy Hicks, a division director at the UN human rights office (OHCHR).

Bottom-line tensions

While leaders in big tech and governance broadly see eye to eye on guiding principles and safeguards, those ideals can be at odds with companies’ bottom lines.

“We are a private company – we also look for profitability,” said Mr. Valli of Comand AI.

“The reliability of the system is sometimes very hard to achieve,” he added. “But when you work in this sector, the responsibility could be huge, absolutely huge.”

Unanswered challenges

While many developers are committed to designing algorithms that are “fair, secure and robust,” in Mr. Sully’s words, there is as yet no agreed way to implement those standards – and companies may not even know precisely what they are trying to achieve.

These principles “all dictate how adoption should happen, but they don’t really explain how that should be done,” said Mr. Sully, reminding policymakers that the field is “still at a really early stage.”

Big tech and policymakers should step back and consider the bigger picture, he argued.

“What robustness means for a system is an incredibly technical, really challenging goal to define, and it is currently unanswered,” he continued.

No AI ‘Fingerprint’

Mr. Sully – who described himself as “a big advocate of regulation” of AI systems – previously worked in Vienna for the Comprehensive Nuclear-Test-Ban Treaty Organization, which monitors whether nuclear testing takes place.

But identifying AI-guided weapons, he says, is a whole new challenge, because unlike nuclear weapons – which carry forensic signatures – AI leaves none.

“There is a practical problem regarding how this gets policed at the international level,” the Advai chief said. “It’s something that nobody really wants to touch. But until it is addressed … I think it will be a big, big obstacle.”

Future-proofing

Speakers at the UNIDIR conference insisted on the need for strategic foresight, in order to understand the risks posed by the cutting-edge technologies emerging today.

For Mozilla, which trains the new generation of technologists, future developers “should be aware of what they are doing with this powerful technology and what they are building,” a company representative insisted.

Academics like Moses B. Khanyile of Stellenbosch University in South Africa believe that universities also bear a “supreme responsibility” to safeguard core ethical values.

The interests of the military – the intended beneficiaries of these technologies – and of governments as regulators must be “harmonised,” said Dr. Khanyile, director of the Defence Artificial Intelligence Research Unit at Stellenbosch University.

“They have to see AI as a tool for good, and they need to become a force for good.”

Countries weigh in

Asked what concrete steps could be taken to build trust between countries, diplomats from China, the Netherlands, Pakistan, France, Italy and South Korea also weighed in.

“We need to define where the national security line sits in the export of high-tech technologies,” said Shen Jian, Ambassador Extraordinary and Plenipotentiary (Disarmament) and Deputy Permanent Representative of the People’s Republic of China.

Avenues for future AI research and development must also draw on other emerging fields such as physics and neuroscience.

“AI is complicated, but the real world is even more complicated,” said Robert in den Bosch, Ambassador for Disarmament and Permanent Representative of the Netherlands to the Conference on Disarmament. “For that reason, I would say that it is important to look at AI in convergence with other technologies, especially cyber, quantum and space.”


2025-04-05 12:00:00
