The Matrix surrounds us. Can robots harm humanity?

An armada of robots spinning out of control, a supercomputer deciding to take charge of humanity's fate: this is a stock plot of science fiction films. Countries around the world are trying to head off such a turn of events, not on screen but in the very near future.

To this end, governments and tech companies are adopting codes and bills meant to lay down safeguards in the field of artificial intelligence. These documents regulate the ethics of AI, trying to forestall the harm that smart machines could potentially do to humanity.

Secret of the Firm spoke with AI developers and cybersecurity experts about whether the Matrix – a reality ruled by out-of-control robots – could emerge in the future, and what threats are posed by modern developments in artificial intelligence.

The laws of robotics in the 21st century

Countries around the world are adopting laws and codes that govern AI. In Russia, such a document was signed at the end of October 2021. It was developed by the largest technology companies (Yandex, VK, Rostelecom and others) together with the government.

The document states that in the development of AI, human beings and their rights and freedoms must remain the highest value. The technology must not be used to harm people, property or the environment.

Representatives of business, science and government agencies will develop guidelines and identify the best and worst practices for resolving ethical issues that arise over the life cycle of artificial intelligence.

Nikita Kulikov, head of the non-profit organization PravoRobotov, told Secret of the Firm that the Russian version of the code of ethics clearly borrows from European legislation. A similar document was adopted at the EU level in 2019. Above all, Europeans were concerned with ensuring human control over robots.

The EU also urged greater care with data privacy, and did not even forget the environmental responsibility of robots – green topics are now in vogue in Europe.

According to Kulikov, the EU code of ethics is currently considered the most advanced legal act in this area. “The Europeans have a certain primacy here: the drafters of similar documents in other countries looked precisely to the EU code,” the expert explained.

The US Department of Defense followed on the Europeans' heels in 2020, developing and approving similar ethical standards – with one small caveat: they concern the use of robots in war. They comprise five principles:

Responsibility – developers are accountable for the creation and use of such weapons.
Impartiality – developers will try to reduce “unintentional robot bias”.
Controllability – the final decision must always be made by humans, not robots.
Reliability – AI systems will be tested continuously and must be completely safe for humans.
Subordination – robots must not escape human control.

China also introduced a code of ethics for robots in 2021. It focuses on human control over robotics. The document says that robots should improve people's well-being and must not be used for crime. It also stresses that robots should not intrude on privacy.

Matrix from the future

All of the adopted codes of ethics rest on the assumption that AI is capable of posing threats to humans; it is precisely to prevent those threats that the documents were drawn up.

For decades, philosophers and futurists have placed this problem at the center of their intricate theories. With each passing decade the threats from robots were depicted ever more fantastically, with robots often cast as hostile invaders capable of enslaving the world.

Today, such scenarios no longer seem entirely unrealistic. Scientists and developers have therefore focused on working out specific scenarios of possible threats from artificial intelligence and how they can be avoided. Could deepfakes, for instance, turn into a weapon of mass destruction in the hands of fraudsters?

Stanislav Ashmanov, general director of the IT company Nanosemantics, identifies three threats associated with the development of artificial intelligence.

“The first one is a violation of privacy caused by the automatic analysis of large amounts of data – transactions, correspondence, facial recognition from video, and so on. The second threat is the replacement of people in the workplace by robots. First of all this applies to mass professions – cashiers, call-center employees, drivers.

This process has already begun. And finally, the third threat that worries people is the emergence of intelligent machines capable of enslaving humanity. That seems to me more of a fantasy, although today we already see artificial intelligence controlling some types of weapons – for example, drones that destroy targets on the ground,” the expert told Secret of the Firm.
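
The first of these threats is easy to illustrate. Below is a toy sketch (the transaction log and the "three visits" rule are invented for illustration) of how even a few lines of code can turn raw payment data into a behavioral profile:

    # Toy illustration: even a trivial script turns raw transaction
    # logs into a behavioral profile. All data below is invented.
    from collections import Counter

    # Hypothetical transaction log: (merchant category, amount).
    transactions = [
        ("pharmacy", 1200), ("pharmacy", 800), ("pharmacy", 950),
        ("pet store", 2300), ("taxi", 450), ("taxi", 380),
        ("late-night store", 600), ("pharmacy", 1100),
    ]

    # Visit frequency per category alone already leaks habits.
    visits = Counter(category for category, _ in transactions)

    # Flag categories visited unusually often as "inferred traits".
    profile = [cat for cat, n in visits.most_common() if n >= 3]
    print("Inferred traits:", profile)  # frequent pharmacy visits may
                                        # hint at a health condition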

Neural networks sometimes get out of control even today. In 2019, the voice assistant Oleg from Tinkoff Bank suggested that a client cut off her fingers: the woman had written to the chatbot that the fingerprint login to the app was not working, and received this unexpected reply.
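
Failures like this are one reason production chatbots wrap their models in output filters. Here is a minimal sketch, assuming a naive blocklist (real systems use trained moderation models; the phrases and names below are invented):

    # Last-line-of-defense check a bank chatbot could run on every
    # generated reply before it reaches the user. Invented blocklist.
    BLOCKED_FRAGMENTS = ["cut off", "hurt yourself", "kill"]
    FALLBACK_REPLY = "Let me connect you with a human operator."

    def safe_reply(generated_text: str) -> str:
        """Return the model's reply only if it passes the blocklist."""
        lowered = generated_text.lower()
        if any(fragment in lowered for fragment in BLOCKED_FRAGMENTS):
            return FALLBACK_REPLY
        return generated_text

    # The reply that made headlines would be intercepted here.
    print(safe_reply("You could cut off your fingers."))    # fallback
    print(safe_reply("Try re-enrolling your fingerprint."))  # passes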

“Neural networks can also make mistakes when deciding whether to issue loans. For example, a loan application at one of the banks can receive an unjustified automatic refusal within 60 seconds, with the explanation: ‘Artificial intelligence made this decision based on big data,’” said Ashmanov.
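
The opacity Ashmanov describes is easy to reproduce. In the toy sketch below (the features, weights and threshold are all invented), the pipeline records the refusal but never the reason behind it:

    # Stand-in for an opaque scoring model trained on "big data".
    # All features, weights and thresholds here are invented.
    def credit_score(features: dict) -> float:
        weights = {"income": 0.4, "age": 0.1, "debt_ratio": -0.6}
        return sum(weights[k] * features[k] for k in weights)

    def decide(features: dict) -> str:
        if credit_score(features) < 0.5:
            # The applicant sees only this string; the inputs,
            # weights and threshold are never surfaced.
            return "Refused: decision made by AI based on big data"
        return "Approved"

    print(decide({"income": 0.9, "age": 0.5, "debt_ratio": 0.4}))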

In addition, the expert drew attention to the threat from deepfakes. “It was possible to synthesize a human voice and image before, but it was expensive and time-consuming, done to order at Hollywood studios. Now the technology has become mass-market, cheap and accessible. This is dangerous, as fraudsters can start using it to create fake videos featuring real people.

With these they can pressure a victim, try to bypass banks' security, and deceive people through social engineering,” the expert concluded.

The possibility of hacking a robot also poses a great threat, says Alexey Lukatsky, an information security expert at Cisco Systems.

“Hackers use different attack methods against artificial intelligence systems than against corporate servers. The latter can simply be knocked offline. That is possible with AI-related servers too, but more often something less standard is done with them. For example, attackers target the learning models, so that the artificial intelligence starts making wrong decisions on the same input data as before. Or, conversely, they can substitute the data or mix fake data into it – the artificial intelligence will then draw incorrect conclusions,” the expert explained.
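
The data-mixing attack Lukatsky describes is known as data poisoning. Below is a minimal sketch on synthetic data (using scikit-learn; the 30% flip rate is an arbitrary choice for illustration) of how flipping a share of training labels degrades a retrained model:

    # Data poisoning by label flipping: retraining on tampered labels
    # degrades the model on the very same held-out data.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    clean_model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

    # The attacker flips 30% of the training labels before retraining.
    rng = np.random.default_rng(0)
    poisoned = y_tr.copy()
    idx = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
    poisoned[idx] = 1 - poisoned[idx]

    poisoned_model = LogisticRegression(max_iter=1000).fit(X_tr, poisoned)

    # Same test data, same architecture -- different quality of decisions.
    print("clean accuracy:   ", clean_model.score(X_te, y_te))
    print("poisoned accuracy:", poisoned_model.score(X_te, y_te))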