Steal your face: 7 technologies to be afraid of

The development of new technologies in artificial intelligence and wireless networking brings many benefits to humanity, but it also poses many dangers. Information security experts warn that governments and businesses should prepare now for the cyber threats that lie ahead.
As technology advances, cybercriminals keep inventing new methods of attack to keep pace with a constantly changing landscape. Business Insider, together with information security researchers, identified the seven future technologies that cause the most concern.

Gazeta.Ru talked to experts and found out what you should really be afraid of.

Deepfake photos and videos
The technology, dubbed “deepfake”, involves creating fake photos or videos that bear a striking resemblance to the original. Deepfakes can be used to manipulate public opinion, as well as for phishing: a hacker who can pretend to be anyone has every chance of tricking victims into revealing confidential information.

More and more services use biometrics when working with a user: for example, face identification in smartphones, or face and voice identification in the banking sector’s unified biometric system, Alexander Chernykhov, an information security expert at the IT company CROC, told Gazeta.Ru.

“It would seem quite difficult to fake an ordinary person’s image and voice, but that is not entirely true. Biometric data is becoming public: almost everyone has posted photos on social media or shared a YouTube video with friends.

Deepfake programs can swap the face in any video in real time or fake a person’s voice, and just a few minutes of audio and video recordings are enough for them to work.

This becomes a platform for scammers,” the expert said.

As Tatyana Daniela, Deputy Director for Technology Development at ABBYY, noted, it is important to understand that no technology harms people by itself. Decisions and algorithms are created by people, and it is people who determine the purposes they serve.

“The first deepfakes appeared less than three years ago, but some of these clips are already almost indistinguishable from reality, at least to non-specialists. At the same time, creating and training such neural networks requires significant resources: large computing power, good-quality video and voice fragments, and time to refine the network architecture and analyze errors.

If the goals of the developers are illegal, it is doubly difficult for them to get the data they need for training.

That is why cases of fraud with fake videos are still rare. Large companies are investing heavily in technologies that automatically distinguish fakes: detecting the slightest unnatural movements of the face or hands, splicing, changes in voice tone, a person’s mood, and so on. The general principle is the same as with antiviruses: the technology develops very quickly, but for each piece of malware an ‘antidote’ is immediately created,” the expert told Gazeta.Ru.

Artificial intelligence

It is assumed that in the future, hackers will actively use artificial intelligence to outsmart cybersecurity systems.

Andrey Golovin, director of the information security department at Oberon, points out that artificial intelligence (AI) is steadily making its way into the IT industry. It also lets an attacker automate reconnaissance and probe a victim’s defenses faster and more thoroughly than they could by hand.

“One of the popular applications of AI is imitating the voice of a company executive who asks for money to be transferred to an account. The first ever theft of funds using AI took place in 2019.

In March of that year, attackers used publicly available voice-imitation tools to mislead an employee of a British company. The employee believed he was talking to the CEO and transferred $243,000 to the scammers’ account.

Obviously, such cases will become more frequent, and hackers will look for ever more sophisticated ways to use AI,” said Golovin.

Quantum computers

According to experts, traditional cryptography will become obsolete in 5-10 years as quantum computing becomes widespread. This poses a serious threat to banking transactions and digital currencies such as Bitcoin, Andrey Golovin explains. With a quantum computer, it will be easy to compute a private key from the associated public key, which would allow attackers to take possession of users’ financial assets.
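The asymmetry the expert describes can be illustrated with a toy sketch. Classical public-key schemes rest on one-way functions such as the discrete logarithm: deriving the public key from the private key is fast, while going back is infeasible at real key sizes. Shor’s algorithm on a large quantum computer would make the reverse direction fast as well, which is exactly the threat to keys and signatures described above. The parameters below are deliberately tiny and hypothetical, chosen only so the “slow” direction finishes instantly; they are nothing like a real cryptosystem.

```python
# Toy illustration of a one-way function (NOT real cryptography):
# the discrete logarithm modulo a small prime.
P = 2_147_483_647   # small prime modulus (2^31 - 1); real keys are vastly larger
G = 16807           # base for the toy scheme, chosen purely for illustration

def public_from_private(priv: int) -> int:
    """Fast direction: modular exponentiation takes O(log priv) steps."""
    return pow(G, priv, P)

def private_from_public(pub: int) -> int:
    """Slow direction: brute-force discrete log. Classically this scales
    with the key space and is infeasible at real sizes; a quantum computer
    running Shor's algorithm would solve it in polynomial time."""
    acc = 1
    for k in range(P):
        if acc == pub:
            return k
        acc = acc * G % P
    raise ValueError("no discrete log found")

priv = 123456
pub = public_from_private(priv)          # instant, even for huge exponents
assert private_from_public(pub) == priv  # only feasible because P is tiny
```

At realistic key sizes the brute-force loop would run longer than the age of the universe; collapsing that gap is what makes quantum computing a cryptographic threat.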

“Hackers will be able to quickly penetrate secure segments of computer networks and gain access to confidential information.

But not everything is so bad: in parallel with quantum computing, cryptography that resists quantum key-recovery attacks is being developed.

A number of digital currencies already use quantum-resistant algorithms, and computer networks with quantum encryption have been launched in several countries, including the United States, China and Russia,” the expert noted.

Internet of Things

The Internet of Things, a network of connected devices that exchange data over the internet, is already widely used across modern society. The technology attracts plenty of attention from hackers, who look for vulnerabilities that allow them to compromise an entire system.

On the one hand, the Internet of Things simplifies life; on the other, it gives rise to a new wave of information security threats, Alexander Chernykhov agrees.

“No one is surprised anymore by news that a ‘smart’ refrigerator is sending thousands of spam emails while its owner has no idea. New devices from countless manufacturers keep appearing, each using their own security standards and protocols that have not yet been vetted by experts and may contain dozens of vulnerabilities,” the expert believes.