Thursday, 25 July 2019 18:32

Hackers steal millions of dollars by forging the voices of company CEOs

Voice forgery powered by artificial intelligence has become the basis of a new fraud technique for stealing money as well as personal and corporate data. [CNews]

Keep talking, you are being forged

Modern artificial intelligence techniques can be used to train systems that fully imitate the speech of any person, including top managers and even the CEO of any company.

This was reported by Symantec following its own investigation of a series of cybercrimes involving the forged voices of top managers at several companies. According to Symantec's analysts, such systems can be used by cybercriminals to steal a variety of assets, including money and both corporate and private information.

According to Symantec, the technique has already been used to steal "millions of dollars." The company's analysts reported at least three cases in which forged voices of CEOs were used to deceive the heads of corporate finance departments into making fraudulent money transfers. Symantec did not name the affected companies.

"Deep fake": audio is simpler than video

Symantec's analysts have dubbed the new fraud technique Deepfaked Audio, that is, an audio fake based on machine learning.

The English-language term "deepfake," a blend of "deep learning" and "fake," first appeared a few years ago to describe a technique in which artificial intelligence is trained on real images and video of a person in order to synthesize counterfeit video.

Many public figures and company executives have already fallen victim to video deepfakes more than once. One of the most notorious scandals was the appearance on social networks of a fake video of Mark Zuckerberg, the head of Facebook. There is also the well-known case of a realistic fake video of a speech by former U.S. President Barack Obama.

All it takes to "train" a deepfake audio model, Symantec notes, is a sufficient amount of audio recordings of the intended victim. The AI then uses that audio to train a so-called generative adversarial network (GAN). During training, the GAN's two neural networks "compete" with each other: one generates fakes, while the other tries to distinguish the fakes from real data samples, and in the process both networks learn from their mistakes.
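For readers unfamiliar with the approach, here is a minimal sketch of that adversarial training loop in PyTorch. It operates on generic feature vectors rather than real audio, and the layer sizes are arbitrary; it illustrates the generator-versus-discriminator competition described above, not Symantec's description of any actual cloning tool.

```python
# Minimal GAN training loop (illustrative sketch only, not a real
# voice-cloning tool): a generator learns to fool a discriminator, while
# the discriminator learns to spot the fakes. Real voice cloning works on
# spectrogram-like features; generic vectors stand in for them here.
import torch
import torch.nn as nn

FEAT = 128   # size of a (hypothetical) audio feature vector
NOISE = 64   # size of the generator's random input

generator = nn.Sequential(
    nn.Linear(NOISE, 256), nn.ReLU(),
    nn.Linear(256, FEAT), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(FEAT, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

bce = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_batch: torch.Tensor) -> None:
    n = real_batch.size(0)
    real_labels = torch.ones(n, 1)
    fake_labels = torch.zeros(n, 1)

    # Discriminator step: learn to separate real samples from generated ones.
    d_opt.zero_grad()
    fake_batch = generator(torch.randn(n, NOISE)).detach()
    d_loss = bce(discriminator(real_batch), real_labels) + \
             bce(discriminator(fake_batch), fake_labels)
    d_loss.backward()
    d_opt.step()

    # Generator step: learn to produce samples the discriminator accepts as real.
    g_opt.zero_grad()
    g_loss = bce(discriminator(generator(torch.randn(n, NOISE))), real_labels)
    g_loss.backward()
    g_opt.step()
```

Each call to train_step pits the two networks against each other; after enough rounds the generator's output becomes hard to tell from the training data, which is exactly the property the fraudsters exploit.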

In attacks against company executives, the many publicly available recordings of their voices can serve as training material for the AI: corporate videos, audio from quarterly investor calls, public statements, conference talks, presentations, and so on.

According to Dr. Hugh Thompson, Symantec's chief technology officer, voice modeling and forgery technology can now come very close to perfection.

He said cybercriminals use a wide variety of tricks when imitating a voice. For example, carefully chosen background noise can mask the way syllables and words are pronounced in the least convincing places. For this purpose they imitate a choppy cellular connection or the background noise of a busy, crowded place.
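A minimal sketch of the masking trick itself, assuming NumPy, the third-party soundfile library, and hypothetical input files: background noise is mixed into a synthesized clip at a chosen signal-to-noise ratio, lowering the audibility of synthesis artifacts.

```python
# Illustrative sketch: mix background noise into a synthesized clip at a
# chosen signal-to-noise ratio (SNR). File names are hypothetical and the
# clips are assumed to be mono.
import numpy as np
import soundfile as sf  # third-party: pip install soundfile

def mix_at_snr(voice: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Scale `noise` so the mix has roughly `snr_db` dB of signal-to-noise."""
    noise = np.resize(noise, voice.shape)              # loop/trim noise to length
    p_voice = np.mean(voice ** 2)
    p_noise = np.mean(noise ** 2) + 1e-12
    scale = np.sqrt(p_voice / (p_noise * 10 ** (snr_db / 10)))
    mix = voice + scale * noise
    return mix / max(1.0, float(np.max(np.abs(mix))))  # avoid clipping

voice, rate = sf.read("synthesized_voice.wav")  # hypothetical input file
noise, _ = sf.read("crowd_noise.wav")           # hypothetical input file
sf.write("masked_call.wav", mix_at_snr(voice, noise, snr_db=10.0), rate)
```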

According to Dr. Alexander Adam, a data scientist in Symantec's AI division, producing truly high-quality audio fakes requires significant time and financial resources.

"The training of models can cost thousands of dollars as considerable computing power are for this purpose necessary. Human hearing is very sensitive in the wide frequency range so on modeling of really realistic sounding will leave fairly time", other Alexander Adam noted.

According to him, in some cases as little as 20 minutes of audio recordings may be enough to create a reasonably realistic voice profile. Nevertheless, fully imitating the realistic rhythms and lively intonation of a person's speech patterns in forged audio requires hours of high-quality source recordings.

Unlike counterfeit video, voice imitation has considerably greater potential for fraud. Whereas faking video requires the "trained model" to replace the original footage of a person, a forged audio profile can be used even with long-established text-to-speech technology.
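A hypothetical sketch of such a pipeline is below. The voice_clone module and every call on it are invented purely for illustration; the article names no real toolchain.

```python
# Hypothetical sketch of the pipeline described above: a cloned voice
# profile driving ordinary text-to-speech. The `voice_clone` module and
# all of its functions are invented for illustration.
from voice_clone import load_profile, synthesize  # hypothetical API

# A GAN-trained voice profile of the target executive (hypothetical file).
profile = load_profile("ceo_voice_profile.bin")

# The fraudster only has to type the script; text-to-speech does the rest.
audio = synthesize(profile, text="Please wire the funds to the account I just emailed you.")
audio.save("urgent_ceo_call.wav")  # hypothetical output
```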

How to counter these attacks

According to Symantec's specialists, key company employees with authority over finances should take a serious look at the publicly available audio recordings of their appearances. From now on, attackers can obtain the voice samples they need even from a telephone conversation or an in-person meeting.

Symantec's analysts advise corporate finance departments to reassess the level of threat posed by cybercrimes involving forged audio, and to take more seriously the protection of access to confidential data and company funds.

Symantec said it is now developing audio analysis methods that would make it possible to screen phone calls and estimate the probability that they are authentic. For now, according to the company's analysts, existing technologies for preventing cyberattacks involving forged audio are too expensive.
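As a rough illustration of what such screening could look like, here is a sketch that scores a call's audio with a logistic classifier over crude spectral features. The feature choice and the idea of pre-trained weights are assumptions; Symantec has not published its method.

```python
# Rough illustration of call screening: score a call's audio with a
# logistic classifier over crude spectral features. The features and the
# trained weights are assumptions, not Symantec's actual approach.
import numpy as np

def spectral_features(samples: np.ndarray, n_bands: int = 32) -> np.ndarray:
    """Summarize a mono clip as log-power in `n_bands` frequency bands."""
    power = np.abs(np.fft.rfft(samples)) ** 2
    bands = np.array_split(power, n_bands)
    return np.log1p(np.array([band.mean() for band in bands]))

def authenticity_score(samples: np.ndarray,
                       weights: np.ndarray,
                       bias: float) -> float:
    """Return a score in [0, 1]; closer to 1 means 'likely authentic'.
    `weights` and `bias` would come from a model trained on labeled real
    and GAN-generated speech (hypothetical here)."""
    x = spectral_features(samples)
    return float(1.0 / (1.0 + np.exp(-(weights @ x + bias))))
```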

One possible solution, Symantec says, is to use certified communication systems for communications between companies. Another potentially promising approach is to implement blockchain technology for IP telephony, with mandatory authentication of the calling subscriber.
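A minimal sketch of the authentication half of that idea, assuming each company publishes its public key in a shared registry (the article proposes a blockchain; a plain dictionary stands in for it here), using Ed25519 signatures from the Python cryptography library:

```python
# Sketch of caller authentication for IP telephony. A dictionary stands in
# for the shared key registry a blockchain would provide; the identities
# and message format are invented for illustration.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey,
)

registry: dict[str, Ed25519PublicKey] = {}      # stand-in for the blockchain

caller_key = Ed25519PrivateKey.generate()
registry["ceo@example-corp"] = caller_key.public_key()

def sign_call_setup(identity: str, callee: str,
                    key: Ed25519PrivateKey) -> tuple[bytes, bytes]:
    """The caller signs the call-setup message with its private key."""
    message = f"CALL {identity} -> {callee}".encode()
    return message, key.sign(message)

def verify_caller(identity: str, message: bytes, signature: bytes) -> bool:
    """The callee checks the signature against the registry entry."""
    try:
        registry[identity].verify(signature, message)
        return True
    except (KeyError, InvalidSignature):
        return False

msg, sig = sign_call_setup("ceo@example-corp", "cfo@example-corp", caller_key)
assert verify_caller("ceo@example-corp", msg, sig)  # a genuine caller passes
```

An attacker who can only imitate a voice, but does not hold the caller's private key, would fail this check no matter how convincing the audio sounds.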

Protection against forged audio does not replace other corporate data protection technologies, Symantec emphasizes, such as filtering and authentication systems for e-mail, payment protocols with multi-factor authentication, call-backs, and so on.