I guess the takeaway from this is: if you're a public figure or otherwise in a position where samples of your voice could easily be obtained (vlogger, frequent town hall participant, PR rep, etc.), you could be a victim of voice spoofing. In most cases, there isn't much you can do to protect against it.
Computer scientists at the University of Waterloo have discovered a method of attack that can successfully bypass voice authentication security systems with up to a 99% success rate after only six tries.
Voice authentication—which allows companies to verify the identity of their clients via a supposedly unique "voiceprint"—has increasingly been used in remote banking, call centers and other security-critical scenarios.
"When enrolling in voice authentication, you are asked to repeat a certain phrase in your own voice. The system then extracts a unique vocal signature (voiceprint) from this provided phrase and stores it on a server," said Andre Kassis, a Computer Security and Privacy Ph.D. candidate and the lead author of a study detailing the research.
"For future authentication attempts, you are asked to repeat a different phrase and the features extracted from it are compared to the voiceprint you have saved in the system to determine whether access should be granted."
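The enrollment and verification flow described above can be sketched as follows. This is a minimal toy illustration, not the actual system studied: real voice authentication uses a trained speaker-embedding model, whereas the `extract_voiceprint` function here stands in with a few simple signal statistics so the flow is runnable end to end. The cosine-similarity threshold is likewise an assumed value.

```python
import numpy as np

def extract_voiceprint(audio: np.ndarray) -> np.ndarray:
    """Reduce an audio signal to a unit-length feature vector.

    Stand-in for a real speaker-embedding model: here we use toy
    features (amplitude and spectral statistics) purely for illustration.
    """
    spectrum = np.abs(np.fft.rfft(audio))
    feats = np.array([audio.mean(), audio.std(), spectrum.mean(), spectrum.std()])
    return feats / (np.linalg.norm(feats) + 1e-9)

def enroll(audio: np.ndarray) -> np.ndarray:
    """Extract the voiceprint from the enrollment phrase; a real system
    would store this vector server-side."""
    return extract_voiceprint(audio)

def verify(audio: np.ndarray, voiceprint: np.ndarray, threshold: float = 0.9) -> bool:
    """Compare features from a newly spoken phrase against the stored
    voiceprint and grant access if they are similar enough."""
    candidate = extract_voiceprint(audio)
    # Dot product of unit vectors = cosine similarity.
    similarity = float(np.dot(candidate, voiceprint))
    return similarity >= threshold

# Example: enroll on one recording, then verify a later attempt.
rng = np.random.default_rng(0)
enrollment_phrase = rng.normal(size=16000)   # placeholder for 1 s of audio
stored_print = enroll(enrollment_phrase)
granted = verify(enrollment_phrase, stored_print)
```

The key point for the attack described in this article is that access hinges entirely on this feature comparison: any audio whose extracted features land close enough to the stored voiceprint is accepted, regardless of how it was produced.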
After the concept of voiceprints was introduced, malicious actors quickly realized they could use machine learning-enabled "deepfake" software to generate convincing copies of a victim's voice using as little as five minutes of recorded audio.