https://www.washingtonpost.com/technology/2023/03/05/ai-voice-scam/
Image credit: Pixabay
Artificial Intelligence (AI) is a major driver of jobs and innovation. It is no secret that companies from Snapchat to Microsoft have pursued AI technology to add to their products, but the dark side of AI lies in its potential threat to security.
AI is a powerful tool. I have written before about the potential of AI chatbots, and these technologies are being embraced by the public; software companies including Microsoft, Google and Snapchat are following the trend. Hardware and chip companies including Broadcom and IBM are working to meet the demand for AI applications, and amid the exuberance, this technology's potential to harm people around the world must be taken seriously.
AI can pose a security threat to governments. Deepfakes have already been produced that can create convincing propaganda and false reports. AI can also threaten the economic prosperity of nations and the security of their people. Deepfakes, propaganda, cybercrime and social engineering built on AI chatbots such as ChatGPT and on video-fabrication software are existing threats that must be addressed. AI could even trigger a financial crisis: a synthetic voice or video posing as an analyst or CEO could fraudulently declare that the market is about to crash. Our national security could be put in jeopardy if leaders are mimicked by AI that puts words in their mouths. This is a serious and timely threat.
Recently, AI has been used to extort money from victims. Criminals have used deepfakes and fabricated phone calls to scam people into believing their loved ones are in danger. AI programs can fabricate videos designed to scare and manipulate targets. Tools from companies such as ElevenLabs, which clones voices, and Synthesia, which generates videos from written scripts, threaten vulnerable populations who are not aware of the power of AI. In the hands of a dedicated criminal, AI becomes a powerful tool for exploiting and robbing victims.
AI is similar to other powerful tools in use around the world. How can we ensure the security of our citizens while still allowing innovation? We need to look at the strengths of AI and develop regulations that make sense. In industry, every tool, chemical and system is governed by regulations and consensus standards that limit the harm it can cause. Companies that specialize in AI-generated content, including ElevenLabs, are facing public scrutiny, and this may push regulators to guard the public from the harm AI can pose. Regulations are forged when hazards exist and harm is realized. It has been suggested that both the threat and the appropriate regulations be evaluated by consensus bodies, government agencies and the public. A partnership between public and private interests is essential to meeting this challenge.
A Steemit Exclusive