
The rapid adoption of AI has complicated the cybersecurity landscape, with increasingly automated and sophisticated cyberattacks often overwhelming organizations. Kaspersky addresses this complexity with a philosophy of "cybersecurity designed for business," focusing on tangible results rather than technical jargon.

AI has enabled malicious actors to automate tasks such as coding, writing, and information gathering, lowering the barrier to entry for less experienced individuals. Vladislav Tushkanov, Group Manager at the Kaspersky AI Technology Research center, notes that skilled individuals work faster, while those without skills gain new capabilities, for example by using ChatGPT for programming. Malicious actors use AI to maximize attack speed and stealth.

Deepfakes, including AI-generated images, videos, and audio, are particularly dangerous. UK engineering firm Arup lost $25 million after an employee was deceived by an AI-generated video call, and fraudsters extracted $35 million from a UAE company by faking emails and audio. These attacks are more personal and targeted, playing on victims' emotions.

Beyond deepfakes, AI enhances various stages of cyberattacks, increasing their speed, efficiency, and adaptability, for instance by providing attackers with contextual advice on avoiding detection. The rise in attacks has led 72% of companies to express strong concern about the use of AI by malicious actors, and to seek equally fast and intelligent defenses. Kaspersky leverages machine learning to detect AI-driven threats.
This summary was AI-generated from a story originally published by Le Matin.