According to a Google Cloud threat intelligence analyst, generative AI can be used by attackers, but security professionals should not be overly concerned about it. In recent virtual sessions, Google Cloud discussed the most notable cybersecurity threats of 2023 and predicted an increase in zero-day attacks in 2024, with both attackers and defenders continuing to use generative AI. However, it is unlikely that generative AI will create its own malware in 2024.

Google Cloud identified multi-faceted extortion and zero-day exploitation as the two most significant cybersecurity threats of 2023. Multi-faceted extortion includes ransomware and data theft, though a decrease in the number of ransomware attacks was observed. Stolen credentials were often the initial point of entry for ransomware attacks, with brute-force attacks and phishing being other common infection vectors. Attackers are increasingly selling stolen credentials on data leak sites.

Zero-day exploitation refers to actively exploited vulnerabilities for which no patch is available. In 2023, Google Cloud Security tracked 89 such attacks, surpassing previous records. Many zero-day threats are associated with nation-state actors and are financially motivated.

Looking ahead to 2024, Google Cloud predicts an increase in zero-day attacks by both nation-state-sponsored attackers and cybercriminals. China and Russia are particularly focused on zero-day exploitation, and China's cyber efforts are expected to concentrate on high-tech domains such as chip development. Russian state-sponsored attackers primarily target Ukraine but have also conducted campaigns outside the country, relying on "living off the land" techniques that require no malware and blend in with legitimate, native traffic. North Korean actors are known for software supply chain attacks, particularly in pursuit of cryptocurrency theft.

Credential theft and extortion are also major concerns for 2024, with data leak sites remaining a focal point. Attackers may adopt tactics to move between different cloud environments, taking advantage of the growing use of cloud and hybrid infrastructure.

Generative AI was used by both attackers and defenders in 2023. In 2024, it may be used to scale attacks, for example by supporting the call centers attackers use for ransomware negotiations. While generative AI could eventually create malware, that is not expected to happen in 2024. Cybersecurity professionals are advised to stay grounded and not be overly alarmed by generative AI, as many of its threats remain hypothetical; AI is not a revolutionary threat, and there is already a great deal of misinformation surrounding its capabilities.