A recent HackerOne survey highlights growing concerns about AI in cybersecurity. They gathered insights from 500 security experts, surveyed 2,000 community members, collected feedback from 50 customers, and analyzed anonymized platform data.
The key concerns about AI included:
– 35% worry about leaked training data.
– 33% are concerned about unauthorized usage.
– 32% fear hacking of AI models.
Nearly half the respondents, 48%, believe AI is the biggest security risk their organizations face. This underscores the pressing need for businesses to reevaluate their AI security strategies.
In response, the security research community is adapting. About 10% of security researchers now specialize in AI. Furthermore, 45% of security leaders view AI as one of their organization’s top risks, especially regarding data integrity. Jasmin Landry, a HackerOne pentester, noted, “AI is even hacking other AI models.”
The survey revealed that 51% of participants see basic security practices getting overlooked as companies rush to implement generative AI. Only 38% of HackerOne customers feel ready to fend off AI threats.
On the technical side, HackerOne has tracked a 171% increase in AI assets over the past year. The most frequently reported vulnerabilities in AI include:
– 55% related to general AI safety, like preventing harmful content generation.
– 30% were business logic errors.
– 11% involved LLM prompt injection.
– 3% were due to LLM training data poisoning.
– 3% concerned LLM sensitive information disclosure.
HackerOne stresses the importance of human expertise in maintaining safety. Chris Evans, HackerOne’s CISO, emphasized, “Even the most sophisticated automation can’t match the ingenuity of human intelligence.”
Interestingly, concerns about cross-site scripting (XSS) and misconfigurations remain high among their community members. Respondents believe penetration tests and bug bounties are the best ways to spot these issues.
A separate report from SANS Institute found that 58% of security professionals feel caught in an “arms race” with threat actors leveraging generative AI. While 71% have used AI to automate repetitive tasks, they recognize the potential for threat actors to boost their efficiency through AI as well. Phishing campaigns powered by AI are particularly worrisome for 79% of respondents.
Matt Bromiley from SANS advised security teams to find the right ways to apply AI, noting that overlooking limitations could create more work. An external review for AI implementations proved to be the most popular recommendation, with 68% of those surveyed supporting this approach to identifying security issues.
Dane Sherrets, a Senior Solutions Architect at HackerOne, pointed out that teams are now more aware of AI’s limitations than last year. He highlighted the unique human insight crucial for addressing both defensive and offensive security. Despite AI’s challenges, it still excels in tasks that don’t require extensive context.
Recent findings from the SANS 2024 AI Survey show that:
– 38% plan to adopt AI for their security strategy.
– 38.6% faced limitations using AI for threat detection.
– 40% cite legal and ethical issues as barriers to AI adoption.
– 41.8% experience employee pushback over AI decisions, often due to a lack of transparency.
– 43% already utilize AI in their security strategies.
AI is frequently deployed for anomaly detection (56.9%), malware detection (50.5%), and automated incident response (48.9%). However, 58% of respondents agreed that AI struggles with detecting new threats, primarily due to insufficient training data, and 71% reported issues with false positives.
To enhance AI security, HackerOne recommends:
– Regular testing and evaluation of AI models throughout their life cycle, from training to use.
– Researching compliance requirements relevant to AI and establishing a governance framework.
They also encourage organizations to communicate about generative AI and provide training on security and ethical concerns. HackerOne released some survey data last September, with a full report coming in November, offering a comprehensive view of the current landscape.