Nearly Half of Security Experts View AI as a Potential Risk

A recent HackerOne survey highlights growing concerns about AI in cybersecurity. They gathered insights from 500 security experts, surveyed 2,000 community members, collected feedback from 50 customers, and analyzed anonymized platform data.

The key concerns about AI included:

– 35% worry about leaked training data.
– 33% are concerned about unauthorized usage.
– 32% fear hacking of AI models.

Nearly half the respondents, 48%, believe AI is the biggest security risk their organizations face. This underscores the pressing need for businesses to reevaluate their AI security strategies.

In response, the security research community is adapting. About 10% of security researchers now specialize in AI. Furthermore, 45% of security leaders view AI as one of their organization’s top risks, especially regarding data integrity. Jasmin Landry, a HackerOne pentester, noted, “AI is even hacking other AI models.”

The survey revealed that 51% of participants see basic security practices getting overlooked as companies rush to implement generative AI. Only 38% of HackerOne customers feel ready to fend off AI threats.

On the technical side, HackerOne has tracked a 171% increase in AI assets over the past year. The most frequently reported vulnerabilities in AI include:

– 55% related to general AI safety, like preventing harmful content generation.
– 30% were business logic errors.
– 11% involved LLM prompt injection.
– 3% were due to LLM training data poisoning.
– 3% concerned LLM sensitive information disclosure.

HackerOne stresses the importance of human expertise in maintaining safety. Chris Evans, HackerOne’s CISO, emphasized, “Even the most sophisticated automation can’t match the ingenuity of human intelligence.”

Interestingly, concerns about cross-site scripting (XSS) and misconfigurations remain high among their community members. Respondents believe penetration tests and bug bounties are the best ways to spot these issues.

A separate report from SANS Institute found that 58% of security professionals feel caught in an “arms race” with threat actors leveraging generative AI. While 71% have used AI to automate repetitive tasks, they recognize the potential for threat actors to boost their efficiency through AI as well. Phishing campaigns powered by AI are particularly worrisome for 79% of respondents.

Matt Bromiley from SANS advised security teams to find the right ways to apply AI, noting that overlooking limitations could create more work. An external review for AI implementations proved to be the most popular recommendation, with 68% of those surveyed supporting this approach to identifying security issues.

Dane Sherrets, a Senior Solutions Architect at HackerOne, pointed out that teams are now more aware of AI’s limitations than last year. He highlighted the unique human insight crucial for addressing both defensive and offensive security. Despite AI’s challenges, it still excels in tasks that don’t require extensive context.

Recent findings from the SANS 2024 AI Survey show that:

– 38% plan to adopt AI for their security strategy.
– 38.6% faced limitations using AI for threat detection.
– 40% cite legal and ethical issues as barriers to AI adoption.
– 41.8% experience employee pushback over AI decisions, often due to a lack of transparency.
– 43% already utilize AI in their security strategies.

AI is frequently deployed for anomaly detection (56.9%), malware detection (50.5%), and automated incident response (48.9%). However, 58% of respondents agreed that AI struggles with detecting new threats, primarily due to insufficient training data, and 71% reported issues with false positives.

To enhance AI security, HackerOne recommends:

– Regular testing and evaluation of AI models throughout their life cycle, from training to use.
– Researching compliance requirements relevant to AI and establishing a governance framework.

They also encourage organizations to communicate about generative AI and provide training on security and ethical concerns. HackerOne released some survey data last September, with a full report coming in November, offering a comprehensive view of the current landscape.

Unlock your business potential with our expert guidance. Get in touch now!

tr_20241213-google-android-xr.jpg

Introducing a New Age of Smart Glasses

network-security-key-featured-image-12102024-min.png

Step-by-Step Guide to Locating Your Network Security Key on Any Device

money-growth-fotolia.jpg

Unlocking AI’s Potential: Three Steps to Maximize ROI in 2025

surveillance-biometrics-identity-privacy-KUBE-adobe.jpg

UK Police Illegally Retain Millions of Custody Images

is-faxing-secure-featured-image-12092024-min.png

Is Faxing Secure? Absolutely, When You Use Proper Network Protection

tra_20241209-the-complete-2025-comptia-certification-training-super-bundle-by-idunova.jpg

Get Ready for 2025 with Our CompTIA Training Bundle for Just $50!

stateless-firewall-featured-image-12052024-min.png

5 Benefits of Implementing a Stateless Firewall (and 3 Important Limitations)