Open-source software is everywhere in tech, but it comes with its own set of security challenges that differ from proprietary software. Chris Hughes, the chief security advisor at Endor Labs, a startup focused on open-source security, shared some insights with TechRepublic about the current state of security in the open-source realm and what we can expect in the next year.
Hughes noted that organizations are beginning to establish governance processes to better understand their open-source usage. They want clarity on where open-source components are deployed and what applications rely on them.
He explained that open-source software refers to code that anyone can access, use, and modify, though licenses may impose some restrictions. A Harvard Business School study estimated that, without open-source tools, businesses would face a cost of $8.8 trillion just to recreate the software they use today. Hughes added that roughly 70-90% of all applications contain open-source components, and that open-source code makes up about 90% of those codebases.
Looking ahead to 2025, Hughes outlined several predictions:
1. Open-source adoption will accelerate, but this will also invite more sophisticated attacks by malicious actors.
2. Organizations will solidify their foundational governance for open-source software.
3. More companies will adopt both open-source and commercial tools to better assess their open-source consumption.
4. Risk-informed approaches will shape how organizations use open-source software.
5. Enterprises will demand more transparency from vendors about their open-source usage, though no broad mandates are expected.
6. AI will continue to influence both application security and open-source practices. Organizations will increasingly leverage AI to analyze code and address vulnerabilities.
7. Attackers will target popular open-source AI libraries and projects, potentially launching supply chain attacks against the entire open-source AI community.
8. AI code governance will gain traction, providing organizations with greater insight into the AI models they deploy.
Hughes emphasized that organizations want to know how secure their open-source tools are, including how quickly and how thoroughly maintainers respond to vulnerabilities. He recalled the attack discovered in March 2024, in which a social-engineering campaign against an open-source project led to a backdoor being planted in the XZ Utils compression tools. The incident highlighted the vulnerabilities of a community largely sustained by unpaid volunteers.
In October 2024, the Open Source Initiative established a definition for open-source AI, emphasizing four key freedoms: use, study, modify, and share. Hughes pointed out that with platforms like Hugging Face distributing open-source AI models widely, organizations need to scrutinize what's in these models and who contributed to them. Larger companies often find it easier to obtain this transparency from vendors; smaller businesses can struggle to understand the AI models integrated into the software they buy.
In March 2024, CISA released a secure software development self-attestation form for developers supplying software to the U.S. federal government. This encourages secure development practices and might prompt similar requirements in commercial sectors. While trust still plays a role in vendor relationships, the conversation around security is becoming more prevalent.
As we approach 2025, merely performing software composition analysis isn’t enough. Hughes stressed the complexity of modern software, which leads to an overwhelming number of vulnerabilities for developers to manage. Endor Labs offers insights into open-source dependencies, including indirect ones, aiming to ease some of this burden. Understanding aspects like reachability and exploitability will be crucial for compliance and reducing pressure on development teams.
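To illustrate why indirect dependencies matter for software composition analysis, the sketch below walks a dependency graph to find every package an application actually pulls in, not just the ones it declares. This is a minimal, generic illustration, not Endor Labs' method; the package names and graph are hypothetical.

```python
from collections import deque

def transitive_dependencies(direct_deps, dep_graph):
    """Return every package reachable from the direct dependencies --
    the full set a composition-analysis tool must scan, not just
    what the manifest declares."""
    seen = set()
    queue = deque(direct_deps)
    while queue:
        pkg = queue.popleft()
        if pkg in seen:
            continue
        seen.add(pkg)
        # Each dependency may bring in further packages of its own.
        queue.extend(dep_graph.get(pkg, []))
    return seen

# Hypothetical graph: the app declares two packages, but each
# pulls in indirect dependencies the developer never chose.
graph = {
    "requests": ["urllib3", "certifi", "charset-normalizer", "idna"],
    "flask": ["werkzeug", "jinja2", "click"],
    "jinja2": ["markupsafe"],
}

deps = transitive_dependencies(["requests", "flask"], graph)
print(len(deps))  # 10 packages reachable from only 2 declared ones
```

A vulnerability in any of the ten reachable packages affects the application, which is why reachability analysis, asking whether the vulnerable code is actually called, helps teams prioritize rather than triage every finding equally.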