Dark Reading is part of the Informa Tech Division of Informa PLC


Application Security


Open Source Developers Still Not Interested in Secure Coding

Security and development are still two different worlds, with open source developers resistant to spending time finding and fixing vulnerabilities.

Coding new features, improving tools, and working on new ideas are the top three activities that motivate open source developers to continue coding. At the bottom of the list? Security.

In a survey of 603 free and open source software (FOSS) contributors, the Linux Foundation's Open Source Security Foundation (OpenSSF) and the Laboratory for Innovation Science at Harvard University (LISH) discovered that the average FOSS developer spent only 2.3% of their time on improving the security of their code. While the contributors expressed the desire to spend significantly more time on their top three activities, they did not feel compelled to spend additional time on security, according to the 2020 FOSS Contributor Study released this week.


Developers' opinions of security and secure coding — calling it a "soul-withering chore" and an "insufferably boring procedural hindrance" — highlight that companies that want to harden their applications against attacks face a significant gap between those desires and getting their own developers on board, says Frank Nagle, a Harvard Business School professor and contributing author to the report analyzing the survey results.

"It appears that this 'shifting left' has not fully pervaded the minds of FOSS developers," he says. "Although we did not specifically ask whether developers think security is important, they likely understand that is a concern, but believe others should deal with it."

Open source components and applications account for more than 70% of the code included in modern applications, making the security of those components of paramount concern. Yet, open source developers are more focused on working on the latest tools and implementing their own priorities, according to the 2020 FOSS Contributor Survey report.

The perception that open source components often have unresolved vulnerabilities has led to more companies implementing a variety of security checks and procedures, including more than half — 55% — requiring regular patches and updates, 49% permitting and blocking specific components, and 47% using a manual review process to allow specific components, according to the DevSecOps Practices and Open Source Management report published by software security firm Synopsys this week. 

Companies' approaches to open source software continue to be uneven, says Tim Mackey, principal security strategist for Synopsys.

"One key takeaway from this report is that greater automation is required to inventory open source usage," he says. "From there, businesses need to develop and implement processes to benefit from all the innovation occurring within open source communities."

Companies are still figuring out how to integrate security into their DevOps pipelines, according to the Synopsys survey. While a third of companies consider their approach to DevSecOps to be mature, another 40% only have limited implementations or pilots, and the remaining 27% are still researching or not planning to follow DevSecOps.

Media coverage of specific open source vulnerabilities and the general issue of open source security has prompted many companies to put more stringent controls in place and migrate to better-maintained open source projects, according to Synopsys's report. 

However, media coverage of a particular threat is not a good indicator of how dangerous a vulnerability or flawed component may be, says Synopsys's Mackey.

"What we should recognize is that media coverage will cause non-technical people to start asking questions," he says. "Those non-technical people want to ensure that their business isn't in the news for a similar event and will start to ask questions about how open source security is managed within their organization. Having a well-defined process, one which is able to quickly identify the impact of a new vulnerability, goes a long way to calming concerns."

The FOSS Contributor Survey suggests that companies should start with a focus on secure code as one of the requirements of the business. Writing simpler, well-commented code, automating tests and security checks, and using memory-safe languages can minimize coding mistakes. 
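One of those recommendations, automating security checks, can be as simple as wiring a dependency audit into the build. The sketch below is illustrative only: the advisory list is a hypothetical stand-in for a real vulnerability feed (such as the OSV database), and the package names are invented.

```python
# Illustrative sketch of an automated dependency check.
# ADVISORIES is a hypothetical stand-in for a real vulnerability feed.
ADVISORIES = {
    ("examplelib", "1.0.2"): "CVE-2020-XXXX: integer overflow",
}

def parse_requirements(text):
    """Parse 'name==version' lines into (name, version) tuples."""
    pins = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "==" not in line:
            continue
        name, _, version = line.partition("==")
        pins.append((name.lower(), version))
    return pins

def audit(text):
    """Return advisory messages for any pinned dependency with a known issue."""
    return [ADVISORIES[pin] for pin in parse_requirements(text) if pin in ADVISORIES]

findings = audit("examplelib==1.0.2\nsafe-lib==2.4.0\n")
```

Run as a CI gate (failing the build when `findings` is non-empty), a check like this surfaces vulnerable components without asking developers to spend time hunting for them.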

"As we see an increasing number of companies actively paying their employees to work on FOSS projects, these employers should incentivize their employees to both write secure code from the beginning, and also spend some time helping find and address existing security vulnerabilities," Harvard University's Nagle says.

Companies that do not perform their due diligence could find that the open source building blocks of their applications have introduced security vulnerabilities into their products. On Dec. 8, for example, network-security firm Forescout disclosed vulnerabilities in four different open source TCP network stacks installed on millions of connected devices and routers.

The Open Source Security Foundation recommended that organizations that pay employees to contribute to open source projects also fund security audits and have those employees rewrite portions or components of those libraries. Part of such a rewrite could be a switch to a memory-safe language, the FOSS Contributor Survey report said.
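The appeal of memory-safe languages can be seen in miniature: where an unchecked out-of-bounds write in C can silently corrupt adjacent memory, a memory-safe runtime turns the same mistake into a contained, catchable error. A small Python illustration (the buffer and indices are arbitrary):

```python
# In a memory-safe language, an out-of-bounds access raises an error
# instead of silently corrupting adjacent memory as it can in C.
buf = [0] * 8  # fixed-size buffer of 8 slots

def write_at(buffer, index, value):
    """Bounds-checked write: the runtime refuses out-of-range indices."""
    buffer[index] = value  # raises IndexError if index is out of range

write_at(buf, 3, 42)       # in-bounds write succeeds
try:
    write_at(buf, 12, 99)  # out of bounds: caught, not a silent overwrite
except IndexError:
    overflowed = False     # the bad write never happened
```

The same class of bug that leads to integer-overflow and memory-corruption CVEs in C code becomes a recoverable exception here, which is the property driving the rewrite recommendation.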


Reader comment (Moderator, 12/10/2020): FOSS developers don't get paid for secure coding
This doesn't surprise me, as even full-time paid commercial programmers produce code that is riddled with security vulnerabilities and insecure coding practices. Commercial programmers are much more security conscious than FOSS programmers, but still not always at the security level that would be desired in organizations with serious data privacy concerns.

I don't expect this situation to change whatsoever, so I believe that the workaround is for security conscious users & organizations to assume that FOSS software is highly insecure and should only be run on untrusted PCs in untrusted network subnets. By this I mean that a computer network should be divided into isolated & firewalled subnets that are separated into high security (trusted), medium security (production), low security (untrusted) and public (totally untrusted) zones that never co-mingle their network traffic. That way security breaches in untrusted subnets are irrelevant to the organization because no valuable private information ever exists in them – they are only for public facing insecure tasks with no privacy value.

That, actually, makes sense for those of us embracing open source – why would we need data security privacy on a computer devoted to creating FOSS & FOSH content that we'll be donating to the global commons anyway? Sure, we might take basic security precautions, but nothing beyond that is worth our time & effort. Especially if the FOSS we're using is full of unpatched security holes anyway...