By Matt Germonprez, Dawn Foster, and Sean Goggins
Corporations have increased their investments in open source because of its potential to share the weight of non-differentiating technology costs with other organizations that rely on the same core technologies, and consequently to innovate more quickly and increase organizational value. In many cases, the financial leverage gained through open source engagement is substantial, visible, and measurable. However, open source engagement is also a cost each organization must assess. For organizations considering open source engagement, this means evaluating the ratio of increased value to the cost of engagement – a ratio that may very well be directly affected by AI.
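To make that ratio concrete, one rough sketch (our own illustrative notation, not a formal model drawn from any of the initiatives discussed below) is:

$$ R_{\text{engagement}} = \frac{V_{\text{leveraged development}} + V_{\text{innovation}} + V_{\text{talent}}}{C_{\text{contribution}} + C_{\text{leadership}} + C_{\text{compliance}}} $$

where the numerator collects the value an organization gains from leveraged development, faster innovation, and talent acquisition, and the denominator collects the costs of contributing, taking on leadership roles, and meeting policy obligations. The point of the sections that follow is that AI can plausibly move both the numerator and the denominator, which is why the ratio itself, and not just the costs, deserves attention.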
Open source has benefited an untold number of industries, and corporate engagement carries well-known, positive outcomes: leveraged development, distribution of software maintenance costs, improved time to market, increased innovation, and talent acquisition. However, these positive outcomes, derived from the value leverage that open source provides, now have the potential to be found elsewhere, most notably through the use of artificial intelligence built on large language models.
It is becoming increasingly clear that AI will be an open source disruptor that alters how companies think about things like the provenance of source code, as seen in the Linux Foundation’s recent release of its Generative AI Policy. The LF policy highlights key areas of concern, including contributions, copyright, and licensing. Other key efforts to address open source and AI include the OSI’s deep dive in Defining Open Source AI and the AI Verify Foundation’s focus on building ethical and trustworthy AI. These initiatives are motivated by the need to address key issues of AI within open source processes and to keep AI accessible to all. Each rightfully assumes a future that includes AI and prepares its audience for the key issues that will require attention.
AI is already emerging as a disruptive factor in the work of open source communities, some of which are struggling with the volume of low-quality, AI-generated code contributions. People often contribute to open source projects, particularly high-profile ones, to build their resumes and GitHub profiles; the contribution itself is a secondary goal in service of that primary goal. AI now gives people a way to reduce the work needed for the secondary goal while still pursuing the primary one. As a result, open source projects are seeing an increase in nonsense code contributions that create additional work for already overloaded project maintainers.
Within any company, AI has the capacity to change how engagements with open source projects are evaluated and approached. Known reasons for corporate engagement with open source projects include reducing the internal resources needed for software development and maintenance and improving product time to market. To obtain these positive outcomes, the costs of engaging with open source projects – assigning employees to contribute and to become leaders – must be offset by the benefits. Open source program offices aim to lower the costs and amplify the benefits of these engagements. But what if AI, used to increase development speed and reduce the expense of engaging with open source communities, further lowers the costs while retaining the benefits associated with developing software in the open? What if a company could still achieve cost and time savings without working in public? What if conversations that would otherwise take place in open source projects and communities could now take place as well-defined AI prompts? Should open source program offices be focusing on working with AI, in addition to working with open source projects?
The questions that need more exploration are premised on how AI may alter the cost-benefit ratios of corporate software development relative to engaging with open source projects, across three key areas:
- Community-level: Working in a Community
  - Does AI increase open source community-level noise?
  - Are AI-developed contributions distinguishable from those developed by individuals?
  - Does AI reduce the volume of corporate engagement within open source communities?
- Ecosystem-level: Working in an Ecosystem
  - Does AI reduce the need for companies to perform ecosystem-level monitoring?
  - Does AI reduce the need for companies to engage with open source communities?
- Policy-level: Addressing Licensing and Security Concerns
  - Does AI create legal exposure for communities and companies?
  - Will AI be used to mask malicious code within communities and companies?
Underlying these questions is a certainty that AI will alter the dynamics of collaboration in open source engagement, and we suggest that this new reality be addressed directly. There is a case that AI will alter cost ratios within individual companies, as well as uncertainty about how these changes will shift, or possibly erode, the critical value presently derived from open source engagement. One core challenge will be identifying, with deliberateness, corporate approaches to AI within open source that affect communities, ecosystems, and policies. To date, corporate engagement with open source has recognized that a rising tide lifts all boats. Will AI change our views of the tide?