The Open Source Economics That Caused Heartbleed and How to Prevent It From Happening Again

The Heartbleed bug has shone a light on the dark side of our open source infrastructure. The benefits of open source have long been known: it's free, and a large community of users continuously improves the software. It's a naturally occurring collaborative arrangement that seems to beat the corporate options out there.

The downsides have been less publicized, but they have always been there. Many large corporations have moved to open source, but there is a reason why some stick to commercial solutions. I work for a company that creates commercial data analytics software, and while I am a huge proponent of open source, I see why clients turn to us rather than alternatives like R, Python, or Gretl. If you're using an open source option, answers are almost always a Google or Stack Overflow search away. If you're using a commercial option like my company's, you can get an expert on the phone who will show you what to do (that expert would probably be me). At its core, the main service my company provides over open source options is that we assume responsibility for the software we produce.

This dynamic is why it took two years to notice Heartbleed. OpenSSL has always had an active development community, in many ways more dynamic than any commercial community can be, but ultimately no one can be held responsible when things go wrong. Security is something users won't notice unless it fails. Open source dynamics get many things right, but this is not one of them.

In my industry, the beginnings of a solution to this problem have also emerged. Companies such as Revolution Analytics and Continuum Analytics have become the commercial face of open source R and Python, respectively. The underlying software is free, but companies like these add consulting services or custom add-ons on top of it.

The dream of open source was that users would actively maintain the environment. While this has come to fruition for user-facing functionality, there are holes, and ultimately no responsibility. The evolution in open source economics described above lets us have it both ways. We get large open source communities, but also paid options for those who need them. The providers of paid options can take responsibility for the software and care about it the way commercial providers do. Large open source user bases provide the externality of well-maintained infrastructure that these consulting companies can build on. And consulting companies, worried that paying clients won't trust software riddled with security flaws and other non-user-facing bugs that volunteer communities would never notice, will work to fix them, providing an externality back to the free user community.

Much has already been written about the need to pay people to solve security issues. Grants might be feasible in the short term, but the industry arrangement I describe above came about fairly naturally in data analytics, and I don't see why something like it can't be encouraged elsewhere.
