business|bytes|genes|molecules

At the interface of science and computing

Scientific Software and Being Customer Centric

A lot of scientific software, and this is especially true in bioinformatics, is “open source” in some way or another. Whether the community quite understands the value of open source is another matter and another post, so for the sake of this one, let’s take that value as a given. Perhaps more importantly, a good chunk of the software used is developed in academia. In my mind, this raises the bar on code quality and software stewardship. Most importantly, developers of academic software need to think about their applications differently, and funding agencies need to think about how they fund software development differently.

Under the assumption that the majority of code used for scientific discovery originates in academia, the question to ask is: what responsibility does a scientific software developer have? Should they think of their potential users as customers from the beginning, or is that something that becomes important later in the process? In some open source academic projects, especially ones that have been developed from the ground up, a customer-centric approach does seem to exist, but in general much code is developed to get published, or to get something out there that solves a particular problem. Given the realities of scientific problems, you can’t assume on day one that your application is going to find use in the broader community, but it is safe to assume that for many applications that is the end goal. The reality is that you might be the only one who ever uses the code, especially if it was written to solve a specific problem; then it might be your team, then other labs and collaborators, and ultimately a wider community. This means not only that scientific software developers should take a step back and think about the potential scope of their project as it evolves, but also that funding agencies need to rethink how they fund software.

First, publishing software as papers needs to go away. Algorithms should get published, novel architectures should get published; software should only be published as a note to aid discovery. Funding agencies also need to recognize that funding new software projects for 3-5 years and expecting the developer to know the outcome at the beginning is short-sighted. Software evolves, and features and scope evolve along the way. Three years is an eternity for a software project; five, I don’t have a word for how long that is. Funders also need to recognize that the need for funding grows as a piece of software grows and is recognized by the community. In a way, that could be looked at as a return on investment: the broader the reach and impact on science, the more successful the initial funding. But you also need the concept of angel funding to get a project off the ground and see how it evolves. We also need to raise the bar. Should new proposals be funded, or should developers be encouraged to contribute to existing projects? Since there doesn’t seem to be much emphasis on the latter, we see new applications being developed rather than funding going toward contributions to existing ones.

The problem with scientific software is more cultural than anything else. As Susan Baxter tweeted:

bioinformatician still = PI mentality, not team-based or community

Software development is different: it works on different time scales and it requires a different approach. Note that I am not talking about research code, but code that’s meant to be used over a period of time, at least by multiple generations of your research group. The change has to start within the community, but it won’t get anywhere without funding agencies changing the incentives.

Repo of the Week - Sept 8, 2012

I have been on a soapbox lately around programming and bioinformatics. So I am going to try to find a repo I like every week and put it up here. They will mostly be from GitHub, but that’s not a requirement.

Today’s repo comes to you via the Faculty of Life Sciences at the University of Manchester. The repo consists of “Scripts, utilities and programs for genomic bioinformatics”, and contains scripts for a variety of genome informatics tasks.

This is the kind of repo that’s super useful. For now there seems to be one person pushing code, so hopefully more will join in. There are at least two forks and reasonable activity.

RAID Doesn’t Make You Resilient

I’ve now heard at least a couple of people managing large life science repositories talk about resiliency and durability and claim that they have durability because they use RAID. That’s a cringeworthy thing to hear. I would hope that people managing core repositories know better; to the best of my knowledge they do, but it is troubling. I was reminded of this apparent lack of understanding around managing data by a tweet from Adam Kraut, who linked to a paper on the challenges of maintaining file integrity. In general, I recommend anyone in the world of informatics building large scale storage (or even small scale storage) check out James Hamilton’s blog post covering a talk by Jeff Dean on building large distributed systems (PDF). The key point is that failure happens. Between 1% and 5% of your disks are going to fail over the course of a year, and 2-4% of your storage servers. These failures have any number of causes, each with its own failure rate. Google has published an analysis of disk failure rates (PDF), and there is further analysis on Storage Mojo.
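
To put rough numbers on that, here is a back-of-the-envelope sketch in Python; it assumes independent failures and picks a 3% annual failure rate from the range above, both simplifications of mine:

    # Back-of-the-envelope: probability of at least one disk failure in a
    # year, assuming independent failures at a fixed annual failure rate.
    # The 3% rate is an assumption chosen from the 1-5% range quoted above.

    def p_at_least_one_failure(n_disks, annual_failure_rate):
        """P(>= 1 failure in a year) across n_disks failing independently."""
        return 1.0 - (1.0 - annual_failure_rate) ** n_disks

    if __name__ == "__main__":
        afr = 0.03
        for n in (12, 100, 1000):
            print("%5d disks: P(at least one failure/year) = %.1f%%"
                  % (n, 100 * p_at_least_one_failure(n, afr)))

Real failures are also correlated (shared controllers, bad batches, heat), so the picture is worse than this simple model suggests, and RAID only covers the single-disk case within an array.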

Where am I going with this? As the size of our storage systems in informatics increases, and as we keep data around for longer, we need to take a deeper look at how we manage our data and not make naive assumptions. Think about the tradeoffs you need to make between performance, availability, and durability (and think through what durability actually means). There are simple and creative ways of getting there (e.g. keeping a copy of a disk array in a friend’s lab in a different building), and there are a number of solutions (including some from my day job), but let’s not assume that RAID = durability. In the end, managing your data is less about the hardware and more about the operational processes and software sitting on top of it.

Titus Has a Point

There seems to be a bit of a debate brewing in the bioinformatics community around code. There have been a number of posts recently, including my own. A recent entrant is a wonderful post by Titus Brown. The concern that Titus raises, and that I see in many comments and discussions, is that a lot of computational science, at least in the life sciences, is very anecdotal, suffers from a lack of computational rigor, and has an opaqueness that makes the science difficult to reproduce (or replicate, as Titus prefers). I’ll let you read Titus’ post for his reasoning and thought process. My concern is where computational science is right now. Maybe I am being too negative, but here’s what I think:

  • We are accepting mediocrity and a non-open culture. I crave a world of science full of gists and code thrown up on GitHub. Who knows how it might end up being useful, or end up fostering an interesting collaboration. But for whatever reason, we aren’t ready to do that.
  • Actually, I think I do know why. The bioinformatics community is all too aware that the quality of our code is substandard. Even today, we don’t consider programming skills and computational literacy essential requirements for biological research. So we have far too many people writing poor code, even if it is code never meant to see the light of day. My biggest concern is that this is driving shoddy science that we can’t trust. There is a difference between the skillset required of an algorithm developer and that of someone using computational techniques to analyze data, and the code bar for the latter should be a lot higher.
  • We have a cultural problem, because good hacking skills are not exactly the route to scientific success.

A recent example: I was encouraging someone to cite an application by pointing to its source, but others insisted on a paper (which was not even about that particular piece of code). That’s just wrong. We have to do better. I am getting a little tired of excuses about time and a lack of funding. Yes, funding is important, and funding agencies need to realize that we need to encourage the right skill sets. But we have to be responsible for the quality of the science and the quality of our work. Perhaps all that work hidden away on our machines is good, but right now I don’t believe it.

Note that none of this is about software engineering. There are software products, e.g. repositories, deployment infrastructure, visualization systems, that are different and have an even higher bar. I am exclusively talking about the code we use to actually do exploratory research (good frameworks will make exploration a lot more effective, but that’s another post).

Update: Greg Wilson adds to this discussion as well.

Research Code

Iddo Friedberg has an interesting post on making research software accountable. While I haven’t been in academia since I left grad school, I have stayed close to it through friends and my wife, and I am not sure I completely agree with the post.

He writes

The practices of code writing for day-to-day lab research are therefore completely unlike anything software engineers are taught.

and

Research coding is not done with the purpose of being robust, or reusable, or long-lived in development and versioning repositories.

Reading these lines, alongside other issues I’ve seen with academic code, makes me think of two things. (1) Scientific programmers are often either poor programmers or lazy programmers; a lot of the reason scientific code is not robust or maintainable is that its authors don’t know how to write robust code. (2) There seems to be an assumption that all code inside a software company is written with lots of time on hand and is user facing. In reality, a lot of it is written to generate metrics or analyze data in a “can I get that answer in the next four hours” mode.

Perhaps more than anything, what these lines brought to mind was “technical debt”. For anyone who has been around software, technical debt is a reality, and there is always a tension between speed and debt. The fact remains that debt catches up with you, and then you are faced with all kinds of issues. In the scientific world, I’ll call out some specific examples of the impact of technical debt:

  • You hack something together to get some preliminary data. You are short on time, so you hard-code some parameters, and along the way you forget that you did. Guess what: that can result in scientific errors down the line, because the parameters are wrong or because you fat-fingered an algorithm in your hurry (see the sketch after this list).
  • Your code is lying around and gets picked up by someone else. They make assumptions, the wrong ones.
  • You often end up reinventing the wheel because you don’t have quality reusable code, which also means that your research is going to take even longer.
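
To illustrate the first bullet, here is a deliberately contrived sketch in Python; the functions, field names, and cutoff are all invented for illustration, not taken from any real pipeline:

    # Hypothetical example of the hard-coded-parameter trap; the functions,
    # fields, and cutoff are invented for illustration.

    # Version 1: the quick hack. The quality cutoff is buried in the code,
    # invisible in the output, and easy to forget.
    def filter_reads_quick(reads):
        return [r for r in reads if r["quality"] >= 20]  # why 20? nobody remembers

    # Version 2: the same few minutes of work, but the parameter is explicit
    # and echoed back with the results, so the analysis is self-describing.
    def filter_reads(reads, min_quality=20):
        kept = [r for r in reads if r["quality"] >= min_quality]
        return {"min_quality": min_quality, "n_in": len(reads),
                "n_kept": len(kept), "reads": kept}

    if __name__ == "__main__":
        reads = [{"id": "r1", "quality": 35}, {"id": "r2", "quality": 12}]
        result = filter_reads(reads, min_quality=30)
        print(result["min_quality"], result["n_in"], result["n_kept"])

The second version costs a few extra minutes and makes the parameter visible both to collaborators and to your future self.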

The fact is that every field has slice-and-dice code. The better the quality of your programming, the better the slicing and dicing. The better your documentation, the more the code goes from being something one person knows to being part of the tool chest of an entire group. I wonder whether people would take such shortcuts with their lab protocols.

In the end, no amount of enforcement or procedure is going to help. There will always be a need to hack something up quickly, but scientists need to become better programmers and realize that code has an impact on the quality of the science. A few things I do think will help. Make programming more of a first class citizen; right now, it’s still thought of as this other thing. The successful groups have proper software engineers doing the hard stuff, but the majority of scientists can barely script, never mind thinking through smart ways of building pipelines or even hacking. The concept of “publishing” scientific code also needs to change: it should be less about publishing papers and more about publishing code. Just throw it up on GitHub, even if no one else is ever going to use it. If you are using a version control system, and there is no excuse not to use one, then pushing it out to GitHub or something similar is trivial.

Let’s just stop using the “we don’t have time” excuse. I don’t know many graduate students under more time pressure than an engineer or data scientist at a startup, where every minute counts and costs, and people wear ten hats.

Open Data Begets Cool

I am a huge fan of Common Crawl. For those who don’t know, Common Crawl is a non-profit whose goal is to build and maintain an open crawl of the web. Their hope is that with an open, high quality crawl available, cool things will happen, like Michael Nielsen’s how-to on crawling 250 million web pages quickly and inexpensively. What makes Common Crawl work is not just quality raw data. They also provide JSON crawl metadata in an S3 bucket, and an Amazon Machine Image to help users get up and running quickly. The image includes a copy of the Common Crawl User Library, examples, and launch scripts that show users how to analyze the Common Crawl corpus using their own Hadoop cluster or Amazon Elastic MapReduce.
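
Just to sketch how low that barrier is, here is roughly what peeking at crawl metadata in S3 from Python might look like; the bucket and prefix below are placeholders rather than Common Crawl’s actual layout, and boto3 is simply one convenient client:

    # Sketch: list a few objects from a public S3 bucket of crawl metadata.
    # BUCKET and PREFIX are placeholders, not Common Crawl's actual layout.
    import boto3
    from botocore import UNSIGNED
    from botocore.config import Config

    BUCKET = "example-crawl-bucket"   # placeholder
    PREFIX = "crawl-metadata/2012/"   # placeholder

    # Unsigned requests are enough for public data sets.
    s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

    resp = s3.list_objects_v2(Bucket=BUCKET, Prefix=PREFIX, MaxKeys=10)
    for obj in resp.get("Contents", []):
        print(obj["Key"], obj["Size"])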

It is this complete picture, data plus tools, plus the easy availability of infrastructure to use them, that makes a project like Common Crawl so compelling. When the infrastructure is in place, the friction of doing something interesting drops enough that smart people actually use the data, and interesting results become inevitable. With people like Michael and Pete Warden publishing great getting-started posts, the barrier to entry for Common Crawl is essentially the cost of running a small cluster for a few hours.

I can think of a few life science data sets that would benefit from such an approach, e.g. data sets related to disease outbreaks, expression profiles, etc.: data that can be analyzed and mashed up with other sources with minimal friction. That would be awesome.

Data Flow

Russell Jurney has a great post on the Hortonworks blog entitled Pig as Hadoop Connector …. I’ve long been a fan of data flow-style approaches, and Pig fits my mental model better than something like Hive. The post does a great job of explaining how you can move data through Hadoop, into MongoDB, and eventually turn the data into a web service (in this case via Node.js). Such a workflow is a particularly nice fit for modern bioinformatics, especially in a high scale next-gen world. MongoDB, with its document-based model and rich query syntax, is quite popular with the next-gen sequencing crowd, and I’ve started to see a lot more Hadoop, especially in commercial services that need to scale cost-effectively.

Biological data is a great fit for Mongo-style document and key-value stores. In practice, I wonder how many people are using such pipelines, where something like Hadoop aggregates a large number of “events”; in this case, an event could be the output of a single experiment or pipeline run. Essentially, you could stream the output from all your pipeline runs into one or more Hadoop clusters that do the aggregation and sorting, then feed the results into MongoDB or a similar store. From there, publishing the data as a service is a relatively simple step (see the sketch below), and you can even make it look pretty fairly quickly with something like Bootstrap.
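
Here is a minimal sketch of the tail end of that flow, loading aggregated run summaries into MongoDB with pymongo and running the kind of query a thin web service would wrap; the database, collection, and field names are hypothetical:

    # Sketch: load aggregated pipeline-run summaries into MongoDB and query
    # them. Assumes a local mongod; the database, collection, and fields are
    # hypothetical.
    from pymongo import ASCENDING, MongoClient

    client = MongoClient("mongodb://localhost:27017/")
    runs = client["pipeline_db"]["run_summaries"]

    # Documents of this shape would come out of the Hadoop aggregation step.
    runs.insert_many([
        {"run_id": "run-001", "sample": "S1", "reads": 1200000, "mapped_pct": 97.2},
        {"run_id": "run-002", "sample": "S2", "reads": 950000, "mapped_pct": 95.8},
    ])
    runs.create_index([("sample", ASCENDING)])

    # The kind of lookup a thin web service (Node.js, Flask, ...) would expose.
    for doc in runs.find({"mapped_pct": {"$gte": 96}}, {"_id": 0}):
        print(doc)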

The key message here is that we have unprecedented access to the kinds of tools that allow us to work with data flexibly at various scales, and, even better still, make results available to a broader set of users and developers via web services. Sometimes it feels like there are too many tools to keep track of and learn, and to some extent that is true (pretty much the story of my life), but it’s a fun time to be a developer.

Platforms for Citizen Science

Nice article in the NY Times on citizen science (or here). It presents a balanced view of citizen science, a topic I care about deeply.

In the end, citizen science is many things. It is a way to stimulate public interest and to collect data that would be difficult to gather without engaging the community, but perhaps most importantly, it allows the broader public to be engaged in science. Of the many people participating in data collection, perhaps a few will actually do some analysis, and an even smaller number will end up pursuing science as more than a hobby. That’s OK, and that’s how it should be.

The key in my mind is to make sure we are developing and nurturing the frameworks that enable participation. The Zooniverse is a great example of making participation easy and fun. Foldit is another model that makes participation fun and rewarding. The current reach of the web makes such platforms very viable and very powerful. Do all models and efforts need to succeed? No, and that is a difficult bar anyway. Is it OK to leave the hard science to the “experts”? To an extent that is a reasonable model, but you never know who the experts really are, and assuming they sit in some laboratory is both limiting and naive. Refusing to move forward in areas where the broader community can do the work in chunks, just because we worry about quality, will only hold science back. The key, once again, is to make sure that the underlying platforms make participation easy and also allow quality to be managed and filtered. In the biological sciences, we haven’t quite seen a project like the Zooniverse, at least not to my knowledge. Initial success has come from efforts that involve the broader scientific community and some hobbyists. Over time, hopefully, we will achieve the scale that the web enables and reach a wider set of people, not just scientists. I am pretty sure folks like Andrew Su are thinking about how to do exactly this.

The GATK License

One of the catalysts for restarting the blog was the new GATK license. GATK is a great tool for the genomics community and has historically had an open (MIT) license. However, with GATK 2.0, the license is moving to a hybrid model. Per the announcement:

The complete GATK 2.0 suite will be distributed as a binary only, without source code for the newest tools. We plan to release the source code for these tools, but its unclear the timeframe for this. The GATK engine and programming libraries will remain open-sourced under the MIT license, as they currently are for GATK 1.0. The current GATK 1.0 tool chain, now called GATK-lite, will remain open-source under the MIT license and distributed as a companion binary to the full GATK binary. GATK-lite includes the original base quality score recalibrator (BQSR), indel realigner, unified genotyper v1, and VQSR v2.

GATK 2.0 is being released under a software license that permits non-commercial research use only. Until the beta ends and the full GATK 2.0 suite is officially launched, commercial activities should use the unrestricted GATK-lite version. In the fall we intend to release the full version of GATK 2.0. The full version will be free-to-use version for non-commercial entities, just like the beta. A commercial license will be required for commercial entities. This commercial version will include commercial-grade support for installation, configuration, and documentation, as well as long-term support for each commercial release.

This is the wrong direction. Mixed licensing has been the bane of chemistry codes for years, and seeing it in the genomics world, especially for something that started with a more permissive license, is a step backward. Others have commented on the potential reasons (commercialization, concern about use by dodgy DTC genomics sites), but all of those reasons are quite weak.

So why is this a mistake? First, it shuts out those who may not be academics but want to (a) do good science and (b) contribute to good science. Suppose I were a smart developer, perhaps at a small company, or working for myself. Suddenly, not only is the code no longer available without a license, but my ability to contribute to improving the code is severely diminished. Second, it betrays a lack of understanding of what open source means. Yes, there are plenty of open core models, but GATK is not a company or a commercial service. If it plans to become one, its developers should say so more clearly and spin off a company that builds products around an open source core. This move is neither here nor there, and all it does is get in the way of doing good science and writing good software.

In the end, this sets a terrible precedent. The world of open source has plenty of good models for monetizing software. If that is the goal, it would be best to follow those models, or to focus on providing quality services, but a non-commercial-use-only model is a huge step backward.

The Return

It’s been a while. When the original bbgm went down, I thought it would take a few days to bring it back online. Days became weeks, weeks became months. It’s been over a year since I last wrote a post, and strangely enough, for a while it felt good not to think about writing. Life has been incredibly busy, especially since I moved into my current role. What little spare time I have has been spent with family and indulging hobbies old and new.

For now, I have given up any illusions of trying to resurrect the original bbgm, but loss brings new opportunities, and this blog is that opportunity. As always, I will write about things I care about, especially science, which is a smaller part of my life than it has been in years. There will be limited writing about my day job, but there’s enough to write about in the world of science, technology, and product development.

The original bbgm ran on WordPress. For a long time, I’ve wanted to move to static sites: deepaksingh.net uses Jekyll and dualnatureofmatter.net uses nanoc. This site uses Octopress, a blogging system built on top of Jekyll, and is hosted on Amazon S3. Oh, and here’s the new bbgm RSS feed.

So yes, this is a reboot of bbgm. Whether it has any legs remains to be seen.