Our ‘best of 2013’ nomination phase will end tomorrow (Jan 3rd). Several judges mentioned that they would be away during Christmas or New Year’s Day, so we decided to keep the nominations open through the holidays.
Note: All nominations are compiled in this wiki page.
It is time to find the best bioinformatics contributions of 2013, just as we did in 2012 (Top Bioinformatics Contributions of 2012). The original idea came to us after noticing that the yearly reviews in Science and Nature celebrated the large experimental projects, whereas bioinformatics tools like BLAST, BWA or SOAPdenovo were rarely mentioned despite their immense contribution to biology. More importantly, papers describing elegant computational algorithms were recognized only years after their publication (Pevzner’s dBG, Myers’ string graph) or never recognized at all (Ross Lippert’s 2005 papers on using the Burrows-Wheeler Transform in genomics). So, we wanted to give recognition to the major computational discoveries in biology and to bring attention to under-appreciated contributions with potential long-term benefit.
For this year’s effort, we assembled an outstanding panel of judges.
Anton Korobeynikov: Anton Korobeynikov is a member of the Algorithmic Biology Lab of St. Petersburg Academic University of the Russian Academy of Sciences and part of the Faculty of Mathematics and Mechanics at Saint Petersburg State University, Russia. He contributed to the development of very useful NGS algorithms, such as BayesHammer for error correction and SPAdes for genome assembly.
Eran Elhaik: Eran Elhaik, a researcher in evolutionary genomics, finished his postdoctoral research at Johns Hopkins and is leaving next month to become a professor at the University of Sheffield, UK.
Heng Li: Heng Li is currently a research scientist at the Broad Institute. He is the author of several outstanding bioinformatics tools, including BWA and samtools.
Istvan Albert: Istvan Albert is a professor at Penn State University. Biostar is his most popular contribution, but he wrote many other interesting papers in bioinformatics and theoretical physics.
Jared Simpson: Jared Simpson joined the Ontario Institute for Cancer Research as a Fellow this year. If we cannot keep him distracted, he will develop another efficient genome assembler.
Joanna Sulkowska: Joanna Sulkowska is a researcher working on protein folding and topological properties of biomolecules at the Center for Theoretical Biological Physics at UCSD and Instytut Fizyki Polskiej Akademii Nauk of Polish Academy of Sciences.
Nikolay Vyahhi: Nikolay Vyahhi is a Visiting Scholar in the Department of Computer Science and Engineering at University of California San Diego (UCSD). Together with Phillip Compeau, he co-founded Rosalind, a free online resource for learning algorithmic biology. Nikolay directs the M.S. Program in Bioinformatics in the Academic University of St. Petersburg, Russian Academy of Sciences and recently founded the Bioinformatics Institute in St. Petersburg as well as Stepic, a project focusing on content delivery for online education.
Rayan Chikhi: Rayan Chikhi is a post-doctoral researcher working with Paul Medvedev at Penn State University. He worked on bioinformatics algorithms and data structures during his PhD at IRISA/École Normale Supérieure of Brittany (France). He is a co-author of the Minia assembler, which won our assembler award last year.
Although each member of the panel made outstanding contributions to bioinformatics, readers may perceive the panel to be biased in many ways - nationality, country of work, age, race, skin color, religion, gender, sexual orientation (we never asked), height, adaptability to tropical weather and many other criteria that you can think of. Most importantly, the panel may not have a representative from your favorite area of bioinformatics. We expect to overcome those shortcomings in three ways -
(i) with help from you, the readers, in suggesting the best contributions of 2013:
Please feel free to suggest any contribution that you find outstanding. We are also placing an announcement in Biostar and you can post your suggestions there.
(ii) keeping the evaluation process open and transparent (please see the rules).
(iii) accepting that the final decision is merely the collective opinion of eight bioinformaticians, each of whom is subject to human limits and fallibility.
Rules, Categories and Decision Process
Our rules are flexible
Please note that we would like to make the process fun and enjoyable for all of us, not another huge time sink with deadlines, etc. So, we will try to minimize formalities and make sure we reach the stated goal in the best way. As an example, one judge mentioned that he will be away during Christmas and may not be able to access the internet. Others may have similar constraints. If that delays our decision dates by a week or two, we will be flexible.
We would like to find elegant papers and methods that are currently under-appreciated but will have major impact in the future (in the opinion of the judges). You can consider the process similar to investors trying to find promising early-stage start-ups, except that we do not have anything to invest apart from our reputation. There is some risk involved, because, five years down the road, the papers/algorithms selected by the judges may not turn out to be as promising as expected.
We may keep a similar set of categories as last year - (i) bioinformatics in general, (ii) NGS assembly and alignment algorithms, (iii) teaching tools, (iv) blogs and Twitter feeds, (v) journals - but all other suggestions are welcome.
i) Nomination from readers (and judges):
At first, we will build a large list of contributions. All readers are free to nominate as many papers as they want, provided they include a short paragraph explaining why they find the paper to be among the best. The list can include papers from the judges as well. We do not want to be biased by high-profile journals and would like to include arXiv preprints and blog posts in our consideration.
The nomination stage will go on for about three weeks. Here is the relevant Biostars thread.
ii) Preliminary screening:
In this step, each judge will pick up to 3 entries from the large list of nominations, based on why they believe the entries qualify as the ‘Best of 2013’. A judge is not allowed to pick his/her own papers. Each judge will also write a short paragraph explaining what he/she found remarkable in the papers.
After preliminary screening, we will post the shortlist on the blog along with comments from judges. At this stage, the readers are requested to -
(a) point out errors in the thinking of judges,
(b) suggest relevant alternatives to the selected papers, such as ‘if you pick the BWA paper, you should also pick the Bowtie paper’, or ‘paper X has nothing novel beyond what Waterman showed 20 years ago’.
We encourage debate on the short-listed papers, but are very unlikely to consider brand new nominations for papers at this stage.
iii) Final voting:
About 10-15 days after step (ii), the judges will hold the final voting. That will most likely take place during the first or second week of January. Voting will be done only on the short-listed set, but Rule #1 will be respected for any highly unusual entry that may have been mentioned during the discussion.
Each judge is free to score as many papers from the list as he/she wishes on a scale of 1-10. Judges will vote independently and will not see each other’s votes. We will tally the results and post the final winners (most likely the top three) on the blog. The individual votes of the judges will not be posted, so that they can express their opinions freely.
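Mechanically, the tallying step described above amounts to summing each judge’s independent scores per entry and ranking entries by total. Here is a minimal sketch in Python; the entry names and scores are made up purely for illustration and do not reflect any real votes:

```python
# Sketch of the tallying step: each judge scores any subset of
# short-listed entries on a 1-10 scale; entries are ranked by total.
# All judge votes below are hypothetical.
from collections import defaultdict

def tally(votes):
    """votes: list of dicts, one per judge, mapping entry -> score (1-10)."""
    totals = defaultdict(int)
    for judge_scores in votes:
        for entry, score in judge_scores.items():
            totals[entry] += score
    # Rank entries by descending total score.
    return sorted(totals.items(), key=lambda kv: -kv[1])

votes = [
    {"Paper A": 9, "Paper B": 7},
    {"Paper A": 8, "Paper C": 6},
    {"Paper B": 5, "Paper C": 9},
]
ranking = tally(votes)
top_three = ranking[:3]
```

Note that a judge who skips an entry simply contributes nothing to its total; one could also average over the judges who scored each entry, which is a design choice we leave open.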
Please spread the word and nominate your favorite paper/blog/twitter thread with a short description on why it is the best.