It is time to find the best bioinformatics contributions of 2013, just like we did in 2012 (Top Bioinformatics Contributions of 2012). The original idea came to us after noticing that the yearly reviews in Science and Nature celebrated the large experimental projects, whereas bioinformatics tools like BLAST, BWA or SOAPdenovo rarely got mentioned despite their immense contribution to biology. More importantly, papers discussing elegant computational algorithms got recognized years after their publication (Pevzner’s dBG, Myers’ string graph) or never got recognized (Ross Lippert’s 2005 papers on using the Burrows-Wheeler Transform in genomics). So, we wanted to give recognition to the major computational discoveries in biology and try to bring attention to under-appreciated contributions with potential long-term benefit.
For this year’s effort, we assembled an outstanding panel of judges.
Orli Bahcall (@obahcall), who is a Senior Editor of Nature Genetics, posted the following chart on the success of GWAS. Nature Genetics is a ‘high-impact’ vanity journal, which should be renamed Nature GWAS given its recent focus.
Joe Terwilliger, a colorful person (more on that later), wrote the following text in 2008 on his now defunct blog. We are yet to get his permission. So, if the text disappears, please check here.
The Rise and Fall of Human Genetics and the Common Variant – Common Disease Hypothesis
By Joe Terwilliger
There is an enormity of positive press coverage for the Human Genome Project and its successor, the HapMap Project, even though within the field the initial euphoric party when the first results came out has already done a full 180 to be replaced by the hangover that inevitably follows such excesses.
For those of you not familiar with the history of this field and the controversies about its prognosis which were present from the outset, I refer you to a review paper I and a colleague wrote back in 2000 at the height of the controversy – Nature Genetics 26, 151–157. The basic gist of the argument put forward for the HapMap project was the so-called common variant/common disease hypothesis (CV/CD) which proposed that “most of the genetic risk for common, complex diseases is due to disease loci where there is one common variant (or a small number of them)” [Hum Molec Genet 11:2417-23]. Under those circumstances it was widely argued that, using the technologies being developed for the HapMap project, one would be able to identify these genes using “genome-wide association studies” (GWAS), basically by scoring the genotype for each individual in a cross sectional study for each of 500,000 to 1,000,000 individual marker loci – the argument being that if common variants explained a large fraction of the attributable risk for a given disease, one could identify them by comparing allele frequencies at nearby common variants in affected vs unaffected individuals. This point was contested by researchers only with regard to how many markers you might have to study for this to work if that model of the true state of nature applied. Many overly optimistic scientists initially proposed 30,000 such loci would be sufficient, and when Kruglyak suggested it might take 500,000 such markers people attacked his models, yet today the current technological platforms use 1,000,000 and more markers, with products in the pipelines to increase this even more, because it quickly became clear that the earlier models of regular and predictable levels of linkage disequilibrium were not realistic, something that should have been clear from even the most basic understanding of population genetics, or even empirical data from lower organisms.
Today such studies are widespread, having been conducted for virtually every disease under the sun, and yet the number of common variants with appreciable attributable fractions that have been identified is minuscule. Scientists have trumpeted such results as have been found for Crohn’s disease, in which 32 genes were detected using panels of thousands of individuals genotyped at hundreds of thousands of markers – this sounds great until you start looking at the fine print, in which it is pointed out that all of these loci put together explain less than 10% of the attributable risk of disease, and for various well-known statistical reasons, this is a gross overestimate of the actual percentage of the variance explained. Most of these loci individually explain far less than half a percent of the risk, meaning that while this may be biologically interesting, it has no impact at all on public health as most of the risk remains unexplained. This is completely opposite to the CV/CD theory proposed as defined above. In fact, this is about the best case for any complex trait studied; in virtually every example dataset I have personally looked at, there is absolutely nothing discovered at all.
At the beginning of the euphoria for such association studies, the example “poster child” used to justify the proposal was the relationship between variation at the ApoE gene and risk of Alzheimer disease. In an impressively gutsy paper recently, a GWAS study was performed in Alzheimer disease and published as an important result, with a title that sent me rolling on the floor in tears laughing: “A high-density whole-genome association study reveals that APOE is the major susceptibility gene for sporadic late-onset Alzheimer’s disease” [J Clin Psychiatry. 2007 Apr;68(4):613-8] – in an amazingly negative study they did not even have the expected number of false positive findings – just ApoE and absolutely nothing else… And the authors went on to describe how important this result was and claimed this means they need more money to do bigger studies to find the rest of the genes. Has anyone ever heard of stopping rules, that maybe there aren’t any common variants of high attributable fraction??? This was a claim that Ken Weiss and I put forward many times over the past 15 years, and Ken has been making this point for a decade before that even, in his book, “Genetic variation and human disease”, which anyone working in this field should read if they are not familiar with the basic evolutionary theory and empirical data which show why no one should ever have expected the CV/CD hypothesis to hold…
In many other fields, the studies that have been done at enormous expense have found absolutely nothing, and in what Ken Weiss calls a form of Western Zen (in which no means yes), the failure of one’s research to find anything means they should get more money to do bigger studies, since obviously there are things to find but they did not have big enough studies with enough patients or enough markers – it could not possibly be that their hypotheses are wrong, and should be rejected… It is a truly bizarre world where failure is rewarded with more money – but when it comes to promising upper-middle-aged men (i.e. Congress) that they might not die if they fund our projects, they are happy to invest in things that have pretty much now been proven not to work…
Meanwhile, in a truly bizarre propaganda piece, Francis Collins, in a parting sycophantic commentary (J Clin Invest. 2008 May;118(5):1590-605), claimed that the controversy about the CV/CD hypothesis was “… ultimately resolved by the remarkable success of the genetic association studies enabled by the HapMap project.” He went on to list a massive table of “successful” studies, including loci for such traits as bipolar disorder, Parkinson disease and schizophrenia, and of course the laughable success of ApoE and Alzheimer disease. To be objective about these claims, let me quote from what researchers studying those diseases had to say.
Parkinson disease: “Taken together, studies appear to provide substantial evidence that none of the SNPs originally featured as PD loci (sic from GWAS studies) are convincingly replicated and that all may be false positives…it is worth examining the implications for GWAS in general.” Am J Hum Genet 78:1081-82
Schizophrenia: “…data do not provide evidence for involvement of any genomic region with schizophrenia detectable with moderate [sic 1500 people!] sample size” Mol Psych 13:570-84
Bipolar AND Schizophrenia: “There has been great anticipation in the world of psychiatric research over the past year, with the community awaiting the results of a number of GWAS’s… Similar pictures emerged for both disorders – no strong replications across studies, no candidates with strong effect on disease risk, and no clear replications of genes implicated by candidate gene studies.” – Report of the World Congress of Psychiatric Genetics.
Ischaemic stroke: “We produced more than 200 million genotypes…Preliminary analysis of these data did not reveal any single locus conferring a large effect on risk for ischaemic stroke.” Lancet Neurol. 2007 May;6(5):383-4.
And the list goes on and on of traits for which nothing was found, with the authors concluding they need more money for bigger studies with more markers. It is really scary that people are never willing to let go of hypotheses that did not pan out. Clearly CV/CD is not a reasonable model for complex traits. Even the diseases where they claim enormous success are not fitting with the model – they get very small p-values for associations that confer relative risks of 1.03 or so – not “the majority of the risk” as the CV/CD hypothesis proposed.
One must recall that in the initial paper proposing GWAS by Risch and Merikangas (Science 1996 Sep 13;273(5281):1516-7) – a paper which, incidentally, pointed out that one always has more power for such studies when collecting families rather than unrelated individuals – the authors stated that “despite the small magnitude of such (sic: common variants in) genes, the magnitude of their attributable risk (the proportion of people affected due to them) may be large because they are quite frequent in the population (sic: meaning >>10% in their models), making them of public health significance.” The obvious corollary of this is that if they are not quite frequent, they do NOT have a high attributable fraction and are therefore NOT of public health significance.
And yet, you still have scientists claiming that the results of these studies will lead to a scenario in which “we will say to you, ‘suppose you have a 65% chance of getting prostate cancer when you’re 65. If you start taking these pills when you’re 45, that percent will change to 2’”. Amazing claims when the empirical evidence is clear that the majority of the risk of the majority of complex diseases is not explained by anything common across ethnicities, or common in populations… (Leroy Hood, quoted in the Seattle Post-Intelligencer). Francis Collins recently claimed that by 2020, “new gene-based designer drugs will be developed for … Alzheimer disease, schizophrenia and many other conditions”, and by 2010, “predictive genetic tests will be available for as many as a dozen common conditions”. This does not jibe with the empirical evidence… In breast cancer, for example, researchers claimed that knowledge of the BRCA1 and BRCA2 genes (which confer enormously high risk of breast cancer to carriers) was uninteresting as it had such a small attributable fraction in the population. Of course now they have performed GWAS studies and examined tens of thousands of individuals and have identified several additional loci which put together have a much smaller attributable fraction than BRCA1 and BRCA2, yet they claim this proves how important GWAS is. Interesting how the arguments change to fit the data, and everything is made to sound as if it were consistent with the theory.
I suggest that people go back and read “How many diseases does it take to map a gene with SNPs?” (2000) 26, 151 – 157. There are virtually no arguments we made in that controversial commentary 8 years ago which we could not make even stronger today, as the empirical data which has come up since then basically supports our theory almost perfectly, and refutes conclusively the CV/CD hypothesis, despite Francis Collins’ rather odd claims to the contrary…
In the end, these projects will likely continue to be funded for another 5 or 10 years before people start realizing the boy has been crying wolf for a damned long time… This is a real problem for science in America, however, as NIH is spending big money on these rather non-scientific technologically-driven hypothesis-free projects at the expense of investigator-initiated hypothesis-driven science. Even more tragically, training grants are enormously plentiful, meaning that we are training an enormous number of students and postdocs in a field for which there will never be job opportunities for them, even if things are successful. Hypothesis-free science should never be allowed to result in Ph.D. degrees if one believes that science is about questioning what truth is and asking questions about nature, while engineering is about how to accomplish a definable task (like sequencing the genome quickly and cheaply). The mythological “financial crisis” at NIH is really more a function of the enormous amounts of money going into projects that are predetermined to be funded by political appointees and government bureaucrats rather than the marketplace of ideas through investigator-initiated proposals. Enormous amounts of government funding into small numbers of projects is a bad idea – one which began with Eric Lander’s group at MIT proposing to build large factories for the sequencing of the genome rather than spreading it across sites, with the goal of getting it done faster (an engineering goal) instead of getting more sites involved so that perhaps better scientific research could have come along the way. This has led to a scenario years later in which the factories now want to do science and not just engineering, which is totally contrary to their raison d’être, and leads to further concentrations of funding in small numbers of hands when science is better served, perhaps, by a larger number of groups receiving a smaller amount of money so that more brains are working in different directions thinking of novel and innovative ideas not reliant on pure throughput. Human genetics has transformed from a field with low funding, driven by creative thinking, into a field driven by big money and sheep following whatever shepherd du jour is telling them they should do (i.e. innovative means doing what the current trend is rather than something truly original and creative). This is bad for science, and also is bad science. GWAS has been successful technologically, and it has resoundingly rejected the CV/CD hypothesis through empirical data. If we accept this and move on, we can put the HapMap and HGP where they belong, in the same scientific fate as the Supercollider, and let us get back to thinking instead of throwing money at problems that are fundamentally biological and not technological!
(most notably in terms of the big money NIH is sending into these non-scientific technologically-driven hypothesis-free studies, rather than investigator initiated hypothesis-driven science – one of the main causes of the “funding crisis” at NIH where a tiny portion of new grants are funded – get rid of the big science that is not working – like the supercollider! – and there is no funding crisis)
Joe Terwilliger’s description of the success of GWAS seems more appropriate than Orli Bahcall’s, but you may say – his blog is defunct, and Nature Genetics is among the most cited journals in the world. There goes your ‘impact factor’. Irony of all ironies – Joe Terwilliger and Ken Weiss warned about the same problems in the pages of none other than Nature Genetics, as early as 2000 !!!
When Dr. Terwilliger is not working on genomes, he plays the tuba, goes to North Korea as the Korean translator of Dennis Rodman or stands on his balcony dressed as Abe Lincoln every February.
Frivolous pursuit? He explains -
Is it possible to imagine a more frivolous pursuit than GWAS? Well, other than sequencing, I mean. At least the tuba playing, Abe Lincoln impersonating, Korean translating for Rodman, competitive eating, Manhattan walking and all the rest are not being done with false promises at the expense of taxpayers who contribute to the economy by tuba playing, Abe Lincoln impersonating, Korean translating for Rodman, competitive eating, etc… To my mind frivolous is taking money from society under false promises, returning nothing of value compared to what was advertised, and then asking for more money because of said failure… Which pursuit is more frivolous, I ask you……
In the ‘beginning’, universities had strict rules to make sure their funding sources could not dictate faculty decisions. That was the origin of the tenure system.
Then came a series of good ideas.
The US government decided to impose a ‘victory tax’ on people. It was a good idea, because the Nazis were very bad, the tax was only temporary (‘one time’) and, to sweeten the deal, the government promised to refund the money after the war. Wow, who does not want to pay for ‘victory’???!!
The government repealed the victory tax in due time, but imposed another tax levy of an equivalent amount. This time the tax was permanent. Still, it appeared like a good idea, because a permanent military needed permanent funding.
Dropping atom bombs on Japan ended WWII and it seemed like a good idea to use part of military money to pay physicists for nuclear research. After all, without their help, the war would not have ended.
Physicists did not want to live like prisoners in military barracks of Los Alamos, and went back to their universities. It seemed like a good idea to fund their nuclear research at the universities.
If nuclear physicists were good scientists paid by the government, why not other physicists, chemists, engineers and many others? It seemed like a good idea to have government-funded research grants for other scientists. NSF was born. (Note the catch word below – ‘national defense’).
The NSF was established by the National Science Foundation Act of 1950. Its stated mission is “To promote the progress of science; to advance the national health, prosperity, and welfare; and to secure the national defense.”
The Russians sent a person into space, starting the ‘space race’. It seemed like a good idea to join the race, because what if those ‘bad boys’ won it due to lack of competition? NASA was born.
By the time the space race (part of the Cold War) ended, the government had figured out that scientists got an adrenaline rush from ‘mini-war-like’ situations and that people were more willing to pay for wars. Declaring ‘war on cancer’ seemed like a good idea. After all, who did not want cancer to be eradicated by 1975? The National Cancer Institute was born. (In the following speech, note two things – (i) the reference to a world war to motivate the new ‘conquest’, (ii) government was only 16% of the economy.)
By this time, government became the biggest game in academic town, and many scientists wanted to get money from the new Santa. However, this new Santa was so centralized that researchers could not simply knock on its door and give their wish list. Centralized procedures had to be created.
One of those centralized procedures was to give grants according to the performance of the researchers. Scientists wrote papers. So, it seemed like a good idea to measure that performance based on the number of papers.
That created a problem, because the high-quality papers of some scientists got as many brownie points as someone else’s low-quality papers. The centralized body did not understand ‘quality’ and needed more concrete procedures. Using the impact factor seemed like a good idea.
Impact factors are calculated yearly starting from 1975 for those journals that are indexed in the Journal Citation Reports.
The inventor of the ‘impact factor’ made an unexpected high-impact discovery. He found that Science and Nature were at the core of all of hard science, with the core defined by his measure called impact factor.
The creation of the Science Citation Index made it possible to calculate impact factor, which measures the importance of scientific journals. It led to the unexpected discovery that a few journals like Nature and Science were core for all of hard science. The same pattern does not happen with the humanities or the social sciences.
It seemed like a good idea to ignore all that chit-chat about the impact factor and simply give grants to those who published in Science and Nature. The journal Cell was born in 1974 and was not counted in his calculation.
By this time, US society had changed quite a bit. Women got liberated and blacks got ‘desegregated’. If you go to wealthy California towns like Beverly Hills, Woodside or Atherton today, you find a large black population, unlike prior to the 1960s, when society was segregated.
Therefore, it seemed like a good idea to make science grants serve all kinds of social purposes – commitment to teaching, ‘equal opportunity’, community service, etc. With a centralized donor, it was fairly easy to change the rules and pass the paperwork on to hapless grant-writers.
Universities realized that federal and legal paperwork was taking a lot of the researchers’ time. It seemed like a good idea to establish a separate grants management department and take a cut from each science project. Institutional overhead was born. Professors were happy to be shielded from learning about constant changes in the law.
Apart from war, a race was another thing people loved. An ‘international cancer race’ led to the finding of two genes (BRCA1 and BRCA2) associated with breast cancer in a small group of people. The book “Breakthrough: The Race to Find the Breast Cancer Gene” was read by every Tom, Dick and Harry, thus bringing genetics home. Around the same time, another group of scientists (Lap-Chee Tsui, Francis Collins, J. R. Riordan) discovered the gene related to cystic fibrosis.
Among all those scientists involved, Francis Collins was the one presenting the most positive vision with very specific timelines. So, it seemed like a good idea to give him the most money and responsibility. From his 1999 paper -
A HYPOTHETICAL CASE IN 2010
General visions of gene-based medicine in the future are useful, but many health care providers are probably still puzzled by how it will affect the daily practice of medicine in a primary care setting. A hypothetical clinical encounter in 2010 is described here.
John, a 23-year-old college graduate, is referred to his physician because a serum cholesterol level of 255 mg per deciliter was detected in the course of a medical examination required for employment. He is in good health but has smoked one pack of cigarettes per day for six years. Aided by an interactive computer program that takes John’s family history, his physician notes that there is a strong paternal history of myocardial infarction and that John’s father died at the age of 48 years.
To obtain more precise information about his risks of contracting coronary artery disease and other illnesses in the future, John agrees to consider a battery of genetic tests that are available in 2010. After working through an interactive computer program that explains the benefits and risks of such tests, John agrees (and signs informed consent) to undergo 15 genetic tests that provide risk information for illnesses for which preventive strategies are available. He decides against an additional 10 tests involving disorders for which no clinically validated preventive interventions are yet available.
John’s subsequent counseling session with the physician and a genetic nurse specialist focuses on the conditions for which his risk differs substantially (by a factor of more than two) from that of the general population. Like most patients, John is interested in both his relative risk and his absolute risk.
John is pleased to learn that genetic testing does not always give bad news — his risks of contracting prostate cancer and Alzheimer’s disease are reduced, because he carries low-risk variants of the several genes known in 2010 to contribute to these illnesses. But John is sobered by the evidence of his increased risks of contracting coronary artery disease, colon cancer, and lung cancer. Confronted with the reality of his own genetic data, he arrives at that crucial “teachable moment” when a lifelong change in health-related behavior, focused on reducing specific risks, is possible. And there is much to offer. By 2010, the field of pharmacogenomics has blossomed, and a prophylactic drug regimen based on the knowledge of John’s personal genetic data can be precisely prescribed to reduce his cholesterol level and the risk of coronary artery disease to normal levels. His risk of colon cancer can be addressed by beginning a program of annual colonoscopy at the age of 45, which in his situation is a very cost-effective way to avoid colon cancer. His substantial risk of contracting lung cancer provides the key motivation for him to join a support group of persons at genetically high risk for serious complications of smoking, and he successfully kicks the habit.
Please note that – (i) 23andMe was scolded by the FDA for doing the exact same thing that he predicted would happen by 2010, and that too without incorporating the benefits of improvements in sequencing technology; (ii) John cannot even get a functional healthcare website from the government in 2013.
The human genome was sequenced by a private company in 2000, and it seemed like a good idea to double the budget of the NIH to help discover medicines.
The USA was bankrupted by its banking industry in 2008, and it seemed like a good idea to ‘stimulate’ the scientists further with more borrowed research money.
At the end, we have an academic system and ‘tenured professors’ at the beck and call of those providing money for research, namely the people and the government. That was not the original design.
In this context, the proposal of Schekman (the editor of the HHMI-sponsored eLife) to stop publishing in Science/Nature/Cell seems like moving from one funding source partly controlling science to another funding source fully controlling science, as someone commented elsewhere –
I am curious about the consequences of having ‘luxury’ funding agencies like the HHMI and Wellcome Trust in charge of a journal. Having the funders run a journal seems to present a conflict of interest. Will the editors of eLife be under any pressure to review HHMI-funded research? Or will they refuse to consider research that contradicts a member of their prestigious body? Will people without HHMI funding feel under pressure to publish in eLife to help get HHMI funding?
A few months back we wrote a commentary on a recently published paper by ‘Happiness Lady’ that looked like total BS.
Tragedy of the Day: PNAS Got Duped by Positivity Lady !!
Nick Brown is one of the warriors who joined Alan Sokal and Harris Friedman to call out the BS in another highly cited paper by Fredrickson.
Are You Feeling Lucky?
Fredrickson and Losada’s paper was a huge hit. It became a go-to reference in the literature on positivity and garnered almost 1,000 citations in less than a decade—the academic equivalent of a No. 1 New York Times bestseller. Fredrickson parlayed that success into Positivity, the 2009 mass-market book mentioned above, which makes a big deal about the 3–1 ratio vindicated by Losada’s sexy math.
Except Losada’s sexy math is totally incompetent.
That’s the upshot of the scathing paper by Brown, Sokal, and Friedman. Losada had recorded the chatter of teams of business professionals collaborating on projects, and researchers later coded the “speech acts” of team members as either positive or negative. They also assessed the performance of those teams along certain metrics. Putting the two together, Losada found that teams with a 2.9013–1 ratio of positive to negative comments performed much better than those with only slightly lower ratios. However, as Brown, Sokal, and Friedman explain, Losada’s data are flat out the wrong kind to plug into differential equations, and Losada’s attempt to do so produced not a breakthrough about the nonlinear, tipping-point dynamics of positivity, but complete gibberish.
It is a sad reflection on contemporary academia that Nick Brown had only just completed his MSc at age over 50 -
while Fredrickson continues to offer her ‘science-backed’ Love 2.0 classes as “Kenan Distinguished Professor of Psychology and Director of the Positive Emotions and Psychophysiology Lab at the University of North Carolina at Chapel Hill” at a cost of $395/person ‘by recording’ !!!!
On Sunday, 28 June 1914, Archduke Franz Ferdinand of Austria was traveling with his wife in Sarajevo, a volatile peripheral part of their empire, when a Serbian nationalist assassinated them. Their deaths exposed decades of enmity that had been collecting below the surface in Europe and resulted in 30 years of the most brutal warfare, not seen since the times of the Eighty Years’ War (Dutch War of Independence).
We know that a lot of enmity has been collecting below the apparently peaceful surface of the academic world of the USA and UK. We also recognized a shot being fired by Randy Schekman against Cell, Nature and Science.
How Journals like Nature, Cell and Science are Damaging Science
We all know what distorting incentives have done to finance and banking. The incentives my colleagues face are not huge bonuses, but the professional rewards that accompany publication in prestigious journals – chiefly Nature, Cell and Science.
These luxury journals are supposed to be the epitome of quality, publishing only the best research. Because funding and appointment panels often use place of publication as a proxy for quality of science, appearing in these titles often leads to grants and professorships. But the big journals’ reputations are only partly warranted. While they publish many outstanding papers, they do not publish only outstanding papers. Neither are they the only publishers of outstanding research.
These journals aggressively curate their brands, in ways more conducive to selling subscriptions than to stimulating the most important research. Like fashion designers who create limited-edition handbags or suits, they know scarcity stokes demand, so they artificially restrict the number of papers they accept. The exclusive brands are then marketed with a gimmick called “impact factor” – a score for each journal, measuring the number of times its papers are cited by subsequent research. Better papers, the theory goes, are cited more often, so better journals boast higher scores. Yet it is a deeply flawed measure, pursuing which has become an end in itself – and is as damaging to science as the bonus culture is to banking.
Will it lead to generational warfare between the boomers and their children? Some of the initial reactions suggest so. BioMickWatson, who writes a widely read blog on bioinformatics, commented -
I think you might be a hypocrite
I wrote a blog post recently called “We didn’t ask for it”, and in case you missed the subtle nuance (!) of the post, what I was trying to say is that if you’re an established scientist, a tenured professor with hundreds of peer-reviewed papers behind you, and especially if you’re a man, you don’t get to tell people like me that the system is broken, because you’re the one who broke it!!!
Bear in mind that Randy is already part of the generation who burned all the fossil fuels, created the hole in the ozone layer, oversaw the destruction of rain forests and the loss of countless species, overfished the seas, sent countless pieces of junk into space, wrecked the global economy and got rich off the housing market, then (mostly) retired in their 50s, and you might begin to understand a simmering anger in the younger generation(s) when people in powerful positions tell us that things are broken and need to be fixed.
Here’s the thing – the younger generation are going to fix things, because we have to, and you don’t get to take any of the *!&%-ing credit Randy!
Given that academia is a microcosm of the greater society, we expect similar generational warfare to play out in many other parts of Anglo-Saxon society.
The Generational Dynamics blog has been covering this social progression for many years, and we would encourage readers to take a look at its analysis, which follows the theory of Strauss and Howe. Our forecasts are slightly different from what its author expects. We expect -
i) Most boomers to lose their pensions. Greece is an early indicator of what to expect.
ii) University endowment funds, non-profit funds, etc. to go the way of church property during the French Revolution.
iii) ‘Nationalism’ followed by civil war within the USA.
When Whole-Genome Alignments Just Won’t Work: kSNP v2 Software for Alignment-Free SNP Discovery and Phylogenetics of Hundreds of Microbial Genomes
This paper is everything that the TIGRA paper is not. On Twitter, @pathogenomenick forwarded the link, and we were immediately able to glance through it without being asked to pay $20! Thank you, Shea Gardner and Barry Hall, for making the paper open access.
The authors use k-mer counting (Jellyfish) and follow-up analysis to find SNPs and build phylogenies across a large number of closely related microbial genomes without whole-genome alignment. The workflow is clear from the above chart. Speaking of k-mer-based methods, readers may also look at Sailfish, which uses k-mers to quickly estimate gene expression in RNA-seq data.
Since many of our readers are also interested in algorithm development, they will find the following extracts from two sections helpful.
Advantages and Disadvantages
kSNP cannot find SNPs that are too close together (closer than one half k).
K is usually in the range of 13–31. For viruses, we have found that k = 13 or 15 works well, and for bacteria, k = 19 or 21, and have included the Kchooser script to assist the user in selecting an optimal value of k for a given data set.
Repetitive elements like gene duplications can contain SNPs so long as the duplicate kmer locus does not create an allele conflict within a given genome. Even if such regions create allele conflicts within a subset of genomes, the SNP locus can still be detected as a SNP in other genomes without an allele conflict. This facilitates identification of SNPs on regions that may be duplicated or horizontally transferred, such as phage, plasmids, or other mobile elements, in those genomes for which the duplication does not create a SNP allele conflict. But the SNP will not be reported in the genomes with allele conflicts, which would require a longer value of k, i.e. more sequence context, in order to tell the duplicates apart. So running kSNP with a longer value of k should be better at distinguishing loci in homologous regions by detecting some of the SNPs that would be considered allele conflicts with a shorter value of k. But the tradeoff is that a longer k will miss all those high density SNPs in which there is sequence variation within half k of the SNP.
kSNP cannot distinguish true SNPs from sequencing errors. It is advised that for raw read data, some quality filters are imposed on the reads prior to running kSNP (e.g. replace bases with quality below Q20–Q30 with N, and remove adaptors, barcodes, or other non-biological portions of reads).
kSNP v2 does not find indels. Indel sequencing errors that occur in the kmer sequence flanking a SNP will cause a SNP detection failure for that locus in that genome.
Some unique features of kSNP v2 are that it scales better for large data sets (hundreds of bacterial or viral genomes) than other SNP finding approaches (Table 1). It can handle many genomes as unassembled raw reads. For example, we have run it in 6.9 hours on 5.8 GB of input for 212 Salmonella genomes, including many in raw reads from multiple sequencing technologies, on a node with 48 GB of RAM and 12 CPU. It does not require a multiple sequence alignment or a reference sequence, so avoids biases stemming from the choice of a reference.
kSNP finds SNPs that are not in the core genome, as well as those that are. It phylogenetically analyzes both core SNPs only, and all SNPs, and allows users to investigate cases intermediate between these ends of the spectrum, as SNP loci shared by at least a user-specified fraction of the genomes.
One application of kSNP could be a quick initial look at a large data set to determine clades, prior to full genome multiple sequence alignments of genomes within clades to look at strain differences including indels in more detail.
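To make the k-mer logic in the extract above more concrete, here is a minimal Python sketch of the general idea (our own illustration, not kSNP’s actual code): odd-length k-mers from each genome are grouped by their flanking bases, and a group whose members differ only at the central base is reported as a candidate SNP. The toy sequences and function name are made up, and the sketch ignores allele conflicts and the half-k spacing limitation discussed above.

```python
from collections import defaultdict

def candidate_snps(genomes, k=5):
    """Group odd-length k-mers by their flanking bases; a group whose
    members differ only at the central base is a candidate SNP locus.
    `genomes` maps genome name -> sequence string."""
    assert k % 2 == 1, "k must be odd so every k-mer has a central base"
    half = k // 2
    by_flank = defaultdict(dict)            # flank -> {genome: central base}
    for name, seq in genomes.items():
        for i in range(len(seq) - k + 1):
            kmer = seq[i:i + k]
            flank = kmer[:half] + kmer[half + 1:]   # bases around the center
            # NOTE: a flank seen twice with different centers *within* one
            # genome would be an allele conflict; a real tool tracks that
            by_flank[flank][name] = kmer[half]
    snps = []
    for flank, alleles in by_flank.items():
        if len(set(alleles.values())) > 1:          # variation among genomes
            snps.append((flank, alleles))
    return snps

genomes = {"g1": "ACGTACGGTT", "g2": "ACGTTCGGTT", "g3": "ACGTACGGTT"}
for flank, alleles in candidate_snps(genomes, k=5):
    print(flank, alleles)
```

In the toy example, only the locus where g2 carries a T instead of an A is reported, because every other k-mer center is identical across the three genomes.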
Improvements from version 1.
For better speed, v2 uses MUMmer instead of BLAST, jellyfish instead of sa (suffix array) for k-mers < 32, and FastTreeMP and Parsimonator instead of RAxML and PHYLIP.
There are algorithmic changes as well: In version 1, k-mers were initially computed for all genomes at once, and these k-mer lists were used to find candidate SNPs. BLAST was run to compare all candidate k-mers against all genomes to identify SNPs (allele variation among genomes), conflicting alleles (allele variation within a genome), and identify the allele variant within each genome. This use of BLAST was more memory intensive because all candidate SNP loci and all possible allele variants had to be compared to each genome, and positions even in raw read or merged contig genomes were found, even though that positional information was irrelevant. When run against GB of genomes in raw reads in v1, this step was more likely to run out of memory. In version 2, k-mer comparisons are used much more extensively and BLAST is replaced by MUMmer, which is called very minimally. First, jellyfish is run against each genome individually, and PERL and Unix scripts are used to parse the k-mer lists to determine SNPs, alleles within each genome, and conflicting alleles. Forward and reverse complement k-mers and counts are summed and only the orientation occurring first in an alphabetic sort is stored, saving time and space compared to v1. However, this means that more of the loci are reported in the reverse direction than in kSNP v1. MUMmer is only used to determine the position of the allele in finished genomes specified in the -p option input file. Also, k-mer calculations are performed in subsets by prefix, enabling better memory management for extremely large data, and better parallelization.
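The canonical k-mer trick mentioned above (summing a k-mer and its reverse complement and keeping only the orientation that sorts first) is easy to illustrate. Here is a minimal sketch, similar in spirit to jellyfish’s canonical counting mode; the function names are our own.

```python
from collections import Counter

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def canonical(kmer):
    """Return the alphabetically first of a k-mer and its reverse complement."""
    rc = kmer.translate(COMPLEMENT)[::-1]
    return min(kmer, rc)

def count_canonical_kmers(seq, k):
    """Count k-mers with forward and reverse-complement counts merged,
    storing only the canonical orientation of each k-mer."""
    counts = Counter()
    for i in range(len(seq) - k + 1):
        counts[canonical(seq[i:i + k])] += 1
    return counts

print(count_canonical_kmers("GATTACAGATTACA", 4))
```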
TIGRA: A Targeted Iterative Graph Routing Assembler for breakpoint assembly
Recent progress in next-generation sequencing has greatly facilitated our study of genomic structural variation. Unlike single nucleotide variants and small indels, many structural variants have not been completely characterized at the nucleotide resolution. Deriving the complete sequences underlying such breakpoints is crucial for not only accurate discovery, but also the functional characterization of altered alleles. However, our current ability to determine such breakpoint sequences is limited because of challenges in aligning and assembling short reads. To address this issue, we developed a targeted iterative graph routing assembler, TIGRA, which implements a set of novel data analysis routines to achieve effective breakpoint assembly from next-generation sequencing data. In our assessment using data from the 1000 Genomes Project, TIGRA was able to accurately assemble the majority of deletion and mobile element insertion breakpoints, with a substantively better success rate and accuracy than other algorithms. TIGRA has been applied in the 1000 Genomes Project and other projects, and is freely available for academic use.
From the abstract, it seems like an interesting paper. Wish we could say more, but the paper is locked up right now. We do not understand what pleasure these researchers get from doing all this hard work and then locking up their papers in inaccessible journals.
A little more from the supplement -
TIGRA is a computer program that performs targeted local assembly of structural variant (SV) breakpoints from next generation sequencing short-read data. It takes as input a list of putative SV calls and a set of bam files that contain reads mapped to a reference genome such as NCBI build36. For each SV call, it assembles the set of reads that were mapped or partially mapped to the region of interest (ROI) in the corresponding bam files. Instead of outputting a single consensus sequence, tigra attempts to construct all the alternative alleles in the ROI as long as they received sufficient sequence coverage (usually >= 2x). It also utilizes the variant type information in the input files to select reads for assembly. Tigra is effective at improving the SV prediction accuracy and resolution in short reads analysis and can produce accurate breakpoint sequences that are useful to understand the origin, mechanism and pathology underlying the SVs.
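As a rough illustration of the first step described in the supplement (collecting the reads mapped or partially mapped to a region of interest from BAM files before local assembly), here is a short pysam-based sketch. It only gathers candidate reads; the iterative breakpoint assembly is TIGRA’s real contribution and is not reproduced here. The file name, coordinates and filter thresholds are hypothetical.

```python
import pysam

def reads_for_roi(bam_path, chrom, start, end, flank=500, min_mapq=0):
    """Collect reads mapped around a putative SV breakpoint, to be fed
    into a local assembler. Requires an indexed BAM; coordinates are 0-based."""
    reads = []
    with pysam.AlignmentFile(bam_path, "rb") as bam:
        for aln in bam.fetch(chrom, max(0, start - flank), end + flank):
            if aln.is_unmapped or aln.is_secondary or aln.is_duplicate:
                continue
            if aln.mapping_quality < min_mapq:
                continue
            reads.append((aln.query_name, aln.query_sequence))
    return reads

# Hypothetical SV call: a deletion breakpoint near chr1:1,000,000
roi_reads = reads_for_roi("sample.bam", "chr1", 999_000, 1_001_000)
print(len(roi_reads), "reads collected for local assembly")
```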
In 1983, mathematician M. Lothaire wrote a book titled Combinatorics on Words. Roger Lyndon of ‘Lyndon word’ fame wrote this in the foreword -
This is the first book devoted to broad study of the combinatorics of words, that is to say, of sequences of symbols called letters. This subject is in fact very ancient and has cropped up repeatedly in a wide variety of subjects.
Lothaire was so much ahead of his time that it took 14 years for his book to get any attention. The 1983 book was reprinted in 1997, and he followed up with two other seminal books on the subject – “Lothaire, M. (2002), Algebraic combinatorics on words” and “Lothaire, M. (2005), Applied combinatorics on words”.
But here is the most fascinating aspect of today’s commentary. There was no mathematician named M. Lothaire.
Who wrote the books then? They were written by a number of mathematicians, many of whom were students of Marcel-Paul Schützenberger, a well-known French mathematician – or was he?
Schützenberger’s first doctorate, in medicine, was awarded in 1948 from the Faculté de Médecine de Paris. His doctoral thesis, on the statistical study of gender at birth, was distinguished by the Baron Larrey Prize from the French Academy of Medicine.
Biologist Jacques Besson, a co-author with Schützenberger on a biological topic, while noting that Schützenberger is perhaps most remembered for work in pure mathematical fields, credits him for likely being responsible for the introduction of statistical sequential analysis in French hospital practice.
“First doctorate” means he got a second doctorate, and it was indeed in mathematics. Schützenberger continued to do great work in mathematics and to express his disdain for geneticists. So, it is ironic that genome scientists are now finding the mathematical theories developed by him and his students useful.
M. Lothaire reminds us of another prolific and influential mathematician, Nicolas Bourbaki, who did not exist. This character was also created by French mathematicians.
Nicolas Bourbaki is the collective pseudonym under which a group of (mainly French) 20th-century mathematicians wrote a series of books presenting an exposition of modern advanced mathematics, beginning in 1935. With the goal of founding all of mathematics on set theory, the group strove for rigour and generality. Their work led to the discovery of several concepts and terminologies still discussed.
While there is no Nicolas Bourbaki, the Bourbaki group, officially known as the Association des collaborateurs de Nicolas Bourbaki (Association of Collaborators of Nicolas Bourbaki), has an office at the École Normale Supérieure in Paris.
Books by Bourbaki
Bourbaki’s main work is the Elements of Mathematics (Éléments de mathématique) series. This series aims to be a completely self-contained treatment of the core areas of modern mathematics. Assuming no special knowledge of mathematics, it tries to take up mathematics from the very beginning, proceed axiomatically and give complete proofs.
Set theory (Théorie des ensembles)
Topology (Topologie générale)
Functions of one real variable (Fonctions d’une variable réelle)
Topological vector spaces (Espaces vectoriels topologiques)
Commutative algebra (Algèbre commutative)
Lie theory (Groupes et algèbres de Lie)
Spectral theory (Théories spectrales)
The title is a bit of an over-simplification to make a point. However, the strong connection between dBG and BWT becomes clear when you understand the ‘succinct de Bruijn graph’ method presented by Alex Bowe. We encourage you to read the linked, well-written commentary first. The following discussion presents the conceptual details that may help you understand what is going on.
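For readers who want a concrete picture of the de Bruijn graph (dBG) side of the comparison, here is a minimal Python sketch of the textbook construction, where nodes are (k-1)-mers and each k-mer contributes an edge from its prefix to its suffix. The succinct structure discussed by Alex Bowe stores essentially this edge set in a BWT-like compressed form. The function name and toy sequence are our own.

```python
from collections import defaultdict

def de_bruijn_graph(seq, k):
    """Build a de Bruijn graph from a sequence: nodes are (k-1)-mers and
    each k-mer adds one edge from its prefix to its suffix."""
    edges = defaultdict(list)
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i + k]
        edges[kmer[:-1]].append(kmer[1:])
    return edges

for node, successors in de_bruijn_graph("ACGTACGT", 4).items():
    print(node, "->", successors)
```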
Burrows Wheeler Transform
We covered Burrows Wheeler transform in the following two commentaries.
Finding us in homolog.us
Finding us in homolog.us – part II
Essentially you start with a word and keep rotating it by moving one character from front to back until all possibilities are exhausted. You end up with a table like this.
Sort all the entries in the table and take the last column. That is the Burrows-Wheeler transform of the original word.
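Here is a tiny Python sketch of exactly that rotate-sort-take-last-column procedure, with the usual end-of-string sentinel ‘$’ appended so the transform can be inverted later (the function name and example word are our own):

```python
def bwt(word, sentinel="$"):
    """Burrows-Wheeler transform by the naive rotate-and-sort method:
    build every rotation, sort them, and read off the last column."""
    word = word + sentinel
    rotations = [word[i:] + word[:i] for i in range(len(word))]
    rotations.sort()
    return "".join(rot[-1] for rot in rotations)

print(bwt("homolog"))   # prints the last column of the sorted rotation table
```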
The following narrative is a beautiful demonstration of how natural forces work more powerfully than human attempts to micromanage them. It has all the components: banned chemicals, honey bee colony collapse, widely disliked ants, government-funded efforts to keep them at bay and nature’s hammer on them. It even presents a new threat to the highest form of human ascendancy – namely the innovation of electronic gadgets, toys, video games and other circuitry. Readers are warned that the words ‘intelligent design’ are broadly defined to include the activities of various central planners, environment agencies, government bureaucrats and various other demigods of contemporary society.
Fipronil is a modern pesticide that targets the central nervous system of insects. More specifically, it blocks the passage of chloride ions through the GABA receptor. The chemical does not affect humans, because one of its targets, the glutamate-gated chloride (GluCl) channel, is not present in mammals. Therefore, the discovery of the chemical (~1987) was greatly helped by the modern understanding of cell biology. Fipronil was approved for use as an insecticide after extensive testing between 1987 and 1996.
The most important effect of Fipronil is that it can act as a slow poison to kill an entire colony of bugs, and not only a few bugs on which the insecticide is sprayed. From the wiki -
Fipronil is a slow acting poison. When used as bait, it allows the poisoned insect time to return to the colony or harborage. In cockroaches, the feces and carcass can contain sufficient residual pesticide to kill others in the same nesting site. In ants, the sharing of the bait among colony members assists in the spreading of the poison throughout the colony. With the cascading effect, the projected kill rate is about 95% in three days for ants and cockroaches. Fipronil serves as a good bait toxin not only because of its slow action, but also because most, if not all, of the target insects do not find it offensive or repulsive.
Toxic baiting with fipronil has also been shown to be extremely effective in locally eliminating German wasps (commonly called yellow jackets in North America). All colonies within foraging range are completely eliminated within one week.
Readers should note that although Fipronil does not affect mammals, other vertebrates are fair game. From a paper titled -
Fipronil: environmental fate, ecotoxicology, and human health concerns
One of its main degradation products, fipronil desulfinyl, is generally more toxic than the parent compound and is very persistent. There is evidence that fipronil and some of its degradates may bioaccumulate, particularly in fish.
Fipronil is highly toxic to bees (LD50 = 0.004 microgram/bee), lizards [LD50 for Acanthodactylus dumerili (Lacertidae) is 30 micrograms a.i./g bw], and gallinaceous birds (LD50 = 11.3 mg/kg for Northern bobwhite quail), but shows low toxicity to waterfowl (LD50 > 2150 mg/kg for mallard duck).
Honey Bee Colony Collapse Disorder
Six years back, when we were working on the honey bee genome paper, colleagues often mentioned one strange and recent observation. You can read the wiki on colony collapse disorder:
Colony collapse disorder (CCD) is a phenomenon in which worker bees from a beehive or European honey bee colony abruptly disappear. While such disappearances have occurred throughout the history of apiculture, and were known by various names (disappearing disease, spring dwindle, May disease, autumn collapse, and fall dwindle disease), the syndrome was renamed colony collapse disorder in late 2006 in conjunction with a drastic rise in the number of disappearances of Western honeybee colonies in North America. European beekeepers observed similar phenomena in Belgium, France, the Netherlands, Greece, Italy, Portugal, and Spain, and initial reports have also come in from Switzerland and Germany, albeit to a lesser degree while the Northern Ireland Assembly received reports of a decline greater than 50%.
The cause of colony collapse disorder is still not known, and everything from starvation, pathogens and mites to modern insecticides has been blamed. No matter what the cause is, bees are the type of bug you do not like to see disappear. The disappearance of bees will wreak havoc on farming, because there will be less pollination and thus less food production.
Some researchers blamed Fipronil as the potential cause of colony collapse disorder.
Fipronil is one of the main chemical causes blamed for the spread of colony collapse disorder among bees. It has been found by the Minutes-Association for Technical Coordination Fund in France that even at very low nonlethal doses for bees, the pesticide still impairs their ability to locate their hive, resulting in large numbers of forager bees lost with every pollen-finding expedition. A 2013 report by the European Food Safety Authority identified fipronil as “a high acute risk to honeybees when used as a seed treatment for maize”, and on July 16, 2013, the EU voted to ban the use of fipronil on corn and sunflowers within the EU. The ban will take effect at the end of 2013.
Also read -
BASF challenges EU ban on fipronil pesticide
German chemicals group BASF said it launched a legal challenge against the European Commission’s ban of BASF’s insecticide fipronil, imposed in July on concern its use as seed treatment is linked to declining bee populations.
The European Union in July added fipronil to its blacklist of substances suspected of playing a role in declining bee populations.
The ban follows similar EU curbs imposed in April on three of the world’s most widely-used pesticides, known as neonicotinoids, and reflects growing concern in Europe over a recent plunge in the population of honeybees critical to crop pollination and production.
Let us now switch gears to a different kind of insect that nobody loves – the fire ants. Fire ants came to the USA from Brazil through trade and spread all over the southeastern states. They are found in everyone’s backyard, front yard, school playground and other places inside and around the house. Every once in a while, you come across stories like this – “Texas Boy Dead After Fire Ant Bites“.
Fire ants are usually removed by pesticides, but some people go on to take extreme measures. Do not try this at home !!
Fly Eating the Brains of Fire Ants
How to control the fire ants? Researchers and grant agencies came up with an elegant ‘natural’ solution. An entomologist named Sanford Porter observed that a type of fly lays its eggs inside fire ants. When the eggs hatch, the maggots eat the heads of the ants.
Absurd Creature of the Week: This Fly Hijacks an Ant’s Brain — Then Pops Its Head Off
So Porter searched for a natural enemy that might be keeping southern populations in check. Following a tip from a colleague, he began seeking out fire ants fending off attacks from tiny flies. He gathered some of these besieged individuals and returned to the United States, where he soon began finding maggots in the ants’ bodies. “And around about two weeks [after that] I found that the heads would fall off,” he told WIRED, “and lo and behold I could see the pupa inside the ant’s head.”
The flies he’d observed weren’t hunting the ants. They were much too small for that. Apparently not to be bothered with the stresses of parenthood, they were infesting the creatures with their young. Here, take this for me, the flies seemed to say, I’ve got a lot going on in my life right now.
Here’s how it works. Attracted by the smell of the fire ant’s alarm pheromone, the female ant-decapitating fly hovers a few millimeters from her target. “When they get into just the right position, they dive in,” said Porter, who is now with the USDA Agricultural Research Service. The fly has a sort of lock-and-key ovipositor, the shape of which varies widely between species, “and once that’s fit onto the ant’s body, around the legs somewhere, then what happens is that there’s an internal ovipositor that looks like a hypodermic needle, and that hits probably in the membranes in between the legs,” firing a tiny torpedo-shaped egg into the ant.
The following video shows the process. It is the weirdest thing that we have seen in a while!!
Bringing a bug from South America and putting it into Texas was no easy task. The researchers had to make sure the flies did not become a nuisance themselves. Finally, humans found a solution to the fire ant problem – or have they?
Rasberry Crazy Ants
Nature had a different plan to take care of fire ants, unlike the ‘natural’ solution introduced by humans. Enter ‘Rasberry crazy ants’. About 6-7 years back, a new kind of ant arrived in Texas from South America, following the same trade route originally taken by fire ants. These ants are aptly named ‘crazy ants’, because they are driving the fire ants crazy !!
Apparently, they are also driving the Texans crazy, because anyone experiencing a crazy ant attack wants the fire ants back as the humbler bug !!
A few observations about crazy ants -
(i) they do not respond to commonly used insecticides,
(ii) their colonies have more than one queen, so it is much harder to destroy their colonies by killing one queen ant.
(iii) they grow very fast and are attracted to electronic circuits. Therefore, they can damage computer gadgets very fast.
The ‘crazy ants’ are nothing like anything people in the USA have seen before. To understand how unusual they are, readers are encouraged to go through the following two articles -
There’s a Reason They Call Them ‘Crazy Ants’
Soon ants were spiraling up the tongues of my sneakers, onto my sock. I tried to shake them off, but nothing I did disturbed them. Before long, I was sweeping them off my own calves. I kept instinctively taking a step back from some distressing concentration of ants, only to remember that I was standing in the center of an exponentially larger concentration of ants. There was nowhere to go. The ants were horrifying — as in, they inspired horror. Eventually, I scribbled in my notebook: “Holy [expletive] I can’t concentrate on what anyone’s saying. Ants all over me. Phantom itches. Scratching hands, ankles, now my left eye.” Then I got in my car and left.
The 5 craziest moments from the Times’ feature on “crazy ants”
Response of Central Planners
This part is the most hilarious. The arrival of crazy ants in Texas was noticed by Mr. Tom Rasberry, who never completed anything beyond high school but knew his bugs well through his profession as an exterminator. He alerted the central planners about the arrival of this new kind of ant in 2002, but they were too busy fighting the last war. Within five years, the crazy ants had spread all over Texas, Louisiana, Mississippi, Florida and Georgia and started replacing the fire ants.
Finally the central planners took notice, and the first thing they did was to write a grant. But the grant did not get approved, because -
This meeting took place on Oct. 9, 2008, just as the American economy was crumbling. Six days earlier, President Bush signed over $700 billion to the new Troubled Asset Relief Program. “I don’t think the federal government had a lot of money to spend on bugs,” one task-force participant remembered. In fact, very quickly, the conversation foundered in a maddening Catch-22: the government preferred not to release any money to research or combat the crazy ants until it knew what species it was dealing with. The scientists insisted that they needed funding to figure that out.
Finally, one man spoke up. “I said: ‘You know? You all sound like a bunch of idiots,’ ” he recalls. He was 52, with a graying, bristly mustache and leathery skin, and on paper at least, he had no business being there. He wasn’t a bureaucrat or a scientist. He’d never even gone to college. He was just an exterminator — the kind who drives around in a truck and sprays stuff. But he was the exterminator who discovered the ants. His name was Tom Rasberry. He’d named them after himself.
That turned out to be a problem. Central planners decided to give it a different name – ‘tawny crazy ants’. For a long time, they continued this name-giving game, while the ants marched on to new territories !!
From the NY Times story -
Tom Rasberry collected samples of the ant at the Pasadena chemical plant in 2003 and sent them off to a lab at Texas A&M to be identified. But taxonomy — the process of ordering living things into species — is arguably more an art than a science, and figuring out what species the ants were, and where they came from, quickly became vexing. Academics from other institutions swarmed in to debate, for example, the significance of four tiny hairs on the ant’s thorax. For years, they hurtled through a series of wrong answers, but the consensus eventually leaned toward a certain invasive ant, called Nylanderia pubens, which has been in Florida since the 1950s.
Rasberry was convinced this couldn’t possibly be the same ant. “It’s just common sense,” he said. His ant was ripping through Texas like a violent dust storm; their ant had been entrenched in Florida for more than 50 years, barely dispersing or causing any trouble. Why would the bug suddenly behave so differently? Rasberry began his own, amateur taxonomic investigation, spending thousands of hours out in the field or examining samples with a microscope in the back room of the Rasberry’s Pest Professionals office. “It was a nightmare,” he told me. He’d never had any interest or aptitude for science, didn’t find bugs that fascinating and hates reading. But he willed his way through the entomological research, looking for answers. (“It was an obsession,” his daughter, Mandy Rasberry-Ganucheau, said. For years, Rasberry would come over once a week to see his grandkids and end up talking about crazy ants.) Still, the science kept creeping toward its own conclusion. And as long as there was evidence that the ants in Texas were pubens and not something new, the government felt it was reasonable not to act. Roger E. Gold, a veteran Texas A&M entomologist working on the species, told me that the scientific uncertainty became “almost a reason for the federal government not to get involved,” even as the situation in Texas grew catastrophic. “The taxonomy thing was almost a joke,” Gold added, “if it weren’t so serious.”
State and federal agencies have now financed a very limited amount of research, and the E.P.A. has tweaked its regulations to allow the use of a high-powered pesticide against the ant. The taxonomy question was settled only in September 2012, when scientists led by a fellow at the Smithsonian looked at the molecular sequencing of a broad range of specimens and concluded that the Rasberry crazy ant is not the same ant that was collected in Florida in the 1950s. It’s Nylanderia fulva, a species native to Brazil. Rasberry, in other words, was vindicated. And yet, so many speculative plot twists and Latin names have accumulated around the ant that it’s still easy to get confused. A policy manager at the U.S.D.A.’s Animal and Plant Health Inspection Service recently explained to me that because the ants in Texas are “the same species” — pubens — that has been long established in Florida, the pest has “become too widespread to take effective action.” In short, the ant is already out of the bag.
Then, last winter, the federal research entomologist David Oi and the researcher who led the taxonomy study, Dietrich Gotzek, complicated the story further. They gave fulva a common name, via a petitioning process administered by the Entomological Society of America. Everyone was already calling it Rasberry crazy ant, but that hardly mattered: Naming a bug after a person is strongly frowned on. Besides, Oi told me, the name was too confusing: “People thought it was supposed to be the fruit.” He and his colleague rechristened it the Tawny crazy ant, a name almost no one in Texas appears to use — and especially not Tom Rasberry, who took Oi’s maneuver as a personal attack. “It may sound arrogant,” Rasberry told me, “but I think they’re totally irritated that someone without a college degree one-upped all the Ph.D.s.”
Bring back the Fipronil
Now that nature itself is on the march, humans have very few weapons to fight against it, other than those supposed to ‘damage nature and the environment’. ‘Crazy ants’ do not respond to regular insecticides, and therefore the big guns are needed.
Pesticide for SE Texas ‘crazy’ ants approved by EPA
Acting on a request by the Texas Department of Agriculture, the U.S. Environmental Protection Agency on Tuesday approved a crisis exemption for use of fipronil (Termidor SC) on crazy ant infestations. The crisis exemption is in effect until the EPA rules on the state’s request for a specific exemption so the pesticide could be used for three years.
Crazy Rasberry ants, called “crazy” because of their zigzag march and named after Tom Rasberry, the Pasadena exterminator who discovered them in 2002, now infest Harris, Brazoria, Galveston, Jefferson, Liberty, Montgomery and Wharton counties.
The rice-grain-size ants, which can bite but not sting, have a penchant for infesting electrical devices and have been blamed for the failure of computers, sewage pumps and electric gate motors.
How about the bees? The beekeepers are so threatened by ‘crazy ant’ attacks that colony collapse disorder appears to be a minor problem in comparison.
We are not sure whether this convoluted story has any simple conclusion. Humans studied biology and came up with a poison that kills only bugs and not mammals. The pesticide was tested ‘thoroughly’ and released onto the market. It worked so well on the bugs that honey bee colonies started to disappear almost overnight. The chemical was banned, while a more ‘natural’ and benign solution to the fire ant problem was introduced. In the meanwhile, nature released its own terror – the crazy ant, which displaced the fire ants, munched on computer circuits and threatened human civilization so much that the banned pesticide was brought back as an ‘emergency measure’.