In 1961, Derek J. de Solla Price quantified the exponential growth in scientific publications. Considered the father of scientometrics, he predicted that the number of journals and research publications would double every 10-15 years [1]. Price's prediction has proved accurate: the number of journals in medical subspecialties nearly doubled from 1998 to 2010 [2]. Rapid advances in science, increasing resources for research, greater availability of data, and more opportunities to publish are all likely contributors to this relentless growth of scientific papers. At first glance, such growth suggests tremendous success in research and academia, yet whether it reflects useful research output is far less certain.

One measure of an article's impact is the number of times it is cited, because the act of citation acknowledges that the knowledge generated by the article has influenced others' research or practice. Using this metric, we recently assessed the 5-year citation counts of articles published in cardiovascular journals from 1997 to 2007 in the Scopus citation database [3]. In keeping with prior studies, we showed that the numbers of cardiovascular articles and journals increased by 56% and 75%, respectively. Of concern, our analysis of 164,377 articles from 222 journals showed that nearly half (46.0%) of all cardiovascular articles were poorly cited (<5 citations), and 15.6% of articles had no citations at all, 5 years after publication. The absolute number of poorly cited articles increased by 2,595 over the same period. Moreover, in 44% of all cardiovascular journals, more than three-quarters of the content was poorly cited at 5 years. Our findings suggest that many of the journals and articles being produced have limited impact, either because the research output is of little value to end users or because its value goes unrecognized.

The literature suggests several reasons why many articles are poorly cited. Many publications address questions of little relevance to patients and clinicians. More than half of studies also fail to consider existing studies at inception, potentially leading to duplication of research [4]. Furthermore, studies often reach publication only after prolonged delays [5], have important methodological limitations [6-8], or fail to report important aspects of the intervention or study outcomes, which may limit their usefulness [9]. Moreover, the 'publish or perish' culture that is omnipresent in academia may create persistent pressure to publish less useful or poorly performed science. Many medical journals are also for-profit enterprises or generate substantial revenues for non-profit entities, and the growth in journals may partly reflect publishers' motivation to increase profits. Lastly, recent evidence suggests that even high-quality publications may go unrecognized because scientists are overwhelmed by the immense volume of publications [10]. Nevertheless, these so-called 'sleeping beauties' seem unlikely to account for the large proportion of poorly cited papers.

Our findings have several implications for academia. In scientific research, peer-reviewed publications are often considered a measure of research success, and the number of publications is often the deciding factor in awarding qualifications, academic promotions, and research funding. However, the number of publications alone is perhaps a poor indicator of the value of research output.
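As an aside, Price's doubling times can be translated into annual growth rates using the standard exponential-growth relation; the percentages below are our own back-of-envelope calculation, not figures reported in [1]. Writing $N_0$ for the initial number of publications and $T$ for the doubling time, the equivalent compound annual growth rate $r$ satisfies

\[
N(t) = N_0 \, 2^{\,t/T}, \qquad r = 2^{1/T} - 1 \approx \frac{\ln 2}{T},
\]

so that Price's doubling times of 10 and 15 years correspond to compound annual growth rates of roughly 7.2% and 4.7%, respectively.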
Research is also costly and resource intensive, and may involve substantial contributions from study subjects. Generation of research output with limited impact, irrespective of the reason, therefore reflects waste in the research enterprise. Indeed, it has been estimated that up to 85% of resources spent on research may lead to outputs of limited value [11]. Lastly, the rising number of low-impact publications and journals highlights the need for concerted efforts to reduce waste in research, efforts that should address research quality as well as dissemination. A recent series of articles in The Lancet proposed several strategies to achieve this goal: (1) prioritizing research of relevance to end users [12]; (2) improving the quality of research output through better research design, conduct, analysis, and reporting [13]; (3) ensuring research data are accessible to end users [14]; and (4) streamlining regulatory and study management processes, thereby minimizing undue burden on researchers and patients [13]. Rigorous application of these steps across the research process may stem the rising tide of low-impact publications.