Big thanks to you and everyone who’s emailed me back about wanting to give feedback on my upcoming book, “The Resilient Researcher.” It’s not too late if you want to help me write the early draft.
It’s inspired by our successful PhD Thesis Fasttrack webinar, which is still available for purchase.
Also, in the Twitter/X sphere, I had a new record tweet with 8+ million views about how to use your emotion words (it’s always the random ones that go viral).
Also, big shoutout to my grant mentees, who have successfully submitted their NSF CAREER proposals and who are writing their NSERC Discovery grants with my help. Working with you is inspiring. If you’re struggling with grant writing (or any other academic writing), let’s book a Discovery call to see if I can help you, too.
The dirty truth behind citation counts and impact
You’ve probably heard it a million times: citation count is the ultimate measure of a paper’s importance. But is it really? Let’s peel back the layers of this pickled little onion and see what’s lurking beneath its surface. Spoiler alert: It’s not Sauerkraut.
I get weirded out whenever people boast about their citation count (but I do it myself online and definitely catch flak for it when I do). Because this is how the system works. Citations mean impact. No doubt. But do they measure what’s actually advancing the field? How could a paper be highly cited yet not be particularly valuable? Is that possible (sure hope that’s not the case with my papers)? This thought spawned this newsletter issue.
After digging into the literature a bit and talking to other seasoned researchers, I’ve come up with some insights that might just change the way you think about citations and research impact.
Ready to challenge some assumptions? Let’s crack our heels on these cobblestones.
Citation Count Issues
Publishing your paper and watching the citation count rise can feel like you’ve conquered a mountain. But wait — citation counts can be tricky. They don’t always indicate the quality or importance of your work. Some papers get cited often because they’re controversial or have mistakes, not because they’re revolutionary.
Citation counts are kind of like that shiny sports car you dream about. They look impressive, they’re fun to talk about at parties, but they don’t tell the whole story. In scientific publishing, citations are more like a Swiss Army knife — they’ve got tons of uses, some obvious, some not so much. We’re talking everything from genuine admiration and grant support, to subtle shade-throwing. So, before you get swept away by those flashy numbers, remember, there’s way more to this citation game than meets the eye.
Not all citations carry the same weight. Some cite a paper’s core intellectual contribution; others refer to a minor point. Occasionally, citations are even used to critique or challenge the cited work. In academia, it’s essential to remember that “not all citations are equal.”
A 2014 study found that raw citation counts are a lousy measure of a paper’s influence. The researchers tackled this issue by asking authors to identify the key references in their work, creating a dataset of citations ranked by their academic impact.
Research on the "intellectual lineage" of science shows that new studies often build upon the work of past influential researchers. Using network-based methods, these studies assess the importance of various references within a paper. It’s clear that not all citations hold the same significance.
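To make that a bit more concrete, here’s a toy sketch of one network-based idea — my own illustration with a made-up five-paper citation graph, not the actual method from those studies: run PageRank over the citation network so that a reference cited by already-influential papers scores higher than one with the same raw count coming from peripheral papers.

```python
# Toy sketch: weighing references by network position instead of raw counts.
# All paper names and citation links below are made up for illustration.
import networkx as nx

G = nx.DiGraph()
# Edge (a, b) means "paper a cites paper b".
G.add_edges_from([
    ("new_study", "classic_method"),
    ("new_study", "minor_background"),
    ("followup_1", "classic_method"),
    ("followup_2", "classic_method"),
    ("followup_2", "new_study"),
])

# PageRank lets importance flow from citing papers to the papers they cite,
# so a reference cited by influential work outranks one cited the same
# number of times by peripheral papers.
scores = nx.pagerank(G, alpha=0.85)

for paper, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{paper:18s} raw citations: {G.in_degree(paper)}  PageRank: {score:.3f}")
```

The specific numbers don’t matter; the point is that lineage-style analyses care about a reference’s position in the citation network, not its raw tally.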
Other factors can mess with citation counts too. Groundbreaking or interdisciplinary research might take a while to get noticed and rack up citations as it finds its footing in the field. Likewise, niche studies or research that goes against the grain might get fewer citations, even if they’re incredibly valuable.
So, where does that leave us? While citation metrics have their place, they’re just the starting point, not the final say on a paper’s impact. The most cited papers aren’t always the most valuable — the true intellectual giants might be hiding among the less-cited references, just waiting to be discovered.
The Halo Effect
Ok, let me explain this “halo effect” thing, Master Chief. When a paper comes out in a high-impact journal (or conference), or is written by some big-shot professor from Harvard or Yale, it gets instant street cred, right? People start throwing citations at it like confetti, even if the actual research is kinda “meh.” It’s like those Ray-Ban sunglasses you buy just because of the brand name, even though they might not actually be the best fit for your face.
Turns out, there’s science to back this up. Studies show that papers from those big-name journals tend to get cited way more often, regardless of whether they’re actually good or not. It’s like being in the VIP section of a club — you get more attention, even if you’re not the most interesting person in the room, Kanye.
So, what’s the takeaway for you here? Don’t be fooled by the glitz and glam! When you’re checking out a research paper, look past the shiny journal title and the impressive author list. They mean nothing, Jon Snow. Put your reading goggles on and see what the research actually says. Then, decide for yourself if it’s worth all the hype. It’s the content that counts, not the packaging! (Cue the people emailing me about buying their books just for the cover. I see you there. I do it, too.)
Impact Beyond Citations
Real impact isn’t just about racking up citations. Consider how your research shapes policy, informs practice, or sparks further studies. The most valuable research often drives significant advancements or tackles real-world problems, even if it doesn’t accumulate citations right away.
The work on quantifying the “intellectual lineage” of science mentioned earlier shows that highly cited papers aren’t always the ones that build upon the most influential past work. Instead, some lesser-cited papers may draw from the most important foundations, quietly advancing the field.
Picture a study that completely changes our understanding of the origins of life. Its findings are so transformative they become the new standard, making additional citations redundant. Now consider a paper that introduces a pioneering new technique. Its value isn’t in the number of direct citations but in the future discoveries it sparks, the problems it solves, and its lasting impact on the field.
A Closer Look at Citation Metrics
Citation metrics like the h-index are often used to measure research impact, but they have big flaws. They can easily be skewed by self-citation, where researchers cite their own previous work, or citation circles, where a group of researchers frequently cite each other’s publications. These practices can inflate citation counts without truly reflecting the research’s intellectual significance or broader impact.
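To see how easy that inflation is, here’s a quick toy example — the citation numbers are invented, and this is just the standard h-index definition, not anyone’s official tool:

```python
# Toy example: the h-index, with and without self-citations.
# Citation numbers are made up for illustration.

def h_index(citation_counts):
    """Largest h such that at least h papers have h or more citations."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# (total citations, of which self-citations) per paper, for a hypothetical author
papers = [(12, 4), (9, 5), (7, 3), (6, 4), (5, 4), (3, 2)]

with_self = [total for total, _ in papers]
without_self = [total - self_cites for total, self_cites in papers]

print("h-index, all citations:      ", h_index(with_self))     # 5
print("h-index, self-cites removed: ", h_index(without_self))  # 3
```

Same papers, very different-looking number — and that’s before citation circles enter the picture.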
Moreover, citation metrics don’t capture why a paper is cited — whether it’s to build on the core ideas, critique the methods, or just provide background information. Relying too much on citation-based metrics can give a skewed and incomplete picture of a scholar’s contributions.
Quality over Quantity
In academia, there’s often a massive push to crank out a ton of papers. But chasing quantity can sacrifice quality. Flooding the field with publications that lack rigour and significance dilutes your research’s overall impact. It’s far better to be picky and intentional, publishing fewer but higher-quality papers that genuinely advance your field.
Quality should always trump quantity. The most impactful work comes from meticulous, in-depth research that pushes the boundaries, not from superficial studies aimed at increasing publication counts. When researchers focus on doing thorough, impactful work, their studies make a real difference in their field and genuinely advance our understanding.
Peer Review and Impact
Peer review is the backbone of the whole academic publishing game, but let’s face it — not all peer reviews are created equal. Some papers slip through with barely a glance, while others get the full treatment. This inconsistency can mess with how a paper’s value is perceived.
In a perfect world, peer review would judge the quality, significance, and originality of research, not just if it’s technically correct. But in reality, the process isn’t always so sharp. Reviewers might have biases, conflicts of interest, or just not enough expertise, causing them to miss big flaws or fail to see a paper’s true worth.
Then there’s the anonymity factor. Without accountability, some reviewers may give half-baked or even careless reviews, while others might use their power to block papers that challenge their own work or beliefs.
To fix these issues, some folks are pushing for changes in the peer review system. Ideas include more transparency, post-publication peer review, and focusing more on the research’s overall significance and potential impact, rather than just the technical details.
Collaboration and Diversity
Collaborative research often yields higher-quality, more impactful work than solo projects. Teaming up with folks from different backgrounds and areas of expertise can help us uncover cool new ideas, make our findings more solid, and improve the overall impact of our work. The mix of perspectives and approaches in a team setting lets us reach a deeper, richer understanding than we could on our own.
On the flip side, single-author research might miss the multifaceted approach and comprehensive scope that define the most groundbreaking papers. Sure, solo work has its place in academic discourse (it’s especially common in the humanities), but the biggest breakthroughs usually come from teams of researchers pooling their knowledge and skills to tackle complex problems from different angles. This collaborative magic pushes the boundaries of our fields in ways solo efforts often can’t match. Fight me on this.
Action Steps (or should we call it ‘Homework’?)
- Evaluate Research Beyond Citations. Look beyond citation counts when assessing research. Consider the content and contributions of the paper. Ask: What new knowledge does this paper bring to the field?
- Focus on Quality Research. Aim to produce high-quality research that addresses significant gaps or challenges. Depth and rigour should be your priorities, not just high citation counts.
- Be Critical of “Halo Effect” Papers. Be critical when reading papers, regardless of their source. Evaluate the methodology, data, and conclusions on their own merits.
- Highlight Broader Impact. When presenting or writing about your work, emphasize its broader impact. Explain how your research can be applied or how it advances the field.
- Use Multiple Metrics. Assess research impact with multiple metrics. Consider adding altmetrics, which track mentions on social media, in news outlets, and in policy documents.
- Prioritize Quality Over Quantity. Focus on producing fewer, high-quality papers that make significant contributions. Avoid diluting your research with high volume.
- Choose Rigorous Peer Review. Publish in journals with rigorous peer review processes to enhance the credibility and impact of your work. Focus on the highest-quality venues.
- Seek Collaborative Opportunities. Collaborate with researchers from different disciplines or institutions. This enriches your research and increases its impact.
Tada. That’s my take on this. The next time you’re tempted to judge a paper by its citation count, be more like Homelander and laser that paper right through its head (kidding). Keep questioning, keep exploring, and keep making your mark. Until next time.
Curious to explore how we can tackle your writing struggles? I've got 3 suggestions that could be a great fit.
- Get my CHI paper writing masterclass: Unlock your potential with the How to Write Better Papers Course for HCI researchers. This course offers concise, actionable video lessons you can absorb at your own pace, saving you time. Get expert guidance tailored for CHI and HCI publications with proven strategies. Gain the skills to succeed in a competitive field.
- Learn how to write papers with AI ethically: Access the AI Research Tools Webinar to improve your research and writing skills. Enjoy a 3-hour tutorial with subtitles in 19 languages, 46 detailed slides, and a 1-hour ChatGPT bonus tutorial with 39 prompts. Learn from 3 app tutorials (Yomu AI, SciSpace, and Sourcely) and get a 184-page Mastery Guide on 34 AI tools. This bundle provides everything you need for AI-powered academic success.
- Defend your thesis with confidence: Increase your productivity and graduate success with this thesis workshop. Get instant access to a 3-hour video, 64 instructional slides, and curated productivity software. Use our online whiteboard and a 7-page workbook of checklists and prompts. Prepare confidently with a 10-page Viva questions guide and a PhD exam checklist. Optimize your thesis workflow and excel in your studies.