
In this article, I'd like to recall how much peer review matters in academic research, in particular in emerging fields such as agentic and generative AI.
Let's take the example of "Canaries in the Coal Mine", a famous **working paper** from Brynjolfsson, Chandar and Chen (Stanford) that argues that AI has had a strong impact on employment since 2022.
And I think perhaps the only real canary in a coal mine here is the peer-reviewing process.
The 6 facts from "Canaries in the Coal Mine"
The paper presents six very important facts about the relationship between AI and employment:
- "we uncover a substantial declines in employment for early-career workers (ages 22-25) in occupations most exposed to AI"
- "overall employment continues to grow robustly, but employment growth for young workers has been stagnant since late 2022"
- "not all uses of AI are associated with declines in employment"
- "employment declines for young, AI-exposed workers remain after conditioning on firm-time effects"
- "the labor market adjustments are visible in employment more than in compensation"
- "the above facts are largely consistent across sample constructions designed to address various alternative explanations for the core findings"
In this analogy, the (dead) canaries in the coal mine are the first wave of workers who struggle to find a job because AI is replacing their positions. Software engineering is used as a key example, among other fields, and young people are the most exposed.
Peer review does matter, in particular when studying the impact of new technologies
This working paper resonated a lot here in France. Erik Brynjolfsson in particular is a highly regarded scholar and is often quoted (together with Daron Acemoğlu) in official notes, such as the latest one about AI agents from the "Conseil de l'intelligence artificielle et du numérique".
It is surely a brilliant piece, written by researchers who have earned respect and backed by an extremely strong methodology.
But did you notice how I put working paper in bold font?
That's because no matter how brilliant a team of researchers is, its work is not truly academic until it has been reviewed by peers. And for many good reasons: no human is exempt from bias or from a tendency toward self-indulgence.
For instance, calling the six facts, well, "facts" is certainly a bit much. These facts will become facts when enough researchers from diverse backgrounds agree that they are facts, through publication in a peer-reviewed, ranked journal. But not until then!
Peer review is not just about nitpicking between scholars; it is about turning enlightened opinions and quality scientific work into knowledge ready to be spread.
Alternative explanations for tech layoffs and junior-level employment must be explored
Let's take an example. In the "Canaries in the Coal Mine" paper, the major changes to Internal Revenue Code §174, aka Section 174, that happened over the same period are only mentioned once, in a footnote attached to an unrelated passage about the effects of COVID-19 on employment, as shown in this screenshot:

The Pragmatic Engineer blog, a non-academic but well-written source, gave a detailed explanation of the potential impact of this rule change on employment in the software industry.
In a nutshell, this law changes how R&D and software development work are accounted for, and it has a significant impact on companies' cash flow. In France, it would be comparable to brutally changing the Crédit Impôt Recherche and Crédit Impôt Innovation rules. Which is exactly what we did for the latter, and our tech industry suffered a lot in 2024/2025.
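To make the cashflow mechanism concrete, here is a deliberately simplified back-of-the-envelope sketch (all figures are invented, and real tax computations are far more involved): before the change, a US company could deduct domestic software R&D salaries in full in the year they were paid; since 2022, those costs must be amortized over five years, with only 10% deductible in year one because of the mid-year convention.

```python
# Hypothetical software company, invented figures. Since 2022, Section 174
# forces domestic R&D costs to be amortized over 5 years instead of being
# deducted in full, leaving only 10% deductible in year one (mid-year convention).

REVENUE = 10_000_000       # yearly revenue
RND_SALARIES = 8_000_000   # developer payroll, all treated as domestic R&D
TAX_RATE = 0.21            # US federal corporate tax rate

# Before: salaries fully deductible in the year they are paid.
taxable_before = REVENUE - RND_SALARIES        # 2,000,000
tax_before = taxable_before * TAX_RATE         # 420,000

# After: only 10% of the R&D costs are deductible in year one.
taxable_after = REVENUE - 0.10 * RND_SALARIES  # 9,200,000
tax_after = taxable_after * TAX_RATE           # 1,932,000

print(f"year-one tax before: ${tax_before:,.0f}")  # $420,000
print(f"year-one tax after:  ${tax_after:,.0f}")   # $1,932,000, about 4.6x more
```

A company that was barely profitable suddenly owes several times more tax in cash the year the rule kicks in; it is not hard to imagine how that could trigger layoffs in software teams.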
Let me be clear: Brynjolfsson et al. are most probably right. They did account for Section 174 and figured that it was only worth a footnote.
Yet, I am pretty certain that a peer reviewer would catch this footnote and ask more questions about the impact of these regulatory changes on the software industry, leading to a deeper and stronger analysis. It may or may not change the conclusion, but that's beside the point.
Data are never enough, hypotheses matter
The study is backed by a lot of statistics, charts, and data analyses. That's what makes it very strong.
But as an engineer who did a bit of statistics, I'd like to recall that statistics is the science of the unknown. Statistics help us understand phenomena we cannot model, and also validate new models. They turn data AND hypotheses into facts, but they never create facts out of the blue. The role of statistics in research is therefore both critical and overrated: strong hypotheses must be made before we fire the statistics gun.
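To illustrate the point, here is a toy simulation (every number is invented; this is neither the paper's data nor its method): in a world where a policy shock, not AI, drives employment down, a naive regression on AI exposure alone still finds a large negative "AI effect", which vanishes once the confounding hypothesis is stated and controlled for.

```python
# Toy simulation: a confounder that hits the same occupations at the same
# time as "AI exposure" shows up as an AI effect unless it is modeled.
import numpy as np

rng = np.random.default_rng(0)
n = 5_000

ai_exposure = rng.uniform(0, 1, n)  # how AI-exposed each occupation is
# Hypothetical confounder (think: a tax rule change) correlated with exposure.
policy_shock = 0.8 * ai_exposure + rng.normal(0, 0.1, n)
# Ground truth in this toy world: only the policy shock hurts employment.
employment_growth = -2.0 * policy_shock + rng.normal(0, 0.5, n)

def ols(y, *xs):
    """Ordinary least squares; returns [intercept, coefficients...]."""
    X = np.column_stack([np.ones_like(y), *xs])
    return np.linalg.lstsq(X, y, rcond=None)[0]

naive = ols(employment_growth, ai_exposure)
controlled = ols(employment_growth, ai_exposure, policy_shock)

print(f"naive 'AI effect':      {naive[1]:+.2f}")       # strongly negative
print(f"controlled 'AI effect': {controlled[1]:+.2f}")  # close to zero
```

The data are exactly the same in both regressions; only the hypothesis changes, and so does the conclusion.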
With regard to the "Canaries in the Coal Mine" working paper, from a qualitative standpoint, 2022 feels way too early for AI to have had a strong impact on software engineering!
The ReAct paper, which introduced the concepts behind what is now known as the "agent loop", was submitted in October 2022 and published in March 2023. The LangChain framework was released in October 2022 and only reached maturity in October 2025, while ChatGPT was released as a product in November 2022. Over 2025, and now in 2026, it has become clearer that agentic AI, which was born out of the encounter between software engineering and generative AI, has a much more disruptive impact than generative AI alone.
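For readers who have never met the term, here is a minimal, hypothetical sketch of what an "agent loop" means (call_llm and run_tool are stand-ins, not any real API): the model alternates between reasoning about a task, acting on the world through tools, and observing the results, which is exactly where software engineering meets generative AI.

```python
# Minimal ReAct-style agent loop sketch. call_llm and run_tool are
# placeholders for a real LLM provider and real tools (search, shell, APIs).

def call_llm(prompt: str) -> str:
    """Stand-in for a call to an LLM provider."""
    raise NotImplementedError

def run_tool(action: str) -> str:
    """Stand-in for executing a tool and returning its output."""
    raise NotImplementedError

def agent_loop(task: str, max_steps: int = 10) -> str:
    history = f"Task: {task}\n"
    for _ in range(max_steps):
        step = call_llm(history)        # model reasons and picks an action
        if step.startswith("FINAL:"):   # model decided it is done
            return step.removeprefix("FINAL:").strip()
        observation = run_tool(step)    # act on the world, then observe
        history += f"{step}\nObservation: {observation}\n"
    return "no answer within the step budget"
```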
In other words, it is misleading to bundle "AI" into a single concept, and in 2022 real-life professional use cases for LLMs barely existed.
Finally, the paper proceeds with many comparisons between jobs, but the short-run productivity gains brought by AI vary a lot from one job to another. Marketing and sales managers were typically the bridgehead of generative AI adoption, as they got immediate and impressive results from plain LLMs. This is not true for software developers, who, on the contrary, are still working now (in 2026) on getting real productivity benefits out of AI agents.
Generative AI is still barely adopted in French software engineering companies, and yet we do face an employment crisis hitting junior developers very hard.
If there is no firedamp in your mine and yet your canaries are dead, you'd want to look for another culprit.
Working papers are great, published papers are better
We are lucky to have free, instant access to the most up-to-date knowledge produced across the world. We are lucky to have highly skilled researchers like Erik Brynjolfsson, Bharat Chandar and Ruyu Chen sharing their work with us.
And we are lucky because we are blessed with critical thinking, a skill that allows researchers to work together to solidify their hypotheses until they become facts. This process has a name: peer review. Don't let this canary die in the coal mine.
