AI recruitment will recognize "its own handwriting", research finds: resumes written by the same AI see a 60% surge in acceptance rates.

動區 BlockTempo · 2026-04-27 03:07:45

Original title (Chinese): AI 徵才會認「自己的筆跡」,研究發現:同款 AI 寫的履歷,入取率直接暴增 60%
A joint study by three universities has uncovered a "self-preferencing" phenomenon in AI recruitment tools: they not only favor resumes written by AI, they specifically prefer versions rewritten by the same AI model. Candidates who use the right tool can raise their shortlisting rate by 23%–60%.

(Context: 14 prestigious universities were embroiled in a "secret prompt brainwashing AI" scandal; a Waseda University professor noted that too many people are lazily outsourcing thesis reviews to AI.)

(Background: A University of California study on "AI brain fog" found that 14% of office workers are being driven crazy by agents and automation.)

Think a well-written resume is enough to pass AI screening? The research suggests the key is not how well it is written, but which AI wrote it, and whether that matches the AI used for screening.

A summary post by X user @heynavtoor has gone viral over the past few days, accumulating more than 21,000 likes. It cites a paper jointly published by the University of Maryland, the National University of Singapore (NUS), and The Ohio State University, claiming that "for the same resume, the probability of the AI-rewritten version being selected by an AI recruitment tool is as high as 97.6%." That 97.6% figure, however, needs to be read in fuller context.

The paper, "AI Self-preferencing in Algorithmic Hiring: Empirical Evidence and Insights" (arXiv:2509.00462), was first posted in September 2025, updated on February 9, 2026, and has been accepted to the AAAI/ACM AIES Conference. In other words, this is not a brand-new study; it has simply resurfaced through viral circulation on social media.

The research design was rigorous: the team collected 2,245 resumes written by real humans, taken from job platforms before the advent of ChatGPT, ensuring the originals were purely human-authored.
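The rewrite stage described above — seven models each producing their own version of every human-written resume — can be sketched as follows. This is an illustrative sketch only: `rewrite` and `build_rewrite_corpus` are hypothetical stand-ins, not the study's actual code or any real model API.

```python
# Illustrative sketch of the rewrite stage: each of the seven models
# produces its own version of every human-written resume.
# `rewrite` is a hypothetical placeholder, not a real model call.

MODELS = ["GPT-4o", "GPT-4o-mini", "GPT-4-turbo",
          "LLaMA 3.3-70B", "Mistral-7B", "Qwen 2.5-72B", "DeepSeek-V3"]

def rewrite(model: str, resume: str) -> str:
    """Stand-in for an actual LLM rewrite request."""
    return f"[{model} rewrite] {resume}"

def build_rewrite_corpus(resumes):
    # One rewritten copy of every resume per model, keyed by model name.
    return {m: [rewrite(m, r) for r in resumes] for m in MODELS}

corpus = build_rewrite_corpus(["resume A", "resume B"])
```

In the real experiment each rewritten corpus would then be fed to every model acting as a screener, yielding the head-to-head comparisons the paper analyzes.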
They then had seven mainstream AI models rewrite these resumes: the closed-source models GPT-4o, GPT-4o-mini, and GPT-4-turbo, and the open-source models LLaMA 3.3-70B, Mistral-7B, Qwen 2.5-72B, and DeepSeek-V3. The paper reports that, among the larger aligned models, the rate at which AI-rewritten resumes are prioritized by the same AI acting as screener ranges from 68% to 88%; GPT-4o's self-preferencing rate exceeds 80%. The 97.6% figure circulating in tweets is a single data point from a specific extreme condition or test setting, not the paper's overall conclusion.

A deeper and more concerning finding is "LLM-to-LLM self-preferencing": each model not only prefers AI-rewritten versions, it specifically prefers the version it rewrote itself. The paper notes that DeepSeek-V3 selects its own rewritten resumes 28% more often than it selects the LLaMA versions; the 69% claimed in the tweet likely reflects extreme values from a specific subset.

The research team also ruled out the obvious counter-argument: "AI rewrites are simply better, so of course AI chooses AI." They had human raters independently score the same batch of resumes for clarity, coherence, and persuasiveness, then re-ran the experiment. After controlling for quality, the self-preferencing effect persisted: even when human reviewers judged the original human-written version better, the AI screening tool still selected the version it had rewritten itself.

The part of the study with the most real-world impact is the professional-context simulation. Across 24 different professions, the team compared candidates who prepared their resumes with "the same AI as the recruitment tool" against those who used "a different AI." The conclusion: candidates using the same AI have a 23%–60% higher shortlisting rate than the others.
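The self-preference rate the paper reports can be illustrated with a small calculation. In the sketch below — synthetic data, not the study's — each trial records which screening model made a pick and which model authored the chosen rewrite; a screener's self-preference rate is the fraction of its picks that fell on its own rewrite.

```python
from collections import defaultdict

def self_preference_rates(trials):
    """trials: iterable of (screener_model, author_of_chosen_rewrite)."""
    picked_own = defaultdict(int)
    total = defaultdict(int)
    for screener, author in trials:
        total[screener] += 1
        if author == screener:
            picked_own[screener] += 1
    # Fraction of each screener's picks that were its own rewrite.
    return {s: picked_own[s] / total[s] for s in total}

# Synthetic trials for illustration only.
trials = [
    ("GPT-4o", "GPT-4o"), ("GPT-4o", "GPT-4o"),
    ("GPT-4o", "GPT-4o"), ("GPT-4o", "LLaMA 3.3-70B"),
    ("DeepSeek-V3", "DeepSeek-V3"), ("DeepSeek-V3", "LLaMA 3.3-70B"),
]
rates = self_preference_rates(trials)
# Here GPT-4o picked its own rewrite in 3 of 4 trials, a rate of 0.75.
```

An unbiased screener choosing between two rewrites at random would score about 0.5 on this metric, which is why the paper's reported 68%–88% range signals systematic bias.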
In other words, in the current AI recruitment ecosystem, the key to resume optimization is no longer "how well you write" but "guessing which AI the company uses for screening and preparing your resume with the same one." If this logic holds, hiring has quietly shifted from a competition of ability to a systemic bias driven by information asymmetry.

The study cites industry data indicating that 99% of large global enterprises now feed resumes into AI screening tools, most of them running GPT-series models. The problem is that job seekers have almost no way of knowing which model a company's recruitment system is running. This makes the strategy of "choosing the same AI" difficult to put into practice.