ArXiv to Ban Researchers for a Year if They Submit AI Slop

TL;DR

ArXiv will impose a one-year ban on authors caught submitting AI-generated research with clear evidence of errors or fabricated content. The policy aims to address the rise of ‘AI slop’ in academic papers, especially in computer science.

ArXiv, the open-access preprint repository, will ban authors for one year if their submissions contain incontrovertible evidence of AI-generated errors or misconduct, marking a significant step in addressing the proliferation of low-quality, AI-produced research papers.

Thomas Dietterich, chair of the computer science section at ArXiv, outlined the new policy in a post on X (formerly Twitter), stating that authors submitting papers with clear signs of AI slop—such as hallucinated references or misleading meta-comments—will face a one-year ban from the platform. The ban applies to cases where evidence shows authors did not verify AI-generated content, and decisions will involve a formal moderation process with opportunities for appeal. This policy follows ArXiv’s November 2025 decision to stop accepting computer science review articles due to an influx of AI-related submissions, and a January 2026 requirement for first-time submitters to have endorsements from established authors amid rising concerns over fraudulent citations.

Why It Matters

This development is significant because ArXiv is a major platform for early-stage research dissemination, especially in computer science. The move aims to curb the spread of AI-generated ‘slop’ that can undermine scientific integrity, strain peer review, and introduce fabricated data and references into the research ecosystem. It reflects broader concerns about AI’s impact on academic quality and the need for stricter controls in open-access repositories.

Background

In 2025, ArXiv faced a surge in submissions, particularly in computer science, driven by generative AI tools that make it easier to produce papers that often lack original research. The platform responded by restricting certain categories and requiring endorsements for new authors. A recent study by Columbia University found a rising rate of fabricated citations in biomedical papers—one in 277 in early 2026—highlighting the growing problem of AI-generated misinformation in research. The platform’s upcoming independence from Cornell Tech in July aims to bolster its capacity to enforce stricter policies against AI slop.

“If generative AI tools generate inappropriate language, plagiarized content, biased content, errors, mistakes, incorrect references, or misleading content, and that output is included in scientific works, it is the responsibility of the author(s). We have recently clarified our penalties for this.”

— Thomas Dietterich

According to reporting on arXiv’s planned independence, the change will help the platform raise money from a wider range of donors, which Greg Morrisett said is needed to deal with the emergence of ‘AI slop.’

What Remains Unclear

It remains unclear how strictly the ‘incontrovertible evidence’ standard will be applied in practice. Details about the specific detection methods and how disputes will be resolved are still emerging.

What’s Next

ArXiv is expected to implement the new ban policy in the coming weeks, with further clarifications on enforcement procedures. Key developments to watch include how the community responds and whether the policy measurably reduces AI slop.

Key Questions

What constitutes incontrovertible evidence of AI-generated slop?

Examples include hallucinated references, meta-comments from language models, or fabricated data that clearly indicate AI involvement, as clarified by ArXiv officials.

Can authors appeal a ban?

Yes, the policy allows for appeals, and decisions will be reviewed through a formal process involving moderation and section chair confirmation.

Will this policy affect all categories on ArXiv?

It currently applies primarily to computer science submissions, where AI-generated content has been most problematic, but future scope is uncertain.

How will ArXiv detect AI-generated slop?

The specifics are still being developed, but likely involve moderators reviewing flagged submissions for signs of AI involvement, including hallucinated references and meta-comments.
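To make the idea of flagging hallucinated references concrete, here is a minimal sketch of one check that could plausibly be automated. This is not arXiv’s actual tooling, and the reference strings are invented for illustration; it only flags structurally malformed DOIs, while a real pipeline would also query a registry such as Crossref to confirm each identifier resolves.

```python
import re

# Hypothetical example references; the second has a malformed DOI of the
# kind sometimes produced by language models.
references = [
    "Smith, J. (2021). Deep learning at scale. doi:10.1000/xyz123",
    "Doe, A. (2023). Quantum gradient descent. doi:10.xyz/abc",
]

# A syntactically valid DOI starts with "10.", a 4-to-9-digit registrant
# code, a slash, and a non-empty suffix.
DOI_PATTERN = re.compile(r"10\.\d{4,9}/\S+")

def flag_suspect_references(refs):
    """Return references whose doi: field fails the structural check.

    This only catches malformed identifiers; verifying that a
    well-formed DOI actually resolves requires a registry lookup.
    """
    suspects = []
    for ref in refs:
        match = re.search(r"doi:\s*(\S+)", ref, flags=re.IGNORECASE)
        if match and not DOI_PATTERN.match(match.group(1)):
            suspects.append(ref)
    return suspects

print(flag_suspect_references(references))
```

Structural checks like this are cheap enough to run on every submission, which is why fabricated citations are often cited as the most detectable form of AI slop.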
