2% of ICML papers desk-rejected because their authors used LLMs when writing reviews

📄 Chinese Summary

The review process at the ICML conference has drawn attention: data show that roughly 2% of submitted papers were desk-rejected because their authors used large language models (LLMs) when writing reviews. This has sparked discussion about academic integrity and review quality. Using LLMs may standardize review comments, undermining the originality and distinctiveness of the assessments. Reactions within academia are mixed: some argue for stricter regulation of LLM use to keep the review process fair and effective, while others hold that judicious use of LLMs can improve review efficiency and help reviewers better understand and evaluate papers.

📄 English Summary

2% of ICML papers desk-rejected because the authors used LLMs in their reviews

Recent data from the ICML conference reveals that approximately 2% of submitted papers were desk-rejected because their authors used large language models (LLMs) when writing reviews. This trend has sparked discussion about academic integrity and the quality of the review process. The use of LLMs may lead to a standardization of review comments, potentially undermining the originality and distinctiveness of the assessments. Reactions within the academic community are mixed: some advocate stricter regulation of LLM usage to ensure fairness and effectiveness in the review process, while others argue that judicious use of LLMs can enhance review efficiency and help reviewers better understand and evaluate the papers under review.

Powered by Cloudflare Workers + Payload CMS + Claude 3.5

Data sources: OpenAI, Google AI, DeepMind, AWS ML Blog, HuggingFace, and others