
AOM Artificial Intelligence (AI) Policy
Academy of Management Guiding Principles for AI in Management Scholarship
AOM is adopting the following foundational principles regarding the use of AI in management scholarship. AI has the potential to enhance and advance the frontier of science, but when used inappropriately, it can pose risks. We hold that AI must never replace human judgment or accountability. These principles recognize both the benefits and risks of AI use in research: its potential to accelerate discovery, improve rigor, and broaden accessibility, as well as its challenges around bias, transparency, and misuse.
AI Policy for Authors
Appropriate use of AI can advance the frontier of research by improving the quality and efficiency of scientific discovery. However, misuse of AI can hinder scientific progress and raise serious ethical concerns. Authors may use AI tools in their research, but AI cannot replace human judgment or accountability at any stage of the research process.
Accordingly, in their use of AI, authors are expected to:
- Take full responsibility for the accuracy of all content in their manuscripts and the integrity of their research process;
- Follow a two-step process when reporting the use of AI in their research:
- a. Disclosure: For each stage of the research process, authors must identify whether AI tools were used;
- b. Accountability: If AI tools were used, authors must explicitly confirm that they carefully reviewed, verified, and accepted any AI-involved output. Upon request from editors, authors may also be asked to provide additional details describing how AI was used or verified in specific tasks.
The research stages for disclosure are as follows. For each stage, authors may also provide a brief, optional clarification:
- Conceptualization — Idea generation, problem framing, literature review, or theoretical development.
- Research Design — Developing research methods, instruments, or data collection strategies.
- Data Preparation and Analysis — Cleaning, coding, analyzing, or interpreting data.
- Presentation of Results — Creating tables, figures, or other visual representations.
- Writing and Editing — Drafting, refining, or finalizing manuscripts.
AI Policy for Reviewers
Reviewers are expected to exercise their own independent judgment and expertise in evaluating manuscripts. The use of AI should never substitute for the reviewer’s personal assessment of the quality, contribution, or integrity of a submission. To preserve confidentiality and intellectual integrity:
- Reviewers should not upload any portion of a manuscript, including text, figures, or data, into any AI tool or platform.
- Reviewers may, however, use AI tools to assist in editing or improving the clarity of their own written reviews.
Guidance for Authors
Authors are required to:
- Notify AOM of any AI use as part of the manuscript submission process (as outlined above).
- Clearly indicate the use of language models in the manuscript cover letter and acknowledgements (for journal submissions) or on the manuscript title page (for Annual Meeting submissions), including which model was used and for what purpose.
- Verify the accuracy, validity, and appropriateness of the content and any citations generated by language models and correct any errors or inconsistencies.
- Provide a list of sources used to generate content and citations, including those generated by language models.
- Be conscious of the potential for plagiarism, as the language model may have reproduced substantial text from other sources. Check the original sources to confirm that no one else’s work is being reproduced without attribution.
- Acknowledge the limitations of language models in the manuscript, including the potential for bias, errors, and gaps in knowledge.
- Cite their AI use as outlined in the Chicago Manual of Style. See https://www.chicagomanualofstyle.org/qanda/data/faq/topics/Documentation/faq0422.html
We will take appropriate corrective action where we identify published articles with undisclosed use of such tools.
Guidance for Editors and Reviewers
Editors and reviewers should evaluate the appropriateness of authors’ use of AI and ensure that any AI-generated content is accurate and valid.
Editors and reviewers must uphold the confidentiality of the peer review process. Editors must not share information about submitted manuscripts or peer review reports with generative AI tools or LLMs such as ChatGPT. Reviewers must not use artificial intelligence tools, including but not limited to ChatGPT, to generate review reports.
These guidelines may evolve further as we work with our publishing partners to understand how emerging technologies can help or hinder the process of preparing research for publication.
Please visit the AOM.org author resources page for updates and the latest information.
AI and AOM Code of Ethics Statement
4.2.1.2. AOM members explicitly cite others’ work and ideas, including their own, even if they are not quoted verbatim or paraphrased. This standard applies whether the previous work is published, unpublished, or electronically available. Work submitted to AOM must be created by the authors and not the product of artificial intelligence tools unless appropriate to the research question and properly cited.