SRCD Generative Artificial Intelligence (AI) Policy

Description

As recommended by the Publications Committee and adopted by the Governing Council, August 2024.

Components
Text

The use of generative artificial intelligence (AI) tools is on the rise in scientific publishing, and such tools are increasingly used as assistants to prepare manuscripts and draft peer reviews (Conroy, 2023). AI tools may also be beneficial for authors whose first language is not English. At the same time, scientific societies, editors, and academic publishers must attend to the integrity, quality, accuracy, and authenticity of the research they publish. Because best practices and ethical standards for the use of AI in scholarly publishing are shifting rapidly, any policy on the use of AI in publishing will be revisited on an annual basis. In this policy, we provide guidelines for authors and peer reviewers on the use of AI in SRCD's publications.

Caution to Authors

Misinformation/Disinformation/Hallucinations. Generative AI software is trained on existing information and can produce “hallucinations,” whereby the software generates incorrect yet convincing output when training data are insufficient, the model rests on incorrect assumptions, or the training data contain biases. This increases the potential for the promotion of misinformation or disinformation, hate speech, or bias (Lorenz et al., 2023).

Perpetuate Biases in Training Data. Generative AI is liable to reproduce and perpetuate the biases present in the resources used as training data (Lorenz et al., 2023).

Intellectual Property/Copyright. Generative AI is trained on vast amounts of data, including copyrighted material. There are ongoing debates worldwide about whether AI-generated materials can be copyrighted. At present in the US, copyright can only be claimed by a human author and cannot extend to AI software. Furthermore, any material copied or uploaded into generative AI tools may be used for ongoing model development (Lorenz et al., 2023).

Guidelines for Authors

In alignment with the Committee on Publication Ethics (COPE), SRCD's policy is that AI tools do not meet the requirements for authorship and, therefore, cannot be listed as an author on SRCD publications.

When generative AI is used in the drafting or writing of a manuscript, or in the generation of images or graphics, for an SRCD publication, authors are required to:

  1. Disclose in the materials or methods section how the AI software was used and in which sections of the manuscript, and provide an accurate citation for the specific AI software or tool.
  2. Include the full output of the AI tool as supplemental material.
  3. Articulate any such use upon submission as part of the author checklist and in the letter to the editor.

Authors are responsible and will be held accountable for all errors, biases, or misrepresentations that arise from use of AI tools and any breach of publication ethics.

This policy does not extend to the use of spell-and-grammar-check software, citation software, or plagiarism detection software. Use of these tools does not require explicit acknowledgement in the manuscript, nor disclosure to the editor.

Guidelines for Reviewers

Reviewers must not use AI tools in developing or writing their reviews. Doing so may breach the confidentiality of the review process or violate copyright.

References

Conroy, G. (2023). Nature, 622, 234–236.

Lorenz, P., Perset, K., & Berryhill, J. (2023). Initial policy considerations for generative artificial intelligence. OECD Artificial Intelligence Papers (September 2023, No. 1). https://www.oecd-ilibrary.org/science-and-technology/initial-policy-considerations-for-generative-artificial-intelligence_fae2d1e6-en