Policy on the use of artificial intelligence (AI) in articles

Version 1.0

In recent years, artificial intelligence (AI) tools have proliferated rapidly, moving from anecdotal use in highly specialised fields to widespread application across society. This phenomenon presents science not only with interesting new opportunities, but also with challenges.

For example, since the emergence of chatbots, it has become possible to produce texts that, while linguistically accurate, fluent and convincing, may be compromised in various ways. Several sources warn that these tools risk introducing bias, distortion, irrelevance, misrepresentation and plagiarism,[1][2] much of it caused by the algorithms that control their generation, which depend to a large extent on the content of the materials used in their training.

There is thus sufficient evidence to argue that AI poses significant risks to both the creation and dissemination of knowledge due to its potential to amplify misinformation and disinformation,[3] which in turn raises new legal issues related to intellectual property.[4]

It is therefore necessary to put in place mechanisms to ensure the appropriate use of AI in science, regardless of the technology used or the use to which it is put.

For all these reasons, the journal is committed to ensuring academic integrity and transparency in research in its field, and requests that authors adhere to the following guidelines for the use of artificial intelligence in the preparation of articles.


Unacceptable uses

1. Authorship of the article. AI cannot be listed as an author or co-author of an article. Although the legal definition of authorship varies from country to country, in Spain, as in most jurisdictions, authorship can only be attributed to a natural person.[5]

2. Autonomous generation of scientific content. AI should not be used to generate hypotheses, interpret data or formulate conclusions without the supervision and validation of the authors.

3. Undisclosed use. Failure to disclose the use of AI in an article is considered academic misconduct and may result in rejection of the submission or retraction of the published article.[5]


Accepted uses

1. Writing and editing assistance. AI may be used to improve the clarity, grammatical accuracy and fluency of the text, but the authors are responsible for the final content and must verify its accuracy.

2. Literature review. Authors can use AI to assist in the process of literature review and article summary, always ensuring that the information generated is accurate and supported by verifiable sources.

3. Data analysis and modelling. The integration of AI for data analysis and modelling is allowed, as long as it is described in detail in the methodology and the reliability of the resulting data is guaranteed.

4. Plagiarism and reference checking. Authors can use AI tools to check the originality of their work and improve the accuracy of citations and references.


Any use of AI must be explicitly disclosed in the article, identifying the tool used, its purpose, and the prompts employed (or the actions needed to reproduce the result), in an AI use statement published at the end of the article. This statement should include:

  • Name and version of the AI tools used.
  • A brief description of how the AI was used (e.g., assistance with literature review, editing and drafting, data analysis, etc.).
  • Measures taken to obtain the results of each task (e.g., prompts, programming, etc.) and to ensure reproducibility.
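
To illustrate, a statement along the following lines (the tool name, version and tasks are hypothetical, given here only as an example of the required format) would satisfy these requirements:

    AI use statement. ChatGPT (GPT-4, OpenAI) was used to improve the
    clarity and grammar of the Introduction and Discussion sections. The
    prompt "Revise the following paragraph for clarity without changing
    its meaning" was applied to each paragraph in turn. All output was
    reviewed and verified by the authors, who take full responsibility
    for the final content.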

In addition, authors must report the use of AI during the submission process by ticking the appropriate box on the checklist.


Editorial workflow

Editors and reviewers must indicate any use of AI tools in the evaluation of the manuscript and in the preparation of reviews and correspondence. This information should be made available to authors and others involved in the editorial process.

To ensure the privacy of research and review data, all parties should be aware that most AI tools retain the instructions and data sent to them, including the content of the manuscript, and that, for example, providing an author's article to a chatbot is a breach of confidentiality.[5]


References

[1] Elali, F. R., & Rachid, L. N. (2023). «AI-generated research paper fabrication and plagiarism in the scientific community». Patterns (New York, N.Y.), 4(3), 100706. https://doi.org/10.1016/j.patter.2023.100706

[2] COPE (2023). «Guest editorial: the challenge of AI chatbots for journal editors». https://publicationethics.org/news-opinion/guest-editorial-challenge-ai…

[3] Bhuiyan J. (2023). «OpenAI CEO calls for laws to mitigate ‘risks of increasingly powerful' AI». The Guardian. https://www.theguardian.com/technology/2023/may/16/ceo-openai-chatgpt-a…. Accessed May 27, 2023.

[4] Appel G, Neelbauer J, Schweidel DA. (2023). «Generative AI has an intellectual property problem». Harvard Business Review. https://hbr.org/2023/04/generative-ai-has-an-intellectual-property-prob…. Accessed May 27, 2023.

[5] Zielinski C, Winker MA, Aggarwal R, Ferris LE, Heinemann M, Lapeña JF, Pai SA, Ing Edsel, Citrome L, Alam M, Voight M, Habibzadeh F. (2023). «Recomendaciones de WAME sobre “chatbots” e inteligencia artificial generativa en relación con las publicaciones académicas». Colombia Médica, 54(3), e1015868. http://doi.org/10.25100/cm.v54i3.5868


Licence

Document produced by the UAB Publications Service and distributed under a CC-BY licence.