Artificial Intelligence Use Policy

Use of Artificial Intelligence

Editorial Unimar of Universidad Mariana adopts the guidelines of the Committee on Publication Ethics (COPE) on the use of Artificial Intelligence (AI) in manuscripts and publications; it also endorses the Heredia Declaration (2024).

Authorship of manuscripts

An AI tool cannot be classified as an author or co-author (COPE Council, 2025); therefore, authorship cannot be attributed to such a tool. “Authorship is a human responsibility and involves tasks that should only be performed by humans” (Tavira, 2024, p. 1).

A manuscript will be rejected if it is discovered that part or all of it was created with AI tools without informing the editor. If such conduct comes to light after publication, the article will be retracted.

Accepted and restricted use of artificial intelligence tools

Editorial Unimar requires authors to uphold the principles of honesty, transparency, and responsibility, and to declare the use of AI, language models, machine learning, or similar technologies whenever these have been used to create content or to modify or edit manuscripts.

  1. Writing: AI may be used to check the writing of manuscripts, but authors must ensure that the meaning of the content has not been altered, guaranteeing coherence, precision, and originality. Writing assistance differs from content generation, since the latter may lead to deceptive practices or plagiarism (Universidad Tecnológica de Pereira, 2025).

Declaration of AI Use*: In preparing this article, the author(s) used the tool/service ‘name of tool/service’ to refine the wording. After using this tool/service, the author(s) carefully reviewed and modified the content; therefore, they take full responsibility for the publication.

*Please include this statement after the references section of your manuscript.

  2. Translation: AI may be used to improve a translation prepared by the authors; however, this process must be supervised to ensure its quality. It is recommended that the translation be done and checked by an expert.

Declaration of AI Use*: In preparing this article, the author(s) used the tool/service ‘name of tool/service’ to refine the translation. After using this tool/service, the author(s) carefully reviewed and modified the content; therefore, they take full responsibility for the publication.

*Please include this statement after the references section of your manuscript.

  3. Data analysis: AI tools may be used to analyze data, provided the authors understand and oversee the methods applied. In these cases, and following the recommendations of the Heredia Declaration (2024) and Flanagin et al. (2023), because these tools formed part of the study design or research methods, their use must be reported as follows:

Method

  • The AI model or type, its version, and the date of use.
  • State how the tool was used, and present the interactions, prompts, and instructions entered into it. Authors must attach the backup file generated by the AI.
  • Indicate which products and formats generated by the tool have been integrated into the results section presented in the manuscript.

Results

  • The analysis or content selected by the author for inclusion in this section must be fully identified with its citation according to the journal’s standards.

References

  • Reference the AI model or type used, following the journal’s citation standard.

Note: The use of AI tools to arbitrarily manipulate or alter data in research or publication is unacceptable. If such practices or behaviors are detected, the author may be banned indefinitely, in addition to rejection of the submission or retraction of the published work.

Declaration of AI Use*: In preparing this article, the author(s) used the tool/service ‘name of tool/service’ for data analysis. After using this tool/service, the author(s) carefully reviewed and modified the content; therefore, they take full responsibility for the content of the publication.

*Please include this statement after the references section of your manuscript.

  4. Figures and images: AI-generated or AI-modified tables or figures are not currently permitted in articles (Elsevier, 2023), as they may pose legal risks related to copyright and integrity because of their susceptibility to ‘manipulation’; this position on AI-generated multimedia may change as copyright law and ethical standards evolve.

As part of the editorial process, “adjustments to brightness, contrast, or color balance may occasionally be made if they do not affect the original information” (Tavira, 2024, p. 1), all under supervision. Tables and figures are understood to be: “photographs, graphs, data tables, medical images, image fragments, computer code, and formulas. The term ‘manipulation’ includes enlarging, hiding, moving, removing, or introducing a specific feature within an image or figure” (Taylor & Francis, 2023, para. 20).

Declaration of AI Use*: In preparing this article, the author(s) used the tool/service ‘name of tool/service’ to adjust brightness, contrast, and/or color balance. After using this tool/service, the author(s) carefully reviewed and modified the content; therefore, they take full responsibility for the content of the publication.

*Please include this statement after the references section of your manuscript.

Use of artificial intelligence by peer reviewers

To date, peer reviewers are not authorized to use AI tools to evaluate or formulate opinions on assigned manuscripts. It is their responsibility to provide their opinions and recommendations responsibly and objectively (Springer, 2023; Taylor & Francis, 2023).

Risks

This policy follows the position of Taylor & Francis (2023), which notes that “while generative AI has immense possibilities to enhance the creativity of authors, there are certain risks associated with the current generation of generative AI tools” (para. 3). Some of these risks are:

  1. Inaccuracy and bias: Generative AI tools are statistical (as opposed to factual) in nature, and as such may introduce inaccuracies, falsehoods (so-called hallucinations), or biases that can be difficult to detect, verify, and correct.
  2. Lack of attribution: Generative AI often lacks the global academic community’s standard practice of correctly and accurately attributing ideas, citations, or references.
  3. Confidentiality and Intellectual Property Risks: Generative AI tools are currently often deployed on third-party platforms that may not provide sufficient standards of confidentiality, data security, or copyright protection.
  4. Unintended uses: Generative AI providers may reuse input or output data from user interactions (e.g., for AI training). This practice could infringe the rights of authors and publishers, among others. (paras. 7-10)

In conclusion, the goal of this AI Use Policy is to “protect personal, confidential, sensitive, or third-party data when there is no explicit authorization to use it as part of queries to an AI model” (Heredia Declaration, 2024, p. 5) and to promote original authorship through the principles of honesty, transparency, and responsibility. This policy will be updated as practice and standards in this area evolve.

References

COPE Council. (2025). COPE position - Authorship and AI - English. https://doi.org/10.24318/cCVRZBms

Elsevier. (2023). The use of AI and AI-assisted technologies in writing for Elsevier. https://www.elsevier.com/about/policies/publishing-ethics-books/the-use-of-ai-and-ai-assisted-technologies-in-writing-for-elsevier

Flanagin, A., Bibbins-Domingo, K., Berkwits, M., & Christiansen, S. L. (2023). Nonhuman "Authors" and Implications for the Integrity of Scientific Publication and Medical Knowledge. JAMA, 329(8), 637-639. https://doi.org/10.1001/jama.2023.1344

Heredia Declaration: Principles on the use of Artificial Intelligence in scientific publishing (L. Penabad-Camacho, M. A. Penabad-Camacho, A. Mora-Campos, G. Cerdas-Vega, Y. Morales-López, M. Ulate-Segura, A. Méndez-Solano, N. Nova-Bustos, M. F. Vega-Solano, & M. M. Castro-Solano, Trans.). (2024). Revista Electrónica Educare, 28(S), 1-10. https://doi.org/10.15359/ree.28-S.19967

Revista Española de Pedagogía. (2024). Policies. https://www.revistadepedagogia.org/rep/policies.html

Revista Actualidades Biológicas. (2025). Artificial intelligence use policy. https://revistas.udea.edu.co/index.php/actbio/IA

Springer. (2023). Artificial intelligence (AI). https://www.springer.com/de/editorial-policies/artificial-intelligence--ai-/25428500

Tavira, R. (2024). AI and editorial policies of academic journals: Elsevier, Springer-Nature, Taylor & Francis, Wiley, Cambridge UP, AIP, IEEE, AAAS, COPE, WAMA and JAMA. https://boletinscielomx.blogspot.com/2024/08/ia-y-politicas-editoriales-de-revistas.html

Taylor & Francis. (2023). AI policy. https://taylorandfrancis.com/our-policies/ai-policy/

Tecnología en Marcha. (2024). Artificial Intelligence (AI) use policy. https://revistas.tec.ac.cr/index.php/tec_marcha/libraryFiles/downloadPublic/115

Universidad Tecnológica de Pereira. (2025). Policy on the use of artificial intelligence. https://revistas.utp.edu.co/index.php/revistaciencia/Politica-sobre-el-uso-de-la-inteligencia-artificial