Asian Publication Corporation welcomes the new opportunities offered by Generative AI tools, particularly in enhancing idea generation and exploration, supporting authors writing in a non-native language, and accelerating the research and dissemination process.

Generative AI tools can produce diverse forms of content, spanning text generation, image synthesis, audio, and synthetic data. Some examples include ChatGPT, Copilot, Gemini, Claude, NovelAI, Jasper AI, DALL-E, Midjourney, Runway, Rytr AI, DeepSeek, Canva, and many others.

Generative AI policies cover the use of AI tools in content creation by all participants in the journal editorial process, including authors, reviewers, and editors. They aim to ensure transparency and disclosure of the role of AI, to redefine authorship categories, to maintain quality control and verification, and to raise awareness. While Generative AI has immense capabilities to enhance creativity for authors, certain risks are associated with the current generation of Generative AI tools.

Some of the risks associated with the way Generative AI tools work today are: 

  1. Inaccuracy and bias: Generative AI tools are statistical rather than factual in nature and, as such, can introduce inaccuracies, falsehoods (so-called hallucinations), or bias, which can be hard to detect, verify, and correct. 
  2. Lack of attribution: Generative AI often fails to follow the global scholarly community's standard practice of correctly and precisely attributing ideas, quotes, or citations. 
  3. Confidentiality and Intellectual Property Risks: At present, Generative AI tools are often used on third-party platforms that may not offer sufficient standards of confidentiality, data security, or copyright protection.  
  4. Unintended uses: Generative AI providers may reuse the input or output data from user interactions (e.g. for AI training). This practice could potentially infringe on the rights of authors and publishers, amongst others.  

Authors 

Authors are fully responsible for the content of their work. AI tools cannot be listed as authors of an article, as they lack subjectivity and cannot assume responsibility for the generated content.

Authors are responsible for ensuring that the content of their submissions meets the required standards of rigorous scientific and scholarly assessment, research and validation, and is created by the author. Note that some journals may not allow use of Generative AI tools beyond language improvement, therefore authors are advised to consult with the editor of the journal prior to submission. 

Authors should disclose in their manuscript the use of AI and AI-assisted technologies and a statement will appear in the published work. Declaring the use of these technologies supports transparency and trust between authors, readers, reviewers, editors and contributors and facilitates compliance with the terms of use of the relevant tool or technology.

Authors should not submit manuscripts where Generative AI tools have been used in ways that replace core researcher and author responsibilities, for example:  

  • text or code generation without rigorous revision 
  • synthetic data generation to substitute missing data without robust methodology  
  • generation of any type of inaccurate content, including abstracts or supplemental materials 

These types of cases may be subject to editorial investigation.  

Asian Publication Corporation currently does not permit the use of Generative AI in the creation or manipulation of images and figures, or of original research data, for use in our publications. The term “images and figures” includes pictures, charts, data tables, medical imagery, snippets of images, computer code, and formulas. The term “manipulation” includes augmenting, concealing, moving, removing, or introducing a specific feature within an image or figure.

Utilising Generative AI and AI-assisted technologies in any part of the research process should always be undertaken with human oversight and transparency. Research ethics guidelines are still being updated regarding current Generative AI technologies.

Where Generative AI tools have been used, authors must declare their use by adding a statement at the end of the manuscript when the paper is first submitted. The statement will appear in the published work and should be placed in a new section before the reference list. An example:

Title of new section: Declaration of the Use of Generative AI Tools

Statement: During the preparation of this work the author(s) used [NAME TOOL / SERVICE] in order to [REASON]. After using this tool/service, the author(s) reviewed and edited the content as needed and take(s) full responsibility for the content of the published article.

Editors and Peer Reviewers 

Editors’ and peer reviewers’ use of manuscripts in Generative AI systems may pose a risk to confidentiality, proprietary rights, and data, including personally identifiable information. Therefore, editors and peer reviewers must not upload files, images, or information from unpublished manuscripts into Generative AI tools. Failure to comply with this policy may infringe the rightsholder’s intellectual property rights.

Editors  

Editors are the shepherds of quality and responsible research content. Therefore, editors must keep submission and peer review details confidential. Managing the editorial evaluation of a scientific manuscript implies responsibilities that can only be attributed to humans.

Editors should check with us prior to using any Generative AI tools, unless they have already been informed that the tool and proposed use of the tool is authorised. Journal Editors should refer to our Editor Resource page for more information on our code of conduct.  

Peer reviewers 

Peer review is at the heart of the journal editorial process, and Eurasian Journal of Chemistry abides by the highest standards of integrity in this process. Reviewing a scientific manuscript implies responsibilities that can only be attributed to humans. Reviewers should not use Generative AI tools to assist in the scientific review of a paper: the critical thinking and original assessment needed for peer review are outside the scope of this technology, and there is a risk that it will generate incorrect, incomplete, or biased conclusions about the manuscript. The reviewer is responsible and accountable for the content of the review report.

When a researcher is invited to review another researcher’s paper, the manuscript must be treated as a confidential document. Reviewers should not upload a submitted manuscript or any part of it into a generative AI tool as this may violate the authors’ confidentiality and proprietary rights and, where the paper contains personally identifiable information, may breach data privacy rights.

This confidentiality requirement extends to the peer review report, as it may contain confidential information about the manuscript and/or the authors. For this reason, reviewers should not upload their peer review report into an AI tool, even if it is just for the purpose of improving language and readability.
