AI Tools Usage Policy

⚡ AI Policy Summary: Journal of Business Application (JBA) follows the COPE (Committee on Publication Ethics) Position Statement on Artificial Intelligence and aligns with the latest Scopus requirements for AI usage in scholarly publications. This policy establishes ethical standards for the use of AI tools by authors, reviewers, and editors to ensure transparency, accountability, and integrity of the scientific record.

A. PURPOSE AND SCOPE

This policy establishes ethical standards for the use of Artificial Intelligence (AI) and AI-assisted technologies in the preparation, review, and publication of scholarly works in the Journal of Business Application. It applies to all authors, peer reviewers, editors, and editorial staff involved in the publication process.

The journal adheres to COPE's Core Practices and the ICMJE recommendations regarding the use of generative AI and large language models (LLMs) in scholarly publications. This policy is designed to:

  • Safeguard research integrity and transparency
  • Clarify the responsibilities of all parties involved
  • Ensure appropriate disclosure of AI tool usage
  • Maintain confidentiality and data protection
  • Align with international publishing standards

Definition: "AI or AI-assisted tools" include large language models (e.g., ChatGPT, GPT-4, Gemini, Claude), generative AI for text, images, or data analysis, and AI-assisted software for writing, paraphrasing, translation, or visualization.

B. COPE POSITION STATEMENT ON AI AUTHORSHIP

"AI tools cannot fulfill the requirements for authorship. Authors are fully responsible for the content of their manuscript, even those parts produced by an AI tool, and are thus liable for any breach of publication ethics."

— COPE Position Statement on Artificial Intelligence

In accordance with COPE guidance, JBA affirms that:

  • AI tools cannot be listed as authors or co-authors of any manuscript submitted to the journal.
  • Authorship requires human intellectual contribution including conception, design, data analysis, interpretation, and approval of the final version.
  • Authors bear full responsibility for the accuracy, originality, and ethical compliance of all content, including any generated or assisted by AI tools.
  • AI systems cannot hold ORCID iDs or be listed in author affiliations.

Important: Any submission that lists an AI tool as an author will be rejected without review. Authors must ensure that all co-authors are human and meet the journal's authorship criteria.

C. USE OF AI TOOLS BY AUTHORS

Permitted Uses (with Disclosure)

Authors may use AI tools for the following purposes, provided full disclosure is made:

  • Language Editing: Grammar correction, spelling, punctuation, and style improvement.
  • Translation Assistance: Translating text written by the authors, with human verification of the result.
  • Data Analysis (Methodological): AI used as part of the research methodology (e.g., machine learning for financial forecasting), fully described in the Methods section.
  • Formatting Assistance: Reference formatting and structural organization.

Restricted and Prohibited Uses

  • AI as Author: Listing AI tools as authors or co-authors.
  • Fabrication of Data: Using AI to generate or falsify research data, results, or financial models.
  • AI-Generated Citations: Using AI to generate references or citations without verification (risk of fabricated, "hallucinated" sources).
  • Undisclosed AI Use: Using AI tools without proper disclosure in the manuscript.
  • AI-Generated Images/Figures: Creating or altering scientific images, figures, or charts using generative AI (except with explicit editorial permission and labeling).
  • Confidential Data Upload: Uploading unpublished data, proprietary business information, or confidential research to public AI systems.

Author Accountability: Consistent with COPE and ICMJE guidance, authors are fully responsible for verifying the accuracy, validity, and originality of all content generated or assisted by AI tools. Authors must ensure that AI use does not constitute plagiarism or copyright infringement.

D. DISCLOSURE REQUIREMENTS

Authors must disclose any use of AI or AI-assisted tools in manuscript preparation. The disclosure must be placed in a dedicated "AI Use and Provenance" statement before the References section.

Required Information:

  • Tool/Software name and version (e.g., ChatGPT-4, Grammarly, DeepL)
  • Provider/Developer (e.g., OpenAI, Google, DeepL GmbH)
  • Date of use
  • Purpose and scope of AI assistance (e.g., language editing, translation, data analysis)
  • Verification steps taken by authors to ensure accuracy and integrity

Sample Disclosure Statements:

"The authors used ChatGPT-4 (OpenAI, 2025) for language editing and grammar improvement only. No content, data, figures, or references were generated by AI. All AI-assisted text was reviewed and verified by the authors, who take full responsibility for the content."

"DeepL Pro (DeepL GmbH, 2025) was used to translate the manuscript from Bahasa Indonesia to English. The authors compared the translated text with the original and verified all terminology and meaning. The final version was reviewed by a native English speaker."

If No AI Tools Were Used:

"No artificial intelligence or AI-assisted tools were used in the preparation of this manuscript."

Important: Failure to disclose AI use, incomplete disclosure, or prohibited use constitutes a violation of publication ethics and may result in rejection, retraction, or institutional notification following COPE guidelines.

E. USE OF AI TOOLS BY PEER REVIEWERS

Confidentiality is Paramount

Peer reviewers must never upload any part of a manuscript or its data to public AI tools (e.g., ChatGPT, Claude, Gemini). This would violate confidentiality agreements and may expose unpublished research.

AI-Generated Reviews

Reviewers must not use AI to generate peer review reports or recommendations. Peer review requires human critical judgment, expertise, and accountability. AI-generated reviews risk being superficial, biased, or inaccurate.

Limited Permitted Use

Reviewers may use AI tools for language editing of review text they have written themselves, provided that:

  • Confidentiality is preserved (offline tools or tools that do not store data)
  • The reviewer remains fully responsible for the scientific content of the review
  • Any substantial AI assistance is disclosed to the editor

Consequences of Misuse: Breaches of confidentiality or undisclosed AI-generated reviews may lead to removal from the reviewer database, notification to institutions, and disqualification from future reviews following COPE guidance.

F. USE OF AI TOOLS BY EDITORS

Editorial Decisions Must Be Human-Made

All editorial decisions (accept, reject, revise) must be made by human editors; the task of making or informing editorial judgments must not be delegated to AI tools.

Confidentiality Requirements

Editors must not upload confidential manuscripts or data to public AI systems. Any use of AI-assisted screening tools (e.g., plagiarism detection, reviewer discovery) must use identity-protected, secure systems with appropriate data privacy controls.

Permitted Editorial AI Use

The journal may employ AI tools for limited, routine processes with human oversight:

  • Plagiarism and similarity checking (e.g., Turnitin, iThenticate)
  • Metadata completeness verification
  • Reviewer matching and discovery
  • Language quality checks

All such tools operate under a "human-in-the-loop" model and are regularly assessed for accuracy, bias, and privacy compliance.

G. AI-GENERATED IMAGES, FIGURES, AND MULTIMEDIA

Scientific visuals must faithfully represent the underlying data. JBA adopts the following policies regarding AI-generated or AI-altered images:

  • Prohibited: Using generative AI to create or alter research images, figures, charts, or data visualizations that misrepresent findings.
  • Permitted (with disclosure): AI-generated illustrative artwork for non-scientific use (e.g., cover art, graphical abstracts) requires prior editorial permission, rights clearance, and explicit labeling as "AI-generated".
  • Verification: The journal may request original, unprocessed image files to verify integrity.

Labeling Requirement: If AI-generated images are permitted (with editorial approval), they must be clearly labeled in the caption. Example: "Figure 1. Conceptual diagram generated using DALL·E 3 (OpenAI, 2025) for illustrative purposes."

H. DETECTION, VERIFICATION, AND INVESTIGATIONS

The journal employs multiple approaches to ensure compliance with this policy:

  • Screening: Submissions may be screened using similarity detection and other tools to identify potential undisclosed AI use.
  • Human Assessment: The journal never relies solely on AI-detection scores to judge misconduct. All cases are assessed by human editors using evidence and COPE flowcharts.
  • Investigations: If concerns arise, the journal may request raw data, code, AI interaction logs, or prompt histories from authors.
  • Institutional Notification: In cases of confirmed misconduct, authors' institutions may be notified following COPE guidance.

Potential Sanctions:

  • Manuscript rejection (pre-publication)
  • Correction with disclosure (post-publication)
  • Expression of concern
  • Retraction of published article
  • Temporary or permanent submission ban
  • Notification to employers/funders

I. DATA PRIVACY AND CONFIDENTIALITY

All parties involved in the publication process must adhere to strict data privacy and confidentiality standards:

  • Authors: Must not upload identifiable patient information, proprietary business data, confidential research data, or copyrighted materials to public AI systems that may store or reuse content.
  • Reviewers and Editors: Must not upload unpublished manuscripts or associated materials to external AI systems that may compromise confidentiality.
  • Compliance: When AI is used as part of research methodology, authors must ensure compliance with data protection regulations (e.g., informed consent, privacy safeguards) and disclose these in the manuscript.

Note: Authors are responsible for ensuring that their use of AI tools complies with applicable data protection laws and institutional ethics requirements.

J. ALIGNMENT WITH SCOPUS REQUIREMENTS

This policy aligns with the latest Scopus requirements for AI usage in scholarly publications:

  • Transparency: Clear disclosure of AI tool usage is required.
  • Accountability: Authors are fully responsible for all content.
  • Authorship: AI tools cannot be authors.
  • Peer Review Integrity: AI must not be used in confidential review processes.
  • Ethical Oversight: Clear procedures for handling AI-related misconduct following COPE guidelines.

Scopus-indexed journals are expected to publicly document robust ethics and malpractice controls, including policies on AI usage. This policy fulfills that requirement.

K. POLICY REVIEW AND UPDATES

Recognizing the rapid evolution of AI technologies, this policy will be:

  • Reviewed annually by the Editorial Board to ensure alignment with evolving COPE guidance, ICMJE recommendations, and international publishing standards.
  • Updated as necessary to address emerging ethical challenges and best practices.
  • Published prominently on the journal website with version control and effective dates.

Authors, reviewers, and readers will be notified of significant policy changes through announcements on the journal website.

L. CONTACT FOR AI ETHICS CONCERNS

Editorial Office
Journal of Business Application (JBA)
Universitas Dr. Djar Wattiheluw
Indonesia

Email (AI ethics inquiries): admin@unidjar.id
Website: https://unidjar.id/index.php/jba

Editor in Chief: Eduard Yohannis Tamaela

For questions about this policy, to report suspected AI misuse, or to seek clarification about permitted AI use, please contact the editorial office.

This AI Tools Usage Policy follows the COPE Position Statement on Artificial Intelligence (2023-2025) and aligns with the latest Scopus requirements for AI usage in scholarly publications (2026). Last updated: March 2026.

Effective from: March 15, 2026 | Next review: March 2027