ChatGPT: A Threat or an Opportunity for Scientists?

The emergence of artificial intelligence (AI) has brought about numerous opportunities and challenges for scientists in various fields. One notable advancement in AI is OpenAI's ChatGPT (Chat Generative Pre-trained Transformer), a powerful language model based on the GPT‐3.5 architecture that can generate text resembling human responses. As with any technological advancement, scientists' opinions diverge on the risks and benefits of ChatGPT. This paper explores the implications of AI and, in particular, ChatGPT, delving into its potential as both a threat and an opportunity for scientists. It examines the risks associated with misinformation, ethical considerations, and the impact on traditional research processes. Ultimately, the argument put forth is that while ChatGPT and chatbots like it do present challenges, they also have the potential to significantly transform scientific inquiry and foster innovation.

Concerns Raised by AI Chatbots
Although AI chatbots demonstrate remarkable capabilities, they have sparked valid concerns within the scientific community. One significant apprehension revolves around the potential misuse of the technology and the dissemination of misinformation. Because chatbots can produce text that appears credible, there is a real risk of false information being propagated, leading to scientific inaccuracies and public confusion. A 2021 analysis of publications in Microprocessors and Microsystems uncovered approximately 500 dubious articles (Cabanac et al., 2021). These articles included flawed references, scientifically incorrect statements, and nonsensical material, rendering the papers non-replicable. The researchers speculate that authors might have used an AI chatbot to mask instances of plagiarism and to inflate the length of their manuscripts.
It is the responsibility of researchers to critically evaluate and verify the information generated by AI chatbots to ensure its accuracy and reliability. Furthermore, it is important to implement measures that can mitigate the spread of misinformation, such as robust fact-checking processes and transparent reporting of outputs generated by AI systems.
Another concern pertains to the possibility of job displacement for researchers. The automation of certain tasks by AI chatbots, such as data analysis, could render some roles obsolete or reduce the need for human intervention in these areas. This raises questions about the future of scientific employment and the impact on researchers' career prospects. However, it is essential to recognize that while AI chatbots can automate specific tasks, they cannot replace the indispensable qualities of human creativity, critical thinking, and problem-solving. Researchers will continue to play a vital role in formulating research questions, designing experiments, and interpreting results. Therefore, it is more likely that AI chatbots will enhance and complement researchers' work rather than completely replace them.

Opportunities Presented by AI Chatbots
While there are valid concerns surrounding AI chatbots, they also present numerous opportunities for scientists. Their capabilities can greatly assist researchers in several aspects of their work, ultimately enhancing the scientific process. Here are some opportunities that AI chatbots bring to the forefront:
1. Support for non-native English-speaking scientists. AI chatbots serve as a valuable tool for non-native English-speaking scientists, helping to verify and improve their English (Kim, 2023). In addition, in countries where English is not the primary language, specialized companies offer verification and correction services for English texts used in scientific contexts, and the advent of AI chatbots is expected to have a profound impact on such activities.
2. Data analysis. AI chatbots can help scientists analyze complex and/or vast data sets (e.g., Biswas, 2023). Their capacity to process and comprehend large volumes of information can expedite data analysis tasks, enabling researchers to extract insights more efficiently. By automating certain aspects of data analysis, AI chatbots can save valuable time and resources, allowing scientists to focus on higher-level interpretation and drawing meaningful conclusions from the data.
3. Hypothesis generation. AI chatbots can be a valuable tool for hypothesis generation. Scientists can interact with the model, presenting their initial ideas or research questions and receiving AI-generated insights and suggestions. AI chatbots' ability to generate creative and contextually relevant responses can inspire researchers, sparking new avenues of exploration and helping refine hypotheses. This collaborative interaction fosters innovation and can lead to the formulation of novel research directions.
In summary, AI chatbots offer significant opportunities for scientists, ranging from language support and data analysis to hypothesis generation and the exploration of vast data sets. Leveraging these capabilities can enhance the research process and drive scientific progress.

Ethical Considerations
Although AI chatbots provide a plethora of possibilities for scientific research, it is of utmost importance to acknowledge and address the ethical implications of their use (e.g., De Angelis et al., 2023; Zhou et al., 2023). Due diligence should be given to the following critical factors:
1. Biases in data and outputs. AI chatbots learn from large data sets, which can contain biases present in the training data. It is essential to actively monitor and mitigate biases in both the training data and the outputs generated by AI chatbots. Researchers should strive to ensure fairness and prevent the reinforcement of existing biases that could perpetuate social inequalities or misrepresentation in scientific findings.
2. Transparency and explainability. To establish trust and credibility, transparency and explainability of AI-generated findings are paramount. Researchers should make efforts to understand and communicate how AI chatbots arrive at their responses. This includes providing explanations of the model's decision-making process and the underlying reasoning for the generated outputs. Transparent reporting allows researchers to evaluate and validate the outputs effectively, ensuring that they align with scientific principles and ethical standards.
3. Validation and verification. While AI chatbots can be valuable tools, it is essential to validate and verify the information they produce. Researchers should critically evaluate the outputs and cross-reference them with established scientific knowledge and rigorous validation processes. Peer review remains an important part of scientific research, helping to ensure accuracy, reliability, and the avoidance of potential errors or misinterpretations that may arise from relying solely on AI-generated findings.

Debate Arises Over Crediting ChatGPT as Author in Scientific Literature
There is a debate among journal editors, researchers, and publishers about whether AI tools like ChatGPT should be credited as authors (Stokel-Walker, 2023; Teixeira da Silva, 2023; Thorp, 2023; Yeo-Teh & Tang, 2023), and several preprints and published articles have already listed ChatGPT as a formal author. While acknowledging the AI's contribution to writing papers, publishers and preprint servers agree that AIs cannot fulfill the criteria for study authorship, as they cannot take responsibility for the content and integrity of scientific papers.
The potential misuse of AI in academia is also a concern, as people without domain expertise could attempt to write scientific papers using AI systems.

Utilizing AI Chatbots: Best Practices for Scientists
To effectively harness the potential of AI chatbots, scientists should adopt a cautious and critical approach (e.g., Huang & Tan, 2023). Here are some best practices to consider:
1. Validation and fact-checking. It is crucial to validate and fact-check the content generated by AI chatbots. While the model can provide valuable insights, it is important to cross-reference the information with established scientific knowledge and conduct thorough verification. Researchers should use their expertise to evaluate the outputs and ensure the accuracy and reliability of the information before incorporating it into their work.
2. Supportive tool, not replacement. AI chatbots should be viewed as a supportive tool rather than a replacement for human expertise. Scientists should leverage AI chatbots to enhance their research processes, automate certain tasks, and gain new perspectives. However, the critical thinking, creativity, and problem-solving skills of researchers, and of the human mind in general, remain invaluable in formulating research questions, designing experiments, and interpreting results. AI chatbots should be seen as a complement, with researchers providing the necessary context and judgment to guide and evaluate the outputs generated by the model.
3. Transparency in research. Scientists should be transparent about the involvement of AI in their research. Clearly communicating the role of AI systems in data analysis, hypothesis generation, or other tasks helps ensure the integrity and reproducibility of scientific findings. Transparent reporting of the use of AI technologies fosters accountability and allows for a better understanding of the limitations and potential biases associated with the generated outputs.
4. Collaboration and symbiotic relationship. Researchers should embrace collaboration between scientists and AI systems like ChatGPT. This symbiotic relationship combines the strengths of human expertise and AI capabilities to advance scientific knowledge. By engaging in meaningful collaborations, scientists can leverage AI chatbots' language processing abilities, data analysis capabilities, and insights to explore new research avenues, uncover patterns, and generate innovative ideas. Working together, researchers and AI systems can enhance the efficiency and effectiveness of scientific research.

Conclusions
At the forefront of AI, ChatGPT has generated significant buzz. It presents a multifaceted landscape for scientists, encompassing a range of opportunities and challenges. This language model offers unprecedented capabilities in data analysis, collaboration, and knowledge sharing. However, it is imperative to approach its use with careful consideration of ethical implications and the preservation of human expertise. Scientists should regard AI chatbots as tools that enhance their skills and knowledge while maintaining transparency, fairness, and responsibility.
In conclusion, while AI chatbots introduce challenges and ethical considerations, they also hold immense potential to revolutionize scientific inquiry and drive innovation. By embracing these new tools responsibly and collaborating with human researchers, scientists can harness their power to advance research and propel scientific progress into the future.

Acknowledgments
I am grateful to my colleagues who, in light of the rapid spread of AI chatbots in the scientific community, motivated the writing of this article. I would also like to thank the colleagues from the American Geophysical Union who initiated this discussion during the recent meeting in Lisbon. Finally, I would like to express my gratitude to Editor in Chief Michael Wysession for his valuable insights and comments.

Figure 1. Illustration showcasing the power of artificial intelligence, which holds immense potential to revolutionize scientific inquiry and drive innovation. Credit: Pixabay.

COMMENTARY Perspectives of Earth and Space Scientists FLORINDO 10.1029/2023CN000212