
https://doi.org/10.17323/jle.2023.18119

Citation: Tikhonova, E., & Raitskaya, L. (2023). ChatGPT: Where Is a Silver Lining? Exploring the realm of GPT and large language models. Journal of Language and Education, 9(3), 5-11. https://doi.org/10.17323/jle.2023.18119

Correspondence:

Elena Tikhonova, etihonova@hse.ru

Received: September 01, 2023 Accepted: September 15, 2023 Published: September 30, 2023

ChatGPT: Where Is a Silver Lining? Exploring the realm of GPT and Large Language Models

Elena Tikhonova 1, 2, Lilia Raitskaya 3

1 National Research University Higher School of Economics

2 Peoples' Friendship University of Russia (RUDN University)

3 Moscow State Institute of International Relations (MGIMO University)

ABSTRACT

Introduction: the JLE editors analyse the scope and depth of the subject area of ChatGPT and related topics based on the Scopus database. The Scopus statistics prove a skyrocketing rise in the number of publications in the field in question during 2023. The major alarming themes cover authorship and integrity related to AI-assisted writing, threats to educational practices, medicine, and malevolent uses of ChatGPT.

Keywords Explained: the key terminology is defined, including generative pre-trained transformers (GPT); ChatGPT; artificial intelligence (AI); AI chatbots; natural language processing (NLP); OpenAI; large language model (LLM).

International Research on ChatGPT: as of September 24, 2023, the Scopus database has indexed 1,935 publications with "ChatGPT" in the title, abstract, or keywords. A skyrocketing rise in the number of publications has been reported since the early days of 2023: 1,925 of the 1,935 indexed publications were published in 2023. Most of them came from the USA, India, the UK, and China. The number of documents indexed in the Scopus database, as well as in PubMed, arXiv and others, is rising exponentially.

ChatGPT in Education: the academic community has been actively discussing the challenges education will face in the era of ChatGPT in the context of the fundamental threats posed to the educational system. The latter include assessment procedures, information accuracy, and skill devaluation. Like many complex technologies, generative pre-trained transformers are ambivalent in nature, providing great potential for learning and education at large, including new approaches based on critical thinking and awareness of the pros and cons of AI. ChatGPT in Science: great prospects for text generation and improvements in language quality go hand in hand with dubious authorship and potentially inconsistent or erroneous passages in AI-produced texts. Publishers and journals are working out new publishing policies, including publishing ethics towards AI-assisted or AI-improved submissions.

Conclusion: JLE is planning to revise its editorial policy to address the new challenges from AI technologies. JLE editors welcome new submissions of research articles and reviews as well as special issues on ChatGPT and related themes, with potential applications of chatbots in education, innovative approaches to writing assignments, facilitating personalized learning, academic integrity issues related to AI-supported writing, etc. in focus.

KEYWORDS

generative pre-trained transformers (GPT), ChatGPT, artificial intelligence (AI), AI chatbots, natural language processing (NLP), large language model (LLM)

INTRODUCTION

The world witnesses that AI-generated writing is spreading across various fields. AI assistants have been used in recent years across education and science. The most popular and efficient AI tools encompass Grammarly (https://www.grammarly.com), Jasper AI (https://www.jasper.ai/), JenniAI (https://jenni.ai/), Hemingway Editor (https://hemingwayapp.com), QuillBot (https://quillbot.com), and others. Their use improves writing, checks for errors, and corrects spelling. Some of them assist with citations (e.g. QuillBot), others check for plagiarism (e.g. Grammarly). None of these tools produces text on its own. Their usefulness is obvious. In November 2022, an advanced generative artificial intelligence technology was launched by OpenAI, an American California-based laboratory, showing great performance and spreading at lightning speed. ChatGPT reached one million users within only five days, compared with 300 days for Facebook and 75 days for Instagram to reach the same audience (Firat, 2023, p. 58).

The popularity of ChatGPT can be easily explained. Writing forms an integral and important part of education and work elsewhere. Educational systems of assessment are essentially based on writing. Professional requirements widely imply good-to-perfect writing and written communication skills. For instance, in the USA, 872 occupations relate to writing skills (Steele, 2023). Authors, journalists, and researchers are frontrunners in writing; they ought to possess the most elaborate writing skills. Not surprisingly, the spheres they are engaged in are likely to be influenced most by rapidly developing large language model (LLM) chatbots. The recent ubiquity of advanced AI technologies replicating human language patterns has led to a discussion of their pros and cons. The challenges, or rather threats and advantages, are considered to have potential implications for education and various professional fields. The perceptions of the brand-new technologies range from negative or even alarming to positive and enthusiastic.

Even before the arrival of ChatGPT 4.0, its previous version was successfully applied in medical education and practice. ChatGPT is good at "interpreting clinical information" (Ho, Koussayer, & Sujka, 2023), giving full and correct answers to all questions that students of medicine may get at an examination, diagnosing complicated cases, and consulting on treatments. In the same vein, journals on medicine and nursing became the frontrunners in introducing a new stance on ChatGPT's participation in research writing.

Some medical journals adhere to an editorial policy allowing the incorporation of AI-generated text, subject to a statement of the way ChatGPT was used. Authors are required to indicate where and how this technology was applied. The sections of the submission covering this information may vary, but most sources single out the methods section or the acknowledgements section as the most appropriate. All agree, however, that it may be any section except the information on the authors.

Some researchers assume that artificial intelligence chatbots may pose a threat to the very pillars of education, including the assessment of students' educational outcomes (Rudolph et al., 2023), the accuracy and credibility of information, and skill devaluation (Steele, 2023). The technologies may bring ethical threats and academic integrity concerns, as well as wider exposure to misinformation and fake news in the media (Tewari et al., 2021). Different malevolent uses are likely to influence other human activities (Alasadi & Baiz, 2023; Fyfe, 2023; Firat, 2023; Illia et al., 2023; Yeo, 2023; Lund & Wang, 2023).

The JLE editors in their review aim to consider the scope of the emerging field and outline some implications of the technology for scholarly publishing and education as well as the most essential directions of research.

Keywords Explained

Generative pre-trained transformer (GPT) is a large language model serving as a framework for generative artificial intelligence. Such transformers are pre-trained on large sets of text and generate human-like text.

ChatGPT is an AI-powered language model developed by OpenAI (San Francisco, California). On November 30, 2022, OpenAI launched ChatGPT, which opened new opportunities for text production. At present, ChatGPT-3.5 and ChatGPT-4 (ChatGPT Plus) are available on the market. The former was freely released as a research preview; ChatGPT-4 is distributed to paid subscribers.

Artificial intelligence (AI) may be defined as the intelligence exhibited by software, mainly in high-profile applications such as advanced web search engines, natural language understanding, generative tools, recommendation platforms, driverless cars, and strategic games. AI became an academic field and discipline in 1956.

AI chatbots are software applications, initially called chatterbots, that mimic human conversation. AI chatbots are based on text or voice interactions.

Natural language processing (NLP) is a subfield of computer science and linguistics. It aims to enable computers to understand and generate human languages based on natural language datasets in the form of corpora (both text and speech).

OpenAI is an American artificial intelligence laboratory founded in 2015. In 2020, OpenAI presented GPT-3, a large language model trained on big datasets. In late 2022, a chatbot based on GPT-3.5 was launched. In March 2023, GPT-4 entered the market.


Large language model (LLM) is a natural language processing model trained on big data to generate human-like texts. The model is based on billions of parameters that help it mimic human languages.
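To ground these definitions, the short sketch below shows how an application can send a prompt to a GPT-family chatbot and receive generated text. It is a minimal illustration only, assuming the openai Python package (v1.x) and a valid API key stored in the OPENAI_API_KEY environment variable; the model name, prompt, and parameters are arbitrary examples rather than anything prescribed in this editorial.

# Minimal sketch: querying a GPT-family chat model programmatically.
# Assumptions: the `openai` Python package (v1.x) is installed and the
# OPENAI_API_KEY environment variable holds a valid key.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # example model name
    messages=[
        {"role": "system", "content": "You are a concise academic writing assistant."},
        {"role": "user", "content": "Summarise the risks of AI-assisted writing in two sentences."},
    ],
    temperature=0.7,  # controls randomness of the generated text
    max_tokens=120,   # caps the length of the reply
)

print(response.choices[0].message.content)

The temperature and max_tokens parameters illustrate the two settings most relevant to the discussion that follows: how varied the generated prose is and how much of it is produced per request.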

International Research on ChatGPT

We searched the Scopus database for the keyword "ChatGPT" and found 1,935 indexed documents as of September 24, 2023. Almost all publications (n=1,925) came out in 2023. The prevailing publication type is the "article" (n=827). There are also many letters (n=350), editorials (n=204), and notes (n=164). Conference papers account for 211 documents, and 123 reviews were published during 2023. The most prolific authors include A. Kleebayoon (n=35), V. Wiwanitkit (n=29), and P. P. Ray (n=22). The most highly cited publication in the area is headlined "ChatGPT is fun, but not an author" and has 233 citations as of September 24, 2023 (Thorp, 2023).
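Such a keyword count is straightforward to reproduce programmatically. The sketch below is a hypothetical illustration using Elsevier's Scopus Search API with the requests package; it assumes a personal Scopus API key (the SCOPUS_API_KEY variable name is ours), and the TITLE-ABS-KEY query string mirrors the search described above rather than reproducing the editors' exact query.

# Hypothetical sketch: counting Scopus-indexed documents that mention
# "ChatGPT" in the title, abstract, or keywords via the Scopus Search API.
# Assumptions: the `requests` package and a valid Scopus API key
# (obtainable at https://dev.elsevier.com) stored in SCOPUS_API_KEY.
import os
import requests

API_URL = "https://api.elsevier.com/content/search/scopus"

params = {
    "query": 'TITLE-ABS-KEY("ChatGPT")',  # keyword search in title, abstract, keywords
    "count": 1,                            # we only need the total, not the records
}
headers = {
    "X-ELS-APIKey": os.environ["SCOPUS_API_KEY"],
    "Accept": "application/json",
}

resp = requests.get(API_URL, params=params, headers=headers, timeout=30)
resp.raise_for_status()

total = resp.json()["search-results"]["opensearch:totalResults"]
print(f"Scopus documents mentioning ChatGPT: {total}")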

Most publications came from the USA (n=611), India (n=192), the UK (n=161), and China (n=154) (see Figure 2). Medicine (n=797), Computer Science (n=493), and Social Sciences (n=472) top the breakdown by subject area (Figure 1).

To analyse the speed at which the field has been rising, we compared the readings of the above search with those of a search made as of April 1, 2023 (Liu et al., 2023). The latter identified 194 papers mentioning ChatGPT on arXiv. A search on the keyword "ChatGPT" identified 186 articles in PubMed as of April 3, 2023, as compared with only 36 publications on February 23, 2023 (Misra & Chandwar, 2023). We expect that many more research articles and other publications will add to the field in the near future.

The most cited article in our search dwells upon the issues of authorship (Thorp, 2023). Other popular directions of research cover priorities related to ChatGPT for researchers (Stokel-Walker, 2023), challenges and implications of ChatGPT in research (Qasem, 2023), practice and policy, ChatGPT performance in the US medical licensing examination and its implications for medical education, the quality of writing (articles, abstracts, essays, etc.) by ChatGPT, potential for education (Crompton & Burke, 2023; Ivakhnenko & Nikolskiy, 2023; Fuchs, 2023; Kikalishvili, 2023; Su & Yang, 2023; Rudolph, Tan, & Tan, 2023), ethical challenges for publishing, ChatGPT and assessment in education (Rudolph, Tan, & Tan, 2023), the impact of AI-based bots on libraries (Lund & Wang, 2023), ChatGPT in journalism, the future of education, a new academic reality (Lund et al., 2023), academic integrity (Perkins, 2023), science communication (Schäfer, 2023), etc.

Figure 1

Scopus-Indexed Research on ChatGPT: Breakdown by Subject Area

Note. Source: Scopus Database as of September 15, 2023.

Figure 2

Scopus-Indexed Research on ChatGPT: Breakdown by Country or Territory

Note. Source: Scopus Database as of September 24, 2023.

ChatGPT in Education

ChatGPT makes educators, teachers, faculty, professors, and lecturers revise traditional educational practices. Assessment is an essential part of education at all levels, as it provides feedback and outlines educational outcomes. Traditionally, writing is a predominant means of evaluation. As it takes time, it is often assigned out of class (essays, reports, and other tasks). Easy access to ChatGPT and similar technologies may tempt students to outsource their tasks to AI tools (Steele, 2023). Two important educational failures are likely to follow: wrong assessment and the devaluation of skills.

GPT bots may lure students into accumulating the information they need while ignoring other, more reliable sources. Reliability can easily be sacrificed in favour of the availability of information via ChatGPT. Educators will have to work out tasks based on critical thinking so that students evaluate any information they get. Traditional written forms of assessment may be limited to digital-free classes (Kikalishvili, 2023).


Like previous threats to education (for instance, the calculator some forty years ago), ChatGPT may prove a boon. A similar paradox (Steele, 2023) will cause major revisions in the measurement of knowledge and skills. A new AI-supported workplace will require employees with adequate skills, and the latter are to be re-evaluated to meet the emerging demands. Given mental awareness and a critical perception of information, any new technology may be adapted for educational purposes and turned into a supporting tool. The alarming rhetoric of educators is largely caused by the prudence and conservatism of education as a social institution. But the COVID-19 pandemic and the related pedagogy of emergency have increased the adaptivity of educational systems. Today, they are more or less prepared for the sweeping changes associated with AI.

JLE board members and editors are looking forward to new research submissions that will shed light on the looming educational landscape where AI plays on educators' and students' side. The research agenda covers "further exploration of the ethical implications of AI for education, the development of strategies to manage privacy concerns, and the investigation of how educational institutions can best prepare for the integration of AI technologies" (Firat, 2023, p. 57). The recently published literature reviews and research outline some potential lines of research that we also see as promising and essential for the academic community at large. They embrace potential applications of chatbots in education, including innovative methods and writing assignments, shifting the focus to skills and competencies (Firat, 2023); facilitating personalized learning and, consequently, academic achievement, engagement, and self-efficacy (Fuchs, 2023); academic integrity issues related to online examinations (Huber et al., 2023; Fyfe, 2023); ChatGPT's mediating role in assessment practices (Farazouli et al., 2023), etc.

ChatGPT in Science

The academic community is stirred by the consequences of potential uses of ChatGPT-generated content (Tang, 2023). As texts produced by AI may successfully follow language patterns typical of the academic writing style and mimic research articles, there is a growing concern that unscrupulous researchers may be tempted to generate texts, partially or more extensively, using ChatGPT and deceitfully pass off AI-created texts as their own writing. They may incorporate incomplete, inconsistent, or fallible pieces of the LLM-based texts into their submissions (Tools such as ChatGPT, 2023).

No doubt, the technologies are advantageous for non-native English-speaking authors, and even for native speakers, as they may avoid weaknesses in their submissions related to language quality. But can such a text be wholly attributed to the researcher? Plagiarism detection tools can hardly tell ChatGPT-generated texts from human writing, as ChatGPT-produced texts are considered original or newly produced. Special tools detecting AI-generated content are already available, with more work in progress (Misra & Chandwar, 2023). The way AI presence is found is connected to the regular patterns and algorithms that any AI-generated text is based on. Researchers may choose to rely on ChatGPT's help throughout the writing, or only in some chunks of the article.
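One family of detection techniques, not necessarily the one behind the tools cited above, scores a text by how predictable it is to a reference language model: machine-generated prose tends to show lower perplexity (fewer "surprising" word choices) than human writing. The sketch below illustrates this idea with the freely available GPT-2 model from the Hugging Face transformers library; the sample text and any threshold a screening tool would apply are purely illustrative.

# Illustrative sketch of perplexity-based AI-text screening.
# Assumptions: the `transformers` and `torch` packages; GPT-2 is used only
# as a convenient open reference model, not as the detector any cited tool uses.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Return the reference model's perplexity for a passage of text."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        # Using the inputs as labels yields the average negative log-likelihood.
        loss = model(**enc, labels=enc["input_ids"]).loss
    return float(torch.exp(loss))

sample = "Large language models generate fluent, well-structured academic prose."
print(f"Perplexity: {perplexity(sample):.1f}")
# Lower scores mean the text is more predictable to the model; very low values
# are one (fallible) signal that a passage may be machine-generated.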

The authorship of such texts, when submitted, raises doubts (Hufton, 2023). Academics and researchers express worries, as AI does not bear any responsibility for the information it produces (Tang, 2023). In late 2022 and early 2023, several preprints and publications were released with ChatGPT indicated as a co-author, which led to a heated discussion of AI's authorship. In the wake of the ChatGPT launch, Springer Nature was nearly the first to develop new technologies for spotting LLM-generated output. The publishers also supported those researchers who disapprove of "citing the bot as an author" (Tollefson, 2023). The debate on the role of AI tools in producing scientific literature is still on the rise. LLM tools cannot be accepted as a credited author, as "any attribution of authorship is connected to responsibility" (Tools such as ChatGPT, 2023), which sounds senseless if applied to AI.

Many journals are revising their editorial policies regarding authors' use of AI in their submissions. They tend to disallow crediting ChatGPT or other artificial intelligence language models as a co-author. In early 2023, a few preprints and submissions, mainly in medicine, turned out to contain information on AI authorship (e.g. King & ChatGPT, 2023). It launched a discussion on the possibility of AI authorship. Consequently, medical journals pioneered the revision of the roles of authors and contributors, specifying the disclosure procedure for artificial intelligence-assisted technology in the production of any submission, as recommended by the International Committee of Medical Journal Editors (https://www.icmje.org/recommendations/browse/roles-and-responsibilities/defining-the-role-of-authors-and-contributors.html). Elsevier was among those publishers who pioneered new policies related to AI-assisted tools: Elsevier expects the authors in its journals to make a statement on the use of AI-assisted tools. In other publishing houses or journals, researchers should seek permission from their publisher or editor if they use AI in any part of their submission (except the information on the authors, which is generally prohibited) or specify the sections where they used AI.

Elsevier's Practices

In February 2023, updates on the use of artificial intelligence tools in submissions were introduced into Elsevier's authorship policy (Hufton, 2023). According to Elsevier's policies and guidelines, authors, editors, and reviewers are to follow its Publishing Ethics (https://beta.elsevier.com/about/policies-and-standards/publishing-ethics?trial=true), where the use of generative AI and AI-assisted technologies (ChatGPT, NovelAI, Jasper AI, Rytr AI, DALL-E, etc.) in scientific writing and in the journal peer-review and editorial process is described.

For Elsevier authors: The policy regarding AI-based technologies exclusively refers to the writing process, barring the research process. Authors may improve the readability and language of their submission without reservations. General oversight and editing are the author's responsibility. If AI is applied, the author is to make a statement: "Authors should not list AI and AI-assisted technologies as an author or co-author, nor cite AI as an author" (Elsevier, The use of generative AI and AI-assisted technologies in scientific writing).

For Elsevier editors and reviewers: As any submitted manuscript is confidential, no part of it may be uploaded into a generative AI tool; doing so may infringe the author's confidentiality and data privacy rights. As correspondence with authors contains personal data, editors cannot upload it into a generative AI tool either. Reviewers should not use AI-assisted tools in the scientific review, as peer review is based on critical thinking that is missing from such tools. Moreover, generative AI technologies may produce incorrect or biased conclusions.

The academic community is unanimous that any content produced by AI tools should be "screened and edited for accuracy and appropriateness before dissemination" (Misra & Chandwar, 2023). JLE editors cannot but share the stance of Elsevier and other publishers on AI-related publishing ethics.

CONCLUSION

ChatGPT has been changing the realities in education, academia, media, and communication. At present, it is impossible to foresee the speed, depth, and scope of transformations. New and unexpected implications of ChatGPT may arise soon. It is high time for journals to revise their notions related to authorship, integrity, and use of AI at large in research and scholarly writing. Following this editorial, JLE is planning to include a provision regulating AI-supported and AI-generated writing in the JLE guidelines for authors and reviewers.

As this new emerging field of study is rising fast, JLE editors welcome any initiatives on special issues, new submissions of research articles and reviews on ChatGPT and associated themes.

AUTHORS CONTRIBUTIONS


Elena Tikhonova: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Resources, Software, Validation, Visualization, Writing - original draft, Writing - review & editing, other contribution.

Lilia Raitskaya: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Resources, Software, Validation, Visualization, Writing - original draft, Writing - review & editing, other contribution.


REFERENCES

Alasadi, E.A., & Baiz, C.R. (2023). Generative AI in education and research: Opportunities, concerns, and solutions. Journal of Chemical Education, 100(8), 2965-2971. https://doi.org/10.1021/acs.jchemed.3c00323

Crompton, H., & Burke, D. (2023). Artificial intelligence in higher education: The state of the field. International Journal of Educational Technology in Higher Education, 20(1), 22. https://doi.org/10.1186/s41239-023-00392-8

Farazouli, A., Cerratto-Pargman, T., Bolander-Laksov, K., & McGrath, C. (2023). Hello GPT! Goodbye home examination? An exploratory study of AI chatbots' impact on university teachers' assessment practices. Assessment & Evaluation in Higher Education. https://doi.org/10.1080/02602938.2023.2241676

Firat, M. (2023). What ChatGPT means for universities: Perceptions of scholars and students. Journal of Applied Learning and Teaching, 6(1), 57-63. https://doi.org/10.37074/jalt.2023.6.1.22

Fuchs, K. (2023). Exploring the opportunities and challenges of NLP models in higher education: is ChatGPT a blessing or a curse? Frontiers in Education, 8, 1166682. https://doi.org/10.3389/feduc.2023.1166682

Fyfe, P. (2023). How to cheat on your final paper: Assigning AI for student writing. AI and Society, 38(4), 1395-1405. https://doi.org/10.1007/s00146-022-01397-z

Huber, E., Harris, L., Wright, S., White, A., Raduescu, C., Zeivots, S., Cram, A., & Brodzeli, A. (2023). Towards a framework for designing and evaluating online assessments in business education. Assessment & Evaluation in Higher Education. https://doi.org/10.1080/02602938.2023.2183487

Ho, W.L.J., Koussayer, B., & Sujka, J. (2023). ChatGPT: Friend or foe in medical writing? An example of how ChatGPT can be utilized in writing case reports. Surgery in Practice and Science, 14, 100185. https://doi.org/10.1016/j.sipas.2023.100185

Hufton, A.L. (2023). No artificial intelligence authors, for now. Patterns, 4, 100731. https://doi.org/10.1016/j.patter.2023.100731

Illia, L., Colleoni, E., & Zyglidopoulos, S. (2023). Ethical implications of text generation in the age of artificial intelligence. Business Ethics, Environment and Responsibility, 32(1), 201-210. https://doi.org/10.1111/beer.12479

Ivakhnenko, E. N., & Nikolskiy, V. S. (2023). ChatGPT in higher education and science: A threat or a valuable resource? Vysshee obrazovanie v Rossii = Higher Education in Russia, 32(4), 9-22. https://doi.org/10.31992/0869-3617-2023-32-4-9-22

Kikalishvili, S. (2023). Unlocking the potential of GPT-3 in education: Opportunities, limitations, and recommendations for effective integration. Interactive Learning Environments. https://doi.org/10.1080/10494820.2023.2220401

King, M. R., & ChatGPT. (2023). A conversation on artificial intelligence, chatbots, and plagiarism in higher education. Cellular and Molecular Bioengineering, 16, 1-2. https://doi.org/10.1007/s12195-022-00754-8

Lund, B.D., & Wang, T. (2023). Chatting about ChatGPT: How may AI and GPT impact academia and libraries? Library Hi Tech News, 40(3), 26-29. https://doi.org/10.1108/LHTN-01-2023-0009

Lund, B.D., Wang, T., Mannuru, N.R., Nie, B., Shimray, S., & Wang, Z. (2023). ChatGPT and a new academic reality: Artificial Intelligence-written research papers and the ethics of the large language models in scholarly publishing. Journal of the Association for Information Science and Technology, 74(5), 570-581. https://doi.org/10.1002/asi.24750

Misra, D.P., & Chandwar, K. (2023). ChatGPT, artificial intelligence and scientific writing: What authors, peer reviewers and editors should know? Journal of the Royal College of Physicians of Edinburgh, 1-4. http://doi.org/10.1177/14782715231181023

Perkins, M. (2023). Academic Integrity considerations of AI Large Language Models in the post-pandemic era: ChatGPT and beyond. Journal of University Teaching and Learning Practice, 20(2), 7. https://doi.org/10.53761/1.20.02.07

Qasem, F. (2023). ChatGPT in scientific and academic research: Future fears and reassurances. Library Hi Tech News, 40(3), 30-32. https://doi.org/10.1108/LHTN-03-2023-0043

Rudolph, J., Tan, S., & Tan, S. (2023). ChatGPT: Bullshit spewer or the end of traditional assessments in higher education? Journal of Applied Learning and Teaching, 6(1), 342-363. https://doi.org/10.37074/jalt.2023.6.1.9

Rudolph, J., Tan, S., & Tan, S. (2023). War of the chatbots: Bard, Bing Chat, ChatGPT, Ernie and beyond. The new AI gold rush and its impact on higher education. Journal of Applied Learning and Teaching, 6(1), 364-389. https://doi.org/10.37074/jalt.2023.6.1.23

Schäfer, M.S. (2023). The Notorious GPT: Science communication in the age of artificial intelligence. Journal of Science Communication, 22(2), Y02. https://doi.org/10.22323/2.22020402


Steele, J.L. (2023). To GPT or not GPT? Empowering our students to learn with AI. Computers and Education: Artificial Intelligence, 5, 100160. https://doi.org/10.1016/j.caeai.2023.100160

Stokel-Walker, C. (2023). ChatGPT listed as author on research papers: Many scientists disapprove. Nature, 613 (7945), 620-621. https://doi.org/10.1038/d41586-023-00107-z

Su, J., & Yang, W. (2023). Unlocking the power of ChatGPT: A framework for applying generative AI in education. ECNU Review of Education, 6(3), 355-366. https://doi.org/10.1177/20965311231168423

Tewari, S., Zabounidis, R., Kothari, A., Bailey, R., & Alm, C.O. (2021). Perceptions of human and machine-generated articles. Digital Threats: Research and Practice, 2(2), 12. https://doi.org/10.1145/3428158

Tang, G. (2023). Academic journals cannot simply require authors to declare that they used ChatGPT. Irish Journal of Medical Science (1971 -). https://doi.org/10.1007/s11845-023-03374-x

Thorp, H.H. (2023). ChatGPT is fun, but not an author. Science, 379(6630), 313. http://doi.org/10.1126/science.adg7879

Tollefson, J. (2023). The plan to "Trump-proof" US science against meddling. Nature, 613(7945), 621-622. https://doi.org/10.1038/d41586-022-03307-1

Tools such as ChatGPT threaten transparent science; here are our ground rules for their use. (2023). Nature, 613(7945), 612. https://doi.org/10.1038/d41586-023-00191

Yeo, M.A. (2023). Academic integrity in the age of Artificial Intelligence (AI) authoring apps. TESOL Journal, 14(3), e716. https://doi.org/10.1002/tesj.716
