Hardly a day passes without a report of some new, startling application of Artificial Intelligence (AI), the quest to build machines that can reason, learn, and act intelligently.
ChatGPT, with its human-like writing abilities, is widely considered the prototype; OpenAI’s other recent release, DALL-E 2, generates images on demand. Both rely on large models trained on huge amounts of data, as do rivals such as Claude from Anthropic and Bard from Google. These so-called “chatbots,” computer programs designed to simulate conversation with human users, have evolved rapidly in recent years.
Examples of their application abound. Two recent articles in the journal Nature described the application of AI to weather forecasting. Forecasting is currently difficult and time-consuming because meteorologists must individually analyze weather variables such as temperature, precipitation, pressure, wind, humidity, and cloudiness to make predictions; new AI systems can significantly speed up the process.
The first article describes how a new AI model, Pangu-Weather, can predict worldwide weekly weather patterns much more rapidly than traditional forecasting methods but with comparable accuracy. The second demonstrates how a deep-learning algorithm was able to predict extreme rainfall more accurately and more quickly than other methods.
A July 5th article in Technology Review magazine offered some examples of how AI is advancing various other scientific disciplines:
Scientists at McMaster University and MIT, for example, used an AI model to identify an antibiotic to combat a pathogen that the World Health Organization labeled one of the world’s most dangerous antibiotic-resistant bacteria for hospital patients. A Google DeepMind model can control plasma in nuclear fusion reactions, bringing us closer to a clean-energy revolution. Within health care, the US Food and Drug Administration has already cleared 523 devices that use AI — 75% of them for use in radiology.
Less momentous, but also fascinating, was a recent article by Ethan Mollick, a professor at the Wharton School at the University of Pennsylvania, which described the ability of the newest, "multimodal" version of chatbot GPT-4 to "see," "hear," and "understand" what is being presented to it. In an experiment in which the chatbot was asked to design a trendy women's shoe, it offered several possible alternatives and then, when asked, serially and skillfully refined the design.
The chatbot cannot (yet) do everything perfectly, however. This passage from Mollick's article is revealing:
"I also gave it the challenge of coming up with creative ideas for foods in my fridge based on an original photo (it identified the items correctly, though the creative recipe suggestions were mildly horrifying)."
It is early days, however, and once GPT-4 and its successors become acquainted with the contents of cookbooks by the likes of Escoffier, Julia Child, and Gordon Ramsay, we suspect that the recipes will improve!
It should be noted that chatbots sometimes fabricate information, a phenomenon called “hallucination,” so, at least for the time being, references and citations should be carefully verified.
When it comes to education-related applications of AI, the media have paid the most attention to students using chatbots to compose their essays and term papers.
Here, we discuss some of the advantages, opportunities, and challenges of chatbots in primary, secondary, and higher education.
What opportunities do chatbots offer for primary and secondary education?
1. Personalized learning: Chatbots can provide personalized learning experiences to students by tailoring content and explanations based on individual needs and learning styles.
2. Instant feedback and support: Students can receive immediate feedback and support from chatbots, enabling them to comprehend complex concepts and deepen their knowledge of various subjects.
3. Increased engagement: Chatbots can engage students in interactive and dynamic conversations, making learning more enjoyable and promoting active participation.
4. Accessible and inclusive: Chatbots can be accessed anytime and anywhere, allowing students to learn at their own pace. Because the most advanced chatbots can “see” and “hear,” they can assist students with disabilities by offering alternative modes of communication.
5. Supplemental resource: Chatbots can be a valuable supplemental resource for teachers, providing additional explanations, examples, and resources to reinforce classroom learning. They have tremendous versatility and flexibility, depending on how students pose questions to them.
Challenges of using chatbots in primary and secondary schools
1. Accuracy and reliability: As mentioned above, although chatbots are encyclopedic, constantly improving, and gaining new capabilities, they occasionally provide inaccurate or incomplete information. Thus, there is a risk that students will receive incorrect answers or be exposed to misleading content depending on how questions are posed.
2. Lack of human interaction: Although chatbots can simulate human-like conversations, they lack the emotional intelligence and contextual understanding of human teachers. Building strong interpersonal relationships and empathy is important for effective education.
3. Data privacy and security: Using chatbots involves sharing student data, which raises concerns about privacy and security. Implementing robust measures to protect student information and ensure compliance with data protection guidelines and regulations is essential.
4. Dependency and critical thinking: Relying too heavily on chatbots may hinder the development of students’ critical thinking skills. It is important to strike a balance between using AI tools and encouraging independent thinking, problem-solving, and creativity.
5. Ethical considerations: Chatbots must be programmed and used responsibly, taking into consideration ethical concerns such as bias, discrimination, and fairness. Schools need to address these issues and provide guidelines for responsible AI usage.
To maximize the benefits and mitigate the challenges of chatbots, schools should pair their use with guidance and supervision from teachers. This blended approach ensures a well-rounded educational experience that draws on the strengths of both AI and human interaction.
Chatbots in universities
The advantages and challenges of using chatbots in universities share similarities with those in primary and secondary schools, but there are some additional factors to consider, discussed below.
Advantages of using chatbots in universities
1. Advanced subject knowledge: Chatbots can provide in-depth explanations and resources for complex topics across various disciplines. They can assist students in higher education by offering detailed insights and serving as a knowledge repository.
2. Research support: Chatbots can aid students and researchers in gathering preliminary information, identifying relevant sources, refining research questions, and constructing figures. They can save time and provide a starting point for further investigation.
3. Distance learning and flexibility: Chatbots enable universities to offer flexible learning options more easily, including online courses and distance education. Students can access educational resources and receive guidance remotely, expanding access to education.
4. Collaboration and discussion: Chatbots can facilitate online discussions, group projects, and collaborative learning experiences, allowing students to engage with peers and share ideas, fostering community and active participation.
5. Virtual tutoring and mentoring: Chatbots can provide virtual tutoring and mentoring services, guiding students through coursework, assignments, and career advice. They can supplement the support offered by faculty members and academic advisors.
Challenges of using chatbots in universities
1. Academic integrity. The use of chatbots raises concerns about academic integrity and plagiarism. Universities must establish clear guidelines and policies to ensure that students use AI tools appropriately and give proper credit to original sources.
2. Disciplinary limitations. Chatbots' expertise is based on the training data they have received (although they can “learn” from exposure to new information), and they may not possess depth of knowledge in specialized or niche areas. In such cases, subject matter experts should be consulted for accurate and comprehensive information.
3. Critical thinking and analysis. Although chatbots can provide information, they should not substitute for, but rather spur, the development of students' critical thinking and analytical skills. Universities need to emphasize the importance of independent research, critical evaluation, and synthesis of knowledge.
4. Quality control. Chatbots’ responses can vary in accuracy, and there is a risk of conveying incorrect or biased information. Universities must implement quality-control mechanisms to verify the accuracy and reliability of AI-generated content. Special care must be taken in situations where faulty information could be dangerous, such as in chemistry laboratory experiments, using tools, or constructing mechanical devices or structures.
5. Ethical considerations. Ethical issues such as bias, fairness, and privacy are relevant in university settings. Universities should address these concerns and establish ethical guidelines for the responsible use of AI technologies.
Summary: Universities can leverage the benefits of chatbots while being mindful of the unique challenges they may face. Faculty members play a crucial role in guiding students, promoting critical thinking, and curating the AI-generated content to provide a high-quality and ethical learning experience.
Ethical Use Policy for Chatbots
1. Respectful and Lawful Use. Users must use chatbots in a manner that respects the rights and dignity of others. They should not be used for malicious purposes, harassment, hate speech, or any activity that violates applicable laws or regulations.
2. Accountability. Users are responsible for how they use the content generated by chatbots during their interactions. They should ensure that the information they provide and the way they use the model align with ethical standards and legal obligations.
3. Avoiding Misinformation and Disinformation. Users should be cautious about the information generated by chatbots and not rely solely on them as sources of information. They should critically evaluate and fact-check the responses to prevent the spread of misinformation or disinformation.
4. Transparency and Disclosure. Users should clearly disclose when they are utilizing an AI system such as a chatbot in their interactions. They should make it apparent when responses were generated by an AI model rather than authored by a human.
5. Privacy and Data Protection. Users should prioritize the privacy and data protection of individuals when using chatbots. They should avoid sharing sensitive personal information and refrain from using the model to extract or manipulate personal data without proper consent.
6. Fairness and Bias Mitigation. Users should be aware of potential biases in the training data that chatbots are based on and take measures to mitigate the amplification of biases in the generated content.
7. Continuous Learning and Improvement. Users should stay informed about the latest developments and best practices in AI ethics. They should strive to understand the limitations and capabilities of chatbots and contribute to the responsible and ethical use of AI technologies.
8. Feedback and Reporting Concerns. Users should provide feedback to OpenAI, Google, and other relevant creators and stakeholders regarding any concerns or issues they encounter while using chatbots. Reporting any instances of misuse or ethical violations will help to improve the system and its guidelines.
Summary: These points offer a general framework, and users should consider adapting and expanding it to align with their specific uses and ethical considerations.
Conclusions
Chatbots’ ease of use and ability to rapidly create human-like text, including everything from reports, essays, and recipes to computer code, ensure that the AI revolution will provide powerful tools for students at every level to improve their capabilities and expertise. The list of apps and services grows longer every day.
However, like most powerful technologies, chatbots present both challenges and opportunities. We have discussed strategies to minimize the former and accentuate the latter.
Georges Seil is a professor at Rushmore University’s Faculty of Science. Henry I. Miller, a physician and molecular biologist, is the Glenn Swogger Distinguished Fellow at the American Council on Science and Health. He was the founding director of the U.S. FDA's Office of Biotechnology.