How Professors Are Leveraging 7 AI Tools in the Classroom



At their best, AI tools can be used to improve the grammar and clarity of student writing, enrich student knowledge of course material, and center and analyze student reading responses. Chatbots can also help professors create quizzes or study guides. At their worst, AI tools can spread errors, misinterpret information, rely on biased or faulty data, or be used for plagiarism. This month, a not-yet-peer-reviewed MIT study by Nataliya Kosmyna et al. also found that using AI assistants while writing essays reduced writers’ cognitive activity compared to unassisted essay writing.

How, then, can AI tools be used in a way that promotes critical thinking? With that question in mind, let’s examine some common AI tools, their strengths and drawbacks, and how professors are utilizing them in a way that can enhance critical thinking, not replace it.

Microsoft Copilot and Grammarly AI

These two tools fall into the category of writing assistants. Microsoft Copilot operates within Microsoft apps, whereas Grammarly AI is more flexible and can be used within web browsers via extensions or within Google Docs. Both AI writing assistants will create texts from prompts, edit text for grammatical correctness and clarity, create outlines, and perform other assorted writing tasks. They can even create text that mimics the user’s own writing style; in Grammarly, this is called “personalized voice detection and application.”

On one hand, these tools can help students improve their writing and understand patterns of error; on the other, if used improperly, Grammarly and Copilot can help students avoid the act of writing.

Dr. C. Balam-Kuk Solís, a lecturer in the department of organization, workforce, and leadership studies at Texas State University, uses Copilot as a way for students to automate the creation of data tables. “I teach courses associated with applications development for non-computer science majors,” Dr. Solís explains. “My goal is to help students who traditionally did not enter the computer development fields to see themselves in a new light that allows them to create a greater vision for themselves. My courses take students from concept development to deployment of web or mobile apps. Prior to AI generative tools, students had to spend quite a bit of time creating data sets that would enable them to build the application as a proof of concept. Now, using gen AI, such as Microsoft Copilot, ChatGPT, Gemini, or Canva, the students envision a project, figure out the type of data they need, and in seconds they can generate a data table with over a hundred different cases that they can use to build the application interfaces against.”
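To give a sense of what this kind of generated data table looks like, here is a minimal Python sketch that produces a 100-case CSV like the proofs of concept Solís describes. All column names, value ranges, and the filename are illustrative assumptions, not part of any specific course assignment:

```python
import csv
import random

random.seed(42)  # reproducible sample data

DEPARTMENTS = ["Sales", "HR", "Engineering", "Marketing"]

# Build 100 synthetic "cases" a student could prototype an app against.
rows = [
    {
        "id": i,
        "name": f"Employee {i}",
        "department": random.choice(DEPARTMENTS),
        "years_of_service": random.randint(0, 30),
    }
    for i in range(1, 101)
]

# Write the data table to a CSV file the app interface can load.
with open("sample_data.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)
```

The point is not that students should hand-code this instead, but that a gen-AI prompt like “give me a table of 100 employees with departments and tenure” replaces exactly this kind of boilerplate data-preparation work, freeing class time for interface design.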

Image and Video Generators

There are a variety of AI image and video generators with free and paid options. These tools can be used by faculty to create attractive infographics or by students to create images that express their identity. Students and faculty, however, should be aware of the ethical and intellectual property ramifications of these tools. Disney, for example, has initiated a lawsuit against Midjourney, and other copyright suits are ongoing. While many industries nonetheless use image and video generators, faculty should carefully weigh the educational benefits of these tools against ethical concerns.

Perplexity AI

Perplexity is a search engine that claims to cite and carefully curate its sources. While it does automatically cite and limit its sources, it still may rely on untrustworthy information. Students could therefore be required to evaluate and vet the sources that it uses.

Amnesia AI

This tool is a unique “choose-your-own-adventure game” that places the user in an embodied historical moment to teach about a chosen topic. Students could potentially be asked to use this tool as a way to familiarize themselves with new topics in a more “immersive” way. However, Amnesia doesn’t provide citation information or share where it gets its information from, so it may be unreliable. Nonetheless, students could be asked to fact-check the information it presents.

Poe AI

Poe AI is an interface that allows users to evaluate how different chatbots respond to the same query. Professors might use Poe to evaluate which chatbot would be most effective for a given task.

NotebookLM

Google Labs developed NotebookLM as a research and note-taking tool that uses the AI capabilities of Gemini. Professors can use it to summarize student responses or to engage with course readings. NotebookLM can also change texts into audio and create podcasts. Like many other chatbot providers, Google states that it does not use uploaded text to train its AI.

Dr. Timothy Beal, a distinguished professor of religious studies and the director of h.lab at Case Western Reserve University, uploads all student writing responses to an assignment into NotebookLM. Students then ask NotebookLM to synthesize “common themes” and “big questions” from their responses, and they reflect on the outputs individually, in small groups, and as a class using the 1-2-4-ALL method.

Finally, Beal “[uses] the W3 group method (What? So what? Now what?) in the larger group to process what happened, why it might matter, and where to go from here.”

This use of NotebookLM demonstrates how student writing can be centered and discussed by students, rather than having AI compose text or develop questions. Beal discourages plagiarism by creating assignments that are not traditional papers, but rather “are in-depth accounts, narratives really, of those experiments, written or recorded in the first-person voice and rooted in personal experience (AI can’t do that).”

Through this approach, Beal seeks to follow “a praxis approach of doing and reflecting,” which means “creating spaces for hands-on experimentation with new and emerging computational tools and methods — individually and collectively, in and out of the classroom — and critically reflecting together on our experiments,” with an aim to “[help] students develop as socio-technical leaders.”

Customizable Chatbots

Beyond these general tools, the past year has witnessed numerous collaborations between universities and generative AI companies. Most notably, OpenAI has launched ChatGPT Edu, “a version of ChatGPT built for universities to responsibly deploy AI to students, faculty, researchers, and campus operations.” Universities and schools such as Columbia, the Wharton School, the University of Nebraska, and others are using ChatGPT Edu to create and train their own chatbots. Universities can upload course materials and syllabi to their chatbots, and in turn the chatbots can answer student questions or even quiz students on course content. Once trained, chatbots can be used as teaching and learning aids in tasks as complex as demonstrating legal arguments or helping students practice German at a certain language level.

Anthropic seeks to compete with OpenAI with its Claude for Education chatbot, as does Google with Google Gemini for Education.

All of the aforementioned chatbots promise to keep data private and not to train their chatbots using inputted text. However, even carefully curated chatbots are not perfect: law professors at Stanford, led by Lisa Larrimore Ouellette, found that chatbots made a substantial number of serious errors when they were quizzed about an inputted legal text.

Dr. Elizabeth Calloway, an assistant professor of English at the University of Utah, who is part of the National Humanities Center’s Responsible AI Project, teaches the courses Praxis Lab in Responsible AI, Responsible AI and the Literary Imagination, and Writing for Humans in the Age of AI. Through her work, Calloway aims not to teach students “how to use AI, but rather how to make informed decisions about when and how they use AI and what they are agreeing to, supporting, or exposing themselves to when they do use it.” Her students “…study things like AI bias, [data sources], surveillance, labor, and environmental consequences…[They also study the] mental health and societal consequences of these types of AI.”

To address how chatbots create easier opportunities for students to plagiarize, Calloway tries to give students assignments that inspire and motivate them. She also clearly explains the “purpose and benefit of the assignment,” so that students understand how not doing the assignment would harm their learning. Finally, she assigns texts such as “Inside the AI Factory” by Josh Dzieza to remind students that “they’re still taking credit for someone else’s work if they turn in AI work as their own.”

Final Thoughts

As the above suggests, AI should ideally follow — not lead or even co-author — students’ critical thinking and writing. Bringing in AI tools after a student has conceived of and even drafted a project can leave more room for the student’s critical and creative thinking. While AI may threaten to usurp key analytical and self-critical thinking processes, teaching students how to critically engage with these tools, while difficult, is a moral imperative.


