Field Trip: AI in Education: Current Outlook? What’s Coming Next? How Should Schools Respond?

 

Diving into the evolving landscape of artificial intelligence (AI) in K-12 education, with a focus on its implications for educators, administrators, and students! Dr. Ellen Agnello, a former classroom teacher turned consultant and researcher, shares insights from her recent study on AI’s integration into schools, addressing teachers’ and administrators’ perceptions, the potential uses of AI for learning and administrative tasks, and the challenges involved in adopting this technology. 

We also highlight: 

  • Key differences in comfort and attitude between teachers and administrators regarding AI 
  • The role of AI in enhancing teaching practices and professional development 
  • The importance of AI literacy and the need to balance technological advancements with the intrinsic value of interpersonal learning experiences 
  • What the future of AI in education may hold

 

Timeline

00:00 – Introduction to AI in Education: Perspectives and Potential 

00:49 – From Classroom to AI Research 

03:44 – Surveying Educators on AI: Insights and Surprises 

07:45 – AI Detection Tools and Teacher Concerns 

09:19 – Navigating the AI Spectrum in Schools 

11:15 – Demystifying AI for Educators 

13:57 – AI’s Practical Applications in the Classroom 

22:09 – Expanding AI Use Beyond the Classroom 

25:17 – Looking Ahead: The Future of AI in Education 

Related Resources: 

Full Transcript  

DR. ELLEN AGNELLO: A lot of teachers were saying things like, “I don’t know why we would even be considering bringing this technology into school.” Whereas the administrators were saying things like, “We need to figure out how to productively bring this technology into school.”

Today we’re diving into a topic that has been buzzing in a lot of educators’ minds, AI, and specifically AI in our schools. How is artificial intelligence shaping the future of learning, teaching, and educational administration? From demystifying AI to uncovering its applications and addressing the dilemmas it brings, join us as we consider these issues and more, offering a peek into what may be next for schools across the globe. From Frontline Education, this is Field Trip. 

 

***music*** 

 

RYAN ESTES: Our guest today is Dr. Ellen Agnello. 

ELLEN: Sure. Hi, I’m Ellen Agnello. I used to be a classroom teacher. I was a ninth grade English teacher for a while.  

Since getting her doctorate, Ellen has been consulting with school districts, freelance writing on education topics, and teaching pre-service teacher classes at a university in Connecticut. And lately, she has been exploring and researching the use of artificial intelligence in education. 

 

RYAN: Well, Ellen, to begin, we’re here to talk today about some of the research you’ve been doing into the use of AI in K-12 education. Could you begin by just sharing with us what inspired you to take on that project? What were your primary objectives for this study? 

ELLEN: Sure. So last spring, as I mentioned, I was finishing up my dissertation. It explored human cognition, or more specifically, whether the brain comprehends texts containing numbers differently from texts containing words alone. And I became interested in that because before I started my doc program, when I was teaching high school English, I noticed that my kids had a much more difficult time comprehending informational texts than narrative texts or, you know, short stories. 

In doing that research, I was reading a ton about things like neural networks and semantic webs, mental models, schema, constraint satisfaction, and around the same time, ChatGPT and other large language models were really gaining a lot of public attention. And I noticed that same language being used in reference to them, which makes sense, because those systems are trying to replicate human cognition. They’re trying to mimic how our brain works. And then I started tuning into the AI and education conversations and found them really interesting. 

A lot of the discussion at that time was about whether it’s possible to catch students using AI to generate their assignments. And so I started playing with AI text detectors and found that the complexity of the text, which is one of my main research interest areas, really impacted the detectors: they could easily detect simple, straightforward AI-generated texts, but were often tricked when I ran more complicated AI-generated texts through them, that is, ones that I made by prompting ChatGPT to add figurative language or more complex diction or syntax. 

And from there, I was really all in. There were a lot of voices in the K-12 space yelling very loudly about the benefits of AI, and so I wanted to cut through that noise a little bit and see how it could be applied in both authentic and useful ways. So I reached out to some districts and started meeting with leadership to explore their thinking and come up with a plan for some professional learning sessions on AI. 

And finally, leading to how this project started, we felt that an important first step was to explore teachers’ current understandings and perceptions of the new technology. So the goal of the research was really to pre-assess knowledge and attitudes so we could design professional learning sessions that would address knowledge gaps and meet the educators where they were in the moment instead of making blind assumptions about their knowledge and attitudes. 

RYAN: Well, how did you go about conducting the research? What did the process actually look like? 

ELLEN: So, we administered a survey that we developed, and by “we” I mean myself and the school leadership that I worked with over the summer. We administered that survey to about a hundred middle and high school teachers in Connecticut public schools last summer, and the goal was to quickly gauge their knowledge, attitudes, and perceptions of AI. 

I surveyed the teachers, but also from that initial survey I created a professional learning session that I facilitated with a group of district and school administrators for the first time last summer. And so after that first session, I surveyed the administrators as well, because I wanted to know how their knowledge and attitude towards AI compared to the teachers’. 

RYAN: Well, let’s get into what you found. Was there anything surprising or that you found particularly transformative or just really interesting? What did you learn as a result of doing this survey? 

ELLEN: I learned a lot of really interesting things. I had already been pretty immersed in AI, so my knowledge was pretty comprehensive at that point. And when you’re inside your head, you kind of assume that everyone has that same baseline knowledge. But I learned that wasn’t necessarily the case. And before I share more about my findings, I think it’s a little important to remind ourselves that the surveys did go out last summer, so the data is a bit dated now, especially because we’re talking about knowledge and attitudes in tech, all of which can change overnight. 

RYAN: Oh, AI has been accelerating so quickly. Everything’s changing one month to the next, it seems. 

ELLEN: Yeah, really overnight. But even still, I was surprised at the time by how low the teacher comfort was. So on a Likert scale of zero to five, the average was 1.6. 

RYAN: What does that mean? When you say what, 1.6? What does that tell us versus five? 

ELLEN: So, five is very comfortable, like expert-level comfort, and zero is absolutely no confidence, no comfort at all. So 1.6 is not even the midway point, right? It’s extremely low comfort. And then when I broke the data up into content area cohorts, I found that some content area groups like special ed fell below that average, and others exceeded it, like math and science. Although I guess that might not be terribly surprising. 

In comparing the teachers to the administrators, the administrators had a slightly higher average. So, theirs was about 2.2, almost at the midway point. 

RYAN: Meaning that they are a bit more comfortable with AI than teachers are.  

ELLEN: A bit more knowledgeable, a bit more comfortable. And at the same time, their open-ended responses revealed much more positive attitudes than the teachers’. So they had slightly more confidence in their knowledge of AI, and their responses revealed a positivity that wasn’t really there in the teacher responses. The teachers were more neutral, more negative. 

RYAN: Can you give me some examples of what that looked like? When you say you were getting positive responses from administrators, what might that look like compared to what the teachers were saying? 

ELLEN: So a lot of teachers were saying things like, “I don’t know why we’re even entertaining this idea. I don’t know why we would even be considering bringing this technology into school.” Whereas the administrators were saying things like, “We need to figure out how to productively bring this technology into school.” 

So really, the question amongst the teachers was, “Should we be bringing AI technology into the classroom? Should we integrate it into teaching and learning?” But the administrators weren’t questioning whether or not it should be brought in. They had already accepted that AI is becoming a part of teaching and learning, part of the curriculum. Their question was, “How do we get there? And how do we do it in ways that will most benefit students and teachers?” 

RYAN: You already touched on what teachers are looking for, which, you know, teachers are concerned about kids using ChatGPT to write essays for them, and they were, you’re saying, asking, “Hey, is there a way that AI can be used to detect AI-generated text?” Right? 

ELLEN: Big concern, huge concern for the teachers. That came up again and again and again. And so that became a big part of the professional learning curriculum that I developed, because they can’t. A lot of the detectors claim to be the most accurate. “We can tell if it’s ChatGPT-generated.” But they simply can’t. And in playing with a number of them, I found that you can manipulate ChatGPT to create text that those detectors will say is human-generated. And also, you can input human-generated text that the detectors will say is AI-generated. Because they’re so unreliable, teachers can’t depend on them. I hate to say it, I think teachers want that black and white, “Can we figure it out?” But I always tell them that the best AI text detector is the teacher, because they’re the ones who know their students best. 

RYAN: Yeah. Well, I think a lot of the conversation around AI really is on the side of caution, and in some cases, fear, and certainly worries about, “What will this look like? What will this do to jobs, to schools? How will all of this actually shake out?” But at the same time, there’s the flip side where you have the potential for some really interesting things to happen as a result of using this technology. 

And I know that schools are on a spectrum in terms of some of them being very reluctant. Absolutely locking down. “No, we’re not going to bring in any AI technology.” And others, you know, on the other end are saying, “Sure, bring it in. Let’s give this a shot and see what happens.” What did you find as far as where schools are tending to fall on that spectrum? 

ELLEN: I think you’re right. There is certainly a spectrum. I can’t say that most schools are falling on any particular location within that spectrum. I have worked with schools that have blocked every single application. And I think the reason for blocking the applications is that they’re concerned about what will happen to sensitive data, especially thinking about students. 

Some schools have to clear applications, internet applications, through actual clearinghouses in order for them to be used by students and faculty with minimal legal risk. And so these schools, primarily public schools, are falling on the conservative end of the spectrum, where they’re blocking applications, or maybe they’re allowing teachers to submit applications because the teachers are curious about attempting some sort of project using a new AI technology with their students. Those applications are vetted, approved, and then the teachers are responsible for tracking the progress and reporting back to the rest of their colleagues. So I’ve seen some programs like that. 

Other schools, maybe, are allowing access to ChatGPT or certain AI applications, but then blocking others. And then there are some schools that maybe aren’t as concerned about legal ramifications, whether they’re private or something of that nature, and so they are taking a much more liberal approach and allowing both faculty and students to explore. 

RYAN: For a minute, let’s think about any administrators who might want to begin implementing AI technology. They see some of the benefits, or at least are saying, “This is coming. Let’s get ahead of it.” As they try to take advantage of this technology, what are some of the most significant hurdles that you’re seeing they’re going to face? I’m guessing that given the mixed feelings that some people have, simply demystifying how AI works and showing people the ways that these tools can make certain tasks a lot easier would be a key step. 

ELLEN: Yeah, I think the greatest hurdle is the fear of AI, which, in working with the leadership last summer, we thought we could quell by showing teachers that they’re already using AI in their daily lives for a ton of different tasks. And then another thing that we’ve done is boiled it down in the most simplified way possible to explain how generative AI, specifically large language models like ChatGPT and its competitors, work. To do that, I explained that in its training phase, ChatGPT was fed a ton of internet texts and made to calculate the probabilities of words appearing next to each other given a particular context, meaning your input. So to produce an output, ChatGPT basically does the same work: it generates a highly probable string of words based on that prompt input. The example I like to give is asking ChatGPT, “What is AI literacy?” Its output begins with “AI literacy is,” because it knows this is how human language works: when we’re asked a question, we respond with the subject. Then it generates a list of probable next words and selects one, so the output becomes “AI literacy is the.” Then it generates the next most probable word and selects one: “AI literacy is the knowledge.” It keeps generating the next most probable word until the output is complete, and it does a much faster job than I do. 
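
To make that word-by-word process concrete, here is a toy sketch in Python. It is purely illustrative: a real large language model learns billions of probabilities over word fragments from its training data, while this hand-written table covers only a few words.

```python
import random

# Toy next-word model: maps a context word to candidate next words and
# their probabilities. A real LLM learns these statistics from enormous
# amounts of text; this table is invented purely for illustration.
NEXT_WORD_PROBS = {
    "<start>": [("AI", 1.0)],
    "AI": [("literacy", 0.9), ("is", 0.1)],
    "literacy": [("is", 0.95), ("means", 0.05)],
    "is": [("the", 0.7), ("a", 0.3)],
    "the": [("knowledge", 0.6), ("ability", 0.4)],
    "a": [("skill", 1.0)],
    "means": [("the", 1.0)],
    "knowledge": [("<end>", 1.0)],
    "ability": [("<end>", 1.0)],
    "skill": [("<end>", 1.0)],
}

def generate(context="<start>", max_words=10):
    """Repeatedly sample a probable next word until the output ends."""
    words = []
    for _ in range(max_words):
        candidates = NEXT_WORD_PROBS.get(context, [("<end>", 1.0)])
        choices, weights = zip(*candidates)
        context = random.choices(choices, weights=weights)[0]
        if context == "<end>":
            break
        words.append(context)
    return " ".join(words)

print(generate())  # e.g. "AI literacy is the knowledge"
```

Real models sample over enormous vocabularies and condition on the entire prompt rather than just the previous word, but the generate-probabilities-then-select loop is the same idea Ellen describes.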

So the output really looks magical, and it sounds exactly like human language, but it obviously has some limitations, right? It’s just probabilities. It’s not that the technology knows what it’s producing or is an expert on AI literacy. The combinations of words are highly probable, but the information could be false. It could include a quote or other form of evidence that it made up or hallucinated. And these limitations and others are important to keep in mind when thinking about using it to take on or assist with job functions.  

Another strategy I’ve used to overcome the hurdle of accepting the new technology is to show how it could help with some of their more tedious tasks. And so we walk through tons of use cases, but always questioning the output and the authenticity of having the technology take on the task. 

RYAN: Well, let’s get into that a little bit. Take the classroom, which is, I think, where people’s minds tend to go first when they think about AI in schools. And we’ve already touched on the fact that teachers are understandably wary of students using ChatGPT to write their essays. But you make the case that this technology can be put to really good use for teachers, and maybe in some ways that many people might not have thought about. What would be some of those ways? 

ELLEN: Sure, it can be used for a variety of school tasks for sure. But when you say good use, that makes me think of two things. The first being task success, like when I ask it to create a rubric based on an assignment description that I created, it can complete that task. So maybe that is good use. But the second thing it makes me think of is, should teachers be passing off that task to technology? And is that task even a good task or an example of best practice in the first place? The way I use AI personally, and how I encourage teachers to consider using it, is as an assistant or an intern. So, delegate tasks that you are expert at, give highly prescriptive parameters, and then always check its output or its work. 

With that little caveat, there are lots of things that teachers can use it for to save time, but I don’t like to promote these things as best practice. I like teachers to make that determination for themselves. 

Assessment is one area that teachers can use it for, but I’ve gotten some mixed results when I’ve used it, depending on the type of assessment. So one use case that I think is really successful is using ChatGPT to identify students’ misconceptions on a prior knowledge assessment. Imagine you’re a ninth grade science teacher. You have 120 students across five sections of earth science, and you want to pre-assess their knowledge of ecosystems. You assign them a Google Form and a question prompts them to enter what they know about the term ‘ecosystem.’ Each student writes in at least one sentence, so you have 120 responses to read, but you want to identify gaps quickly so that you can use them to plan instruction for tomorrow. 

You could sit there and read through all of them. It would take you all night. You’d probably be up until 2:00 AM. But to speed up the process, you could prompt ChatGPT to read through the student responses and identify what they currently know about ecosystems as well as their misconceptions for you. You’re the expert, so you can tell whether ChatGPT’s analysis is correct or incorrect, and you can always return to your data set to check its work. 

After writing your prompt along those lines, you paste all 120 student responses into ChatGPT and hit send. In seconds it’ll return its analysis, and it does a pretty good job of this, I think, because it’s a really straightforward task and it has probably learned a lot of language about ecosystems. So it’s able to catch language that doesn’t belong, given that context, and it’ll flag those responses as misconceptions. 
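
For readers who would rather script this workflow than paste into the chat window, here is a minimal sketch using OpenAI’s Python client. The model name, prompt wording, and helper function are illustrative assumptions, not anything prescribed in the episode.

```python
# Hypothetical sketch: batch-analyzing pre-assessment responses with the
# OpenAI Python client (pip install openai). Model and prompt are assumed.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def find_misconceptions(responses: list[str]) -> str:
    """Ask the model to summarize what students know about ecosystems and
    flag likely misconceptions. The teacher remains the expert who checks
    every claim against the raw responses."""
    numbered = "\n".join(f"{i + 1}. {r}" for i, r in enumerate(responses))
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; substitute what's available
        messages=[
            {"role": "system",
             "content": "You are helping a ninth grade earth science teacher."},
            {"role": "user",
             "content": "Here are my students' answers to 'What do you know "
                        "about the term ecosystem?' Summarize what they "
                        "currently know and list any misconceptions:\n"
                        + numbered},
        ],
    )
    return completion.choices[0].message.content

print(find_misconceptions([
    "An ecosystem is all the animals in a zoo.",
    "It's living things plus their environment.",
]))
```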

You could do something very similar with students’ writing. For this, though, I wouldn’t use ChatGPT, because if you’re inputting entire student essays, it’s just going to overload ChatGPT. Its input limit is somewhere between 1,000 and 3,000 words. And most people are like, “Why would you prompt it with more than 3,000 words? That’s a very long prompt. What are you asking it to do?” Well, if you want it to analyze longer data sets, like an entire class’s worth of essays, then you might want to use a different large language model like Claude, which has an input limit of around 175,000 words. 

You could give Claude a prompt similar to the ChatGPT one, “Here are my students’ narratives. Identify common areas of strength and need,” then upload their work and hit send. So if Claude tells you that narrative structure is a need area, you could immediately get to work designing a lesson targeting that, and teach that lesson the next day instead of weeks later, after you’ve read all 120 students’ five-page papers. That’s a huge time savings. But of course you want to check, you want to go back and make sure that this is actually a need area and that the computer didn’t just hallucinate it. 
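
A similar sketch against Anthropic’s Python client, which is one way to take advantage of the longer input limit Ellen mentions. Again, the model identifier and prompt wording are assumptions for illustration only.

```python
# Hypothetical sketch: whole-class essay analysis with the Anthropic
# Python client (pip install anthropic). Model name and prompt assumed.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def analyze_essays(essays: list[str]) -> str:
    """Send an entire class's narratives in one request and ask for common
    strengths and needs; always verify the answer against the essays."""
    joined = "\n\n---\n\n".join(essays)
    message = client.messages.create(
        model="claude-3-5-sonnet-latest",  # assumed model identifier
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": "Here are my students' narratives. Identify common "
                       "areas of strength and need:\n\n" + joined,
        }],
    )
    return message.content[0].text
```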

RYAN: And you also, I believe, talk about using AI as a way to help design a sort of self-driven professional development curriculum. Can you talk about that?  

ELLEN: Yeah, I think this would be a really empowering use case. Teachers could use ChatGPT, or Bing’s LLM, which they call Copilot. And they could use these to generate professional learning curricula based on personal goals or on goals identified through evaluations. So they can start by providing some context: who they are, what their goal is, what the target output format is, and then some inclusion criteria for professional learning materials. 

For instance, I prompted Copilot by saying, “I’m a fourth grade teacher whose professional goal is to implement writer’s workshop to improve my students’ writing self-efficacy. Design a professional learning curriculum. It should look like a college syllabus. It should span 12 weeks and include articles to read, videos to watch, and podcasts to listen to. All content must have been published between 2013 and 2023,” and it’ll do that. And the reason I’d use Copilot for this is that it’s like using a loophole to access GPT-4; Copilot is built on top of it. So if you don’t pay for GPT-4, the paid model that is internet-integrated, you can still access it and still have that internet integration using Bing’s Copilot. And so Bing will scour the internet, supposedly, and pull resources for you, which the free GPT-3.5 cannot do, right? It’s a static technology. 
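
The prompt Ellen describes has a reusable shape: context (who you are), a goal, a target output format, and inclusion criteria. A small, hypothetical helper that assembles such a prompt for Copilot or any chat LLM might look like this:

```python
# Hypothetical helper that builds the structured PD-curriculum prompt
# Ellen describes. The function name and fields are illustrative only.
def build_pd_prompt(role: str, goal: str, output_format: str,
                    criteria: list[str]) -> str:
    """Assemble context, goal, output format, and inclusion criteria
    into a single prompt string to paste into a chat LLM."""
    lines = [
        f"I'm a {role} whose professional goal is to {goal}.",
        "Design a professional learning curriculum.",
        f"It should look like {output_format}.",
    ]
    lines.extend(criteria)
    return " ".join(lines)

prompt = build_pd_prompt(
    role="fourth grade teacher",
    goal=("implement writer's workshop to improve my students' "
          "writing self-efficacy"),
    output_format="a college syllabus",
    criteria=[
        "It should span 12 weeks and include articles to read, "
        "videos to watch, and podcasts to listen to.",
        "All content must have been published between 2013 and 2023.",
    ],
)
print(prompt)
```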

RYAN: Are there particular pitfalls that you can think of, Ellen, around these examples that you gave? Obviously this technology is still brand new. Growing fast, but brand new. Are there ways in which even these examples might be either not ideal or might be something to really be careful with? 

ELLEN: Sure. With the professional learning curricula, I’ve had varying levels of success. Because again, these are prediction models, so they’ll give you links to different resources. The links will be active, but when you click on a link, it might take you to a completely different webpage than what is advertised in the output. Does that make sense? 

RYAN: Yes. The completely incorrect link is what you’re saying. Like, where is this coming from? 

ELLEN: And so the link might actually be there in that curriculum, but it could be in a different section. And is the information the best quality? Is it best practice, or is it just optimized for search engines, which is why it’s easy for the technology to grab? So the resources might be the most available, but maybe not the most useful or informative. 

RYAN: I think what I hear you saying is, don’t check your brain at the door when you use AI for this kind of thing. You can’t just rely on it and assume that because it’s coming out of a computer, everything is going to be accurate, perfect, well conceived. 

ELLEN: Yes, exactly. And another thing to note when working with students and using this technology is that OpenAI has acknowledged through their own research that ChatGPT is biased. It has been found to produce output that has religious biases, racial biases, gender biases, and political biases. They know that this is an issue. It’s not that the technology was created to be prejudiced; you have to think of how the LLMs were trained. They were fed all these internet texts. Internet texts are produced by humans. Humans are inherently biased, so the output is going to be a bit skewed. But kids don’t understand bias, they don’t know how to detect it, and they trust this magical tool, which seems to know way more than they do. They may not think to question the output. 

AI literacy, I think, is going to be the next big thing. We have to figure out, what skills comprise that though? What knowledge comprises that? And figure out how to integrate that into education as soon as possible, because kids are already using this technology. 

Beyond the classroom, beyond equipping teachers, Ellen has been thinking about use cases for AI in areas like Human Resources, too. 

 

ELLEN: So I think we can all agree that HR has a ton of tasks on their plates similar to teachers. One that I think AI could help with is modifying job postings to reflect new district or school initiatives. So, for instance, imagine you work in HR at a really innovative district, and one of your goals for the school year is to start implementing culturally responsive practices, and another is to integrate AI into teaching and learning. And you’re already working on professional learning for current faculty and staff. You’ve made a ton of strides in getting existing staff on the same page, which we know is a challenge. But now you want to make sure that new hires are also on that page and they’re coming ready to innovate. So you can prompt ChatGPT to write a job description that aligns to those district goals. 

For instance, I could tell it, “You’re a human resources professional. Write a job description and essential job functions for the position of ninth grade English language arts teacher. The district’s goals are that staff will build capacity, knowledge, and skills around culturally responsive practices, and the district also wants to adopt generative AI technologies in K-12 classrooms.” And it will do a pretty comprehensive job of that. But again, the real experts will have to screen the output and make sure that it really aligns with those goals before posting it on LinkedIn or K12JobSpot. 

RYAN: Right, right. Yeah, that human screening, just to make sure we’re actually putting out there what we want to be putting out there. 

ELLEN: Exactly. We talked about the professional learning curricula use case too. Another thing that I was thinking of is that it’s very helpful in analyzing data. Say an administrator’s or an HR professional’s job is to generate professional learning curricula for their staff, and they surveyed the staff to probe what teachers want to learn about AI. They could take all of those survey responses, even if they had 250 teachers in their district, run them through ChatGPT, and ask for some help analyzing that qualitative data: “Identify trends or patterns in these survey responses. What do teachers know? What do they want to know? How are they feeling?” You could ask ChatGPT to do what I did in my survey. But again, it always goes back to this: the person who’s putting in that input needs to be the expert and comb through the response. 

And then from there you could save time in generating that PD. You can even take the survey analysis and say, “Here’s what my teachers currently know. Here’s what they want to find out. Generate a two-day-long professional development itinerary for me to target these needs.” And it will. It will generate an outline for you. 

RYAN: Which would at least be a place to begin, even if it’s not something to stamp with approval and put out the door as is. 

ELLEN: Yeah, you’re still going to be doing a lot of work, but it gives you a starting point aligned, ideally or hopefully, to the teachers’ needs. 

RYAN: I didn’t tell you I was going to ask this, but I am curious for you to look into your crystal ball. What do you think is coming next when you think about AI? And I’m not even asking about the AI industry, but when you think about the advances that we’re seeing on it seems like a weekly basis now, and you look at education in particular, what do you think is coming around the corner? And maybe, what are you optimistic about? 

ELLEN: I hate to say it, but I think AI came out at a really crazy time, right? Post-Covid, lots and lots of schools used funding to purchase tech devices to increase their 1:1 programs. And so now we have this super disruptive technology, and it feels really easy to simply flip the classroom, right? Every student has access. We can all have personal tutors in our backpacks. The teacher can take on a different role in the classroom. And I hope that I don’t see that, because I think students really need that personal interaction. And they don’t need to be on screens all day long. 

However, I am optimistic that students can take on personal learning initiatives if they’re so inclined. Maybe we’ll see more independent studies using this type of technology where students can pursue coding, for instance. 

But I hope that it doesn’t replace the in-person collaborative learning that has always been a part of K-12, especially in the younger grades. I think we will be seeing more student learning systems with AI capabilities that suggest content based on student assessment performance. And I think we’ll also see, or it would be cool to see, some student analytics systems where the technology helps us with the challenging task of tracking interventions and identifying students who need them. That’s something that’s very important, but it’s hard to find the time to do, and when it’s relegated to one person at a school, it’s extremely challenging. So applying the technology in that way, I think, would be pretty cool. But schools are always a little bit behind the technology curve, so I wonder how long it will take to get there. 

RYAN: This is great stuff. Dr. Ellen Agnello, thank you for sharing your findings on this research and talking to us about AI. I really appreciate your time coming on the podcast. 

ELLEN: Oh, thanks for having me. It was fun. 

Field Trip is a podcast from Frontline Education, the leading provider of K-12 solutions for human capital management, business operations, analytics, and student management. For Frontline, I’m Ryan Estes. Thanks for listening and have a great day.