
AI Session One: Uncovering the Potential and Risks of AI in SEA Practice (Transcript AD)

Describer:

Welcome to the Comprehensive Center Network’s Virtual Learning Series: Artificial Intelligence—Opportunities and Risks in Education. Session One: Uncovering the Potential and Risks of AI in SEA Practice, recorded on October 6th, 2023. Audio descriptions have been added for people who are blind or visually impaired.

Kate Wright:

I want to start by welcoming you and introducing myself. I’m Kate Wright. I’m the director of the Region 15 Comprehensive Center. We are thrilled to have you here on a Friday to join us to talk about artificial intelligence. This is the first in a three-part webinar series that really came out of conversations with our states over the summer, where we had some state leaders say they would love to have a space to talk about artificial intelligence. And so, in response, we’ve put together this three-part series to create that space. It’s in partnership with Region 2 and Region 13. So, I want to thank my colleagues Sarah Barzee, who’s the director of Region 2, and Steve Canavero, who’s the director of Region 13, for collaborating and putting some thought together to join us in thinking about: what is artificial intelligence?

What are the risks and opportunities for us as we consider it in the education space?

Describer:

On this slide, Kate Wright refers to a map of the United States titled “Comprehensive Center Program (2019 to 2024).” Clusters of adjacent states are color-coded to identify the 19 Regional Centers to which they belong.

Kate Wright:

Just to give you a little bit of grounding, the Comprehensive Center Network includes 19 Comprehensive Centers. I’ve spoken about the three that are participating today. The 19 Regional Centers serve the SEAs across their regions, and there’s one National Center.

So, we are all federally funded technical assistance centers serving our state education agencies.

Describer:

On another map of the U.S., the three regions participating in the webinar are highlighted blue and pulled out slightly from the map. Region 2, directed by Sarah Barzee, includes Connecticut, New York, and Rhode Island. Region 13, directed by Steve Canavero, includes New Mexico and Oklahoma. Region 15, directed by Kate Wright, includes Arizona, California, Nevada, and Utah.

Kate Wright:

You can see from this graphic, the three that are participating today, and just for your knowledge, too, the states who are participating today, the SEAs are coming from these states, Region 2, Region 13, and Region 15.

As we get kicked off this morning or afternoon, depending on where you are, we’d like to ask you to share with us: what is your role, and what state are you from? So, we’ve got a poll to get us started. You should see it now.

And if your state is not one of those that’s represented, please feel free to share your state in the chat. Okay, I think I might rely on my tech support here, but I think I’m going to end the poll and share results.

Brianna Moorehead:

I think they should be sharing, but if not, I’m happy to say a couple of our highest percentages.

Kate Wright:

Would you please, Brianna? From my perspective, with screen sharing, I can’t see.

Brianna Moorehead:

Absolutely. So, it looks like the majority of our participants have joined us from New York and Nevada. And for roles, it looks like the highest percentage is other. If you’d like to, you can share your role in the chat with us. And other than that, for the next highest, we have our deputy superintendents and education technology officers.

Kate Wright:

Perfect. Thanks, Brianna, and welcome. That’s helpful for us to know who’s in the room. As I mentioned, this is the first session in a series of three. This session is titled “Uncovering the Potential and Risks of AI in SEA Practice.” And we’re really going to be talking about the landscape of artificial intelligence from a somewhat high-level perspective this morning. In session two, we’re going to dig into a cohesive and comprehensive approach, where we really look at a technology plan within a state agency at the state level, and then how it impacts the local level, by considering how we manage, operate, and oversee artificial intelligence. And then session three will be focused on application, and we’ll be talking about what it looks like to work in a very quickly evolving AI policy landscape and how we work at a national, state, and regional level to support that work. All right, one last activity before we jump into our content.
Because artificial intelligence is such a quickly evolving space, we’d like you to think for us this morning about what words come to mind when you think about artificial intelligence. On the screen now is an opportunity to respond using Menti. You can use your phone to access the QR code or the URL is listed underneath as well.

If you’ll take a minute, please put your words into that application: what are the words that come to mind when you think about artificial intelligence? And in a second, we’ll share the Menti.

Describer:

As participants enter their one-word responses to Kate’s question in the online poll, the words appear on screen, shifting in size and location to represent the frequency of participants’ words in the poll.

Kate Wright:

All right, I love the word clouds. So, you can really see the balance of emotion as you watch the shift of the words on the screen: potential, exciting, scary—which navigated to the side now—evolving, powerful, transformative, unknown, automation, developing. Thank you. These words, I believe, likely resonate with all of our participants today.

They certainly were words that came to our mind as we started to develop the session and the series. Really understanding that as state leaders, you are working around artificial intelligence, understanding that it’s constantly changing. It offers tremendous potential that makes it exciting and also, at the same time, somewhat scary.

Describer:

On a slide titled “Artificial Intelligence—Transformative Technology,” a towering ocean wave of blues and greens curls over to form a tube against a background of the sun and illuminated clouds.

Kate Wright:

We’ll keep this in mind as we navigate through our presentations today. And we were thinking ourselves in our planning that artificial intelligence is thrilling, fascinating, and also confusing and somewhat terrifying. And it’s definitely the newest wave of things that we’re navigating in the educational space.

Our agenda for today: we’re in the middle of the first piece now. We’ll then move to our first content presentation, which is around recommendations from the Office of Ed Tech. We’ll have time to discuss and to reflect on those recommendations. And then our second content presentation is around policy and program guidance.

And again, we’ll offer cross-state reflection time and discussion, and then we’ll close and talk about next steps, including introducing the second in the series. Our objectives for today are to really understand the recommendations and explore the policies and guidance. And then, really, that last one is most important: offering time for reflection and discussion with your state peers.

It’s my pleasure to introduce our speakers for today’s session. Kevin Johnstun will kick us off. He is an education program specialist with the Office of Educational Technology in the U.S. Department of Education. And then our second two presenters are Alix Gallagher, the Director of Policy Analysis for California Education, or PACE, at Stanford University, and Glenn Kleinman, who is part of the Stanford Accelerator for Learning and, in addition, is our WestEd consultant for artificial intelligence.

I am now going to transition and turn it over to our first presenter, Kevin Johnstun. Thank you.

Kevin Johnstun:

Hi, everybody. It’s really great to be able to join you from D.C. this afternoon. I’ll introduce myself shortly, just with two quick factoids. Given the states that make up these regions, I thought it would be fun.
So, first, I am a graduate of Juab High School in rural Utah. And second, I was, once upon a time, a 7th and 8th grade English language arts teacher in Oklahoma City. So, it’s really great to be able to talk to all of you and think about some of the things that I’ve seen in both of those contexts and how they overlay with some of the recommendations that I’ll be talking about today.

Describer:

The slide provides the title of the report to which Kevin Johnstun refers: Artificial Intelligence and the Future of Teaching and Learning, by the U.S. Department of Education, Office of Educational Technology. www.tech.ed.gov/AI

Kevin Johnstun:

So, I’ll begin with the broadest overview of the administration and the things that they’re doing on artificial intelligence. Since taking office, President Biden, Vice President Harris, and the entire Biden-Harris administration have moved with urgency to seize the tremendous promise and manage the risks posed by artificial intelligence.

Right now, the Biden administration is developing an executive order that will ensure the federal government is doing everything in its power to advance safe, secure, and trustworthy AI and manage its risks to individuals and society. Over the past year, there have been critical moments worth noting.
In July, the Biden-Harris administration secured voluntary commitments from seven leading AI developers—Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI—that will help move us towards safe, secure, and trustworthy development of AI technologies. In June, President Biden met with top experts and researchers in San Francisco as part of his commitment to seizing the opportunities and managing the risks posed by AI.

In May, the President and Vice President convened the CEOs of Alphabet, Anthropic, Microsoft, and OpenAI to emphasize the importance of driving responsible, trustworthy, and ethical innovation with safeguards that mitigate risks and potential harms to individuals and society. Last October, the Biden-Harris administration published the landmark Blueprint for an AI Bill of Rights to safeguard Americans’ rights and safety. We’ll talk a little bit more about that in the presentation, but it also served to shine a spotlight on preventing algorithmic bias and leveraging existing authorities to prevent unlawful bias, discrimination, and other harmful outcomes. Another key part of this has been the President’s meetings with the President’s Council of Advisors on Science and Technology. He has now met with that prestigious group twice to discuss AI.

And the Biden administration is also working on an international framework to govern the responsible development and use of AI worldwide. So, within the education context, the Biden-Harris administration remains committed to addressing the increasingly urgent need to leverage technology in the education sector and to promote novel and impactful ways to bring together educators, researchers, and developers to craft better policies, products, and practices.

The Department’s Office of Educational Technology looks forward to our continued work in helping to promulgate a shared vision, inform educators, involve educators, and develop education-specific guidelines and guardrails. So, the biggest thing that we’ve done in this area is to release a report on artificial intelligence and the future of teaching and learning. If you go to the next slide.

A clear point from the get-go is that this report was a long time in the making. This was not something we started in November of 2022. Rather, it draws on decades of research in artificial intelligence and its potential and studied applications in education. We began working on it in 2020.

We brought together an expert panel, we collaborated with the National Science Foundation, and then we embarked on a set of public listening sessions that brought together over 700 different stakeholders to gather their input as we drafted the report. And we were also integrally involved in the Blueprint for an AI Bill of Rights that came out from OSTP, the Office of Science and Technology Policy at the White House, in 2022.

So, this is something that we’ve been committed to, that we have been thinking about, and that we didn’t start just when the latest wave hit the internet. Next slide.

So, I think there are a couple of important things to notice when we start talking about what AI is and what it’s doing in the education landscape.

So, we see AI as fundamentally enabling two kinds of shifts within educational technology. One is moving from education applications that capture a lot of data but don’t really have any way of pooling or aggregating it, to EdTech applications that use that data, detect patterns, and turn them into actionable insights in real time, or at least in rapid succession.

The other is that we’ve seen education technology focused a lot on providing access to instructional resources. So, you get big repositories. You get item banks, that kind of thing. We think that we’re seeing a shift now that’s guided by artificial intelligence to automating decisions about teaching and learning and moving things forward that way.

So largely we would characterize these things as accelerating, expanding, and creating new opportunities and risks. Next slide, please.

Describer:

Slide Title: Defining AI Broadly (It’s not just essay-writing tools.)

Kevin Johnstun:

So, I think this is a really crucial slide for SEA and district leaders to reflect on, which is that artificial intelligence is two things: one, not new, and two, not a single thing.

It is a conglomeration of multiple distinct things and multiple distinct technologies.

Describer:

Kevin refers to a diagram on the slide. Text in a circle at the center of the diagram reads “Artificial Intelligence.” Lines branch out from the circle and connect with 9 ovals surrounding the circle. Starting with the top oval and moving clockwise, the text in each oval reads,

  • Computer vision
  • Robotics
  • Natural language processing
  • Automated planning and scheduling
  • Optimization
  • Components of AI
  • Types of AI
  • Machine learning
  • Knowledge-based systems

Kevin Johnstun:

So, we had a really big moment with the rise of generative AI, where people were like, wow, this is incredible. And it became very visible. But it’s one of multiple strands that have all been advancing very quickly.

And so, we can’t just think about generative AI. We have to think about the ways in which that capability is going to interlay across a bunch of other capabilities. One that I think is always very interesting to think about is how you’re going to see predictive AI, computer vision, and generative AI overlay against each other and create new capabilities. And we’re in for surprises in all of these different lanes. Next slide, please.

Describer:

Slide title: Insights about AI in Education. Text reads: AI goes beyond existing educational technology capacity to enhance teaching and learning in multiple ways.

Kevin Johnstun:

So, the report specifically talks about what’s new in the age of AI in education. And we think that there are some things that are in fact going to be new. One, we’re going to see new forms of interaction. Anybody who’s ever been involved in e-learning has sent out a multiple-choice assessment and thought, I wish I knew what these people were thinking.

And that’s going to change. We’re going to be able to analyze that kind of input and provide feedback very quickly. We’re also looking at a world in which educators have virtual assistants and agents. Educators are able to not just understand the learner variability in their classroom, but also get real-time insights into how to address that learner variability. Now, reliability, veracity, all those things are part of this as well.

So, I’m not trying to be pie in the sky, but I think we’re going to see people attempt to do these things and have tools that give them a real shot. Adaptivity, and then also, like I mentioned, closing those feedback loops very quickly so people can give authentic responses and get feedback in basically real time. Next slide.

Describer:

Slide Title: Blueprint for an AI Bill of Rights. An October 4th, 2022 quote from the Algorithmic Justice League reads, “This blueprint provides necessary principles and shares potential actions. It is a tool for educating the agencies responsible for protecting and advancing our civil rights and civil liberties. Next, we need lawmakers to develop government policy that puts this blueprint into law.”

Kevin Johnstun:

So, as we’re thinking about this tremendous power, we’re thinking about scale, tailoring, uniqueness. All of those things are driven by the collection and analysis of people’s personal data. And I think that’s some of the bedrock of what the AI Bill of Rights is looking at.
And so, we have to be able to do this in a way that’s safe and effective, that doesn’t use that same data, or the same patterns within the data, to discriminate. We have to think about how we’re going to protect data privacy, and we have to think about how the AI is going to have larger societal impacts.

And we have to think about how people can meaningfully opt out and still receive quality services. So, all these things are going to be at the forefront of this. And we think this is foundational to what the OET report had been picking up on all along and is now trying to concretize within the education space. Next slide, please.

So, we offer this kind of metaphor of sorts when thinking about AI-

Describer:

Kevin refers to an illustration of a person riding an electric bike to build a metaphor for the role of AI in a technology-enhanced future.

Kevin Johnstun:

and we think the primary metaphor for the use of AI should not be a robot vacuum, something where you roughly put in a few overarching parameters and let it do its thing. Rather, we should think a lot more about the electric bicycle, where the human is integrally involved in each of those decisions.

But the AI is helping them get where they want to go and cover more ground because of its enabling features. And so, we offer this as our primary thought piece, the overarching framework, for how to leverage AI in education. Next slide, please.

And as you attempt to do this, there are going to be all kinds of trade-offs along the way.

Describer:

On the slide, three sets of parallel lines depict the factors to consider when designing, selecting, and evaluating AI tools. In each line pair, one factor is listed at the left end of the first line, and another related factor is listed at the right end of the second line. A short vertical bar tethered to the two lines represents a control which one can hypothetically slide left or right to indicate a preference for one factor over the other. The first pair of factors includes “Teacher in Control” at the left end and “Technology in Control” at the right end. The second pair of factors includes “Customized Assistance” at the left and “Teacher Surveillance” at the right. The third pair includes “Representing Students” at the left and “Protecting Student Privacy” at the right.

Kevin Johnstun:

So, you see the sliders: as you move towards one thing, you’re going to end up farther away from another, and that comes with its own set of advantages and disadvantages.

So, you want a ton of teacher control? Well, that’s really time consuming. You want a lot of technology control? That’s less time consuming, but we don’t know the quality. So, we are trying to figure out how to position these sliders as people are inventing products or policies or practices. Also, you want to have robust representation of students, but that requires a lot of data collection, as I mentioned previously.

So how are you going to protect that privacy but also allow the systems to work? One of the big problems in artificial intelligence is blind spots, right? If they don’t have the data, oftentimes it’s very hard for them to make correct predictions. So, think about these kinds of balances as you try to get these systems to do what we want them to do. Next slide, please.

Describer:

Slide Title: Evaluating AI in Education—Multiple Dimensions to Consider.

Kevin refers to a diagram that looks like a petaled flower surrounded by two concentric circles. In the center of the flower, text reads, “Centering Students and Teachers.” Starting with the top petal and moving clockwise around the flower, the text on each petal reads,

  • Privacy and data security
  • Aligned to our vision for learning
  • Inspectable, explainable, overridable
  • Minimize bias and promote fairness
  • Content-aware and effective across contexts
  • Transparent, accountable, and responsible use

The innermost circle surrounding the flower is labeled “Humans in the Loop.” The outermost circle is labeled “Within an Educational Systems Perspective.”

Kevin Johnstun:

So, we don’t have an overarching framework for how to integrate AI. This is too new and moving too fast. But we do have another thought piece for you, which is these concentric circles: thinking about all of these things and how to balance them against each other.

And so, we offer humans in the loop as foundational. It has to go around everything that we’re doing. But then inside of that, you have accountability, you have fairness, all of these things. And you’re trying to figure out how to get things to move in the right direction by balancing all of these concepts against each other. Next slide, please.

So, we offer seven recommendations in the report. I won’t go through all of them, but I will highlight a few of them. First, I’ll just double click on humans in the loop. And when we say humans in the loop, we don’t mean marginally or passively in the loop.

We mean fully within the loop, so that if they’re not guiding each decision, they are at least able to drill down into every meaningful decision, so that they can have trust in and understanding of the system. So, we also think that it’s absolutely critical that these tools are not developed in service of what they can do, but in service of how they can fulfill a vision for education.

So, we shouldn’t start with “Wow!” We should start with “What do I want to see happen?” And we should ground that in modern learning principles. And we should definitely inform and involve educators in the design of these tools. That’s one of the pieces we’re really emphasizing within the Department of Education.

I had the opportunity to spend about five months at the Institute of Education Sciences. And we were very clear that, for some of these transformational applications that we’re trying to set up, a partnership between a product developer, an SEA or an LEA, and a researcher is table stakes.

We’ve got to be able to get those in place so that we have everybody at the table as we move forward. And I think that also goes along with the fact that we are in very early days. I’ve heard quite a few people use this phrase: the version of the technology that you’re experiencing right now is the worst it will ever be.

It’s going to move forward from here, but that’s going to take a lot of intentional R&D to get it to do exactly what we want and to do it in a way that’s safe. And so, the last thing is we’re trying to think really hard about developing guidelines and guardrails, both for the areas where the federal government has particular authorities, and also any kinds of guidance we can give in other spaces as well.

Describer:

Other recommendations listed on the slide include:

  • Align models to a shared vision for education, and
  • Prioritize strengthening trust.

Kevin Johnstun:

So, the last slide.

Anyway, we’re in the process of developing a new toolkit that will follow on to the report and go one level down. And then we’re also working on some new higher ed guidance, so if you’re a state that oversees higher ed, we’re working on that as well. We’re going to be pushing out some of these new documents over the coming year. Now, it looks like you’ve got the poll up, so I’ll pass it over to you.

Describer:

The poll question appears on screen: “Which of the recommendations resonate the most with you currently?”

Julie Duffield:

We have things coming up in the poll, and developing guidelines and guardrails seems to be a recommendation that’s really resonating for folks.

And I can read you some questions that are coming up when we finish this. We’ll give folks a few more moments and then I can read some questions. Kate, over to you.

Kate Wright:

Thanks, Julie. Appreciate the responses in the poll. I welcome my colleagues to end the poll at whatever point makes sense. And then I think what we’ll do is just see if there are any questions in the chat, and then we’re about to transition you into an opportunity to talk with your state peers around AI generally and the content that Kevin presented for us. So, Julie, if you’re good, do you want to see if there are questions in the chat?

Julie Duffield:

Yeah. Mary Ann asked, what do you mean by teacher surveillance? Can you share a little bit about that?

Kevin Johnstun:

Yeah. So, when you’re gathering data, right, you have to collect that data from whatever participants you want to be ultimately serving.
And so, the hope is that we don’t have teacher surveillance, but that we’re able to gather data in a way that is really helping the teacher get what they want. And there are different things, like edge processing, that can allow the device to capture the data but not send it back to the main model.

So, you’re hoping you’re able to do these kinds of things, but that’s not always available. So sometimes you do have to collect massive amounts of data about groups. And so, the question is, what is the kind of thing that you want to develop, and how much can you lean towards customization?
And then in what cases do you have to have more large-scale data collection around a group? And that’s just part of what I mentioned about those trade-offs.
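To make the edge-processing idea concrete, here is a minimal sketch in Python. Everything in it is hypothetical (the function names, the event format, the tutoring-app scenario); the point is only the pattern Kevin describes: raw interaction data stays on the device, and only a coarse aggregate is ever transmitted.

```python
def summarize_locally(raw_events):
    """Runs on the device: raw interaction data never leaves it."""
    total = len(raw_events)
    correct = sum(1 for e in raw_events if e["correct"])
    return {"attempts": total, "accuracy": correct / total if total else 0.0}

def report_to_server(summary):
    """Only the coarse aggregate is transmitted, never the raw events."""
    print("sending:", summary)

# Hypothetical on-device log from a student's practice session.
device_log = [{"item": "q1", "correct": True}, {"item": "q2", "correct": False}]
report_to_server(summarize_locally(device_log))
```

Real edge-processing and federated-learning systems are far more involved, but the privacy property is the same: the server sees the summary, not the underlying behavior.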

Julie Duffield:

Well, I think that’s all we have in the chat at the moment. Some folks brought up the importance of that human in the loop point that you brought up.

And I think Glenn just mentioned he’ll be picking up on that in his presentation. Kevin will be around, so if you have any questions in the chat, he can answer them there as well.

Kate Wright:

And Kevin, I was just going to say, does it surprise you that guardrails and guidelines came up as being one of those recommendations that seems most interesting to this participant audience?

Kevin Johnstun:

No. As I had mentioned to the WestEd team, we’re in the process right now of looking at our own Department of Ed-specific guidelines and guardrails, right, while we’re trying to talk to schools and state agencies about theirs. And so, I think everyone’s trying to figure out what this new technology looks like within the existing environment. And that is fundamentally guidelines-and-guardrails work.

Kate Wright:

Thanks, Kevin. Really appreciate your support and participation in the webinar series up to this point. We covered our questions. We’re going to transition to our breakout. As I mentioned before, these cross-CC webinars are really intended to share content, give you access to content that you might not have had otherwise, and create space for you to have conversations and discussion.
So, we’re going to spend about 15 minutes in randomly assigned breakout rooms. We really encourage you to stay with us, to use these spaces to learn and reflect with your colleagues, and to be prepared to pose any questions that you want additional time to cover in future webinars.

They will be very lightly facilitated, so there will be someone there to help guide conversation, and we are using Padlet to sort of take big key takeaways, so your facilitators will have access to that as well. There will be time for you to do some reflection in the process of the breakout rooms. And then the real actionable question we’re asking you to consider with your colleagues today is with the emerging capabilities of artificial intelligence, how are you planning to use it to improve outcomes? That could be outcomes within your SEA, on your teams, in the field, with your LEAs. It’s pretty open ended, but really focused on potential. How do you plan on using it for improvement? And then if time permits, we’ll talk about challenges, but we want to start with outcomes.

And so, at this point, unless there are questions, I will turn it to Brianna to send us to our breakout rooms and we’ll see you back here in about 15 minutes.

Describer:

Breakout Room Session

Kate Wright:

Thanks again, Kevin. And thank you all for being so productive. We were sort of monitoring the Padlet as you were discussing and joining and adding.

Describer:

Topics and notes discussed in the four break-out groups appear onscreen, which Kate summarizes.

Kate Wright:

And you can see now on your screen, all of the things that were added to the Padlet in just those 15 minutes. So, it’s pretty robust. There should be some natural things that sort of stand out from your breakout discussions.

One is that ChatGPT looks to be one of the most popular AI apps being used. Predictive AI for writing. So, these pieces around helping to support the development or creation of text or narrative seem to really stand out. I can’t see it all; maybe, Julie, scroll down just a little to see if I missed anything.

Julie Duffield:

Yes, ChatGPT as helping with IT. A comment there.

Kate Wright:

And some good concerns, right? This concern around bias, data privacy. The idea that this is still emerging at a very quick pace. And the challenge of protecting data while balancing it with students’ need to access and utilize these tools, because these tools will be part of the workforce they transition into.
We will be moving to our second presentation in just a second, and we will be addressing ChatGPT and some of these other pieces that came up in your conversations. Before we do that, do any of our facilitators have anything that really stood out within your breakout conversations that maybe isn’t reflected clearly on the Padlet that you think the whole group would benefit from hearing?

Steve Canavero:

Maybe I’ll just add that, generally, our group was in exploration mode. They’re technology professionals with responsibility for technology in their respective organizations, and they’re very much personally exploring and then helping the agency think about low-risk opportunities to explore for agency use.

And then I thought the conversation about how humans are absolutely needed here, and about the art form of asking the appropriate question, was really interesting. An example came up where an individual who is 69 asked ChatGPT to predict the probability that he had died at age 65, and it took three or four iterations to get the question right. Or showing it an image without a giraffe and asking whether or not the giraffe had spots, and the AI says, “yes, the giraffe has spots.” So, it was a very interesting conversation about the question that you ask and how important that question is.

Kate Wright:

Thanks, Steve.

Mel Wylen:

Kate, I’d like to add to that. So, we had a really interesting conversation about how these tools are very powerful and how they can be leveraged to support students and improve their outcomes, but at the same time, in this space and time that we’re in right now, it’s really important for us to start thinking about how we are socializing this with educators.

How are we identifying the misconceptions that people have about the technologies and walking people through them, so that they aren’t fearful of the technology and so that they have the frame of mind to leverage it for improvement in their classrooms and with their students?

Kate Wright:

Thanks, Mel.

Kelly Wynveen:

And Kate, one thing I’ll share from my group, too, that we just started getting into is the capability for AI to support with equitable access for families. We had some folks share around how they’re using AI for translation, but not just translation of English to Spanish or English to Amharic, but instead translation of some technical language into language that parents and families can understand.

So, I think that’s just a great opportunity for AI that we just started to dig into when we were talking about some of the capabilities.

Kate Wright:

Thanks, Kelly. I think that’s really important. And something that stood out to me is that, while concerns were part of that Padlet, there were a lot of pieces of opportunity spotlighted from the very beginning of the conversation. So, thanks for that.

As I mentioned before, you will have the opportunity to continue this conversation with more information in the second part of today’s session, and I just want to remind you of our presenters today because I’m going to turn it over to two of them here in just a second.

Alix Gallagher, again, is the director of PACE at Stanford, and Glenn Kleinman is with the Stanford Accelerator for Learning and is also part of WestEd’s team, really digging in and thinking about AI within our organization, as well as within the work that we do with our state partners and our district leaders.

And so, I’m going to turn it over to them to talk about key policy and program considerations that really further this effective use and the conversations that you’ve already started here today. Glenn.

Glenn Kleinman:

Thank you. We can go to the next slide. Well, we can go to the next slide after that one. You got it. I’ll just mention I’m Glenn Kleinman.

Kevin did a great job giving a broad overview and talking about the very valuable Department of Ed report. He mentioned his roots in Utah, so I guess I should mention mine in Brooklyn, New York. I went to Madison High School and grew up in Brooklyn, then spent many years at EDC, Education Development Center, in the Boston area, and worked with the New England states and with New York quite a bit.

Then I went to North Carolina and led the Friday Institute for Educational Innovation there. So basically, I’ve been involved in education technology since the Apple II was the cool new machine. So, I have a long history. I’ve seen lots of waves. I’ve seen lots of excitement. I’ve seen lots of disappointments. And so, I’ll be speaking from that context.

I did note that one of the concerns in the Padlet was the rapid evolution of AI, and a number of you clearly have already been exploring ChatGPT. That, I think, is one of our biggest challenges as educators: things are moving so quickly. And of course, change in education is much more complex, with many stakeholders and many processes involved.

We actually had a meeting at Stanford a while ago, and someone from OpenAI, the developers of ChatGPT, was there. And she said OpenAI wanted to know what educators need from them. And my answer was: next time you change our world, give us a little notice so we can do some planning. Because this is moving at such a rapid pace, as you all know.

I’m going to say a little bit about generative AI. Kevin is very correct in saying there are many forms of AI. But ChatGPT was the first generative AI tool to get such widespread use. Since then, Microsoft, Google, and everybody else have been jumping on it. And there are more and more tools being developed and released.

So, I think it’s important that we all understand what is this generative AI thing. Because it really is different from the prior technologies, including prior AI technologies. One way of capturing that, I’m sure we all have learned that computers do only what they’re programmed to do. And if you want a computer to do something, you’ve got to analyze it in great detail, break it into lots of steps, and then write code in the language that the computer can understand to tell it exactly what to do.

And if it doesn’t do it correctly, it’s because there’s a bug in your code. Generative AI breaks that paradigm and works totally differently. These are systems that learn on their own. They’re fed an enormous amount of data. For the original ChatGPT, released on November 30th of last year, the estimate was that it had processed three times the total amount of text contained in the Library of Congress.

They identify patterns in the information they’re fed. They create neural networks that can be enormous, billions of connections, loosely modeled after the human brain, and then they can use that to generate new things. So, we’re not programming them to do something specific. We’re creating a system that has an enormous knowledge base and can use it in generative ways.

I can tell you, because I know some of the people who created these systems, they were shocked at what these systems could do. This was not planned by any means, and they talk about the scale producing emergent behaviors, things no one expected. Fundamentally, these are pattern and prediction machines, but they do all kinds of things no one expected.
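As a concrete picture of “pattern and prediction machine,” here is a deliberately tiny, hypothetical sketch: a bigram model that records which word follows which in its training text, then generates by repeatedly predicting a plausible next word. Real systems use neural networks with billions of parameters rather than a lookup table, but the learn-patterns-then-predict loop is the same basic idea.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Record which word follows which -- the 'patterns' in the training data."""
    follows = defaultdict(list)
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        follows[current].append(nxt)
    return follows

def generate(follows, start, length=10):
    """Repeatedly predict a plausible next word -- the 'prediction' step."""
    word, output = start, [start]
    for _ in range(length):
        candidates = follows.get(word)
        if not candidates:
            break
        word = random.choice(candidates)  # sample from observed continuations
        output.append(word)
    return " ".join(output)

corpus = "the teacher asked a question and the student answered the question well"
print(generate(train_bigrams(corpus), "the"))
```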

So, I think that’s an important piece of it. They can mimic human-like responses, as you’ve seen in your explorations of ChatGPT. But I’ll come back to this: they’re very different from human intelligence, and I think that’s a fundamental thing to keep in mind. People talk about generative AI as foundational; some say it’s analogous to the creation of electricity or the cell phone, something where nobody knew what would happen and so many things came to depend on it. We’re just beginning to discover what generative AI can do and how it can be used. And of course, once it was tied to chat agents that communicate in English, not in a computer language, it became readily available to just about everybody. So, this is a different thing. Don’t assume it’s like prior technologies; that’s a very important point. Oh, I can’t move the slide; you need to move the slide.

So, as you know from exploring ChatGPT, it can do many things. But generative AI and machine learning more broadly can do many things, often surprisingly well. A big challenge in the AI world was to get the technology to play games like chess and Go, very complex games, and beat the world’s best players. And that has been accomplished.

And for many, many years, they were trying to build expert chess systems, figuring out how an expert plays and building in a sequence of rules. That did not succeed. AI learns to play chess by playing against itself millions and millions of times and figuring out what a winning strategy is, and therefore it’s been able to beat the highest-level players in the world.

Someone’s already mentioned translating between languages. These systems can now see, interpret, and navigate the physical world; that’s what’s behind all of the self-driving cars. Analyzing medical images, analyzing scientific data, writing all kinds of things, creating images, writing programs: all of that can be used to assist teachers in many ways.

AI as a teaching assistant, I think, is worth thinking about: it can tutor students, provide creative tools, and do much more that serves education. Next slide.

But it’s also important to recognize that alongside this incredible power and these possibilities come lots of limitations and risks, some of which were already reflected in the Padlet.

One is that these systems are trained at a given time on a given body of data. They now add filters as another layer, because they found these things were kicking out toxic information, biased information. So, there’s an attempt to put another layer on the systems to filter that out. That is very complex.
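A rough sketch of what such a filter layer does, with a hypothetical generate_draft function standing in for the underlying model and a toy blocklist standing in for what are, in practice, trained classifiers: the system inspects a draft response before it ever reaches the user.

```python
BLOCKED_TERMS = {"example-slur", "example-threat"}  # placeholder; real filters use trained classifiers

def generate_draft(prompt):
    """Hypothetical stand-in for the underlying generative model."""
    return "a draft response to: " + prompt

def respond(prompt):
    draft = generate_draft(prompt)
    # The filter layer: check the draft before showing it to the user.
    if any(term in draft.lower() for term in BLOCKED_TERMS):
        return "I can't help with that."
    return draft

print(respond("help me plan a science lesson"))
```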

And of course, even just defining what’s appropriate and what’s not appropriate is a big challenge. But they’re very limited by what information they have and when they received it. And I’m sure you’ve seen, if you work with ChatGPT, that they can make stuff up. Lack of truthiness, I think, is a good way of framing it.

They talk about it as hallucinating, but with anything you do with these, you need to be responsible for checking and verifying, because they can do some crazy things. The complexity of these systems means they lack transparency and explainability. We don’t know what is going on under the hood. You can’t ask, “Well, it produced this. Why did it do that? How did it do that?” It might give you an explanation, but it’s just making stuff up when it gives you an explanation. And this is a big issue the computer science folks are working on: how do we make these systems more transparent and explainable so that we can monitor what they’re doing?

And there are also, of course, issues of privacy and security, and I see some of you have raised that. So, big issues there. Because they’ve been trained, in most cases, on an enormous amount of data off the internet, a lot of that information can contain biased, prejudicial, toxic material. Things that we would not want our students, or even our teachers, or even ourselves to encounter.

And that is a big, big issue, and it’s going to be very hard to resolve. And someone mentioned in the chat these systems can respond inappropriately to the social and emotional states of the users. And it’s very frightening if a student who’s depressed puts in some things about feeling depressed and looking for guidance.

They may get exactly the wrong kinds of responses. Clearly, it raises new questions about academic integrity; I believe we’ll come back to that. And overall, they’re culturally unaware, and they can be exclusionary. The information that’s been fed into them is not balanced across different cultures, different races, different religions, different places of origin, different languages.
And so, they can further inequities and exclusionary practices. These are very, very big issues for education that, frankly, I don’t see getting sufficient attention in the AI world. I know I’m going very quickly. I’m at New York speed because I wanted to get a lot of things into a short period of time. Next slide, please.

And this is a big one I wanted to come back to; it’s really the main message I wanted to deliver. Because it can look human-like, it can interact, you can chat with it, and it can produce poems and all kinds of things that seem human-like, it’s important that we always remember that artificial intelligence and human intelligence are very, very different, and that we not attribute human-like characteristics to the AI systems.

As I said, these are powerful pattern-finding and prediction-making technologies that learn by finding patterns in the information fed into them. Human intelligence, by contrast, builds over a lifespan. We clearly have innate abilities for language learning and for social interactions. We learn from others.

Our learning is often tied to goals and solving problems. We have experience in schooling and other learning environments. We have emotional needs. We have broad experiences within the context of family, community, and culture, and much more. So, we have a much richer understanding of many, many things than the AI has now or will ever really have, I believe, because it’s simply mimicking.

It’s not really going through those experiences to learn. That said, it can provide agents, as they’re called: things that interact, that really can be new entities playing many roles to enhance teaching and learning. You can have an agent that helps a teacher with lesson planning. You can have an agent that coaches students. You can have an agent that lives in a virtual environment and interacts with students in different ways.

You can have an agent that serves as a participant in collaborative work. The list goes on and on. But the critical message is that the teachers and students need to be in control and drive the interactions with these AI agents. That’s very consistent with what Kevin was saying earlier. But the real question is how the two can work together to meet human needs.

How do we employ this incredible power and mitigate the risks, while humans remain in charge and it serves the needs of people, of teachers, of students in education? I think that’s our challenge. And that’s your challenge with your state guidelines and programs: how do we really have AI serve education and serve the people involved in education?

It is not a replacement. It is a powerful way of augmenting what people do. Kevin used the e-bike metaphor. I love my e-bike; it means I can bike up mountains that I can no longer bike up otherwise. But I have a different metaphor. We go to the next slide. And I think this is a way of thinking about it.

In Star Trek, we have Captain Kirk and Mr. Spock, and Captain Picard and Data. And if you’ve watched the show, and I imagine many of us have at some time or another since it’s been on forever: Spock is partly human, partly alien, and Data is an android. They bring intellectual powers, right?

They analyze data. They’re purely logical beings. And it’s when they team up with the human captains, and notice it’s the humans who are the captains and the others who augment them, that they can accomplish amazing things together. The humans are driving, in control, the key decision-makers, and they bring all those human values and insights to solving the problems they face.

So, I like the Star Trek metaphor more than the e-bike one, even though e-bikes are a great thing. And I think that was my last slide. If there are questions, I’m happy to talk further about them, or have Alix go ahead and then we’ll take questions, whatever the process is.

Kate Wright:

Yes, I think that’s perfect. We’ll keep watching the chat, but we’ll pass it to Alix and do all questions at the end.

Alix Gallagher:

Okay.

Kate Wright:

Thanks.

Alix Gallagher:

Lovely to be with all of you today. I will also share that I’m from Brooklyn. I’m a graduate of PS 208 in Brooklyn and then Hunter College High School, and I now live in California, having taught in Arizona.

So, we accidentally have quite a bit of relevant experience and formative moments in the regions that are here today. My work is generally focused on policies and systems, so I wanted to talk about the things that seem most salient from your perspective and your roles in SEAs as you go into your world supporting educators around artificial intelligence in your state.

And I think one of the things that’s critical to understand, that is different from prior waves of technology, is that not only is AI increasingly ubiquitous, and you all mentioned a wide range of types of AI and ways that you’re using AI in daily life, but it’s also accessed by individuals. This matters because when computers first came into education, schools and districts decided how many computers to buy, what software would be on them, and how students would access them. There was actually a high degree of administrative control over all of the decisions about computing in classrooms, because the people who had formal decision-making power controlled the access. And for better and for worse, artificial intelligence does not work that way.

There are low barriers to access, and organizational gatekeeping is functionally impossible because any person with a phone has access to AI, at a bare minimum. So, it’s everywhere and schools do not have a way of shutting it down, even if they wanted to. It’s also rapidly changing. Other people have mentioned that.

I’m going to come back to the implications of that. That’s a critical feature of it. It’s also really important to understand that while there are surface similarities to previous technology revolutions, the way it works kind of under the hood, and Glenn mentioned some of these things, is really different.

It is patterns. It does not think; it cannot feel. And the fact that it is not driven by code that you can then go examine is really important for understanding how it works. To get better, these machines need to continue to be fed data that trains them. And very often, with things like ChatGPT, when you sign up and open an account, one of the things it asks your permission to do is to access and use the data you’re providing, your interactions, for future training, which means that functionally any data that goes in is permanently in. You can’t get it back, and it can be spread all over, in that it affects the neural networks and the quote-unquote understanding of the world, because that’s just patterns.

But the patterns that the AI sees are affected by all of the data that go in. And so, some of those distinctions really matter. The last point, which I think everyone’s aware of, is that jobs and broader civic life will be increasingly influenced by AI, and while we know this, it has some super important implications for schools and schooling. Next slide.

So, I want to turn now to five implications for school systems that I think are most important. One is that, because it is ubiquitous, individuals, be they teachers, students, families, or administrators, ultimately all have ways of accessing artificial intelligence, potentially independent of school directives. It can’t be banned.

Also, because it is developing rapidly, even if a school were to ban a particular product on a given day, they would not be able to ultimately restrict students’ access to AI across their day, across their week. You can close AI off at points in time, depending on the way you set up instruction, but there is no way to entirely remove student access to artificial intelligence.

So, there’s literally no point in banning it, because those bans cannot be enforced. The second thing that I would elevate, probably most important to think about from an SEA perspective, is the importance of focusing on learning. This is new. It is complex. There are lots of ways that AI works differently than, as Glenn said, anything that has come before.

And most problematically, because some of the surface features are similar, it’s really prone to misconceptions. Earlier this week, I was reading an article in a very prominent, highly respected magazine that talked about AI as thinking and AI as reasoning, and AI systems do not do those things; they just don’t have the capacity to tap the wide range of attributes of humanity that those words imply.

They can solely recognize patterns and predict things, or generate new content based on patterns. Additionally, because things are changing so rapidly, and because schools and districts and state education agencies already have their dockets full, it’s really important to think about how to build the capacity to learn about, keep up with, and make good decisions about AI.

And what strikes me, going back to that first point about ubiquity and the fact that people can access AI independently, is that what every organization has right now are people who are already learning about AI. And so, one of the potential levers for state education agencies, all the way down into schools, is to really think about how to build collaborative spaces within your SEA, and then how to help districts and schools build collaborative spaces for educators to come together: to share their experiences, to share which apps are working, and to share what they’re learning about key issues around privacy and bias.

And also, ultimately, as these organizations move to a place where you might want to adopt particular technologies, to be able to collectively test them and learn how they work well and what the problems are on a small scale, before going out and buying some new program.

And I would also want to just add a side note. Every expert I’ve talked to really believes this is not the time to go invest in any particular app. It is the time to be thoughtful in exploration and to really learn about the potential in these areas.

The third thing I want to raise up is the importance of a system role in building coherence. Absent a system role where there are some guidelines around AI use, one of the things that is relatively likely is that, at the lowest level of the system, a given student, let’s say a secondary student, someone who’s 14 or older, is being taught how to use AI in one particular class, and the teacher they go to in the next period thinks that particular way of using AI is cheating.
And that’s not helpful for kids. It creates a lot of incoherence. It also doesn’t do the things, in terms of learning how AI works and how to use AI properly, that kids will need for their work in the world. So, while systems are learning, they also really need to start thinking about the student experience and what they will do to help keep kids safe and give them consistency, so that we don’t have pretty awful issues around academic integrity.

And then to help students learn at the same time. Glenn has mentioned, and I just want to reiterate, this part about humans having the final say. AI does not have ethics. It cannot, because it just has patterns.
And people have mentioned the hallucinations, or, as Glenn said, and I love this term, the lack of truthiness that sometimes comes out. There are biases, Glenn mentioned this as well, based on what they have been trained on. These things exist. And there are also some very different issues around privacy, which I want to raise especially for SEAs as you think about data use.

I’ll just say, as a researcher, there was one very noteworthy occasion where a state not on this call sent me student data from which they had not appropriately removed the PII. And I was able to destroy that file, write the person back, and say, “Hey, you just sent me kids’ social security numbers. I need you to send me that file again without that information.”

And basically, it could be sent again. That can’t happen with AI: if data gets sucked into the training materials, you cannot get it back. So, there is something to understand about where the data that go in end up; essentially, once they are taken into the AI, they cannot be pulled out. It’s not like they’re kept in a separate place, unless you have some type of closed system, and there will be apps working on that. So just be really cautious on privacy.
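One practical precaution, offered as a minimal sketch rather than a complete solution: scrub obvious identifiers from any text before it goes anywhere near an external AI tool, precisely because whatever goes in cannot be recalled. The patterns below (a U.S. Social Security number and a hypothetical local student-ID format) are illustrative; real de-identification requires much more than a couple of regular expressions.

```python
import re

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")    # e.g., 123-45-6789
STUDENT_ID_PATTERN = re.compile(r"\bID:\s*\d{6,}\b")  # hypothetical local ID format

def redact_pii(text):
    """Replace obvious identifiers before text is sent to any external AI service."""
    text = SSN_PATTERN.sub("[REDACTED-SSN]", text)
    text = STUDENT_ID_PATTERN.sub("[REDACTED-ID]", text)
    return text

note = "Student ID: 482930144, SSN 123-45-6789, needs a reading intervention plan."
print(redact_pii(note))  # the identifiers are gone; the instructional request remains
```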

And the last is just remembering to preserve academic integrity. This will lead to, and has already led to, major shifts in teaching and learning, and teachers really need time and support to adapt their instruction to make sure they are providing really high-quality teaching and learning in this new environment. And I think that’s it.

Just a couple of resources on the last slide and a note that TeachAI is releasing a toolkit that is partially aimed at SEAs. And that should be coming out in a couple weeks.

Kate Wright:

Thank you, Glenn and Alix. I don’t see any pressing questions in the chat, although we’ll come back to any. Actually, there may be one. Julie, could you read the one question I saw in the chat?

Julie Duffield:

I sent it to Glenn.

Glenn Kleinman:

I can respond quickly. It’s an important question about sophisticated scammers: how might they use AI to harm students and teachers? And it’s not just students and teachers, it’s everyone. One of the frightening things about AI is that someone could take the video of the few minutes I spoke today, feed it into AI systems, and produce a Glenn-bot that looks and sounds like me.

They can tell it to say anything they want. These are the deepfakes you’ve been hearing about. So, there are all kinds of new fraudulent possibilities: my bot could be saying toxic things, things I would never say and never want to be associated with. I think AI can also be used to very quickly generate information customized to different audiences, so you can have information for different age levels, for people of different races, for people living in rural versus urban areas, whatever it might be, and you can flood the world with misinformation that’s really customized.

So those are just two examples. I certainly am not sitting on any solutions to share with you, other than that our teachers and students need to understand these possibilities and, like we’ve taught them with the web, learn to evaluate information and cross-check it. We need to have them do that for everything. Even if it looks like their own teacher is saying some crazy things on a video they received, they need to understand that it is most likely not their teacher.

Kate Wright:

Thanks, Glenn. What I’ll mention now is that these resources will be shared with you, including the slides, because I’m going to go past this slide for the sake of time. And because our time is a little bit short, our remaining time is about 10 minutes, we’re going to spend just eight minutes reflecting in our next breakout.

So, we’ll spend eight in discussion. That means let’s skip number one, which is your personal reflection, because you’ve likely been doing that throughout this time, and really focus on: what are your next steps, and what do you need? And certainly, focus on whichever part of that seems most relevant to you based on what you’ve heard today.

And then we’ll come back to talk to you about what our next steps are and how you can access the materials from today’s session. So, Brianna, you can open up our rooms and we’ll spend eight minutes together sort of closing our thoughts on today’s session.

Describer:

Breakout Room Session

Kate Wright:

Welcome back. I want to thank you again for spending part of your day with us on a Friday.

We greatly appreciated this conversation; the content is so interesting to us, and the opportunity to hear what you’re thinking about at the state level is really appreciated. Ajit, we saw your comment just before breakouts, and Glenn is addressing it in the chat as well. If you have other questions or comments, please include them in the chat; we’ll continue to look at them after we close here in just a second.

Julie’s putting the link to the resources, which will grow, in the chat. We’ll also share both Padlets from both discussions today. So, you have them as well as the slides. Just a reminder that we have the second session in the series on October the 20th.

We will send you the registration link in case you’re not registered. And I want to encourage you to please share that link with colleagues in the department who you think could benefit from this discussion. They certainly don’t need to be in your office of technology. They could live in another part of your agency if you want to broaden this conversation across your SEA, which we think would be really fabulous.

We’ve, I believe, put a link to a survey in the chat. We appreciate your feedback on this session because we want to make sure the following two sessions really meet your needs, but we’ll share that survey in a follow-up email as well, with the resources and additional materials. Thanks again for your time this morning or this afternoon.

Thanks for being part of this conversation and have a wonderful weekend.