Julie Duffield:
So, you are in the right place. This is our third session, and I’m excited now to hand the rest of the session over, with an introduction, to Steve Canavero. Steve, over to you.
Steve Canavero:
Thank you, Julie. Good morning and good afternoon. Looks like, yep, it just turned afternoon for those joining from the East Coast. And as Julie said, welcome to the third and final session in our three-part series. In session three, we’re going to provide an overview of fast-evolving artificial intelligence, generative AI in particular, as well as the policy landscape. We’re gonna include a survey of the latest guidance emerging at the national and state levels, and what the next steps might look like for local, state, and national education leaders who are seeking to institute the conditions for safe, effective, and responsible implementation of AI tools and AI education for the students they serve.
Today, we’re gonna start with just some housekeeping, and we’re gonna move rather quickly through that, considering this is the third session, so I think you’re fairly accustomed to it. I’ll also point out a highlight from session two and a reminder of the curated resources that we have available for you and your teams. The meat of today will be spent in a two-act discussion, and we’re really excited to have some incredible speakers today. The first part will be a perspective on generative AI, its emerging power and, importantly, its cost, which I think will generate some interesting discussion among the group. That will be followed by a breakout. Our second act will be specific to the evolving policy landscape, as well as some resources for you, and that will also be followed by a breakout where you can share among your colleagues to listen and learn. And then we will close with some final next steps.
As with our prior sessions, this is a cross-region, federally funded Comprehensive Center effort, and the three of us supporting it are housed at WestEd: Region 15, directed by Kate Wright on the West Coast, bookended by Region 2, directed by Sarah Barzee out east. And then myself: I direct Region 13, which includes the Bureau of Indian Education, New Mexico, and Oklahoma. We do know, based upon the interest, that this invite has gone out beyond just those three regions.
So, on the next slide, we’d love for you to either use the QR code on your computer screen or follow the Mentimeter link in the chat and identify where you are joining us from today, geographically. As we said, this is the third and final part of our series. We laid a strong foundation in session one, understanding some of the risks and potential for AI in your practices at the state education agency. Session two dove into and built upon that foundation: what does it look like for an SEA to develop an approach, the guardrails if you will, for implementing AI at the state education agency? And today we’re gonna look at policies and practices in the implementation of AI across your state landscape.
The next slide is a reminder of the Padlet, and there’s a link to it in the chat. The Padlet really reflects our listening and learning as we’ve gone through this series. Each column represents one of our sessions, and underneath each column is a host of resources that we’re curating on your behalf, for you to utilize as you work through the implementation of artificial intelligence within your context.
The next slide is a cool image to transition us to the heart and soul of our conversation today. And just before I transfer control to our guest speakers, I do want to introduce them.
Alex Kotran is the CEO of the AI Education Project, where he oversees strategy, partnerships, fundraising, and external relations. He built the AI ethics and corporate social responsibility function at his prior company and has led a number of strategic partnerships with organizations that include the United Nations, the OECD, and NYU Law School, and he has a tremendous breadth of experience. A couple of fun facts about Alex: his favorite subject in school was history, and when asked at age 14 what he wanted to be when he grew up, after joining the speech and debate team, he decided he wanted to be a lawyer, or any profession that involved argumentation.
Our other speaker today is Erin Mote, the co-founder and executive director of InnovateEDU. One of the many initiatives she’s working on is coordinating the EdSafe AI Alliance, a coalition of organizations representing stakeholders across the education sector in pursuit of a safer, more secure, more equitable, and more trusted AI education ecosystem. Erin is also the co-founder of the Brooklyn Laboratory Charter School with her husband, Dr. Eric Tucker. She’s a recognized leader in technology, mobile, and broadband, and has spent much of her career focused on expanding access to technology in the U.S. and abroad. And despite my best efforts and the internet, I don’t know what Erin wanted to be when she grew up at age 14, but I’ll continue to try unless she wants to share with us during her presentation. So, with that, Alex, I will hand the controls over to you and welcome you to this session. Thank you.
Alex Kotran:
We have the poll in front of us.
All right, so this is actually great. We have two policy or technical experts, so they can keep us honest. We have one power user, and then everybody else. And I think this is really normal; everybody’s sort of falling into this beginning-the-learning-journey category. What I would say, and we’ll talk about this a bit more, is that even the most advanced users of generative AI have only really had their hands on these tools for the last couple months, maybe a year at most. So more or less, everybody is just at the beginning stages of this, and I would encourage you not to feel overwhelmed. In fact, the fact that you’re even joining this call means you’re ahead of most folks when it comes to building the contextual understanding that I think is gonna be really important for every school leader and district leader.
We have one more poll. So now that we know folks’ level of proficiency in using the tools, we’re curious where your organizations are with respect to actually developing policies. We’re at 20 seconds; let’s wait 40 more. I really encourage everybody to respond to the poll; this is an important one for us, and I’m actually really curious about this one. So yeah, I think this is unsurprising: most organizations are still at the beginning stages. It’s interesting to see that there are some organizations that haven’t actually created this stated priority. Erin might have some thoughts on this, but I actually think it’s pretty unlikely that anybody who has published policies at this point is truly done with that process. There’s a really wide gamut of what we might consider policy, and even the published policies that we’ve seen, especially from districts, are usually still very much at the surface level: the beginnings of what is gonna be a much more involved and elongated process. So again, this is not something to be concerned about. Our goal certainly is to make sure that, at a minimum, organizations are paying attention to this and coming up with a plan, if not actually beginning the process of drafting. Oh, I need to actually share my results here. So we have nearly 90% who have not started or are just beginning to learn and figure things out. Again, the fact that you’re part of this series means you’re already well ahead of the game. Julie and the WestEd team have provided a ton of context, and I’ve gotten a chance to see what has been covered before. So you’ve seen a lot of stuff, and I’m not going to take too much time to belabor what generative AI is and how it works.
I wanna dive into a couple of vignettes that illustrate the fact that this is a really big deal. I think anybody is forgiven for seeing hype here; we’ve been through a lot of hype cycles around crypto and the metaverse and NFTs and AR and VR. So a lot of people are kind of sitting back and saying, well, is this just another example of technologists getting ahead of themselves? I would posit to you that the answer is no. While there are some things that are overhyped, the arc of artificial intelligence and this technology is profound, and we’ll talk about it in the same way that we talk about Sputnik or the moon landing or the birth of the computer or the internet. It might be even more important than all of those moments. The analogy I would like to convey to you is: the train has left the station, and you’re on the train. It’s not productive at this point to have a debate about whether we should be on the train. Some other metaphors here: Pandora’s box has been opened. So, when we are thinking about policymaking, our focus needs to be on looking to where the train is going, trying to understand that as best as we can, and, to the extent that we can steer it, avoiding some of the forks in the road that could lead us to bad outcomes. Let’s do our best to optimize for good outcomes. But you’re on the train, and there’s no way we are going to be able to prevent language models, or generative AI, or artificial intelligence more broadly, from becoming an ingrained part of our economy and our society. It’s already a relatively mature technology; we think about machine learning, and we’ll talk about that, but generative AI is so powerful, so obviously useful to companies, that it is going to be adopted by industry whether or not the education system decides it’s something we need to respond to. And because of that, we’re really focused on how we make sure and optimize for a situation where we’re harnessing all of the benefits and, to the best that we can, avoiding the bad outcomes. I would actually argue that the education system is the vanguard here: the way that society and most regular people are gonna interact with artificial intelligence and deal with some of these big issues will be in the education space.
So, you might be a skeptic, and if you’re a skeptic, I would posit one thing for you. It’s possible that what’s happened is you’ve used one of the free tools; you’ve used the free version of ChatGPT, and there’s a really big difference between the free version of ChatGPT and ChatGPT Plus. There are two different models, GPT-3.5 and GPT-4, and as you probably know, GPT-4 is significantly more powerful. The way I would describe this in terms of my own workflow is: when I started using GPT-3.5, I tried to use it to write grants. I run a nonprofit; we write a lot of grants. It did an okay job, it was certainly cool, and I saw a glimmer of the potential, but it wasn’t good enough to actually replace the work I was doing writing grants. It helped me a little bit, maybe with outlining. When I got access to GPT-4, after I learned how prompt engineering really worked, I could copy and paste the outputs from GPT-4 into my grant applications with almost no edits. In fact, there are a few examples where it did a perfect job writing responses. It didn’t fully automate the work; I spent a lot of time prompting it. But that is the difference: 3.5 is almost like a parlor trick, while GPT-4, even if the technology didn’t progress any further, is already powerful enough to start replacing significant baskets of tasks in knowledge work. So, if you haven’t used the pro version, I would say it’s worth the expenditure, whether it’s an investment you’re making yourself or, hopefully, something your organization can cover the cost of. It really is important to see the full potential of these technologies, because we shouldn’t be trying to make policy based on understandings or assumptions about a weaker version of the technology. Okay. So, we’re gonna try a little shock and awe here.
Again, I’m gonna share some vignettes, not because this is a clear and present issue that schools are gonna deal with, but rather because I wanna paint a picture of the arc and the directionality of this technology. What I really want to impress is that this is not going away, the policy implications are only going to get more profound, and so we need to get ahead of it. So, what you’re looking at here is basically the application of generative AI language models to fMRI data. An fMRI machine measures blood flow in the brain, and we use it to see what regions of the brain are activated in response to certain stimuli. For this model, what they did is they had a bunch of subjects, showed them different images, measured their brain activity, and then fed that brain activity, correlated to each image, into a language model. And what they were able to do is design a model that, when presented with a new subject’s brain activity, and just the brain activity data, is able to very accurately, I would argue, recreate the image that the person was looking at. You look at this bus: this is the image the person was looking at, and what you’re seeing on the right is the reconstruction the model created. Even the orientation of the bus is right, the color is right, they’re both on a city street, and they’re both overcast days. That technology has been shown to work in other domains as well. We can predict the internal monologue, literally what you’re thinking in your head; we can predict what music you’re listening to. That might seem kind of crazy; for me, this was an oh-crap moment. What does this really mean for education? Well, this is something that Erin Mote has spent a lot of time thinking about long before generative AI, and that is surveillance. I think we know that surveillance is something schools are having to deal with out of necessity, and in some cases out of pressure, and there is a possibility, if not probability, that sometime in the next decade there will be companies and tools available that will be able to assess very deep levels of understanding, student understanding, student engagement, student aptitude, by just collecting biometric data and maybe even just facial scans. And in a world where the college essay is dead and we’re looking for other ways to determine who should get into college, we’re gonna have to have very, very clear and rigorous policies to ensure that we’re protecting students. Because the companies that are building these capabilities aren’t necessarily pausing to ask “if” or “should”; they’re thinking about how. Erin, I just wanna pause here ’cause I know this is a really big stream of work that you’ve been focused on, and I don’t know if you have anything to add. I know this is a bit broad strokes.
Erin Mote:
Yeah, I think the other dimension is the data that’s used for surveillance. So, one thing to be thinking about is what data is feeding models that might be using algorithms to create surveillance technology beyond early tracking systems or early warning systems. It’s not just wearables; it’s also what data we’re putting into these tools that might create false red flags. And there’s some really great work happening with our colleagues at the Center for Democracy and Technology around how we think about the intersection of civil rights, and of disability rights, with surveillance AI in particular. So, things to be attentive to.
Alex Kotran:
And make no mistake, when you hear about personalized learning, when you hear about the opportunities to support neurodivergent kids, those opportunities are profound. But we are going to have to decide that it’s okay for us to hand over a level of data that I think many people might otherwise be uncomfortable with. With that trade-off between data privacy and data protection on one hand and personalization of learning on the other, I don’t think we’re gonna be able to have our cake and eat it too. And again, this is where policy is really gonna have to come in, because if we’re gonna be handing over data, there have to be extremely clear safeguards and firewalls and limitations on how that data is used. This is still very early, and because there’s a white space, K-12 organizations in particular are gonna have to be looking around corners to the best of their ability.
Erin Mote:
Yeah. And keeping humans in the loop. I think that’s the other thing that is really fundamental to this: what’s the human check on any data or modeling that you create using this technology? How do you keep a human in the loop?
Alex Kotran:
And I should also say there are going to be solutions designed for schools that can address some of the specific limitations that schools may want to put in place. So, when we think about procurement guidelines, this is again where policy can actually help to guide the technology itself. Because if we decide, here are the limitations that we need to have in place in order for a tool to be appropriate for use with kids, we can’t expect tech companies to build that out of the goodness of their hearts. They’re going to build based on what we are asking for. And that’s what’s so exciting about this: even if you’re not a technologist, as a school leader you still have the chance to help shape the future of artificial intelligence. I wanna talk about cheating.
Julie Duffield:
Alex, we have about two minutes left of the section.
Alex Kotran:
We’re almost done. Yeah. So, I just wanna talk about cheating. The headline here is that cheating is not a small issue. It’s not something we should just roll our eyes at and say, oh, teachers need to get with the times. It is an intractable issue today, because the AI detectors that we have are unreliable at best. I would argue that you should never be using an AI detector to adjudicate student cheating. And in the absence of watermarks and all that stuff, don’t count on it; there’s very little promising research to suggest that we’re gonna be able to build something that AI isn’t gonna be able to get around. So this is gonna be one of the front lines of policy in the U.S.: how do we actually protect academic integrity and protect learning in a world where students are going to have really effective tools to cheat when they go home, even if we block ChatGPT in schools?
And then what I’ll close with is this analogy. Steve Jobs talked about computers as bicycles for the mind. Language models are like a supercar for the mind. And as school leaders and administrators, what keeps me up at night is the equity question. If some kids are riding their bikes and trying to race against kids who are driving a supercar, it doesn’t matter what they learned in school; they will never be able to out-compete the person in the supercar. So when we think about access and equity as principles that policy can help to advance, this is really the imperative, because whether we like it or not, in the job market and the workforce, these tools are going to be out there. There are private schools today that we know are rolling out prompt engineering courses and lessons. We don’t have the luxury of being able to sit and wait and see, because that implicitly means we’re going to leave some kids behind.
I’m not gonna talk too much about cost, but the headline here is that language models are very expensive today. They might get cheaper; we hope they’ll get cheaper, and we hope there will maybe be subsidies for schools. But today, Khanmigo, which is one of the few tools available for kids, is a hundred bucks per student per year. If you want every kid to have access to GPT-4, which is the more powerful ChatGPT model, that could run you upwards of a hundred million dollars a year if you’re a large urban district. So this is something that policy is going to implicitly touch on: budget, procurement, and decisions about whether this is a capital expenditure versus an operating expenditure. Early days, but something to pay attention to. Julie, shall we move into the breakout?
Julie Duffield:
Certainly. I don’t know if anyone wants to jump in. Steve, do you have any questions for Alex before we have the discussion about cost?
Steve Canavero:
Yeah, Alex, I know we moved rather quickly through this slide, so I’m glad it’s been brought back up. Can you go back over those numbers again? You went really quickly, and this is the first time we’ve discussed cost in our sessions, so I don’t wanna miss some of the numbers you are putting up on the screen. Oh, okay. Yeah. Now I see.
Alex Kotran:
I had a call with one of my friends who built an AI company, and we did some back-of-the-envelope cost estimates. What we wanted to estimate is what it costs to use GPT-4, which is the expensive model. The reason we’re looking at GPT-4 as the benchmark is that, to me, it’s unacceptable if, even when every student has access to GPT-3.5, 10% of the kids have access to GPT-4 and those kids are the wealthy kids; they’re going to have an inordinate advantage in class. So, if we wanna really anchor on equity, we wanna understand the cost of making sure that every student has access to the same level of tool. What we pulled here is the cost per 1,000 tokens, which is basically 500 words, plus or minus. So you’re paying 6 cents per 500 words for the input, and then for the output you’re paying 12 cents per 500 words. If you assume that students are using ChatGPT pretty regularly in at least three classes, so that’s like 10 prompts on average per day per class, we’re looking at a couple hundred bucks per year per student. For just middle school and high school, not including elementary, that looks like between $150 and $200 million per year for New York City; for the state of Florida, you’re looking at almost half a billion dollars, maybe more. I think these are very conservative, and I think the cost will come down from what we’re looking at today. Julie actually made a really great point: the first personal computers cost like $10,000, right? So I don’t think we need to be too scared, or necessarily making 10-year budgets with these numbers in mind, but cost is going to be a prohibitor between where we are today and whatever point at which the models do become more efficient and more cost-effective. And I think equity is going to be one of the biggest challenges on that road. Maybe it’s a two-year process, and that’d be great, but maybe it takes 10 years for us to get the cost down to an acceptable level, and we should be thinking about what that means over the next 5 to 10 years if that’s the case, and how we mitigate those downside risks with regard to access and equity.
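[Editor's note: for readers who want to play with these numbers, here is a minimal sketch of that back-of-the-envelope arithmetic. The per-500-word prices are the ones Alex quotes above; the prompt counts, prompt lengths, school days, and enrollment figure are hypothetical assumptions chosen only to land in the ballpark he describes, and should be adjusted for your own context.]

```python
# Back-of-the-envelope GPT-4 cost sketch. Prices below are the ones quoted
# in the session; everything else is an illustrative assumption.

INPUT_COST_PER_500_WORDS = 0.06   # dollars, quoted above
OUTPUT_COST_PER_500_WORDS = 0.12  # dollars, quoted above

def annual_cost_per_student(prompts_per_day: float,
                            input_words: float,
                            output_words: float,
                            school_days: int = 180) -> float:
    """Estimated yearly GPT-4 API cost for one student."""
    per_prompt = (input_words / 500) * INPUT_COST_PER_500_WORDS \
               + (output_words / 500) * OUTPUT_COST_PER_500_WORDS
    return per_prompt * prompts_per_day * school_days

# Assume 10 prompts per class per day across 3 classes, with short prompts
# (~50 words in) and moderate responses (~150 words out) -- all assumptions.
per_student = annual_cost_per_student(prompts_per_day=10 * 3,
                                      input_words=50, output_words=150)
print(f"~${per_student:,.0f} per student per year")            # ~$227

# Scale to a hypothetical district of 500,000 secondary students.
print(f"~${per_student * 500_000 / 1e6:,.0f}M per year district-wide")  # ~$113M
```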
Peter (Participant from Nevada Department of Education):
Hey Alex, this is Peter from Nevada. Great presentation and information, thank you. I don’t want to derail the conversation, since this is student focused, but we’re actively, at least in our office, contemplating the use of AI in a very different scenario: for example, in accountability. So, using the data we have available, that all of our stakeholders have, but processing it in a very meaningful way. For example, using AI to identify where allocations may have the best or most benefit. So, if at any point in time you could provide some costs around that, we would greatly appreciate it. Did that make sense?
Alex Kotran:
It does make sense, and it’s very important. What you’re describing, an optimization problem, is basically machine learning, a deterministic model. You don’t need language models to do that. And the good news is that machine learning is significantly cheaper, like an order of magnitude cheaper. So the answer for a lot of schools is going to be that the benefits of AI aren’t necessarily about getting this in the hands of students. That’s important through the lens of workforce readiness; I think every high school student is going to need the chance to build that proficiency. But a lot of the applications at the teacher-empowerment level or the district-empowerment level are machine learning, and surveillance tools are also a machine-learning use case. It’s gonna be a lot more cost-effective to give teachers access to these tools, because you may have 400,000 students but only about 10,000 teachers. So I think for a lot of districts it makes sense to focus initially on how to utilize AI as a teaching tool, to professionalize the teaching profession, even if we can’t necessarily put it in the hands of kids right away.
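[Editor's note: purely as an illustration of the kind of deterministic optimization Alex is pointing to, an allocation question like Peter's can often be posed as a linear program and solved with off-the-shelf tools at negligible compute cost, no language model required. All numbers below are invented for the example.]

```python
# Allocate a fixed budget across programs to maximize an estimated benefit
# score, subject to per-program caps. Classical optimization, not an LLM.
from scipy.optimize import linprog

benefit_per_dollar = [1.4, 1.1, 0.9]   # hypothetical effect estimates
budget = 1_000_000                      # total dollars to allocate
caps = [(0, 600_000), (0, 500_000), (0, 400_000)]  # per-program limits

# linprog minimizes, so negate the benefit coefficients to maximize.
result = linprog(c=[-b for b in benefit_per_dollar],
                 A_ub=[[1, 1, 1]], b_ub=[budget],
                 bounds=caps, method="highs")
print(result.x)  # dollars per program: [600000., 400000., 0.]
```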
Peter (Participant from Nevada Department of Education):
One last thought, and these aren’t necessarily just my thoughts, but the output of internal conversations: we really are swimming in data. We have more data than we know what to do with. A lot of it is compliance data, and perhaps it really doesn’t provide insight into some of the programs and initiatives we’re charged with implementing. That being said, we have a lot of data, including a lot of longitudinal data, and we were hoping that AI could play a role in maximizing the use of this data, which sometimes just gets reported annually and never looked at again, quite frankly. So, is there something AI could help us suss out or understand in our data that we have humanly not been able to up to this point? So, thank you. I appreciate it.
Alex Kotran:
So great. Great question.
Steve Canavero:
Yeah. Oh, go ahead Alex.
Alex Kotran:
There is a lot of low-hanging fruit there. Let’s move on to the next phase, ’cause I know that we’re a little over, and I wanna make sure we have some chance to talk through this in the breakouts. So Steve, should we park this?
Steve Canavero:
Yeah, go ahead, and if you wanna advance the slide. This is the time for our first breakout. I appreciate the questions and the dialogue, and now we’ll have a more intimate setting for you to discuss with your colleagues what Alex put before us: a main emphasis of your policy is to advance access and equity, if those are the principles you are looking through in your policy structure and its development.
So, if one of the principles that undergirds your policy structure is access and equity, to what extent have you considered cost, whether at the state level, the district or school level, the teacher level, or all the way through to the student level?
You’ve got the breakout Padlet in your chat, and you’re going to shortly be put into your breakout room. Just keep an eye on the timer that will remind you how much time you have. When you come back, we’ll do a quick share-out of any ahas or material moments from your breakout. So, with that, enjoy your breakout.
It seems like a callout here across a few of the groups around the challenge of increasing inequity if you’ve got some folks walking, some folks riding a bike, and some folks driving the supercar. Yep, significant issue. Do others want to weigh in or chime in with something?
Mel Wylen:
I can chime in really quick for our group, Steve. The majority of our group is still in the infancy stage of really thinking about costs around AI, but one of the presenters was in our breakout room, which was nice, and brought up something that I think all of us should be thinking about: what lessons did we learn in providing every student with broadband, and how can we take those strategies and apply them to this new technology?
Steve Canavero:
That’s a good point, good point. Fantastic. Well, thank you all for engaging in discussion; it looks like it was a rich conversation, from looking at the Padlet. So, with that, I’m gonna hand the mic to Erin to lead us into the policy conversation. Erin?
Erin Mote:
Hi. Thanks so much for having me today. I am incredibly grateful to the team at WestEd and to Alex; he’s one of my favorite co-conspirators in this work and a real champion for equity and accountability. We can switch the slide. So, this is actually from Alex: these are the things that he is watching when we talk about AI policy, all the way from international institutions, to individual governments, to folks who are producing resources to support teachers, schools, and state departments of education. Next slide. Alex, do you wanna jump in here and talk a little bit about your policy development approach for AI?
Alex Kotran:
Yeah, so what we tried to capture here, and there are actually a couple additional policies we wanna add to this list, is that effectively there are guidelines coming out. When your organization is looking for where to turn, the first thing you have to understand is that there is no one-stop shop; even if there’s an organization that’s writing draft policies, there’s very little likelihood that you’re gonna be able to copy and paste. So what AI EDU is doing is basically a synthesis of all these different guideline documents, to identify where in this five-step process those documents fall. And what we found is that most of them are really in steps one, two, and three. They’re helping districts identify high-level values and principles, and that’s important; it is a really critical step. But it is definitely not the end state you’re reaching towards. Ultimately, what we need is putting policies into practice. We need strategies for actually implementing those policies. We need feedback loops to ensure that we’re constantly assessing the effectiveness of those policies. And there’s some really big, complicated stuff, like procurement and efficacy: how do we actually measure whether a tool is doing what it purports to do, and, especially when it’s impacting students, what standard of efficacy do we need to have? Those aren’t easy policies. And Erin, one of the reasons I was so enthusiastic about bringing you into this conversation is that I think your organization has been really focused on that hardest portion of policymaking, and on breaking it out into component parts that school leaders and district leaders can actually do something with. So hopefully that’s a good segue for you to talk about your work with the executive order and the work that ED SAFE AI is doing.
Erin Mote:
Great. So, maybe you all heard that this week President Biden signed an executive order that is pretty big: it’s 111 pages. I hope to give you some of the Cliff Notes of what that executive order says, and to hone in particularly on what it means to have a safe, secure, and trustworthy artificial intelligence ecosystem in education. So, here’s the work the federal government is gonna be doing over the next 365 days. I’m gonna talk broadly, and then I’m gonna talk very specifically about education.
So, there will be new standards developed at the federal level with a baseline focus on safety and security. That work is gonna be led by our colleagues at NIST, within the Department of Commerce. NIST has been working on AI guidelines and guardrails for many years, and they will stand up a new center that was announced by Vice President Harris the next day at the UK Summit. But they’re also gonna be thinking about watermarking, sourcing, what’s authentic imagery versus generated imagery, and, finally, how we think about testing and sharing those systems, to Alex’s point about efficacious tools and whether tools do what they say.
There’s also a big focus on cybersecurity and privacy. That focus is really about, and we talked a little bit about this in our breakout, what technologies might exist, like privacy-protecting technologies and other middleware, that could be built out in an ecosystem approach to make AI safer. So, if you are in a school district, think about how you’re understanding who’s using AI systems, how you’re making sure you’re abiding by the acceptable use policies that exist on the vendor side, but also probably in your district or state, and how you’re complying with state-level privacy laws.
I know we have some colleagues here from New York, so I’m sure they’re thinking about Ed Law 2-D, a law specific to New York that’s focused on how you can’t use data generated within an educational technology tool to actually improve the tool; that’s sort of the basis of a large language model. So how we’re gonna get our arms around that is gonna be really important. I do wanna talk about an opportunity that exists right now, where the federal government is asking for your help.
Through December 5th, and I know this was put in the chat, the Office of Management and Budget, which is going to write the rules for how titles are allocated and how the use of AI in tools that use title funding is governed, is asking for comments on a pretty in-depth set of things they’re looking at. If you have time between now and December 5th, which I don’t assume you do, or if you wanna surface some of those concerns to ED SAFE AI, we’re gonna be working with OMB on this, specifically around the education use case: what are the things OMB needs to think about when it comes to procurement, or the use of federal dollars, related to the purchase of tools or services that use AI? These are really relevant questions that we’ve had to take on before, in things like E-Rate and other efforts to close the digital access and digital use divide, and I think we can harness some valuable lessons from that. But it would be so helpful to hear directly from the folks who are doing the work on the ground. If you wanna know anything else about the EO, or the Cliff Notes of those 111 pages, I’m always happy to share more. But let’s move on to the ED SAFE AI Alliance and, hopefully, something that can be a ready-to-use tool for you right now.
So globally, whether that’s with the Council of Europe, which is about to regulate education as a separate use case, or the Australians, or the U.S. government, or the UK, we are putting forward the SAFE framework. And maybe you heard about the SAFE framework coming from Senator Schumer as well on Capitol Hill; he did add an I, for innovation, at the end, so I call his the SAFE-I framework. But really, it helps you ask questions about how we can use AI in a way that ensures safety, accountability, fairness, and efficacy. I wanna talk about that order, because it really matters; there’s a deliberateness to the order of the SAFE framework. Base table stakes are security, privacy, and doing no harm. So, ask yourselves: under what conditions can we use AI so that it’s safe and protects student privacy? Then accountability: can you understand who’s using the tool, and how AI is being used in the tools you’re already using? Then fairness: is it thinking about equity and mitigating bias? If anyone wants a great read, Unmasking AI by my dear friend Joy Buolamwini, who founded the Algorithmic Justice League, is out now; it’s all about the bias that exists in our current AI tools, particularly around imagery but also around data. And then, finally, efficacy: how are we thinking about AI tools actually improving learning outcomes?
So how are we gonna implement this, and how are we working to implement it? We’ve launched a network of district-level policy labs across the United States; we actually did our first meeting with New York City yesterday, with a cross-departmental approach with leadership. These policy labs, over the next year, are gonna take an open-science approach to developing district policy, and we’ll share everything we do at aipolicylabs.org. We’ll have two fellowships that’ll open in December, and we’d love anyone here to apply for them. One is specific to women in AI; the other is the catalyst fellowship, which is about industry leaders and education leaders coming together to learn, to work on shared projects, and to have the opportunity to attend industry events. We’re gonna continue to work with lawmakers and policymakers to have the SAFE framework be the way people organize and ask questions about the use of AI in the education use case, so you’re gonna see us do a lot of policy advocacy on this; I was just on the Hill last week testifying. But we’re also working on how we translate this framework into usable assets, along with partners like AI EDU and others.
And then finally, we have convened a pretty robust partner network. So, if we go to the next slide, thanks, this is the partner network within the ED SAFE AI Alliance. I wanna point out a couple of folks you should pay specific attention to: from the technology community, like COSN; to industry, like SIIA; to AACTE, which works on teacher preparation and education; to our disability community, with NCLD and CAST. All of these folks are working and rowing together to think about the types of resources that can be developed, coordinated, and then put out in the field for you to use while you’re developing policy. And we know that states are developing right now: only two states, California and Oregon, have state AI policies right now, and twelve we know are working on it. So the ask from me is, let us know what you need. There’s a whole corpus of organizations, including NDIA, which is working on digital access, who wanna run to support the policy development that you all are doing, and you’ll be able to follow along with the SAFE framework and the district network that we’re leading. You can take those as starter dough, I think, Alex, right? There’s never gonna be a one-size-fits-all policy, but it’s really helpful to have a little bit of starter dough, so you can think about your unique context but also have a place to start. Alex?
Steve Canavero:
Do you mind if I ask a quick question before we move to Alex, Erin? I know it’s just emerging, so the conversations are just beginning, but with your district policy labs, are there particular themes emerging from the districts in terms of what is needed from the state? Like, hey, if we could get this from the state, that’d be helpful for us to be able to design or do X, Y, Z to implement AI, following the SAFE framework, for example?
Erin Mote:
Yeah, so right now we’re really in the S part, right? We’re doing the work Alex was talking about around gap analysis and needs assessment. This is a national network, so there are school districts in California, there’s a tribal community, there’s New York City Public Schools, the largest school district in the country: really diverse needs, and we built that deliberately. Something they’re trying to navigate is state-level privacy laws and how those trickle down at the local level. That’s a big one; that’s something a bunch of folks are really wrestling with. The other thing is guidance around professional development, or maybe shared resources around professional development for educators.
Yesterday in New York City, and I’m not gonna betray anybody’s confidence when I say this, we had a really substantial discussion about how we equip our educators right now, no matter what age they are: whether they’re digital natives just entering the classroom who are super users, or folks who don’t even know what ChatGPT is. How do we create an on-ramp for folks to come to understand AI from a place of knowledge, not fear? And I think that’s something states could do a lot to support, either through ESAs or the Comp Centers: what’s the infrastructure we can use to give folks some common resources to get up that learning curve quickly?
And I’ll just share anecdotally, that’s what we’ve done on Capitol Hill. We are holding these lunches with members of Congress and their staff to help educate them about AI tools across the spectrum, and what that’s meant is that they can think about policy and regulation from a place of knowledge and curiosity, not from fear. I think that’s why you’re seeing some real bipartisan discussion about what can be done to support folks doing this work. So you’ll see things like the CREATE Act, which is especially about educating educators: giving directed resources to states to then pass to districts, so that educators can get trained and have professional development funds specifically focused on understanding this technology and using it responsibly. You’ll also see some guidance coming from our colleagues at the Office of Educational Technology, and some toolkits that are really about knowledge mobilization and engagement.
And so, if there’s one thing I would urge a state to think about right now as it develops policy, it’s AI literacy. Alex’s organization has some great tools for AI literacy, but I think that’s something we can do at the state level, so not everybody has to do it themselves.
And then finally, this OMB guidance has consequences for the funds that flow from the federal government to the states and then to the districts: specifically the titles, IDEA, even some of the competitive grants. So, I would really urge folks to be paying attention to that space. Procurement is the sexiest way you articulate your values, so thinking about the intersection of procurement and AI is gonna be really, really important: again, that efficacy, that accountability work. I hope that’s helpful.
Alex Kotran:
I’ll also add that a lot of districts are trying to triage: there are some immediate policies that need to be put in place with respect to academic integrity and with respect to safety, and you don’t wanna necessarily put all policymaking on hold while those longer threads get addressed.
Erin, do you have any advice for a district leader who feels like this is gonna be a 12-month process but doesn’t have 12 months to leave these questions unanswered? Where should they turn to get some immediate ideas about what an academic integrity policy would look like, or about some of the other burning priorities we need an answer to, or even some provisional guidance?
Erin Mote:
Yeah, I think COSN and the Council of the Great City Schools have come out with a really great resource. I will find the link and share it, but it’s basically a readiness assessment set of questions. It’s a lot, there are 93 questions, so get ready for that, but it’s a really good place to start. And if you’re gonna start anywhere, start with the S: the safety, the security, the do no harm, because that is where the vast majority of existing regulatory regimes exist, and it’s gonna have probably the most effect on some business outcomes for you. I have a couple of districts we’re working with where all of a sudden their cyber insurance is asking them questions about AI and how they’re thinking about their AI tool policy, and if they don’t have one, their premium goes up. So think about acceptable use and safety as the table stakes.
And then for starter dough, there are a couple of resources starting to come out. I know our colleagues at AASA, one of the ED SAFE AI partners, will have some work coming shortly with some model policies. So that’s a place you can start; again, see those as starter dough.
Don’t use ChatGPT to write your policy, even if it’s GPT-4. Involve your lawyers in this work. I know we all love the legal department, but you don’t know what you don’t know, and it’s really important to have the lawyers there to help you understand.
I’ll just anecdotally tell you that the EO went into the Department of Justice at 48 pages and came out at 111. So even the federal government used their lawyers, and it made the document much longer, but it also made the document much clearer. So I would really bring in your legal team and do this in partnership with them.
And then, you know, there’s a toolkit out right now from TeachAI that I think has some good starter dough. But again, I would urge you to use that as a lodestar document rather than a cut-and-paste. That’s how we’ve been working with our district policy labs: the first thing the districts do, before we even do a kickoff, is read a set of lodestar documents, which will soon be up on ED SAFE AI, before we have our first meeting. They ingest the knowledge first, and then we talk about what they need, what the gaps are, and what that looks like. But start with the S. Is that helpful?
Alex Kotran:
Yeah, it is for me. I’m gonna pause for more questions before we go on.
Steve Canavero:
I don’t see any in the chat. I think you’re good to go.
Julie Duffield:
I just wanna add that the TeachAI Toolkit is gonna have a webinar on November the 8th, and we’ve got that in our resource kit, just to put it out there if you wanna learn more. Are there any more questions? I can’t see anything popping up at the moment.
Alex Kotran:
So, I just wanna flash my and Erin’s emails here. Folks, if you have follow-ups, feel free to reach out to us, and I’m sure Julie and the WestEd team will also be happy to connect us if you have any questions and would like to continue the conversation.
I do wanna illustrate what Erin means when she says you can’t just cut and paste, and academic integrity is a great example of this. If you go into the AI Toolkit that TeachAI put out, and actually this is also something we’ve seen with a lot of the academic integrity policies around ChatGPT, there are schools doing things like asking students to sign a contract saying, I commit to not using ChatGPT to cheat on my assignments. That’s good if your goal is simply to have something in writing, to say we wrote something. But actually addressing academic integrity involves teacher training and preparedness, because teachers need to be equipped to design assessments that are more difficult to cheat on. Ultimately, the working assumption you should have is that you’re not gonna know if a student is using ChatGPT. The research shows that all it takes is a few different prompts to get the text to a place where it’s indistinguishable to these detectors. So just having a written policy on academic integrity is not gonna address that burning need. That’s why these starter stepping stones are just that. The hard part is how we actually implement a readiness strategy that involves not just the policymaking and the multidisciplinary group involved in that, but also the teacher training and the awareness that will ultimately determine whether the policies are implemented effectively and whether that guidance is actually utilized in an effective way. That’s my soapbox: we don’t wanna pretend this is easy at the expense of quality.
Erin Mote:
Yeah, and can I add one little thing to that, Alex? If you are a state leader or a district leader and you have any students under the age of 13 using these technologies, you are putting yourself in clear and present danger, because the acceptable use policies of all of these tools prohibit their use by kids under the age of 13, and that’s federal law. And once you break the acceptable use policy, let’s say you then have a cyber incident, you are not covered.
So, number two, if students are between the ages of 13 and 18, they can use these tools in an educational setting with parental consent. It’s really important to understand that we need to give our teachers good guidance right now about the table stakes around safety and security, including: don’t put personally identifiable information into a large language model. You can’t get it back. Even if it feels innocuous, like using the first and last name of a student to generate a list of fun nicknames, that is breaking the law. So we need to be able to tell our teachers and our educators and our administrators what those existing rules of the road are right now, and how we have to develop policy to make sure that we maintain that safety and security but also allow us to use these really cool technologies to meet our most marginalized students.
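[Editor's note: purely as an illustration of that last rule of the road, here is a minimal sketch of how a district-run wrapper might scrub known student names from a prompt before anything leaves the district. The roster name, placeholder, and helper are invented for the example, and a real deployment would need far more robust PII detection than a simple name match.]

```python
# Illustrative only: replace roster names with a placeholder before a
# prompt is sent to any hosted language model.
import re

def redact_names(prompt: str, roster: list[str]) -> str:
    """Swap each known student name for a neutral placeholder."""
    for name in roster:
        prompt = re.sub(re.escape(name), "[STUDENT]", prompt, flags=re.IGNORECASE)
    return prompt

roster = ["Jamie Rivera"]  # hypothetical roster entry
print(redact_names("Suggest fun nicknames for Jamie Rivera.", roster))
# -> "Suggest fun nicknames for [STUDENT]."
```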
Steve Canavero:
Very good points. Thank you. Julie, are we ready to advance to the next part?
Julie Duffield:
Yeah, we’re going to go to the second breakout for 10 minutes. Steve, do you wanna set us up, and then Brianna can put us in? Thank you, Erin, and thank you, Alex.
Steve Canavero:
A very short introduction to the upcoming breakout. We have two questions here. One is to capture some of the existing levers or ongoing conversations that can help with advancing policy development in your particular context. The other is an open question about what support you might need to move forward. The Padlet was just dropped into the chat. If I were to ask you to attend to one question ahead of the other: we’d really love to know the supports you need in order to move forward. So, if you could ensure that you spend some time there first, you can then spend the balance of the time on policy development in your particular context. That would be our preferred order, so we can capture how to best support you all. With that, you’ll be heading into your breakouts. Brianna?
Brianna Moorehead:
Thank you.
Steve Canavero:
Welcome back. Welcome back. Okay, any quick share-outs from the conversations: additional supports we could provide, existing places where there’s some traction in the state? Participants are of course welcome to jump in, or the facilitators. I did see someone raise that it would help if we could reconvene once a month to keep this conversation going. Libby, it looks like that was your group.
Julie Duffield:
You’re on mute, Libby.
Libby Rognier:
Okay. Not only to keep the conversation going, but also to share information and just give people a chance to vent a little bit. It’s really overwhelming right now, and people are feeling like there’s too much information coming at them all at once: I need help sifting through this and knowing what I should be paying attention to first.
Steve Canavero:
Yes.
Mary Ann Valikonis:
I think that’s true. I said in my group that I felt like I’m on a treadmill. It’s going faster and faster and I’m gonna go flying off the back. That’s how I feel personally, not, you know, speaking for anyone else.
Kelly Wynveen:
Yeah, I don’t know that you were alone, Mary.
Steve, in our group in particular, it was around bringing folks together to hear about what’s happening in other states. We know that California and Oregon have published some things, but we’d like more time to hear from one another about what’s actually happening on the ground in other SEAs.
Steve Canavero:
Yep. Okay. Anything else before we transition to close here? I also want to mention the Padlet, since there was a request for the slide deck: all of this, the recordings, the resources, the toolkits that have been made available throughout these sessions, is on the Padlet, and that Padlet will remain in place. So please go there if you want to sift through, or double-click on, anything that was mentioned during this session or the prior sessions.
And this is the third and final of our planned sessions, so on behalf of the three Comp Centers at WestEd who supported this particular series on AI, I just want to appreciate all of you for attending. We know that the demands on your time, as Mary mentioned, are many, and for choosing to spend this time with us and learn along with your fellow passengers on this train, of which I am one, we are deeply appreciative. We’ll sift through the notes and hope that there’ll be future opportunities for us to continue to support you and the great work that you’re doing in your state.
So, with that, we will close down. I wanna appreciate Erin, Alex, and all our speakers for today: a fantastic discussion, and we really appreciate your time and expertise, not just with us but as you support folks across the nation in the work that they’re doing. Thank you all, and have a wonderful rest of your Friday and a great weekend.
Julie Duffield:
Yeah. Thank you, Alex, and thank you, Erin. And thank you, Steve, for facilitating.