
AI Session Two: Developing a Cohesive and Comprehensive SEA Approach to AI (Transcript)

Sarah Barzee:

Good afternoon and good morning, depending on where you’re joining us from today. So today is our second session in a three-part series, Developing a Cohesive and Comprehensive SEA Approach to AI. My name is Sarah Barzee, and I serve as the Director of the Region 2 Comprehensive Center, which serves Connecticut, New York, and Rhode Island. More about that in a few moments. Next slide please.

Today's agenda for session two. I won't go over all of this, but the highlights: we will hear from a WestEd colleague on the Data Integration Support Center, otherwise known as DISC. We'll also talk more about AI strategy considerations. We'll hear from colleagues from the Connecticut and Utah state education agencies. And finally, we'll have an opportunity to hear from a WestEd colleague on the application of strategic AI principles for public agencies.

So, as I said at the top, today's session is sponsored by the Comprehensive Center Network, specifically the three comprehensive centers led by WestEd. Region 2 is up in the northeast corner, as I said, Connecticut, Rhode Island, and New York, and I serve as director of that comprehensive center. I'm joined by my colleague Steve Canavero, Director of the Region 13 Comprehensive Center, serving New Mexico, Oklahoma, and the Bureau of Indian Education, and my colleague Kate Wright, who serves as the Region 15 Director, serving the state education agencies in California, Nevada, Utah, and Arizona.

At this point, you know who we are. We’d like to hear from you. So, my colleague, Brianna is going to share a poll and we’d like to find out where in our regions or elsewhere you are joining us from today. So, Brianna, over to you for some instructions.

Brianna Moorehead:

Hello everyone, my name is Brianna and I’m your tech support for today. My colleague Libby just dropped a poll link for our Mentimeter. So, I’m going to share my screen really quickly. You’ll be provided a pin and you’ll go ahead and move your pin to whatever state you’re joining us from. So, I can see that we’ve got some people, some folks in Utah, Nevada. This area is pretty small, so I can’t tell if it’s Connecticut or Rhode Island. But welcome.

Sarah Barzee:

Small but mighty Brianna.

Brianna Moorehead:

Exactly, and it looks like we’ve got some folks from other regions as well. So, feel free to take another 20, 30 seconds and let us know where you’re joining from. Awesome, thank you all for sharing. Thanks.

Sarah Barzee:

Thank you all for participating. So again, welcome back to those of you who were able to join us for session one, where we talked about the potential risks as well as the potential of AI practices in SEAs. Today we're talking about developing a cohesive and comprehensive approach to AI in our SEAs. And then, just foreshadowing, we also have a session three, which I'll talk more about at the end of the session.

We may be welcoming some of our session one presenters back today. If so, welcome back to Kevin Johnstun, who joined us in session one from the U.S. Department of Education. We were also joined by Alix Gallagher and Glenn Kleinman, both from Stanford University. Just a quick recap: those of you who were here for session one would have seen and participated in conversations about these recommendations. For those of you joining us for the first time, I won't go into all of them, but a few were important enough to lift up. One, prioritizing strengthening trust. AI is a new frontier, so building trust is important. Also, the importance of informing and involving educators and using modern learning design principles in our AI work.

So, at this time, it’s my pleasure to turn to Baron Rodriguez and Anwar El-Amin, starting with Baron, both Baron and Anwar, our colleagues at WestEd. Baron serves as the Director of Information Technology and Privacy, and Anwar serves as Emerging Technology Manager and supports our IT work. So, with that Baron, I’ll turn it over to you to get us started.

Baron Rodriguez:

Thank you, Sarah. Appreciate that. So, I'd like to start things off with a test; that's usually the fun way to start. We're all in education, so I figure, why not do a test? If you take a look at the picture on the left, there's an automation missing, and it's an automation that AI was unable to help with. Seems simple to me. In fact, at a recent tech conference I was talking to a coworker, and I thought there was no one around me. Unfortunately, there was someone around me, and I said, you can usually gauge someone's level of intelligence by how they put coffee in their cup. And of course, I told them what the answer was, and there was someone right behind me doing exactly what I described. So, whoops. But what's important to note here is that this is missing an automation, and I want to level set the maturity of the current artificial intelligence products that are out there. There are several in testing, there are some that are advanced, but we need to always have human interaction and have use case-driven procedures around what we do.

So, in the chat, does anybody have any clue what I'm talking about, the efficiency that's missed in the picture on the left with the coffee? Why do you need a spoon? You don't need a spoon. If you pour your creamer first, you get the right amount of creamer right from the beginning; you know how much creamer you usually want. And then it stirs itself as you pour the coffee in. You don't need to dirty a spoon or throw away a stick. A lot of people talk about that; it's kind of a thing, and you can go online and see there's a bunch of us who feel this way about it. But the AI didn't even use that data or that information on the internet to find it. And I heard that I needed to speak up, so I'll speak a little bit louder, for sure. So, this is just an example to set expectations for where we are with AI right now when it comes to use cases in our particular sector. Next slide please.

And again, garbage in, garbage out, right? So, based on the information that's out there, if the AI is doing a pull from YouTube videos on whether the earth is flat or round, it's gonna determine that the earth is flat. Now, of course, that's a very simplistic example, but early on those were the results you were getting back. I still have basic questions that I send in on a regular basis to see if it's gotten better, and it still has a lot of holes in it. Next slide please. So, let's look at AI in practical use. We're going to take a moment, and I'm gonna give Julie a second to change screens here. We're gonna do a contest: as the picture comes up, we'd like you to determine whether the picture on the left or the picture on the right is the deep fake. Which one is a fake image? Just put L or R in your chat. Those of you who think it's the picture on the left, put left; those of you who think right, put right.

Julie Duffield:

And then Baron, you tell me which one to click on, and we'll see who gets it right from the chat.

  • So far, left is winning.
  • Left is winning. Should I try the left then?
  • Oh, let's go. Let's see.
  • My left. Let's go. Incorrect. No, we have to play again. We'll have to play again. Okay, one more.
  • Don't be a crowd follower.
  • Again. Second round: left or right?
  • Seems kind of a mixed bag. This is a tough one.
  • You wanna call it?
  • Looks like the one on the right.
  • Right. Okay.
  • Incorrect.

Baron Rodriguez:

Usually there are some telltale signs within these, but it really takes a human to look and see what those signs are and be able to articulate them. And we've all seen the reporting about TSA, right, and the scanning of individuals, and not being able to differentiate when minorities are going through the checkpoint. So, there's still a long way to go, even with technology that has been in use in the security world for many, many years; Anwar will talk about that a bit. It's much more mature there, but still not mature. All right, next slide please.

So, I'm gonna give you context. Why am I talking about this? What help is out there right now for you? The Data Integration Support Center is a WestEd center that's fully funded by the Bill and Melinda Gates Foundation, and we are here to focus on providing supports. AI is one of the areas where we'll be providing support for state public agencies. Next slide.

So, we get into the policy, the legal, the security, and the architecture for building secure systems that host multi-agency data. Next slide. And what differentiates us a little bit from the other education-centric centers is that we can provide support to workforce agencies, integrated data hosting agencies, and research centers: those doing multi-agency work that is not primarily education focused. Because once you start incorporating those other aspects of data linkages, it gets immensely more complicated on the privacy, security, and integration fronts. So, this is to address some of those gaps as we try to look at whole-person linkages and the data associated with getting that information. Next slide.

We're also able to provide help with the state policies and administrative laws and rules that might be out there around data. We've got a set of legal experts who can help your attorneys with specific questions they might be grappling with: AI, privacy, and being able to share across those sectors. So, we are here to provide those resources. Next.

So, we focus primarily on privacy and governance, external legal supports, legislative analysis, and then also system security and architecture. Next slide.

These are who we partner with: the Data Quality Campaign, ECS, CCSSO, as well as AISP and the Public Interest Privacy Center, which is led by Amelia Vance, an education lawyer and expert specifically in the privacy arena. So, we're really excited about our partnerships, and we'll be at CCSSO's EIMAC next week. Next slide.

So, we're gonna talk a little bit about the strategy portion. It's very easy to get caught up in the hype and how exciting AI can be, and even within WestEd there's a lot of that. "Let's go use the free AI" is often the request that gets sent to me, and usually my question is, okay, what specifically is the use case we're trying to solve here? I have another slide deck where I pull together some examples of blindly using ChatGPT and OpenAI in an environment, and it can give you some very interesting results if you haven't really thought through your strategy and how you're going to build the capacity to use the data. And even if you plan to use the data solely within your own environment, if the data isn't good and mature, you are still gonna have problems. It's not gonna solve data issues you may already have within your agency. So, you need to keep that in mind as you employ these tools. Next slide.

So, I thought this quote from a vice president at Morgan Stanley was telling; at the time, they had completely locked down any access to ChatGPT or OpenAI from their systems. In cases like that, when you have big financial institutions doing that, that's kind of a red flag: be careful if they're doing that, because there were a lot of concerns about even being able to comply with federal privacy and financial rules and regulations. So, this was a great quote from the vice president there. And someone asked what mature data means. That usually means you have really strong data governance processes in place, you have data quality checks, you have regular validation of your data, and you have quality assurance as part of your data maturity program. Next slide.

So, what are some of the use cases you should build around? Well, first, really think about what you're trying to benefit from; there are different things you can benefit from with AI. When you look at that, you should look at what the return on investment is. What is the level of effort to do that work? What does your data look like, per my earlier comments? What are the security, privacy, and ethical considerations you need to weigh as you pull this information through? And then here's a big one: training. As I talk to different public agencies, I've not heard a lot so far about training staff on appropriate uses and ethical considerations. How are you going to use the architecture of the AI that you are employing? And then you see another quote about data: if your data is not mature enough for business intelligence and analytics, it will certainly not be ready for AI. So, there's still work to do around building really strong data to be able to utilize these tools. And we're not just gonna talk about the problems today; I'll be clear, we're gonna talk about some promising solutions as well. Next. So, as I said, these are the areas you should focus your frameworks on. Next slide.
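As a concrete illustration of the data quality checks Baron mentions, here is a minimal sketch in Python using pandas. The file name and column names are hypothetical; real checks would be defined by an agency's own governance program.

```python
import pandas as pd

# Hypothetical student-records extract; the file and column names are
# illustrative only, not an actual agency schema.
df = pd.read_csv("student_records.csv")

checks = {
    # Completeness: key identifiers should never be missing.
    "missing_student_id": int(df["student_id"].isna().sum()),
    # Uniqueness: one row per student per term.
    "duplicate_rows": int(df.duplicated(subset=["student_id", "term"]).sum()),
    # Validity: attendance rate must fall between 0 and 1.
    "attendance_out_of_range": int((~df["attendance_rate"].between(0, 1)).sum()),
    # Consistency: grade level should be a known value (K=0 through 12).
    "unknown_grade_level": int((~df["grade_level"].isin(range(13))).sum()),
}

failed = {name: n for name, n in checks.items() if n > 0}
if failed:
    # Surface problems to data governance before any AI use of this data.
    print("Data quality gate failed:", failed)
else:
    print("All checks passed.")
```

A gate like this is the kind of "regular validation" a data maturity program runs routinely, before the data ever reaches analytics or an AI tool.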

But one of the first things that's important to do is really figure out where your agency is as far as its maturity, right? I would say most government agencies, and this is not because of talent or technical capabilities, it's just that government agencies have more oversight and more administrative overhead to deal with, are still at level one or level two. Level one is those that don't have much experience: they're aware of the uses but don't really understand how to use them, and they're not really familiar with the governance aspects of using AI or the processes needed to put it in place. Level two, incorporation, is where a few organizations I've seen are; they have some pilots in development or in pilot use, and they're building capacity through training, guidelines, and acquiring technology. If you were to ask me where WestEd is, it's probably at level two right now as well, because we have some active pilots we're working on, but we're not proliferating them yet. Then as you work on to optimization, you're starting to be able to optimize things like contracts and your financial systems, and you're able to apply regulatory structures around that AI. And transformative, of course, is when you're actually incorporating it into your everyday work. Next slide.

So, if you think about these levels, what do you do, right? That's a lot to digest. Say I'm at level one, so now what? This is kind of the "now what," and this is from a group we utilize for our technology research. They specifically talk about AI governance; data management; the people, really making sure you're giving people the capacity to do this work, both from a learning and training standpoint but also the right tools for the right work; and then really focusing on the process, like how do we select pilot use cases, and from there, what hardware and software is required to deliver on those pilot projects. Level two, if you go to the next slide, and this will be the last one before we go into an activity, really focuses on getting deeper into each of these levels.

So, you look at AI governance; at that point you're really defining your responsible AI principles, exploring the data management processes, including data collection and storage, and focusing on gathering and building a strong foundation of expertise and experience, which helps move you to higher levels of maturity. When you look at the processes, it's important that you've established the different processes for how you support AI in production. And then of course you're building a technology infrastructure to support those pilots. Next slide.

So, just to get a feel for where we're at, we have a few guests from some of the states represented here today who are gonna share a little bit about where their respective states are when it comes to artificial intelligence. We're going to start with Dr. Whitney Phillips. She is the Chief Privacy Officer for the state of Utah, and we're honored to have her here today. So, thank you, Whitney, for being here; really appreciate it. Can you talk a little bit about where Utah is from a policy perspective as it relates to artificial intelligence?

Whitney Phillips:

Well, Baron, thank you for asking. We actually just had a policy passed last month for a subset of our government entities. So, my title is actually State Privacy Officer. In Utah we're very unique in that we have two privacy officers: one of them is called the chief privacy officer, the other the state privacy officer. We like to really confuse folks in Utah. Essentially, it's the same position with the same duties, but we have different scopes. And so, we're talking a lot here about education and the state education agencies, and Utah is unique in that it has had someone in charge of privacy at the state education agency level for five years now. I was in that position, and now Katy Challis is there. So now the chief privacy officer and I work hand in hand. He's in charge of the 36 state agencies under the governor's office, and I'm in charge of all other local government entities. And guess how many local government entities we have? Any guesses? I know, it's kind of crazy; I didn't know how big government was, but we have about 1,500. That includes the state education agency, which is the Utah State Board of Education, along with its 150 local education agencies. It includes all institutions of higher education. It includes all cities, towns, counties, water districts, cemetery districts, things that I did not know existed.

So, when you talk about AI and the policies that are in place, education is part of a larger landscape. And some of the things that Baron mentioned are really on Utah's mind right now. Like I said, last month we had a policy passed; I'm going to put it in the chat right now. This policy refers only to generative AI, and its scope is only for agencies under the governor's office. So, I think what we're gonna see, when governments are set up a little bit differently, is: are we going to have consistent ideas of the permissiveness of when to use AI and when not to, in particular the ethical uses of AI and the copyright issues? I'm sure you're aware that there are lawsuits now being brought by a number of folks, including artists whose work is being used for this generative AI, saying, you know, this was copyrighted; how could this be used? So, government should be aware of that. And then there's also the ethical uses. If you don't mind, I wanna tell a story. We like stories; you don't wanna see more slides. Okay, I'll tell you a story.

So, this is how Whitney got a job as state privacy officer. It was in, I think, about 2019, maybe 2020. This was something I was only tangentially aware of at the time. There was a company in Utah called Banjo. I know some of you from Utah know what Banjo is; I see Rick and Katie know what Banjo is. Banjo said that they were an AI company that was able to bring in all of this information, usually surveillance footage, 911 calls, license plate reader information, into a box where AI would happen. Just think Tinkerbell in the box: there are wands, there's a black box, and AI was the answer for whatever was happening inside. And out would come a scenario where law enforcement would be able to locate a missing child or, you know, find a suspect on the street. So, this was kind of the sales pitch that was given, and there was a $20 million contract with Banjo being processed. And we found out that the owner of this company had been involved in the planning of the bombing of a synagogue. Maybe this is a timely topic, but he was arrested. And so, what happened was there were questions of whether the AI had some type of bias within the algorithms that would discriminate one way or the other. This is all public record that I'm telling you; this isn't anything secretive. An audit was done of the company, and no bias was found, but no AI was found either. The vendor said that there was AI, but it was really a bunch of really neat integrated dashboards, kind of more Power BI, I would say, or Tableau. And the reason I'm mentioning this is that this vendor was about to receive all of this data from Utah: surveillance data, license plate reader data. And there wasn't that due diligence about what the AI was. Was it biased? Was it not biased? But in addition to that, what was it really doing? So as we're looking not just at the AI we're gonna be using, what we're teaching, and what we're gonna utilize ourselves, but also at the vendors we're going to engage with now and in the future: is it just a term? How is it defined? Are you able to get down into that box where this magic happens? Because in my opinion, AI is just fancy regression. It's just statistical modeling and fancy regression of likelihood.
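To ground Whitney's "fancy regression of likelihood" description, here is a minimal sketch, on entirely synthetic data, of the kind of statistical likelihood model that often sits behind an "AI" label. The two features are invented purely for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Entirely synthetic data: two made-up features (say, standardized prior
# absences and prior grades) and a binary outcome to predict.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# The "prediction" is just a modeled likelihood between 0 and 1:
# statistical curve-fitting, not human-like reasoning in a magic box.
print(model.predict_proba(X[:5])[:, 1].round(3))
```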

So, what does that look like, and when do humans come in? I know that AI hallucinates, and it hallucinates very coherently. When it says something that's not accurate, it almost has bravado and confidence that it's accurate. And it's a lot of fun to play with generative AI, I gotta tell you; it's really fun and it's really useful, but it has a little bit of bravado. So when we're using it, and when those we're working with are using it, how are they going to fact check the information they get, and how are they gonna validate it? The main issues we have here within policies are: what tools are permissible? Not all tools are created equal. What are the use cases? How are you going to validate authenticity and non-bias? How are copyright issues going to be figured out now and in the long run? And also that transparency of how much human brainpower is going to be outsourced versus not, and when to bring in that human. So yeah, I don't know if that was very helpful, but I put in the chat the policy that just passed last month; it does refer to a few of these issues. I wasn't a part of writing this policy per se.

I know that within the SEA, which is called the Utah State Board of Education, they already have a committee put together looking at AI and at policies around AI. I'm not a part of that, 'cause I've moved on to a different role. But I think because Utah's a bit advanced with privacy, now that AI has come about, privacy is not an afterthought anymore. It's really being thought of a little bit more in advance, and so is the ethical use. I hope that's the case. So, any questions for Utah out there? I like the comment "alter ego." Yeah, I like that.

  • Go ahead.
  • Go ahead. Oh, so one of the things: I was actually talking to a colleague of mine who now works for the Future of Privacy Forum, and I used the term hallucinate, and you used the term alter ego, and he stopped me. I'm sure he wouldn't be afraid of me saying his name, but we're privacy people. He said a term that I can't remember, but it's about when you're making technology or a system and you're humanizing it. We're saying things like alter ego, or having bravado, or hallucinating, and when we humanize this technology, it actually does a disservice. I'll have to think of the term; it was a long word. He's better with words than I am. So, we should be hesitant, because it's not a human; it's a fancy regression.

Baron Rodriguez:

Thank you, Whitney. That's very helpful, and thanks for sharing your policies; really appreciate it. Hopefully you can hang out for the breakout rooms we're having in just a couple minutes, 'cause we've got Utah folks throughout here, and Utah is considered a leader when it comes to privacy in the United States. So, kudos to the folks from Utah for the hard work you do; keep serving as an example. Between California and Utah, there are a lot of great examples of really strong privacy laws and of really thinking about things across the board. We also have Ajit from the Connecticut Department of Education. Ajit, could you share for a couple minutes, before we go into breakouts, where Connecticut is at with its adoption and consideration of AI policy?

Ajit Gopalakrishnan:

Great. Good afternoon to those on the East Coast. Ajit Gopalakrishnan from the Connecticut Department of Education; I'm the chief performance officer here. Hello to Sarah Barzee, previously of the Connecticut State Department of Education as well. So yeah, about hallucinating and this chat thing: I had an example where somebody was asking for a report that was supposedly on our website. This person even sent me a link, and, you know, I click on the link and I get the 404 error; we had never done a report on that particular topic. And I asked the person where he got it. He said, "I searched," and I probed, and he had gotten it through ChatGPT. So, the thing created a URL that never existed as the source for a particular report. Anyway, that was my interaction with hallucinating, and I shared that internally as well, so everybody else is aware.
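A simple guard against exactly this failure is to verify that any URL a chatbot cites actually resolves before trusting or forwarding it. Here is a minimal sketch in Python; the cited URL below is a made-up example, not a real address.

```python
import requests

def url_resolves(url: str, timeout: float = 5.0) -> bool:
    """Return True only if the link resolves with a non-error status."""
    try:
        resp = requests.head(url, allow_redirects=True, timeout=timeout)
        return resp.status_code < 400
    except requests.RequestException:
        return False

# Hypothetical citation returned by a chatbot; verify before relying on it.
cited = "https://example.gov/reports/report-that-may-not-exist"
print(cited, "->", "resolves" if url_resolves(cited) else "possibly hallucinated")
```

A resolving link still does not prove the page says what the chatbot claims, so a human read remains the final check.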

So, I'll mention a few things that are happening in Connecticut around AI. One is that the legislature actually passed a law this past year requiring agencies, really the Department of Administrative Services, which is like our administrative agency, to conduct an inventory of AI use across state agencies. I put the language of the bill in the chat, so you can take a look at that. There's a task force assembled as part of that bill. Education is not part of that task force; I don't think education is naturally thought of by those not in education in terms of AI use. So, we're not part of that committee, but we are taking stock of our own use of AI, and there are a couple of areas where we use it. I like how Whitney, I think it was, talked about fancy regression.

We use machine learning algorithms as part of two of our processes. One is our early indication tool, which is like our early warning system, where we use random forest methods to ascertain a support level for a student: not an at-risk level, but the level of support a kid needs to achieve their milestones. Another area where we use this type of modeling is to identify students who may have potential for taking advanced courses. In both cases, and I've put both those reports in the chat as well, we have technical reports documenting our methods. In all these cases, the information is triangulated: the output is reviewed by humans, and the student information is shared with districts, who can then mediate what happens with that output. So, we are taking stock of our own work. I still sort of struggle with the definition of what AI is. Is such modeling considered AI? Some of the definitions of AI talk about processes that replicate what humans would do, and I don't know of any human that can look through that much data and identify the patterns in it. But in any case, that's how we currently use AI.
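For readers unfamiliar with the technique, here is a minimal sketch of a random forest producing a support-level prediction, in the spirit Ajit describes. The data is synthetic and the three support levels are invented for illustration; Connecticut's actual models and inputs are documented in the technical reports he mentions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in features (e.g., attendance, course performance,
# mobility), all invented for illustration.
rng = np.random.default_rng(42)
X = rng.random((1000, 3))
# Three illustrative support levels (0 = low, 1 = medium, 2 = high),
# derived here from a made-up weighted score.
y = np.digitize(0.5 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 2], [0.35, 0.6])

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# The output is a suggested support level plus class probabilities; as in
# Connecticut's process, humans and districts review it rather than acting
# on it automatically.
new_students = rng.random((3, 3))
print(model.predict(new_students))
print(model.predict_proba(new_students).round(2))
```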

Our state board is going through its next five-year plan development, and AI is featuring in that plan as well. I think the state board is taking a more holistic view, looking at all aspects of AI use: not just data and analysis, but also teaching and learning, personalizing education supports for students, help for teachers, et cetera. So, I think that's good. Our state board is very much aware of it, and it's early days; we don't have any policy or anything figured out, but we are plugged into Code.org and their efforts around this. In fact, their Chief Academic Officer, Pat Yongpradit, was a keynote speaker at our conference last week, and he talked about the Teach AI resource, the toolkit that I think just came out a couple of days ago; I think that was circulated by this group as well. I put a link to our conference flyer in the chat. So, at this point we're really just increasing awareness, and Pat's probably most important recommendation for all the educators in the room was: you should personally try it out. Don't take someone else's word for it; actually use it, and use it quite a bit, till you really get a sense of what these tools are doing, especially generative language AI. I thought that was an important takeaway for all of us, not just something to see in a keynote presentation. I have played around with some of these types of tools, and yeah, they're good, but to me they're not replacing a human anytime soon. But anyway, that's my piece, and hopefully that was helpful.

Baron Rodriguez:

No, very helpful. Thank you, really appreciate it. Okay, well, we are going to move on to our breakout rooms, and hopefully you can stay for a few minutes for this. We're gonna do a bit of reflection, discussion, and connection. We are utilizing Padlet; I believe the link was sent previously, but it will also be provided to you in the breakout rooms. What we're really trying to gather is: what are the AI use cases your agencies are considering in the next 12 months? Are you considering using it for operational improvements? Are you considering using it for removing bias in, say, your assessment items? There could be a variety of areas you're thinking about. What are the requests you're getting from your respective agencies or staff for use of AI? And then, if you could also indicate on that Padlet what level you're at: interest, meaning you're just interested and really haven't done anything; actively planning; implementing; or already implemented. So, I think we're gonna give you 10 minutes to do that. We will be sharing this Padlet.

Julie Duffield:

I think we have 15 minutes, Baron. Just 15 minutes.

Baron Rodriguez:

15, okay. Yeah. But we will be sharing the Padlet out after as well. So, we won’t be doing a share out afterwards, but we’ll make that available as part of the webinar. So go ahead and break into those groups. Thanks.

  • Okay, looks like everyone is back.

Anwar El-Amin:

Great. Okay, so everyone's back and ready to go for the next section. My name's Anwar El-Amin. I am the Emerging Technologies Manager at WestEd, working with Baron on the DISC program, specifically on strategic counsel around AI in public agencies. So, let's go over some basic ideas about the concerns we have. We have concerns about AI; we have concerns about data proliferation. One umbrella solution to those problems is a secure AI.

A secure AI is an architectural principle that's being implemented on high-level enterprise networks. I was part of a webinar a couple weeks ago where one of the heads of Sony said, yeah, we already have our secure AI built up. So, this concept is a blueprint for securing the AI in its own environment so that you do not encounter data proliferation issues. When you do a query to a general AI, the one that's free on the outside, whatever you give the AI creates data points on it.

So, at DISC we've come up with a concept that's built off the Microsoft network and its AI services. Part of that is this flow here. Let's take it from a simple level: you are a person about to interact with a state data system, and you need some sort of AI service, be it generative ideation from a document, looking at research data, or looking at PII data in an environment particularly designed for this. But you don't want to create data points on an AI that proliferates that data somewhere outside of a system that you control. So, what we've put together is an access request for that person to access the platform, and it counts on a solution called Azure Virtual Desktop. Effectively, from your computer you access a website, you gain access that's been provisioned by your IT group, and from there you go into a secure AI enclave. Yes, it exists within the cloud infrastructure, but within it you can access things like ChatGPT, provisioned by Microsoft services. And again, this is effectively a private AI. If I do queries to this ChatGPT, I'm no longer creating data points on the outside. So, someone looking from a public generative AI perspective can't then see my data points and say, oh, Anwar is looking at these documents and he's producing this material, which is what you can do with a public ChatGPT. This also extends to using proprietary data, where the data sources within your systems can be siphoned into the Microsoft platform, which is secured end to end, and then that secure AI can access that data.
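As a rough sketch of what "a private ChatGPT provisioned inside your own tenant" can look like in code, here is how an application might call an Azure OpenAI deployment using the openai Python package. Every name here (endpoint, key, deployment, API version) is a placeholder that an IT group would provision; this illustrates the general pattern, not DISC's actual setup.

```python
from openai import AzureOpenAI  # openai>=1.0

# All values are placeholders provisioned by your IT group inside your own
# Azure tenant, so prompts and responses stay within your environment
# rather than creating data points on a public service.
client = AzureOpenAI(
    azure_endpoint="https://your-agency-enclave.openai.azure.com",
    api_key="YOUR-KEY-FROM-IT",
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="your-private-gpt-deployment",  # a private deployment, not public ChatGPT
    messages=[{"role": "user", "content": "Summarize this internal draft memo..."}],
)
print(response.choices[0].message.content)
```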

So, when it comes to things like analysis, in the case of producing statistical analysis and research materials based on the data that you or your organization are custodians of, you can now leverage AI services in ways that do not subsequently create a security issue where the data points you've put into the system become accessible to everyone outside. It is effectively an enclave for your AI environment. Next slide please.

So, the benefits of secure AI. These address many of the concerns; the question we were just discussing was, where are the issues we most want to address when utilizing AI? We put those into categories, and these are the categories that pretty much apply to every organization, including educational organizations. Intellectual property: when it comes to access to your data, if you're using a public AI tool that's taking in the data you currently have, then that information is now public.

So, you may think, okay, what about our logos, letterheads, internal reports? If you put those into the AI, if you question the AI about them and submit that information on a public AI, that information is now public. So, the first real tangible benefit is that your organization's intellectual property is secured end to end. Business continuity is a good one. Business continuity refers to the cross between your AI's automation and the security aspects of your information systems. Microsoft has been doing this for a good amount of time, but now, with the advent of generative AI, the AI can take in data and reports and potentially treat all of that information as public. What I mean by that is: say you have a hundred computers in your organization, and you put information about them into a public AI; now that computer information sits on the outside and is accessible, particularly to bad actors. A secure AI allows that information to be leveraged without those data points being created on the outside.

Customer experience is also a good one. This is in terms of your internal users, people within your organization being able to curate their experience, and also your clients, most importantly for educational organizations: the interaction between your data system, your information system, and those you serve. Secure AI means that the passage between the end user and your clients is a secure transfer of information. Reduced liability goes along with a lot of these settings. In particular, the method by which the AI produces its results can potentially be biased or ethically askew and produce answers that otherwise would not be reportable or valid. So, reduced liability in the long sense means that the answers and information you're getting out of AI systems that leverage your own data do not become a liability in which you might violate some sort of custodianship of the data that you use.

Innovation and research: this is a heavy one. When it comes time for your researchers to do their research, the environment they utilize, with the generative and analysis capabilities of AI, is secured within an environment that does not produce these data points on outside AI platforms. It keeps your non-public information non-public; outside of a secure AI, that is not the case. Next slide please.

Ah, data privacy, yes; this goes along with the idea of liability. Data privacy is a heavy aspect when it comes to the contractual terms between your servicer, your client, and your organization; in other words, you may or may not have the ability to put a particular kind of data into a particular type of system. What a secure AI can actually do, which is a benefit, is automatically classify the data you have against the data agreement that is carried with that data. Meaning, at the first level, if someone in a secure AI environment, an AI enclave, puts that data into a particular query, the AI can actually tell that user that they're not supposed to be doing this type of query. So, data privacy is a multi-level category, and a secure AI lets you secure your data in its own use case.
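A minimal sketch of that kind of pre-query classification check follows. The field names, classifications, and policy map are all hypothetical, standing in for whatever a real data-sharing agreement specifies.

```python
# Hypothetical policy map derived from a data-sharing agreement: which
# classifications are cleared for use in AI queries at all.
FIELD_CLASSIFICATION = {
    "school_name": "public",
    "enrollment_count": "public",
    "student_name": "pii",
    "ssn": "restricted",
}
ALLOWED_IN_AI_QUERIES = {"public"}

def check_query_fields(fields: list[str]) -> None:
    """Refuse the query up front if any field is not cleared for AI use."""
    blocked = [
        f for f in fields
        # Unknown fields default to "restricted", the safest assumption.
        if FIELD_CLASSIFICATION.get(f, "restricted") not in ALLOWED_IN_AI_QUERIES
    ]
    if blocked:
        raise PermissionError(
            f"Query blocked: {blocked} not cleared for AI use under the data agreement."
        )

check_query_fields(["school_name", "enrollment_count"])  # passes silently

try:
    check_query_fields(["student_name"])  # blocked before the AI ever sees it
except PermissionError as err:
    print(err)
```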

Robustness is also an infrastructure byproduct of a secure AI; in other words, bad actors won't have access to your AI to cause intentionally incorrect conclusions among your user base. With a public AI, if you go to ChatGPT right now, yes, you run a query and it seems that the information has attribution from reliable sources. But if you do not use a secure AI, there is no 100% way to tell whether that answer has been adulterated, whether the data output of a public AI is a hundred percent accurate. In a secure AI, you can control the environment; more importantly, a vendor like Microsoft controls its own environment, so the beginning and the end of your query remain consistent with the results you expect it to produce. And that goes along with validity. Validity goes along with robustness and all the other categories, but in this case the produced results are also reinforced by your expected results from the data. In other words, if you ask your internal secure AI whether the color blue is a good idea for this house, and it says, great, bananas are the way we should go, you can see that is not a valid answer. On an external, public AI, those results may come about; that's also referred to as hallucination.

Compliance is handled very usefully in a secure AI; in the Microsoft platform, you're allowed to organize compliances into categories. As I referred to before, if someone using that secure AI puts data into a particular query, or produces a result that is otherwise out of bounds, it will actually tell you which compliances might be violated by that answer. That also goes along with equity. Equity itself is kind of a loose term for an AI, but what really matters is the attribution of where the data comes from. As long as the attribution comes from reliable sources, sources that you provide, because it's surrounding your data, equity is a checkable value: a human can engage with a secure AI and say, these results based on this data seem to be equitable, because I know the original sources. And then, yeah, next slide. Ah, questions. Yes.

  • I see some reflections in the chat; I don't see any specific questions. I'll turn to Kate to see if she sees any, but I'm sure you could jump into the chat and answer them there if people have any. Sure, we'll wait a few more moments and see if there are any questions.
  • I have a quick question, Anwar. Just putting my SEA hat on for a second, and obviously by this question I'm not very adept at this technology: how technically challenging is it to create this secure environment? Is this something a typical SEA could do with their team?

Anwar El-Amin:

So, what's great is what Microsoft has done here. I have a friend who works for a consulting firm, Capgemini, I don't know if anyone's heard of it, and I was having a conversation with him a couple months ago. He said something like 75% of Microsoft's output, of its revenue, has been going towards predictive analysis and AI generation. I said, oh yeah, sounds great; how easy is it to put together? It's very straightforward. The guiding principle is that there's a set of services you're allowed to put together in a Microsoft subscription, and based on those services, it facilitates you building out this type of platform. So, you gotta give it to Microsoft; Amazon is almost there as well. The concept of a secure AI is actually going to become another part of information technology. Yes, you have the AI that generally everyone uses, and their vendors that come up, and you look in the bylines to see where their data's being processed and whether or not it's secure. But there's an entire other sector of information technology offerings, from today and probably for the next 10 years, where organizations are going to steer themselves towards secure AI, because the public AI, of course, is filled with potholes and all the cons of liability, really. So, it's actually pretty straightforward. And what's really interesting is that I've already built our environment at WestEd. It is infrastructure as code; it's effectively code based. So, I can take the entire code, put it into a different Microsoft subscription, hit a button, and have it spin up all the virtual machines, all the services, even the ML services, and have it ready within a few hours. It's a good question. Yeah.

  • So, I think we can answer the other questions in there to stay on time. I think there's one more from Ajit; Anwar, you might wanna take a look at it. I think it applies, yes, but we should probably move forward. We want to also hear from the states on what you feel are your top concerns. So, I've got this document here; it will be put into your breakout rooms, but it would be really good to feel the pulse of the public agencies that are here and see which of these resonate with you. This is from the International Association of Privacy Professionals: their identified AI risks. So, take a few minutes as you break into the rooms, identify your top two concerns relating to AI, and then tell us what supports you need from WestEd, the Comp Centers, or in general to support your AI initiatives. So, we're gonna go into breakout rooms. Thank you, Anwar, for that presentation, and for the reminder about making sure the outputs are accurate; really good information. Keep that in mind as you're developing your state procedures or strategies around using AI, and we hope this was especially helpful to those of you who are planning. With that, I'm gonna hand it over to our Comp Center team, and they're gonna close us out.

Sarah Barzee:

Great, thank you so much, Baron. So, on behalf of the Region 2, Region 13, and Region 15 Comprehensive Centers, I thank you for being with us today. I hope you found the information relevant and informative, and the discussions in the breakout rooms useful to take back to your SEAs and put to use. A particular thank you to our colleagues Anwar and Baron, as well as our SEA folks, Ajit and Whitney; we appreciate you making the time and being here to share your expertise. Thanks also to our facilitators and our planning team. Forecasting ahead, we hope you'll join us on November 3rd for our third and final session in the series, where we'll focus on practices in AI policy and implementation. As I said at the top of the session, the resources are all available and posted in the Padlet, so please feel free to find them there; if not, reach out to one of us and we'd be happy to help. A final thank you; hope to see you on November 3rd, and have a wonderful rest of your day and weekend. Take care.