April 26, 2024
This blog post was developed by the Regional Comprehensive Centers 2, 13, and 15 for the U.S. Department of Education’s Office of Educational Technology (OET). It first appeared on OET’s blog and is posted here with permission.
State educational agencies are increasingly focused on how to maximize the benefits and minimize the risks of AI in their state education settings. To inform state and district leaders of key considerations as they make decisions around AI, OET invited Regional Comprehensive Centers 2, 13, and 15 (federally funded capacity-building technical assistance partners to state educational agencies) to share, in this guest post, recent thematic advice their states received from national AI experts on how to think about shaping AI policy, parameters, and use cases at a large scale.
“The arc of AI and this technology is profound. We will look back to the year 2023 and we will talk about it like we did about Sputnik or the moon landing, or the birth of the internet. It might be more important than all of those moments.” — Alex Kotran, CEO, The AI Education Project (AiEDU), speaking to state educational agency leaders at the recent “Artificial Intelligence — Opportunities and Risks in Education” webinar series
For public education, Artificial Intelligence (AI) holds both major promise and major risk. Foremost for state educational agency (SEA) leaders is protecting and advancing the safety, well-being, and quality of education for children and educators.
During the fall of 2023, leaders from seven SEAs participated in Artificial Intelligence — Opportunities and Risks in Education, a webinar series featuring presentations from national AI experts. Hosted by the Regions 2, 13, and 15 Comprehensive Centers, the series originated when multiple SEAs expressed interest in learning more about existing research on how SEAs can shape, and are already shaping, the use of AI in K–12, and in exploring the opportunities and risks inherent at the state level.
The themes below reflect the insights, considerations, and foundational resources the following experts surfaced during this 3-part webinar series.
- Anwar El-Amin, WestEd
- H. Alix Gallagher, Policy Analysis for California Education (PACE)
- Ajit Gopalakrishnan, Connecticut State Department of Education (CSDE)
- Kevin Johnstun, U.S. Department of Education
- Glenn Kleiman, Stanford Accelerator for Learning
- Alex Kotran, The AI Education Project (AiEDU)
- Erin Mote, Innovate EDU
- Whitney Phillips, The State of Utah
- Baron Rodriguez, Data Integration Support Center (DISC)
Because the AI space is rapidly evolving, the considerations below are not intended to be comprehensive or all-inclusive; rather, they are a snapshot of expert perspectives and resources as of April 2024.
This post provides nonregulatory, education-specific advice that is aligned with federal guidelines and guardrails. It is not intended to enable a developer to establish its compliance with any federal laws or regulations. Also, it is not intended to be a comprehensive or exhaustive review of issues or to introduce any new requirements.
Before articulating policy, clarify high-level SEA values and guiding principles.
1. Ensure humans are in control of and driving every meaningful decision made based on AI.
Experts advise SEAs to clarify high-level SEA values and guiding principles before articulating policy. Indeed, AI tools (and tools intended to supervise the use of AI) need to be developed and deployed in service of fulfilling the educational agency’s vision for education. When appropriately directed, AI can provide supports that enable systemwide progress.
“AI tools are a powerful way of augmenting what people do, rather than a replacement,” said Glenn Kleiman of the Stanford Accelerator for Learning. Thus, in service of greater trust and transparency in the system, humans should either actively guide or examine every meaningful decision AI makes.
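To make “human in the loop” concrete in software terms, the sketch below shows one common pattern: an AI-generated recommendation sits in a review queue and takes effect only after a named staff member approves it. This is illustrative only; the class and field names are hypothetical, not drawn from any system described in the series.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Recommendation:
    """One AI-generated suggestion awaiting human sign-off."""
    student_id: str
    suggestion: str                      # e.g., "flag for reading support"
    approved_by: Optional[str] = None    # stays None until a human approves

class ReviewQueue:
    """Holds AI output until a named staff member examines it."""
    def __init__(self):
        self.pending = []

    def submit(self, rec):
        # AI output enters the queue; nothing downstream acts on it yet.
        self.pending.append(rec)

    def approve(self, rec, reviewer):
        # A human reviewer signs off before the recommendation is used.
        rec.approved_by = reviewer
        self.pending.remove(rec)
        return rec

queue = ReviewQueue()
queue.submit(Recommendation("S-001", "flag for reading support"))
acted_on = queue.approve(queue.pending[0], reviewer="counselor_lee")
print(acted_on)
```

The design choice worth noting is that approval is recorded, not assumed: every acted-on recommendation carries the name of the person who examined it.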
Centering educators and SEA values is important not only in leveraging existing AI tools but also in developing new tools. Educators must be involved at all stages of development. For example, educators have an opportunity to help shape what tech companies build if they are involved in the development of state or district procurement guidelines.
Put necessary guidelines and guardrails in place to protect users from known risks.
2. Clarify or expand relevant guidelines to ensure age-appropriate use. This includes parental consent requirements for use in an educational setting and prohibitions on young students using the technology without supervision.
Importantly, students and educators should not submit personally identifiable information — including first and last names and Social Security numbers — into any publicly available AI platform.
It is not always clear how data are stored or used in each AI platform. In many popular applications, individuals’ data are used to train improvements to the AI model. “Once such information is provided, it cannot be pulled back,” cautioned H. Alix Gallagher of PACE.
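As one technical backstop an agency might pair with this guidance, here is a minimal sketch that strips Social Security numbers by pattern, and known student names from a roster, before any text leaves the agency. The function and the roster are hypothetical, and pattern matching like this is blunt; it misses many forms of PII, so it supplements rather than replaces policy and training.

```python
import re

# Matches the common SSN format 123-45-6789.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def scrub(text, known_names):
    """Redact SSNs and known student names before text is sent externally."""
    text = SSN_PATTERN.sub("[REDACTED-SSN]", text)
    for name in known_names:
        text = re.sub(re.escape(name), "[REDACTED-NAME]", text,
                      flags=re.IGNORECASE)
    return text

print(scrub("Jane Doe, SSN 123-45-6789, missed 4 days.", ["Jane Doe"]))
# -> "[REDACTED-NAME], SSN [REDACTED-SSN], missed 4 days."
```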
3. Protect input and output data by embedding AI technologies within the SEA’s overarching cloud infrastructure—the resulting construct is referred to as an “enterprise architecture” or a “Secure AI” enclave.
If effectively encapsulated, personal data and proprietary information will only be accessible to authorized applications and personnel. In all cases, the agency needs to know exactly where user data do and do not go. For more information on best practices in governance of AI systems, please consult the NIST AI Risk Management Framework.
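In code terms, the core idea of an enclave is that nothing reaches the model except through an authorizing gateway. The toy sketch below shows only that control-flow shape; all names are hypothetical, and a real deployment would rely on the cloud provider’s identity, network, and logging controls rather than an in-process allow list.

```python
AUTHORIZED_APPS = {"sis-reporting", "iep-drafting-assistant"}  # hypothetical

def enclave_gateway(app_id, prompt, model_call):
    """Forward prompts to the in-enclave model only for authorized apps."""
    if app_id not in AUTHORIZED_APPS:
        raise PermissionError(f"{app_id} is not authorized for the AI enclave")
    # model_call stands in for an agency-hosted model inside the enclave,
    # so the prompt never crosses the agency's cloud boundary.
    return model_call(prompt)

stub_model = lambda prompt: f"(enclave model response to: {prompt})"
print(enclave_gateway("sis-reporting", "Summarize attendance trends.", stub_model))
```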
4. Before meaningfully implementing AI, put checks and data governance protocols in place that account for and overcome “blind spots” in AI outputs.
Most AI models to date have not yet been validated using contextually appropriate data, experts noted. In those cases, quality control processes are necessary to ensure people review, verify, and validate generated outputs and recommendations. The technology is imperfect in that it is trained at a given point in time using a given set of data.
Inaccuracies may arise because many models, especially generative AI models, are trained on large-scale internet data, a process that inevitably pulls in some biased and toxic information. In addition, a machine learning model that ingests incorrect data can generate further incorrect data points in answer to a query. For example, some popular generative AI platforms have created references and web links for “sources” that do not exist. Building processes and capacity to confirm the accuracy of generated outputs is vital to avoid significant adverse consequences for students, teachers, families, communities, and entire school systems.
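One narrow, automatable check for that fabricated-source failure mode is to test whether generated links resolve at all. The sketch below does only that; a link that loads may still not support the claim it is attached to, so human review remains essential. The second URL is deliberately fake, standing in for a hallucinated citation.

```python
import urllib.request

def link_resolves(url, timeout=5.0):
    """Return True only if a HEAD request to the URL succeeds."""
    try:
        req = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(req, timeout=timeout):
            return True
    except (OSError, ValueError):  # URLError, timeouts, malformed URLs
        return False

for url in ["https://www.ed.gov", "https://example.invalid/made-up-source"]:
    print(url, "->", "resolves" if link_resolves(url) else "verify by hand")
```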
5. Develop and roll out coherent systemwide guidance for students and teachers on how to properly use AI, including how to keep children safe and ensure consistent academic integrity from class to class.
“A lot of districts are [currently] trying to triage—identifying immediate policies to put in place around academic integrity and safety. It’s worth addressing these now. We don’t have 12 months to have unanswered questions,” noted Kotran of AiEDU. The urgent need for systemwide guidance presents an opportunity for collaboration with other SEAs and districts to share resources and lessons learned.
For example, given that it takes only a few prompts for generative AI tools to produce output that is indistinguishable from authentic student work, educators cannot reliably tell whether a student has used AI. Therefore, teachers need to be effectively trained to design and implement instructional practices and assessments that align learning goals with the appropriate level of AI use. It is worth underscoring that educational agencies will need to carve out adequate time and space for teachers to build this capacity.
Deeply explore the strategic potential of AI to advance your SEA’s vision for education, possible use cases, and the return on investment for teachers and students. Additionally, explore plausible scenarios in your context, as well as risks and how to mitigate them.
6. Scale AI at the SEA level after clear guidelines and guardrails are in place and when more is known about the potential of AI.
Given that the repository of data from which AI draws broadens as content and learning develop — that is, the technology often “gets smarter” over time — experts agree that, for now, it is worth being deliberate in exploration.
Experts suggest that as SEAs begin to build out architectures, they simultaneously develop strong use cases and consider small-scale adoption, such as pilots that may later lead to scalable proof points.
7. Proactively explore equity protections. States must ensure it is not just “some” who are driving the car.
SEA leaders participating in this series expressed a strong interest in fully exploring ethical considerations of large-scale AI decisions, noting that such discussions are underway in their agencies. For example, participants agreed that equity safeguards pertaining to procurement, training, and access to AI technologies would be critically important so that educational agencies at all levels would not, through AI, exacerbate existing systemic educational inequities and instead help to overcome them.
Experts agree that if only some students receive school supports to learn about AI and how to use it, the resulting gaps would present a major equity concern.
“[We have] concerns around cost going forward because we already have inequity in our buildings,” added another SEA leader. “[I]t is the thing we work on the most. So [we are] really concerned that some of our districts will have a leg up again — like they did in the 80s and 90s when some were doing digital fluency and computer skills when others were not — when talk of a computer lab was not even a possibility.”
8. Consult available national resources, including, as of the date of this publication, the following:
- the Biden administration’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, as interpreted for K–12 by the Data Quality Campaign
- the Office of Educational Technology, U.S. Department of Education policy report Artificial Intelligence and the Future of Teaching and Learning: Insights and Recommendations and forthcoming toolkit
- ongoing Institute of Education Sciences projects seeking to better understand the efficacy of AI systems in improving student outcomes
- the National Institute of Standards and Technology, U.S. Department of Commerce AI Risk Management Framework
- the U.S. National Science Foundation EducateAI initiative
- the Office of Science and Technology Policy, The White House Blueprint for an AI Bill of Rights
- EDSAFE AI Alliance, a diverse alliance of cross-sector organizations anchored in the S.A.F.E. Benchmarks Framework (Safety-Accountability-Fairness-Efficacy): a deliberate, ordered approach to taking steps to organize the use of AI and develop usable assets; the EDSAFE AI Alliance has created a network of district-level policy labs across the United States to help develop district AI policy
- TeachAI’s AI Guidance for Schools Toolkit
- Center for Democracy and Technology brief on the intersection between civil rights, disability rights, and surveillance technology
- Consortium for School Networking (CoSN) and the Council of the Great City Schools: K–12 Generative AI Readiness Checklist
- The AI Frontier, a Utah Education and Telehealth Network MOOC for educators
- AI for Learning and Work — EdTech Center @ World Education
- AI in the K–12 Classroom — AVID Open Access
- DeepLearning.AI — Start or Advance Your Career in AI
- Review of Guidance from Seven States on AI in Education — Digital Promise
9. When thinking through possible use cases, consider data quality, security, privacy, ethics, and training needs.
As experts suggested, before scaling up, build collaborative spaces within and across all levels of SEAs and LEAs where people can come together to share their experiences, identify what’s working, collectively test generative AI platforms to determine what works well, and discuss issues around privacy, bias, and small-scale problems.
For example, states participating in this webinar series reported that they are considering use cases such as the following:
- Virtual assistants/agents that support educators, such as by tutoring students or helping with lesson planning.
- Translation for students and families. This is particularly important for ensuring that educational opportunities and family engagement enabled by AI are equitable.
- Generative AI outputs communicated in ways that are inclusive, accessible, culturally and linguistically relevant, and resonant for individuals from different backgrounds.
- Timely insights on how to address learner variability and enhance feedback loops. AI applications can detect patterns in data and rapidly surface actionable insights. This presents an opportunity to maximize the use of longitudinal data, leaders agree.
“We use machine learning algorithms as part of an early indication tool — an early warning system to ascertain the support level for students and ID students who may have potential for being in advanced courses,” added Gopalakrishnan of the CSDE. “In both cases, the output is reviewed by humans.”
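For readers who want to picture that pattern, here is a deliberately minimal sketch; it is not CSDE’s actual system, and the features, training data, and threshold are invented. The point it illustrates is that the model only scores risk, and anything above the threshold is routed to staff rather than triggering an automatic action.

```python
from sklearn.linear_model import LogisticRegression
import numpy as np

# Hypothetical features: [absence_rate, course_failures, mobility_flag]
X_train = np.array([[0.02, 0, 0], [0.15, 2, 1], [0.05, 1, 0], [0.25, 3, 1]])
y_train = np.array([0, 1, 0, 1])  # 1 = needed intervention historically

model = LogisticRegression().fit(X_train, y_train)

def flag_for_review(features, threshold=0.5):
    """Return (index, score) pairs whose risk score warrants a human look."""
    scores = model.predict_proba(features)[:, 1]
    return [(i, s) for i, s in enumerate(scores) if s >= threshold]

# Each flagged index goes to a counselor; no automated decision is made.
print(flag_for_review(np.array([[0.20, 2, 1], [0.01, 0, 0]])))
```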
In any use case, experts agree that it is important to recognize that free versions of popular AI applications may not reveal the technology’s full potential. SEAs will want to make policies based on a full understanding of the latest model, which requires paying attention to and, where possible, investing resources into advanced models and updates as they evolve.
10. When setting policies, weigh AI policy guidelines already adopted by other states against your state’s local context.
As of the date of this publication, the following states, districts, and organizations have issued K–12 AI policy guidelines:
- California
- Gwinnett County Public Schools (Georgia)
- Indiana University’s K–12 Rural AI Initiative
- Indiana’s DOE K–12 Pilot Program
- Los Angeles
- New York City
- North Carolina
- Ohio
- Oregon
- Virginia
- Washington State
- West Virginia
“The train has left the station — you are on the train. With respect to policymaking, our focus needs to be on where is the train going — to the extent that we can steer it. Let’s do our best to optimize for outcomes.” — Kotran, AiEDU
Access the Full “Artificial Intelligence—Opportunities and Risks in Education” Webinar Series, Transcripts, and Summaries:
- Session 1: Uncovering the Potential and Risks of AI in SEA Practices
- Session 2: Developing a Cohesive and Comprehensive SEA Approach to AI
- Session 3: Guiding Education Leaders Towards Safe and Equitable AI Implementation
The Regions 2, 13, and 15 Comprehensive Centers work with state educational agencies and their regional and local constituents in Arizona, California, Connecticut, Nevada, New Mexico, New York, Oklahoma, Rhode Island, Utah, and the Bureau of Indian Education to improve outcomes for all children and better serve communities through capacity-building technical assistance.
The contents of this post were developed by the Regions 2, 13, and 15 Comprehensive Centers. These centers are funded by a grant from the U.S. Department of Education. However, the contents of this post do not necessarily represent the policy of the Department of Education, and you should not assume endorsement by the federal government.
This blog contains examples of resources that are provided for the user’s convenience. The inclusion of these materials is not intended to reflect their importance, nor is it intended to endorse any views expressed or products or services offered. These materials may contain the views and recommendations of various subject matter experts as well as hypertext links, contact addresses, and websites to information created and maintained by other public and private organizations. The opinions expressed in any of these materials do not necessarily reflect the positions or policies of the U.S. Department of Education. The U.S. Department of Education does not control or guarantee the accuracy, relevance, timeliness, or completeness of any outside information included in these materials.