According to a white paper from the Center for Democracy and Technology (CDT), teachers are increasingly using generative artificial intelligence (AI) tools to support students with disabilities in ways that save educators time, surface best practices for interventions and support clearer communication with students and parents. In fact, almost 60% of special education teachers reported using AI to develop an individualized education program (IEP) or Section 504 plan during the 2024-25 school year.
However, using AI to craft IEPs carries risks, including potential violations of the Individuals with Disabilities Education Act (IDEA) and privacy laws, as well as the possible introduction of inaccuracies and bias. Teachers should be cautious about entering students' identifiable information into AI tools, especially tools that have not been vetted and approved by their school system.
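That caution can be made concrete: before any student detail reaches an unvetted chatbot, names and ID numbers can be swapped for neutral placeholders. The Python sketch below is a minimal illustration of that idea only; the redact_prompt helper and the sample prompt are hypothetical and are not drawn from the CDT paper or any vendor's tool.

import re

# Hypothetical sketch: replace known student identifiers with placeholders
# before a prompt is sent to an external chatbot.
def redact_prompt(prompt: str, identifiers: list[str]) -> str:
    """Substitute each known identifier (name, student ID, etc.) with a generic token."""
    redacted = prompt
    for i, ident in enumerate(identifiers, start=1):
        redacted = re.sub(re.escape(ident), f"[STUDENT_{i}]", redacted, flags=re.IGNORECASE)
    return redacted

# Example: draft IEP-goal language without revealing who the student is.
prompt = "Draft a reading fluency goal for Jane Doe, ID 445-21, a 4th grader with dyslexia."
print(redact_prompt(prompt, ["Jane Doe", "445-21"]))
# -> Draft a reading fluency goal for [STUDENT_1], ID [STUDENT_2], a 4th grader with dyslexia.

A simple substitution like this is not a substitute for a district-approved tool or a vendor privacy agreement, but it shows the kind of de-identification step a teacher could apply before querying a general-purpose chatbot.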
According to CDT, although many educators feel this is an effective and efficient way to create these documents, the risks include legal and privacy liabilities. For example, IDEA requires each IEP to be unique and tailored to the individual student's disabilities, goals and process for achieving those goals. An AI tool that develops IEPs based on little student-specific information, and whose output is not substantially reviewed and edited by a teacher, likely would not meet these IDEA requirements.
Educators and school systems should also be aware of privacy rules under the Family Educational Rights and Privacy Act (FERPA), IDEA and state privacy laws when using AI tools. Further, any student information included in a query to a chatbot can be collected, and likely stored, by the chatbot company. In addition, the privacy risks and the chance of violating FERPA vary with factors such as the chatbot version being used and whether the school or district has agreements with vendors that license purpose-built tools, which may carry stronger privacy protections.
