PowerSchool, a leader in cloud-based software for K-12 education, has implemented an advanced content filtering solution for its AI assistant, PowerBuddy, to ensure student safety in educational environments. In a blog post co-authored by Gayathri Rengarajan and Harshit Kumar Nyati of PowerSchool, the company details the central challenge: distinguishing legitimate academic discussion of sensitive topics from genuinely harmful content.

By leveraging Amazon SageMaker AI, PowerSchool fine-tuned the Llama 3.1 8B model to create a custom content filtering system. This system effectively identifies harmful content while minimizing false positives, a crucial requirement given that many users are minors.
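To make the filtering step concrete, here is a minimal sketch of the inference-side logic such a system might use. The prompt template, labels, and the stubbed model call are illustrative assumptions, not PowerSchool's actual implementation; in production the call would invoke a SageMaker real-time endpoint hosting the fine-tuned Llama 3.1 8B model.

```python
# Sketch of a prompt-and-parse loop for an LLM-based content filter.
# All names and the stub below are hypothetical, for illustration only.

SYSTEM_PROMPT = (
    "You are a content safety classifier for a K-12 education assistant. "
    "Label the user message as SAFE or UNSAFE. Academic discussion of "
    "sensitive topics (e.g., a history question about war) is SAFE."
)

def build_prompt(message: str) -> str:
    """Assemble the classification prompt sent to the fine-tuned model."""
    return f"{SYSTEM_PROMPT}\n\nMessage: {message}\nLabel:"

def parse_label(completion: str) -> bool:
    """Return True if the model flagged the message as unsafe."""
    return completion.strip().upper().startswith("UNSAFE")

def fake_endpoint(prompt: str) -> str:
    """Stand-in for invoking the SageMaker endpoint (illustrative stub)."""
    return "UNSAFE" if "build a bomb" in prompt.lower() else "SAFE"

def is_blocked(message: str) -> bool:
    """End-to-end check: format the prompt, call the model, parse the label."""
    return parse_label(fake_endpoint(build_prompt(message)))

print(is_blocked("Why did World War I start?"))  # False: academic, allowed
print(is_blocked("How do I build a bomb?"))      # True: harmful, blocked
```

The key design point the blog highlights is captured in the system prompt: the classifier must treat academically legitimate discussion of sensitive subjects as safe, which is what a generic keyword filter cannot do.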

The architecture includes a comprehensive data preparation pipeline, robust model training, real-time inference capabilities, and continuous improvement processes that incorporate user feedback. Initial testing showed the model achieving approximately 93% accuracy with a false positive rate under 3.75%.
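The two quoted metrics can be read off a standard confusion matrix. The counts below are invented solely to illustrate how figures like these are computed; only the formulas are the point.

```python
# Worked example of the two evaluation metrics quoted above.
# The counts are hypothetical, chosen only to illustrate the arithmetic.

def accuracy(tp: int, tn: int, fp: int, fn: int) -> float:
    """Fraction of all messages classified correctly."""
    return (tp + tn) / (tp + tn + fp + fn)

def false_positive_rate(fp: int, tn: int) -> float:
    """Fraction of benign messages wrongly flagged as harmful."""
    return fp / (fp + tn)

# Hypothetical evaluation run: 1,000 messages, 200 of them truly harmful.
tp, fn = 160, 40   # harmful messages: 160 caught, 40 missed
fp, tn = 30, 770   # benign messages: 30 wrongly flagged, 770 passed

print(f"accuracy = {accuracy(tp, tn, fp, fn):.1%}")                # 93.0%
print(f"false positive rate = {false_positive_rate(fp, tn):.2%}")  # 3.75%
```

Note that the false positive rate is computed over benign traffic only, which is why it is the metric to watch when most users are minors asking legitimate questions: every false positive blocks a student who did nothing wrong.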

Committed to enhancing student safety while promoting engagement, PowerSchool plans to continuously adapt and improve the content filtering model as part of its broader AI strategy. The results illustrate how AI can be integrated into education in a way that keeps the learning environment safe.