On September 20, at MoodleMoot Global 2023 in Barcelona, we held an OET Workshop. There was some fantastic discussion, with over 150 Open EdTech enthusiasts in attendance. The workshop offered a platform to discuss the path forward for open educational technology, particularly in a world with AI personal assistants.
Kickstarting the Workshop
The session began with a broad discussion about the latest updates from OET. We celebrated the recent official formation of the OET Association and shared insights on our current progress in setting up our environment. Our future plans were laid out, fuelling the energy for the discussions that followed.
Brainstorming Around an AI Personal Assistant
The main highlight of the workshop was an expansive brainstorming session about the open-source, on-device AI assistant that we see as a big part of the OET framework going forward.
Identifying the Problem Areas
Several problem areas were identified during the session:
- Access to the lowest tier of learners: Challenges here revolved around device availability, infrastructure, electricity access, and financial constraints.
- Trust in the system: This covered aspects like data privacy protection, transparency, system accuracy, and ensuring the system had a clear, context-dependent goal, vision, and mission.
With these challenges listed, we zeroed in on solutions:

- How to provide access to the lowest tier of learners?
  - Implement a minimal offline mode.
  - Ensure portability across devices.
  - Initiate programs for free devices and charging stations.
  - Engage with people already working with grassroots learners.
  - Leverage local resources like libraries and internet cafes.
  - Potentially develop open-source software, licensing, and customised Raspberry Pi devices.
- How to maximise trust in the system?
  - Provide continuous access to personal data with erasure options.
  - Ensure training data has traceable sources and limited biases.
  - Utilise decentralised hosting of software and data.
  - Maintain a transparent operation system and establish feedback loops.
  - Incorporate multiple AI models for enhanced trust, reminiscent of Google’s Bard model, which has a “verify with Google” function.
  - Encourage user skills and knowledge in using the system responsibly.
  - Establish dedicated trust boards or committees.
  - Opt for localised AI solutions when needed.
After our brainstorm, we had some demos of recent improvements in AI. Fred Dixon demonstrated a new BigBlueButton feature that uses AI to create student polls based on educators’ slide content.
Finally, Martin provided a summary of the day. One of the major takeaways he mentioned was that it is becoming clear that the personal learning AI should not attempt to be a teacher itself; rather, it should focus on learning about the user and guiding them towards high-quality learning opportunities and trustworthy sources of information.