From “Duh” to Deep Dive: Ian Baker and the Human Side of AI in Psychology Education

By Keith Taynton (Learning Technology Support Officer)

Ian Baker, a psychology academic at the University of Derby, isn’t afraid to admit he’s navigating the AI revolution alongside his students and colleagues. In a candid conversation, Ian reveals a journey of discovery, marked by initial scepticism, burgeoning excitement, and a deep-seated focus on the ‘human aspect’ of this rapidly advancing technology. His story isn’t about proclaiming AI as a magic bullet, but about embracing it as a powerful tool that necessitates a fundamental shift in teaching and learning, one always underpinned by empathy and critical reflection.

The conversation begins with a seemingly obvious, yet crucial, point: self-testing is a cornerstone of effective learning. Ian points to well-established psychological literature confirming that quizzing oneself after exposure to material significantly aids consolidation, deeper understanding, and recall. One might even think, “duh.” However, the “duh” quickly gives way to a familiar and sobering hurdle: crafting effective self-tests is time-consuming and intellectually demanding. This is where Generative AI enters the frame.

Ian recounts his early experimentation with AI question generators. The ability to feed session slides into an AI and have it produce multiple-choice or short-answer questions was, in his words, “brilliant.” This initial spark of enthusiasm wasn’t about replacing the human in the loop, but about alleviating a significant bottleneck in the learning process. The time and cognitive load associated with question creation, a task often relegated to the already overflowing to-do lists of academics, could be drastically reduced. This will surely resonate with colleagues under constant pressure to deliver an excellent student experience with limited resources.
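For readers curious about what this workflow looks like in practice, here is a minimal sketch of how slide text might be turned into a question-generation prompt, with a simple parser for the AI’s reply. The prompt wording, the “Q:/A:” response format, and the function names are illustrative assumptions, not Ian’s actual process or any particular tool’s API; the generated questions would, as Ian stresses, still need human review.

```python
# Hypothetical sketch of an AI-assisted self-test workflow.
# Names, prompt wording, and response format are assumptions for illustration.

def build_quiz_prompt(slide_text: str, n_questions: int = 3) -> str:
    """Compose a prompt asking a generative AI for short-answer questions."""
    return (
        f"Read the lecture material below and write {n_questions} "
        "short-answer self-test questions with answers.\n"
        "Format each as 'Q: ...' on one line and 'A: ...' on the next.\n\n"
        f"{slide_text}"
    )

def parse_quiz_response(reply: str) -> list[tuple[str, str]]:
    """Pair up 'Q:' and 'A:' lines from the AI's reply, ready for human review."""
    questions, answers = [], []
    for line in reply.splitlines():
        line = line.strip()
        if line.startswith("Q:"):
            questions.append(line[2:].strip())
        elif line.startswith("A:"):
            answers.append(line[2:].strip())
    return list(zip(questions, answers))

# A canned reply standing in for a real AI response:
sample_reply = (
    "Q: What is retrieval practice?\n"
    "A: Testing yourself on material to strengthen recall."
)
print(parse_quiz_response(sample_reply))
```

The key design point, echoing Ian’s caution, is that the output is structured for checking: each question–answer pair is surfaced to the academic rather than published directly.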

Crucially, Ian isn’t advocating for blind faith in AI. He immediately emphasises the necessity of human oversight: “You obviously have to check them… you can’t just let the robots take over completely!” This highlights a key understanding: AI is a tool, not a replacement for pedagogical or subject expertise. The generated questions are a starting point, requiring careful review to ensure alignment with learning objectives, accuracy of content, and an appropriate level of challenge. This academic perspective is vital, particularly in a field like psychology where critical thinking and nuanced understanding are paramount.

Interestingly, Ian observes a potential generational difference in AI adoption. He notes that staff, generally speaking, appear less inclined to readily engage with AI than students. This observation poses a significant challenge for universities striving to integrate AI effectively. Ian therefore recognises the importance of high-quality staff development, framing it not just as adopting a new technology, but as acquiring essential skills for the future of education and work. He candidly admits he doesn’t have all the answers, positioning himself alongside both staff and students on a shared learning curve. This vulnerability and transparency are key to fostering a culture of experimentation and adaptation.

What also sets Ian’s approach apart is his focus on the emotional dimension of AI adoption. As a psychologist, he instinctively delves deeper than the practical applications alone. He actively engages students in conversations about not just how they are using AI, but how they feel about using it. This line of questioning reveals anxieties, discomfort, and ethical considerations that so often lie beneath the surface. When students express feeling “twitchy” or uncomfortable, Ian sees this as valuable data, a window into their emotional response to a disruptive technology. He uses his psychological training to explore the roots of these feelings, helping students “come to terms” with the ambiguity and challenges inherent in this technological shift.

This emphasis on emotional intelligence is particularly pertinent in the context of higher education. Students are not just learning content; they are developing as individuals, navigating their place in a rapidly changing world. Ian’s approach acknowledges this holistic development, recognising that technological integration must be accompanied by emotional processing and ethical reflection. His strategy of open dialogue and emotional exploration offers a valuable model for educators across all disciplines grappling with similar anxieties and uncertainties surrounding Generative AI.