Summary
This interactive guide originated as an experiment exploring the extent to which ChatGPT can be used to scale high-touch virtual programming. It eventually evolved into an adaptable resource that arms learners with the tools needed to craft and tune effective prompts, giving them the agency to decide if and how to integrate the technology into their workflows.
Overview
The Problem
A common decision that learning designers face is choosing between “high-touch” and “high-tech” approaches when considering factors like time, place, and need. While high-touch learning experiences, like workshops, discussions, and projects, offer a high level of personalization, they are difficult to scale and can be quite costly. High-tech learning experiences, like asynchronous eLearning, have the advantage of being more cost-effective and scalable, but they deprive learners of direct human support, encouragement, and guidance.
The client identified reviewing and responding to weekly participant reflections as one of the most labor-intensive parts of their recent high-touch virtual program. Although reflecting is an important part of learning, they shared that providing personalized feedback made it difficult to expand the program to more people and took time away from building meaningful relationships through additional activities.
The Solution - or at least part of it
As Donald Clark proposes in *Learning Technology: A complete guide for learning professionals*, the relationship between learning and technology, like that between AI and teaching, is a complex and dialectical one. We therefore sought an approach that combines the respective benefits of high-tech and high-touch programming. By using ChatGPT to read reflections and draft responses, facilitators could redirect the time saved toward more effective teaching.
Design Theory
To prioritize the human aspects of this learning experience, I used the Design Thinking methodology, an iterative and human-centric approach to tackling unknown problems through five phases: empathize, define, ideate, prototype, and test.
First, I empathized with the program facilitators by asking probing questions like, “Can you tell me more about what you mean by ‘labor-intensive’?” and “Can you tell me about a time that responding to reflections took away time from more meaningful tasks?”
Next, I defined the problem using the “[User] needs a way to [verb] because [insight]” problem statement template. This became:
“Facilitators of high-touch programs need a way to spend less time reading and responding to reflections because they need more time for building meaningful interactions.”
During the ideation phase, I imagined this learning experience taking many forms, from live virtual training sessions, to a compilation of existing, reputable resources on ChatGPT, to an eLearning course. I decided to start by creating a step-by-step guide as the first prototype. For more detailed explanations of the changes that occurred during the testing phase, refer to The Process section below.
The Process
This project involved an initial scoping meeting, confirmation of security measures, prompt tuning, documentation, 1:1 training support, revisions based on weekly feedback, and a post-experiment reflection to evaluate successes and areas for growth.
The program itself consisted of four touch-points, with three opportunities to reflect in total, one between each pair of consecutive touch-points. Eight reflections were recorded between the first and second touch-points, and 20 were recorded between the second and third. The reflection between the third and fourth touch-points was a share-out, so this particular AI intervention was not needed there.
To support the facilitators, I provided a total of four resources: a text-based step-by-step guide, an interactive eLearning course, a progress log, and direct support via Slack. The advantages and disadvantages of each resource are described below:
If we were to try something like this in the future, we would make three major changes, spanning the deliverable, the prompt, and the approach to support. First, we would consolidate the benefits of the above resources into a single interactive guide. Second, we would rework several of the prompt’s parameters, such as “reflection number” and “student name”, which were included to help ChatGPT generate consistent, higher-quality responses (a reconstructed sketch follows below). Since these were not part of the student’s original reflection, the facilitator had to add them manually each time, contributing to the very labor and time this intervention was meant to cut. Finally, we agreed to schedule a live training session between myself and the facilitator at the start. This would allow additional space for basking in the novelty and excitement of a new technology, troubleshooting issues, and surfacing and correcting workflow assumptions.
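For illustration, the prompt followed a pattern roughly like the sketch below. This is a reconstruction, not the exact prompt used in the experiment; the bracketed fields mark the metadata the facilitator had to fill in by hand before pasting each reflection into ChatGPT.

```
You are a supportive facilitator in a high-touch virtual program.
Student name: [typed manually by the facilitator]
Reflection number: [typed manually by the facilitator]
Reflection: [pasted from the student's submission]

Draft a warm, personalized response that acknowledges the student's
insights and encourages continued reflection.
```

Removing the two manually typed fields, or letting ChatGPT infer them from context where possible, would presumably let the facilitator paste a reflection as-is, which is the direction we would take in a future iteration.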
Results & Takeaways
The facilitator rated the majority of ChatGPT-generated responses at quality level 3 (“Only minor adjustments needed”) or higher (“Equal to or better than a human-generated response”). However, it was ultimately decided that the tediousness of the process outweighed the time saved through automation.
It is difficult to discern whether the additional time and effort required at this experimental phase (e.g., overcoming novelty and cultural inertia, learning the tool, refining the workflow, documenting efficacy throughout the process, working with a small sample size) distorts the intervention’s actual impact once the learning curve is overcome. This is especially true for technologies like ChatGPT, where the version available to the public is a form of experimentation in its own right. In short, as Design Thinking calls for, we must iterate, iterate, iterate.
If the learner decides that a new technology has the potential to augment and elevate their work, the next step is to strategically interrogate and justify the following: “What goal is this particular technology helping me achieve?”, “Which part of the process does it best lend itself to?”, and “How will I measure whether it was effective?”
Finally, this project also illustrated the distinction between “curious experimentation” and “blind integration”. The emergence of *insert hot buzzword here* does not necessitate an urgent uprooting of existing processes in pursuit of performative innovation, nor should it provoke fear-induced paralysis. We need to be especially mindful of this with technology that is as dangerously anthropomorphized in daily vernacular as ChatGPT. Instead, adoption can look like the exploratory innovation that characterizes sandbox and playground environments, somewhere between a lazy river and a crashing waterfall.