Lures, Myths, Pitfalls and Traps
In order to learn from failure, it helps to categorize failures and to describe failures and their causes in a uniform format.
In both designing GBSs and critiquing the designs of others, I've found the following categorization of failures to be useful:
Repair: Insist that all decisions about learning goals be supported by task consequences. A myth might be valid, but it has to be defended, just like all other design assumptions. And don't confuse rationalization with a real defense. Question everything.
Repair: Same as for myths.
Repair: Put real students in front of the system early and often. Play student every day. How does the environment feel? What's the most obvious thing to do? What do you think the goal is?
Note that these are all categories of possible causes of failure. Sometimes a cosmic idea leads to a great GBS. You just have to be doubly cautious when your idea or design has the characteristics of the problem areas.
The Myth of the Universal Course encompasses two claims: first, that a single course can and should serve every student; second, that students are alike enough to be taught the same way.
Corollary: a course that succeeds for only a small segment of the population is not worth doing.
The first claim is wrong if we assume technology-based learning environments. There is simply no need to make every student learn fractions the same way. If running a business works for some students and building a bridge works for others, great. Once the courses are built, the costs of delivery are not significantly worse for many courses versus a few.
The second claim is wrong for two reasons. First and most obvious, students differ along just about every important learning dimension you can think of, from intellectual skills to personal interests to community values to common social contexts to... Second and maybe even more important, students, especially young ones, don't want to be treated generically. Even if a kid would love a particular book, making everyone in the class read it is almost guaranteed to make everyone in the class hate it.
The Myth of the Universal Course leads designers to avoid designs leveraging particular interests, e.g., baseball or country music, because not all students share these interests. This is bad thinking. These interests are powerful motivators in students precisely because not everyone shares them.
Some ideas are so important or just plain neat that students have to see them. Recursion à la Gödel, Escher, Bach, relativity, the calculus, evolution, ... Often the idea seems particularly neat because the designer just discovered it, or just saw its applicability in some new context.
Unfortunately, a cosmic idea often lures you, the designer, into forcing the student to encounter that idea. Whether the student wants to or not, whether the problem and task at hand require it, the student is going to hear that lecture on evolution or else.
A good sign that you've succumbed to this lure is worrying that a student might successfully accomplish the goal set in the scenario without ever having to see a particular text or video. If you have this concern, something is wrong. If your design doesn't require the skills and knowledge you think it does, maybe the design is bad, but maybe that knowledge is not as crucial as you think.
Sometimes, you get this really neat idea for how to teach something. Playing pool to learn about the mathematics of angles. Driving a truck to learn the physics of forces and momentum.
Unfortunately, just because a body of knowledge is relevant to a task does not mean that that body of knowledge is necessary for the task. Pool players calculate angles but they do so with special tricks, not with sines and cosines. Very few truck drivers apply F = ma while barrelling down Interstate 95.
GBS scenarios teaching critical thinking skills can get very complex. The student may have to balance dozens of different pieces of evidence, and follow dozens of different chains of argument. To help manage this complexity, the designer may add a bookkeeping tool of some kind.
Example: in Advise systems, the student has to choose among 3 or 4 plans, eventually picking the one that best addresses 3 or 4 different goals, based on evidence gathered from dozens of experts. It's easy to lose track of what's been considered. The Advise developers created a matrix of plus and minus signs that summarizes what the student has learned so far about how each goal is positively or negatively affected by each plan.
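The bookkeeping idea can be sketched as a small data structure. This is a minimal illustration, not the actual Advise implementation; the plan and goal names are invented:

```python
# Sketch of an Advise-style bookkeeping matrix. Plan and goal names
# here are hypothetical, not from the actual Advise systems.
# Each cell records what the student has concluded so far about how a
# plan affects a goal: "+", "-", or None (not yet considered).

PLANS = ["expand overseas", "cut prices", "acquire a rival"]
GOALS = ["market share", "short-term profit", "employee morale"]

# Start every cell empty; cells fill in only as the student weighs evidence.
matrix = {(plan, goal): None for plan in PLANS for goal in GOALS}

def record_judgment(plan, goal, effect):
    """Record the student's current judgment: '+' or '-'."""
    assert effect in ("+", "-")
    matrix[(plan, goal)] = effect

def show_matrix():
    """Print the matrix, leaving unconsidered cells blank."""
    print(" " * 20 + "".join(f"{g:>20}" for g in GOALS))
    for plan in PLANS:
        cells = "".join(f"{matrix[(plan, g)] or '':>20}" for g in GOALS)
        print(f"{plan:<20}{cells}")

record_judgment("cut prices", "market share", "+")
record_judgment("cut prices", "short-term profit", "-")
show_matrix()
```

Note that the cells start empty and fill only as the student actually considers evidence; the structure records conclusions rather than prompting for them.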
As a bookkeeping tool, the matrix is very helpful. Unfortunately, it can easily be too helpful. When the student sees the matrix, a common response is to say "ah, that's what I'm supposed to do, put plus and minus signs in every box in the matrix." Now the student is no longer trying to come up with a well-rounded set of arguments for a plan; he or she is just trying to find the magic steps to make those plus and minus signs appear. The tool has become worse than a crutch; it has become the focus of the task.
Everyone hates quizzes in school, and they hate surprise quizzes the most. A quiz sets up an adversarial relationship between the student and the quiz giver. Avoid quizzes and even the appearance of quizzes.
Example: The student chooses one of several actions in a simulated management situation. A guide pops up and says "you can't do that, that's wrong."
Example: Movie Reader was an early system built at ILS to teach story understanding skills. The model was that of a parent watching a movie with a child. Parents frequently ask their children questions such as "Do you know why that happened?" "Who do you think did it?" and so on. These questions prompt the child to think harder about what's going on in the story. The questions occur at key points in the plot and focus on important hidden implications. When the same model was put in computer form, with full-screen video and questions appearing as pop-up dialog boxes, the questions looked just like a reading comprehension quiz.
Repair: Warn the student that choices will appear and make it clear what the purpose of those choices is. If, as in Movie Reader, the questions are meant to be thought-provokers, provide an easy link to answers, so that it's clear that getting a right answer is not the issue.
If relevant, emphasize that there are no right answers. All choices are possible, some better than others, none perfect. If this is not true of your GBS, why not?
Don't "grade" answers with a "bzzzt, wrong!" response. Just show what happens for each choice.
Have lots of choices. If you give the student just 4 choices, it feels a lot like a quiz. If there are dozens of choices or more, and those choices remain constant throughout the simulation (disappearing only when they make absolutely no sense in some situation), then it doesn't feel like a quiz.
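The last two repairs can be sketched together: a large menu of actions that stays constant, is filtered only where an action makes no sense, and answers every choice with a consequence rather than a grade. The actions and consequences below are invented for illustration:

```python
# Sketch of a quiz-avoiding choice menu (actions and consequences are
# invented for illustration). The full menu stays constant; actions are
# hidden only where they make no sense, and every choice simply plays
# out a consequence -- nothing is graded right or wrong.

ACTIONS = {
    "call a staff meeting":   "The team airs three complaints you hadn't heard.",
    "cut the project budget": "Two engineers quietly start job hunting.",
    "promote the team lead":  "Morale rises; the schedule slips anyway.",
    "ship the product now":   "Early customers report a serious defect.",
    # ...dozens more in a real simulation...
}

def available_actions(state):
    """Offer the whole constant menu, minus actions that make no sense."""
    actions = dict(ACTIONS)
    if state.get("product_shipped"):
        actions.pop("ship the product now")  # already done; nonsensical now
    return actions

def choose(state, action):
    """Show what happens -- no 'bzzzt, wrong!' response."""
    return available_actions(state)[action]

state = {"product_shipped": False}
print(choose(state, "cut the project budget"))
```

Because the menu is the same at every turn and every action yields only a consequence, there is no pattern of four options and a buzzer for the student to read as a quiz.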
The outcomes of actions in a simulated world have to be believable, or you might as well go back to lecturing.
Example: You want to teach students the dangers of running red lights. You build a driving simulator and when a student runs a red light, BAM, they get sideswiped by a tractor trailer from New Jersey.
Does the student buy it? Of course not. Is the student likely to change their actual driving habits? Hardly. The simulation was rigged. There's a right answer and if you don't do it, something bad happens. It's not like real life. In real life, you often get away with running red lights.
Example: You want to teach investment strategies. If the student creates a good portfolio, they get rich, otherwise they lose their shirt.
Would you trust real money to the lessons learned in such a simulated world?
Repairs: Reconsider the design. Is not knowing what might happen the real cause of failure? In the case of running red lights, everyone knows the danger. The real cause of failure is the belief that "it won't happen to me, the odds are in my favor." The learning obstacle is that in real life, most often you get away with running red lights, and, if you don't, the feedback (death) precludes learning from failure.
Sometimes, you can make the odds more believable by using real data. This could work very well in the investment GBS. Students create portfolios, given real conditions that existed in the past few years. What happens to these portfolios is then driven by what actually happened in the financial markets. Note that this will be an acid test of the advice you give. The student might do what you teach and lose a bundle. The GBS should deal with that openly and honestly.
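Driving outcomes from recorded data rather than a scripted "right answer" can be sketched as follows. The assets and annual returns here are placeholders, not actual market data:

```python
# Sketch: score a student's portfolio against recorded historical
# returns rather than a rigged right answer. The assets and annual
# returns here are placeholders, NOT actual market data.

HISTORICAL_RETURNS = {   # annual return per asset, e.g. 0.08 = +8%
    "STOCK_A": 0.08,
    "STOCK_B": -0.12,
    "BOND_C": 0.03,
}

def portfolio_outcome(allocations, returns=HISTORICAL_RETURNS):
    """Return the portfolio's growth factor given allocation weights
    (fractions summing to 1). Whatever the data says, happens -- even
    if the student followed the course's advice and lost money."""
    assert abs(sum(allocations.values()) - 1.0) < 1e-9
    return sum(weight * (1 + returns[asset])
               for asset, weight in allocations.items())

# A student who spreads money evenly across the three assets:
growth = portfolio_outcome({"STOCK_A": 1/3, "STOCK_B": 1/3, "BOND_C": 1/3})
```

The point of the design is that the outcome is whatever the data produced, including losses for sensible-looking portfolios, which is exactly what makes the simulation believable.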
The driving example is harder but the real data example might help. Use actual traffic patterns and videos of real traffic situations, so that, if the student gets in trouble, they'll at least believe it might have really happened. At the very least, they may learn that certain kinds of intersections and times of day are worse for this behavior.
Closure and reward are important in a GBS, but what counts as a reward in life doesn't always translate to a computer environment.
Example: One design proposed for teaching nutrition to young children had the children shopping for items for dinner. If they picked a balanced meal, they were rewarded with a picture of an ice cream sundae.
Not only did this design sabotage the focus on good foods rather than sweets, but what kind of reward is it to see a picture of an ice cream sundae? Eating a sundae is fun. Seeing one is silly. Ditto for getting simulated money, a simulated sports car, and so on.
Repair: When designing the GBS tasks, design for reward. If possible, create an artifact the student can take away from the GBS.
Example: In Broadcast News, students had to edit the text of a newscast for a major news event, fixing oversimplifications, filling in the gaps, and so on. At the end, they read the final text before a camera connected to the computer. The program then spliced their reading into a mini-newscast with music and logos. It also showed a network news broadcast of the same event.
Have characters in the simulated world express their appreciation. Have an "after-action" review and report that highlights accomplishments.