Is better learning ‘more efficient’ or is it about students actually wanting to be in class?

I was cleaning up some old image files on my computer over the weekend and came across this meme I tried to make about ten years ago. I don’t think I quite hit the language right, but, basically, I was trying to say that the more we try to make learning more ‘efficient’, the more we break the thing we were originally trying to do. I think of learning as a complex, deeply uncertain process that’s different (to some degree) for every student and (I think, critically) different for every teacher.

As I wasn’t quite able to make this joke work, I will now ruin it by trying to explain it 🙂

It's two containers of tennis balls, one with whole tennis balls and the other with tennis balls cut in half. The text says 'More time on task. More learning.'

Why is this a problem?

I think we can agree, at least, that we don’t all agree on what learning IS.

That’s troubling when it so happens that helping people learn is what you do for a living. One of the most important places we see this, I think, is when we look at the results of research in education. As I am wont to teach in my BEd classrooms, you can find research in education that will support any position. You can find hundreds of articles that say Learning Styles are ‘true’ and hundreds that claim the opposite. You can hear that direct instruction is better or that project-based learning is better… SCIENTIFICALLY. It’s all ‘better learning’.

So I teach my students that they need to decide what they think things like ‘learning’ or ‘better’ actually mean before they can choose the research they need to help them make decisions. If they see a piece of research that says “this approach leads to better learning outcomes”, does that mean that the student has done better on a memory test? Are they ‘happier’? Did they claim on a multiple choice survey that it made them ‘more engaged’?

‘Better’ is too often a synonym for ‘got higher grades’ or ‘remembered’ or ‘was more efficient’. I understand that there are people who do see remembering faster as better learning, but it’s not everyone. I’m not opposed to people remembering things, or to efficiency… I just don’t think it’s the most important thing. I value engagement over memory.

Certainty turns learning into a task

If we tell students what to learn, the best students just go ahead and do what they’re told. And that sounds great, right? As Beth McMurtrie writes in the Chronicle this week, it has some unfortunate side effects. If my job is to sit in a classroom, peruse a VERY specific rubric and hit all the targets on it, then doing what I’m told is what is going to make me successful. I’m learning that learning is something that I simply receive. No passion. No interest. Just the contractual obligation to give the answer expected.

There are all kinds of challenges with this, from bored faces in class to resistant students to the inevitable “just tell me what you want me to put in the assignment and I’ll do it” demand. Students become customers. The classroom becomes transactional.

Maybe the biggest concern for me is a lack of belief in nuance. If you think that someone else has the answers to the questions, you both start to think that you are dumb because you never know the answer AND, maybe, start to look for people who are convinced they have the answer. (And I don’t mean “this is the scientific consensus on this issue given this and this research”, which is totally fine, but people who argue that “this is true!”) People gravitate to the biggest megaphone.

GenAI replaces tasks that are algorithmic

And this is why GenAI is so powerful right now. These tools can produce text that appears to speak meaningfully about how much a student cares about their community. It can find the answer to your physics problem. If there’s a step-by-step rubric or a specific, generally agreed upon answer that you’re expecting from your students, it’s going to do it. It is, in effect, very good at producing those cut-up pieces of tennis ball. (To be fair, it’s a specific problem for creative writing as well, which it can often mimic reasonably well.)

What it isn’t going to do is what my bookkeeper does for me every six months or so when we get together to talk about my taxes. She knows me as a person and helps me organize things so that they work for me. Turns out people think that her job is going to get taken away by GenAI. From her perspective, the organizing of numbers is such a small part of what she does that it’s not really going to affect her bottom line. She uses GenAI, but mostly to train her staff to write professional emails. Her job is full of nuances, full of dealing with people and decision making – all tasks she isn’t going to turn over to an algorithm.

The more I think about it, the more GenAI is a clarion call telling us we’ve been giving away the farm for years. We all seem to think that other people’s professions are at risk from GenAI, but not our own. Being a professional is, in most cases, knowing how to weigh multiple, often conflicting values to make a decision in an uncertain situation. It’s about judgement. A professional answers the question ‘what should we do?’, not ‘what is true?’

We have lots of pedagogies that help us learn in ‘shoulds’, that are built for judgement and for helping us develop a tolerance for ambiguity. The answer to the question ‘what do we do about GenAI?’ is to first ask ourselves what ‘better’ means when it comes to learning, and build with that in mind. If better is ‘finish my word problem faster’, then that will send us down one path. If it means ‘students love learning and want to keep doing it’, then we go down another path entirely.

We mostly get to pick this. But we need to think about what we value first.

e.g.… (This was originally part of the post, but I chopped it out and added it here at the end in case anyone wanted to read it.)

Take a look at the way that we interpret program-level goals. If a capstone goal for a K12 system is something like ‘is a good citizen’, we’ve obviously got a wide range of ways of judging what that could mean. If a group of educational developers is charged with creating a way of ‘measuring’ that, there are some things that can work and others that are really hard. Something like ’40 hours of community service’ might be a measurement that people feel like they can do… hours being measurable. But something like ‘has an interest in their community’ might be a little harder. In practice, both of these create real challenges.

The forty hours, while certainly measurable, are really difficult to do in practice. If there are 500 kids at the local high school, getting all 500 of them an authentic community service experience is a bit of a nightmare. It turns out that kids from families that are already involved in the community can do it really easily; those kids who don’t have that privilege struggle – so they get whatever they can manage. Often a token experience. (Several students from a university I worked with complained privately about the ‘shoe sorting job’ that several of them got at a local shelter… it took about 30 minutes of the eight hours they were there, but the shelter staff had nothing else for them to do. This job, they told me, happened every year.)

The ‘interest in the community’ creates a different type of challenge. Let’s say you decide that you are going to get students to reflect on their community and what they could do to improve it. Every single student knows what they are supposed to say. Many 15-year-olds (and, frankly, many adults) don’t actually care about improving their community – for any number of reasons. Being forced to write an essay about it is forcing that 15-year-old to lie about their feelings about community in order to get the grade.

Neither of those activities is ‘teaching’ citizenship. They don’t help convince that 15-year-old of the value of participating in their community. They reinforce whatever privilege or perspective the child already has… in addition to teaching them to lie for a grade. If we can’t measure it, does that mean we shouldn’t do it? I hope not.

#ShrugCon – the technical backend for an online conference

In preparing for this conference, I did a fair amount of searching for a post about someone thinking their way through the technical requirements for an online conference. I didn’t find much – but that probably has as much to do with how bad ‘search’ is. In the spirit of someone learning from my mistakes, I thought I might jot down the decisions I made to try and pull #ShrugCon together. The first part of this is being written right before the conference, and I’ll add #PostNotes after.

I should note that while I’m using the University’s time and technology (primarily the Microsoft suite), I’m supporting the whole conference as one person with a budget of $0 (with the exception of some of my Office of Open Learning peeps reviewing the submissions). It started out this way because I didn’t want to jump into people’s schedules on a project that I wasn’t sure was going to succeed, and it turned into a bit of an experiment about running an online conference as a team of one.

I have a previous post describing #ShrugCon

Conference Website

The conference is being hosted on a single page of our uncertainty community WordPress site, hosted with Reclaim Hosting. I considered creating separate pages for more detailed information, but then realized I was creating more things to keep track of, and the new Twenty Twenty-Four theme has some pretty easy out-of-the-box theming that allowed me to tile things down the page. So. Conference website is just one long page.

Annotated Bibliography

I’m a big fan of this and would encourage everyone to do one for their online conference/community. I already had one set up on Uncertainty/Community, so I didn’t make it ‘for’ this, but it works great. It’s basically just using posts on WordPress with an ‘annotated bibliography’ tag and then creating a link to that tag’s page. I believe that the more we, as people interested in knowledge, gather together ideas in this way, the more we’re contributing to coherence on the web. I’ll post something on this later, but suffice it to say that I believe ‘open annotated bibliography = good’. Every time someone in the conference sends along an interesting link, I add it. After the conference, part of our debrief will be adding cool stuff from the sessions to the bibliography.

Poster presentations

I added this late in the process, but I kind of like it. It’s set up the same way as the annotated bibliography. It lets people we couldn’t fit into the time slots create pieces so they can still participate in the conference, share their ideas, and get a “CHECK” that they can put on their CV.

Newsletter

I’m using the free version of Mailchimp. Straight up, nothing fancy. This is for people to sign up to get updates and, hopefully, stay on as part of the community after the conference. The upside of this is that it’s easy to send and keep the emails for later. The downside is that it goes into the ‘newsletter’ stream in people’s inboxes, reducing the number of people who are going to open it. Open rate is… 60%?

Conference proposals

I set up three different forms using Google Forms.

  1. The first allowed people to express an interest in presenting/attending.
  2. The second was for contributing to the annotated bibliography.
  3. The third was to volunteer to participate in the after-party, where we’re going to try to pull together some themes/outcomes from the conference.

This was probably too much. I ended up having to pull people out of the forms to add them to the newsletter so I wasn’t emailing a bunch of different people different things. One form that went directly into Mailchimp probably would have been smarter.
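
If I were doing it again, it might look something like the sketch below: a minimal, hypothetical version of that ‘one form into Mailchimp’ idea, assuming the form responses end up in a spreadsheet exported as responses.csv and get pushed into a Mailchimp audience through its Marketing API. The API key, server prefix and audience id are placeholders, not my real setup.

    # Rough sketch: read exported form responses and add each email to a Mailchimp audience.
    import csv
    import requests

    API_KEY = "your-mailchimp-api-key"   # placeholder
    SERVER = "us21"                      # the data-centre prefix at the end of the API key
    AUDIENCE_ID = "abc123"               # placeholder audience (list) id

    url = f"https://{SERVER}.api.mailchimp.com/3.0/lists/{AUDIENCE_ID}/members"

    with open("responses.csv", newline="") as f:
        for row in csv.DictReader(f):
            resp = requests.post(
                url,
                auth=("anystring", API_KEY),  # Mailchimp uses basic auth; the username is ignored
                json={"email_address": row["Email"], "status": "subscribed"},
            )
            # a 400 here usually just means the address is already on the list
            print(row["Email"], resp.status_code)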

Registration

So… I’m using Microsoft webinars for that. I’ve never used them for anything before and they’re… not bad. It’s part of Microsoft Teams, and you can set it up as a registration system.

It's a screenshot of a conference registration page in Microsoft Teams webinar.

I think it looks OK. It’ll handle authors and author bios and stuff. It allows people to register and sends them a meeting reminder. I’ve heard a couple of people suggest they had a few problems, but mostly people seem to like it. It’s easy enough to update, and it allows speakers and organizers to have different access, which means (I think) that my speakers don’t get an update every time I change the details on an event.

I set up the presentations in blocks of two: the 10am presentation and the 11am presentation are in the same ‘webinar’. I’m still not sure if that was a good idea; I’ll let you know after the conference. I’m going to do a conference survey and will include a question about how people liked this.

Direct communication with speakers

I thought about setting up a Teams meeting with all 20 speakers in it, but I decided to just go with email. It’s been fine. I kind of didn’t want people to be too planned… and email is just clumsy enough to discourage collaboration. 🙂

Who’s the audience for this post, dave?

This is the question I’m left with after writing it. In some sense, this post is for future dave, so that he remembers what past dave did. If there’s someone else out there who does something similar, I’d love it if you left a comment and told us how it went.

#ShrugCon – An emerging conference to lay groundwork for uncertainty

Well, #ShrugCon is only a month away and the response so far has been amazing. We’ve got over 250 people receiving the newsletter, and over 100 registrants for the conference. It’s really more than I expected, and it cranks up the responsibility a bit. We’ve got a chance to do something cool – so here’s how I’m thinking we’ll do it. Let me know what you think.

What’s happened so far

About two months ago, I wrote a blog post vaguely framing what I was interested in talking about – uncertainty in education. That post included a way to sign up to a newsletter that would allow folks to follow along with the conversation. That newsletter was connected to a mailing list that I gathered the last two times I tried to do this. It all adds up to a pile of interesting people.

About a month ago, I wrote a post on LinkedIn and posted links to the conference website in various social media spaces (LinkedIn is doing most of the work). It loosely described a bit more about why we might want to do this.

Why: We think there are lots of reasons to be talking about uncertainty right now.
1. We’re hearing from professional programs that students are struggling in their placements when asked to deal with problems that don’t have right answers. They’re struggling with uncertainty.
2. Those GenAI programs sure do seem to be getting all the ‘right answer’ problems right. People are going to be turning to them more and more. We think that we need more practice using our judgement.
3. We’re not convinced that learning how to get answers ‘right’ is necessarily good preparation for those situations where your values and your judgement are going to be necessary for making decisions.
4. Dealing with uncertainty in the classroom is more fun :).

The conference website described four ways to participate in the conference:

  1. Submit a pitch for a 5-10 minute presentation on uncertainty in education
  2. Register to participate in the conference
  3. Submit a link to the annotated bibliography
  4. Submit a request to join the ‘after-party’ (more on this below).

We got forty-ish submissions for presentations (it’s hard to count, as some people suggested things that could be interesting but didn’t want to present, and we got a few emails from folks who said they were interested in doing ‘something’), and the content of those submissions has allowed us to frame a conversation that will, hopefully, feel like an important experience for participants.

The list of submissions is pretty interesting. We’ve got someone looking at wicked problems through an Indigenous lens. Someone else looking at uncertainty in academic publishing. People thinking about compassion, about ill-structured problems, preparing learners for uncertainty in the workforce, interdisciplinarity… so many interesting conversations.

We’ve got them mostly pulled together. We’re trying to find affinity conversations, deal with time zones, and be as fair as possible to as many people as possible.

What the conference will look like

July 16th and 17th will be the discussion days. Each session will have two (maybe three) speakers.

  1. These sessions will begin with a brief framing of a social contract around the discussions. Collaborative good. Mean bad.
  2. The first speaker will spend 5-10 minutes talking about their topic (closer to 5 than 10).
  3. We’ll have a facilitated conversation about the topic. We’ll encourage the posting of links to the chatroom. We’ll try to find places of connection and divergence. We’ll track our ideas. (The speaker is a participant; there will be a dedicated facilitator.)
  4. The second speaker will do their thing.
  5. Second facilitated conversation, same as the last one except that the connections and divergences could include both topics or just the latter one.

We’re building. We want people to bring their ideas AND their links. We want folks (should they want it) to have their names attached to their ideas so that when we pull them together in the after-party, we can attribute them.

The After-Party

On July 18th and 19th we’re going to host a hybrid after-party. During that time we’re going to be pulling together all the ideas we can manage from the previous two days. I’m not sure what’s going to come out of it, but I have some hopes.

  1. I want to add to the annotated bibliography
  2. I’ve got a couple of excellent artists coming, so I’m hoping we’ll have some graphical representations of the outcomes of the conference
  3. An overview document of the ‘lessons learned’
  4. Some kind of white paper that can form the basis for future discussion for uncertainty in education
  5. A list of things ‘for future research’

The future?

Well… it’s hard to say. With the response that we’ve gotten so far I think we’ve got a conversation that people want to have at least once. As long as this year goes ok, I’m hoping we can run another next year. Maybe we reach out for funding for research? We’ll have to wait and see.

Moving beyond ‘solving’ problems as meaningful learning – a conference #ShrugCon

A conference about uncertainty which might also be about the left-overs after problem-solving.

I have spent the week watching my youngest do two types of ‘problem solving.’ I got her one of those ‘daily calendars’ at Christmas – one of the ones that has a puzzle to solve every day. She’s getting better at them, and while I was waiting to get asked to help with one… she really didn’t need it. She’s got the trick to doing them. The second kind of problem solving she was doing was quadratic equations. She does pretty well with it, but sometimes gets caught up in one method (her teacher has very specific methods) and then takes a few minutes to see that the textbook has just switched the problem around. She’s mostly got the trick of it.

ChatGPTo, it turns out, mostly has the trick of both of them. The algorithm is amazing at the puzzles and… mostly reliable for the word problems. (it got some of the variables mixed up… I asked it why, it said it just got ‘confused’)

Problem solving, it seems, is something we can get the trick of. It’s also something that new GenAI systems are doing extremely well. As I explained to a Social Psych grad student last week, employers are not going to need as many ‘problem solvers’ going forward. If a problem can actually be solved, an algorithm is probably going to be able to do it.

Turns out, if we follow work done at RAND and by the folks below, our method of problem solving was learned FROM early digital tools.

A little history of the kind of ‘problem solving’ I’m talking about?

In their 1958 article, Newell, Shaw and Simon suggest that we need a theory of problem solving so that we can “explain how human problem solving takes place.” They’re looking to describe a method that they’ve learned from digital computers (in this case RAND JOHNNIAC) because the digital thing can “be induced to execute the same sequences of information processes that humans execute when they are solving problems” (p.153).

The problem-solving literature that comes out of this thread has a long history and is still with us. Simon and Newell’s thoughts about ‘well-structured problems’ (1970) and their obsession with using chess as an exemplar for ‘problem solving’ (Ensmenger, 2011) keep showing up in the literature. How can we solve the problem faster? How can we solve the problem more reliably? How can we beat the chess player/machine? How can we think more like a chess player?

In their ‘impacts on education’ section of that same 1970 article, S&N claim that they know very well that there are two kinds of learning – rote learning and meaningful learning. They suggest that we have known for a long time that there’s a real difference, but we’ve never been able to talk about ‘meaningful learning’ before. They propose, as you might expect, problem solving as meaningful learning.

What does it mean to ‘solve’ a problem?

In his 1973 article, Simon describes a ‘well-structured problem’ as one where the question, the process to solve the problem and the solution are known or knowable. Solving the problem, then, is to

  1. Take the question that has been given to you
  2. Use the process for problem solving you’ve been taught
  3. Check if you’ve got the right answer against the answer that the person who has ‘structured’ that problem has for you. (back of the textbook, in my daughter’s case)

This is the standard trope of problem solving in the education system. It’s also the thing that many people like to study in educational research. “Did the kid get the right answer?” It can only be the ‘right answer’ if someone else has cleaned up all the messy stuff. Someone built a problem to solve, someone made a puzzle, someone developed the formula…
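
To make the question/process/solution shape concrete, here’s a toy sketch (mine, in Python, not anything from Simon) built around my daughter’s quadratic equations: the question is handed to you, the procedure is the one you were taught, and the answer gets checked against the back of the textbook.

    # A toy 'well-structured problem': given question, prescribed procedure, known answer.
    import math

    def solve_quadratic(a, b, c):
        """Step 2: apply the taught procedure (the quadratic formula) to ax^2 + bx + c = 0."""
        disc = b * b - 4 * a * c
        root = math.sqrt(disc)  # assumes real solutions, like the textbook problems
        return {(-b + root) / (2 * a), (-b - root) / (2 * a)}

    # Step 1: the question someone else structured for you: x^2 - 5x + 6 = 0
    a, b, c = 1, -5, 6
    # Step 3: check against the answer in the back of the textbook.
    back_of_textbook = {2.0, 3.0}
    print(solve_quadratic(a, b, c) == back_of_textbook)  # True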

Isn’t that the hard work already done?

Newell and Simon address this issue. When asked whether the solving of a problem is implicit in the designing of a problem, their response is simple and direct. “Observation of subjects’ behavior over a sequence of chess problems, cryptarithmetic puzzles, or theorem-finding problems shows the argument to be empirically false.” (1970)

You can have solvable problems and puzzles and theorem finding; I’ll take the ‘leftovers’

In 2001, Simon (yup, same guy – he won a Nobel Prize along the way) said that ill-structured problems are what’s ‘left over’ from well-structured problems. They are the things that don’t fit into the nice categories of question/process/solution. This conference is about the leftovers. It’s about the things in life/learning that aren’t tidy. The ones that no one can confirm are right (there are many ways to confirm that an answer is wrong).

A well-structured problem almost never happens to me in real life. At work, as a parent, as a partner, as a citizen I am almost never in a position where I’m given a clear question that isn’t messy in some way, a process that I can follow, and a way for someone to say ‘yeah, you did that exactly right’. And when I am, I can mostly just use a GenAI tool to get there.

The things that are meaningful, to me, are about real life. They aren’t about chess, they aren’t about puzzles, they are about how each of us faces the uncertainty around us. With all these GenAI discussions swirling around I’m even more interested in how we learn when things are uncertain.

This conference is about how we teach and learn in that uncertainty.

References

Ensmenger, N. (2011). Is chess the drosophila of artificial intelligence? A social history of an algorithm. Social Studies of Science. https://doi.org/10.1177/0306312711424596

Newell, A., Shaw, J. C., & Simon, H. A. (1958). Elements of a theory of human problem solving. Psychological Review, 65(3), 151–166. https://doi.org/10.1037/h0048495

Simon, H. (2001). Problem solving. In The MIT Encyclopedia of the Cognitive Sciences (MITECS). The MIT Press. http://cognet.mit.edu/erefs/mit-encyclopedia-of-cognitive-sciences-mitecs

Simon, H. A. (1973). The structure of ill structured problems. Artificial Intelligence, 4(3–4), 181–201.

Simon, H. A., & Newell, A. (1970). Human problem solving: The state of the theory in 1970. American Psychologist, 26(2), 145. https://doi.org/10.1037/h0030806

#ShrugCon OR In Search of a Pedagogy of Abundance: Preparing Students for an Uncertain Future (present?)

We care about learning… but what does that mean today? I have this suspicion that it’s about preparing people to deal with uncertainty… I’d like to know if you agree with me AND if you do, what we might do about it.

Dates

  • July 16-17 Online for the stories/discussions 
  • July 18-19 Hybrid after-party for creating usable artefacts from the discussion

Join our Uncertainty Community Newsletter for updates

We have an abundance of information and mis/disinformation. We have an abundance of content and generated content. An abundance of connection. We can whip our way through tasks that used to take us hours and create artefacts between sips of coffee. We can, by reaching into our pockets, solve a math problem, find the lyrics to the song that is playing in the room we’re in, or read a journal article on Housemaid’s Knee. Our information landscape has fundamentally changed.

What does it mean? What do we do? How do we adapt? 

Enter #ShrugCon? 

This is not a ‘shrug’ as in ‘I don’t care’ but rather a ‘wow, that’s a hard question without a simple answer’. It’s a commitment to addressing the challenges that we are facing in education without holding on to a random approach that we are comfortable with or narrowly focusing on one version of ‘actionable science’. We’re hoping to host a discussion. You’re invited to come along and tell a story about where you’re at in this conversation. You’re welcome to come and participate in the discussions that result from that story or, if you like, you’re welcome to just listen in. 

This is a 4-day online/hybrid event. The first two days are for stories. How has this abundance changed how you learn, how you think about learning, how you help other people learn? We will have 2 days of facilitated discussion around various themes in an attempt to come to grips with some of the ways that information abundance affects learning in our formal, informal and day-to-day spaces. The last part of the event will be an after-party, hosted at the University of Windsor, where we will gather information from the previous activities in an attempt to create infographics, discussion papers and other outputs from the ideas generated by the event.

I’d love to chat 🙂

We can’t teach humility in our schools, but we really need to.

This is my first post talking about some of the ideas in my new book Learning in a Time of Abundance. The book is the result of almost 20 years of writing, talking and thinking about how the Internet changes what it means for us to learn. The book is not so much about how we learn IN schools, but rather what we need to learn as citizens in order to deal with the world we’ve found ourselves in. It just so happens that much of our training (overt and hidden) about what it means to learn was formed by our schools – so they come up a fair amount.

Our schools are about getting right answers. If you get the answer right, you’re smart. People with good grades are smart people, and they get good grades by giving the answer that the teacher wants. I understand that this seems like an obvious claim… but I think it’s important. We are rewarded, in schools, for being obedient and giving the answer the teacher wants us to give.

Uncertainty

This is fine when what we’re looking for is the name of a given molecule or the capital city of Japan (though it used to be Kyoto – I was today years old when I finally noticed it’s an anagram of Tokyo). There are a whole other set of things, human things, that are not like that. Many (most?) of the things we need to do to get through life don’t have a single answer. When we teach those in schools, even in more creative courses like music or art, we often teach them as if there were right answers. We reward students for getting it right.

The hidden curriculum, then, is that when someone asks you a question, there is a right answer. That’s what we learn ‘learning’ is. Being smart is about figuring out what that right answer is, and then giving it. That’s what we learn in school. We never learn to say things like ‘yeah, I’m not really sure I’m in a position to answer that question’ or ‘yeah, I can kind of see both sides of that, and can see it working either way’.

This shows up in my classes all the time. I will challenge my students with a topic that is split in the literature, we’ll read both sides, and, at some point, a student will say “which one is right?” I typically shrug my shoulders and say something like “it depends on how you look at it.” Then they feel like they were tricked. We always end up in some conversation about how it doesn’t make sense to ‘learn it’ if it’s not ‘right’.

(There’s a whole argument here about teaching people facts as background knowledge when they’re novices and teaching them deeper concepts when they’re experts that I will discuss in a later post, but suffice it to say that most of the time, we are taught most things as if they’re true.)

The purpose of humility

We are not the first to need humility in the way we look at hard conversations. Philosophers have been suggesting for millennia that we need to be humble. Socrates, after being called the wisest man in Athens by the Oracle, responded that he knew nothing… thus confirming that he was the wisest man in Athens.

That humility, giving space for the possibility that there are other perspectives that aren’t yours, that you don’t understand, that you don’t know about, gives room to the people around you. It is also, I think, the only response to what Rittel and Webber called wicked problems. Some problems are so huge, so complex, so intertwined, that you can only ever work on part of the problem. In some cases, you can only make something better by making something else worse. You can decide to approach those problems with a wrecking ball, ignoring your impact on the whole ecosystem, or you can approach them with humility.

It matters in everyday life just as much. We tend to panic when we are confronted with uncertainty. When someone else’s beliefs don’t match ours, we can see that as a threat. If there’s only one right answer, and they think they have the right answer, then that must mean that they think our answer is wrong. Leaving room for uncertainty, for ambiguity, for context, can help us understand our issues and each other better.

Weirdly, we do teach humility in PhD programs. All the way up to a PhD, we are taught that things are right/wrong, and then, if we ever make it to the top of the formal learning process, people are like “yeah, well, it depends”.

That means that basically all of us, on basically every topic, have been taught that things are right or they’re wrong. Maybe worse, we’ve been taught that someone else has decided what that right answer is… and our job is to just follow along with whatever it is.

This is the central concept of how marketing works – make people believe that something is true.

We have too many things coming at us, too much information coming in, too many complex problems to do something about, to keep believing in right answers.

Humility, then, is a key literacy in confronting abundance.

(note: I do believe there are wrong answers… just not necessarily always a single one that is right)

Why I’m advising that people stop assigning essays, and it’s not just because of AI

I’m closing in on 20 sessions I’ve done in the last year that were, in some way, related to the issue of generative AI. Actually, it’s probably more than 20. They have moved from ‘omg, there’s this thing that just came out’ to ‘yeah, we really do need to sit down and have a long chat about what we’re trying to get done in higher education’. And, sometimes, that conversation is a really positive one. A humane one. We’re going to follow that tone here.

I don’t hate essays

I cut my teeth in higher education classrooms teaching academic writing. I taught the sandwich model, I did practice five-paragraph essays, process writing – all the classics. I loved the challenge of getting students excited about the writing process, teaching them to support a position, and giving them the tools they’d need to conquer the big essays that they were going to face in the rest of their university careers. When students walked into my classroom on the first day, they were confronted with this picture on the projector screen.

My position was that I had never seen a student remain neutral about the question – “Is this art?” This was the first activity in the first class.

Note: Turns out, Duchamp may not have made this. Thanks to Prof Rebecca Ferguson for pointing that one out.

We have to learn to write essays because we have to learn to write essays

In the last couple of years, however, my belief in the essay has been taking some hits. I started using it as an example of an assignment type that was tethered to the system. K12 system teachers would tell me that students needed to learn to write essays so they could write essays in university. Undergrads are told they need to learn to write essays so they can write them when they are in grad school, etc.… I mean, what percentage of the people who learn to write an essay ever reach the point where they actually need to write one for some useful purpose? I mean… I did. But I’m probably the only person I grew up with who ever wrote an essay because they wanted to.

It seems like a lot of training to develop a skill that very few people are ever going to use.

But essays teach all these other skills!

So here’s the part that’s been coming up since the GenAI conversation. I’ve been using this Chronicle article to discuss how students are using GenAI to help them write essays. The author, a student, worries that students are going to lose the ability to do critical thinking because the AI is going to be doing it for them. All the student needs to do to write an essay amounts to a little grunt work.

So what are the skills that essays are meant to teach, and, if they ever worked to teach those skills, do they still work in this era? I grabbed a random list from a random website:

  • Analytical Skills. (GenAI is going to cover that)
  • Critical Thinking. (And this one)
  • Creativity. (And this one – I’m not saying that GAI is creative, but rather that because the choices get made for them, the student doesn’t need to be creative to write an essay. I still think students desperately need to develop their creativity)
  • Ability to Structure Knowledge. (I mean, maybe we’re still doing this?)
  • Keen Eye for Details. (Grammarly has this one covered)
  • Informed Opinions. (Does it?)
  • Information Search Skills. (See below)
  • General Verbal Intelligence. (GenAI, Grammarly)

A quick look at this list, at least, suggests that the tools we have available are going to do a fair amount of the work that we think the essay is doing. And this doesn’t even count the fact that for somewhere between $10 and $50 a page you can just hire someone to write your paper for you. John Warner has been talking about this for years, see this post and his book.

We need all those skills, or at least most of them, but I don’t think that the essay is doing that for us anymore. I want to teach creativity, I just don’t think the essay supports that like it used to.

Search is the key. It’s all about search.

But this has been the realization for me. Essays have not been doing the thing that I actually thought they were doing since we started having effective online search tools. I used to assign essays for the same reason I used to assign writing reflections for academic papers. I want students to engage with the material. I want them to learn how to identify valid and credible information and learn how to apply it to problems they are facing. I want them to engage in the history of knowledge. With other thinkers in the field that we’re studying.

Here’s the thing. I’m starting to think it really hasn’t been happening for 20 years.

In teaching the SIFT method to my students this term, we ended up in a bunch of conversations about how we go about finding information. I heard one student say ‘I found a quote to support my argument’ and it hit me. I asked them how they’d learned to search/research, and it went something like this. Have an argument, usually given by the instructor, do a search for a quote that supports that position, pop the paper into Zotero to get the citation right, pop it into the paper. No reading for context. No real idea what the paper was even about. Search on Google or the library website, Ctrl+F in the paper, pop it in the essay.

Compare this, if you will, to my undergraduate experience in the library. Go to the card catalogue, write down a bunch of possible articles/books on a piece of paper, go around the library and find said resources, settle in at a table to go through them. I had to read them. I’m not saying I wanted to; there was no other option. I had to engage with the material in order to find something that I could base my paper on.

That experience is gone.

The essay, I’m arguing here, no longer forces students to learn how to research. I’m not saying a few students don’t still do it; I’m saying they don’t have to. As a graduate student in one of our Humanizing Digital Learning courses said to the rest of the class, “You’d have to give me a reason not to CTRL F, ’cause I don’t see it”.

Teach Search

And, because of this, I now teach search. I teach students how to write good search strings to get varied responses. We explore how different searches work across different systems. We talk about the biases researchers have, about how to find facts, but also how to find advice when you don’t know what you’re doing. We talk about the humility necessary to use Internet search to learn. If you don’t have the skills to evaluate something, you’re going to struggle to get wisdom you can use from the web, or from ChatGPT, or wherever you’re going to find it.
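
To give a flavour of what I mean by ‘good search strings to get varied responses’, here’s a toy sketch (the topic and operators are examples I picked, not a prescribed list) that takes one question and turns it into deliberately different queries, so the results don’t all come back from the same corner of the web.

    # Toy search-string exercise: one topic, several deliberately different queries.
    def query_variants(topic):
        return [
            topic,                              # the naive first search
            f'"{topic}"',                       # exact phrase
            f"{topic} criticism OR debate",     # deliberately go looking for disagreement
            f"{topic} site:.edu",               # limit to one kind of source
            f"{topic} -site:pinterest.com",     # exclude a noisy source
            f"{topic} before:2015",             # see how the conversation has changed over time
        ]

    for q in query_variants("learning styles"):
        print(q)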

I’m increasingly starting to think that we need to re-evaluate what the basic epistemic skills are that we think people need to make meaning with all this abundance and all the GenAI out there. I think everyone, in every field, might want to devote some serious class time to how we can find valid and credible information when it comes to facts, but, maybe more importantly, when it comes to things that aren’t about ‘right and wrong’.

Every one of these students is going to be a voter.

I don’t think that the essay is teaching these research skills anymore, and, if anything, we need them more than ever.

Further reading

I’m never Assigning an Essay Again – John Warner

A letter to my students about why I disagree with Paul Kirschner about the Failure of Constructivist/PBL/IBL etc…

Context – my students learning to consider the viability of Learning Styles

Had a great time with my ed tech pre-service students last week. We were learning about searching (the web) and research and talking about how to be effective in learning what we need to know about tools and approaches that we come across. My edtech courses have never been particularly content-focused, but after some discussions with Tom Farrelly this summer, I’ve converted them almost entirely to teaching the literacies I think students need to discover what they need to know based on their own values.

The first hour of the class was focused on looking at the learning styles information on the web and comparing it to existing research. The vast majority of my students come into class believing in learning styles and, for many, it’s the only educational theory language they are comfortable using. Our first group read was “The Myth of Learning Styles” by Riener and Willingham. My purpose in choosing that particular article is that, while I tend to agree that the concept of learning styles has serious limitations, at least in Willingham’s case, I don’t tend to be on… his side of education. He is more invested in memory than I am and thinks that expert learners are people like chess players. He’s a huge figure in the cognitivist literature dealing with education. I used him because I wanted students to understand that research comes from a context, and finding out about that context can help you understand what a person means by words like ‘learning’.

Look at the intersection of memory and chess. The ability to remember every pattern on a chess board is going to be hugely important for people trying to be good at chess. While there are LOTS and LOTS of potential patterns, there are a limited number, and chess has clear rules about winning and losing. Much like the other examples Willingham uses, like computer science and music, we can understand how memory is going to be hugely beneficial to people working at a high expert level in those fields.

That’s not me. It’s never going to be me. I’m never going to be a world-class expert in a field like chess or computer science. I would also argue that almost none of my students will be either. The important question, I think, is what things we do value preparing ourselves for, and whether our approach to teaching best prepares students to do that.

So when I offhandedly suggested to my students that there is a whole field of education that is committed to that kind of work, they quite rightly asked for the research. 🙂 A review of that literature is out of the scope of the course I’m teaching, so on the off chance that some of them are interested, I thought I would put a quick article breakdown here of one of those pieces of work.

Why Minimal Guidance During Instruction Does Not Work: An Analysis of the Failure of Constructivist, Discovery, Problem-Based, Experiential, and Inquiry-Based Teaching

This article has been cited over 10,000 times according to Google Scholar, so it’s safe to call it influential. I’m using it here because it allows me to point to some of the patterns I’ve blogged about here before, all wrapped up in a nice tight package. If you’re a constructivist, I encourage you to read the article; I think it’s a nice introduction to what the constructivist haters think. If you love Kirschner et al., I’m happy to engage in a conversation, but it will have to start with our lack of shared epistemology. I find that conversation fascinating; I don’t find the ‘but science!’ conversation to be as fascinating.

Minimal guidance

Here’s an easy one to start with. The citations provided tying constructivist pedagogies to ‘unguided’ or ‘minimal guidance’ instruction are not, I would suggest, sufficient to justify the use of the term. My constructivist classrooms are VERY guided, though it’s true that I’m not handing over tons of content for people to memorize. I see the expressions ‘unguided’ and ‘minimal guidance’ as a misrepresentation of what constructivism is about.

A good reference here is the Mayer article (firewalled) cited throughout. The article, from 2004, suggests that totally unguided instruction is not the best way to have people accomplish defined tasks. Totally agree. I have never found a problem-based teacher, for instance, who is providing ‘totally unguided instruction’. Sure. People aren’t going to discover an algorithm for solving a math equation by just hanging around some numbers. Agreed.

The chess argument

I’ve made a few comments about chess above, so I won’t go over it again. Simply put, chess is a game that you can win. Most of the important decisions people make in their lives aren’t games with rules that show how to win. I think that makes it a suspect example for helping people become learned humans. Same for solving math problems. Same for computer programming. They are important niche skills, certainly; I’m not convinced they are the basis for learning as a human. Nathan Ensmenger’s article on chess in education is excellent.

Problem solving skills

Much of the cognitive argument in the article is around what works best for ‘problem-solving skills’. This is where your values come into play. Are we, on the whole, teaching ‘problem solving skills’ in our education system? You might see it that way. When I look at the regular life of a regular person there are certainly problems that are solved. I can fix a leaky tap. I can, to use the example in the article, cross a road without being hit or do a math problem. Most of the things I decide on in a given day, however, are not these kinds of things. Not in my job. Not in my family life. Not doing construction in my attic. There’s no one to tell me I’ve done the work right. I am always choosing between a variety of intersecting and often conflicting rules, values and implications.

For me, constructivism is not about teaching people to problem solve the MOST EFFECTIVELY. It’s about learning to confront uncertainty. It’s about learning to ask a question even when it’s not clear what that question should be. It’s about deciding when you don’t have all the information. It’s preparation for life.

So I don’t consider constructivism a failure if it’s not the best preparation for problem solving. Solving problems (getting the right answer) is not the thing at the top of my list in my classroom.

Novices and experts

This one is trickier and one I struggle to explain. The argument the article seems to make is that because novices don’t have loads of information in long term memory, it’s harder for them to use that information to do the work. Putting aside that ‘the work’ that the article wants us to do is solve problems with right answers, I have some other concerns.

Most of us will always be novices at almost everything. The pathway that seems to be implied here is – master the content – then you can use the higher order thinking as an expert. But most of us will never be experts at what we’re learning. Most of my students will be English teachers or math teachers but not ‘edtech experts’. If I give them information now, most of them will never do enough work to become an edtech expert, which is what would allow them access to the higher order conversation. I’ll be preparing them with information from 3-5 years ago, for a career 3-5 years in the future. Information that will continue to be out of date as we go forward.

My view of constructivism reaches for a modified guided expert approach. Sure. If I say ‘hey, go out and evaluate this math software for the classroom’ with no support, it’s going to be terrible for them. But going through these guided approaches, where I spot them a few questions, do a lot of iterative feedback, and allow them to develop their skills, gives them the chance to get some of those expertish tools that can help them later.

Final notes

The upshot of my concern with this research is that it reduces teaching to ‘helping people solve problems’. It also sets up a hierarchy of learning where people get basic instruction now and expert instruction later, even though the vast majority of us never make it to the later part. I’m not interested in populating the world with more excellent chess players; I’m more interested in compassionate citizens who can engage in difficult discussions in ways that help us work through the challenges in our society.

Teaching with AI checklist

I pulled together my notes from the last 9 months of doing sessions on Generative AI and compiled them yesterday. Shout out to Nick Baker for adding some things, editing some things, and overall making it nicer to read-ish.

Every time I approach this issue I keep thinking that so much more, and probably a lot less, should be said. Every time I meet with a group of faculty or teachers on this issue, we go through a few phases:

  1. Boredom, kinda, as I explain what generative AI is.
  2. A bit of ‘yeah, this doesn’t apply to me, my courses…’
  3. A demonstration where I take their actual assignments and complete them in 30 seconds using generative AI.
  4. Sadness.
  5. And then, hope. Hope when they realize that the only solution is good teaching.

That is in no way meant to reflect a statement about all faculty or teachers. I only really get the ones who care about teaching and their students.

Anyway… this is what I’ve been telling them.

Introduction

Generative AI has already changed education. Whether we realise it or not, every student has been forced to make a decision about whether they are going to use AI to generate text, videos or images. Of particular importance to those of us in the teaching profession, we have lost our ability to make students write their own texts when we are not watching them. Regardless of your position on the inclusion of generative AI in the classroom, this is likely to have a profound impact on your classroom. 

The following checklist is emergent. We will continue to add to it as situations develop and new approaches emerge. 

Dave Cormier and the Office of Open Learning, UWindsor.

Have I included my own stance regarding Generative AI in my syllabus?

There is no agreed-upon way for people to be handling these systems right now. So, whether you’re talking about Chegg or ChatGPT, be clear in your syllabus about how you expect students to use it (or not). This will not, on its own, stop people from doing things, but it will at least make it clear for someone who wants to do the right thing. 

  1. Tell them what you would like them to do
  2. Give them tools that you think are appropriate
  3. Find ways to incorporate ethical usage of these tools into your classroom teaching practice

Links

Have I explained what counts as engagement in my course and explained why I want students to do the work that I am asking them to do? Have I added an explanation to each of my readings/assessments explaining to students why it’s important and how it is connected to the learning outcomes?

One of the ways of addressing students’ illegitimate usage of Generative AI tools is to explain to them why you want them to do the work laid out in your syllabus. We have, historically, forced students to do their homework by awarding them grades for completion. If students are doing their work to ‘complete’ it, instead of being driven by actual interest, they are going to be far more likely to find ways to complete their work without having to learn anything.

  1. Encourage students to find ways to be interested in the work in the classroom
  2. Share your own reasons for finding the material interesting
  3. Find ways to highlight examples of students performing in an engaged manner

Links

Have I considered why I was using the take-home assessments affected by generative AI (e.g. right-answer questions, long-form text like essays)? Can I replace them with new approaches that serve the same purpose?

There are many good arguments out there for the writing of essays and other long-form writing tasks. They promote deeper thinking, give students more space to construct critical arguments, and have strong disciplinary connections in some cases. They are a common means of demonstrating a student’s developing set of research skills. In the past, we were often able to assume with reasonable confidence that essays and extended writing were a sound way to be assured that students were developing those skills. It has always been possible to pay someone else to write your essay, though, and with the new tools available, and the relatively inexpensive rates available at essay mills, there is no longer any guarantee that any student is doing these things themselves. 

  1. Consider using point form responses submitted in class
  2. Consider deconstructing the essay into its components (an argument assignment, a research assignment) in ways that never lead to a completed essay

Links

Are there places where I am trying to ‘cover’ content that I can remove and allow for deeper work on more important topics?

There are many reasons that can lead to us needing to ‘cover’ certain kinds of material in a classroom. It could be that we are mandated by accreditation bodies, or that there is a course deeper in the degree that is counting on us to develop some foundational knowledge or skills. But this is not always the case. Many of us inherit the courses we teach and don’t entirely understand why a given topic is included in the course. Teaching less, and teaching more deliberately, allows us the time to delve into the nuances of a topic.

Links

Have I provided enough time to allow my students to unlearn old ways of doing things before they can take on the new ones that I’m presenting?

The abilities that come with generative AI will likely lead to some changes in your course. It is critical that students get time to learn these new processes, so that they know what it means to be successful in your course. Skills that may have been valued when they were in high school may no longer be as important, and the changes made by one faculty member may not work in your class. Give students time to make those adaptations so they have their best chance at success.

Links

Have I made ethical decisions about student data such that the assignments and activities in my class don’t require students to give away their own personal identification or their work to outside companies?

Each of the digital tools that we use in our classrooms takes different amounts of personal information and intellectual property from students. We can ask our IT departments for information regarding the usage of student data in our institutions. The guideline is simple: treat our students’ data the way we want our own data treated.

Links

Have I reviewed how my field is being affected by the web and AI generation? If it’s significant, have I included this in my course?

Generative AI is going to have vastly different impacts by field. Reach out to your colleagues across your respective fields and get a sense of how AI is impacting their day-to-day work. Many disciplines have started to collect and share ideas within their communities of practice. Some professional associations have also provided their guidance. There is much still to learn as the use and capabilities of these tools evolve. 

Links

Have I incorporated new best practices for finding/evaluating knowledge in my field that take AI generated content/marketed content into account? (e.g. Prompt engineering exercises)

As our knowledge work is increasingly mediated by algorithms like search engines and text generators, it’s vital that we learn how to best find, sort and evaluate information from these systems. While there are certainly common good-practice approaches that apply across fields, some approaches (e.g. using curated databases) are going to be discipline specific. Incorporating activities that help learners use, manipulate and trick the algorithms into bringing back the results they need, results whose reliability they can be confident of, is essential to developing 21st century literacies.

Given the abundance of information available, good and bad, often the most important literacy any student needs is to be able to sift through information to find what is most true or most useful. Building those activities into our courses might be the most important thing we can do to help students in their futures.
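
One concrete version of a prompt-engineering exercise (sketched below; the model name, prompts and library are my placeholders, not a recommendation) is to have students send the same question to a text generator twice, once naively and once with constraints, and then compare what got added, what got lost, and which answer they could actually check against other sources.

    # Sketch of a prompt-comparison exercise using the OpenAI Python client.
    from openai import OpenAI

    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    question = "Why did the concept of learning styles become so popular?"
    prompts = {
        "naive": question,
        "constrained": question + " Answer in under 150 words, name two competing "
                                  "explanations, and say what evidence would change your answer.",
    }

    for label, prompt in prompts.items():
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
        )
        print(f"--- {label} ---")
        print(resp.choices[0].message.content)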

Links

Have I confirmed that the changes I’ve made to my syllabus have not created an unfair amount of work for me or my students?

Anytime we rework our syllabus, there is a chance that we add more work than we had in our previous versions, sometimes without noticing. Make sure to consider the total number of hours that all of the planned activities (class time, labs, assignments, group work, independent research, reading, watching content, quizzes, etc.) in our syllabus impose on our students, and be careful to explain your work expectations to your students. Be mindful that students have many other classes, often with the same requirements as yours, as well as commitments outside of university. The more we load students with busy work, the less time they have to do the things that most of us value most – deeply engaging with the topics of our courses and demonstrating that engagement through our assessment tasks. 
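
If it helps, here is a back-of-the-envelope way to sanity-check that total; the hour figures below are made up for illustration, and the right target depends on your institution’s own credit-hour guidance rather than anything in this checklist.

    # Rough workload sanity check with made-up weekly estimates for a 12-week course.
    weekly_hours = {
        "class time": 3,
        "readings": 2.5,
        "assignments": 2,
        "group work": 1,
        "discussion posts": 0.5,
    }
    weeks = 12
    per_week = sum(weekly_hours.values())
    print(f"{per_week:.1f} hours/week, {per_week * weeks:.0f} hours over the term")
    # If that number surprises you, it will surprise your students too.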

Links

Have I considered the accessibility implications of the digital tools I am using? Do they have the potential to improve or reduce accessibility?

Every tool comes with its own affordances. Think your way through the classroom advantages and disadvantages of any tool you are going to use. Has the tool been formally assessed for accessibility by the University? Have you talked to OOL or CTL about it? Have you tried checking it with an online accessibility checker such as WAVE or AccessiBe? Does it require new computers to run? Does it require a login? Does it have visual elements that disadvantage some students? Do all images used have alternative text? Does it require high-speed internet? Does it work the same way on a mobile device such as a phone or tablet? How does it interact with screen readers or text-to-speech tools? Does it require high levels of physical dexterity? Can it be controlled from a keyboard only? What is the cost of the tool, and who is expected to pay for it? These and many more questions should inform any decision to use a technology in our teaching.
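Most of these questions need human judgment, but a few can be partially automated. As one small example, the sketch below flags images that are missing alternative text in a locally saved course page; the filename is hypothetical and it requires the third-party beautifulsoup4 package. It complements, rather than replaces, checkers like WAVE mentioned above.

# Flag images with missing alt text in a saved course page.
# Assumes the page has been saved locally as "course_page.html".
# Requires: pip install beautifulsoup4

from bs4 import BeautifulSoup

with open("course_page.html", encoding="utf-8") as f:
    soup = BeautifulSoup(f, "html.parser")

missing = [img for img in soup.find_all("img") if not (img.get("alt") or "").strip()]

if missing:
    print(f"{len(missing)} image(s) have no alt text:")
    for img in missing:
        print(" -", img.get("src", "(no src attribute)"))
else:
    print("All images on this page have alt text.")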

Links

Universal Design for Learning (UDL) for Inclusion, Diversity, Equity, and Accessibility (IDEA)

Getting Started With Accessibility

Does it matter if we call talking to algorithms ‘Prompt Engineering’?

I’m facilitating week 3 of our Digital Engagement course tonight (the third course in our Humanizing Digital Learning Micro program), and a big part of what I’m talking about is how to find the content you’re looking for. It’s a messy conversation, wrapped up in some things I’ve spent lots of time thinking about and some things I only kind of understand. I asked some friends I connect with on social media for their opinions, and some patterns are emerging. Let me spell out what I think some of the issues are and then see if I can identify the patterns of response I’m seeing.

I should add that what I’m looking for here is IN ADDITION to dealing with misinformation and disinformation. I’ve been using Mike Caulfield’s work (maybe this version this year?) for years for this, and will continue to do so. He also has a book coming out. You’ll probably want that too.

How do I help students return from an algorithm (search engine/AI) with content that works in my class?

This is my problem statement. If I want finding and evaluating things online to be part of my classroom, what should I be doing to help students get the things that I think they should be getting? I love a student-centered classroom as much as the next constructivist, but what is my responsibility for helping them find good stuff? What skills and literacies should I be fostering? How much of my classroom time should I be devoting to this? And, specific to this post, does it matter what I call it?

I want students to be able to go and find content online and work with it in class. My experience is that if you just say “hey, go find something,” you’ll end up with the first 5 results from a Google search. And that’s fair. A lack of scaffolding is hardly a student’s fault. With the new AI systems available now, I’m also likely to get some crafted responses from ChatGPT or whatever. What I’m looking to do is improve those responses to make for richer discussion in my classes, and also support students in developing literacies they can then apply to whatever else they’re looking for from an algorithm, whether it’s information about climate change or their diet or their job.

Student-centered classrooms don’t help students learn!

For context, here, I don’t really care. That is, I don’t care about the kind of learning research that is based on an increase in students’ ‘retention of information’ or an increase in ‘test scores’. This interview with Paul Kirschner is a fun romp through some of the criticisms. That kind of outcome is pretty far down the list of things that I value. I mean, remembering things is nice, and important for some things, but I’m happy with remembering being a byproduct of my classroom rather than the focus.

My goal in the classroom is to do the best job I can to help students develop some literacies from a broader set of literacies that I think are important. I care more about a kid liking math and thinking it is useful, for instance, than about them getting the math right on a test. In this case, I care more about a student getting better at working around corporate-sponsored content (where relevant) and finding a more diverse set of voices than about them remembering the five laws of something or other.

I have some core literacies that I think all students should probably learn, and a bunch of others that I’m happy to see students develop depending on their interests or needs. This year I’m moving ‘getting stuff from the Internet in effective and ethical ways’ up my list to ‘core literacy’.

note: I’m probably not going to have an assessment for it.

Does it matter if you call it prompt engineering?

I’m going to follow Karen Costa’s perspective on this one…

This, I think, is good advice. Call it what it is, though Colin Simpson makes the important point that if everyone else calls it prompt engineering and I don’t, then participants won’t be able to follow up with other research. Further, Dr. Alison H. Melley suggests that the ‘engineering’ language works for her and that I should include all versions to appeal to as many people’s perspectives as possible. I can see that. So we’ll need to note that some of the literature calls it prompt engineering. I do worry, however, about the implications of the word engineering. And I’m looking for the everyday word I’m going to use in my classroom after the concept is introduced.

Why not ‘prompt engineering’?

I think the word prompt might be ok, though Jon Becker’s ‘algorithm tricking’ might be closer to the plain language that Karen Costa is suggesting. It’s the word engineering that worries me.

Engineering, in my mind, is about a process that you design that brings you to a decision or an answer. It’s about solving problems. The introduction to engineering on Wikipedia (and I can only imagine how much debate went into it) is: “Engineering is the practice of using natural science, mathematics, and the engineering design process to solve problems, increase efficiency and productivity, and improve systems.”

But that’s just the definition. You could totally use the word engineering to mean any process that you use, no matter how nuanced, to achieve any result. For me, though, no matter how recursive the design structure in your version of engineering, it doesn’t automatically suggest the thing that I really care about, which is how you search using your values. What I’m talking about isn’t working with natural laws and math to get a right answer. It’s working in a system full of corporate influences, dominant normative voices, and conflicting researched positions and perspectives that fully require you to decide what you believe or want before you can find an answer.

Imagine setting up a classroom activity where you were interested in students finding information about how to teach math. I can only guess that if you’ve made it this far into the post, the following distinction might be important to you. If not, I appreciate your thoroughness.

Using the prompt engineering approaches discussed in class, find a good resource for teaching 2×2 multiplication to students.

Using the algorithm tricking approaches discussed in class, find a good resource for teaching 2×2 multiplication to students.

I feel like the first one suggests that we’re looking for right answers, that we’re using the algorithmic systems ‘well’, while the second one acknowledges that we’re manipulating the system to come up with answers that fit our own perspectives.

George Veletsianos suggests that it depends on the conversation you’re trying to spark.

And Angela Stockman weighs in with her own take as well.

Final thoughts

I’m hoping to improve my student-discovered-content activities. Some of that is just about me setting aside time in the classroom to go over the results and talk about them with students. Some of it, I think, is that I need to do a better job of teaching things like algorithm tricking (or prompt engineering). I think what we call things matters. You may not, and that’s cool. But I think it changes what people think success looks like. If I say I’m going to be a good prompt engineer or a good algorithm manipulator, I think those are different goals.

I don’t know what my students are going to need when they leave my classroom and do whatever they’re going to do. I have a strong feeling, however, that I would like to support them in understanding the nuances of complex issues in a way that encourages them to apply their values to those issues. So much so, in fact, that I just wrote a book about it.

I imagine a student (I’m teaching BEd students this fall) preparing to teach 2×2 multiplication to their grade 5 students three years from now. They turn to the internet to look for ideas, maybe an AI site, maybe Teachers Pay Teachers, or maybe just a search engine. I would love it if their searching process included a consideration of their values at each fork in the road. Whatever those values may be.
