The impact of conformity in education

In 2009 I was fortunate enough to be part of a conversation that led to “preparing for the post-digital era”. This week we all got asked to do a ten-years-later reflection, and, as I’m at an NSF-funded retreat (at Biosphere 2!) talking about equity in STEM education, I thought it made sense to try and use the postdigital as a tool to interrogate equity and education.

Let’s start here. Social media isn’t a jerk.

I wish I could send a smack upside the head to ten-years-ago-dave. When things like Twitter were still places of positive connection (with the occasional porn site jumping your hashtag), we had this idea that the connection between people was somehow going to be different. We told everyone to join Twitter if they wanted to be smarter, better, taller! Ten-years-ago-dave didn’t understand that it was inevitable that the rest of the human experience was going to impact those spaces. Twitter was full of people in 2009 and full of more people now in 2019.

Here’s the thing… it’s not like we didn’t know the world was full of jerks. If you’d asked 2009-dave if there were jerks everywhere, he would have nodded sagely. This is why I can’t believe that he didn’t see 2014-Twitter as an inevitable outcome. In 2014 the jerks found Twitter. Or, at least, they found out how to use Twitter in a way that allowed them to show they were jerks. They yelled at people. They abused people. People were harmed. It is still happening. They have made the internet very unsafe for many people. They were mean to people because they were different. They attacked people who weren’t totally dedicated to the privilege of the jerks. People seem to do that from a desire for power and attention, and to find a sense of belonging with others who share that desire. That desire didn’t suddenly materialize in 2014; it was always there.

In my work I always say that technology reinforces pedagogy. The technology here amplifies the jerk… it doesn’t make the jerk. More importantly, the technology ISN’T the jerk. And when we see ‘social media’ as a thing, in and of itself, rather than just a way people platform themselves – no different than the speaker platform at Hyde Park – we miss the solutions. Our technologies are good ways to find a jerk, but the solution to that is to deal with the jerk, not the technology.

So. Social media is not a thing that needs to be fixed. People connecting with people is a thing. Jerks are a thing. Jerks are not a digital problem. Jerks are a real-world problem that has been around for a long time. We need to get past the digital and fix our real-world jerk problem. And, as we go along, we have to think about how our systems help create those jerks.

Part two – we actually can negotiate a new social contract

A thousand years ago, steel-encased thugs with sharpened crowbars (swords) were wandering around the countryside in Europe punching cows. I’m not joking. They were jerks. They were literally punching cows, as well as stealing people’s stuff and, all too often, killing random, innocent people. The church, not usually the benevolent actor in medieval history tales, had an idea. They created the Peace and Truce of God movement. Local clergy would make a pile of all the saints’ relics they could find and try to get knights together to swear to this new social contract. Saints’ relics were the brand that enforced that change. The Peace of God was an attempt to protect people (clergy were particularly singled out as people who needed protection), but it extended to property and livestock. The Truce of God was an attempt to set days when violence was off limits. Sundays. Holidays.

Technology (horse + sword + armor + castle) had created a societal problem that needed to be addressed. A thousand years later you can see the impact of the Peace and Truce of God in our culture. They actually looked at something that was a side effect of a technology and went out and renegotiated a social contract to address it. It actually worked. It took two or three hundred years… but if you look at what words like polite or proper actually came to mean in that society, lots of it can be traced back to that original (admittedly self-interested) work by the church.

The church is no longer the societal institution threatened by free-roving jerks who’ve slipped the bonds of the old social contract. Democracy is, to whatever extent we have it.

And we need a pro-social web dammit. And we need to make it.

I honestly think that our education system can be that brand that allows us to make this change. Our education system, however, is often kind of a jerk. That education system is a systemic structure that teaches us to believe in power over people.

Deciding what knowledge someone needs is an exercise in having power over someone. Assessment, particularly, is grounded in power structures. Learning as it’s been traditionally perceived by our culture is a sorting process. Whether it is the way in which we separate the ‘expert’ and the novice through degree-granting methods or the bell curve which either secretly or overtly lives under our percentage system, it is the way by which we apply different class markers to people. It is a ‘we-making’ process and it is, like all we-making processes, a ‘them-making’ process. We are literate. We have a PhD. We are the teacher. We are an A student. All of these things exclude the people who are not part of the ‘we’.

Those expectations are… not equitable. They privilege a certain background. They privilege a certain kind of thinking… or knowing. In a sense, our education system is a training ground for the privileges of conformity. A conformity that is certainly easier for many, and a conformity that is totally inaccessible to many others. It teaches people that conformity to power is what belonging looks like.

So let’s go back to our social media jerk. Jerks go online to exercise their power by attacking people for not conforming to their sense of belonging. The louder they yell… the more they run in a pack… the more they attract people to their conformity group and the more firmly they exclude the ‘them’ that don’t conform. This is the system of power that our schools represent.

I’m not saying that our schools necessarily make jerks… what I’m saying is that the way in which knowing is negotiated in our schools supports this way of negotiating truth. If you have power, you can be right. If you have power, you can decide who’s right. Also… there are things that are RIGHT, and learning things about the world is about trying to find the right answer.

We need our schools to replicate models of inclusivity and equity that are not about the imposition of conformity. That means that we accept people the way they come in the door, and we help them come up with answers that belong to them.

Do different technologies have different affordances that allow jerks to be more jerk-like? Sure. But the post-digital lens asks us to look beyond the “twitter is a cesspool” argument. When we identify the technology and not the people behind it, we miss the systemic cultural practices that are helping to shape the people who are the bad actors on those platforms.

Some questions to use when discussing why we shouldn’t replace humans with AI (artificial intelligence) for learning

I struggle to have good conversations about my concerns with artificial intelligence as a learning tool.

I ended up in an excellent chat with my colleague Lawrie Phipps discussing the last 10 years of the post-digital conversation and found myself in a bit of a rant about AI tutors. Like many, I have had a vague sense of discomfort around thinking of AI as something that replaces humans in the ways in which we validate what it means to know in our society. I don’t particularly care ‘exactly’ what a person knows; no one, in any profession or field of knowledge, is going to be able to ‘remember’ every facet of a particular issue… or be able to balance all its subtleties. We are all imperfect knowers. But what it means to know, on the other hand, what we can look at as ‘the quality that makes you someone who knows about that’, lives at the foundation of what it means to be human… and every generation gets to involve itself in defining it for its era.

There are many different angles from which to approach this discussion. These questions, and the thoughts that follow, are my attempt to provide some structure to thinking about the impact of AI on our learning culture.

  • What does it mean to know?
  • How does a learner know what they want to know?
  • What’s AI really?
  • Who decides what a learner needs to learn when AI is only perceiving the learner?
  • What does it mean when AI perceives what it means to know in a field?
  • What are the implications if AI perceives both the learner and what it means to know?

Why “what it means to know” matters
It is my belief that deep at the bottom of most debates around how we should do education is a lack of clarity about what it actually means to ‘learn’ something or, more to the point, to ‘know’ something. Educators and philosophers have been arguing this point for millennia, and I will not try to rehash the whole thing here, but suffice it to say that our current education models are a little conflicted about it. All of our design and most of our assessments are created in an effort to help people know things… and yet there is no clear agreement in education on what learning actually is. It’s complex, and that is the problem. Learning totally depends on what the learning is intended for. I may be only average at parallel parking, and I don’t really remember any of the bits of information I was taught many moons ago when I learned… but I can mostly park… so I ‘know how’. Would I pass a test on it? Probably not… but I didn’t hit any cars, so I ‘know how’. I’m clear about what my goal is, and therefore the judgement, for me, is fine. The guy behind me last week who didn’t like the fact that I didn’t signal properly to parallel park is POSITIVE that I don’t know how to park. He told me so. Do I know how?

What does it mean to learn in all situations? I don’t know. What I believe is that each time you enter into a learning situation you have to ask yourself (and ideally get students to ask themselves) “what does it mean to ‘know’?”

Are you going to talk about the ‘real’ AI?
Anytime an educator gets involved in a discussion about AI with a computer scientist, you can pretty much be guaranteed that the sentence “that’s not REALLY AI” is going to follow. From Wikipedia,

Computer science defines AI research as the study of “intelligent agents”: any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals.

Using this simplistic definition, then, Artificial intelligence

  1. perceives the learner
  2. perceives knowledge in a field
  3. perceives the learner and knowledge in a field

and takes action based on this to help the learner successfully achieve their goal.
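
To make that definition concrete, here’s a minimal sketch (in Python) of what an ‘intelligent agent’ tutoring loop looks like under that Wikipedia definition: perceive the learner, then take the action the agent estimates will advance its goal. Every name and number here is invented for illustration; this is not any real tutoring system.

```python
import random

# A minimal, hypothetical "intelligent agent" in the Wikipedia sense:
# it perceives its environment (a learner's answers) and takes the action
# it estimates maximizes its goal. All names and numbers are invented.

class Learner:
    def __init__(self, skill=0.2):
        self.skill = skill  # crude stand-in for "what the learner knows"

    def answer_probe(self):
        # The agent can only perceive behaviour, never knowledge itself.
        return random.random() < self.skill

    def study(self, resource_gain):
        self.skill = min(1.0, self.skill + resource_gain)

def tutor_agent(learner, resources, steps=5):
    """Perceive, then act to 'maximize the chance of achieving the goal'."""
    for _ in range(steps):
        correct = learner.answer_probe()      # perceive the environment
        # The lofty "maximize its goals" collapses into a heuristic a
        # person wrote: harder material if correct, easier if not.
        gain = max(resources) if correct else min(resources)
        learner.study(gain)                   # take the action

tutor_agent(Learner(), resources=[0.05, 0.1, 0.2])
```

Even in this toy, notice that “its goals” are always someone’s goals, written down.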

AI and students knowing what to ask
On multiple, painful occasions I have asked students what they wanted to learn in my class. I have tried to follow the great andragogists by having students define for themselves, in a learning contract, what they want to learn at the beginning of a course. I have mostly failed… I now wait until we’ve got a good 8 hours behind us before I enter into any of these kinds of discussions. I then set aside a long time for those things to be renegotiated the deeper a learner gets into a given field of knowing. You don’t know what you don’t know… and if you don’t know, you can’t ask good questions about what to know.

So. Students come to classes (often) because they don’t know something and have some desire to know it. We, as providers of education services, ostensibly have some plan for how they are going to come to know that thing. Learners coming to a classroom often

  1. don’t know what they need to know
  2. want to know things that aren’t knowable
  3. think they know things that are actually wrong
  4. want to know things that there are multiple reasonable answers to
  5. know actual things that are useful
  6. don’t really want to know some things
  7. and… and… and…

And each one of those students comes to your class with a different set of these qualities. What is a learner’s goal when they start the learning process? I don’t know. No one does.

AI perceives the learner
So. What does this mean for AI? If AI is following the input from a student and deciding what to give them next… how is it supposed to respond to this much complexity in the learner? It does so by simplifying it. And, I would argue, there are some very concerning implications to that.

I can understand how an AI can perceive a learner and adapt, giving them a particular resource based on their responses. It can use the responses of other people who’ve been in the same situation to make reasonable recommendations. 72% of people who answered this question incorrectly improved when given this resource. Of those that remained, some were helped by this other resource… etc…
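
As a hedged illustration of what I mean, here’s a toy version of that recommendation logic; the resource names and improvement counts (including that 72%) are made up for the example.

```python
# Toy sketch of the recommendation heuristic described above: for learners
# who answered incorrectly, suggest the resource with the best historical
# improvement rate. All names and numbers are invented for illustration.

# (resource, learners who tried it after a wrong answer, learners who improved)
history = [
    ("video_a",   250, 180),   # 180/250 = 72% improved
    ("reading_b", 100,  55),
    ("quiz_c",     80,  30),
]

def recommend(history):
    """Return the resource with the highest observed improvement rate."""
    return max(history, key=lambda rec: rec[2] / rec[1])[0]

print(recommend(history))  # -> video_a
```

Notice what’s buried in the data: a person upstream has already decided which answers count as ‘incorrect’ and what counts as ‘improved’.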

But in this case, and I can’t emphasize this enough, someone behind the algorithm has DECIDED what the correct answers are. If we’re solving a quadratic equation, I’m less concerned about this, but if we’re training people to be good managers, I become very concerned. What’s a good manager? It totally, totally depends. It’s going to be different for different people. Helping people come to know in a complex knowledge space is a combination of their experience, their exploration of other people’s experience, and mentorship. Imagine a teacher asking an algorithm how to teach properly. Whose answers to that question is the teacher going to get?

AI perceives knowledge in a field
Let’s say that I want to make dovetail box joints.


From a simple perspective, making this joint is easy:

  1. draw on the flat side of the board
  2. cut that out
  3. put that board on the edge of the other board
  4. cut that and they fit together.

That is a straight-up answer to the question “how do I make a dovetail joint?”, but it probably doesn’t leave you able to actually make a dovetail joint. So let’s imagine an AI system that rifles through every YouTube video with the word ‘dovetail’ in it, can exclude all the videos about birds that come back from that search, and can actually see and understand every technique used in those videos. Now, let’s assume that it judges the value of those videos based on likes, total views, and positive comments that refer to ‘quality of instruction’. (I have built a very smart algorithm here.)
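
To show how much deciding is hiding in that sentence, here’s what my very smart algorithm’s scoring might look like as code. The weights, field names, and example videos are all invented for the sketch.

```python
# Hypothetical scoring for the "very smart" dovetail algorithm: rank videos
# by likes, total views, and comments that praise the instruction. Every
# weight and data point below is invented for illustration.

PRAISE_WORDS = {"clear", "well explained", "great instruction", "easy to follow"}

def praise_count(comments):
    # Count comments that mention 'quality of instruction' in some form.
    return sum(any(w in c.lower() for w in PRAISE_WORDS) for c in comments)

def score(video):
    return (2.0 * video["likes"]
            + 0.001 * video["views"]
            + 5.0 * praise_count(video["comments"]))

videos = [
    {"title": "Hand-cut dovetails the traditional way",
     "likes": 400, "views": 90_000,
     "comments": ["so clear", "beautiful craft"]},
    {"title": "EASY dovetails in 5 minutes!!",
     "likes": 2_000, "views": 600_000,
     "comments": ["great instruction", "easy to follow", "wow"]},
]

print(max(videos, key=score)["title"])  # the entertaining, easy one wins
```

Every number in `score` is a value judgement; change the weights and a different carpentry rises to the top.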

My algorithm is going to come up with a list of ‘must-dos’ that are related to the creation of dovetails. Those must-dos will be a little slanted towards carpenters who are appealing or entertaining. It will also lean towards carpenters who show the easiest way to do something (not necessarily the best way). Once people know about the algorithm, of course, those who want more hits and attention will start to adjust their approaches to make their videos more likely to be found and used as exemplar resources by the algorithm. And… you can see one field slowly drifting towards the simplistic and the easy and away from craft. You could imagine other kinds of drift based on an algorithm with different values.

Now… that’s me talking about carpentry. Imagine the same scenario when we are talking about ethics… or social presence. It gets a bit more concerning. Algorithms will privilege some forms of ‘knowing’ over others, and the person writing the algorithm is going to get to decide what it means to know… not precisely, as in the former example, but through their values. If they value knowledge that is popular, then knowledge slowly drifts towards knowledge that is popular.

AI perceives the learner and knowledge in the field
Let’s put those two things together and imagine a perfect machine that both senses the learner and senses knowledge in a field. How is the feedback loop between students who may not know what they want and an algorithm that privileges ‘likeable knowledge’ over other kinds of knowing going to impact what students learn? How is it going to impact what it MEANS to know?
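
Here’s a deliberately tiny simulation of that feedback loop, under the obviously artificial assumption that showing a resource boosts its own score: a small initial edge in ‘likeability’ compounds into dominance.

```python
# Toy feedback loop (hypothetical): each round the system shows learners the
# highest-scoring resource, and the engagement that follows boosts that
# resource's score. A small initial gap in 'likeability' compounds.

scores = {"easy_and_fun": 10.0, "slow_craft": 9.5}

for _ in range(5):
    shown = max(scores, key=scores.get)  # perceive the field: pick the 'best'
    scores[shown] *= 1.1                 # perceive the learners: engagement feeds back

print(scores)  # easy_and_fun pulls further ahead every round
```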

One last question.
What is the increased value of having an algorithm process YouTube videos instead of you actually watching them?
