Lots of interesting conversation going on in my community right now about the implications of ChatGPT style tools for the education system. Will students use it to cheat? Will we incorporate it in our classrooms? Can we use it to do mundane tasks for us? What are the ethical implications of using this kind of software?
My read is that these tools will do to education what the calculator did to math education. And we’re still fighting about that 40 years later.
Those are important conversations, but I want to talk about something else. I’m interested in how these tools are going to change our relationship to learning, work and knowledge. In a conversation with Nick Baker this morning, we were trying to map out what the future workflow of the average person doing an average task might look like.
- Step 1 – Go to FutureGPT search
- Step 2 – Ask FutureGPT ‘what does a government middle manager need to know about the Martian refugee crisis. Include three references and tell me at a grade 12 level using the voice of an expert talking to their boss’
- Step 3 – Look over the response, click send message, include your mid-level manager’s email address.
I figure we’re, maybe, two years away from this? But who knows, we might have it before I post this message.
What does this mean for knowledge?
30 years ago when I was in college, you went to the card catalogue, found a book that might be relevant, and walked down a long row of library shelves to find it, once you remembered how the system worked. On that shelf were a bunch of other books that had been curated by 50 years of librarians to be similar in nature (in one way or another) to the book you were looking for.
The librarians were my algorithm.
Right now, still, I’m using a search engine with a bunch of different practices to try and find the information I want curated by other people somewhere out there on the Internet. I put in a search string, I look at what I get back from the algorithm, make some adjustments, and try again. Throughout the process I land on some websites created by humans about the issue I’m interested in.
The search engine algorithm brings me to a human (probably) made knowledge space.
Starting this year, we’re going to get back a mishmash of all the information that is available on the Internet, sorted by mysterious practices (popularity, number of occurrences, validity of sources if we’re lucky) and packaged neatly into a narrative. The algorithm is going to convert that information to knowledge for me.
The algorithm presents me with the knowledge, already packaged.
Autotune for knowledge
In 1998, Cher’s ‘Believe’ hit it big as the first autotuned song to sell tons of, I guess, CDs. Autotuning takes the human voice and ‘removes the flaws’ that are there: any place where you might be off key or pitchy, where you might have slowed down or sped up in your singing. Musical purists have been decrying the process ever since, saying that it removes the human part of the process from the music. It’s everywhere now. If you listen carefully to most popular songs you can hear the uniformity in the sound.
That’s what’s going to happen to our daily knowledge use.
This, to me, is the real danger. These tools are so convenient, so useful, and save so much time, how is anyone going to rationalize taking the time to actually look into issues to check for nuance? Who is going to pay you to take a week to learn about something well enough to give an informed opinion, when something that looks like an informed opinion can be generated in seconds?
The real danger is not to people who are experts in their fields. Super experts in every field will continue to do what they have always done. All of us, however, are novices in almost everything we do. Most of us will never be experts in anything. The vast majority of the human experience of learning about something is done at the novice level.
That experience is about to be autotuned.