My teacher was a computer
Computers already make the prepackaged lunches sold at 7-Elevens and manage a plethora of tasks in every corner of our society. Why couldn’t computers also be teachers? The scenario above is imaginary, but judging by the current hype around artificial intelligence (AI), it is becoming more real with each passing day.
This development is not necessarily bad; as in many other fields of work, transferring tasks from humans to robots can have real benefits. Many schools already use software to detect plagiarism in students’ texts, or intelligent learning systems meant to help students learn and perform better (even though it is unclear whether these systems help or hinder learning). What is needed, however, is a more in-depth discussion of what kind of learning, and what kind of institutions responsible for learning, we want to create and advance.
Most news and articles on AI focus on how these systems will change the future. After decades of criticism of technological determinism, it still seems that new technologies, particularly digital technologies, are presented as discoveries in the naïve sense of the word: as if they simply popped out of the void for us to use. But the fact is that there is no magical well of pure innovations revealing itself to the truest of seekers. The way AI is developed depends on the will of those who develop it. Even though researchers such as Ben Williamson have pointed out that the education market is full of personal interests, lobbying, and money, these facts are seldom raised in the news (Williamson 2015; 2016; 2017). The point I am driving towards is that even before we talk about the possible benefits and harms of AI in education, we need to critically deconstruct the whole idea of the digitalisation and automation of education.
One way to do this is to start by thinking about what learning actually is. How do we learn? Where does learning happen? When does learning occur? Asking these simple questions may shed more light on the possible positives and negatives of AI in education.
For instance, one of the stated, and perhaps most commonly accepted, benefits of AI in education is that it can free teachers from repetitive tasks such as grading papers or evaluating essays. While this may sound like a gift from heaven to many teachers, it is not without its problems. First, we need to realise that by establishing a digital system for evaluating and grading, we reinforce the whole idea of assessing and grading students. In other words, we treat learning as something that can be measured by tests and ranked by numbers.
Secondly, we should ask how we define a repetitive task. Is grading essays repetitive? So repetitive that we can hand the task over to a computer? And if it is, should we continue doing it at all? Jaron Lanier, a computer scientist, author, and one of the pioneers of virtual reality, has warned that we often lower our standards to meet those of our computers (Lanier 2010). If grading papers is so mechanical that a machine can do it better, have we then lowered our standards?
If teachers bypass reading their students’ work, and so stop diving into the ways students think, does that not create a disconnect and diminish the teacher’s understanding of the student? Conversely, think of the student who writes an essay: what is her experience of having her paper graded by a machine? An automated grading system is quite far from the Buberian idea of dialogue.
If we look at grading from a mechanical standpoint and treat grades as objective, universal values, so that an essay needs to contain a certain number of facts, a certain number of words, and a certain amount of original content, then a machine can probably already do the job better. However, if we look at the essay as a process of constructing meaning and incorporating it into the learner’s lifeworld, then grading becomes more difficult. When does the student learn the things she writes about? While studying for the essay? While writing it? When the essay is finished? When she receives her grade? Or only after some time has passed since the writing?
Such things are already hard for a human to consider and acknowledge. The question is: can AI do this better than a human teacher? The danger is that if the system does a worse job than the human teacher, the people who learn differently and at different times get thrown out of the system with bad grades and poor feedback (or no feedback at all).
Looking at it from a more optimistic perspective, we can imagine that future AI grading systems will have cognitive skills and other capabilities that allow them to understand the subtle nuances of a student’s essay, such as ironic use of language or humour. The system might even be connected to a larger intelligent learning platform and draw on the student’s attendance percentages, success in other tasks, health data, and even real-time data on the student’s vitals, feelings, and moods, gathered through facial recognition, webcams installed throughout the school, and/or a personal device given to every student. With all of this information, the grading system could make a more individual grading decision and even write the positive messages or reinforcement the student might need. In a class of fifty or a hundred students, would such a system not do a better job? How could a teacher possibly manage that much essay reading and grading when she might be teaching several similar classes at the same time?
The problem, at least for me, is twofold. The first part goes back to the framing of the context and discourse: why do we have such big classes and such limited resources? Could we, instead of developing digital systems, make an effort to create smaller classes and hire more teachers? We need to ask for whom these structures are built. Whom do they serve? Is such a learning system genuinely a better way, or does it only look better under the current conditions of overworked teachers and educational institutions run as capitalist businesses?
The other aspect goes back to the question I asked a little while ago: what is learning? The problem with many digital systems is that they understand learning as a passing of information, as an abstract intellectual process. In How We Became Posthuman, N. Katherine Hayles (2008) traces this line of thought back to the Turing test and the birth of the digital age. The Turing test borrows from an old parlour game in which one person plays judge between two contestants, a man and a woman. The judge cannot see the players; instead, he has to guess which is which by asking them questions and reading their answers. In Turing’s version, the woman is replaced by a computer. The idea is that if the human playing the judge cannot tell whether a player is a computer or a human, then the computer must be capable of thinking. Hayles argues that the problem with Turing’s test lies not in whether we can rightly guess which is the man, the woman, or the machine, but in the fact that merely by using such a test we give up our embodied selves, and thinking becomes an abstract intellectual process:
“If you cannot tell the intelligent machine from the intelligent human, your failure proves, Turing argued, that machines can think. Here, at the inaugural moment of the computer age, the erasure of embodiment is performed so that ‘intelligence’ becomes a property of the formal manipulation of symbols rather than enaction in the human lifeworld.” (Hayles 2008, p. xi)
The problem is that it is tough to create a digital system that is embodied in our physical world. We can build robots, but currently, and for the foreseeable future, they lack the embodied knowledge that every teacher has. We do not yet fully understand how our bodies, feelings, and thinking function and connect with one another; why, then, should we think we are capable of creating a machine that possesses such qualities? In other words, the problem with such systems is that they miss the whole notion of the body as a site of knowledge construction and treat learning as transferable information.
In a world without bodies, an essay might feel like a string of letters and symbols joined together rather than something that touches the student’s lifeworld. Perhaps the cleverest student could write a system that writes her essays for her; without engagement and agency, the system is in danger of being gamed. Furthermore, the same disconnection may affect teachers: what does the teacher do with the time she has saved? How does such a conception of learning affect her lifeworld? Does teaching become merely a transfer of facts to students so that they can feed the taught answers back to a machine? Even though future AI systems promise more flexibility and “intelligence” in understanding students’ answers, this does not change the fact that using AI in grading enforces one ideology of learning and dismisses others.
If automating repetitive tasks turns out to be this complex, what about the other things AI promises, such as virtual learning environments or virtual teachers?
Hayles, N.K., 2008. How We Became Posthuman. University of Chicago Press.
Lanier, J., 2010. You Are Not a Gadget. Vintage.
Williamson, B., 2016. Coding the biodigital child: the biopolitics and pedagogic strategies of educational data science. Pedagogy, Culture & Society, 24(3), pp. 401–416.
Williamson, B., 2015. Political computational thinking. Critical Policy Studies, pp. 1–20. Available at: http://dx.doi.org/10.1080/19460171.2015.1052003
Williamson, B., 2017. Silicon startup schools: technocracy, algorithmic imaginaries and venture philanthropy in corporate education reform. Critical Studies in Education, pp. 1–19.