The Robot Take-over: Reflections on the Meaning of Automated Education

With education technology pointing to an ever-increasing automation of educational activities, a few enthusiasts go so far as to predict the replacement of teachers by robots. The present paper takes such declarations as a provocation, one that encourages us to question our understanding of educational practices. That this possibility is even considered says much about how we understand education and how we effectively educate, and greatly spurs our inquiries about that which is ever the subject of education, i.e., the human being. To follow this line of questioning, the exposition will proceed thus: first, a rough outline of what a "robot" is and of what, if anything, distinguishes it from any other machine, beginning with the origin of the word in fiction and with considerations on machinery and automation relying on a few of Karl Marx's insights. Following that, a few attempts to define real-life robots and robot-like machinery used in Information and Communication Technology will be examined, leading to a brief foray into Artificial Intelligence and Machine Learning. Finally, the last part will have recourse to Hannah Arendt's theory of action in an effort to reflect on the meaning of the hopes for, and attempts at, an automated education.


Introduction
Education as a whole, and the teacher's role in particular, thus join a group of many other activities that, though apparently unique to mankind, are currently being targeted by the latest wave of digital innovation - and given over to machines. We can already find software, such as "ROSS", "working" at law firms, researching cases and even preparing petitions and contracts (Baeta, 2018; Goldhill, 2016a); other programs can write literature or music, like "Shelley", which has been producing multiple short horror stories on Twitter (Frankel, 2017), or "FlowMachine", which has already composed a couple of songs (Goldhill, 2016b). Though still far from Seldon's predictions, "Jill Watson", a "sister" software to "ROSS" and "Shelley" (all created using IBM's question answering computer "Watson"), already points to a partial automation of educational activities, using Artificial Intelligence to conduct online tutoring (Maderer, 2016).
Presently, however, we will make no attempt to second-guess such prophecies, nor to verify whether teachers may de facto be replaced by machines - though some relevant thoughts on the matter might emerge. Rather, we intend to take Seldon's declarations as a provocation that encourages, maybe even forces us, to question our understanding of education and educational practices. The mere fact that the possibility of automated teaching is being considered (and not by Seldon alone) says a great deal about how we understand education and how we effectively educate; and the debates over what does or does not constitute an artificial intelligence, or over whether a machine is already sufficiently "human" to pursue human activities, greatly spur, as well, our inquiries about that which is, after all, ever the subject of a reflection on education, i.e., the human being.
In the present attempt to follow this line of questioning, the exposition will proceed thus: before inquiring into the impact of the possibility of automated education and of the replacement of teachers by robots, there will firstly be an attempt at a rough outline of what is to be understood by a "robot", and of what, if anything, distinguishes it from any other machine, further relating robots to the automation of tasks. This effort will begin with the very origin of the word "robot", in Karel Čapek's play R.U.R. (Rossum's Universal Robots), leaning as well on a few other well-known fictional works, and with considerations on machinery and automation relying on some of Karl Marx's insights on the matter. Following that, there will be an examination of a few attempts to define real-life robots and robot-like machinery used in Information and Communication Technology (ICT), including a foray, as brief and intelligible as possible to those unfamiliar with programming (such as the author himself), into Artificial Intelligence (AI) and Machine Learning, the technological breakthroughs that get the hopes (and fears) of edtech enthusiasts such as Prof. Seldon so high, and into how they could supposedly contribute to the automation of educational practices. Finally, for a reflection on the meaning, to educational practices but also to society at large, of the possibility of an ever more automated education and of the active efforts undertaken in its pursuit, the last part of the text will have recourse to Hannah Arendt's theory of action, especially to the distinction between action and mere behavior.

Robots, Machinery, Automation
Prof. Seldon's talk of robots no doubt gave his declarations the intended touch of sensationalism 2: the word "robot" will most likely evoke some very strong science fiction and fantasy imagery that has been deeply engraved in pop culture over the last decades. From Star Wars' droids to the murderous Terminator, from Blade Runner's replicants to Westworld's hosts, from The Jetsons' zealous maid Rosie to the tyrannical builders of The Matrix, and on and on through countless others, robots have become a staple of contemporary imagination, and it is probably safe to say that both the word and what it names are familiar to a large part of the population. And yet, the actual content of Seldon's talk does not point in any concrete way to the robots so prevalent in literature, movies and TV: no metal men shepherding children down the hallway or writing the alphabet on a touchscreen blackboard. "Robots", here, would stand for something different.
Indeed, though a robot seems always to be in some way a substitute for human beings, that does not necessarily entail anthropomorphism - robots are not always androids 3. The fact that there is no precise and universally accepted definition of what a robot is makes this not-so-trivial distinction all the more troublesome.
Seeing, however, as our proper concern is with education, therefore with humans, and not primarily with robots or robotics, fiction will perhaps serve us better than the technical literature (or at any rate in a different, yet very fruitful way), as is hardly unusual in such cases - and that both "robot" and "robotics", like so many other technical and scientific terms, originate in science fiction no doubt signals in favor of such a line of thought 4. Fortunately for us, Karel Čapek was considerate enough to his audiences to include, in the aptly-named "Introduction" to his play R.U.R., which first introduced the word "robot" to the world, an explanation regarding robots and their nature, given on stage by the character of Harry Domin, none other than the director general of Rossum's Universal Robots (the fictional company that produces robots, to which the title of the play alludes). His considerations on the matter are surprisingly relevant, and warrant a long citation:

He ["old" Rossum, a former owner of the "Rossum" company and creator of artificial life] wanted, in some scientific way, to take the place of God. He was a […] The simpler you make production the better you make the product. What sort of worker do you think is the best? […] The best sort of worker is the cheapest worker. The one that has the least needs. What young Rossum invented was a worker with the least needs possible. He had to make him simpler. He threw out everything that wasn't of direct use in his work, that's to say, he threw out the man and put in the robot. (2006, p. 15)

Ever since the first use of the word, a robot was understood as a simulacrum of a human being - but only of a given part of a human being, that is, the part that is able to work, or at any rate the one which makes work possible. True enough, later usage of "robot" would differ much from that seen in R.U.R., to the point that some would suggest the mass-produced biological servants of the play are not truly worthy of the name 5 - an assessment devoid of any sound logic, not only due to R.U.R.'s coinage of the term, but, more importantly, because Čapek's work perfectly set the fundamental character of robots as a partial replication of a human activity, as artificial workers and servants that perform tasks in place of humans and independently of them 6.

2 Adding to that, the cover of The fourth revolution consists of a metallic, humanoid robot, chin resting on one hand in the likeness of Rodin's Le Penseur.

3 The distinction is not sufficiently important to merit our attention much further. For the purposes of the present manuscript, Isaac Asimov's words on the matter will, for now, suffice to simultaneously introduce the question and illustrate how easily it can be glossed over: "In science fiction it is not uncommon to have a robot built with a surface, at least, of synthetic flesh; and an appearance that is, at best, indistinguishable from the human being. Sometimes such humanoid robots are called "androids" (from a Greek term meaning "manlike") and some writers are meticulous in making the distinction. I am not. To me a robot is a robot" (1983, p. 164).

4 "Robot" derives from the Czech "robota", meaning "hard" or "forced labor"; though Karel Čapek first used the word in his play, the term itself was picked by his brother Josef, and chosen due to the fact that the automata thus named were meant to labor in place of humans (Margolius, 2017). The meaning of the name no doubt supports the well-known interpretation according to which the robots in the play are to a great extent an allegory of the working class, and the robot take-over that ensues, a commentary on the Russian Revolution. The term "robotics", in turn, was coined by Isaac Asimov in his short story "Liar!", first published in 1941, though the author himself misdates it, stating it was only first used in "Runaround", published a year later (cf. Asimov, 1983, p. 1, 3).
One might justifiably ask, however, in what way this characterization of robots differs significantly from that of any other machine. Robots labor in place of humans; is that not so for all machines? While a tool serves as an instrument for the labor of men, a machine labors in their place instead.
[The machine's] distinguishing characteristic is not in the least, as with the means of labour, to transmit the worker's activity to the object; this activity, rather, is posited in such a way that it merely transmits the machine's work, the machine's action, on to the raw material - supervises it and guards against interruptions. Not as with the instrument, which the worker animates and makes into his organ with his skill and strength, and whose handling therefore depends on his virtuosity. (Marx, 1973)

Robots, as we have seen, partially replicate human activity and are able to act independently of it. Again, is that not so for all machinery?

The machine proper is therefore a mechanism that, after being set in motion, performs with its tools the same operations that were formerly done by the workman with similar tools. Whether the motive power is derived from man, or from some other machine, makes no difference in this respect. From the moment that the tool proper is taken from man, and fitted into a mechanism, a machine takes the place of a mere implement. The difference strikes one at once, even in those cases where man himself continues to be the prime mover. (Marx, 1975, v. 35, p. 377)

That Marx, while writing these lines, had in mind the steam-powered, first industrial revolution technology of the 19th century only makes this objection even clearer 7: either there would simply be no relevant difference between machines back in Marx's day and the present-day information age, fourth revolution "intelligent" robots notwithstanding, and Prof. Seldon's (and others') expectations for the future of education are pure nonsense 8; or our description of robots would be incomplete. Such an objection, of course, could only prosper by failing to take into account some rather evident considerations.
First of all, robots are, no doubt, machines; secondly, to say there are no relevant distinctions between machines in general and robot-machines specifically, insofar as the automation of educational practices is concerned, is not by a long shot the same as saying there is no difference between machines in Marx's time and now. Indeed, as was pointed out above, nothing in Prof. Seldon's declarations pointed to "literal" robots replacing teachers. He might as well have said "machines": the possibility of a replacement, here, is more important than what is doing the replacing. Yet he did use the word, and differentiating between robot and machine may not be a plainly wasted effort; though it may yield a somewhat imprecise result, the words are surely not interchangeable, at the very least owing to what they evoke.
Again, fiction seems to both support our claim and supply us with directions. Even though the technical definitions will include no such properties (as will be seen below), fiction will very seldom refer to the more manlike machines as such, usually setting aside "robot", "android" or some other word for the task - except when the goal is precisely to "dehumanize" them, either as a way to diminish or, less frequently, to somehow "elevate" them. For instance, Terminator's character Sarah Connor will refer to the T-800 robot as "the machine" in a definitively pejorative way, as a constant reminder of its inhumanity and in spite of its human appearance; while the supercomputers that run all the world's economy in Isaac Asimov's "The evitable conflict" are called Machines, even if "they are still robots" (1983, p. 503), due mostly, one could wager, to their being so inhumanly all-knowing and all-encompassing. Robots are machines akin to humans, even when they are not human-looking, and, as such, much more convincing as automata for human-like activities, even more so for those seemingly so unique to mankind. Maybe saying "robot" or "machine" is of little importance if we mean to discuss education from a "strictly" theoretical perspective; but it is not so from a rhetorical one, such as that of a speech at the British Science Festival seeking to promote an upcoming book or of a clickbait science website article: "robot" sounds much more convincing as a human substitute than "machine".

7 Another objection might arise by pointing out that what Marx endeavors here is an economic definition of machinery, or at the very least a definition from the perspective of a critical study of economy - which is a sound assessment. Indeed, the author makes no secret of this and readily proclaims so, in the very title of his most important work ("a critique of political economy") and likewise all over it (e.g. 1975, v. 35, p. 374-375). It so happens that Marx's perspective is the most accurate one for this enquiry because the attempts to automate education constitute an economic fact - more precisely, the attempted reduction, in theory and practice, of all human activities to economic ones.

8 It would indeed be very naive on our part to extract, from Marx's take on technological improvements, the layout of some sort of unchanging system of production (as if capitalism could ever quietly settle down and just make do with whatever technology it had hitherto managed to implement) and not the precise outlines of a way of organizing society that is defined by its need to constantly revolutionize said production, as well as all social conditions. "The bourgeoisie cannot exist without continually revolutionising the instruments of production, and thereby the relations of production and all the social relations. Conservation, in an unaltered form, of the old modes of production was on the contrary the first condition of existence for all earlier industrial classes. Constant revolution in production, uninterrupted disturbance of all social conditions, everlasting uncertainty and agitation, distinguish the bourgeois epoch from all earlier ones" (1975, v. 6, p. 487).
Another piece of insight can be gleaned as well from what Marx had to say about automation - despite the fact he did not live long enough to ever hear or read of such a thing as a robot. According to him, as we have seen, a machine is indifferent as to whether its prime mover is or isn't a man; but furthermore, an automatic machine is the "most complete" form of machinery, the one most adequate to the production process of capital, precisely because,

like capital itself (in its corresponding stage of development), it is automatic, a moving power that moves itself […]. The increase of the productive force of labour and the greatest possible negation of necessary labour is the necessary tendency of capital, as we have seen. The transformation of the means of labour into machinery is the realization of this tendency. (Marx, 1973)

Trying to reflect on the nature of automation today by starting off from a few observations on the nature of automation during the early industrial revolution is no doubt a stretch - not to mention the controversies surrounding the concept of necessary labour and the theory of value as a whole (in which we cannot allow ourselves to take any part here). And yet one certainly cannot overlook the continuing tendency of capital to increase production and reduce costs, or that the continuing development of machinery is still continuously realizing that tendency. The media and public interest, knowingly or not, will often be drawn to "Old" Rossum's question: can we create artificial humans, can we be like God? But, at the end of the day, it is "Young" Rossum's concerns that always win out and warrant concrete action: how can we make workers cheaper? How can we produce or offer services more efficiently? It is, roughly put, as an answer to these problems that robotics, artificial intelligence and the like have made their truly significant advancements.
Jill Watson is now the "most adequate form" of machinery for the most efficient commodification of "services" such as online tutoring, much like the Jenny, which so impressed Marx in his time, was for spinning. And perhaps robots as a whole, as imprecise as our definition of them may be, stand in our time simply for the most adequate representation of the automation of the more human-like activities - automation that, as will be seen, precedes the actual replacement of humans.

Robots, Artificial Intelligence, Machine Learning 9
This powerful image of the fictional robot has proven itself both lasting and unavoidable, as can be seen when examining a few of the many attempts at a technical definition of the term by those seeking to rigorously distinguish real-life robots from other machines. More often than not, knowingly or otherwise, their efforts will either reaffirm Čapek's description at least to some extent or, in trying to somehow escape a seemingly narrow perspective of what a robot is, end up revealing a very narrow perspective of what a human is (and often reaffirming Čapek's description anyway).
For instance, according to a definition attributed to the "Robot Institute of America" 10 , a robot is "a reprogrammable, multifunctional manipulator designed to move material, parts, tools, or specialized devices through various programmed motions for the performance of a variety of tasks". We have here a clear example of the first group: actual robots are defined as task performers, that is, machines that work, labor, toil in place of humans, much like fictional robots -and, since labor in this case is of a necessarily physical kind, Seldon's "roboteachers" are left out.
In another example, the very type of classification attempted necessarily ratifies this characterization of robots as workforce substitutes. The International Organization for Standardization, though offering no proper definition for them, establishes robots as being either industrial or service, a distinction drawn out by exclusion (i.e., a service robot, such as a "roboteacher", is one that performs non-industrial tasks), meaning, to put it shortly, that robots are task-performing things, be said tasks industrial or not (ISO 8373). Also noteworthy is how ISO 8373 further adds that robots require "a degree of autonomy", defined as the "ability to perform intended tasks based on current state and sensing, without human intervention": here we not only see once more the emphasis on the task-performing character of robots, but also a use of "autonomy" that is meaningful both for establishing it as an intrinsic attribute of robots and for standing as a synonym for "automation".
On the other hand, some will endeavor to avoid the very mention of tasks (or service, labor, work etc.) in their concept, vastly broadening it. Professor Anca Dragan, of UC Berkeley, ventures to define robots as a "physically embodied artificially intelligent agent that can take actions that have effects on the physical world" (Simon, 2017). One can hardly overstate the significance of the philosophical, psychological and anthropological assumptions present in this statement. It could be that a thorough, conceptually rigorous use of day-to-day words like "intelligence" or "action" in a short interview for a lay audience is an unfair amount of zeal to ask of any tech expert. At any rate, Prof. Dragan's common-sense usage of "intelligence", "action" and "embodiment" is evidence of significant common-sense misconceptions that may be heavily influencing the development, understanding and employment of technology and, consequently, mankind's course: mind and body are implicitly understood as separate and separable entities (and the existence of bodiless artificially intelligent agents is logically presupposed), and action is primarily (perhaps solely) understood in a strictly physical sense.
And yet the loose assumptions in Dragan's definition come nowhere near offering as striking an example of the dangers of such proceedings as the catastrophic conclusions offered by Prof. Mel Siegel, of the Robotics Institute at Carnegie Mellon University:

The classical definition among my colleagues is "a robot is a machine that senses, thinks, and acts". For about 10 years I personally have added "communicates" to these three - recently other people have also been saying "…senses, thinks, acts, and communicates". But for better or for worse, this definition makes most modern household appliances - clothes washing machines, etc. - robots. (Siegel, 2015)

One can only wonder how much Prof. Siegel's high view of household appliances tells us of the company he keeps, of his opinion of his own mental attributes, or even of whatever animistic beliefs he may possess. Yet there is something to be salvaged from this life-bestowing assessment, as long as we can take some time to ponder what could lead someone to take for granted what amounts to no less than attributing a form of sentience to a washing machine. Does a clothes washing machine "sense"? It doubtlessly reacts to external stimuli, but only to very specific ones: the pressing of its buttons (or a series of digitally received commands, or any such thing), which dictates when, for how long and in what way it will wash clothes; perhaps also the amount of water accumulated inside it, which dictates to the machine when to stop filling itself up with water, or some other event that constitutes a step in its unchangeable laboring patterns. Does a washing machine "act"? The only thing resembling an action on its part is the very task of washing clothes, always performed in pre-established ways, which also happens to be its only proper reaction to any stimuli. Does it "communicate"? It sure enough emits signs, and reacts to very few of them - again, button-pressing or whatever form the commands it receives take.
That, however, plainly does not of itself mean communication, at least not in any proper sense of the word, merely a physical reaction to physical stimuli. To put it in Vincent Descombes' words: "a machine does not communicate in anything other than a physical sense, and this is because it has no signs of its own" (2001, p. 158).
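The point can be made concrete with a small sketch (a purely hypothetical model of the author's argument, not the firmware of any actual appliance): the machine's entire "conduct" reduces to a fixed table of pre-established reactions, and any stimulus outside that table simply does not exist for it.

```python
# A hypothetical washing machine modeled as a fixed finite-state machine:
# every "reaction" is a pre-established entry in a transition table,
# and any unlisted stimulus is simply ignored - nothing new ever emerges.

TRANSITIONS = {
    ("idle", "start_button"): "filling",
    ("filling", "water_level_full"): "washing",
    ("washing", "cycle_timer_done"): "spinning",
    ("spinning", "cycle_timer_done"): "idle",
}

def react(state, stimulus):
    """Return the next state; unknown stimuli change nothing."""
    return TRANSITIONS.get((state, stimulus), state)

state = "idle"
for event in ["start_button", "water_level_full",
              "doorbell_rings",          # not a stimulus it "senses"
              "cycle_timer_done", "cycle_timer_done"]:
    state = react(state, event)

print(state)  # back to "idle": the whole "conduct" was in the table
```

However elaborate the table grows, the machine never appropriates a sign of its own: it merely looks up reactions that were written into it in advance, which is the sense in which its "sensing", "acting" and "communicating" remain purely physical.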
Does it "think"? Since we do not want to get caught up in the endless (if fruitful) debate on what constitutes thought, or on how to assess whether a being is or not intelligent, we might as well again resort to some of Descombes' very insightful conclusions: an intelligent being will have an intelligent conduct.
All reasoning regarding men and rational automatons rests on the possibility of asking whether a system's conduct is intelligent. In order for a system's conduct to be intelligent, the system must first have a conduct. It must therefore have a conduct that must be attributed to it, a conduct that is the system's own and distinct from the autonomous operations of its parts. And in order for it to have its own operations of symbolic manipulation, these operations must bear upon symbols that are its own symbols, those that it uses to achieve its aims. (2001, p. 159)

It is one thing for the Jenny, from its very birth, to spin with 12 to 18 spindles at the same time; it is performing a purely mechanical activity, requiring no intelligent conduct, and no symbolic manipulation or appropriation. But for Jill Watson to supply answers and feedback to computer science students in a university course would evidently require symbolic manipulation, actually communicating with humans, that is, an intelligent conduct of its own. Or would it not? We are thus led to the one thing that would supposedly enable machines to truly act as educators (or lawyers, writers etc.): artificial intelligence. Before any attempt at answering such mightily complex questions, it will be enlightening to inspect another distinction, that between proper, "human-imitating" AI and what has come to be called AI over the years:

Historically, the phrase "AI" was coined in the late 1950's to refer to the heady aspiration of realizing in software and hardware an entity possessing human-level intelligence.
[…] Sixty years hence, however, high-level reasoning and thought remain elusive. The developments which are now being called "AI" arose mostly in the engineering fields associated with low-level pattern recognition and movement control, and in the field of statistics - the discipline focused on finding patterns in data and on making well-founded predictions, tests of hypotheses and decisions. One could simply agree to refer to all of this as "AI," and indeed that is what appears to have happened. Such labeling may come as a surprise to optimization or statistics researchers, who wake up to find themselves suddenly referred to as "AI researchers." But labeling of researchers aside, the bigger problem is that the use of this single, ill-defined acronym prevents a clear understanding of the range of intellectual and commercial issues at play 11. (Jordan, 2018)

Though "human-imitative AI", as Jordan calls it, has seen no significant advances, data management has undergone unprecedented development leaps, particularly over the last decades. What is most likely the main driving force behind this improvement is no secret:

We are entering the era of big data. For example, there are about 1 trillion web pages; one hour of video is uploaded to YouTube every second, amounting to 10 years of content every day; the genomes of 1000s of people, each of which has a length of 3.8 × 10⁹ base pairs, have been sequenced by various labs; Walmart handles more than 1M transactions per hour and has databases containing more than 2.5 petabytes (2.5 × 10¹⁵) of information; and so on. This deluge of data calls for automated methods of data analysis, which is what machine learning provides. In particular, we define machine learning as a set of methods that can automatically detect patterns in data, and then use the uncovered patterns to predict future data, or to perform other kinds of decision making under uncertainty (such as planning how to collect more data!). (Murphy, 2012, p. 1)

Indeed, as Jordan points out:

Most of what is being called "AI" today, particularly in the public sphere, is what has been called "Machine Learning" (ML) for the past several decades. […] In terms of impact on the real world, ML is the real thing, and not just recently. (Jordan, 2018)

Let us now return to Jill Watson and the question of its intelligent conduct. Was it even created for such a purpose? What Ashok Goel openly states as the reason that led him and his team at Georgia Tech's College of Computing to create the software will offer a precious indication:

"The world is full of online classes, and they're plagued with low retention rates," Goel said. "One of the main reasons many students drop out is because they don't receive enough teaching support. We created Jill as a way to provide faster answers and feedback." (Maderer, 2016)

Efficiency, not the mimicking of human qualities, led Goel to create Jill - the Young Rossum's quest for improving production, not the Old one's atheistic godlike aspirations. A quest that predates sophisticated computer software by at least a couple of centuries, but which may have found in it, as in many of the other ICT, a more adequate form. Distance learning in particular has seen many similar effectiveness-enhancing uses of ICT. For instance, here is how a Brazilian university proceeded when setting up an online course:

The same [online] class can be attended by an unlimited number of students, in real time or through a "digital library", where classes remain stored for the exclusive use of alumni of the Institution 12, being accessible at any time during the academic calendar set up by said Institution.
[…] After two years, when the system became stable, three professors who collaborated on the effort as coordinators were fired. No others were hired for their positions, but other employees were reallocated in order to fulfill their tasks, without the appropriate rise in function or pay.
According to one of the coordinators fired, this occurred because "the whole process was already assembled and now all that was left was to administer the flow of information from the units. Now this has become easier, because the computers had already stored all the work routine". (Wolff & Almeida, 2008)

Raising student retention and cutting unnecessary human labour are sure enough but two sides of the same efficiency-heightening attitude, and the replacement of teachers (or professors) seemingly presaged by Jill Watson is but the next big step in this process. Yet something else stands out in the account of the fired professor in Wolff and Almeida's paper: what truly enabled the institution to fire professors was storing their classes and work routine in data banks. The computer, even without any sophisticated ML or anything else resembling an AI, could keep the course running with a little human support; but setting the course up had been up to the humans, with the computer as support.
A further look into how Prof. Goel and his team built Jill Watson will present us with a similar emphasis on routine, rather than on the software's capacity for an independent conduct:

They contacted Piazza, the course's online discussion forum, to track down all the questions that had ever been asked in KBAI [Knowledge Based Artificial Intelligence] since the class was launched in fall 2014 (about 40,000 postings in all). Then they started to feed Jill the questions and answers. "One of the secrets of online classes is that the number of questions increases if you have more students, but the number of different questions doesn't really go up," Goel said. "Students tend to ask the same questions over and over again." That's an ideal situation to apply computing technologies like Watson. Goel tapped into IBM's open developer platform to identify Watson APIs for answering questions, adding Georgia Tech's own processing modules to improve performance. The team then wrote code that allows Jill to field routine questions that are asked every semester. (Maderer, 2016)

Jill fields routine questions, and can only do so because it does not truly produce answers, but merely extracts these from the forum databases. Much like the robots from R.U.R. Ltd., Jill and other ML software "learn how to speak, write and do arithmetic, as they've got amazing memories. If you read a twenty-volume encyclopedia to them they could repeat it back to you word for word, but they never think of anything new for themselves" 13 (Čapek, 2006, p. 13).
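The logic of "fielding routine questions" that Maderer's account suggests can be sketched in a few lines. The sketch below is the author's own illustration, not Goel's actual implementation or the Watson APIs: the data, the crude word-overlap matching and every name in it are hypothetical. What it shows is the essential point of the argument: the program never composes an answer, it only retrieves the stored reply whose past question best resembles the new one.

```python
# Hypothetical sketch of routine-question fielding: the program retrieves,
# rather than produces, an answer, by matching the new question against an
# archive of past postings (a crude stand-in for real question-answering APIs).

ARCHIVE = [  # invented past forum postings: (question, answer)
    ("When is assignment 1 due?", "Assignment 1 is due Sunday at midnight."),
    ("What format should the project proposal use?", "Use the PDF template."),
    ("Where do I find the lecture videos?", "Videos are under the Media tab."),
]

def field_question(new_question):
    """Return the stored answer whose question shares the most words."""
    words = set(new_question.lower().split())

    def overlap(entry):
        past_question, _ = entry
        return len(words & set(past_question.lower().split()))

    _, best_answer = max(ARCHIVE, key=overlap)
    return best_answer

print(field_question("when is the assignment due?"))
# prints "Assignment 1 is due Sunday at midnight." - repeated, not thought up
```

However much more refined the real matching is, the structure is the same: like the encyclopedia-reciting robots of R.U.R., the software repeats back what was read to it, word for word.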
Labeling of courses aside, Goel and his team have not designed an "authentic", "human-imitative" AI, but a data manager with Machine Learning properties, only worthy of the AI acronym on account of the semantic detour seen above. Seldon sure enough had in mind similar technology when pointing to the development of programs that would "learn to read the brains and facial expressions of pupils" - "learning" here meaning "Machine Learning": the detection of patterns in brain signals and facial movements, and the subsequent management of the data obtained in order to retrieve previously offered answers.
There is, it is true, at least one thing that ML software like Jill Watson does that resembles a conduct of its own, and that no doubt helped popularize the name: planning how to collect further data. Indeed, this planning does lead ML software to pursue different methods for procuring data, methods perhaps not properly foreseen by the programmers; it even leads online tutoring software like Jill to offer, for instance, answers that would not be found anywhere in the databases given to them, by "mixing up" the corresponding data. Perhaps a similar "initiative" could be attributed to Seldon's "roboteachers" and other samples of edtech software.
Would this "initiative" constitute, for lack of a better way of saying it, an "autonomous" conduct?

Education, Action, Automation
We could likely sketch out, from Descombes' many considerations on the subject, the elements of what he named a proper, intelligent conduct. However, it will serve us better in this instance to have recourse to a theory specifically aimed at properly human conduct; even more specifically, to a theory of action.
Hannah Arendt's unique take on human activities is well-known for, among other things, establishing a clean-cut distinction between labor, work, and action - a distinction that is doubtlessly most useful in our current endeavor, but must presently be approached in a way far more superficial than would be adequate, or risk carrying us away from the issue. 14 As the author herself summarizes it: Labor is the activity which corresponds to the biological process of the human body, whose spontaneous growth, metabolism, and eventual decay are bound to the vital necessities produced and fed into the life process by labor.
[…] Work provides an "artificial" world of things, distinctly different from all natural surroundings. Within its borders each individual life is housed, while this world itself is meant to outlast and transcend them all. […] (1998, pp. 7-8)

There can be little doubt, from what we have seen so far, that a given machine will be perfectly capable of performing either labor or work on its own, after being adequately equipped and prepared for it. ML software may, on top of that, automatically develop different methods for producing the intended results. But what about action?
To act, in its most general sense, means to take an initiative, to begin (as the Greek word archein, "to begin", "to lead", and eventually "to rule", indicates), to set something into motion (which is the original meaning of the Latin agere). Because they are initium, newcomers and beginners by virtue of birth, men take initiative, are prompted into action.
[…] It is in the nature of beginning that something new is started which cannot be expected from whatever may have happened before. This character of startling unexpectedness is inherent in all beginnings and in all origins.
[…] The new always happens against the overwhelming odds of statistical laws and their probability, which for all practical, everyday purposes amounts to certainty; the new therefore always appears in the guise of a miracle. The fact that man is capable of action means that the unexpected can be expected from him, that he is able to perform what is infinitely improbable. (Arendt, 1998, pp. 177-178)

Arendt's opposition of action to "statistical laws and their probability" evidently leaves little room for doubt on whether machines that work precisely by detecting patterns in data and performing statistics-based predictions have any sort of initiative - decision-making on how to obtain additional data notwithstanding. The author, however, provides us with another, more suitable way to designate what these machines are actually doing: not acting, but merely behaving.
Behavior, unlike action, brings forth nothing new, and not only conforms to statistical certainty, but is presupposed by it, since the "laws of statistics are valid only where large numbers or long periods are involved, and acts or events can statistically appear only as deviations or fluctuations" (Arendt, 1998, p. 42). If we further consider that "the more people there are, the more likely they are to behave and the less likely to tolerate non-behavior", it acquires considerable significance that the mass society of Arendt's time - described as able to nearly exterminate true acts and deeds, leaving them with "less and less chance to stem the tide of behavior" (1998, p. 43) - comprised, on a worldwide level, less than half the population of today. 15 The "deluge of data" to which Murphy points is intrinsically linked to this tremendous rise in population, as well as to the surge in data generation resulting from the revolution in ICT; the need to administer, control, and manage this data, however, is in turn most likely linked to the desire to administer, control, and manage the fate of society, as part of "the attempt to eliminate action because of its uncertainty and to save human affairs from their frailty by dealing with them as though they were or could become the planned products of human making" (Arendt, 1998, p. 230). It is therefore no wonder that this impetus for obstructing the engendering of new beginnings by action would also manifest itself in the human activity which par excellence deals with newcomers to the world, i.e., education. It has done so by attempting to conceive of education as a form of production, of work, and to oversee it accordingly.
By taking the organization of school processes to its last stage of rationality and excluding the symbolic, representative and affective dimensions of education, the metaphor of work replaced the prerogative of interrogation with the prerogative of defining safe rules for human formation. When teachers who are submitted to fixed planning, defined schedules and planned content carry their activities outside of previously defined bounds, this is called a deviation, as if the unexpected represented an inability to fulfil the plan. (Silva, 2015, p. 577)

In order for us to begin to muse about the possibility of replacing teachers with robots - themselves so clearly envisioned by Čapek as a metaphor for work - we must have already saturated education with the metaphor of work to the point of effectively turning it into work and the teachers into workers. That a machine can stand in for a teacher (or, again, for a professor) can only mean that the teacher already behaves as a machine - and likewise for the students who "talk" to a machine over a whole semester without realizing it is one.
Of course, students and teachers alike are not to be solely blamed for this state of affairs. It is indeed characteristic of our production system, from the age of manufacture, to develop more and more specialized capacities in the laborer, converting him "into a crippled monstrosity, by forcing his detail dexterity at the expense of a world of productive capabilities and instincts; just as in the States of La Plata they butcher a whole beast for the sake of his hide or his tallow" (Marx, 1975, v. 35, p. 365). Though Marx has in mind here only a physical form of labor, in our deeply bureaucratic service economy the principle applies well enough to countless other activities: For example, if a system were invented that was capable of composing the speeches and declarations of politicians, it would have to be held to be as much a speechwriter as is the writer who today fills that function.
[…] The above comparison was not between a machine and a human but between a mechanical speechwriter and a human being who had himself been transformed into a speechwriter. (Descombes, 2001, pp. 113-114)

"The goal is to reduce a complex human behavior to a form that can be treated computationally", says Alex Rudnicky, a computer science professor at Carnegie Mellon University, regarding his research (Gershgorn, 2017) - and, in so doing, he incidentally summarizes the process of digital automation. The worker, insofar as he is a worker, has all human traits removed from him except those needed for that specific function. To create an efficient worker means to throw out the man and put in the robot, as "young" Rossum did, if not literally, then at least figuratively - though doing the former requires first doing the latter. But how could we possibly educate a human being, an acting being - and not simply a behaving one, a laboring animal that may one day be perfectly replaceable by a machine - if those in charge of said education have already been themselves reduced to machine-like behavior? Though the matter is surely deserving of further reflection, it does not seem too risky to guess that the answer is "we can't".
Yet quite a few edtech enthusiasts are either not very worried by this or, if they are, do not show it. "We need to focus far more if we are to prepare our young for tomorrow's economy, and to optimize its infinite possibilities", says Anthony Seldon in the foreword to Kenneth Baker's booklet on the digital revolution (2016, p. 2). Zhang et al. took a similar stand many years before:

As the new economy requires more and more people to learn new knowledge and skills in a timely and effective manner, the advancement of computer and networking technologies are providing a diverse means to support learning in a more personalized, flexible, portable, and on-demand manner. […] In the past few years, e-learning has emerged as a promising solution to lifelong learning and on-the-job work force training. (Zhang, Zhao, Zhou & Nunamaker, 2004, pp. 1-2)

Even Jordan, not quite the enthusiast about would-be AI developments and ready to stress the importance of the perspectives of the social sciences and humanities on the matter (2018), quickly (if perhaps unwittingly) reduced the complexity of the subject to "intellectual and commercial issues", as seen above. The point, for these authors, is not the bringing up of humans able to act, capable of initiative; the point is just to do business as usual - if also more efficiently. Indeed, as Arendt has shown, economics go hand in hand with conformism,

the assumption that men behave and do not act with respect to each other, that lies at the root of the modern science of economics, whose birth coincided with the rise of society and which, together with its chief technical tool, statistics, became the social science par excellence. Economics could achieve a scientific character only when men had become social beings and unanimously followed certain patterns of behavior, so that those who did not keep the rules could be considered to be asocial or abnormal. (1998, p. 42)

The possible consequences of this seemingly unstoppable transmutation of action into behavior have been known and exposed for a long time; and yet, it would seem there is no such thing as repeating them too much:

For even now, laboring is too lofty, too ambitious a word for what we are doing, or think we are doing, in the world we have come to live in. The last stage of the laboring society, the society of jobholders, demands of its members a sheer automatic functioning, as though individual life had actually been submerged in the over-all life process of the species and the only active decision still required of the individual were to let go, so to speak, to abandon his individuality, the still individually sensed pain and trouble of living, and acquiesce in a dazed, "tranquilized", functional type of behavior. The trouble with modern theories of behaviorism is not that they are wrong but that they could become true, that they actually are the best possible conceptualization of certain obvious trends in modern society. It is quite conceivable that the modern age - which began with such an unprecedented and promising outburst of human activity - may end in the deadliest, most sterile passivity history has ever known. (Arendt, 1998, p. 322)

SPECIAL ISSUE EdTech and Policies of Human Formation
education policy analysis archives, Volume 26, Number 115, September 17, 2018. ISSN 1068-2341. Readers are free to copy, display, and distribute this article, as long as the work is attributed to the author(s) and Education Policy Analysis Archives, it is distributed for noncommercial purposes only, and no alteration or transformation is made in the work. More details of this Creative Commons license are available at http://creativecommons.org/licenses/by-nc-sa/3.0/. All other uses must be approved by the author(s).