Political Automata. The separation of politics and policy-making
What will the future of politics bring us? This essay seeks out new possibilities in the configuration of our democracy by assessing its current state and principles, and explores more viable solutions in terms of policy-making. The classic notion of the future can help us achieve that. Future aesthetics, dystopian and utopian, commonly show a highly technological, robotised and automated reality. Although these imagined realities have usually not been realised in full, automation and robotisation have in fact entered our world in a structural fashion. While this development has been implemented mostly in repetitive environments such as production, and some claim that it finally gives us the freedom to focus on the act of politics, that is not to say that it could not be implemented in politics itself.
Automation as a process is generally received with ambivalence: on the one hand it offers us great opportunities in terms of mass or specialised production, but on the other hand jobs are usually at stake, whether it dissolves them or creates new types of work. Nick Srnicek and Alex Williams, authors of Inventing the Future, however, offer us a more positive sentiment: they demand a fully automated economy and an accelerated process towards this state. "Without full automation, post capitalist futures must necessarily choose between abundance at the expense of freedom (echoing the work-centricity of Soviet Russia) or freedom at the expense of abundance […]."1 In other words, full automation would free us from the burden of work. This is a classic argument in favour of the implementation of an automated future, but it is not the only argument. While automation was initially implemented primarily in areas that could easily be seen as a burden — because the tasks involved were either repetitive, time-consuming or heavy-duty — in later stages this has shifted to areas that are more cognitive. Not because these areas were considered toilsome, or for purely economic reasons, but because the machine could in fact outperform the human. For example in aviation, where autoland systems were introduced to enable aircraft to land in weather conditions that pilots would otherwise be unable to operate in. So not only could automation potentially free us from the burden of work, it also raises the opportunity to increase the cognitive capabilities of our society. Aristotle wrote about the freemen who are positioned above toil — above exhausting physical labour — as men who are free to engage in politics while their stewards attend to their households.2 The act of politics here is considered not a burden but a higher act that must be performed by the freemen. Aristotle thus distinguishes politics from work. Does this claim still hold ground? It must be said that the writing of Aristotle dates from 350 B.C. and describes politics in the context of slavery and not automation, but the question of whether political production is any different from other types of work is very much alive. What is, in fact, currently the difference between ordinary and political production? Philosopher Jacques Rancière also discusses the notion of a gap between work and politics, and consequently the apparent impossibility of merging such activities. Except he claims that "[…] politics begins precisely when this impossibility is challenged, when those men and women who don't have the time to do anything other than their work take the time they don't have to prove that they are indeed speaking beings, participating in a shared world and not furious or suffering animals."3 Politics here is described not as an occupation, but as a mode. This opens up the possibility that those workers who are in fact involved in political production are just as challenged in their attempts to engage in politics as any other. From that perspective, if we want politicians to engage in politics, political production too should be subject to cognitive automation. In the near future we would be able to produce political automata: machines based on self-learning algorithms that have the capability to create policies for us. It is therefore necessary to discuss the effects that self-learning machines and algorithms can have on the act of policy-making.
The human being and our flawed understanding of systems and progress
Why would we replace politicians with political automata? In order to answer that question we have to look at how both humans and self-learning algorithms deal with the world around us. In this first chapter I will draw links between the work of philosopher John Gray, economist Hyman P. Minsky and scientist Stephen Wolfram in an attempt to further problematise the role of humans in political production, the making of policy. What ties their work together is that they are all, in one way or another, focused on the notions of unpredictability, irreducibility and instability within the different complex systems that compose our world, whether it is our sense of ethics, the financial system or the field of physics. They refute ideas of constants, cumulative growth, stability and linearity. Gray, Minsky and Wolfram instead think of the programs that run in such systems as being intrinsically cyclical, complex and unstable in spite of their simple origin or nature.
Our understanding of such systems greatly defines how we perceive the world, our past and present, and therefore also how we act in it in relation to our future. It is therefore of political importance. How and what we measure in order to inform our decisions also frames the desired outcome of those decisions. A world view of endless possibilities leads to other political decisions than one of limited possibilities. Gray proposes that our perception of events is ultimately flawed because we exaggerate our assumptions about the influence we have on them. In alignment with this, Wolfram states that whenever we are able to remove our specific human position — that which seems to make us special, our ego if you will — we open up the possibility of defining the programs that run in complex systems more universally, making them simpler and more fundamental. In other words, if we were to remove more and more of the assumptions we have about ourselves, we might make better sense of the world around us. Shedding our human-centred position within the history of the universe, for example, surely led to a better understanding of it. Wolfram is certain that we will eventually define the basic program or universal set of protocols that constitutes the basis of everything: our universe.4 What is important is that Gray, Minsky and Wolfram all introduce ideas of randomness and irregularity, but they do not necessarily believe that the complex systems that are part of our world are 'based' on random principles. The programs are simple and universal; they only appear random and irregular in their evolution.
Let us start by looking at what exactly a complex system is. Complex systems consist of many components and subsystems that together compose a functioning whole. Within these systems the individual components interact with each other; an event in one component can have a significant effect on other — seemingly disparate — components. This behaviour renders such a system complex, in part incomprehensible and therefore in many cases unpredictable. Wolfram refers to this as computational irreducibility, i.e. we cannot predict the outcome, but only observe it by running the system. A key feature of complex systems is that the principles in one can often be used to describe those in others. Looking at weather patterns helps us understand how economies work, just as boiling water helps us understand weather patterns. Yet we have completely different views on how we can control these systems. We perceive the weather as inherently unstable and uncontrollable, and only partly predictable within a margin of error. But if we start looking at our economy — in all its complexity — our view changes. Mainstream economists assume that the economy has a built-in equilibrium of growth and is therefore linear. It has an embedded notion of improvement and progress; we can engineer it.5 This nicely aligns with the persistent idea that we as humans are ultimately capable of improving our lives. This assumption has been under constant pressure, especially since the end of the '60s, but it is nevertheless still widely accepted.
The notion of progress is directly tied to our expectations of the future. Wolfram speaks of the constraint that something must be foreseeable to a certain extent. In order to act we, as human beings, have the wish to predict the outcome. This is a necessity nature does not have:6 it is not concerned about its future, it just follows its course. Humans need a horizon in order to grasp the events that are presented to us. Whether this is achieved through belief in a religious form of power or in the proceedings of science and technology does not matter all that much. They are different means to the same end: to understand the seeming complexity that surrounds us. But short-term horizons might blind us to the long-term horizons we could be looking at. Following the writings of John Gray, the notions of improvement and progress are clearly embedded in our collective thinking. Liberal humanists believe that humanity tends to improve itself, step by step, gradually, by learning from our past endeavours. Gray, however, refutes the link between the accumulation of knowledge and the accumulation of ratio and civilisation that is embedded in this line of thinking, and proposes that ratio and civilisation are instead extremely fragile qualities.7 He relates the notion of linearity in the accumulation of these qualities to meliorism,8 a philosophical concept which assumes that humans, by interfering in otherwise natural processes, can gradually improve the outcomes of those processes. According to sociologist Sheila Shaver, meliorism is one of the very foundations of the idea of liberalism. "Liberalism […] is meliorist in regarding all social institutions and political arrangements as capable of human improvement."9 Above all, this idea has taken root in American culture, symbolised by the concept of the American dream. And although this dream has been shattered innumerable times, it is still very persistent. Look only at the phrases and slogans used by presidential candidates… "Yes we can", and more recently "Make America great again". Or think about how the idea of Western democracy is projected onto the 'outskirts' of the world. They clearly propose the idea of an engineered society. The element of progress is so deeply ingrained in our thinking that even when progress is stopped, or reversed, we assume that we as humans can turn this process around. And when our belief in progress is severely contested, we seek external forces to blame for this 'unnatural' state we are in. Just as in the 1930s during the Great Depression, recent financial crises have led to minorities being pushed forward as our scapegoats, amplifying xenophobic thinking.10 The idea of progress is also very present in the political vocabulary regarding our economy; more than anything else, our progress is measured through the current state and growth of our economy. Political campaigns are filled with claims and promises about employment rates, the rise or resurrection of the economy, and tax cuts. Our economic heyday was marked by an unrestrained belief in economic expansion, and when this period ended people desperately called for its return. Only a few are willing to accept that true wealth is ultimately limited. Gray describes how the notion that every human problem is, in the long run, resolvable is now part of our contemporary ideas on progress.11
The point being made is that we as humans have less power over events than we attribute to ourselves; instead it might be beneficial to attribute more of this power to the programs that run within the systems of our world. The desires we have blur with our understanding of the capabilities we have, resulting in a flawed perception. Externalising them in the design of algorithms potentially creates a clearer separation between the two.
But what then are the powers at hand? Gray, Minsky and Wolfram describe them in comparable ways, each from the perspective of their own field. Wolfram, for example, has done extensive research in the field of cellular automata, a field that draws interest from mathematics, physics, biology and computational theory. Cellular automata were originally discovered in the 1940s by Stanislaw Ulam and John von Neumann, and later popularised through the Game of Life by mathematician John Horton Conway. A cellular automaton is a simulation of a life-form which populates a grid of cells. The cells on the grid are either on or off, alive or dead. The cells of the population survive, die or come to life based on simple rules applied to the automaton. Conway's Game of Life was initially made public in an article by Martin Gardner, wherein the simple rules of that specific automaton were described: a live cell with two or three live neighbours survives to the next generation, a dead cell with exactly three live neighbours comes to life, and every other cell dies or stays empty.
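As a minimal illustration of how small such a rule-set is, here is a sketch of one 'move' of the Game of Life in Python, on the simplifying assumption of a small bounded grid (the canonical automaton runs on an unbounded one):

```python
# A minimal sketch of Conway's Game of Life on a bounded grid.
# Cells outside the grid are treated as dead; this is an illustrative
# simplification of the infinite grid the original automaton assumes.

def step(grid):
    """Apply one 'move': all births and deaths happen simultaneously."""
    rows, cols = len(grid), len(grid[0])
    new_grid = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Count live neighbours among the surrounding eight cells.
            neighbours = sum(
                grid[r + dr][c + dc]
                for dr in (-1, 0, 1)
                for dc in (-1, 0, 1)
                if (dr, dc) != (0, 0)
                and 0 <= r + dr < rows
                and 0 <= c + dc < cols
            )
            if grid[r][c] == 1:
                # Survival: a live cell with two or three neighbours lives on.
                new_grid[r][c] = 1 if neighbours in (2, 3) else 0
            else:
                # Birth: a dead cell with exactly three neighbours comes to life.
                new_grid[r][c] = 1 if neighbours == 3 else 0
    return new_grid

# A 'blinker': three live cells that oscillate between horizontal and vertical.
grid = [[0, 0, 0], [1, 1, 1], [0, 0, 0]]
for generation in range(4):
    print(grid)
    grid = step(grid)
```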
As Gardner states, "[i]t is important to understand that all births and deaths occur simultaneously. Together they constitute a single generation or, as we shall call it, a 'move' in the complete 'life history' of the initial configuration."12 Different sets of rules will result in different automata and therefore different patterning of the life-forms. From the 1980s onwards Wolfram, supported by the development of computational power, did research on an extensive array of different rule-sets, numbered from rule-0 to rule-255.13 These rule-sets are referred to by Wolfram as primitives, yet I will refer to them as programs. Most of these programs show perfectly symmetrical and predictable patterns; it was when Wolfram came across rule-30 and rule-110 that things became more interesting. These two programs displayed a randomness, or turbulence, in their patterns that was truly unpredictable yet reproducible. When iterated over many generations they show periods of local stability, which are then disrupted by other patterns, causing a moment of change or chaos. The programs render complex patterns but start out extremely simple: one black cell and a small rule-set. This made Wolfram conclude that this principle might be applicable to systems we perceive as being highly complex, for example to the aforementioned weather patterns, to fluid turbulence, or, if we follow John Gray, possibly also to the state of our civilisation. If this is the case, then ideas of gradual and linear progress in policy-making would be fruitless. Instead of focussing on growth alone, we should also take into account the systems' built-in turbulence.
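For a concrete sense of how little is needed to produce such turbulence, here is a sketch of an elementary cellular automaton run with rule-30, starting from a single black cell (the grid width and number of generations are arbitrary choices made for illustration):

```python
# A minimal sketch of an elementary cellular automaton, run here with rule 30.
# The rule number's binary expansion assigns an output to each of the eight
# possible (left, centre, right) neighbourhoods; starting from a single black
# cell, rule 30 produces the irregular, turbulent pattern described above.

def run_rule(rule_number, width=63, generations=32):
    rule = [(rule_number >> i) & 1 for i in range(8)]  # outputs for patterns 0..7
    row = [0] * width
    row[width // 2] = 1  # a single black cell in the middle
    history = [row]
    for _ in range(generations - 1):
        new_row = []
        for i in range(width):
            left = row[i - 1] if i > 0 else 0
            right = row[i + 1] if i < width - 1 else 0
            pattern = (left << 2) | (row[i] << 1) | right
            new_row.append(rule[pattern])
        row = new_row
        history.append(row)
    return history

for row in run_rule(30):
    print("".join("#" if cell else "." for cell in row))
```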
Another element in Wolfram's presentation resembles the findings of Gray quite closely: he describes three notions of how the outcome of a system could become complex or random. In the first, a system is predictable and non-random, and becomes random only through external input. "Like a boat bobbing on an ocean. The boat itself does not produce randomness, it just moves randomly because it is exposed to the randomness of the ocean."14 It is quite a traditional take on randomness and is relatable to the ideas of meliorism. In the second, which finds its origin in chaos theory, the randomness of the program is fed in from the beginning and not — as just described — along the way. The randomness stems from the initial input to the system, again an external force, and not from the stable rules that compose the system. The third notion, however, originates from stable and predictable rules, like rule-30 or rule-110, but is intrinsically random in its evolution and results in unstable cyclical patterns despite its stable origin.15 The outcome of such a program is bound to be complex and irreducible. As mentioned earlier, the only way we can predict its output is by running the program and observing its iterations.
Wolfram briefly mentions that our financial system too could be based on such intrinsically random programs, most likely in combination with the other types of programs, and this is where his ideas coincide with those of Hyman P. Minsky. In 1992 Minsky wrote a working paper called The Financial Instability Hypothesis, wherein he describes, in an interpretation of Keynes's General Theory, a capitalist economy that is inherently unstable. Minsky too incorporates the notion of the future, or expectation, in his work, specifically in how money flows produce future money. "[T]he flow of money to firms is a response to expectations of future profits. […] Thus, in a capitalist economy the past, the present, and the future are linked not only by capital assets and labor force characteristics but also by financial relations." Or in other words, "[i]nvestment takes place now because businessmen and their bankers expect investment to take place in the future."16 The hypothesis consists of two theorems that describe its instability.
The first theorem of the financial instability hypothesis is that the economy has financing regimes under which it is stable, and financing regimes in which it is unstable. The second theorem of the financial instability hypothesis is that over periods of prolonged prosperity, the economy transits from financial relations that make for a stable system to financial relations that make for an unstable system.
So the model that Minsky proposes is that of a capitalist economy which shows a cyclical dynamic independent of exogenous forces, or "shocks". These cycles are rather formed from the "internal dynamics of capitalist economies"17. This model contests those of mainstream economists, as mentioned earlier in this chapter. Economist Steve Keen, who continues to work on Minsky's ideas, mentions in one of his lectures a statement by Ben Bernanke. Bernanke, who served two terms as chairman of the Federal Reserve, allegedly stated that the current mainstream models of the economy are designed for non-crisis periods; in other words, they rely on exogenous forces for the economy to fall into decline. They completely ignore that what goes up most likely will also come down. And it is here that it becomes clear that mainstream — utopian — ideas can really blind us to the actual power we hold over our future. If the systems in place are of a cyclical and random nature, and the models we use rely on stability and linearity, the result is a disappointing discrepancy between these two realities. This seems to be the case in economics, but it might be even more applicable to human behaviour. For that we have to return to the work of John Gray. He draws a line between the accelerating accumulation of knowledge, which he attributes to the human species as a unique capability, and our capability to learn from our experiences. "While knowledge and invention may grow cumulatively and at an accelerating rate, advances in ethics and politics are erratic, discontinuous and easily lost."18 As examples he mentions universal evils such as torture and slavery. These evils do not vanish like outdated theories would in science; they return under new monikers. Torture becomes an intensified questioning method, slavery becomes human trafficking. What we gain in civilisation is not simply backed up on a hard drive, never to be lost again.19 Instead you could argue that it might be quite the opposite: that our civilisation is extremely fragile. What if we are just as much programmed to be civilised as we are programmed to fall into barbarity? If politics and policy-making are about changing the course of our future, and about anticipating what lies ahead, then our understanding of that expectancy is quite important. Instead of focussing only on how much we will progress, grow and gain, we should focus just as much on how much we can recede, destroy and lose. In addition, there should also be an acceptance of the cyclical and random nature of the programs that iterate into our future. A principle of cost. What makes a self-learning algorithm suited for political purposes in this context is that it is usually based on such principles: an algorithm rather assumes that it is wrong than that it is right. In an iterative process it seeks to provide a solution that is less wrong than the solution it provided before, explicitly making use of randomness in the process of doing so.
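A minimal sketch of this 'less wrong with every iteration' principle, using simulated annealing on an invented one-dimensional error landscape (the function and parameters are illustrative assumptions, not drawn from the sources discussed here):

```python
# Simulated annealing on a toy error function with several local minima.
# Each iteration tries to be less wrong than the last, and randomness is
# built in deliberately so the search does not settle on its first answer.
import math
import random

def error(x):
    # A bumpy landscape: a parabola with sinusoidal dips (local minima).
    return x * x + 10 * math.sin(x)

x = random.uniform(-10, 10)        # start from an arbitrary, probably wrong, guess
temperature = 10.0
while temperature > 0.01:
    candidate = x + random.uniform(-1, 1)   # a random perturbation of the current answer
    delta = error(candidate) - error(x)
    # Accept anything less wrong; occasionally accept something *more* wrong,
    # with a probability that shrinks as the temperature cools.
    if delta < 0 or random.random() < math.exp(-delta / temperature):
        x = candidate
    temperature *= 0.99

print(f"best guess: x = {x:.3f}, error = {error(x):.3f}")
```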
The human being and our flawed common sense
Besides understanding what policy-making should take into account, in terms of progress or cost, it is of equal importance to focus on how decisions and ideas are formed. Existing ideas have the power to shape or restrain our reality in the form of a common sense. It is the task of politics to reflect on these existing ideas and challenge them where needed. Rancière describes this as the distribution of the perceptible: a distribution and redistribution of space and time, place and identity, speech and noise, the visible and the invisible. According to Rancière, "[p]olitical activity reconfigures the distribution of the perceptible. It introduces new objects and subjects onto the common stage. It makes visible what was invisible […]".20 However, politics is not immune to the restraints it is supposed to challenge. The context of what is considered normal has a great impact on our understanding of things; we therefore should not neglect the construction of our common sense. In this chapter I want to draw lines between the human construction of common sense and how machines deal with such a concept. Using ideas from this machine-construct also helps us to look at our human-construct differently.
Gilles Deleuze has written on the workings of our society in a way that can help us grasp how a common sense can be constructed, but also how it can be trapped in a suboptimal position. While we often claim to live in a free society, Deleuze describes a shift of forces rather than an absence of them; it therefore does not follow that our common sense is free from external restraints. In Postscript on the Societies of Control, Deleuze responds to Michel Foucault's division of societies' history into sovereign societies and disciplinary societies. He specifically builds upon the latter notion, which he separates into two sequential modes: one mode he again refers to as — Foucault's — disciplinary societies and the other as societies of control. In a disciplinary society power is distributed through hierarchy, creating a series of reversed tree structures which Deleuze refers to as "vast spaces of enclosure", for example those of a family, school or factory. A society built on downward forces, regulation, laws and taxation. In societies of control the power of the institution fades and is distributed throughout the system. Deleuze describes the idea that this society acts "[…] like a self-deforming cast that will continuously change from one moment to the other, or like a sieve whose mesh will transmute from point to point."21 A decentralised network of nodes. It describes our current society. If we assume that, just like power, common sense is also distributed throughout such a decentralised network, you could think that common sense is free-flowing and egalitarian. However, Deleuze describes the network as a mechanism of control and not as an open structure. While everything is connected in a distributed network, that does not mean there is no direction or force involved. Due to the forces within this omni-directionality, the flow or dispersal throughout the network is restricted and directed in many ways. Much like swimming in a strong current: you are free to swim in every direction, but only the strong will really achieve that freedom. Alexander R. Galloway looks at the work of Deleuze through the analogy of the internet. He refers to this restrictive aspect of the network as technological control. This control is, according to Galloway, inscribed in the workings of the protocol. Here he of course describes the technological protocols that constitute the internet, but the term protocol can also be read more generically, as something that enforces what is considered normal: common sense. What Galloway suggests is that "[t]he internet is a delicate dance between control and freedom. […] In other words, at the same time that it is distributed and omnidirectional, the digital network is hegemonic by nature; that is, digital networks are structured on a negotiated dominance of certain flows over other flows. Protocol is this hegemony. Protocol is the synthesis of this struggle." While the internet is often perceived as democratic, decentralised and uncontrolled, "nearly all Web traffic must submit to a hierarchical structure to gain access to the anarchic and radically horizontal structure […]".22 Deleuze and Galloway thus both describe our Western society and the internet, generally perceived as predominantly free, from the opposing perspective of control. In terms of common sense, the ideal situation would show a common sense that is constantly challenged and updated to an improved version of itself: through politics, not control.
Machine learning is often involved in finding an answer or a classification from a pool of data. If we transfer the idea of a common sense to this computational context, it could be described as the current solution or classification. Just as common sense is the current sense, both are subject to change. The search for such a solution or classification usually entails looking for the maximum or minimum answer. One example of such a search would be to process an image showing the numerical character '2' and classify it as such. The maximum answer — or in other words the best possible answer — in this case is obviously to conclude that the image contains the number '2'. A suboptimal answer would be to classify the image as a '7'; the shapes are quite similar, but the answer is of course incorrect. The fact that the algorithm is occasionally incorrect is fine, but it needs to be punished in order to train it, so that it improves its answering capabilities over time. Discarding the wrong answers is therefore a very important feature of the algorithm. That ideas do not per se improve in similar ways is explained by psychologist Barry Schwartz. He introduces two terms, thing technology and idea technology. He states that in thing technology objects — which are badly designed and are therefore false — die of natural causes and disappear "into the ether". Meaning that nobody would buy a bad device and recommend it to others, so it will be replaced by an improved version. But in idea technology, he says, this is not necessarily the case: false ideas can live a long and prosperous life. For example, when an idea dominates large parts of society, it is very hard to discard it as simply being false. He calls this phenomenon ideology; as an atheist he is referring to religion, but also to ideology in a much broader context.23 In idea technology common sense does not necessarily make perfect sense, although ideally the two should come closer and closer.
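As a minimal sketch of what 'punishing' a wrong answer can look like in practice, here is a perceptron-style training loop on invented toy feature vectors standing in for images of a '2' and a '7' (the data and labels are assumptions made purely for illustration):

```python
# 'Punishing' a classifier when it is wrong: a perceptron-style update.
# The current weights stay untouched as long as the answer is correct;
# only a wrong answer triggers a correction, so the classifier becomes
# less wrong over time.
import random

# (features, label): label +1 stands for '2', label -1 stands for '7'.
examples = [
    ([1.0, 0.2, 0.9], +1),
    ([0.9, 0.1, 0.8], +1),
    ([0.2, 1.0, 0.1], -1),
    ([0.1, 0.9, 0.2], -1),
]

weights = [0.0, 0.0, 0.0]
for epoch in range(20):
    random.shuffle(examples)
    for features, label in examples:
        score = sum(w * f for w, f in zip(weights, features))
        prediction = 1 if score >= 0 else -1
        if prediction != label:
            # The wrong answer is punished: nudge the weights towards the
            # correct classification and away from the mistaken one.
            weights = [w + label * f for w, f in zip(weights, features)]

print("learned weights:", weights)
```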
As described earlier, self-learning algorithms are designed to discard false solutions more rapidly. Let us take another mathematical problem as an example: that of the knapsack. This is a problem of combinatorial optimisation where the challenge is to find the highest possible value within the restraints of the given dimensions of the 'knapsack', filling it by choosing from a set of items, each with a given volume and value. Whereas in the previous problem the right answer could be known upfront, in this example the best answer is unknown before we enter our search and might even remain unknown. Problems where we already know the answer are usually problems that are better suited to the human mind, but where this is not the case a machine tends to outperform our human brain. In the knapsack problem, for example, a human might start by filling the available volume with the most valuable items, regardless of their volume, resulting in only a reasonable score. From that fairly disappointing result a human would most likely try another approach, now calculating the value-to-volume ratio for each item. If you start filling the knapsack with those items that have the highest ratio, you would probably end up with a slightly better score, but it might still not be the best possible solution. A machine learning approach could start very differently, by virtue of computational power. Instead of starting from the principle of the highest gain, it could start from the principle of evolution. Let us say you fill the knapsack ten times randomly to start off this process. The scores would probably be on the low end, unless you are lucky, but this is only the first step in the evolutionary process. Generation one. To populate the second generation of solutions you would select only the best answers and try to evolve them into better answers by combining them, a process of pairing. This process could iterate over many, many generations until you have found a solution that no longer seems to improve. You could conclude this if the solution has not changed over a given number of iterations, for example a thousand generations. It will most likely be a good result, but unfortunately it still might not be the best possible solution to the problem. The issue at hand is the risk of getting stuck in what is referred to as a local minimum or maximum. You might be able to find a good local answer in the process of evolution, but there could be better local minima or maxima to find, and amongst them also the global and absolute minimum or maximum. Being at the top of Mont Blanc might make you feel you are on top of the world, but you cannot see that there is an even higher mountain elsewhere: Mount Everest. That would be a clear example of a local maximum. In physics, when you are stuck in a local minimum, the way to get out of such a situation is literally to apply force. In machine learning this is really not that different: to get out of a local minimum or maximum you have to shake things up. In our knapsack problem this could be achieved by adding new randomly generated solutions to the pool, which can then pair with already good solutions. Chance offers us input from outside our local area. Another way would be to use a committee of machines, several machines that simultaneously work on the exact same problem, thereby preventing the committee as a whole from falling into the trap of locality.
Together they act as an open society where every machine has the freedom to contribute to the solution, a mathematical freedom of speech, if you will.
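A sketch of this evolutionary knapsack search, with fresh random solutions injected each generation to help escape local maxima (the items, capacity and population size are invented for illustration):

```python
# An evolutionary search for the knapsack problem, as described above.
# A solution is a list of 0/1 flags marking which items go into the knapsack.
import random

items = [(12, 4), (10, 6), (8, 5), (11, 7), (14, 3), (7, 1), (9, 6)]  # (value, volume)
capacity = 15
population_size = 10

def fitness(solution):
    value = sum(v for (v, _), keep in zip(items, solution) if keep)
    volume = sum(w for (_, w), keep in zip(items, solution) if keep)
    return value if volume <= capacity else 0  # an overfull knapsack scores nothing

def random_solution():
    return [random.randint(0, 1) for _ in items]

def pair(parent_a, parent_b):
    # Combine two good answers: take each flag from one parent or the other.
    return [random.choice(genes) for genes in zip(parent_a, parent_b)]

population = [random_solution() for _ in range(population_size)]  # generation one
for generation in range(1000):
    population.sort(key=fitness, reverse=True)
    parents = population[:4]                      # keep only the best answers
    children = [pair(random.choice(parents), random.choice(parents))
                for _ in range(population_size - len(parents) - 2)]
    # Inject fresh random solutions each generation: chance offers input from
    # outside the local area and helps the search escape a local maximum.
    newcomers = [random_solution() for _ in range(2)]
    population = parents + children + newcomers

best = max(population, key=fitness)
print("best packing:", best, "value:", fitness(best))
```

A committee of machines, as mentioned above, would simply run several independent copies of this search and keep the best answer among them.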
It is important to transfer our understanding of the trap of locality from mathematics, physics and coding to the realm of common sense. There are countless historical and contemporary examples where the availability of information or knowledge did not — immediately — lead to an improved common sense. Our society therefore does not at all seem to function as the above-mentioned committee of machines. The directional powers in the distributed network that forms our society potentially prevent us from embracing new possibilities because they appear too radical from the perspective of our current locality. Consequently, our current position in the mesh of possibilities might appear as radical as any other position, depending on the point from which it is perceived. For all we know, what seems to us the way forward only drifts us further away from true progress.
Political automata
One of the main questions that derives from what has been described above is: can self-learning algorithms create better political policy than humans? I believe that human assumptions are flawed in several ways, and that the principles that constitute the working of self-learning machines and algorithms might make them better suited to make humane and righteous decisions than humans themselves. However, what is perhaps more important than this technical aspect are the implications such a development would have. In the search for an answer to the question of whether we can create better policy, we should ask ourselves how we would define better policy. This is something we will only understand once we know what kind of society we are aiming for. We as humans still have to determine what is humane and righteous. If we were to build these self-learning algorithms into political automata, they would offer us a chance to rethink our own function within politics. The automata themselves would not per se be political; they would only engage in political production, but they would give us the opportunity to become political ourselves. It would mean a separation of politics and policy-making altogether. The political automata force us to externalise our political beliefs as parametrical input for the self-learning algorithms. A recent example where this necessity surfaces, and the technological sphere touches the political, is the research of Jean-François Bonnefon, Azim Shariff, and Iyad Rahwan on autonomous vehicles and the need for ethics in their programming. They pose the dilemma of who should be sacrificed when a self-driving car is involved in an inevitable accident with pedestrians: the driver or the passers-by?24 Such technological dilemmas force us to determine, formalise and program in advance what we perceive as more important, in this case self-preservation or equality. There are two elements at play here, one of which is the scale of abstraction. If we were to program the political automata, it would be redundant to provide training data and pre-made answers for every single issue they need to solve; the machine would be obsolete in such a case. Inevitably the self-learning algorithm should operate with only minimal parametrical input from the electorate, which asks for a more abstract approach to politics. Rather than voting for a specific policy, voters would for example now have to choose how compassionate their nation should be, or how risky. That brings us to the second element, that of weight distribution. This is closely related to the first element, because we not only have to choose what kind of society we desire, but also which desires we put more weight on. And when we put weight on one, we cannot simultaneously put the same weight on another; it has to be distributed. Or, in the case of the autonomous car, we have to choose between individual and collective desires. Both elements relate to the optimisation of the self-learning algorithms: while they can be optimised to create policy in all kinds of configurations, we have to decide what we optimise them for. This technological shift entails that politics as an institution has to move closer towards what is now considered the field of philosophy in order to deal with the larger questions that lie in front of it. What has to be externalised in order to program political automata also has to be internalised again in politics in order to reconfigure its function.
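To make these two elements tangible, here is a purely hypothetical sketch of parametrical input with distributed weights; the desires, candidate policies and scores are invented for illustration and do not correspond to any existing system:

```python
# A hypothetical illustration of 'parametrical input' and 'weight distribution'.
# The electorate sets abstract weights; putting more weight on one desire
# necessarily leaves less for the others, and the automaton optimises for
# whatever distribution was chosen.

# Electorate input: how much weight each abstract desire receives.
weights = {"compassion": 0.5, "risk_tolerance": 0.2, "economic_growth": 0.3}
assert abs(sum(weights.values()) - 1.0) < 1e-9  # the weight has to be distributed

# Candidate policies, each scored (hypothetically) against the same desires.
candidate_policies = {
    "policy_a": {"compassion": 0.9, "risk_tolerance": 0.2, "economic_growth": 0.4},
    "policy_b": {"compassion": 0.3, "risk_tolerance": 0.8, "economic_growth": 0.9},
    "policy_c": {"compassion": 0.6, "risk_tolerance": 0.5, "economic_growth": 0.6},
}

def weighted_score(scores):
    # Combine the scores according to the electorate's chosen weights.
    return sum(weights[desire] * value for desire, value in scores.items())

best = max(candidate_policies, key=lambda name: weighted_score(candidate_policies[name]))
print("selected:", best)
```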
A special thanks to N.F. de Leij (UvA) who guided me through the process of this essay.
Political Automata. The separation of politics and policy-making
What will the future of politics bring us? This essay wants to seek out new possibilities in the configuration of our democracy by assessing its current state and principles, and explore more viable solutions in terms of policy-making. The classic notion of the future can help us achieve that. Future aesthetics, dystopian and utopian, commonly show a highly technological, robotised and automated reality. Despite that these imagined realities usually have not been realised in full, automatisation and robotisation have in fact entered our world in a structural fashion. While this development has been implemented mostly in repetitive environments as production, and some claim that it finally gives us the freedom to focus on the act of politics, it is not to say that it could not be implemented in politics itself.
Automation as a process is generally received with ambivalence, on the one hand is offers us great opportunities in terms of mass- or specialised production, but on the other jobs are usually at stake; whether it dissolves jobs or creates new types of work. Nick Srnicek and Alex Williams, writers of Inventing the Future, however, offer us a more positive sentiment, they demand a fully automated economy and an accelerated process towards this state. "Without full automation, post capitalist futures must necessarily choose between abundance at the expense of freedom (echoing the work-centricity of Soviet Russia) or freedom at the expense of abundance […]."25 In other words, full automation would free us from the burden of work. This is a classic argument in favour of the implementation of an automated future, but it is not the only argument. While automation initially was implemented primarily in areas that could easily be seen as a burden — because the tasks involved were either repetitive, time-consuming or heavy-duty — in later stages this has shifted to areas that are more cognitive. Not because these areas were considered as toilsome, or for purely economical reasoning, but because the machine could in fact outperform the human. For example in aviation, where autoland systems were introduced in order to enable aircrafts to land in weather conditions that for pilots would otherwise be impossible to operate in. So not only could automation potentially free us from the burden of work, it also raises the opportunity to increase the cognitive capabilities of our society. Aristotle wrote about the freemen who are positioned above toil — above exhausting physical labour — as men who are free to engage in politics while their stewards attend to their households.26 The act of politics here is not considered a burden, but as some higher act that must be performed by the freemen. Aristotle thus distinguishes politics from work. Does this claim still have ground? It must be said that the writing of Aristotle dates from 350 B.C. and describes politics in the context of slavery and not automation, but the question if political production is any different from other types of work is very much alive. What is, in fact, currently the difference between ordinary and political production? Philosopher Jacques Rancière also discusses the notion of a gap between work and politics, and consequently the apparent impossibility to merge such activities. Except he claims that "[…] politics begins precisely when this impossibility is challenged, when those men and women who don't have the time to do anything other than their work take the time they don't have to prove that they are indeed speaking beings, participating in a shared world and not furious or suffering animals."27 Politics here is described not as an occupation, but as a mode. This opens up the possibility that those workers who are in fact involved in political production are too challenged in their attempts to engage in politics, as any other. From that perspective, if we want politicians to engage in politics, political production should as well be subject to cognitive automation. In the near future we would be able to produce political automata, machines based on self-learning algorithms, that have the capability to create policies for us. It is therefore necessary to discuss the effects that self-learning machines and algorithms can have on the act of policy-making.
The human being and our flawed understanding of systems and progress
Why would we replace politicians with political automata? In order to answer that question we have to look at how both humans and self-learning algorithms deal with the world around us. In this first chapter I will draw links between the work of philosopher John Gray, economist Hyman P. Minsky and scientist Stephen Wolfram in an attempt to further problematise the role of humans in political production, the making of policy. What ties their work together is that they are all in one way or another focused on the notion of unpredictability, irreducibility and instability within different complex systems that compose our world; whether it is our sense of ethics, the financial system or the field of physics. They refute ideas of constants, cumulative growth, stability and linearity. Gray, Minsky and Wolfram rather think of the programs that run in such systems as being intrinsically cyclical, complex and unstable in spite of their simple origin or nature.
Our understanding of such systems greatly defines how we perceive the world, our past and present, and therefore also how we act in it in relation to our future. It is therefore of political importance. How and what we measure in order to inform our decisions also frames the desired outcome of such decisions. A world view of endless possibilities leads to other political decisions than one of limited possibilities. Gray proposes that our perception of events is ultimately flawed by exaggerating our assumptions of the influence we have on them. In alignment to this Wolfram states that every moment that we are able to remove our specific human position — that what seems to make us special, our ego if you will — we open up the possibility to define the programs that run in complex systems more universally. Making them simpler and more constitutional. In other words, if we would remove more and more assumptions we have about ourselves, we might make better sense of the world around us. Shedding our human-centered position within the history of the universe, for example, surely led to a better understanding of it. Wolfram is certain that we will eventually define the basic program or universal set of protocols that constitutes the basis of all, our universe.28 What is important is that Gray, Minsky and Wolfram all introduce ideas of randomness and irregularity, but they do not necessarily believe that the complex systems that are part of our world are 'based' on random principles. The programs are simple and universal, but they appear random and irregular only in their evolution.
Let us start by looking what exactly a complex system is. Complex systems consist of many components and subsystems that together compose a functioning whole. Within these systems the individual components interact with each other, an event in one component can have significant effect in other — seemingly disparate — components. This behaviour of such a system renders it complex, in part incomprehensible and therefore in many cases unpredictable. Wolfram refers to this as computational irreducibility i.e. we cannot predict their outcome, but only observe it by running the system. A key feature of complex systems is that the principles in one can often be used to describe those in others. Looking at weather patterns helps us understand how economies work. Just as boiling water helps us understand weather patterns. Yet we have completely different views on how we can control these systems. We perceive the weather as inherently unstable and uncontrollable, and only partly predictable with a margin of error. But if we start looking at our economy — in all its complexity — our view changes. Mainstream economists assume that the economy has a built in equilibrium of growth and therefore is linear. It has an embedded notion of improvement and progress, we can engineer it.29 This nicely aligns with the persistent idea that we as humans are ultimately capable of improving our lives. This assumption is under constant pressure, especially since the end of the '60s, but nevertheless it is still widely accepted.
The notion of progress is directly tied to our expectance of the future. Wolfram speaks of a constraint of something being foreseeable to a certain extent. In order to act we, as human beings, have the wish to predict the outcome. A necessity nature does not have30 , it is not concerned about its future, it just follows its course. Humans have the need for a horizon, to grasp the events that are presented to us. If this is achieved through the believe in a religious form of power, or in the proceedings of science or technology does not matter all that much. They are different means to the same end; to understand the seeming complexity that surrounds us. But short-term horizons might blindside long-term horizons we could be looking at. Following the writings of John Gray the notions of improvement and progress are clearly embedded in our collective thinking. Liberal humanists believe that humanity tends to improve itself, step by step, gradually, by learning from our past endeavours. Gray, however, refutes the link between the accumulation of knowledge and the accumulation of ratio and civilisation that is is embedded in this line of thinking and proposes that ratio and civilisation are in stead extremely fragile qualities.31 He relates the notion of linearity in the accumulation of these qualities to meliorism32, a philosophical concept that assumes that humans by interfering in otherwise natural processes can gradually improve the outcomes of those processes. According to sociologist Sheila Shaver meliorism is one of the very fundaments of the idea of liberalism. "Liberalism […] is meliorist in regarding all social institutions and political arrangements as capable of human improvement."33 Above all this idea has been rooted in the American culture, symbolised by the concept of the American dream. And although this dream has been shattered innumerable times it is still very persistent. Look only at the phrases and slogans used by presidential candidates… "Yes we can", and more recently "Make America great again". Or think about how the idea of Western democracy is projected onto the 'outskirts' of the world. They clearly propose the idea of an engineered society. The element of progress is so deeply engrained in our thinking that even when progress is stopped, or reversed, we assume that we as humans can turn this process around. And in the cases that our believe in progress is so severely contested, we seek for external forces that are to blame for this 'unnatural' state we are in. Just as in the 1930s during the great depression, recent financial crises have led to minorities being pushed forward as our scapegoats, amplifying xenophobic thinking.34 The idea of progress is also very present in the political vocabulary regarding our economy, more than anything else our progress is measured through the current state and growth of our economy. Political campaigns are filled with claims and promises about employment rates, the rise or resurrection of the economy and tax cuts. Our economic heyday was marked by an unrestrained believe in economic expansion, and when this period ended people desperately called for its return. There are only few who are willing to accept that true wealth is ultimately limited. Gray describes that the notion that every human problem in the long run is resolvable is now part of our contemporary ideas on progress.35
The point being made is that we as humans have less power over events than we might attribute ourselves, instead it might be beneficial if we would attribute more of this power to the programs that run within the systems of our world. The desires we have, blur with our understanding of the capabilities we have, resulting in a flawed perception. Externalising them in the design of algorithms potentially creates a clearer separation between the two.
But what then are the powers at hand? Gray, Minksy and Wolfram describe them in comparable ways, each from the perspective of their own field. Wolfram, for example, has done extensive research in the field of cellular automata. A field that has interest from other fields like mathematics, physics, biology and computational theory. Cellular automata were originally discovered in the 1940s by Stanislaw Ulam and John von Neumann, and later popularised through the Game of Life by mathematician John Horton Conway. A cellular automaton is a simulation of a life-form which populates on a grid of cells. The cells on the grid are either on or off, alive or dead. The cells of the population will survive, die or come to live based on simple rules applied to the automaton. Conway's Game of Life was initially made public in an article by Martin Gardner, wherein the simple rules of that specific automaton were described as follows.
As Gardner states "[i]t is important to understand that all births and deaths occur simultaneously. Together they constitute a single generation or, as we shall call it, a 'move' in the complete 'life history' of the initial configuration."36 Different sets of rules will result in different automata and therefore different patterning of the life-forms. From the 1980s Wolfram, supported by the development of computational power, did research on an extensive array of different rule-sets. Numbered from rule-0 to rule-256.37 These rule-sets are referred to by Wolfram as primitives, yet I will refer to them as programs. Most of these programs show perfectly symmetrical and predictable patterns, it is when Wolfram came across 13. rule-30 and rule-110 where things became more interesting. These two programs displayed a randomness, or turbulence in their patterns that was truly unpredictable but reproducible. When iterated over many generations they show periods of local stability, which is then disrupted by other patterns, causing a moment of change or chaos. The programs render complex, but start out extremely simple, one black cell and a small rule-set. This made Wolfram conclude that this principle might be applicable to systems we perceive as being highly complex, for example to the aforementioned weather-patterns, fluid turbulence, or if we follow John Gray possibly also to the state of our civilisation. If this is the case, then ideas of gradual and linear progress in policy-making would be fruitless. And in stead focussing on growthalone, we should also take in account the systems' built-in turbulence.
Another element in Wolfram's presentation resembles the findings of Gray quite closely, he describes three notions on how the outcome of a system could be become complex or random. In the first a system is predictable and non-random, and becomes random only through external input. "Like a boat bobbing on an ocean. The boat itself does not produce randomness, it just moves randomly because it is exposed to the randomness of the ocean."38 It is a quite traditional take on randomness and is relatable to the ideas of meliorism. In the second, which finds its origin in chaos-theory, the randomness of the program is fed in from the beginning and not — as just described — along the way. The randomness stems from the initial input for the system, again an external force, and not from the stable rules that compose the system. The third notion however originates from stable and predictable rules, like rule-30 or rule-110, but is intrinsically random in its evolution and results in unstable cyclical patterns despite of its stable origin.39 The outcome of such a program is bound to be complex and is irreducible. As mentioned earlier, the only way we can predict its output is by running the program and observe its iterations.
Wolfram briefly mentions that our financial system too could be based on such intrinsically random programs, most likely in combination with the other types of programs, and this is where his ideas coincide with those of Hyman P. Minsky. In 1992 Minsky wrote a working paper called The Financial Instability Hypothesiswherein he describes, in an interpretation of Keynes's General Theory, a capitalist economy that is inherently unstable. Also Minksy incorporates the notion of the future, or expectation, in his work. Specifically in how money flows produce future money. "[T]he flow of money to firms is a response to expectations of future profits. […] Thus, in a capitalist economy the past, the present, and the future are linked not only by capital assets and labor force characteristics but also by financial relations." Or in other words "[i]nvestment takes place now because businessmen and their bankers expect investment to take place in the future."40 The hypothesis consists of two theorems that describe its instability.
The first theorem of the financial instability hypothesis is that the economy has financing regimes under which it is stable, and financing regimes in which it is unstable. The second theorem of the financial instability hypothesis is that over periods of prolonged prosperity, the economy transits from financial relations that make for a stable system to financial relations that make for an unstable system.
So the model that Minksy proposes is that of a capitalist economy that shows a cyclical dynamic independent of exogenous forces, or "shocks". These cycles are rather formed from the "internal dynamics of capitalist economies"41. This model contests those of mainstream economists, as already mentioned earlier in this chapter. Economist Steve Keen, who continues to work on the ideas of Minsky, in one of his lectures mentions a statement of Ben Bernanke. Bernanke, who served two terms as chairman of the Federal Reserve, allegedly stated that the current mainstream models of the economy are designed for non-crisis periods, in other words they do rely on exogenous forces to fall into decline. They completely ignore that what goes up, most likely will also come down. And it is here that it becomes clear that mainstream — utopian — ideas can really blindside us from the actual power we hold over our future. If the systems in place are of a cyclical and random nature, and the models we use rely on stability and linearity, it results in a disappointing discrepancy between these two realities. This seems to be the case in economics, but it might be even more applicable to human behaviour. For that we have to return to the work of John Gray. He draws a line between the accelerating accumulation of knowledge, which he attributes to the human species as an unique capability, and our capabilities to learn from our experiences. "While knowledge and invention may grow cumulatively and at an accelerating rate, advances in ethics and politics are erratic, discontinuous and easily lost."42 As an example he mentions universal evils as torture and slavery. These evils do not vanish like outdated theories would in science, they return under new monikers. Torture becomes an intensified questioning method, slavery becomes human trafficking. What we gain in civilisation is not simply backed up on a hard drive to never lose again.43 In stead you could argue that it might be quite the opposite, that our civilisation is extremely fragile. What if we are just as much programmed to be civilised as we are programmed to fall into barbarity? If politics and policy-making is about changing the course of our future, and to anticipate to what lies ahead, then our understanding of that expectancy is quite important. In stead of focussing on how much we will progress, grow and gain, we should just as much focus how much we can recede, destroy and lose. In addition to that there should also be an acceptance of the cyclical and random nature of the programs that iterate into our future. A principle of cost. What makes a self-learning algorithm suited for political purposes in this context is that their usually based on such principles, an algorithm rather assumes that it is wrongthan that it is right. In an iterative process it would seek to provide a solution that is less wrong than the solution it provided before, explicitly making use of randomness in their process of doing so.
The human being and our flawed common sense
Besides understanding what policy-making should take into account, in terms of progress or cost, it is of equal importance to focus on how decisions and ideas are formed. Existing ideas have the power to shape or restrain our reality in the form of a common sense. It is the task politics to reflect on these existing ideas and challenge them where needed. Rancière describes this as the distribution of the perceptible, a distribution and redistribution of space and time, place and identity, speech and noise, the visible and the invisible. According to Rancière "[p]olitical activity reconfigures the distribution of the perceptible. It introduces new objects and subjects onto the common stage. It makes visible what was invisible […]".44 However politics is not immune for the restraints they are supposed to challenge. The context of what is considered normal has great impact on our understanding of things, we therefore should not neglect the construction of our common sense. In this chapter I want to draw lines between a human construction of common sense and how machines are dealing with such a concept. Using ideas from this machine-construct also helps us to look at our human-construct differently.
Gilles Deleuze has written on the workings of our society in a way that can help us grasp how a common sense can be constructed, but also how it can be trapped in a suboptimal position. While we often claim to live in a free society, Deleuze describes a shift of forces rather than an absence of them; it therefore does not necessarily follow that our common sense is free from external restraints. In Postscript on the Societies of Control, Deleuze responds to Michel Foucault's division of societies' history into sovereign societies and disciplinary societies. He specifically builds upon the latter notion, which he separates into two sequential modes: one mode he again refers to as — Foucault's — disciplinary societies and the other as societies of control. In a disciplinary society power is distributed through hierarchy, creating a series of reversed tree structures which Deleuze refers to as "vast spaces of enclosure", for example those of a family, school or factory. A society built on downward forces, regulation, laws and taxation. In societies of control the power of the institution fades and is distributed throughout the system. Deleuze describes the idea that this society acts "[…] like a self-deforming cast that will continuously change from one moment to the other, or like a sieve whose mesh will transmute from point to point."45 A decentralised network of nodes. It describes our current society. If we were to assume that, just like power, common sense is also distributed throughout such a decentralised network, you could think that common sense is free-flowing and egalitarian. However, Deleuze describes the network as a mechanism of control and not as an open structure. While everything is connected in a distributed network, it does not mean there is no direction or force involved. Due to the forces within this omni-directionality, the flow or dispersal throughout the network is restricted and directed in many ways. Much like swimming in a strong current: you are free to swim in every direction, but only the strong will really achieve that freedom. Alexander R. Galloway looks at the work of Deleuze through the analogy of the internet. He refers to this restrictive aspect of the network as technological control. This control is, according to Galloway, inscribed in the workings of the protocol. Here of course he describes the technological protocols that constitute the internet, but the term protocol can also be read more generically as something that enforces what is considered normal, common sense. What Galloway suggests is that "[t]he internet is a delicate dance between control and freedom. […] In other words, at the same time that it is distributed and omnidirectional, the digital network is hegemonic by nature; that is, digital networks are structured on a negotiated dominance of certain flows over other flows. Protocol is this hegemony. Protocol is the synthesis of this struggle." While the internet is often perceived as democratic, decentralised and uncontrolled, "nearly all Web traffic must submit to a hierarchical structure to gain access to the anarchic and radically horizontal structure […]".46 Deleuze and Galloway thus both describe our Western society and the internet, which are generally perceived as predominantly free, from an opposing, controlled perspective. In terms of common sense, the ideal situation would show a common sense that is constantly challenged and updated to an improved version of itself. Through politics, not control.
Machine learning is often involved in finding an answer or a classification from a pool of data. If we transfer the idea of a common sense to this computational context, it could be described as the current solution or classification. Just as common sense is the current sense, both are subject to change. The search for such a solution or classification usually entails looking for the maximum or minimum answer. One example of such a search would be to process an image showing the numerical character '2' and to classify it as such. The maximum answer — or in other words the best possible answer — in this case is obviously to conclude that the image contains the number '2'. A suboptimal answer would be to classify the image as '7'; the shapes are quite similar, but the answer is of course incorrect. The fact that the algorithm is occasionally incorrect is fine, but it needs to be punished in order to train it, so that it will improve its answering capabilities over time. Discarding the wrong answers is therefore a very important feature of the algorithm. That ideas do not per se improve in similar ways is explained by sociologist Barry Schwartz. He introduces two terms, thing technology and idea technology. He states that in thing technology objects — which are badly designed and are therefore false — die of natural causes and disappear "into the ether". Meaning that nobody would ever buy a bad device and recommend it to others, so it will be replaced by an improved version. But in idea technology, he says, this is not necessarily the case: false ideas can live a long and prosperous life. For example, when an idea dominates large parts of society, it is very hard to discard it as simply being false. He calls this phenomenon ideology; as an atheist he is referring to religion, but also to ideology in a much broader context.47 In idea technology common sense does not necessarily make perfect sense, although ideally the two should come closer and closer.
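The punishment of wrong answers can be sketched in a few lines of code. The two tiny 3x3 'images' and the perceptron-style update below are crude assumptions made purely for illustration; real digit data would be far richer, but the mechanism is the same: every wrong classification triggers an update that pushes future answers towards the correct one.

```python
# Crude 3x3 "images": 1 = ink, 0 = blank; stand-ins for a '2' and a '7'.
TWO   = [1, 1, 1,
         0, 1, 0,
         1, 1, 1]
SEVEN = [1, 1, 1,
         0, 0, 1,
         0, 0, 1]
DATA = [(TWO, 1), (SEVEN, -1)]          # label +1 for '2', label -1 for '7'

def train(epochs=100, lr=0.1):
    weights, bias = [0.0] * 9, 0.0
    for _ in range(epochs):
        for pixels, label in DATA:
            score = sum(w * p for w, p in zip(weights, pixels)) + bias
            prediction = 1 if score >= 0 else -1
            if prediction != label:     # a wrong answer is punished:
                weights = [w + lr * label * p for w, p in zip(weights, pixels)]
                bias += lr * label      # the update discourages the same mistake next time
    return weights, bias

def classify(pixels, weights, bias):
    score = sum(w * p for w, p in zip(weights, pixels)) + bias
    return '2' if score >= 0 else '7'

weights, bias = train()
print(classify(TWO, weights, bias), classify(SEVEN, weights, bias))   # expected: 2 7
```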
As described earlier, self-learning algorithms are designed to discard false solutions more rapidly. Let us take another mathematical problem as an example, that of the knapsack. This is a problem of combinatorial optimisation where the challenge is to find the highest possible value within the restraints of the given dimensions of the 'knapsack', filling it by choosing from a set of items, each given a volume and a value. Whereas in the previous problem the right answer could be known upfront, in this example the best answer is unknown before we enter our search and might even remain unknown. Problems where we already know the answer are usually problems that are better suited for the human mind, but where this is not the case a machine tends to outperform our human brain. In the knapsack problem, for example, a human might start by filling the available volume with the most valuable items, regardless of their volume, resulting only in a reasonable score. From that fairly disappointing result a human would most likely try another approach, now calculating the value-volume ratio for each item. If you start filling the knapsack with those items that have the highest ratio, you would probably end up with a slightly better score, but it might still not be the best possible solution. A machine learning approach could start very differently, by virtue of computational power. Instead of starting from the principle of the highest gain, it could start from the principle of evolution. Let us say you fill the knapsack ten times randomly to start off this process. The scores would probably be on the low end, unless you are lucky, but this is only the first step in the evolutionary process. Generation one. To populate the second generation of solutions you would select only the best answers and try to evolve them into better answers by combining them, a process of pairing. This process could iterate over many generations until you have found a solution that does not seem to improve any longer. You could conclude this if the solution has not changed over a given number of iterations, for example a thousand generations. It will most likely be a good result, but unfortunately it still might not be the best possible solution to the problem. The issue at hand is the risk of getting stuck in what is referred to as a local minimum or maximum. You might be able to find a good local answer in the process of evolution, but there could be better local minima or maxima to find, and amongst them also the global and absolute minimum or maximum. Being at the top of Mont Blanc might make you feel you are on top of the world, but you cannot see that there is an even higher mountain elsewhere, Mount Everest. This would be a clear example of a local maximum. In physics, when you are stuck in a local minimum, the way to get out of such a situation is literally to apply force. In machine learning this is really not that different: to get out of a local minimum or maximum you have to shake things up. In our knapsack problem this could be achieved by adding new randomly generated solutions to the pool, which can pair with already good solutions. Chance offers us input from outside of our local area. Another way would be to use a committee of machines, several machines that simultaneously work on the exact same problem, thereby preventing the committee as a whole from falling into the trap of locality. Together they act as an open society where every machine has the freedom to contribute to the solution, a mathematical freedom of speech if you will.
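A minimal sketch of such an evolutionary search for the knapsack problem could look as follows. The items, capacity and population numbers are invented for the example, but the structure is the one described above: a random first generation, selection of the best answers, pairing them into a new generation, and injecting fresh random solutions to shake the search out of a local maximum.

```python
import random

# Hypothetical items as (value, volume) pairs and a knapsack capacity; any set would do.
ITEMS = [(10, 5), (40, 4), (30, 6), (50, 3), (25, 2), (15, 5), (35, 7), (20, 1)]
CAPACITY = 12

def fitness(solution):
    """Total value of the chosen items, or zero when the knapsack is overfull."""
    value  = sum(val for (val, _), take in zip(ITEMS, solution) if take)
    volume = sum(vol for (_, vol), take in zip(ITEMS, solution) if take)
    return value if volume <= CAPACITY else 0

def random_solution():
    """A random yes/no choice for every item: one member of generation one."""
    return [random.randint(0, 1) for _ in ITEMS]

def pair(parent_a, parent_b):
    """Combine two good answers: every item is inherited from either parent."""
    return [random.choice(genes) for genes in zip(parent_a, parent_b)]

def evolve(generations=1000, population_size=10, immigrants=2):
    population = [random_solution() for _ in range(population_size)]   # generation one
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        best = population[: population_size // 2]                      # keep only the best answers
        children = [pair(random.choice(best), random.choice(best))
                    for _ in range(population_size - len(best) - immigrants)]
        newcomers = [random_solution() for _ in range(immigrants)]     # random input against locality
        population = best + children + newcomers
    return max(population, key=fitness)

best = evolve()
print(best, fitness(best))
```

A committee of machines would then simply be several independent runs of this evolution, each starting from its own random generation one, with the best answer of the whole committee taken at the end.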
It is important to transfer our understanding of the trap of locality from mathematics, physics and coding to the realm of the common sense. There are countless historical and contemporary examples where the availability of information or knowledge did not — immediately — lead to an improved common sense. Our society therefore does not at all seem to function as the above-mentioned committee of machines. The directional powers in the distributed network that forms our society potentially prevent us from embracing new possibilities, because they appear too radical from the perspective of our current locality. Consequently, our current position in the mesh of possibilities might appear as radical as any other position, depending on the point from which it is perceived. For all we know, what to us seems the way forward only drifts us further away from true progress.
Political automata
One of the main questions that derives from what is described before is: can self-learning algorithms create better political policy than humans? I believe that human assumptions are flawed in several ways, and that the principles that constitute the workings of self-learning machines and algorithms might make them better suited to make humane and righteous decisions than humans themselves. However, what is maybe more important than this technical aspect is the implications such a development would have. In the search for an answer to the question of whether we can create better policy, we should ask ourselves how we would define better policy. This is something we will only understand once we know what kind of society we are aiming for. We as humans still have to determine what is humane and righteous. If we were to build these self-learning algorithms into political automata, they would offer us a chance to rethink our own function within politics. The automata themselves would not per se be political, they would only engage in political production, but they would give us the opportunity to become political ourselves. It would mean a separation of politics and policy-making altogether. The political automata force us to externalise our political beliefs as parametrical input for the self-learning algorithms. A recent example where this necessity surfaces, and the technological sphere touches the political sphere, is the research of Jean-François Bonnefon, Azim Shariff, and Iyad Rahwan on autonomous vehicles and the need for ethics in their programming. They pose the dilemma of who should be sacrificed in case a self-driving car is involved in an inevitable accident with pedestrians: the driver or the passers-by?48 Such technological dilemmas force us to determine, formalise and program in advance what we perceive as more important, in this case self-preservation or equality. There are two elements at play here, the first of which is the scale of abstraction. If we were to program the political automata, it would be redundant to provide training data and pre-made answers for every single issue they need to solve; the machine would be obsolete in such a case. Inevitably the self-learning algorithm should operate with only minimal parametrical input from the electorate, which asks for a more abstract approach to politics. Rather than voting for a specific policy, voters would for example now have to choose how compassionate their nation should be, or how risky. That brings us to the second element, that of weight distribution. This is closely related to the first element, because we do not only have to choose what kind of society we desire, but also on which desires we put more weight. And when we put weight on one, we cannot simultaneously put the same weight on another; it has to be distributed. Or in the case of the autonomous car, we have to choose between individual and collective desires. Both elements relate to the optimisation of the self-learning algorithms: while they can be optimised to create policy in all kinds of configurations, we have to decide what we optimise them for. This technological shift entails that politics as an institution has to move closer towards what is now considered the field of philosophy in order to deal with the larger questions that lie in front of it. What has to be externalised in order to program political automata also has to be internalised again in politics in order to reconfigure its function.
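What such parametrical input and weight distribution might look like can only be hinted at; the dimensions, numbers and candidate policies in the sketch below are entirely hypothetical placeholders. The point is merely that the electorate's abstract desires are turned into weights that necessarily trade off against one another before any optimisation can score a policy with them.

```python
# Hypothetical abstract preferences expressed by an electorate; all names and
# numbers are placeholders, not a real model of political desire.
raw_preferences = {"compassion": 7, "risk_appetite": 2, "individual_over_collective": 4}

# Weight distribution: the weights sum to one, so putting more weight on
# one desire necessarily takes weight away from another.
total = sum(raw_preferences.values())
weights = {desire: amount / total for desire, amount in raw_preferences.items()}

# Candidate policies scored on the same abstract dimensions (again placeholders).
candidate_policies = {
    "policy_a": {"compassion": 0.9, "risk_appetite": 0.2, "individual_over_collective": 0.3},
    "policy_b": {"compassion": 0.4, "risk_appetite": 0.8, "individual_over_collective": 0.7},
}

def weighted_score(policy):
    return sum(weights[desire] * policy[desire] for desire in weights)

best = max(candidate_policies, key=lambda name: weighted_score(candidate_policies[name]))
print(best, {name: round(weighted_score(p), 3) for name, p in candidate_policies.items()})
```

The self-learning part of the automaton would then search for policies that maximise this weighted score, in the same iterative, less-wrong fashion sketched earlier.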
A special thanks to N.F. de Leij (UvA) who guided me through the process of this essay.