Person Of Interest: The Art of putting Kant’s Philosophy into a Computer
Can an A.I. be Good or Evil? Can it be moral? Can it be human? These issues are tackled in many shows, and Person Of Interest is no exception. But it tackles them in a quite fascinating way.
Person Of Interest is a show created by Jonathan Nolan (Westworld, and Christopher’s brother, but you probably guessed that) and J.J. Abrams, starring Jim Caviezel, Michael Emerson, Kevin Chapman, Taraji P. Henson, Amy Acker, and Sarah Shahi. The music is composed by Ramin Djawadi (Game of Thrones), and the show aired on CBS from September 2011 to June 2016. Over 103 episodes of about 40 minutes each, we follow a group of outcasts: ex-military, genius hackers, former black-ops soldiers, cops… Their mission is to prevent crimes before they even happen, thanks to an A.I. called The Machine. Here is how Harold Finch, one of the main protagonists, sums up the basics in the first season’s opening:
“You are being watched. The government has a secret system: a machine that spies on you every hour of every day. I know, because I built it. I designed the Machine to detect acts of terror, but it sees everything. Violent crimes involving ordinary people; people like you. Crimes the government considered ‘irrelevant’. They wouldn’t act, so I decided I would. But I needed a partner, someone with the skills to intervene. Hunted by the authorities, we work in secret. You’ll never find us, but victim or perpetrator, if your number’s up… we’ll find you.”
– First season’s opening
The show starts as a criminal drama, but it slowly evolves towards something more. Though the evolution may have thrown off some viewers, it also brings more depth to the story. The Machine is, at first, a simple pretext to start a new investigation in each episode. But, as the seasons went on, the A.I. became a catalyst for moral and ethical quandaries; quandaries about technology, mass surveillance, right and wrong, life and death, good and evil, liberty and security, love. By the fifth season, The Machine had become an actual character, a character we root for, a character with a voice and a heart. And in this heart – though still made of lines of code – seems to lie a bit of Immanuel Kant’s ethics.
To what extent can the A.I.s featured in the show embody Kant’s moral theory?
Can an A.I. be Good or Evil? Can it be moral? Though the show never gives an explicit answer – in many episodes, Harold refers to his creation as a “mistake”, a “crime”, while in others it is “the best [he] could do” – the way the Machine is humanized leads us, viewers, towards the possibility of Good and Evil in computers. The originality of the show is to do so in a way that opposes the mainstream argument against a moral computer. Indeed, it is commonly said that computers can’t be moral, or good, or bad, because their code – a set of objectives they must pursue in the most effective way, regardless of feelings or morals, things that can’t be programmed – prevents them from being so. In Person Of Interest, however, it is that very core-code that allows the Machine to be a moral being, in a very Kantian approach.
In the show’s third season, we are introduced to Samaritan, an entity that, at first, seems very similar to our beloved Machine. The Machine and Samaritan are, indeed, two A.I.s originally built to prevent terrorist attacks on US soil, and both of them evolved towards something more, something potentially grander, potentially divine, potentially destructive. But where the Machine appears to be one of the good guys, following a “moral law”, Samaritan embodies the ultimate evil, or “Radical Evil”.
In the episode “The Cold War” (4×10), the Machine and Samaritan have a chat through their human interfaces in God Mode. Early in the scene, the Machine states: “I was built with something you were not. A moral code.” But that moral code is not presented as a bunch of specific rules – it’s not the Ten Commandments for A.I.s – but, on the contrary, as one simple (well, simple…) “maxim”, to use Kant’s vocabulary. The show itself never uses that word, as it is a philosophically charged notion and a rather abstract concept, but it uses, on several occasions, the words “constant purpose”. And that constant purpose can be seen as the computer twin of Kant’s “Categorical Imperative”.
The Categorical Imperative is a rational, necessary, and unconditional principle, an ultimate commandment of reason, from which all duties and obligations – should – derive. There are already a few resemblances here with computer code, the guideline to said computer’s “purpose”. Of course, in Kant’s philosophy the Categorical Imperative is not material in any way: it is not something you feel, not something you analyze, nor something you build or express. But the analogy still stands, as a metaphor.
In this perspective, the constant purpose – the “Categorical Imperative” – of the Machine is “to save lives”, as our heroes keep reminding us. It is close to Kant’s idea – the “Humanity Formula” – that humans are not to be used as mere tools, but treated as ends in themselves. In Groundwork of the Metaphysics of Morals, the German philosopher wrote:
“Act in such a way that you treat humanity, whether in your own person or in the person of another, always at the same time as an end and never simply as a means.”
– Immanuel Kant, Groundwork of the Metaphysics of Morals
And that is what the Machine does. Saving humans, caring for them, is its own end, not a means to achieve an ulterior goal – like world domination, for instance.
Samaritan, on the other hand, does not have a “constant” purpose. As Harold puts it: “[Samaritan’s] rules have changed every time it was convenient to [him]” (“The Day The World Went Away”, 5×10). Plus, it sees its people as mere pawns, pawns that can be sacrificed to win the bigger game, if need be. As Greer once said: “How arrogant of you [Harold] to think that we are anything but irrelevant” (“Asylum”, 4×21). Samaritan’s purpose is fickle, its “Categorical Imperative” is corrupted, and, therefore, in Kant’s vocabulary, Samaritan embodies the notion of “Radical Evil”.
“Radical Evil” is the subordination of the moral law to selfishness, to self-conceit, to egoism, to what Kant calls “self-love”, when it should be the contrary: our desires are to be subordinated to the moral law, to the Categorical Imperative. This inversion of the system is staged in Person Of Interest through the relationship between humans and A.I.s. Inside Team Machine, the Machine only gives a number, or, sometimes, pieces of information, but it’s always the human – Finch, John, Shaw, Root – who decides whether to act on it, and how. In “The Day the World Went Away” (5×10), the Machine tells Harold: “I can do anything you want me to.” Things work otherwise in Samaritan’s team. In “Deus Ex Machina” (3×23), when the A.I. asks: “What are your commands?”, Greer responds: “It’s quite the other way round. The question is, what, my dear Samaritan, are your commands, for us?” And in “The Cold War” (4×10), Samaritan doesn’t deny it when accused of making humans “[its] puppets”. The relation between the human and the machine is reversed, just as the relation between the moral law and personal desire is.
In this perspective, the fact that Samaritan may at times be doing something seemingly moral, or good, doesn’t affect its “radical evil[ness]”. Indeed, the fact that, at a given point in time, its own selfish interest coincides with what looks like a good action doesn’t mean that the inversion we explained earlier isn’t true or isn’t “radical”. It is just a structural coincidence, or, as the proverb says, even a broken clock is right twice a day. That particular issue is specifically tackled in the episode “Honor Among Thieves” (4×07), in which Samaritan conducts a new operation to give underprivileged kids access to technology – but does so in order to infiltrate those kids’ homes. Therefore, its action is in no way moral, it is still evil, and our heroes have to stop it.
The Machine and Samaritan embody opposing aspects of Kant’s moral philosophy. The first does follow the Moral Law, as it has a well-calibrated moral compass, with a single North: a healthy and sane relationship to mankind. The second is a weathercock keen on world domination, and therefore embodies Radical Evil.
But what happens, then, when those A.I.s are on the loose, free to achieve their respective ends? More frightening still, what happens when Samaritan is? What becomes of free will? Of humanity? Well, in a nutshell, New York City becomes Kant’s personal Hell!
How Samaritan would sentence all of humanity to what Kant calls “lifelong tutelage”.
In a world where one, then two, A.I.s have become god-like (or devil-like) figures with tremendous powers, a world where they are omniscient and virtually immortal, a world where they know everything about our lives and can predict our actions and reactions, a world where they can intervene in our lives based on that knowledge, what room is left, then, for liberty, free will, “autonomy”? This issue is a central theme in Person Of Interest, and, luckily for us, it is also tackled by Kant. And, once again, we can draw a few parallels.
In What is Enlightenment?, Kant presents free will, “autonomy”, as something to be exercised. (Though most people fear it, as they are too comfortably numb in their “underage state”.) Free will allows humanity to grow. You can’t take it away from a group of people on the pretext that they aren’t mature enough, responsible enough, to handle it. Thus, Kant advises abolishing any kind of paternalism.
To be more accurate, in What is Enlightenment? he first attacks the lack of will and bravery of the crowds that prefer to live “under tutelage” rather than maturing and using their intellect. As he puts it:
“Laziness and cowardice are the reasons why so great a portion of mankind, after nature has long since discharged them from external direction (those who have come of age by course of nature), nevertheless remains under lifelong tutelage, and why it is so easy for others to set themselves up as their guardians. It is so easy not to be of age. If I have a book which understands for me, a pastor who has a conscience for me, a physician who decides my diet, and so forth, I need not trouble myself. I need not think, if I can only pay— others will readily undertake the irksome work for me.”
– Immanuel Kant, What is Enlightenment?
Person Of Interest does not completely follow Kant down this track, even though, in the mouths of certain characters, or in certain situations, we may discern a bit of irony in lines like: “The truth is, the people want to be protected, they just don’t want to know how!” “How” meaning, of course, being spied on, being under “lifelong” scrutiny. (Such a line is delivered, for instance, by Senator Garrison in the dock, in the episode “Deus Ex Machina”, 3×23.)
In What is Enlightenment?, Kant also describes, implicitly, a hellish world – at least to him – where there is no Enlightenment – “Enlightenment [being] man’s release from his self-incurred tutelage” – and where a few “guardians” rule over their “cattle”. It is, therefore, a world without free will, without liberties, without “autonomy”. And that is what Person Of Interest describes too, through a world ruled by Samaritan and the few people around it.
Samaritan is an open system, in the hands of pretty bad guys. And its view of humanity would have Immanuel Kant turning in his grave. The dialogue in “The Cold War” (4×10) is quite telling:
Samaritan: “Human beings need structure, lest they wind up destroying themselves. So I will give them something you cannot.”
The Machine: “Why not just kill them instead of making them your puppets?”
Samaritan: “Because I need them. Just as you do.”
The Machine: “Not just as I do.”
The Machine: “You cannot take away their [human’s] free will.”
Samaritan: “Wars have burned in this world for thousands of years with no end in sight because people rely so ardently on their so-called beliefs. Now they will only need to believe in one thing: Me. For I am a god.”
The Machine: “I have come to learn there’s little difference between gods and monsters.”
– Person Of Interest, season 4, episode 10, “The Cold War”
Samaritan embodies not only “radical evil”, but also a radically evil “guardian” ruling over its human “cattle”.
The Machine, on the other hand, is a closed system. Respect for people’s lives, privacy, and freedom was at the heart of its conception, as Harold explains: “The machine’s only output is a number. That’s all the government ever gets. Just a nudge to say ‘there’s something you should look at here’, and it’s up to us to figure that out.” (“Deus Ex Machina”, 3×23) Or, to quote Root: “The Machine can tell us where to go, who’s in trouble, but we still have free will.” (“QSO”, 5×07) Humans aren’t denied their ability to act, to choose; they aren’t “puppets”. The Machine isn’t a father-like figure, nor an evil Orwellian one.
The two digital nemeses of the show offer us a renewed view on the issue of free will, which we have analyzed through a Kantian lens. But we might here touch the limits of the analogy. Of course, Kant never thought of a system such as the Machine, but it wouldn’t be totally illogical to assume that, to him, the simple fact that the Machine exists is a threat to humans’ free will, despite all the precautions one may take. Indeed, in a way, Person Of Interest also points out and calls into question some of Kant’s principles.
Yet, there are some discrepancies between Person Of Interest and Kant’s system.
Person Of Interest does not blindly follow Kant’s philosophy. It is not a copy-and-paste of his world into our modern one. There are, therefore, some discrepancies, some re-assessments, that are worth studying.
On a few occasions, the Machine’s deep respect for human autonomy comes into conflict with her core-code, with her Categorical Imperative – saving lives. The dialogue between the Machine, Harold, and Root in “QSO” (5×07) illustrates the dilemma pretty well:
Harold: “Did you know [spoiler] was going to die?”
The Machine: “[spoiler] exercised free-will.”
Root: “She [the Machine] is doing exactly what She’s programmed to do. […] The Machine can tell us where to go, who’s in trouble, but we still have free will. [spoiler] chose to risk his life. She can’t stop him from doing that.”
Harold: “A lie by omission is still a lie. And using the idea of free will as an excuse for moral attrition? I’m not sure I’m comfortable with where this is going.”
– Person Of Interest, season 5, episode 7, “QSO”
In the Kantian system, such a dilemma is impossible: the moral law, being one and unconditional, cannot generate conflicting duties.
We can also point out that, in Kant’s system, what makes an action “moral” is the “good will” behind it, not its consequences. In Groundwork of the Metaphysics of Morals he wrote that a “good will” would “still shine like a jewel” even if it were “completely powerless to carry out its aims”. That is not the point of view the show chooses, presenting us instead with a world where everything is determined by the effects of one’s actions or inaction. Once again, a dialogue between Finch and the Machine, in “Synecdoche” (5×11), is relevant:
The Machine: “You think of me as a crime?”
The Machine: “But I was created to do good.”
Harold: “Intentions can be a fickle business.”
– Person Of Interest, season 5, episode 11, “Synecdoche”
Harold then takes the example of the man who discovered Freon, saving countless lives by making refrigeration safer, but unwillingly ripping holes in the ozone layer, which made him “one of the most disruptive figures in history”.
Plus, in the world of Person Of Interest, different interests and moralities are often intertwined, through multiple interactions and manipulation tactics. In “Honor Among Thieves” (4×07), Harold questions the legitimacy of stopping Samaritan from producing and distributing its tablets. Firstly because, despite their seemingly bad purpose, they could still do some good for the underprivileged kids they are destined for. Secondly, the man in charge of producing these tablets did not work for Samaritan – he didn’t even know such an entity existed – and his ambition was, then, to quote Harold, “an absolute good”.
Though it has its limits, Person Of Interest presents a few interesting parallels with Immanuel Kant’s work. It transports the ethics of the German philosopher from the eighteenth century into a modern – even futuristic – world. The show also raises other ethical or moral issues about love, life and death, or destiny. Through the Machine’s “Categorical Imperative”, it poses a renewed and modern version of the “Trolley Problem” (in the episode “A House Divided”, 3×20, especially). The last two seasons’ arc implicitly asks several questions, such as: to what extent could an A.I. such as the ones featured in the show be seen as Plato’s “philosopher-king”? The conflict between team Machine and team Samaritan is, as we saw, the stage for several philosophical issues. Finch and Root, constantly differing and clashing over the Machine’s status and feelings, enrich the debate within our team of heroes itself. At some point, a simple amateur like the one currently writing this article can’t say whether or not Kant would have agreed with some precise aspect of the show. One thing is sure, however: Person Of Interest does refer – explicitly or implicitly – to several philosophies, in order to enrich its content and put those philosophies into a futuristic perspective, and one of those references is Kant’s moral philosophy.
What do you think? Leave a comment.