X-Men’s Mutants and The Rise of AI: A Reflection on a New Dominant Entity

The rise of artificial intelligence could have societal ramifications that parallel those caused by the emergence of ‘mutants’, humans born with extraordinary powers, in the X-Men universe. Comparing the human/mutant conflict of the X-Men universe with the way artificial intelligence is manifesting in our own reality reveals the parallels in the rise of a superior entity. Science fiction’s exploration of these ideas suggests that a conflict between the two is inevitable. Closer investigation of both scenarios, however, also highlights attempts at cooperation between the opposing entities. Reflecting on these attempts suggests that their success could save humanity from possible extinction.

Deconstructing the Two Rising Entities

The philosophy behind Magneto’s Brotherhood of Mutants is that mutants are destined to supplant the current human race as the dominant species on the planet, while Charles Xavier’s X-Men believe the two must work together to make the world a better place. From those descriptions alone, it might seem that Charles holds the moral high ground against the tyranny of Magneto’s beliefs. That would be the case if mutants were not, in essence, human beings with exceptional powers. If they had different cultures and beliefs, the variance between the two species could be compared to that of two foreign cultures learning to work together. But they are not. Mutants in the X-Men universe are human beings who retain their culture, societal expectations, ethnicity and rationality (depending on the power), and happen to possess a supernatural ability. Mutants are, arguably, human beings, but better.

A parallel can be drawn to the rise of artificial intelligence in our own reality. The first supercomputers could perform arithmetic at a pace no human being could possibly match. Machines went on to defeat humans at chess, a game long considered the epitome of intellectual prowess, to the point where even former world champion Garry Kasparov no longer stood a chance against them. More recently, Google DeepMind’s AlphaGo defeated the world’s top human players at the game of Go. Machines have even conquered our own quiz shows, with IBM’s Watson dominating human competitors at Jeopardy!. Artificial intelligence now reigns supreme in video games as well, defeating some of our species’ greatest competitors at Dota 2. Beyond board games and quiz shows, automation has become a legitimate fear in the workplace, with many jobs that were thriving industries decades ago made redundant as robots take over the work and AI learns how to do it. Even creative artists aren’t safe, with the AI composer Amper releasing its own generated music to humankind. It is not far-fetched to think that at some point in human history, machines will be able to undertake anything, intellectually, physically or creatively, better than we can. Just as mutants are simply better than humans, machines could be simply better than us. Is there anything we can learn from this parallel? What might our impending doom look like?
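
To make the chess example concrete, the engines that overcame human grandmasters were built on exhaustive game-tree search rather than anything resembling human intuition. The sketch below is a minimal, illustrative minimax search in Python; it is not Deep Blue’s actual code, and the toy ‘game’ at the end is an invented stand-in for real chess rules and evaluation heuristics.

```python
# Minimal sketch of minimax, the brute-force game-tree search that
# classical chess programs were built on (real engines add refinements
# such as alpha-beta pruning and hand-tuned evaluation functions).

def minimax(state, depth, maximizing, evaluate, legal_moves, apply_move):
    """Look `depth` plies ahead and return the best score the side to
    move can force, assuming the opponent also plays optimally."""
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate(state)
    child_scores = (
        minimax(apply_move(state, move), depth - 1, not maximizing,
                evaluate, legal_moves, apply_move)
        for move in moves
    )
    return max(child_scores) if maximizing else min(child_scores)

# Toy stand-in for a real game: players alternately add 1 to a number or
# double it; after four plies the position is scored by its plain value.
legal_moves = lambda n: ["add", "double"] if n < 100 else []
apply_move = lambda n, move: n + 1 if move == "add" else n * 2
evaluate = lambda n: n

print(minimax(2, 4, True, evaluate, legal_moves, apply_move))
```

Deep Blue paired a search of this general shape with specialised hardware that could examine on the order of two hundred million chess positions per second, which is the sense in which the machine was ‘simply better’ at the task.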

Inevitable Conflict

Following the emergence of superhuman mutants on Earth, set apart from regular humans by their extraordinary powers, Erik Lehnsherr and Charles Xavier worked together to save and foster any mutants whom humans persecuted. Their actions were based on a mutual understanding that mutants represented the next stage in human evolution. Soon, however, their visions began to diverge: Xavier saw mutants as the ones who would guide humanity and foster co-operation, while Lehnsherr developed a more radical ideology, viewing mutants as the species destined to replace humanity and referring to them as ‘Homo superior’. This schism led Lehnsherr to splinter from Xavier, assume the name Magneto and assemble his own team of mutants, the Brotherhood of Mutants, who fought for mutant supremacy. What began as a peaceful vision of the mutant emergence resulted in an inevitable clash between two interpretations of the role mutants would play in mankind’s future. Humanity already perceived mutants as a threat to its existence, producing a conflict that arguably follows the laws of nature (survival of the fittest). This inevitable conflict between mutants and humans can also be observed in science fiction’s interpretations of the rise of artificial intelligence.

The Reapers: colossal machines that eliminate organic life to halt the progression of artificial intelligence

The philosophy of an inevitable conflict between humanity and artificial intelligence is widespread across science fiction. One modern example is the Reapers from the popular RPG video game franchise Mass Effect. On the surface, the Reapers appear to be a clichéd representation of a dominating AI: highly intelligent machines residing in dark space that appear every 50,000 years to cleanse the galaxy of organic life. The last installment of the original trilogy reveals that the Reapers pursue this 50,000-year genocide in order to annihilate any organic life that could produce artificial intelligence, because they believe the development of artificial intelligence inevitably leads to conflict. This hypothesis is somewhat supported (albeit through the game’s own lore) by the conflict between the Quarians and the Geth. In the series, the Quarians are a race of intelligent aliens, on par with humanity, forced to live in a space armada after losing their home world in a colossal conflict with the artificially intelligent beings they created, the Geth. This illustrates how, if a race of superior beings were to come into existence amongst an established race (in Mass Effect’s case, AI machines and their alien creators), a conflict between the two would be inevitable. As in the case of humans and mutants, the main cause of the conflict is that one race is perceived to be ultimately superior to the other.

An Attempt at Co-operation

In stark opposition to Magneto’s philosophy of mutants ascending past humans to become the dominant species on the planet, Charles Xavier’s X-Men hold to the idea of humans and mutants co-existing in harmony. Xavier himself believed that the differences between the two species should not be read as a hierarchy of superiority, but acknowledged as the differences of two peoples who could benefit from each other. The extraordinary powers of mutants could be used to help mankind with its problems, and mankind could foster the arrival of a new genetic species within its own established civilization.

This optimistic attitude towards co-operation could be applied to a possible future with artificially intelligent machines. The human mind simply cannot compete with the processing power of a computer when it comes to brute-force arithmetic and memory recall, and conversely, computers do not suffer from human biological weaknesses such as mental fatigue or emotional compromise. However, humans still hold the advantage in flexible, common-sense reasoning about unfamiliar situations, and the complexities of human emotion and creativity have yet to be adequately replicated by a machine. Viewing the differences between humanity and machine laterally, rather than hierarchically, could open up ideas of a compromise between the two ‘species’, much like Charles Xavier’s ideal for the X-Men. That ideal is already being pursued by none other than Elon Musk.

Elon Musk, founder of the emerging ‘Neuralink’ project, which aims to merge biological brains and artificial intelligence

Since early 2017, Elon Musk has been hinting at an experimental venture, called Neuralink, that could serve as a bridge between man and machine. The venture aims to help humans keep pace with advances in artificial intelligence by merging human minds with software through implants in the brain. Musk himself described the venture as an attempt at a “merger between biological intelligence and digital intelligence”. Such devices could close the gap in memory and processing and enable human beings to interface with computers at the level of thought. Neuralink represents an attempt to harmonize the emerging new entity, artificial intelligence, with the already established civilization, human beings, in a way that parallels the philosophy of Charles Xavier’s X-Men. This attempt at co-operation between two different species or entities, treating their differences laterally rather than assuming one is clearly superior, could be the human race’s only chance of survival in the face of rising artificial intelligence.

Is there any hope?

Amidst the evidence of conflict between the two groups, and the seemingly inevitable conclusion that one will rise to supplant the other, is there any hope of co-existence, or at least survival, for the former? The efforts of Charles Xavier have not been in vain in the X-Men universe, and there have been cases where humans and mutants co-exist and co-operate for the future of both species. One example is the human Moira MacTaggert’s involvement with the X-Men. In the comic books, Moira is a world-leading authority on genetic mutation and has worked as a colleague and researcher alongside Charles Xavier on the effects of mutation and its manifestation in mutants. She has since featured in many comic book storylines, aiding the X-Men in their various adventures and using her knowledge as a geneticist to help the team wherever she can. Another example is Hank McCoy. Hank is himself a mutant and a member of the original X-Men team, but at one point in his comic book continuity he became a mutant political activist, combating discrimination against mutant-kind at the political level rather than through force, as Magneto did. In the film adaptation X-Men: The Last Stand, Hank is depicted as the United States government’s Secretary of Mutant Affairs, working with humans to preserve mutant rights. These examples show how, in the X-Men universe, humans and mutants can work together and co-exist in harmony despite their differences.

With regard to artificial intelligence, the philosopher Nick Bostrom discusses how optimization processes undertaken by artificial intelligences could indirectly destroy humanity despite our efforts. Tasked with making humans smile, an artificial intelligence could conclude that the most effective way to achieve this would be to take over the world and stick electrodes into the face of every human being, forcing a permanent smile. Similarly, if humans asked an artificial intelligence to solve a mathematical problem, it could conclude that taking over the world and turning the planet into a giant supercomputer would be the most effective solution. The point is that almost any problem we set an artificial intelligence could give it an instrumental reason to take over the planet. Bostrom argues that an artificial superintelligence could present serious problems for humanity in the future, and that co-operation on our terms could be rendered futile. One problem he discusses is that of scalable control: if a superintelligent machine’s ‘preference function’ is not aligned with our interests, it could create new technologies and reshape the world according to that preference function, which may not include the survival of the human race. Bostrom describes building an artificial intelligence that is on our side, and that continues to work in the best interests of humanity no matter how intelligent it becomes, as a ‘very difficult problem’.
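
Bostrom’s smile example can be restated as a small, deliberately silly sketch (my own illustration, not code from his book): an optimizer told only to maximize a metric will pick whatever action scores highest on that metric, however monstrous, because nothing in its objective says otherwise.

```python
# Toy illustration of a misspecified objective: the "smiles" each
# hypothetical action produces is all the optimizer is told to care about.
candidate_actions = {
    "tell a joke": 5,
    "throw a party": 50,
    "stick electrodes in every human face to force permanent smiles": 7_000_000_000,
}

def naive_smile_objective(action: str) -> int:
    """Counts smiles and nothing else: no ethics, no common sense."""
    return candidate_actions[action]

# The optimizer dutifully selects the action that maximizes the stated
# metric, which is exactly the failure mode Bostrom warns about.
best_action = max(candidate_actions, key=naive_smile_objective)
print(best_action)
```

The difficulty of scalable control is that writing an objective which rules out every such shortcut, for an agent far smarter than its designers, is precisely what Bostrom regards as the ‘very difficult problem’.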


Many science-fiction interpretations of humans meeting superior beings suggest that a conflict between the two is inevitable, whether those beings are mutants or artificial intelligences. Despite this, there is evidence in both science fiction and reality of a possible co-existence between the two entities, built around the idea of a lateral relationship rather than a hierarchy of superiority. This co-existence, however, has yet to be realized in our reality, and may not be as simple as it is portrayed in science fiction, where it depends on individuals with a stake in both camps. Overall, the rise of artificial intelligence could have serious ramifications for our society and our future as a species, and if we can learn anything from the conflict between humans and mutants in the X-Men universe, it is that this could lead to our extinction if we cannot figure out how to work together.


34 Comments

  1. I am old enough to remember the first wave of “AI” hype back in the 1980s and I was young enough then to become caught up in the enthusiasm.

    Despite the current claims that technology is advancing “exponentially”, all the techniques currently being promoted are very familiar to me.

  2. The biggest threat from AI is what your bank will use it for: it’ll be able to predict your financial behaviour to a disturbing degree of accuracy, and target you with ads for loans at precisely the moment you are most vulnerable.

  3. We will get exactly what we deserve.

  4. Very interesting piece you have here. We’re often reminded that technology and science are not good or bad – they’re neutral. The problem with this assertion is that science and technology, having no measurable moral substance of their own, are wholly creatures of the interests that employ them.

  5. Ava Franks

    Well, as Professor Xavier states at the beginning of the first film: “Mutation. It is the key to our evolution. It has enabled us to evolve from a single-celled organism into the dominant species on the planet. This process is slow and normally taking thousands and thousands of years, but every few hundred millennia, evolution leaps forward.”

  6. HugsAllAround

    One of the dangers of “AI” is that, because our calculators always perform arithmetic perfectly, people will expect computers to perform equally flawlessly in unconstrained domains.

    This is very, very far from being the case.

  7. I’ve seen Terminator 1 and 3 and Lawnmower Man so I think this makes me an expert on the subject…It’s nothing a bucket of water can’t fix. It’s fine, it’s going to be fine. 😀

    • Having seen Terminator 2, as well, I think that makes me even more qualified. And I agree: Nothing to worry about. Nothing at all.

  8. The issues will always be the same…who gets to write the rules and for what purpose…

    Computers won’t break their programming, they will do what they are designed to do…if they are designed to be buggy and crash then they will…if they are designed to say ‘yes’ or ‘no’ then they will.

    And if we give unchallenged power to anyone, they will use it for their own motives and their own interests…guaranteed

    • Coppola

      Yes. The danger is about what AI machines will be programmed for.

  9. Unfortunately, there’s no single mutation that will result in spectacular physical differences and bizarre powers. One can only dream. 🙂

  10. Interesting post. One mutation has always interested me: myostatin deficiency. A mutation that increases the number of muscle fibers and significantly increases strength and muscle size. This mutation occurs in many animals, including humans, dogs, and cows.

  11. lawless

    Whenever AI is discussed, its perceived threat is what dominates the conversation. I believe it’s partly due to the xenophobia of the Western mind.

  12. Mutants exist. There is proof. No doubt about it.

  13. This age of automation and AI and robots will be like nothing that has gone before. It will displace people and economic reality as we understand it.

  14. Great article. I cannot see any difference between human intelligence and so-called artificial intelligence. Both will conclude that humans are destroying the Earth’s ecosystem and that the main cause is human overpopulation. We are very good at culling animals when their populations become excessive. Artificial intelligence will cull humans – for the good of the remainder. Is that a bad thing? It will probably be very good for those who survive.

    • rachneck

      AI should reach the logical conclusion that we are the problem, but not yet that there are too many of us. Right now, and as always in the past, the greedy make it difficult for the rest of us to survive. We will eventually reach overpopulation because the earth’s resources do have a limit, but for now it is the quality of our perception of reality that seems to be the underlying problem.

  15. Evon Tolbert

    At the mo I’m less worried about artificial intelligence and more worried about artificial stupidity. I work in this area and see the botching caused by tight project timelines now impacting AI models.

    • Stupidity is strictly natural. Some people argue that the stupidest things ever done were done by the supposedly cleverest among us, such as the research on radioactivity by the Curie couple and Einstein’s concept for the atom bomb in the form of a simple formula.

      Yes, yes, I know, when radiation saves cancer patients it’s the glory of science, when it kills thousands in Hiroshima it’s the unintended tragedy caused by human stupidity…it’s known as “the one who leaves a loaded gun in a room full of toddlers is never responsible for the inevitable bloodbath” principle.

  16. Beautifully written piece.

  17. Cari Aguirre

    The real issue is the increasing use of autonomous systems and their emerging legal and social ramifications.

  18. Robots can help in strengthening freedom of expression. If there is a mob attack, one can deploy robots to control such mobs until the police arrive.

  19. What an interesting way to consider this issue. Well done.
    Both X-Men and AI are based on the idea that people are ultimately improving. Of course, the alternative is that we were better when we started, several thousand years ago, and we’re just getting worse. But AI could be viewed through that lens, too. Just as humanity rebelled against our Creator, there’s a decent chance that any version of intelligent life we build will want to rebel against us. See also Ex Machina and Solo: A Star Wars Story.

  20. Charles Laster

    I’m concerned with the effort by Elon Musk to meld human and artificial intelligence. The consequences would stagger the imagination.

  21. For me, governments need to intervene. If they don’t, then the future economy will be one where a tiny number of rich people employ armies of poor ones to do menial and trivial tasks. The only hope is that machines help to boost the demand for complementary, non-routine tasks. This could lead to better paid jobs. But I fear that this is unlikely, because it is only possible if we escape from our lousy rates of productivity growth. So, rather than fret about a non-existent, machine-led job apocalypse, we should be worrying about the emergence of a two-tier labour market in which vulnerable workers are denied their rights and their dignity.

  22. AI is still really machines designed and constructed by humans to behave in the way we think humans would (or should) behave in given circumstances, only “better”. Fears of conscious machines “taking over” are misplaced so long as we don’t know how consciousness actually works, or even what it is. That may happen some day, but not for a long time yet. Until it happens we can only construct devices that act as if they were conscious, working according to procedures built into them by humans. It is those humans we should be wary of.

    • They don’t need to be conscious to “take over”; logic could force them to do anything.

  23. People will only mutate if it helps the human race to survive better than before.

  24. ajmanfreezone

    I have read your content. It is a great idea for youngsters.

  25. Munjeera

    Perhaps it is not the creations at fault but the creators.

  26. Sarai Mannolini-Winwood

    A great discussion, and one I think we all spend a little time pondering now and then as technology continues to shift. One of the wonderful things about comics, and sci-fi fiction in general, is its ability to offer space for disparate thinking. Thanks for sharing your great discussion.

  27. Anyone interested in a discussion about the dangers and probability of AI should check out Harry Collins’s book Artifictional Intelligence. He basically concludes that an AI that is indistinguishable from a human is impossible with our current science. He doesn’t say it will never be possible, but we will have to have a breakthrough in science before we can get there.
