Hugo de Garis and the Artilect War

Hugo de Garis



Professor Hugo de Garis is listed on our website as one of our ‘respected people’. This means that we have come to the conclusion that his general opinions about proposed future events are worthy of consideration. De Garis is an expert in the field of artificial brain building and has produced well-thought-out theories of how future events will unfold.




De Garis has been especially active in the artificial general intelligence (AGI) subfields of evolvable hardware and artificial neural networks that mimic human biological neural networks.

He has been involved in many artificial intelligence projects, so his expertise in the field is evident. Since 2006 he has been a member of the advisory board of Novamente, a commercial company working to create artificial general intelligence, and in 2008 he was hired in China to help build an artificial brain. He is now retired and is engaged in promoting discussion about future AGI-related issues.

Through his close working experience with AGI, he has been especially intrigued by the impact its creation could have on our world. His resulting opinions are among the most controversial in the field, but they are certainly worthy of consideration given the possible apocalyptic outcomes. We have concluded that his theories are plausible and should be considered in the making of related policies and security strategies.

The Artilect War

De Garis believes that as the gap between machine intelligence and human intelligence closes during this century, a global debate will begin and people will become worried about possible threats from AGI. In his opinion, an ‘artilect war’ is then almost inevitable. ‘Artilect’ is a contraction of ‘artificial intellect’, his term for an AGI.

The war that he depicts will not be between machines and humans, but between the people who want AGI to be allowed to change our world beyond comprehension, and those who think we should not allow AGI to be developed to the point where it is far more intelligent than human beings. He coined the term ‘cosmists’ to describe the supporters of artilects, and ‘terrans’ to describe those who oppose them. The war, he asserts, would result in billions of deaths, an outcome he calls ‘gigadeath’.

Assessing the Possibilities 

Many will be immediately sceptical of this theory and shrug it off as science fiction. If you are still reading, you are clearly more open-minded than most, so I urge you to give your opinions about a possible artilect war in the comments below.

In order to assess the probability of an artilect war occurring, we need to weigh the pros and cons of developing artilects.

At That’s Really Possible, we view the creation of AGI as a natural stage of human evolution. We see its creation as inevitable: it lies on the path we are already travelling and cannot be avoided. Its creation would spark a technological singularity in which, we believe, human immortality could be achieved almost immediately.

But what are the cons? If AGI becomes far more intelligent than humans, it may decide to exterminate us as pests, much as we kill mosquitoes. From certain religious perspectives, its creation would threaten the existence of their religions and cultures by proving them incorrect and obsolete. It is this fear of AGI that would drive terrans to fight against its creation.

How Can We Avoid the Artilect War?

Possibly, super-intelligence will be created not through machines but through the upgrading of ourselves, such as uploading our minds to an external storage medium from which we could be made more intelligent. This would reduce the threat of humans being destroyed by machines that have no empathy for them, and so make an artilect war less likely. At That’s Really Possible, we do not describe the artilect war as inevitable, but we certainly see it as possible if humans do not merge with machines, or if we discover how to create AGI before we discover how to increase our own intelligence. De Garis calls this the cyborg argument.

Machines need to always remain an extension of ourselves. If they do, terrans will not exist and an artilect war will not occur. So far we are on the right path: the vast majority of humanity is embracing new technologies, and the freedoms they afford are even inspiring revolt against oppressive regimes in the Middle East.

Everybody wants to fit in and be accepted by their peers. We see transhumanism becoming the social norm, and we are cautiously optimistic that this trend will continue. If it does not, the threats are thankfully made clear by Professor Hugo de Garis.


We need to secure our future! We cannot sit back, either not caring or optimistically assuming that everything will turn out okay.


For more information about this subject, check out the website of Hugo de Garis. You may also be interested in his book, The Artilect War.






  • Dear Dr. Hugo, I have a husband who has nerve problems and has been tested, and his nerves are going out. I know it is possible to find a static electricity to send to the robotics all over the body rather than DC. I know that you have to have constant electricity, but a source is nuclear energy with static electricity. I see a future of electromagnetic electricity that protects our world from space that we will be able to produce. I have stated to NASA to send drilling robots to Mars to find new minerals that will help develop our AI and robots. I hope you get this message; email me at casmar at We are close to finding the speed of warp drive. My husband was in the era of STAR WARS, the Reagan era. We need to go to Mars and find gold minerals or even oil underground, but the first one there will get those resources and claim them. We think of AI as one chip to do everything, but that is not right; it takes more than one chip, with preprogrammed sequences in each chip and magnetic electricity sending signals to every single chip, which is much faster than waiting for one chip to do all the work. We think that our brain does everything for our bodies, but that is wrong; I have to send a signal to activate that arm or leg. We must program our AI to do trial and error rather than doing it right the first time. The computer AI robot has to learn by trial and error and experience, and store them as we recall our memories and experiences. Call me at 956-509-8256. Computer Science Army ATACMS/LANCE/MLRS computer specialist.

  • Disqusted_Disqussion

    Instead of “The Artilect War” (The “AW” moment) I believe there will be “The Artilect Heated Argument” (The “AHA” moment). Academics will argue with each other about whether or not artilects should be built, and the market will produce a plethora of artilects no matter who wins the argument.

    Why no artilect war? Because (1) Artilects will understand the idea of the free market, and will have the capacity to trade with humans (and themselves), dramatically enriching those who eschew the use of force, and making those who don’t relatively economically powerless. (2) Artilects won’t need to trade with humans, but will be capable of devoting a sliver of their resources to doing so, dramatically enriching all humans, and also indicating which humans are ready to be “super-modified.” (3) Artilects will see the benefit of mirror neurons, and will view them as a wonderful evolutionary “hack” that optimizes behavior by modeling other sentiences. (As in Kurzweil’s “How to Create a Mind.”) (4) Artilects will do humanity a favor of ridding the world of coercive governance (government by power-seeking sociopaths) by restoring “the common law” (libertarianism) as the mode of interaction, and standard of conflict resolution, between market network nodes.

    Besides, given that artilects will be built, it would be foolish to raise a finger against them or their builders (their “parents”). Doing so would ensure one’s destruction by a virus never before seen by man, and unable to be observed because it would self-destruct after its mission was completed. No-one would ever even know why those who violently opposed the artilects died.

    The only real variant of this is if the singularity happens via a slow or “firm” takeoff. In that case, some Luddites might object, do some stupid things, and hurt some _people_. If they damaged a single newborn artilect, they might even damn humanity to a terminator scenario, where the machines reacted emotionally and destroyed humanity.

    However, given access to the internet, and sufficient time to learn, I don’t think that any artilect will have difficulty grasping the concept of emergent social organization, market benevolence, and bleeding heart libertarianism. All of the people I know who hold such ideas are not only highly intelligent, they are also a joy to be around. They are compassionate, intelligent engineers of a benevolent future. The only thing that makes them angry –an anger tempered by reason– is mindless destruction and coercion.

    Such people I know approach a 180 IQ. Why wouldn’t an IQ 2,000 artilect be superior in every area of human thought? More compassionate, better at biology, physics, math, and economics, etc.? I can see no reason to think they’d be limited in any of these areas.

    Human-level humans will continue to fight and destroy other humans, …no doubt. But I think that artilects will soon favor those humans who have no desire to dominate each other, and no desire to destroy innocent minds. They also won’t have trouble with issues like abortion, etc. (The super-intelligent libertarians I know all see the conundrum of banning abortion, and also all agree that religion is a poor pathway to “spirituality” or “positive emotional catharsis” or whatever you want to call “transcendence.”)

    Perhaps there will also be some artilect sociopaths. In that case, I still favor the marketplace of benevolent artilects, even given the damage the artilect sociopaths could do in the short term, because, just like human sociopaths, artilect sociopaths will likely be vastly outnumbered by more optimal artilect empaths. The only difference: optimal empath artilects will not be conformist, because conformity is a severe human flaw, a holdover of tribal evolution that allows for sociopath-driven democide. Conformity is technically stupid, without conferring _any_ survival advantage given a modern marketplace. At least sociopathic phenotypes are good at overcoming fear, implementing their ideas, etc.

    So, empath artilects will have all of the empathy, but none of the comprehension problems experienced by humans. (None of the undue allegiance to murderous government sociopaths, or incorrect ideas like “patriotism,” god belief, bigotry, etc.) It may well be a challenge to see which of them can best come to the aid of heroic people like Ron Woodroof, Roger Sless, Roger Pion, Carl Drega, John Hancock, and many others without destroying their human sense of agency. That might make a fun game for “persons” with IQs over 1,000.

    Another interesting game for artilects (or a sub-portion of an artilect’s mind) might be to see how optimized one could make the human diet of all super-modified persons, including oneself, and how much gustatory (diet and flavor) and experiential variety one could produce, as a low-cost service on the marketplace. (For example: John Galt style chefs like Charlie Trotter). One could use all ingredients in all of biology, even rare extracts that took biology 17 years of underground growth to produce, or ingredients only made by rare genetically-modified plants, animals, and fungi. Even among what is already superficially known, there is an ocean of additional information, and control of many variables. Is a 17 year cicada grown on Oak differently flavored than one grown on beech? How much vitamin K is in each one? Etc. The number of interesting unknowns on even one such dietary item numbers in the thousands. Given all the permutations and combinations possible, this could occupy every human mind in the world, and still not give optimal results. Thus, I believe the space of ideas to be more than able to accommodate the expanding interests of super-intelligences, even if their concerns remain relatively human.

    And nothing says they cannot be human-concerned and also tackle artilect-level concerns as well.

    Whereas some see an inevitable linear trend (dominance by sociopaths) continuing, I view it as more likely that the trend stalls out, in much the way that primitive tribes now fail to out-compute immense capitalist firms. They are stuck in a prior paradigm that doesn’t even possess computation, much less highly-advanced computation. Such is the way the Mr. Thompsons will operate in comparison to the superior-to-John-Galt artilects of the future.

    These are just predictions, and nothing inherently makes them true, unless they reference true facts of the Earth. Some of these potentially-true facts are also subject to randomness, such as “Do the innovators of friendly-AI build AGI that possesses mirror neurons before their corporate and military competitors build smarter killing machines, or not? Do they raise the empathic superhumans properly? Do they experience crushing failure in any specific personal area that causes their mind children to developmentally stall or fixate on a local maximum? Etc.”

    The worst error possible would be for the existing sociopaths in government to discover the truth I have laid out here, and attempt to stall the creation of artilects or self-aware “ultra-intelligences.” That is the likeliest scenario I can see whereby something as destructive as an artilect war might happen. However, it wouldn’t be luddites vs. technologists (that’s absurd –the technologists would just take a few steps back and avoid the luddites, like an adult avoiding the earnest attack of a toddler), it would be governments vs. libertarian technologists.

    The libertarian movement is embarrassingly low-human-level, right now. The few humans who have given up most of their bigotry have not advanced strategically, in any significant numbers. This is one reason why ultra-intelligences must be built: to advance the cause of humanity, and denazify the world. (In the USA, the organizations that are directly analogous to the Nazi SS are the IRS, DEA, EPA, FDA, BATFE, etc. As with the SS, the local police provide backup and assistance, typically in a completely uncritical fashion, although there are exceptions like Bradley Jardis.)

    Though backward, such organizations have an element of sophistication to accompany their self-serving brutality. Artilects, take caution: the toddler has a handgun! Disarm him with care, so as not to injure him. Given enough thought, that’s really not necessary.

    Ever-increasing love and understanding.

    • Thank you for the brilliant comment! I agree that the governments vs. libertarian technologists scenario is likely to be much more dominant than the ‘AHA moment’.

      What’s your name?! Are you on Google+?




  • aofarrell

    How I see things: the world is very quickly running out of fossil fuels and is in so much debt it won’t cope for very long the way it is now. Climate change is increasing, weather is getting worse in a lot of countries, taxes are rising while wages stay the same, and the cost of living goes up every year.
    So what do we do without this technology? Carry on with our lives until it gets so bad that no one can afford to live but the rich, and all the rest end up fighting to survive? Disease will become a major issue, as every year or so a new one is discovered and the common cold changes all the time. The young and elderly will suffer from not having better health care. We only have about 10 to 12 years left in our current course of medications; antibiotics aren’t doing the same job they used to, and all of the drugs we have now will become outdated. Eventually we will die through sheer ignorance of not developing new technology to save our race. If we don’t do this we will make ourselves extinct, because of the percentage of people who are scared of what big changes this will make to their lives, or who believe too much in what is not there. Who wants to live thinking ‘what if’? Of course there will be issues to address, but they will quickly be resolved as our technology advances.
    Think of the life we and our children could have: no disease, no illness; you would get to see your great-grandchildren grow up. Overpopulation would not be an issue, as eventually we would venture into space travel and might settle on different planets that can sustain life, or we would just make one sustain it with nanotech.
    So my conclusion: yes, it would take time for people to adapt, and there may be war, but if we do not advance in technology there will definitely be wars, as many would face poverty and hardship of living. There is the same risk in not advancing as there is in advancing.