**5. Discussion**

What is the cost for a member of Homo sapiens to become Homo technologicus in the way described, and what might the consequences be? Clearly the realisation of such entities presents enormous questions that affect all aspects of human society and culture. In attempting to answer such questions, a string of positives and negatives appears. Standing still is not an option. In the extremes, if humans opted by some global agreement for a non-Homo technologicus future (if that were possible), could the end result actually be an intelligent machine superculture as described in [34], leading to the singularity and loss of control on earth to machines? Conversely, if humans globally opted for a Homo technologicus future, could society and culture cope with such a distinct non-linearity in evolution?

Some argue that any view of the appearance of superhuman cyborgs can be seen as unwarranted 'metaphysical' speculation [35]. On the other hand, it could be felt that humankind is itself at stake in any case [36]. A viewpoint can then be taken that either it is perfectly acceptable to upgrade humans, turning them into Homo technologicus with all the enhanced capabilities that this offers, or alternatively that humankind is just fine as it is and should not be so tampered with [36].

The most important issue here is that we are considering a completely different basis on which the Homo technologicus brain operates—part human and part machine in its nature. When the nature of the brain itself is altered, the situation is complex and goes far beyond anything encountered with the mere physical extensions of case 1. Such a Homo technologicus entity would have a different foundation on which any of their thoughts would be conceived in the first place. From an individualistic viewpoint, therefore, as long as I am myself a Homo technologicus, I am happy with the situation. Those who wish to remain Homo sapiens, however, may not be so happy.

With a brain which is part human, part machine, Homo technologicus would have some links to their human background, but their view on life—what is possible and what is not—would be very different from that of a human. Their values would relate to their own life, and Homo sapiens may not figure too highly in such a scenario.

One aspect is that Homo technologicus would have brains which are not stand-alone but rather are connected to each other directly via a network. A question therefore arises: is it acceptable for Homo sapiens to give up their individuality and become mere nodes on an intelligent machine network? Or is it purely a case of individual freedom—if an individual wants to so upgrade, then why not?

Some questions are obvious. Should every human have the right to be upgraded? If an individual does not want to, should they be allowed to defer, thereby taking on a role in relation to Homo technologicus rather like a chimpanzee's relationship with a human today? How will the values of Homo technologicus relate to those of Homo sapiens? Will Homo sapiens be of any consequence to Homo technologicus, other than something of an awkward pain to be removed if possible?

To conclude, however, we must be clear that with extra memory, high-powered mathematical capabilities (including the ability to conceive in many dimensions), the ability to sense the world in many different ways and, perhaps most importantly of all, communication by thought signals alone, Homo technologicus will be far more powerful, intellectually, than Homo sapiens. It is difficult to imagine that Homo technologicus would want to voluntarily give up their powers or would pay any heed to the trivial utterances of Homo sapiens.

If, now, a cow were to enter a room full of humans and proceed to make cow noises (moo or boo!), it would be extremely unlikely for the humans in the room to say collectively what a wonderful idea the cow has—yes, we will all do what the cow wants immediately. No, the cow's noises would simply be ignored, and she would be removed from the room and shortly killed. So, in the future, when a Homo sapiens enters a room full of Homo technologicus members and says something like "I don't like what you're doing", it would be extremely unlikely for the Homo technologicus members in the room to say collectively what a wonderful idea the human has—we will all do what the human wants immediately. No, the human's noises would simply be ignored, and they would be removed from the room and shortly killed.

**Conflicts of Interest:** The author declares no conflict of interest.
