Technological change is happening rapidly, arguably faster than ever before. In the midst of this transformative effort, it is important to think rightly about how to guide such change, that we might leave a world to our children which is better, not worse, than our own.
Axioms, of course, are articulated principles, the truth of which can be perceived and discussed but not proven, and from which conclusions of interest can be logically derived. The following are perhaps not all true axioms in the strictest sense; nonetheless, they and a brief discussion of them are offered as a starting point which in itself ought to be discussed, and from which one might argue to the wisdom (or lack thereof) of various current and future technological trends and developments.
Axioms.
0. Technology is a human creation.
Therefore certain statements about humans must be made in order to understand technology and think about how best to develop it.
Starting with this one:
Axiom 1: Conversion is possible.
That is to say, it is possible to turn, by an act of the will, towards righteous goodness, and conversely also to turn away from it. Or, to put it another way, human beings are free. Free not only with regards to peripheral or unimportant decisions, but fundamentally free with regard to the shaping of their own destiny, for good or ill.
Moreover, conversion is not the same as intellectual enlightenment. It is possible to see reality with deep insight, and to be readily capable of complex logical reasoning, and yet to continue to be a liar or a thief, or vain, or manipulative... Although these and other vices do in fact tend to lead towards illogical behavior and a clouding of perception, they in themselves are not necessarily strictly speaking illogical; to be evil and to be in possession of extremely shrewd insights are in no way mutually exclusive.
Conversely, it is possible to be simple, not in possession of deep knowledge, nor capable of very abstract thought, nor even possessed of any particular facility with straightforward logic, and yet to be honest, generous, humble, forthright, kind, merciful, a faithful friend...
One might say that about the fundamental things, such as the importance of kindness, all agree in theory and some merely fail in practice; that intellectual enlightenment has in fact led to unity on these things, and we just fail to live out our hard-earned conclusions. However, the practical result is the same: there are differing degrees of commitment to kindness, and different weightings of its value relative to other, sometimes conflicting ideals, such as honesty or competitiveness, in actions which speak louder than words.
But there are intellectual disagreements which seem immune to solution by enlightenment as well. One man says health is to be valued even above business. Another says it was necessary to divorce his wife to pursue his career. Yet another says that he does all for God. A fourth says that what is important is for all three of the above to do what they feel is right, that they are serving the common good by their earnestness. But in so doing he in turn is offering a different account of what really matters than the other three, who are (let's say) in fact living with health, career, and God, respectively, as their highest values, not individual earnestness. These four men are living contradictory lives. They cannot all be right, and there may not be any effective intellectual arbitration to be had between them. Thus we live out differing takes on what the fullness of a human life is, what it is to thrive and be great, holding to differing ordering principles and contradictory highest goods.
Conversion, unlike intellectual enlightenment, can in fact lead to unity. But, being the fruit of freedom, it is a double-edged sword: almost all reject it to one degree or another (and of course there are competing things one could convert to), leaving us to at least some degree disunified.
Axiom 2: Human life is not an emergent property of mechanical systems.
The human body, with its muscles and sinews, bones and synapses, is without question a magnificent mechanical system. But life transcends mere mechanics. It is not simply the sum of its parts, nor does it arise solely from the intricate interplay of physical components. We human beings are not, as some would have us believe, simply advanced electronic neural networks connected to actuators and an energy supply. One argument for this follows from Axiom 1: free beings are indeterminate and intrinsically unpredictable, and there is no attaining indeterminacy from a determinate system like a computational neural network, which by definition only yields calculable results. Chaotic and widely varied and sometimes out of line with our intuition, yes, but truly novel, incalculable, and free, no.
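The determinacy claim can be made concrete with a toy sketch. A trained neural network with fixed weights is, at inference time, a pure arithmetic function: the same weights and the same input yield the same output, every time. The network and numbers below are entirely hypothetical, chosen only for illustration:

```python
# A toy two-layer network with fixed, arbitrary example weights.
# Inference is pure arithmetic, so the result is fully calculable:
# same weights + same input -> same output, on every run.

def relu(x):
    return max(0.0, x)

def tiny_net(inputs, w1, w2):
    # Hidden layer: weighted sums passed through ReLU.
    hidden = [relu(sum(w * x for w, x in zip(row, inputs))) for row in w1]
    # Output layer: a single weighted sum.
    return sum(w * h for w, h in zip(w2, hidden))

W1 = [[0.5, -0.2], [0.1, 0.8]]  # hypothetical weights
W2 = [1.0, -0.5]

a = tiny_net([1.0, 2.0], W1, W2)
b = tiny_net([1.0, 2.0], W1, W2)
assert a == b  # deterministic: no run ever differs
```

Randomness in training or sampling does not change the point: given the same seed and the same data, those procedures too are calculable, merely complicated.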
Needless to say, the above argument applies to human life but not necessarily to, say, plant life, although I think it is axiomatic and intuitively obvious that plants and animals are not mere machines either. But let’s limit ourselves to the easier claim that human beings are more than machines for now. This is one of the reasons that we are so fond of watching sports, why everyone loves the story of the underdog. There is nothing more exciting, more renewing of the human spirit, than witnessing someone “beat the odds,” that is to say, watching the operation of freedom as a person chooses the less likely, the glorious but painful, chooses to vanquish rather than be vanquished, come what may.
Axiom 3: Technology should serve human thriving.
Machines are to serve man, and not vice versa. But what is it to serve man? Some things seem fairly obvious: premature death is bad, being unable to see or communicate with one's loved ones is bad, being able to eat plentiful food and not suffer starvation is good, and so on. But to really answer the question of what human thriving is, one finally must have an answer to the question of what human purpose is. And this is where things get very, very difficult, precisely because of the possibility of conversion as laid out above! If our first axiom holds, then it follows that no statement of human purpose, regardless of how well thought out and well articulated, will have any great likelihood of being adopted by all living people; some will simply turn willfully away.
This problem is the fundamental problem of 21st century western civilization. It is at the root of almost all current political and cultural conflict. We have lost a great deal of the foundational agreement about human purpose upon which our civilization was built, and have thus far failed to find sturdy ground, a shared purpose, upon which to refound our society. More discussion of this to follow, but what is axiomatic is that technology is for the sake of human thriving, and can only succeed to the degree that it is an embodiment of a shared vision of this thriving.
Axiom 4: Technology cannot be all things to all people.
Despite the possibility of a vast number of important individual choices, we human beings do in fact live as social animals in one common world. This means that many things must simply go one way, or the other. It is not possible for instance to simultaneously live in a world in which smartphones do and do not exist. The fact that I can personally choose not to have a smartphone does little to change this. The world is radically different when every single person is carrying an internet connected computer in his pocket, and the experience of being the one person without such a device is very different from the experience of being simply another person in a world where such devices do not exist.
One can argue, of course, that the world is far better with smartphones, which may very well be the case; but the point is that there is no individual freedom with regard to this question: society must decide to go one way or the other. One could also argue that such motions are simply inevitable and that there is really no decision to be made, but this is not true; there have absolutely been cases of societally rejected technology. Human cloning, for example, became possible in principle but was broadly condemned as unethical and has not been pursued. Again, the point here is not to debate whether or not this should have happened, but simply to point out that, with regard to individual adoption of technology, a libertarian policy in which every individual is allowed to do as he chooses may very well be the right one; but with regard to technological development itself, the question of what world we want to live in, there is in fact no libertarian solution, and there will necessarily be some mechanism by which the human course is plotted and dissenting wills either moved or forced to acquiesce.
Axiom 5: Technology is not morally neutral.
There is a certain contingent of people (who are often vaguely anti-technology but prefer to cast themselves as pro-technology) who love to repeat statements like the following: “technology isn’t good or bad, it’s just a tool, and like all tools is as good as the way it’s used and the cause it’s used for.” This is wrong. It’s wrong about tools and it’s wrong about all technology.
Of course, there is no arguing with the fact that pretty much any artifact can be used for good or evil. One can commit a murder with something innocuous like a crowbar, and build a wonderful nuclear reactor with the uranium from an A-Bomb. But, once again, this merely addresses individual use and fails to actually address questions about the societal development of technology. There are absolutely some technologies which we have a moral imperative to at least take reasonable means to develop, and others which we have a moral imperative not to develop.
Furthermore, the development of technology itself is a part of human history, a history which is not static and in which real change takes place. Technological development plays a role in societal history which bears at least some analogy to moral development in the individual human life. Done rightly, it makes man stronger, better, capable of knowing and doing more. Done badly, the opposite of all of these things. As in the moral life, the statement “if you’re not moving forwards you’re moving backwards” applies here. We have no choice but to get it right.
Axiom 6: Technological systems cannot take responsibility.
To many, this claim is simply self-evident. Others, however, reject our second axiom, holding that human life itself is an emergent property of material things, in which case there is no fundamental difference between a human life and technology, and as a result no reason not to believe that technology could in principle take responsibility for things.
One brief argument against this is the following: there is no evidence at all that there has been or ever will be an unrepeatable technological system. If atoms can be arranged a particular way once, in principle they can be again. On the other hand, there is no evidence at all that there has been or ever will be a repeatable human life. And in fact this is part of the wonder of the human person, the reason that every person deserves love and respect: each and every person is a singular event, a unique character, a life never seen before and never to be seen on earth again.
One thing that follows from this is that humans can be effectively punished, while machines cannot. Take the most severe punishment of all, the death penalty. To condemn a human who has done wrong to death is to deliver the ultimate blow, to put an end to that person and their unique life and everything they stood for. While it is still possible of course that another person might be inspired to similar acts, say by the memory of the deceased, that second person will act as a separate person with their own will, their own perspective, and their own chance to turn away from the bad towards the good.
Let's say, analogously, that a neural network is trained in such a way that it winds up murdering innocent people. What would be the analog of the death penalty in this case? One could shut off the power, destroy the data center, shut down the mines that made the silicon that went into the chips, and yet there is still no theoretical reason at all that the weights of the neural network might not have been saved somewhere and stored away, the materials to make the hardware sitting on a ship offshore, and the entire data center ready to be reconstructed exactly as it was. There is no true and final termination of an idea or of the possibility of the physical instantiation of that idea. As a result there is no way to truly inflict consequences on a technology. This is perhaps in some ways a trivial argument, but it seems to me difficult to dispute.
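The reconstruction point can be sketched in a few lines: a network's weights are nothing but data, and a byte-for-byte copy restores them exactly, not approximately. The weights here are hypothetical placeholders:

```python
import json

# The 'model' is nothing but numbers. Serialize the weights...
weights = {"w1": [[0.5, -0.2], [0.1, 0.8]], "w2": [1.0, -0.5]}
saved = json.dumps(weights)  # e.g. a copy stored away offshore

del weights  # 'destroy' the running instance

# ...and any later reconstruction is exact, bit for bit:
restored = json.loads(saved)
assert restored == {"w1": [[0.5, -0.2], [0.1, 0.8]], "w2": [1.0, -0.5]}
```

Destroying every running copy does nothing to the possibility of an identical copy; the "individual" here was never more than a pattern.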
We humans must take the blame where necessary, acknowledge that we have power, and use that power for good.
Conclusion:
The goal here was to lay out axioms to guide the development of technology. These axioms should be more certain and fundamental than the conclusions drawn from them, so we’ll keep any extensive statement of their consequences separate, perhaps for a future article.
For the time being let us simply restate the axioms all together along with the briefest possible sketch of what they all might mean:
Technology is a human creation. To guide it well involves getting certain things straight about humanity. The first of these is that humans are capable of conversion. The second (which follows from the first) is that humans are not mere machines. Having said these things, and noting something of the complexity and subtlety of serving a being which is free and therefore in some sense indeterminate, one is in a position to state that machines ought to serve humans, albeit with the acknowledgement that the great task remaining will be determining what really constitutes this service. Note that the difficulty and immensity of this task is generally understated or ignored completely in our current milieu.
Axiom 4 then heads off the commonly (if generally implicitly) proposed solution, which is the proposal that technology can simply serve all of our disparate ends at once, perhaps as mediated by a free market. This is not possible, and it does not really take tremendously deep thinking to see the impossibility.
Having then thoroughly established the difficulty, Axiom 5 adds an element of urgency and moral duty to the problem.
Finally, Axiom 6 places the ball squarely in our own court. There is no other earthly being, including our much vaunted technology itself, coming to solve this for us.
The shaping of history, of which technological development is a significant part, is our responsibility. It is urgent, inescapable, and cannot be resolved solely through intellectual effort. We must engage our full being—heart and mind—invoking the inspiring power of our lives and the persuasive power of our tongues to build a community freely aimed, in action and artifice, at what is ultimately good.