Philosophy and Me, Part VI: Aristotle and the Every Day

This is the sixth in a series of posts on my own philosophical journey; the first post is here.

Toward the beginning of Aristotle’s Physics he discusses the theories of his predecessors, and makes what seems to me a radical and ingenious move with almost no fanfare.

His predecessors were interested in two related problems: the One vs. the Many, and Change vs. Stability. Some, like Parmenides, thought that everything there is, is One. There is only the One. And since there’s only the One, it cannot change, so change is an illusion. Others took the opposite point of view. Everything is change. You can’t step in the same river twice. Stability is an illusion. Reality is Many.

The thing about these arguments is that they were all about reality as a whole. And reality as a whole is really hard to get a handle on. Aristotle took their concerns and their arguments and brought it all down to human scale: instead of the cosmos as a whole, what about this house, this dog, this tree, this man?

This house is one thing, a house, but it has many parts. It is one and many at the same time. This tree was planted some years ago; it constantly gets bigger, but it remains the same tree. This man has many parts; but if you remove an arm, or a leg, it doesn’t remain an arm or leg for long; and quite possibly the man doesn’t remain a man for long either, depending on how you do it. A dog eats a piece of meat, and somehow the piece of meat becomes part of the dog–or, at least, some of it does.

Change exists. Stability exists. Things can be one in one way, and many in another. We know these things; we live with them every day. What does it all mean? How does it all work?

Aristotle went looking for the answers—but he never forgot to ground everything he did in the everyday world.

The world is bigger, richer, more complex than we can possibly imagine. Any set of first principles we might care to define cannot help but leave things out. Have principles, axioms, and postulates, by all means! We cannot think without them. But never forget the world around us, because truth is what is.

Part VII

The Human Wisdom of St. Thomas

It’s a happy coincidence for my current series of posts that today is the feast of St. Thomas Aquinas. In addition to being one of the greatest thinkers in the history of the world, he is also my elder brother in the Dominican order, and my patron saint.

Recently I got a little book by Josef Pieper called The Human Wisdom of St. Thomas: A Breviary of Philosophy. I say that it’s by Josef Pieper, but that’s misleading—except for a brief foreword, all of the text comes from St. Thomas’ own writings. Pieper has simply selected them and arranged them in an interesting and useful way.

Although St. Thomas is one of the great philosophers, he was primarily a theologian, and philosophy, the “handmaiden of theology”, was simply one of the tools he used to illuminate the glory of God. Thus, his philosophy is apparent throughout his writings…but he never attempted to write it down all in one place, which makes it hard to study.

What Pieper has done is pull brief quotations from across the vast expanse of Thomas’ work, and arrange them by topic…and then arrange them within each topic so that they almost form a continuous thread. His desire for this book was that the would-be Thomist would read a bit of it every day, so that Thomas’ principles and conclusions would sink in.

As an example of the style, the third section is titled (in Thomas’ own words),

There can be good without evil, but there cannot be evil without good.

The quotes in this section all build on this theme. Partway down the first page, for example, we see these three related thoughts:

No essence is in itself evil. Evil has no essence.

Evil consists entirely of not-being.

Nothing can be called evil insofar as it has being, but only insofar as it is deprived of part of its being.

Thus, a man who does evil is one who turns from that which would perfect him to that which diminishes him, makes him less a man. And yet, what remains of him is still good.

Pieper does provide a detailed set of citations at the end of the book; thus, I know that the four quotes I listed here are from the Summa Theologiae, the Summa Contra Gentiles, and from one of the “disputed questions”. Pieper also pulls quotes from the Compendium Theologiae, the commentaries on scripture, the commentaries on Aristotle, and a number of other short works.

In short, the whole book is remarkably pithy, and I would recommend it to anyone interested in St. Thomas and his thought. The joy of philosophy is the wonder at and contemplation of the richness of the world that it engenders, and the briefly stated ideas in this book are an outstanding place to get started with the wondering and the contemplating.

Philosophy and Me, Part V: From Chesterton to Aristotle, via Thomas Aquinas

This is the fifth in a series of posts on my own philosophical journey; the first post is here.

Time passed, and eventually I made the acquaintance of G.K. Chesterton. These were the days before Amazon was around, and when it came to buying books the thrill of the chase was everything. Wherever we went, we went to bookstores; and in every bookstore I looked to see if they had anything by Chesterton I hadn’t seen before. And eventually I acquired a copy of St. Thomas Aquinas: The Dumb Ox, Chesterton’s biography of St. Thomas Aquinas. I read it and enjoyed it, but didn’t retain much of it. Mostly what I remembered was Chesterton’s conjecture that there was something eminently sane about Thomas, something that couldn’t be said about Descartes and his successors. Then I put the book away, and it stayed put away until just a few years ago, when God’s grace led me back to Thomas and Thomas led me back to the Catholic Church (with the help of many others).

I’ve told that story before, at probably rather excessive length, so I won’t repeat it here. It suffices to say that I grew interested in Thomas and his philosophy and his theology, and began boning up on both. I blogged quite a lot of the early part of that here. But in order to understand Thomas, I discovered that I needed to understand Aristotle, and from the philosophical point of view that’s what I’ve been working on ever since.

And the fascinating thing about Aristotle, a thing that is completely retained by Thomas, is his emphasis on what we know from personal experience. Woohoo! Between the two of them, I felt like I’d come home.

Part VI

Philosophy and Me, Part IV: Gödel, Escher, Bach

This is the fourth in a series of posts on my own philosophical journey; the first post is here.

At some point during my college years, I began to read Douglas Hofstadter’s Gödel, Escher, Bach: An Eternal Golden Braid. I started with a copy borrowed from a friend, who’d gotten it (I think) as required reading for a class on artificial intelligence. Eventually I got my own copy; and it took me years to actually read the whole thing. It’s a playful, whimsical, sprawling book, which I will not try to summarize here; and it’s also a work of philosophy, though I didn’t understand that at the time. The thread that binds the book together is Gödel’s Incompleteness Theorem, a very startling result.

Early in the 20th century, an effort was started (by Bertrand Russell, among others) to put all of Mathematics on a sound footing. Mathematics is the most certain of all human knowledge, but it wasn’t certain enough. Its foundations were felt to be a bit shaky, starting as they did with our normal, intuitive grasp of number. The goal, consequently, was to define the absolute minimum number of axioms and postulates and derive all of Mathematics from them, thoroughly and systematically and for all time.

It was a bold plan. That which was true would be proven true; and that which was false would be proven false; and certainty would reign.

And into this bold project strode Kurt Gödel, who knocked it into a cocked hat.

Without going into great detail, and leaving out all sorts of nuances and caveats (not to mention the proof itself), what Gödel proved was that any sufficiently powerful system of axioms and postulates was incomplete: that there were truths expressible in the system that could not be proven within the system.
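For the curious, the theorem can be stated a bit more precisely. What follows is the standard modern formulation (incorporating Rosser’s later refinement, which lets plain consistency stand in for Gödel’s original, stronger hypothesis), not Gödel’s own wording:

```latex
% First Incompleteness Theorem, modern statement:
% if $T$ is a consistent, effectively axiomatizable theory that
% interprets enough arithmetic, then there is a sentence $G_T$
% in the language of $T$ such that
\[
  T \nvdash G_T
  \qquad\text{and}\qquad
  T \nvdash \neg G_T .
\]
% If $T$ is moreover sound (proves only true arithmetic sentences),
% then $G_T$ is true: a truth expressible in the system that
% cannot be proven within the system.
```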

In short, you’re in a cleft stick. You can make your mathematical system complete, but it will be too simple to be of interest; or you can make it cover everything, but it won’t be complete. Bertrand Russell’s project was fundamentally flawed.

I had left my philosophy class with the conviction that trying to prove the whole range of philosophical truths from a small set of first principles was doomed to failure. It might work in mathematics, but it didn’t work when you expanded your scope to all of reality. Now I discovered that it didn’t really work in mathematics, either.

Please note: Gödel’s Incompleteness Theorem doesn’t mean that mathematics is useless, or that proceeding by means of axioms, postulates, proofs, and theorems is valueless. It’s an effective tool. But its power, even in the restricted realm of mathematics, is limited.

This was a fascinating result, and not surprisingly it confirmed me in my anti-philosophical prejudices.

Part V

Philosophy and Me, Part III: The End of the Innocence

This is the third in a series of posts on my own philosophical journey; the first post is here.

My boy Hume had let me down. Contra Descartes he’d brought sense experience back into the realm of philosophy; and then he’d gone and blown it by deciding that we couldn’t really trust our sense experience to give us true knowledge about the world. And then we moved on to Immanuel Kant, and his Prolegomena to Any Future Metaphysics. Not shy, our Immanuel.

The Prolegomena is a short book, and I read the whole thing. I’m not at all sure I understood any of it at that time, as it was horribly impenetrable. (I’m told the Critique of Pure Reason is much worse.) What I took away from it (combined with what I’ve learned since) is that Kant took the next step on from Hume. There is an objective reality—but our senses don’t give us true knowledge of it. Indeed, if I understand him correctly, he says we can’t know objective reality, and that the image of reality we do have is largely self-constructed.

But surely this is madness?

And so, I came away from my brief flirtation with philosophy with three clear and distinct ideas:

First: Trying to build a complete, correct philosophy of reality from a very few first principles is doomed to failure. The mathematical model simply does not work for philosophy. (I later found this to be truer than I had realized.)

Second: There’s no point in talking with people who deny objective reality. I don’t pretend that my knowledge of objective reality is perfect; clearly, it’s far from that. But it’s stupid to doubt what you can know directly, and I certainly have direct knowledge of objective reality. Further, the great success of science and technology over the last two hundred years shows that I’m not alone. We humans are really good at knowing objective reality when we work at it, and to deny this is to be willfully stupid. And there’s no point in arguing with people who can believe the absurd.

And this leads to the third point: Modern philosophy is bunk. There’s a reason why most people think philosophy is a waste of time: philosophers say dumb things which are obviously wrong. I might have stated that third point as simply, “Philosophy is bunk,” but I was dimly aware that I didn’t have the whole story. Plato had seemed to make sense, so far as I’d read him; and perhaps there had been something of value between Plato and Descartes.

By the end of the class, at the ripe old age of 18, I’d almost completely written off philosophy as a worthwhile field of endeavor.

Part IV

Philosophy and Me, Part II: Hume

This is the second in a series of posts on my own philosophical journey; the first post is here.

When we left off, I had expressed my disappointment with Descartes: he’d achieved a new start, but at the cost of excluding most of what he knew from experience. (And at the cost of throwing away most of what had come before in philosophy, though I didn’t understand that, then.) To be fair, his was a methodological rather than real doubt—he fully expected that everything he excluded would be pulled back in during the course of his analysis.

Then along came the Empiricists; and they agreed with me (if I’m allowed to put it that way) that ignoring sense experience was a Big Mistake. The three major Empiricists were Locke, Berkeley, and Hume, and given the time available we spent our time on the last of the three, David Hume.

Like Descartes, Hume was happy to start from scratch. And his basic principle was that the only way we come to know anything is through our senses. I was overjoyed. At last, here was some sense.

And yet, there was still something wrong. Hume said that the only way we come to know anything is through our senses—and our senses are not always to be trusted. And consequently, how can we know anything for sure? How can we know anything at all? It became clear to me that Hume’s point of view, if followed to its end, led inexorably to solipsism: I exist; I imagine that other things exist, but I can’t know that for sure.

This struck me then, as it strikes me now, as a reductio ad absurdum, a reduction to absurdity. Clearly something had gone deeply wrong.

Part III

Philosophy and Me, Part I: Descartes

If I intend to blog regularly, I can see that I’m going to have to spend more time writing about philosophy, because that’s one of the main things I’m spending my time thinking about these days. I shall try to make it interesting. But the problem with writing about philosophy is that it’s hard to start in the middle, and the middle is where I’m thinking. So I plan to begin by writing a post or two about my own philosophical journey, so that you all can see where I’m coming from.

My first exposure to philosophy was in an Introduction to Philosophy class my first semester of college. The format was simple: we were given a number of original sources, we were to read them, and then we discussed them in class. Occasionally we wrote papers. We started with a small smattering of Plato, because you have to start with Plato; not enough to really appreciate him, but enough to have some notion of who he was and of who Socrates was.

And then we jumped almost two thousand years to Descartes.

Let me say that again. We jumped almost two thousand years to Descartes, as though nothing in between mattered.

This didn’t concern me at the time, mind you—and from the standpoint of modern philosophy, there is a sense in which the instructor was perfectly right. From the standpoint of Descartes and those who came after him, little of that intervening time mattered because they explicitly and consciously chose to ignore it. I’ll get to that later.

The main thing for now is that Descartes consciously chose to reject that which had come before, and to start fresh. He was a mathematician, and he chose to proceed mathematically: he wanted to start with as few principles as possible, and build up everything else from them logically. He said (I paraphrase ruthlessly), “There are many things I know…but I’m going to pretend that I don’t know anything that I’m not absolutely positively logically sure of.” As all the world knows, he finally came down to one principle, Cogito ergo sum!, “I think, therefore I am.” That was his starting point, and he worked up from there.

This process made a certain amount of sense to me; though I didn’t know it, I was a budding math major, and the certainty of math appealed to me. Descartes was trying to bring the same kind of certainty to philosophy, and I liked that. At the same time, it bugged me that he was ignoring the things he knew from experience—that he was, as I’d put it now, trying to make himself stupider than he was. And the trouble with trying to make yourself stupider than you are, as C.S. Lewis noted multiple times, is that you very often succeed.

From Descartes we went on to Spinoza, who followed the model of Euclid’s Geometry much more closely; his book was full of definitions, axioms, and theorems, and at first I found it fascinating. I also found it impenetrable; all I can remember now is that he went on and on about substances and their modes (very little of which I’d been given the background to understand), and that he ended up with something like Pantheism.

But I hungered to get on to the Empiricists: Locke, Berkeley, and Hume. They looked at the world empirically. They paid attention to what they knew from the world. They didn’t foolishly throw all that away. Surely they’d make more sense.

Part II

The Metaphysics of Minecraft

Yesterday I rambled on a bit about the difference between natural and artificial things. The gist of it, which possibly I conveyed rather badly, is that natural things, and particularly living things, have a nature that determines what they can do and how they behave—and that this nature isn’t entirely explicable in terms of a collection of atoms arranged in a particular way. This last point is controversial, of course. There are many today of the “materialist” (or, sometimes “physicalist”) camp who would disagree with me.

Be that as it may, I was recently surprised to find this distinction between natural and artificial things illustrated in a rather odd place—the computer game Minecraft.

In Minecraft you inhabit a world made of blocks. You can mine these blocks out of the world, and then make new things out of them. And there are two ways to do it.

First, you can treat the game like the world’s biggest Lego set, and build things (including truly massive structures) out of the blocks by placing them back into the world as blocks. Second, you can “craft”. This means creating a variety of things—tools, armor, new kinds of blocks—out of blocks by using something called a “crafting table”. The resulting objects are either items, things you can carry and use, or blocks that you can place in the world.

Let me give a couple of examples. First, here’s an artifact: A Dark Tower of Wizardry. It’s made of blocks. It has certain behaviors; for example, fire burns on it in various places. But it’s an artifact—its behavior is the behavior of the blocks from which it is made. It has no behavior of its own.

tower.jpg

You can make quite amazing artifacts in Minecraft: Star Trek-style doors that open with a whoosh when you walk up to them, elevators, logic circuitry, and the like. But all of them are simply artifacts that exploit the behavior of the blocks of which they are made.

Here, on the other hand, is something quite different.

enchanting.jpg

That thing in the middle, with a book floating on the top of it, is called an enchanting table. You can use it to enchant your weapons, armor, and tools to make them more powerful. It’s a single block, made on a crafting table from four blocks of obsidian, two diamonds, and a book. And the point is, it has significant behavior that does not in any way derive from the things of which it is made. Not only can you enchant things on it, the book on top moves around by itself, and letters magically fly towards it from the books on the shelves around it. It is its own thing, with its own behavior. You might almost say that it’s alive.

It has, in fact, a nature. For all that you have to make it from other things, it’s a natural object in the Minecraft world, rather than an artifact. Unlike the Dark Tower of Wizardry, it has its own Java code in the Minecraft application that gives it its nature.
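The distinction can be sketched in code. This is not Minecraft’s actual source—just a hypothetical illustration, in Java (the language Minecraft is written in), of the difference between an artifact, whose behavior is nothing but the sum of its parts’ behaviors, and a block that carries behavior of its own. All the class and method names here are invented for the sketch:

```java
// Hypothetical sketch, not Minecraft source code.

interface Block {
    String behave();
}

// An ordinary block behaves only as the stuff it is made of.
class ObsidianBlock implements Block {
    public String behave() { return "sits there, very hard"; }
}

// An artifact is just an arrangement of blocks; its "behavior" is
// nothing over and above the behavior of its parts.
class Artifact {
    private final Block[] parts;
    Artifact(Block... parts) { this.parts = parts; }
    String behave() {
        StringBuilder sb = new StringBuilder();
        for (Block b : parts) sb.append(b.behave()).append("; ");
        return sb.toString().trim();
    }
}

// The enchanting table, by contrast, has code of its own: behavior
// that does not derive from the obsidian, diamonds, and book used
// to craft it.
class EnchantingTable implements Block {
    public String behave() { return "book hovers and pages flutter"; }
    String enchant(String item) { return "enchanted " + item; }
}

public class Natures {
    public static void main(String[] args) {
        Artifact tower = new Artifact(new ObsidianBlock(), new ObsidianBlock());
        System.out.println(tower.behave());

        EnchantingTable table = new EnchantingTable();
        System.out.println(table.behave());
        System.out.println(table.enchant("iron sword"));
    }
}
```

The Dark Tower corresponds to the Artifact class: it has no code of its own, and never needed any. The enchanting table corresponds to a block with a nature: its enchant method exists nowhere in its ingredients.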

And this is the thing that’s so cool. All of the neat stuff that happens in Minecraft ultimately depends on blocks and items with natures given to them by the application. The neat artifacts you can build work by combining these natural elements so that their behaviors work together.

Just like in the real world. Everything comes down to the natures of natural objects, which are not necessarily explicable purely in terms of the parts from which they are made.

They say art imitates life. Does it ever!

Natural and Artificial Things

What’s the difference between stones, trees, dogs, and people on the one hand, and houses, pianos, and motorcycles on the other? According to Aristotle, the basic difference is that the things in the first group are natural things, while the things in the second are artificial things. That seems obvious, but to Aristotle, the difference goes deeper. Natural things are so called because they possess a nature that determines what they are, and how they behave. The oak tree outside my house has the nature of an oak tree. This determines how it grows, and how big it gets, and how hard the wood is. A dog has the nature of a dog, and so it barks, wags its tail, and so forth. Humans have human nature. All of these things are what they are because of something inside them.

Houses, pianos, and motorcycles, being artificial, a product of artifice, the creation of an artisan, do not have a nature. Rather, a piano is assembled from pieces; and many of these pieces are natural objects, each of which operates according to its own nature. Metal wire naturally vibrates and emits a tone when struck. The wood of the case naturally resonates to the tone of the wire and amplifies it. An artifact is a way of putting natural things together so that their natures all work together to achieve some desired effect.

But if artificial things don’t have a nature to call their own, they do have a property that natural things, especially living ones, generally don’t have: you can take them to pieces and reassemble them. If you take a motorcycle apart, it’s true that you no longer have a motorcycle; but if you put the pieces back together again properly, your motorcycle is as good as new.

If you take a person to pieces, on the other hand, you can’t usually reassemble them, Dr. Frankenstein notwithstanding.

This distinction is often denied by the materialists among us. A human being is just atoms, they say; we can explain everything in terms of the movements of atoms. We don’t need any natures! Atoms are good enough. Given time, we’ll be able to build new people just by assembling atoms together in the right way. And yet, there’s more to being a person than just being the right set of atoms.

Consenting Adults

There’s something very odd about the phrase “consenting adults”.

Once, I think, it meant something like this: acts performed in private by consenting adults, which cause harm to no one else, are nobody’s business except that of the two adults involved. As a legal standard of when the state is entitled to interfere, this makes a great deal of sense—at least, to a certain extent. I’m reminded of the affair some years ago when a German citizen advertised on the Internet for a person willing to be slaughtered and eaten, and got one. Apparently these were consenting adults, but this, for which God and good sense be praised, did not prevent the cannibal from being arrested, tried, and convicted, though too late to save his victim.

But these days, the phrase “consenting adults” seems to be used in ordinary speech as a moral rather than a legal standard. If it’s between consenting adults, it’s OK. Not only is the state not entitled to interfere, other observers are not entitled to disapprove on moral grounds. The attention has shifted from the particular act, performed in private, to the kind of act as discussed in public. Who are you to tell me that I shouldn’t sleep with whoever I like? We’re consenting adults.

And yet consent is no kind of moral stamp. On the contrary: far from being a precondition for morality, consent is a necessary precondition for sin. The sin lies, in fact, in my giving my consent to my sinful act. Imagine this dialog:

“I would like to sin with you. Will you sin with me? I think we will both enjoy it.”

“Yes, please, I would very much like to sin with you. Shall we do it now?”

Here we have consenting adults, agreeing quite knowledgeably that they are about to do wrong, and voluntarily choosing it. Where’s the morality in that? And yet people still trot out “consenting adults” as a reason for withholding moral censure.

There’s a related rhetorical move: the assumption that any statement of moral censure implies a desire on the part of the speaker to make the censured behavior illegal. But that’s another post.