Are you a moral realist, or a moral antirealist?
… is the question that prompted this post, that I may nevermore have it darken my philosophical door.
Let’s call an ‘ethical function’ the sort of basic unit of any ethical system - where a function is a mathematical function, and an ‘ethical’ function is one that, given some sets of input, will spit out something that includes behaviour. We might also call it a norm, though that seems a fuzzier term.
So ‘always turn left’ would be a (dizzying) ethical function, as would ‘cultivate a benign personality’ (allowing us to avoid merely defining away virtue ethics). Then we have the two traditional views of the nature of moral functions:
Moral realism: ‘For at least some sets of input, there is at least one objectively correct ethical function to apply to them.’
Eg, on the notion of torturing a kitten (vs, say, doing nothing), some people would claim that the objectively correct function is one that reliably outputs no behaviour, whatever the rest of the input. Others would say the objectively correct function outputs differing things depending on (eg) how much suffering the kitten torture would prevent.
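The contrast between these two positions can be sketched as toy functions, taking the post’s framing of an ethical function literally. Everything here - the function names, the input schema, the numbers - is my own illustrative assumption, not part of the argument:

```python
# Toy sketch: an 'ethical function' maps a situation (input) to a behaviour
# (output). The function names and the dict schema are illustrative only.

def absolutist(situation: dict) -> str:
    """Reliably outputs 'do nothing' on kitten torture, whatever else the input holds."""
    if situation.get("act") == "torture kitten":
        return "do nothing"
    return "no verdict"

def consequentialist(situation: dict) -> str:
    """Output varies with how much suffering the act would prevent vs cause."""
    if situation.get("act") == "torture kitten":
        prevented = situation.get("suffering_prevented", 0)
        caused = situation.get("suffering_caused", 1)
        return "torture kitten" if prevented > caused else "do nothing"
    return "no verdict"

case = {"act": "torture kitten", "suffering_prevented": 0, "suffering_caused": 10}
print(absolutist(case))        # do nothing
print(consequentialist(case))  # do nothing
```

The two functions agree on this particular input; the realist dispute above is over whether one of them (or some third function) is the *objectively correct* one to apply.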
It seems odd to say that data can have an objectively correct function to feed it into - sufficiently so that I literally don’t know how to parse the idea. But I’m not out to take down moral realism in one short post, only to offer an alternative, so all I’ll say here is that I wouldn’t know how to be a moral realist if I tried.
Moral antirealism: ‘For no set of inputs is there an objectively correct ethical function to apply to it.’
This is by definition the negation of moral realism. But where I don’t know how to parse X, I also don’t know how to parse ~X. So while this has an intuitive appeal, again I wouldn’t know what it meant to say I believed it.
Moral exclusivism: as far as I know I’ve invented this use of the phrase. Though I’m loth to suggest another philosophical -ism, it seems a reasonable description of my actual view, to wit:
‘Regardless of input and output, we can exclude any ethical functions for which the process of selecting and applying them entails self-contradiction or some other incoherence (I’ll call this the function process). Moreover, there is a finite set of size X of ethical functions that are thus exclusively coherent.’
I won’t attempt too hard to justify this idea here - I only want to propose it, clarify it, and give some preliminary reasons why it might appeal.
To clarify, ‘some other incoherence’ could be, for example, the function’s containing a key term with no parseable content, eg ‘tnetennba’, or (more controversially) ‘ought’.
‘The process of applying them’ spans both the content of the function and its output: eg ‘stroke all kittens’ (the function) and ‘the act of stroking Lady Grey’ (the output, given certain input, such as Lady Grey being within stroking range). ‘The process of selecting’ refers to whatever motivation drove us to choose some particular function. So a contradiction in the process could lie in any single part of it (‘stroke and don’t stroke the kitten’, ‘maximise utility and minimise the probability that I violate anyone’s rights’, etc), or could arise across multiple parts of the function process: eg selecting the only ethical function that my philosophy textbook has written in English because I’m monolingual (and evidently think that’s of fundamental importance), then finding out that the function outputs ‘flirt with the waiter in Spanish’.
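The multi-part case - a selection criterion contradicted by the selected function’s own output - can be sketched as a toy coherence check. All names and the crude English-only test below are my own illustrative assumptions:

```python
# Toy sketch: treat a 'function process' as a pair (selection criterion,
# ethical function). The process is incoherent when the function's output
# violates the very criterion that drove its selection.

def selected_for_being_in_english(output: str) -> bool:
    # Crude stand-in for the monolingual chooser's implicit criterion:
    # the behaviour must not require another language.
    return "Spanish" not in output

def textbook_function(situation: str) -> str:
    # The only function the textbook prints in English...
    return "flirt with the waiter in Spanish"

def polite_function(situation: str) -> str:
    return "smile politely"

def process_is_coherent(criterion, function, situation: str) -> bool:
    """A process coheres only if the output satisfies the selection criterion."""
    return criterion(function(situation))

print(process_is_coherent(selected_for_being_in_english, textbook_function, "at dinner"))  # False
print(process_is_coherent(selected_for_being_in_english, polite_function, "at dinner"))    # True
```

The point is only structural: incoherence can live in the relation between the parts of the process, not in any one part taken alone.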
The different possible sizes (or ranges) of X amount to a family of theories, but they have some appealing properties in common:
- The basic propositions, while still imperfectly defined, seem a lot easier to understand and either accept or reject than the moral realist and antirealist positions.
- They’re (kind of) falsifiable. For whatever constraining criterion (or value) I choose for X - eg ‘only utilitarian ethics’ - my claim can in principle be refuted by demonstrating the coherence of an ethical function process that is not such a function (or by demonstrating that at least X + 1 function processes are coherent).
- They’re claims that allow for the possibility of real progress in ethics, as opposed to moral antirealism, which seems - at least to some of its adherents - to restrict ethical discussion to essentially emotive efforts to persuade other people to ‘care about’ the same things as you do.
- They map very easily onto the question of epistemology, on which more in a later essay.
I happen to believe, but will not try to defend the proposition here, that X = 1: unsurprisingly, a form of valence utilitarianism. But even in a form where X = hundreds, or excludes only specific subcategories of claims about ethics, this would be progress.
And meanwhile, rightly or wrongly, I’m a valence utilitarian, and not a moral realist - yet I believe that anyone who argues for a different ethical system errs.
In an earlier version of this post I called it ‘moral exclusive coherentism’, which I know I invented, but that’s a horrible mouthful, and a Google search for ‘moral exclusivism’ found surprisingly little that might get confused with this. So I’ve renamed it, or at least created a nicer synonym; the old term might still be useful for strict clarity.