☆✦ The Scintillating But Ultimately Untrue Thought ✦☆
A Hill of Validity in Defense of Meaning
Sat 15 July 2023
by Zack M. Davis
in commentary
tagged autogynephilia, bullet-biting, cathartic, categorization, Eliezer Yudkowsky, Scott Alexander, epistemic horror, my robot cult, personal, sex differences, two-type taxonomy, whale metaphors
If you are silent about your pain, they'll kill you and say you enjoyed it.
-Zora Neale Hurston
Recapping my Whole Dumb Story so far-in a previous post, "Sexual Dimorphism in Yudkowsky's Sequences, in Relation to My Gender Problems", I told the part about how I've "always" (since puberty) had this obsessive sexual fantasy about being magically transformed into a woman and also thought it was immoral to believe in psychological sex differences, until I got set straight by these really great Sequences of blog posts by Eliezer Yudkowsky, which taught me (incidentally, among many other things) how absurdly unrealistic my obsessive sexual fantasy was given merely human-level technology, and that it's actually immoral not to believe in psychological sex differences given that psychological sex differences are actually real. In a subsequent post, "Blanchard's Dangerous Idea and the Plight of the Lucid Crossdreamer", I told the part about how, in 2016, everyone in my systematically-correct-reasoning community up to and including Eliezer Yudkowsky suddenly started claiming that guys like me might actually be women in some unspecified metaphysical sense and insisted on playing dumb when confronted with alternative explanations of the relevant phenomena, until I eventually had a sleep-deprivation- and stress-induced delusional nervous breakdown.
That's not the egregious part of the story. Psychology is a complicated empirical science: no matter how obvious I might think something is, I have to admit that I could be wrong-not just as an obligatory profession of humility, but actually wrong in the real world. If my fellow rationalists merely weren't sold on the thesis about autogynephilia as a cause of transsexuality, I would be disappointed, but it wouldn't be grounds to denounce the entire community as a failure or a fraud. And indeed, I did end up moderating my views compared to the extent to which my thinking in 2016-7 took the views of Ray Blanchard, J. Michael Bailey, and Anne Lawrence as received truth. (At the same time, I don't particularly regret saying what I said in 2016-7, because Blanchard-Bailey-Lawrence is still obviously directionally correct compared to the nonsense everyone else was telling me.)
But a striking pattern in my attempts to argue with people about the two-type taxonomy in late 2016 and early 2017 was the tendency for the conversation to get derailed on some variation of, "Well, the word woman doesn't necessarily mean that," often with a link to "The Categories Were Made for Man, Not Man for the Categories", a November 2014 post by Scott Alexander arguing that because categories exist in our model of the world rather than the world itself, there's nothing wrong with simply defining trans people as their preferred gender to alleviate their dysphoria.
After Yudkowsky had stepped away from full-time writing, Alexander had emerged as our subculture's preeminent writer. Most people in an intellectual scene "are writers" in some sense, but Alexander was the one "everyone" reads: you could often reference a Slate Star Codex post in conversation and expect people to be familiar with the idea, either from having read it, or by osmosis. The frequency with which "... Not Man for the Categories" was cited at me seemed to suggest it had become our subculture's party line on trans issues.
But the post is wrong in obvious ways. To be clear, it's true that categories exist in our model of the world, rather than the world itself-categories are "map", not "territory"-and it's possible that trans women might be women with respect to some genuinely useful definition of the word "woman." However, Alexander goes much further, claiming that we can redefine gender categories to make trans people feel better:
I ought to accept an unexpected man or two deep inside the conceptual boundaries of what would normally be considered female if it'll save someone's life. There's no rule of rationality saying that I shouldn't, and there are plenty of rules of human decency saying that I should.
This is wrong because categories exist in our model of the world in order to capture empirical regularities in the world itself: the map is supposed to reflect the territory, and there are "rules of rationality" governing what kinds of word and category usages correspond to correct probabilistic inferences. Yudkowsky had written a whole Sequence about this, "A Human's Guide to Words". Alexander cites a post from that Sequence in support of the (true) point about how categories are "in the map" ... but if you actually read the Sequence, another point that Yudkowsky pounds home over and over, is that word and category definitions are nevertheless not arbitrary: you can't define a word any way you want, because there are at least 37 ways that words can be wrong-principles that make some definitions perform better than others as "cognitive technology."
In the case of Alexander's bogus argument about gender categories, the relevant principle (#30 on the list of 37) is that if you group things together in your map that aren't actually similar in the territory, you're going to make bad inferences.
Crucially, this is a general point about how language itself works that has nothing to do with gender. No matter what you believe about controversial empirical questions, intellectually honest people should be able to agree that "I ought to accept an unexpected [X] or two deep inside the conceptual boundaries of what would normally be considered [Y] if [positive consequence]" is not the correct philosophy of language, independently of the particular values of X and Y.
This wasn't even what I was trying to talk to people about. I thought I was trying to talk about autogynephilia as an empirical theory of psychology of late-onset gender dysphoria in males, the truth or falsity of which cannot be altered by changing the meanings of words. But at this point, I still trusted people in my robot cult to be basically intellectually honest, rather than slaves to their political incentives, so I endeavored to respond to the category-boundary argument under the assumption that it was an intellectually serious argument that someone could honestly be confused about.
When I took a year off from dayjobbing from March 2017 to March 2018 to have more time to study and work on this blog, the capstone of my sabbatical was an exhaustive response to Alexander, "The Categories Were Made for Man to Make Predictions" (which Alexander graciously included in his next links post). A few months later, I followed it with "Reply to The Unit of Caring on Adult Human Females", responding to a similar argument from soon-to-be Vox journalist Kelsey Piper, then writing as The Unit of Caring on Tumblr.
I'm proud of those posts. I think Alexander's and Piper's arguments were incredibly dumb, and that with a lot of effort, I did a pretty good job of explaining why to anyone who was interested and didn't, at some level, prefer not to understand.
Of course, a pretty good job of explaining by one niche blogger wasn't going to put much of a dent in the culture, which is the sum of everyone's blogposts; despite the mild boost from the Slate Star Codex links post, my megaphone just wasn't very big. I was disappointed with the limited impact of my work, but not to the point of bearing much hostility to "the community." People had made their arguments, and I had made mine; I didn't think I was entitled to anything more than that.
Really, that should have been the end of the story. Not much of a story at all. If I hadn't been further provoked, I would have still kept up this blog, and I still would have ended up arguing about gender with people sometimes, but this personal obsession wouldn't have been the occasion of a robot-cult religious civil war involving other people whom you'd expect to have much more important things to do with their time.
The casus belli for the religious civil war happened on 28 November 2018. I was at my new dayjob's company offsite event in Austin, Texas. Coincidentally, I had already spent much of the previous two days (since just before the plane to Austin took off) arguing trans issues with other "rationalists" on Discord.
Just that month, I had started a Twitter account using my real name, inspired in an odd way by the suffocating wokeness of the Rust open-source software scene where I occasionally contributed diagnostics patches to the compiler. My secret plan/fantasy was to get more famous and established in the Rust world (compiler team membership or an accepted conference talk, preferably both), get some corresponding Twitter followers, and then bust out the @BlanchardPhd retweets and links to this blog. In the median case, absolutely nothing would happen (probably because I failed at being famous), but I saw an interesting tail of scenarios in which I'd get to be a test case in the Code of Conduct wars.
So, now having a Twitter account, I was browsing Twitter in the bedroom at the rental house for the dayjob retreat when I happened to come across this thread by @ESYudkowsky:
Some people I usually respect for their willingness to publicly die on a hill of facts, now seem to be talking as if pronouns are facts, or as if who uses what bathroom is necessarily a factual statement about chromosomes. Come on, you know the distinction better than that!
Even if somebody went around saying, "I demand you call me 'she' and furthermore I claim to have two X chromosomes!", which none of my trans colleagues have ever said to me by the way, it still isn't a question-of-empirical-fact whether she should be called "she". It's an act.
In saying this, I am not taking a stand for or against any Twitter policies. I am making a stand on a hill of meaning in defense of validity, about the distinction between what is and isn't a stand on a hill of facts in defense of truth.
I will never stand against those who stand against lies. But changing your name, asking people to address you by a different pronoun, and getting sex reassignment surgery, Is. Not. Lying. You are ontologically confused if you think those acts are false assertions.
Some of the replies tried to explain the obvious problem-and Yudkowsky kept refusing to understand:
Using language in a way you dislike, openly and explicitly and with public focus on the language and its meaning, is not lying. The proposition you claim false (chromosomes?) is not what the speech is meant to convey-and this is known to everyone involved, it is not a secret.
Now, maybe as a matter of policy, you want to make a case for language being used a certain way. Well, that's a separate debate then. But you're not making a stand for Truth in doing so, and your opponents aren't tricking anyone or trying to.
-repeatedly:
You're mistaken about what the word means to you, I demonstrate thus: https://en.wikipedia.org/wiki/XX_male_syndrome
But even ignoring that, you're not standing in defense of truth if you insist on a word, brought explicitly into question, being used with some particular meaning.
Dear reader, this is the moment where I flipped out. Let me explain.
This "hill of meaning in defense of validity" proclamation was such a striking contrast to the Eliezer Yudkowsky I remembered-the Eliezer Yudkowsky I had variously described as having "taught me everything I know" and "rewritten my personality over the internet"-who didn't hesitate to criticize uses of language that he thought were failing to "carve reality at the joints", even going so far as to call them "wrong":
[S]aying "There's no way my choice of X can be 'wrong'" is nearly always an error in practice, whatever the theory. You can always be wrong. Even when it's theoretically impossible to be wrong, you can still be wrong. There is never a Get-Out-Of-Jail-Free card for anything you do. That's life.
Similarly:
Once upon a time it was thought that the word "fish" included dolphins. Now you could play the oh-so-clever arguer, and say, "The list: {Salmon, guppies, sharks, dolphins, trout} is just a list-you can't say that a list is wrong. I can prove in set theory that this list exists. So my definition of fish, which is simply this extensional list, cannot possibly be 'wrong' as you claim."
Or you could stop playing nitwit games and admit that dolphins don't belong on the fish list.
You come up with a list of things that feel similar, and take a guess at why this is so. But when you finally discover what they really have in common, it may turn out that your guess was wrong. It may even turn out that your list was wrong.
You cannot hide behind a comforting shield of correct-by-definition. Both extensional definitions and intensional definitions can be wrong, can fail to carve reality at the joints.
One could argue that this "Words can be wrong when your definition draws a boundary around things that don't really belong together" moral didn't apply to Yudkowsky's new Tweets, which only mentioned pronouns and bathroom policies, not the extensions of common nouns.
But this seems pretty unsatisfying in the context of Yudkowsky's claim to "not [be] taking a stand for or against any Twitter policies". One of the Tweets that had recently led to radical feminist Meghan Murphy getting kicked off the platform read simply, "Men aren't women tho." This doesn't seem like a policy claim; rather, Murphy was using common language to express the fact-claim that members of the natural category of adult human males are not, in fact, members of the natural category of adult human females.
Thus, if the extension of common words like "woman" and "man" is an issue of epistemic importance that rationalists should care about, then presumably so was Twitter's anti-misgendering policy-and if it isn't (because you're not standing in defense of truth if you insist on a word, brought explicitly into question, being used with some particular meaning) then I wasn't sure what was left of the "Human's Guide to Words" Sequence if the 37-part grand moral needed to be retracted.
I think I am standing in defense of truth when I have an argument for why my preferred word usage does a better job at carving reality at the joints, and the one bringing my usage explicitly into question does not. As such, I didn't see the practical difference between "you're not standing in defense of truth if you insist on a word, brought explicitly into question, being used with some particular meaning," and "I can define a word any way I want." About which, again, an earlier Eliezer Yudkowsky had written:
"It is a common misconception that you can define a word any way you like. [...] If you believe that you can 'define a word any way you like', without realizing that your brain goes on categorizing without your conscious oversight, then you won't take the effort to choose your definitions wisely."
"So that's another reason you can't 'define a word any way you like': You can't directly program concepts into someone else's brain."
"When you take into account the way the human mind actually, pragmatically works, the notion 'I can define a word any way I like' soon becomes 'I can believe anything I want about a fixed set of objects' or 'I can move any object I want in or out of a fixed membership test'."
"There's an idea, which you may have noticed I hate, that 'you can define a word any way you like'."
"And of course you cannot solve a scientific challenge by appealing to dictionaries, nor master a complex skill of inquiry by saying 'I can define a word any way I like'."
"Categories are not static things in the context of a human brain; as soon as you actually think of them, they exert force on your mind. One more reason not to believe you can define a word any way you like."
"And people are lazy. They'd rather argue 'by definition', especially since they think 'you can define a word any way you like'."
"And this suggests another-yes, yet another-reason to be suspicious of the claim that 'you can define a word any way you like'. When you consider the superexponential size of Conceptspace, it becomes clear that singling out one particular concept for consideration is an act of no small audacity-not just for us, but for any mind of bounded computing power."
"I say all this, because the idea that 'You can X any way you like' is a huge obstacle to learning how to X wisely. 'It's a free country; I have a right to my own opinion' obstructs the art of finding truth. 'I can define a word any way I like' obstructs the art of carving reality at its joints. And even the sensible-sounding 'The labels we attach to words are arbitrary' obstructs awareness of compactness."
"One may even consider the act of defining a word as a promise to [the] effect [...] [that the definition] will somehow help you make inferences / shorten your messages."
One could argue that I was unfairly interpreting Yudkowsky's Tweets as having a broader scope than was intended-that Yudkowsky only meant to slap down the false claim that using he for someone with a Y chromosome is "lying", without intending any broader implications about trans issues or the philosophy of language. It wouldn't be realistic or fair to expect every public figure to host an exhaustive debate on all related issues every time they encounter a fallacy they want to Tweet about.
However, I don't think this "narrow" reading is the most natural one. Yudkowsky had previously written of what he called the fourth virtue of evenness: "If you are selective about which arguments you inspect for flaws, or how hard you inspect for flaws, then every flaw you learn how to detect makes you that much stupider." He had likewise written on reversed stupidity (bolding mine):
To argue against an idea honestly, you should argue against the best arguments of the strongest advocates. Arguing against weaker advocates proves nothing, because even the strongest idea will attract weak advocates.
Relatedly, Scott Alexander had written about how "weak men are superweapons": speakers often selectively draw attention to the worst arguments in favor of a position in an attempt to socially discredit people who have better arguments (which the speaker ignores). In the same way, by just slapping down a weak man from the "anti-trans" political coalition without saying anything else in a similarly prominent location, Yudkowsky was liable to mislead his faithful students into thinking that there were no better arguments from the "anti-trans" side.
To be sure, it imposes a cost on speakers to not be able to Tweet about one specific annoying fallacy and then move on with their lives without the need for endless disclaimers about related but stronger arguments that they're not addressing. But the fact that Yudkowsky disclaimed that he wasn't taking a stand for or against Twitter's anti-misgendering policy demonstrates that he didn't have an aversion to spending a few extra words to prevent the most common misunderstandings.
Given that, it's hard to read the Tweets Yudkowsky published as anything other than an attempt to intimidate and delegitimize people who want to use language to reason about sex rather than gender identity. It's just not plausible that Yudkowsky was simultaneously savvy enough to choose to make these particular points while also being naïve enough to not understand the political context. Deeper in the thread, he wrote:
The more technology advances, the further we can move people towards where they say they want to be in sexspace. Having said this we've said all the facts. Who competes in sports segregated around an Aristotelian binary is a policy question (that I personally find very humorous).
Sure, in the limit of arbitrarily advanced technology, everyone could be exactly where they wanted to be in sexspace. Having said this, we have not said all the facts relevant to decisionmaking in our world, where we do not have arbitrarily advanced technology (as Yudkowsky well knew, having written a post about how technically infeasible an actual sex change would be). As Yudkowsky acknowledged in the previous Tweet, "Hormone therapy changes some things and leaves others constant." The existence of hormone replacement therapy does not itself take us into the glorious transhumanist future where everyone is the sex they say they are.
The reason for sex-segregated sports leagues is that sport-relevant multivariate trait distributions of female bodies and male bodies are different: men are taller, stronger, and faster. If you just had one integrated league, females wouldn't be competitive (in the vast majority of sports, with a few exceptions like ultra-distance swimming that happen to sample an unusually female-favorable corner of sportspace).
Given the empirical reality of the different trait distributions, "Who are the best athletes among females?" is a natural question for people to be interested in and want separate sports leagues to determine. Including male people in female sports leagues undermines the point of having a separate female league, and hormone replacement therapy after puberty doesn't substantially change the picture here.1
Yudkowsky's suggestion that an ignorant commitment to an "Aristotelian binary" is the main reason someone might care about the integrity of women's sports is an absurd strawman. This just isn't something any scientifically literate person would write if they had actually thought about the issue at all, as opposed to having first decided (consciously or not) to bolster their reputation among progressives by dunking on transphobes on Twitter, and then wielding their philosophy knowledge in the service of that political goal. The relevant facts are not subtle, even if most people don't have the fancy vocabulary to talk about them in terms of "multivariate trait distributions."
I'm picking on the "sports segregated around an Aristotelian binary" remark because sports is a case where the relevant effect sizes are so large as to make the point hard for all but the most ardent gender-identity partisans to deny. (For example, what the Cohen's d ≈ 2.6 effect size difference in muscle mass means is that a woman as strong as the average man is at the 99.5th percentile for women.) But the point is general: biological sex exists and is sometimes decision-relevant. People who want to be able to talk about sex and make policy decisions on the basis of sex are not making an ontology error, because the ontology in which sex "actually" "exists" continues to make very good predictions in our current tech regime (if not the glorious transhumanist future). It would be a ridiculous isolated demand for rigor to expect someone to pass a graduate exam about the philosophy and cognitive science of categorization before they can talk about sex.
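(For readers who want to check that percentile claim themselves, here's the arithmetic in a few lines of Python, under the usual textbook idealization of normal distributions with equal variances in both sexes:)

```python
import math

def normal_cdf(z: float) -> float:
    """Standard normal CDF, computed via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# With a Cohen's d ≈ 2.6 sex difference, a woman as strong as the average
# man sits 2.6 female standard deviations above the female mean:
print(round(100 * normal_cdf(2.6), 1))  # → 99.5
```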
Thus, Yudkowsky's claim to merely have been standing up for the distinction between facts and policy questions doesn't seem credible. It is, of course, true that pronoun and bathroom conventions are policy decisions rather than matters of fact, but it's bizarre to condescendingly point this out as if it were the crux of contemporary trans-rights debates. Conservatives and gender-critical feminists know that trans-rights advocates aren't falsely claiming that trans women have XX chromosomes! If you just wanted to point out that the rules of sports leagues are a policy question rather than a fact (as if anyone had doubted this), why would you throw in the "Aristotelian binary" weak man and belittle the matter as "humorous"? There are a lot of issues I don't care much about, but I don't see anything funny about the fact that other people do care.2
If any concrete negative consequence of gender self-identity categories is going to be waved away with, "Oh, but that's a mere policy decision that can be dealt with on some basis other than gender, and therefore doesn't count as an objection to the new definition of gender words", then it's not clear what the new definition is for.
Like many gender-dysphoric males, I cosplay female characters at fandom conventions sometimes. And, unfortunately, like many gender-dysphoric males, I'm not very good at it. I think someone looking at some of my cosplay photos and trying to describe their content in clear language-not trying to be nice to anyone or make a point, but just trying to use language as a map that reflects the territory-would say something like, "This is a photo of a man and he's wearing a dress." The word man in that sentence is expressing cognitive work: it's a summary of the lawful cause-and-effect evidential entanglement whereby the photons reflecting off the photograph are correlated with photons reflecting off my body at the time the photo was taken, which are correlated with my externally observable secondary sex characteristics (facial structure, beard shadow, &c.). From this evidence, an agent using an efficient naïve-Bayes-like model can assign me to its "man" (adult human male) category and thereby make probabilistic predictions about traits that aren't directly observable from the photo. The agent would achieve a better score on those predictions than if it had assigned me to its "woman" (adult human female) category.
By "traits" I mean not just sex chromosomes (as Yudkowsky suggested on Twitter), but the conjunction of dozens or hundreds of measurements that are causally downstream of sex chromosomes: reproductive organs and muscle mass (again, sex difference effect size of Cohen's d ≈ 2.6) and Big Five Agreeableness (d ≈ 0.5) and Big Five Neuroticism (d ≈ 0.4) and short-term memory (d ≈ 0.2, favoring women) and white-gray-matter ratios in the brain and probable socialization history and any number of other things-including differences we might not know about, but have prior reasons to suspect exist. No one knew about sex chromosomes before 1905, but given the systematic differences between women and men, it would have been reasonable to suspect the existence of some sort of molecular mechanism of sex determination.
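To make the "probabilistic predictions" talk concrete, here's a toy sketch of the kind of naïve-Bayes-like inference I mean, using only the effect sizes cited above. (This is an illustrative simplification, not a serious model: it treats each standardized trait as a unit-variance Gaussian and assumes the traits are independent, which real traits aren't.)

```python
import math

# Sex-difference effect sizes cited above, in standard-deviation units;
# positive means the male mean is higher, negative means the female mean is.
EFFECT_SIZES = {
    "muscle_mass": 2.6,
    "agreeableness": -0.5,
    "neuroticism": -0.4,
    "short_term_memory": -0.2,
}

def posterior_male(observations: dict[str, float], prior_male: float = 0.5) -> float:
    """Posterior probability of 'male' given standardized trait observations.

    Each trait is modeled as N(0, 1) for females and N(d, 1) for males, so
    the per-trait log-likelihood ratio is d*x - d**2/2; the 'naive'
    independence assumption lets us simply sum the ratios across traits.
    """
    log_odds = math.log(prior_male / (1.0 - prior_male))
    for trait, x in observations.items():
        d = EFFECT_SIZES[trait]
        log_odds += d * x - d * d / 2.0
    return 1.0 / (1.0 + math.exp(-log_odds))

# Observing someone at the male mean on every trait:
print(round(posterior_male({t: d for t, d in EFFECT_SIZES.items()}), 3))  # → 0.974
```

The point of the sketch is that the "man" category assignment is doing real predictive work: conditioning on a handful of observable traits shifts the probability of every unobserved trait, which is exactly what a category summary is for.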
Forcing a speaker to say "trans woman" instead of "man" in a sentence about my cosplay photos depending on my verbally self-reported self-identity may not be forcing them to lie, exactly. It's understood, "openly and explicitly and with public focus on the language and its meaning," what trans women are; no one is making a false-to-fact claim about them having ovaries, for example. But it is forcing the speaker to obfuscate the probabilistic inference they were trying to communicate with the original sentence (about modeling the person in the photograph as being sampled from the "man" cluster in configuration space), and instead use language that suggests a different cluster-structure. ("Trans women", two words, are presumably a subcluster within the "women" cluster.) Crowing in the public square about how people who object to being forced to "lie" must be ontologically confused is ignoring the interesting part of the problem. Gender identity's claim to be non-disprovable functions as a way to avoid the belief's real weak points.
To this, one might reply that I'm giving too much credit to the "anti-trans" faction for how stupid they're not being: that my careful dissection of the hidden probabilistic inferences implied by words (including pronoun choices) is all well and good, but calling pronouns "lies" is not something you do when you know how to use words.
But I'm not giving them credit for understanding the lessons of "A Human's Guide to Words"; I just think there's a useful sense of "know how to use words" that embodies a lower standard of philosophical rigor. If a person-in-the-street says of my cosplay photos, "That's a man! I have eyes, and I can see that that's a man! Men aren't women!"-well, I probably wouldn't want to invite them to a Less Wrong meetup. But I do think the person-in-the-street is performing useful cognitive work. Because I have the hidden-Bayesian-structure-of-language-and-cognition-sight (thanks to Yudkowsky's writings back in the 'aughts), I know how to sketch out the reduction of "Men aren't women" to something more like "This cognitive algorithm detects secondary sex characteristics and uses it as a classifier for a binary female/male 'sex' category, which it uses to make predictions about not-yet-observed features ..."
But having done the reduction-to-cognitive-algorithms, it still looks like the person-in-the-street has a point that I shouldn't be allowed to ignore just because I have 30 more IQ points and better philosophy-of-language skills?
I bring up my bad cosplay photos as an edge case that helps illustrate the problem I'm trying to point out, much like how people love to bring up complete androgen insensitivity syndrome to illustrate why "But chromosomes!" isn't the correct reduction of sex classification. To differentiate what I'm saying from blind transphobia, let me note that I predict that most people-in-the-street would be comfortable using feminine pronouns for someone like Blaire White. That's evidence about the kind of cognitive work people's brains are doing when they use English pronouns! Certainly, English is not the only language, and ours is not the only culture; maybe there is a way to do gender categories that would be more accurate and better for everyone. But to find what that better way is, we need to be able to talk about these kinds of details in public, and the attitude evinced in Yudkowsky's Tweets seemed to function as a semantic stopsign to get people to stop talking about the details.
If you were interested in having a real discussion (instead of a fake discussion that makes you look good to progressives), why would you slap down the "But, but, chromosomes" fallacy and then not engage with the obvious steelman, "But, but, clusters in high-dimensional configuration space that aren't actually changeable with contemporary technology", which was, in fact, brought up in the replies?
Satire is a weak form of argument: the one who wishes to doubt will always be able to find some aspect in which an obviously absurd satirical situation differs from the real-world situation being satirized and claim that that difference destroys the relevance of the joke. But on the off chance that it might help illustrate the objection, imagine you lived in a so-called "rationalist" subculture where conversations like this happened-
⁕ ⁕ ⁕
Bob: Look at this adorable cat picture!
Alice: Um, that looks like a dog to me, actually.
Bob: You're not standing in defense of truth if you insist on a word, brought explicitly into question, being used with some particular meaning. Now, maybe as a matter of policy, you want to make a case for language being used a certain way. Well, that's a separate debate then.
⁕ ⁕ ⁕
If you were Alice, and a solid supermajority of your incredibly smart, incredibly philosophically sophisticated friend group including Eliezer Yudkowsky (!!!) seemed to behave like Bob, that would be a worrying sign about your friends' ability to accomplish intellectually hard things like AI alignment, right? Even if there isn't any pressing practical need to discriminate between dogs and cats, the problem is that Bob is selectively using his sophisticated philosophy-of-language knowledge to try to undermine Alice's ability to use language to make sense of the world, even though Bob obviously knows very well what Alice was trying to say. It's incredibly obfuscatory in a way that people-the same people-would not tolerate in almost any other context.
Imagine an Islamic theocracy in which one Megan Murfi (ميغان ميرفي) had recently gotten kicked off the dominant microblogging platform for speaking disrespectfully about the prophet Muhammad. Suppose that Yudkowsky's analogue in that world then posted that those objecting on free inquiry grounds were ontologically confused: saying "peace be upon him" after the name of the prophet Muhammad is a speech act, not a statement of fact. In banning Murfi for repeatedly speaking about the prophet Muhammad (peace be upon him) as if he were just some guy, the platform was merely "enforcing a courtesy standard" (in the words of our world's Yudkowsky). Murfi wasn't being forced to lie.
I think the atheists of our world, including Yudkowsky, would not have trouble seeing the problem with this scenario, nor hesitate to agree that it is a problem for that Society's rationality. Saying "peace be upon him" is indeed a speech act rather than a statement of fact, but it would be bizarre to condescendingly point this out as if it were the crux of debates about religious speech codes. The function of the speech act is to signal the speaker's affirmation of Muhammad's divinity. That's why the Islamic theocrats want to mandate that everyone say it: it's a lot harder for atheism to get any traction if no one is allowed to talk like an atheist.
And that's why trans advocates want to mandate against misgendering people on social media: it's harder for trans-exclusionary ideologies to get any traction if no one is allowed to talk like someone who believes that sex (sometimes) matters and gender identity does not.
Of course, such speech restrictions aren't necessarily "irrational", depending on your goals. If you just don't think "free speech" should go that far-if you want to suppress atheism or gender-critical feminism with an iron fist-speech codes are a perfectly fine way to do it! And to their credit, I think most theocrats and trans advocates are intellectually honest about what they're doing: atheists or transphobes are bad people (the argument goes) and we want to make it harder for them to spread their lies or their hate.
In contrast, by claiming to be "not taking a stand for or against any Twitter policies" while accusing people who opposed the policy of being ontologically confused, Yudkowsky was being less honest than the theocrat or the activist: of course the point of speech codes is to suppress ideas! Given that the distinction between facts and policies is so obviously not anyone's crux-the smarter people in the "anti-trans" faction already know that, and the dumber people in the faction wouldn't change their alignment if they were taught-it's hard to see what the point of harping on the fact/policy distinction would be, except to be seen as implicitly taking a stand for the "pro-trans" faction while putting on a show of being politically "neutral."
It makes sense that Yudkowsky might perceive political constraints on what he might want to say in public-especially when you look at what happened to the other Harry Potter author.3 But if Yudkowsky didn't want to get into a distracting fight about a politically-charged topic, then maybe the responsible thing to do would have been to just not say anything about the topic, rather than engaging with the stupid version of the opposition and stonewalling with "That's a policy question" when people tried to point out the problem?!
I didn't have all of that criticism collected and carefully written up on 28 November 2018. But that, basically, is why I flipped out when I saw that Twitter thread. If the "rationalists" didn't click on the autogynephilia thing, that was disappointing, but forgivable. If the "rationalists", on Scott Alexander's authority, were furthermore going to get our own philosophy of language wrong over this, that was-I don't want to say forgivable exactly, but it was tolerable. I had learned from my misadventures the previous year that I had been wrong to trust "the community" as a reified collective. That had never been a reasonable mental stance in the first place.
But trusting Eliezer Yudkowsky-whose writings, more than any other single influence, had made me who I am-did seem reasonable. If I put him on a pedestal, it was because he had earned the pedestal, for supplying me with my criteria for how to think-including, as a trivial special case, how to think about what things to put on pedestals.
So if the rationalists were going to get our own philosophy of language wrong over this and Eliezer Yudkowsky was in on it (!!!), that was intolerable, inexplicable, incomprehensible-like there wasn't a real world anymore.
At the dayjob retreat, I remember going downstairs to impulsively confide in a senior engineer, an older bald guy who exuded masculinity, who you could tell by his entire manner and being was not infected by the Berkeley mind-virus, no matter how loyally he voted Democrat. I briefly explained the situation to him-not just the immediate impetus of this Twitter thread, but this whole thing of the past couple years where my entire social circle just suddenly decided that guys like me could be women by means of saying so. He was noncommittally sympathetic; he told me an anecdote about him accepting a trans person's correction of his pronoun usage, with the thought that different people have their own beliefs, and that's OK.
If Yudkowsky was already stonewalling his Twitter followers, entering the thread myself didn't seem likely to help. (Also, less importantly, I hadn't intended to talk about gender on that account yet.)
It seemed better to try to clear this up in private. I still had Yudkowsky's email address, last used when I had offered to pay to talk about his theory of MtF two years before. I felt bad bidding for his attention over my gender thing again-but I had to do something. Hands trembling, I sent him an email asking him to read my "The Categories Were Made for Man to Make Predictions", suggesting that it might qualify as an answer to his question about "a page [he] could read to find a non-confused explanation of how there's scientific truth at stake". I said that because I cared very much about correcting confusions in my rationalist subculture, I would be happy to pay up to $1000 for his time-and that, if he liked the post, he might consider Tweeting a link-and that I was cc'ing my friends Anna Salamon and Michael Vassar as character references (Subject: "another offer, $1000 to read a ~6500 word blog post about (was: Re: Happy Price offer for a 2 hour conversation)"). Then I texted Anna and Michael, begging them to vouch for my credibility.
The monetary offer, admittedly, was awkward: I included another paragraph clarifying that any payment was only to get his attention, not quid pro quo advertising, and that if he didn't trust his brain circuitry not to be corrupted by money, then he might want to reject the offer on those grounds and only read the post if he expected it to be genuinely interesting.
Again, I realize this must seem weird and cultish to any normal people reading this. (Paying some blogger you follow one grand just to read one of your posts? What? Why? Who does that?) To this, I again refer to the reasons justifying my 2016 cheerful price offer-and that, along with tagging in Anna and Michael, whom I thought Yudkowsky respected, it was a way to signal that I really didn't want to be ignored, which I assumed was the default outcome. An ordinary programmer such as me was as a mere worm in the presence of the great Eliezer Yudkowsky. I wouldn't have had the audacity to contact him at all, about anything, if I didn't have Something to Protect.
Anna didn't reply, but I apparently did interest Michael, who chimed in on the email thread to Yudkowsky. We had a long phone conversation the next day lamenting how the "rationalists" were dead as an intellectual community.
As for the attempt to intervene on Yudkowsky-here I need to make a digression about the constraints I'm facing in telling this Whole Dumb Story. I would prefer to just tell this Whole Dumb Story as I would to my long-neglected Diary-trying my best at the difficult task of explaining what actually happened during an important part of my life, without thought of concealing anything.
(If you are silent about your pain, they'll kill you and say you enjoyed it.)
Unfortunately, a lot of other people seem to have strong intuitions about "privacy", which bizarrely impose constraints on what I'm allowed to say about my own life: in particular, it's considered unacceptable to publicly quote or summarize someone's emails from a conversation that they had reason to expect to be private. I feel obligated to comply with these widely-held privacy norms, even if I think they're paranoid and anti-social. (This secrecy-hating trait probably correlates with the autogynephilia blogging; someone otherwise like me who believed in privacy wouldn't be telling you this Whole Dumb Story.)
So I would think that while telling this Whole Dumb Story, I obviously have an inalienable right to blog about my own actions, but I'm not allowed to directly refer to private conversations with named individuals in cases where I don't think I'd be able to get the consent of the other party. (I don't think I'm required to go through the ritual of asking for consent in cases where the revealed information couldn't reasonably be considered "sensitive", or if I know the person doesn't have hangups about this weird "privacy" thing.) In this case, I'm allowed to talk about emailing Yudkowsky (because that was my action), but I'm not allowed to talk about anything he might have said in reply, or whether he did.
Unfortunately, there's a potentially serious loophole in the commonsense rule: what if some of my actions (which I would have hoped to have an inalienable right to blog about) depend on content from private conversations? You can't, in general, only reveal one side of a conversation.
Suppose Carol messages Dave at 5 p.m., "Can you come to the party?", and then, separately, messages him at 6 p.m., "Gout isn't contagious." Should Carol be allowed to blog about the messages she sent at 5 p.m. and 6 p.m., because she's only describing her own messages and not confirming or denying whether Dave replied at all, let alone quoting him?
I think commonsense privacy-norm-adherence intuitions actually say No here: the text of Carol's messages makes it too easy to guess that sometime between 5 and 6, Dave probably said that he couldn't come to the party because he has gout. It would seem that Carol's right to talk about her own actions in her own life does need to take into account some commonsense judgement of whether that leaks "sensitive" information about Dave.
In the substory (of my Whole Dumb Story) that follows, I'm going to describe several times that I and others emailed Yudkowsky to argue with what he said in public, without saying anything about whether Yudkowsky replied or what he might have said if he did reply. I maintain that I'm within my rights here, because I think commonsense judgment will agree that me talking about the arguments I made does not leak any sensitive information about the other side of a conversation that may or may not have happened. I think the story comes off relevantly the same whether Yudkowsky didn't reply at all (e.g., because he was too busy with more existentially important things to check his email), or whether he replied in a way that I found sufficiently unsatisfying as to occasion the further emails with followup arguments that I describe. (Talking about later emails does rule out the possible world where Yudkowsky had said, "Please stop emailing me," because I would have respected that, but the fact that he didn't say that isn't "sensitive".)
It seems particularly important to lay out these judgments about privacy norms in connection to my attempts to contact Yudkowsky, because part of what I'm trying to accomplish in telling this Whole Dumb Story is to deal reputational damage to Yudkowsky, which I claim is deserved. (We want reputations to track reality. If you see Erin exhibiting a pattern of intellectual dishonesty, and she keeps doing it even after you talk to her about it privately, you might want to write a blog post describing the pattern in detail-not to hurt Erin, particularly, but so that everyone else can make higher-quality decisions about whether they should believe the things that Erin says.) Given that motivation of mine, it seems important that I only try to hang Yudkowsky with the rope of what he said in public, where you can click the links and read the context for yourself: I'm attacking him, but not betraying him. In the substory that follows, I also describe correspondence with Scott Alexander, but that doesn't seem sensitive in the same way, because I'm not particularly trying to deal reputational damage to Alexander. (Not because Scott performed well, but because one wouldn't really have expected him to in this situation; Alexander's reputation isn't so direly in need of correction.)
Thus, I don't think I should say whether Yudkowsky replied to Michael's and my emails, nor (again) whether he accepted the cheerful-price money, because any conversation that may or may not have occurred would have been private. But what I can say, because it was public, is that we saw this addition to the Twitter thread:
I was sent this (by a third party) as a possible example of the sort of argument I was looking to read: http://unremediatedgender.space/2018/Feb/the-categories-were-made-for-man-to-make-predictions/. Without yet judging its empirical content, I agree that it is not ontologically confused. It's not going "But this is a MAN so using 'she' is LYING."
Look at that! The great Eliezer Yudkowsky said that my position is "not ontologically confused." That's probably high praise, coming from him!
You might think that that should have been the end of the story. Yudkowsky denounced a particular philosophical confusion, I already had a related objection written up, and he publicly acknowledged my objection as not being the confusion he was trying to police. I should be satisfied, right?
I wasn't, in fact, satisfied. This little "not ontologically confused" clarification buried deep in the replies was much less visible than the bombastic, arrogant top-level pronouncement insinuating that resistance to gender-identity claims was confused. (1 Like on this reply, vs. 140 Likes and 18 Retweets on the start of the thread.) This little follow-up did not seem likely to disabuse the typical reader of the impression that Yudkowsky thought gender-identity skeptics didn't have a leg to stand on. Was it greedy of me to want something louder?
Greedy or not, I wasn't done flipping out. On 1 December 2018, I wrote to Scott Alexander (cc'ing a few other people) to ask if there was any chance of an explicit and loud clarification or partial retraction of "... Not Man for the Categories" (Subject: "super-presumptuous mail about categorization and the influence graph"). Forget my boring whining about the autogynephilia/two-types thing, I said-that's a complicated empirical claim, and not the key issue.
The issue was that category boundaries are not arbitrary (if you care about intelligence being useful). You want to draw your category boundaries such that things in the same category are similar in the respects that you care about predicting/controlling, and you want to spend your information-theoretically limited budget of short words on the simplest and most widely useful categories.
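(For readers who want the "budget of short words" point made concrete, here's a toy sketch of my own, not from the email: if words of different lengths must be assigned to categories, the total cost of communicating is minimized by giving the shortest words to the most frequently invoked categories. The category names and message counts below are hypothetical.)

```python
# Toy illustration: spending the budget of short words on the most
# widely useful categories minimizes expected message length.

def total_length(counts, word_lengths):
    """Total characters needed, given how often each category is
    invoked and how long its assigned word is."""
    return sum(c * w for c, w in zip(counts, word_lengths))

# Say three categories come up 7, 2, and 1 times out of 10 messages.
counts = [7, 2, 1]
short_words_to_common = [3, 6, 13]  # e.g. "dog", "poodle", "affenpinscher"
short_words_to_rare = [13, 6, 3]    # the perverse reverse assignment

print(total_length(counts, short_words_to_common))  # → 46
print(total_length(counts, short_words_to_rare))    # → 106
```

The same logic, generalized from word lengths to code lengths, is why efficient codes (Huffman coding being the classic example) give frequent symbols short codewords.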
It was true that the reason I was continuing to freak out about this to the extent of sending him this obnoxious email telling him what to write (seriously, who does that?!) was because of transgender stuff, but that wasn't why Scott should care.
The other year, Alexander had written a post, "Kolmogorov Complicity and the Parable of Lightning", explaining the consequences of political censorship with an allegory about a Society with the dogma that thunder occurs before lightning.4 Alexander had explained that the problem with complying with the dictates of a false orthodoxy wasn't the sacred dogma itself (it's not often that you need to directly make use of the fact that lightning comes first), but that the need to defend the sacred dogma destroys everyone's ability to think.
It was the same thing here. It wasn't that I had any practical need to misgender anyone in particular. It still wasn't okay that talking about the reality of biological sex to so-called "rationalists" got you an endless deluge of-polite! charitable! non-ostracism-threatening!-bullshit nitpicking. (What about complete androgen insensitivity syndrome? Why doesn't this ludicrous misinterpretation of what you said imply that lesbians aren't women? &c. ad infinitum.) With enough time, I thought the nitpicks could and should be satisfactorily answered; any remaining would presumably be fatal criticisms rather than bullshit nitpicks. But while I was in the process of continuing to write all that up, I hoped Alexander could see why I felt somewhat gaslighted.
(I had been told by others that I wasn't using the word "gaslighting" correctly. No one seemed to think I had the right to define that category boundary for my convenience.)
If our vaunted rationality techniques resulted in me having to spend dozens of hours patiently explaining why I didn't think that I was a woman (where "not a woman" is a convenient rhetorical shorthand for a much longer statement about naïve Bayes models and high-dimensional configuration spaces and defensible Schelling points for social norms), then our techniques were worse than useless.
If Galileo ever muttered "And yet it moves", there's a long and nuanced conversation you could have about the consequences of using the word "moves" in Galileo's preferred sense, as opposed to some other sense that happens to result in the theory needing more epicycles. It may not have been obvious in November 2014 when "... Not Man for the Categories" was published, but in retrospect, maybe it was a bad idea to build a memetic superweapon that says that the number of epicycles doesn't matter.
The reason to write this as a desperate email plea to Scott Alexander instead of working on my own blog was that I was afraid that marketing is a more powerful force than argument. Rather than good arguments propagating through the population of so-called "rationalists" no matter where they arose, what actually happened was that people like Alexander and Yudkowsky rose to power on the strength of good arguments and entertaining writing (but mostly the latter), and then everyone else absorbed some of their worldview (plus noise and conformity with the local environment). So for people who didn't win the talent lottery but thought they saw a flaw in the zeitgeist, the winning move was "persuade Scott Alexander."
Back in 2010, the rationalist community had a shared understanding that the function of language is to describe reality. Now, we didn't. If Scott didn't want to cite my creepy blog about my creepy fetish, that was fine; I liked getting credit, but the important thing was that this "No, the Emperor isn't naked-oh, well, we're not claiming that he's wearing any garments-it would be pretty weird if we were claiming that!-it's just that utilitarianism implies that the social property of clothedness should be defined this way because to do otherwise would be really mean to people who don't have anything to wear" maneuver needed to die, and he alone could kill it.
Scott didn't get it. We agreed that gender categories based on self-identity, natal sex, and passing each had their own pros and cons, and that it's uninteresting to focus on whether something "really" belongs to a category rather than on communicating what you mean. Scott took this to mean that what convention to use is a pragmatic choice we can make on utilitarian grounds, and that being nice to trans people was worth a little bit of clunkiness-that the mental health benefits to trans people were obviously enough to tip the first-order utilitarian calculus.
I didn't think anything about "mental health benefits to trans people" was obvious. More importantly, I considered myself to be prosecuting not the object-level question of which gender categories to use but the meta-level question of what normative principles govern the use of categories. For this, "whatever, it's a pragmatic choice, just be nice" wasn't an answer, because the normative principles exclude "just be nice" from being a relevant consideration.
"... Not Man for the Categories" had concluded with a section on Emperor Norton, a 19th-century San Francisco resident who declared himself Emperor of the United States. Certainly, it's not difficult or costly for the citizens of San Francisco to address Norton as "Your Majesty". But there's more to being Emperor of the United States than what people call you. Unless we abolish Congress and have the military enforce Norton's decrees, he's not actually emperor-at least not according to the currently generally understood meaning of the word.
What are you going to do if Norton takes you literally? Suppose he says, "I ordered the Imperial Army to invade Canada last week; where are the troop reports? And why do the newspapers keep talking about this so-called 'President' Rutherford B. Hayes? Have this pretender Hayes executed at once and bring his head to me!"