If by throwing a rock
out of my bedroom window every morning and hitting a passerby I could save everyone alive from ever again having to find the end of a roll of Scotch tape,1 should I do it? Well, first of all, a vast majority of the houses in my neighborhood were built in or around 2015, so a lot of couples moved in then and had babies. Thus, a significant number of the passersby in the morning are going to be first or second graders on their way to school. So when calling to mind the median passerby, you should imagine my towheaded little neighbor Bruno, who just started school last month. Then you should envision him with a head gash, a bulldozer-themed patch over one side of his tiny glasses, and a severe and possibly permanent speech impediment and conclude, “No, you monster! What’s wrong with you?!”
So there you go, intuition pump successful, I’ve tl;dr-ed you from having to read Erik Hoel’s magisterial takedown of effective altruism (EA). (Just kidding. No, I haven’t. You should read that and its sequel now and then come back here.)
Google Trends tells me
that if I did throw a rock out the window, I’d be much more likely than I was just two months ago to hit someone who’s heard of “effective altruism.” Then again, in my neighborhood the likelihood isn’t all that high. Either way, EA is having a moment. Nonetheless, I take Hoel’s argument on the point to be conclusive: utilitarian calculus2 is always going to lead to absolute horrors that no one in their right mind would countenance—what Derek Parfit called the Repugnant Conclusion: There are a lot of people who want to tear their hair out after unsuccessfully scratching their fingernails along the surface of a roll of transparent tape, so at some point the weight of all that suffering has to merit Bruno taking a rock to the skull, right? No! Of course not. That’s repugnant. Bruno’s suffering is qualitatively worse and more compelling than any quantity of suffering of the Scotch-tape-scratching variety.
But Hoel also makes a concession that I think is equally valid: The utilitarians’ hearts are in the right place, even if the whole maximization of pleasure and reduction of pain thing is a dead end. And if they just keep making exceptions and concessions, they can eventually water down their proposals until they’re palatable and can achieve some good. But how do we adjudicate whose “hearts are in the right place”? How do we know what is “good”? What is the origin of our intuitions about these matters, if they are intuitions? I have my own modest proposal: Let’s drop the whole EA thing like a Netflix account and try to get at what really leads us to conclude that certain consequences of our actions are “good” or “bad,” while in other situations, it doesn’t even matter what the consequences are. Where are these judgements even coming from? How much do we depend on other people, our education, the culture we come from, and the prevailing opinions of our time to draw such conclusions? Are we warranted in thinking that any of our moral judgements are true?
If you’re a devoted listener
to the podcast, you’ll recall that in Book 2 - Episode 9: “Ballpark Figures Part I,” in the midst of an extended discussion of the role of model-building in our moral calculations, I mentioned a conundrum that Stephen Metcalf—author, critic at large at Slate, and co-panelist on the Culture Gabfest—noticed with regard to the Woody Allen film Manhattan. (Admittedly, this is a banal subject, but in Steve’s telling, and as I used it in the podcast, it serves as a proxy for reappraisals of the sexual liberation and hedonism of second-wave feminism. The New York Times op-ed writer Michelle Goldberg seems particularly enamored of this topic.)
I watched Manhattan when I was 16. By that point, the movie was about 14 years old, and I grew up in a milieu in which ideas about sexual liberation—that old norms about sex were repressive and that “consenting” people ought to seek out pleasurable experiences—were ascendant. Now, I was raised Catholic and still hold to that faith, but (like everyone’s, I suppose) my practical ethical calculations were weighted by public mores. I found the movie sometimes shocking and embarrassing, but also clever and witty (at least as much as a 16-year-old kid can), and I took to heart the implicit message that, however badly the characters may have behaved, it wasn’t the sexual liberation that was bad. My embarrassment stemmed from norms that I “ought” to shed in favor of new, “better” norms.
Then roughly 25 years passed, and I gathered some experiences and matured and added lots of nuance to my beliefs, and I didn’t think much at all about the movie Manhattan. It only became salient again when I heard Steve point out on the Gabfest that—the heaps of praise it has enjoyed over the years notwithstanding—the movie is disturbing (especially considering Woody Allen’s ever more indubitable deviancy, but also just in itself): The Mariel Hemingway character is only 17 years old, meaning on purely legal grounds the story involves statutory rape. But legality is an abstraction. In practice, the relationship is worse still. At the beginning of the film, Allen’s character emotionally manipulates her, and at the end, when his other liaison has dried up, he pressures her to resume their affair even though she has other, more fitting things to do as a freshly turned 18-year-old. By the lights of most observers now, the message of “sex positivity” for the Hemingway character is absurd and morally repulsive. In the culture at large, Steve’s reappraisal has caught on.
The first thing I wish to register is admiration for my 16-year-old self, whose initial reaction had been that something was off about that movie, but whose intuitions were overruled by prevailing social values. Looking back, 16-year-old me got it right. Second, like Steve, I find the total turnaround of public opinion from unreserved valorization to villainization uncomfortable. If we are to take the moral reckoning of the MeToo movement seriously, then we need some grounds. If those moral grounds are culturally determined, then when our cultural mores change, on what grounds are our shiny, new cultural values better than our old ones? How does Drake, in his orange parka, decide when to turn away in wincing disgust or lift his head in delectation? At the level of social consensus, morality really does seem relative.
So, last year Steve and I chatted about these things for a couple of hours (I’m posting a recording of that conversation next week as a special episode of the podcast for paid subscribers). We both agreed that we really couldn’t abide thinking of morality as fungible. Steve argued that, back in the 1970s and until very recently—when the naive pursuit of pleasure dominated public wisdom—we were mistakenly thinking of moral negotiations as happening in a vacuum. Like Newtonian physics, sex positivity had a lot of explanatory power as long as we ignored many of the actual conditions under which we make decisions. The condition Steve claimed we were overlooking is the fact that sex is essentially an intersubjective power struggle. And the young, and women—and especially young women—are consistently the victims in that power struggle. Once we finally came to collectively acknowledge those facts, Steve claimed, we started seeing everything in a new and clearer light.
I believe there is some truth
to Steve’s theory. I think power struggles are an exasperating but unavoidable spandrel (a structure that arises as a byproduct of some other structural arrangement) of any human interaction. But I don’t like it when “power” becomes a big, menacing, yet amorphous Demogorgon that dominates all moral negotiation—the same thing that, as Steve himself has noted, most irked his mentor and erstwhile teacher Richard Rorty about lefty discourse: the penchant for
“locating the ‘ubiquitous specter’ known as ‘power’ everywhere, and conceding that we are without agency in its presence.”
I don’t think sex is essentially a power struggle. In my conversation with Steve, I came down as a hard believer that the individual conscience has to be the final arbiter of right and wrong. With my ideas only three-quarters-baked, I said the summation of Immanuel Kant’s various formulations of the categorical imperative was the only plausible tool offering a way out of the conundrum of moral relativism. Steve was game to follow this line of thought and said that if I was so committed to autonomy as the source of normativity, then I should reach out to Christine Korsgaard.
So after taking a deep dive and reading her The Sources of Normativity, Creating the Kingdom of Ends, and Self-Constitution: Agency, Identity, and Integrity, I did. She responded briefly and kindly answered a couple of my questions, but I don’t think she was interested in debating some wag about the movie Manhattan or taking time to sit through a podcast interview—which, she’s busy, I understand. So I’ll rely mostly on her writing.
Korsgaard’s entire project
revolves around a way of reading Kant’s moral philosophy, particularly the Groundwork of the Metaphysics of Morals. But before I can explain Korsgaard’s way of interpreting, I should clarify some of the foundations upon which Kant builds the Groundwork and that inform Korsgaard’s reading: In another episode of the podcast, Book 2 - Episode 10: “Ballpark Figures Part II,” I mentioned (1:54:11) Martin Heidegger’s concept of “thrownness” (Geworfenheit), describing it thus:
“At birth you are thrown into the absurdity of the material world. And by dint of your conscious experience, you cannot help but judge it normatively.”
That’s actually a few steps beyond what Heidegger is usually taken to be denoting with the word “thrownness” (although I follow Katherine Withy in her contention that it is all there by implication). Usually he is taken to be less focused on our decisions and more on the fact that we find ourselves “situated” in a particular place and time, with certain rules and cultural conventions and obligations, and with all of the suffering and privations but also privileges that ensue from those conventions. We sort of feel at home and fit where, by no choice of our own, we find ourselves.
It is weird that I was born in 1977, in Daytona Beach, Florida, of all places. (Yes, I am Florida man.) And I was born at a time without iPhones or the internet or cargo shorts. I could just as easily have been born in a Germany split in two. And we feel that weirdness even more intensely the older we get because it becomes more apparent that none of the conditions we were thrown into at birth are ineluctable or deterministic. After all, I do live in Germany now, but it’s unified and not split in two. And there are iPhones and cargo shorts here. And, by the way, back when I was born, the movie Manhattan seemed like a witty revelation, but now it’s icky. Weird. It’s the weirdness that’s important for Heidegger because he thinks it can tell us something (I’m never quite sure what) about the nature of being and becoming that he believes everyone since Heraclitus has gotten totally wrong. Meanwhile, correlative to that sense of orphaned thrownness is the possibility of freedom. And on this point, Kant was way ahead of Heidegger.
Kant’s thoughts about self-perception are legion; they are an essential aspect of the Transcendental Deduction (by which he tried to show that the Categories, or concepts, of thought necessarily apply to the contents of our experience); they are way beyond the scope of this essay; and frankly I’m just not even gonna. Suffice it to say that Kant believed we humans make moral decisions because it couldn’t be otherwise. We are aware of ourselves and our actions, we think about ourselves and those actions, and basically we are stuck making decisions under those conditions. Korsgaard’s articulation of this idea in the first paragraph (1.1.1) of Self-Constitution is succinct and good and worth quoting at length:
“Human beings are condemned to choice and action. Maybe you think you can avoid it, by resolutely standing still, refusing to act, refusing to move. But it’s no use, for that will be something you have chosen to do, and then you will have acted after all. Choosing not to act makes not acting a kind of action, makes it something that you do.”
So we are thrown into existence, and by the very nature of our self-reflection there is an “I ought to do A” or an “I ought not do B” baked into any action we will to take. When Bartleby the Scrivener replies to every request by the Master in Chancery with “I would prefer not to,” his personal motivations might be totally inscrutable, but just on the basis of being a self-reflective, conscious subject, he necessarily decides he ought to behave that way. We can decide his normative judgement is bad (which would also be a normative judgement on our part), or that he is mentally not well. But the fact that he freely decides he ought to do nothing is indisputable.
The real trick is explaining how we weigh those normative decisions we’re forced to make. And on this point, Kant—who was famously comprehensive in the delineation of his ideas—was a little wishy-washy. Before I can justify that assertion, however, I need to clarify some terms.
Other philosophers who came before Kant, in particular David Hume, had started noticing that there were multiple ways of understanding judgments. In fact, it was in opposition to Hume’s skepticism—which I’ll explain in a moment—that Kant created his own system, using the following terms:
1) The analytic/synthetic distinction: Basically, you make a judgement analytically if you just understand what the words mean and how they relate to one another. Analytic judgements are true by definition. If I say “A hydrangea bush is a plant,” I won’t have made any new discovery about the nature of biology. I will simply be explaining (at least partially) what the word “hydrangea” means. Meanwhile, we make judgements synthetically if our propositions add something new to the subjects we’re thinking about. If I say, “Most people like chocolate,” I can verify that with surveys and stats. But nothing about the words “people” or “chocolate” leads me to that conclusion.
2) The a priori/a posteriori distinction: If I understand or know something based on my experiences, then I know it a posteriori. If I understand or know something without having had any experiences to help me, then I know it a priori. In other words, with a priori judgements, I can wrap my mind around a concept without ever having to extrapolate it from experience.
Now, David Hume took a hard line in his belief that “a priori” and “analytic” were the same thing, though he didn’t use those words. He just called this form of human reasoning “Relations of Ideas” (p. 18). Thinking of this sort includes things like “the sciences of Geometry, Algebra, and Arithmetic.” You can be certain that 3 × 5 and 30 ÷ 2 both equal 15 because you understand the relations between the numbers. “Propositions of this kind are discoverable by the mere operation of thought,” Hume says. Meanwhile, on the other side, Hume believed that “a posteriori” and “synthetic” were the same thing, and he called this form of reasoning “Matters of Fact.” The tiles on my neighbor’s roof are black. Nothing about the words “roof” or “tile” tells you that these particular tiles are black. You couldn’t have known that without my telling you—or your seriously invading my (and Bruno’s family’s) privacy.
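An anachronistic aside of my own (Hume, obviously, had no such thing): modern proof assistants make “discoverable by the mere operation of thought” almost literal. Here’s a minimal sketch in Lean 4, where the kernel certifies both equations by pure computation, with no appeal whatsoever to experience:

```lean
-- Relations of Ideas, mechanized: each proof is nothing but computation
-- over the natural numbers; no observation of the world is consulted.
example : 3 * 5 = 15 := by decide
-- Natural-number division, same story.
example : 30 / 2 = 15 := by decide
```

Whether that computation really counts as “analytic” is, of course, exactly what Kant is about to dispute.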
This strict division into two parts is known in philosophy circles as “Hume’s Fork,” and it’s pretty intuitive. It feels right: I grasp claims a priori because they’re analytic. I just need to know the basic intension (or concept) of the words, so obviously I can draw a conclusion without experiencing anything. Likewise, I grasp claims a posteriori because they’re synthetic. I have to “synthesize” two pieces of information that I wouldn’t have known simply by understanding the concepts behind the words, so obviously I need to gather stuff from experience to draw a conclusion.
Kant wasn’t biting from Hume’s Fork. He first read Hume’s work sometime toward the end of the elder man’s life and stewed in disagreement for about 12 years, all while lecturing at the University of Königsberg, before finally writing the Critique of Pure Reason in a five-month fit in 1781. There he concedes as self-evident that there can be no a posteriori analytic judgements3 because if the concept alone necessitates your judgement, no experience is needed; and if it doesn’t, then you have to synthesize some predicate with your subject from knowledge gathered via experience. For example, you can only learn by spying on Bruno and family that their roof tiles are black.
But synthetic a priori judgements—now those were Kant’s bag. For one thing, Kant argued, we don’t know how operators like +, –, ÷, or = work simply by understanding the concept of any given number, like 15. The fact that it equals 3 × 5 is not essential to the concept 15. We must learn these truths synthetically—and yet, we understand them a priori. That is, there’s no experience that makes them true. Sure, you might accidentally click 3 under “quantity” at Amazon, and if you were ordering sets of five boxes of Lucky Charms, then (lucky you!) you’ve got 15 boxes. But you need to understand what 3 and 5 and 15 mean before you can conceive of those boxes as their material instantiations. When it comes to counting Lucky Charms, we’re a priori mathematically judicious!4 Or at least that’s what Kant thought. (And I’m with him on this.)
So far, all of this has been theoretical. Whether I ought to eat the 15 boxes of Lucky Charms sitting in my cupboard before they expire—or let my kids eat them—is an altogether different matter. Roughly 2400 years ago, Aristotle caught on to something important about the workings of human reason: In De Anima, he notes that action starts with desire and then proceeds to practical thought as it begins to contemplate pursuing that desire rather than pursuing some other desire (or pursuing them in a certain order, or abstaining from both). The important thing to note here is that the end of practical thought is the pursuit, not some object of that pursuit. Lucky Charms might be the object of my desire, and I might have concluded that they are desirable by coming to know something theoretical about them (namely that they are a delightful mix of grainy bits and vanilla-sweet, wispy yet chalky marshmallows that take on an even more agreeable consistency when doused ever-so-briefly in milk), but in desiring them, I change my mode of thought. The desire forces me to think practically about them—eating Lucky Charms is the end of practical thought, not knowing them or even tasting them. The end of theoretical thought is knowledge; the end of practical thought is action. Kant believed that synthetic a priori judgements would work in practical thought as well as in theoretical thought. And it’s here that his ideas grow wishy-washy like (and I don’t want to belabor the metaphor here, but actually I really kind of do) Lucky Charms that have been left in milk for too long.
We spent the second half of Season 2 of the podcast pointing out that quantification and formalization have seductive qualities. They promise insights that outstrip our intuitions.5 Sometimes formalizing the relations between things really does give us new insights, especially when we release formalization’s powers by means of mechanized algorithmic media—black boxes—that render results that are correct but, to us, inscrutable. The philosophers, logicians, neuropsychologists, and computer scientists who invest their energies in the promise of formalized logic are innumerable, and their guru (even if they disagree with him about everything) has always been Kant. Everyone who tries to pimp their common, everyday thinking with formalized propositional logic, and then further with predicate logic, is eating Kant’s dust.
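If you’ve never seen that pimping in action, here’s a toy of my own devising (the predicates and the name Frank are invented for illustration; nothing here comes from Kant or anyone else): the everyday inference “eating a stranger’s Lucky Charms violates their trust; Frank eats Bruno’s; so Frank violates Bruno’s trust,” dressed up in predicate logic, again in Lean 4:

```lean
-- A toy predicate-logic formalization; every name is made up for illustration.
example (Person : Type) (Eats Violates : Person → Person → Prop)
    -- Premise 1: eating anyone's Lucky Charms violates their trust.
    (rule : ∀ x y, Eats x y → Violates x y)
    -- Premise 2: Frank eats Bruno's Lucky Charms.
    (frank bruno : Person) (h : Eats frank bruno) :
    Violates frank bruno :=
  -- Instantiate the universal rule and apply modus ponens.
  rule frank bruno h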
For our purposes I shouldn’t be too expansive, but I’ll try to do justice to at least the contours of Kant’s explanation of how synthetic a priori practical judgements work, as well as how Christine Korsgaard understands his final steps—and why I can’t get on board with her.
In the Groundwork, Kant accepts a point, made by Hume (pp. 100–103), that while we sometimes consider events as unrelated sequences, other times we see clear relationships between them—i.e., causality: one event makes another event happen. And there’s something law-like about the nature of causality. When a reasonably comparable set of circumstances is given, we can expect the same effects to result every time. If you leave Lucky Charms in milk, they’ll turn soggy. This universality of causation is a law of nature (at least if we’re not looking at the quantum level). Kant argues that through our choices, we cause certain effects; by definition, to choose or will something is to cause an effect. So if the defining characteristic of causation is its universality—a law will necessarily be obeyed—then in making choices there must be a universal law we follow, too.
Obviously, the choices of practical reason are free and not causally determined: I can stand in front of my pantry filled with 15 boxes of Lucky Charms and decide to eat them. And yet, delicious as they may be, I can also choose not to eat them. Only a fool or willfully ignorant person would argue otherwise, and Kant was neither. Nevertheless, Kant thinks a choice can’t truly be a cause if it doesn’t entail obeying an unconditional natural law. If the law were conditional, then it wouldn’t be a law. I might avoid lying because lying won’t help me achieve some goal, or because it is against God’s will, or in order to go to heaven. But then those things would be the real reason guiding me and not the injunction not to lie. The injunction itself has to be the thing compelling me. It has to be a categorical imperative.
Prior to practical reason, I might have certain desires and beliefs about what will achieve those desires. Kant calls those not-yet-practically-reasoned-over beliefs “maxims.” For example, I might say, “Eating all of the Lucky Charms in my friend’s pantry is a great way to get Lucky Charms.” Whether I should follow that maxim is, in itself, unclear. I might say to myself that that is stealing, or something akin to lying. But who cares? What’s wrong with stealing? “Lots,” you’re probably saying, to which I reply, “Prove it,” to which you respond with a disapproving grumble. To your defense comes Kant with an attempt to formulate the Categorical Imperative as the ultimate law that supersedes and grounds all other laws of practical reason:
“Act only according to that maxim through which you can at the same time will that it become a universal law.”
This Categorical Imperative is a little bit (though not exactly) like the Golden Rule, or like when we say, “Imagine if everyone did that.” Kant wants us to imagine a world in which everyone who needs their fix of Lucky Charms just eats whatever they find, no matter whom it belongs to or how much it cost them or whether they were looking forward to eating them.
There are lots of ways we can envision this state of affairs as seriously sucking: for one, just imagine all the hunkering and hoarding of—and if you’re in America, let’s be honest, senseless shootings over—Lucky Charms that would ensue. It’s an even more ghastly scenario than the one with me hurling rocks at Bruno and his classmates every morning. But if you really think about it, not even that dystopian vision seems like it’s the thing compelling us to avoid wantonly eating other people’s Lucky Charms. It’s a further terrible result downstream from the more primary terrible result. The primary problem is that it would be a violation of the trust of, and lack of respect for, our fellow people, who are also just trying to make (hopefully good) choices. We feel like we owe it to other people not to violate them in their goals, just like they owe it to us.
But why? What is it about us that makes us sacred to our own system? Kant says that all choices have ends, and the ends of our practical choices must be the thriving and success of all people attempting to autonomously make good practical choices. Thus we create of ourselves a “Kingdom of Ends” in which we all help each other and we are the ends. That’s lovely and romantic, and to a certain extent I’m on board. But I have a lot of caveats. First, perhaps you caught a mistake in Kant’s reasoning. It’s basically a semantic problem: the ends of practical reason are not the things we desire. When the things we desire come to fulfillment, that is the end of our desire. As I said earlier, the end of practical reason is action.
That’s already a serious problem, but it’s not the worst one I’d like to register: for before Kant can even get to the Kingdom of Ends, he has to explain why we are each other’s “ends.” Again, what is it about us? Kant’s answer is, I have to admit (and to an extent that I never really realized before I went down this path of engaging with Korsgaard), absolutely baffling: the thriving and success of other people is desirable just because we are self-determining; and yet, because we have free will, we are the causes of certain effects and are therefore bound to universality because that’s how causation works.
I’m sorry, but that doesn’t make any goddamn sense. And I know I’m not taking crazy pills and am not the only one who sees a problem here, because in a comment on Korsgaard’s lectures in The Sources of Normativity, Thomas Nagel says much the same thing (maybe a little more urbanely):
“If the will is self-determining, why can't it determine itself in individual, disconnected choices as well as according to some consistent law or system of reasons? A neo-Humean regularity theory of causation seems an inappropriate model for free self-determination. If the idea makes sense at all, the free choice of actions which conform to a law is no more nor less a form of causality than the free choice of actions which do not. (And the same could be said of the free adoption of beliefs.) So far as I can see, choosing freely in a law-like pattern is merely a way of mimicking causality; if I always put on my left sock before my right, that does nothing to establish the causality of my will, so why does the categorical imperative do any better? There has to be something more compelling about the demand for universality than this.”
In her response, Korsgaard doubles down on identifying normative reasoning with causality by noting a common feature of both causes and reasons: viz., that they necessitate:
“What the normativity of reasons and the power of causes seem to have in common is that they are forms of necessitation: a cause makes its effect happen, and so necessitates it (all else equal); a reason for action or belief necessitates that for which it is a reason in another way, namely, it necessitates a person to act or believe as it directs (again all else equal).”
Does it, though? Korsgaard admits that maybe we could just accept the power of causes and the normativity of reasons, but she seems to intuit how sketchy the analogy is, so she appeals to the Humean idea that we couldn’t recognize causes without their being regular and law-like, and then argues that much the same thing has to be true of the normativity of reasons. But it isn’t recognition of our reasons we’re trying to establish! We’re trying to determine whether we’re warranted in following our reasons. Just saying that they necessitate amounts to nothing more than fiat.
And, besides, reasons aren’t always regular. Just look at how differently we evaluate the movie Manhattan depending on who we are, the prevailing public opinion of the time, and even at what point in our ever-maturing lives we make our judgements. Korsgaard’s argument is circular. She wants to establish the regularity and universal validity of our reasons by appealing to their regularity and universality.
It really couldn’t be otherwise, as that circularity is part and parcel of Korsgaard’s entire project. She wants to ground the universal objectivity of our normative reasoning in the fact of self-determination, while showing that our selves are determined by our having universal normative reasons:
“[W]illing is self-conscious causality, causality that operates in the light of reflection. To will is not just to be a cause, or even to allow an impulse in me to operate as a cause, but, so to speak, to consciously pick up the reins, and make myself the cause of what I do. And if I am to constitute myself as the cause of an action, then I must be able to distinguish between my causing the action and some desire or impulse that is ‘in me’ causing my body to act. I must be able to see myself as something that is distinct from any of my particular, first-order, impulses and motives, as the reflective standpoint in any case requires.”
God love Christine Korsgaard and her brilliant erudition, which I’ve come to discover has added so much to our thinking about ethics and morality. I salute her for making us think deeply about the universality of our reasons and the role that self-reflection has in our achieving that universality. But this grounding of universality in the integrity of the self seems like an impasse. That just isn’t how self-reflection works. As Korsgaard herself writes, “Thought traffics in the general—the human mind traffics in the general.” I think Thomas Nagel captures this power better. The very act of reflecting on ourselves splits each of us into two people: 1) the person in the world making decisions, the one who is a fact of the world just like Lucky Charms and Bruno’s family’s black roof tiles; and 2) the person who sees that first person as one of the things in the world, and especially as the kind of thing endowed with the possibility of making choices. Person 2 can consider what any person who might be in Person 1’s place ought to do (to wit: not throw rocks at Bruno and only eat—and not too much!—their own Lucky Charms). In Nagel’s words:
“The capacity for self-consciousness changes the nature of the being who is making the choices . . . by introducing irrevocably the distinction between appearance and reality, between how things seem from our personal point of view and how they really are—and facing us with the need and perhaps providing us with the capacity to arrive at [emphasis added] an answer that can be seen to be right not just from our individual point of view but from the reflective standpoint that takes that view as its object.”
I italicized the bit about “perhaps providing us with the capacity to arrive at an answer” because this is a possibility Nagel has always been a little chary about pursuing too far, although it’s a possibility he’s more willing than many philosophers to entertain.
I think we do have that capacity. And in Part 2 of this investigation into where that capacity comes from, if it exists at all, I’d like to drop the Kantian line and follow a whole other school of philosophers who feel a lot more comfortable accepting that capacity because they think it’s simply part of our nature. I have a feeling that they might get us closer to knowing what to do with our normative concerns—including all of these proposals from the Effective Altruists, but so very much more, as well.
And make no mistake, EA is ineluctably utilitarian. Brian Berkey has argued (p. 9) that this isn’t necessarily the case, but methinks the Effective Altruist protesteth too much.
This isn’t actually self-evident at all, as Saul Kripke (maybe) showed in Naming and Necessity: there are certain instances of necessary (analytic) a posteriori truths, like “Water is H2O.”
That joke is dedicated to Mike Pesca.
Somewhat paradoxically, many also seem obsessed with showing that formalized logic is the hidden basis of our intuitions in the first place; that our conscious apprehension of that logic supervenes on it as an epiphenomenon or byproduct—whatever that means.