41 Comments
Felten De Meulenaere:

Dear Maarten,

Allow me to begin by expressing my gratitude for the clarity and provocation of your reflections. Few things are more valuable in philosophy than a thesis that unsettles our intellectual habits, for it is through such disturbances that thought is compelled to refine itself.

I have long suspected that our treatment of logical fallacies has acquired a moral tone it does not deserve. To accuse a man of committing a fallacy is often less an act of illumination than one of quiet condemnation. Yet error, properly understood, is not a vice. We must be cautious not to confuse being mistaken with being morally deficient, nor truth with goodness. The history of thought shows us repeatedly that progress is made by those willing to err in earnest rather than by those who merely police the errors of others.

There is, moreover, something valuable and meaningful hidden within imperfect reasoning. Human thought is not a geometric proof; it is exploratory, tentative, and frequently inconsistent. If we were to reject every argument that bears the mark of logical imperfection, we would discard much of what has guided inquiry forward. Pointing out each logical misstep does not, by itself, teach us how to reason well. It may instead cultivate a sterile cleverness — the ability to win arguments without advancing understanding.

One might also recall that there exist demonstrations suggesting that no logical system can achieve complete self-sufficiency. If completeness itself eludes our most rigorous formal structures, it would seem rather arrogant to demand flawless coherence from ordinary human reasoning. To morally judge others on the basis of such imagined completeness neither improves our character nor strengthens the process of reasoning; it merely introduces anxiety where intellectual courage ought to reside.

There is another danger. The naming of fallacies can become a rhetorical weapon — a means of closing conversation rather than opening it. When the aim shifts from pursuing truth to securing victory, dialogue withers. To refute another’s supposed thinking errors is no guarantee that our own reasoning is thereby purified. Intellectual humility requires us to remember that the exposure of error is only valuable insofar as it invites further inquiry.

Let us therefore resist the temptation to turn logic into a tribunal. Better, I think, to regard reasoning as a cooperative venture in which fallibility is not only inevitable but indispensable. We learn not by pretending to be infallible, but by remaining open — both to correction and to the partial wisdom contained in views not yet fully formed.

Thank you for your insights, Maarten. They remind us that philosophy is at its best when it encourages courage in thinking rather than fear of being wrong.

With sincere regard,

Maarten Boudry:

Hi Felten, thanks a lot for this generous reflection! I agree with much of what you say, especially the warning against turning logic into a moral tribunal. But my argument goes one step further. It’s not just that we sometimes wield fallacy labels in a punitive spirit, but that the very taxonomy encourages us to misdiagnose how reasoning actually works. When we shoehorn reasoning into rigid categories, we risk mistaking ordinary, adaptive reasoning for a game of logical deduction. You’re also right that imperfect reasoning often contains something valuable. Much progress emerges from tentative, incomplete, and inconsistent ideas that are gradually refined.

Felten De Meulenaere:

Interesting — I shall read and reread.

That was my very first impression when I encountered the text. It did not overwhelm; it invited. It did not demand agreement; it stirred reflection. Even on a quick first reading — not a careful analysis — I sensed layers beneath the surface. Certain phrases lingered. Certain ideas echoed longer than expected.

I felt like sharing this phenomenon: the feeling of a text that changes slightly each time you return to it — not because it changes, but because you do.

barry milliken:

Human history is dominated by mass delusions of two kinds. POLITICAL delusions are based upon exaggerated fears that can be blamed on that tribe over there. RELIGIOUS delusions are based upon exaggerated fears that can be blamed on our own imagined collective guilt.

Both are driven by true believing zealots striving for moral status and the power to enforce. All tyrants believe that they are on the side of the angels.

These delusions are not the result of the standard logical fallacies (except self-contradiction). Instead they arise from false premises that are often assumed and unstated.

My guess is that the most damaging false premises are the intuitions that life is zero-sum and that resources deplete. Instead, humans earn wealth by creating it, and they invent new resources from formerly useless stuff.

Maarten Boudry:

Yes, exactly. Psychological biases (such as zero-sum intuitions or misconceptions about the finitude of resources) often lie at the root of irrational beliefs. But I’d also hesitate to label them “fallacies.” After all, some situations in life genuinely are zero-sum. The problem is not the intuition itself, but its overextension: we tend to perceive zero-sum competition even in contexts that are positive-sum, especially in markets.

Pascal Boyer has written a fascinating paper on intuitive economics and the ways it can systematically mislead us. It’s a good example of how cognitive biases, rather than formal logical fallacies, often explain where our reasoning goes astray.

Craig Nishimoto:

"In my experience, fallacy theory is not just useless—it can be actively harmful."

I see this too, but I wonder if the informal fallacies are still usefully taught as yellow flags, rather than red flags. So-called informal fallacies are just common ways of falling short of reasoning ideals. Only some of them have names, and calling them "fallacies" misleads many midwits. But it is helpful to identify and name the common ways of falling short.

Maarten Boudry:

That’s a reasonable compromise, and I’m happy to concede part of it. I don’t object to teaching common patterns of weak reasoning. As you say, they can function as yellow flags: “slow down here,” “check the relevance,” “are we overextending this inference?” Used that way, they can sharpen awareness rather than shut down discussion. As I noted, in my experience, ad hominem reasoning is often symptomatic of weakness. When people have strong counterarguments to the substance of what you're saying, they usually deploy them. When they don’t, they tend to resort to motives, character, or credentials. That pattern itself is psychologically revealing — but it’s not a formal logical violation. It’s more an indication that the substantive terrain is hard to engage with. But we have to get rid of the label "fallacies," which inevitably suggests a neat, context-independent logical defect. Most of the time, the real question isn’t “Which fallacy is this?” but “What exactly is wrong here — evidentially, causally, probabilistically?”

Adam Reith:

No discussion of logical fallacies can be complete without mentioning Max Shulman's short story "Love is a Fallacy". A college student teaches his girlfriend all about fallacies and lives to regret it.

Read here: https://www.northiowa.org/wp-content/uploads/2018/04/Love-is-a-Fallacy.pdf

Maarten Boudry:

Thanks, I was not familiar with that story! I'll check it out.

RichinPhoenix:

My head hurts after reading and thinking about all of this. That should prove something, but I don’t know what. I would say it proves I’m stupid, but I’m a Mensan, so now I’m really confused.

Maarten Boudry:

Haha, I'll treat this glass as half-full and take your comment as a compliment!

RichinPhoenix:

I did enjoy your writing and analysis.

Lars Harhoff Andersen:

Nice article! I wrote an article a few years ago arguing that fallacies are a bad way of thinking about reasoning, drawing on similar arguments, if that might interest someone here.

https://unreasonabledoubt.substack.com/p/why-logical-arguments-are-bad-arguments

Maarten Boudry:

Thanks, that looks very similar in spirit indeed! I'll check it out.

Tobias Leenaert:

Thanks for this interesting read.

I think I'm still grappling with the exact nature of the problem here.

Obviously the actual fallaciousness is, as you say, in the claim about deductivity (when made).

So I think fallacies do exist, but we usually point to the wrong thing when we use the concept: we treat the heuristic as the fallacy, rather than the unwarranted leap to certainty?

(Sorry if this is clumsily phrased - I’m not a philosopher, just trying to articulate something that feels important here.)

Maarten Boudry:

I think you’ve put your finger on an important point. To the extent that real-life arguments of the types I discuss really deserve the name "fallacy", it's indeed because of the unwarranted leap to certainty. But my point is that such leaps are far less common than fallacy buffs imagine, because few people make deductive arguments intending full certainty. Most of the time, of course, they don't spell out the intended force of their argument in the first place, and we have to rely on (charitable) reconstruction and interpretation. They are speaking loosely, provisionally, rhetorically. When we diagnose a “fallacy,” we often turn a defeasible and weak argument into an artificially strong deductive form that they didn't (clearly) intend— and then knock it down. That’s part of what I’m resisting. So yes, if someone genuinely makes an unwarranted leap to certainty — "Trump MUST be wrong about everything because he's an awful human being" — then that’s a genuine error. My point is that this is much rarer than fallacy-talk suggests. More often, we’re dealing with gradations of strength, background assumptions, and context-sensitive judgments.

James Hammerton:

One more thing: I suspect the Chewbacca defence really is a fallacy. :-)

James Hammerton:

Nice explanation of why the logical fallacies are less important in the messy & uncertain real world than in the strict logical form they're presented in. The problem here is the reliance on strict deductive logic, where an argument for X either succeeds or fails (no middle ground) at proving X, based on whether X can be deduced logically from the premises, and assuming agreement on those premises.

It's not that it's an invalid form of reasoning, but that it is a very strict form of reasoning requiring a high burden of proof. Constructing a strictly deductive argument before you believe a proposition thus requires a lot of effort and time. Life is far too messy, too uncertain and too short for us to rely solely on deduction as a method of reasoning.

We thus tend to rely on inductive reasoning, the things we've experienced, and people we trust. That X appears to be true (my teacher told me, everyone seems to believe it, it's in accord with my experiences, these articles and the data they cite seem to confirm it, etc.) may be good enough for us most of the time to treat X as being true, but we might change our mind on X should sufficient new information come to light (the author of the article has had to retract it due to flaws in the methodology, new evidence blatantly contradicts X).

As you say, in reality the fallacies often carry less weight/certainty than is normally claimed for them and can lead to lazy dismissal of things simply because it looks like one of the fallacies has been committed. I think this is why people sometimes suggest steel-manning your opponents' arguments rather than simply pointing out the logical weaknesses they may have committed and leaving it at that.

If a lot is riding on whether X is really true, then yes, we can set a high bar for proving it, reviewing the arguments and evidence for X, checking for weaknesses, etc., and in this context the fallacies may help us identify where the weaknesses/gaps lie that need to be addressed to settle the issue. Even here, the results of such an exercise will always be provisional (new information may come to light that alters things), as is the case in any scientific endeavour.

We don't have time (or expertise) to give deep consideration to every issue that might confront us, so we should choose our battles, be willing to hold our beliefs on a provisional basis, and be open to the possibility that we might need to change our minds on them.

Maarten Boudry:

I think this is exactly right, and you’ve captured something important about cognitive economy and different epistemic standards under different constraints.

A scientific seminar or courtroom sets a high bar when it comes to checking sources and weighing evidence, leaving less room for arguments about the personality and character of claimants (though some of that will still be inevitable). But most of our real-world reasoning labors under different constraints. We don't have time to double-check sources or verify reasoning steps, so we have to rely on proxies and heuristics. We accept X as true because trusted sources endorse it, because it fits our broader picture, because it seems plausible. That’s not sloppy thinking; it’s adaptive thinking under constraints.

You’re also right that the stakes matter. If a lot rides on X (a medical decision, court verdict, a major policy) then we raise the evidential bar and scrutinize the reasoning more carefully. We write about that in our paper as well.

So in sum: different contexts impose different demands, time constraints & epistemic standards. That variability is precisely why I’m skeptical of a one-size-fits-all fallacy taxonomy. The real question is rarely “Is this a fallacy?” but rather “Given the stakes, the time available, and the evidence at hand, how much stock should we put in this argument?”

Jordan Raymond:

It's funny: I searched and reread all your original pieces about this ten days ago! I think I agree even more with you now than I did years ago. Thank you!

Maarten Boudry:

Thanks a lot! Yes, I was not entirely happy with what I wrote back then, so I decided to do a complete overhaul of the essay.

John Wilkins:

In the 1823 Elements of Logic, Bp Richard Whately noticed these issues clearly. My own take is to attack the taxonomy of fallacies. Asking “how many fallacies are there?” shows the problem. How many ways are there to make a deductive error?

Thanks for this post.

DC Reade:

The exact number of logical fallacies is immaterial. The fact that the taxonomies of logical fallacy can vary in their fine points, and thus in their enumeration, does not impeach the very idea of their existence. There's a broad consensus on the definitions of the most commonly employed fallacies.

Verbal fallacies are most often errors of inference, errors of reference, or language constructions that produce unclarity of meaning, shoddy narrative framing, and/or evasion of the relevant questions related to the topic at hand. Verbal fallacies are inherently vulnerable to being detected and exposed. That's what's crucial. In my experience, when a fallacy exists, most of the time its presence is unambiguous.

John Wilkins:

Let me try an analogy: how many ways are there to make a mathematical error? There are a limited number of operators but an indefinite number of mistakes.

Also there is good research and argument against teaching fallacies anyway. It focuses students on insulting opponents rather than dealing with arguments

I’ll see if I can locate the citations

DC Reade:

"Let me try an analogy: how many ways are there to make a mathematical error? There are a limited number of operators but an indefinite number of mistakes."

I don't use logical fallacy detection to argue about abstractions, or hypotheses, or conjectures, or counterfactual speculations. I apply it to real-world situations.

"Also there is good research and argument against teaching fallacies anyway. It focuses students on insulting opponents rather than dealing with arguments"

There's no requirement to involve personalities when detecting logical fallacies. The fallacy is found in the argument, not the person propounding it. I routinely practice logical fallacy detection to screen my own positions for flaws. It doesn't involve insulting myself, or self-hatred.

I realize that it's possible to do fallacy detection badly, confusing the fallacy with the person espousing it.

I think some of the confusion regarding the utility of logical fallacy detection in the discussion has to do with perspective: in my observation, fallacy detection has less utility when reviewing research studies than it does when considering arguments on political policy. Clinical research conclusions are sometimes undone by the presence of logical fallacy biases, but more often their problems are related to a paucity of data findings, or a lack of a pre-existing foundation of knowledge on a newly recognized phenomenon or specific topic of research. Incomplete accounting for confounding factors also has a way of undercutting research findings and leading to conclusions that are dubious or erroneous. This accounts for why responsible researchers are inclined to phrase their conclusions cautiously, and to make explicit note that their findings are preliminary and require more investigation and confirmation before they can be accepted as "strong" or authoritative. This is simply sound reasoning: they're avoiding the pitfall of asserting a confident interpretive conclusion prematurely when the data is inadequate, thereby assuring the integrity of their findings by not overclaiming.

The venue where logical fallacy detection assumes more importance relates to the way that the cautiously phrased interpretations found in the original study are picked up, summarized, and reported to the public in the news media, i.e., so-called "pop science." Time and again, I notice sloppiness and misinterpretation: the reportage seizes on one data subset and highlights it, or exaggerates study conclusions that are actually much more tentative, or it uses sensationalistic language to draw in readers, etc. Such reportage almost always relies on the resort to logical fallacy in the process.

The realm of legal advocacy and politics is even worse. Especially American politics, sadly. There's no equivalent to the UK Prime Minister's Questions in American politics. British parliamentarians are not above indulging in logical fallacies in those exchanges, but--even with the shouting!--the proceedings are models of decorum and probity compared to the press conferences given by American presidents or their press secretaries. Hearings on issues in the US Congress are very often dreadfully larded with fallacy-heavy rhetoric and light on factual disclosure. But the arena where informal logic fallacies are most rampant is any issue debate that demands lockstep partisan loyalty at the expense of factual appraisal, logical consistency, and values integrity.

Nicolas Fricia:

I must admit, I remain very unconvinced.

First, one motivation for the article is the claim that fallacies reside primarily in textbooks rather than real life. I have a hard time understanding whether this claim is even falsifiable. Just because you may have a hard time finding logical fallacies out in the wild does not mean I do. I would argue I encounter them frequently. But to claim they are rare or non-existent is pretty much impossible to prove or disprove. The scale of generalization is quite extreme.

Second, the idea that fallacies should be abandoned because they are carelessly thrown about by students or in a TikTok video only proves that people need to do a better job of making arguments for why a fallacy is a fallacy in each case. It is not enough to just call something a fallacy. Besides, I generally view fallacies as capturing the intuition of explaining how an argument is wrong. You need to use reason to argue how the fallacy is relevant or irrelevant for dismantling someone’s claim.

Third, for example, the ad hominem fallacy is concerned with attacking someone’s character, motives, or some other personal attribute when the attack is irrelevant to the argument itself. It is considered fallacious when it is IRRELEVANT to the debate or discussion. Obviously, in places like court, personal character can matter a great deal for things like the credibility of witness testimony. In this case, remarks on personal character are relevant, and thus not fallacious.

Fourth, regarding the Strict Definition problem, I do not think you provide examples of strict definitions that paint the strongest possible case for the fallacy. Continuing with ad hominem, you defined ad hominem in the loosest possible way: "The principle seems simple: when assessing an argument, you should attack the argument, not the person." That, I would argue, is not a correct description of ad hominem. You forgot the crucial point: ad hominems are concerned with relevancy. Ad hominems are concerned with irrelevant personal attacks that do not engage with the argument. I have never seen someone consider questioning or attacking someone’s personal attribute(s) as fallacious after it was argued to be relevant in assessing the merits of an argument or claim.

The pharmaceutical research example captures the idea that the possible conflict of interest between the researcher and the funder of the research is a legitimate concern because it is relevant to the outcome of said research. Hence, this is not an ad hominem fallacy. The key requirement, relevancy, was satisfied.

Do I encounter my definition of ad hominem in real life? Yes! PM Carney gave an excellent speech at Davos, signaling that the rules-based international order is essentially finished. He said many times that great powers (such as the US) act with impunity and disregard international law without any accountability. He made a great speech. How did Trump respond? He attacked PM Carney’s character.

Instead of addressing any of his arguments or concerns, Trump said, "Canada gets a lot of freebies from us, by the way. They should be grateful but they’re not. I watched your prime minister yesterday. He wasn’t so grateful. But they should be grateful to us, Canada. Canada lives because of the United States. Remember that, Mark, the next time you make your statements.”

He attacked Carney's supposed “ungrateful” motivations instead of addressing any of his arguments or concerns. Such personal attacks are irrelevant to anything PM Carney said (not to mention the ominous “they live because of us” statement…). You may say, oh, this is just one example, but I would say this is one example from arguably the most powerful person on the planet, so this one example is quite significant. Such disregard for Carney’s points, and such speculation about how supposedly ungrateful he is, is certainly a fallacious response, no?

Maarten Boudry:

First, my claim about rarity is perfectly falsifiable: just present me with reams of clear-cut fallacies from everyday life. But notice what I point out in the essay: textbooks and educational videos almost invariably rely on toy examples. That in itself suggests that genuine real-life cases are harder to find than people assume. My point is comparative. Apply strict deductive standards and clear-cut cases become rare. Relax the standards to capture real-life reasoning, and things become context-sensitive and defeasible — and the “fallacy” label no longer cleanly applies.

Second, I agree that merely calling something a fallacy is not an argument. That’s precisely my concern. The taxonomy encourages the illusion that slapping a label on a move explains what’s wrong with it. The issue isn’t just TikTok misuse; it’s built into the approach itself.

On ad hominem: you’re right that relevance is key. But that’s exactly the problem. Once “relevance” is built into the definition, the category becomes entirely context-dependent. Whether a personal attack is fallacious now depends on assumptions about credibility, incentives, expertise, and background conditions. At that point, we’re no longer dealing with a neat formal error but with a substantive judgment about context. Formally speaking, relevant and irrelevant ad hominem arguments look identical. The label adds nothing. The real work lies in showing why the consideration is irrelevant — probabilistically, causally, or pragmatically.

Your Trump example is useful. Is it likely a weak and irrelevant personal attack? Probably; Trump is one of the worst arguers on the planet. But even there, defenders could argue that motives and gratitude bear on diplomatic credibility or strategic posture. I’m not endorsing that move; I’m simply pointing out that the boundary between “relevant” and “irrelevant” is rarely as sharp as textbooks imply. Formally, if the Iranian prime minister gives a speech about human rights and you attack him for blatant hypocrisy rather than dealing with his arguments, that would look structurally identical to Trump's attack on Carney. But it would be totally appropriate. What differs is our judgment about context.

So I’m not denying that bad arguments exist. I’m questioning whether the traditional fallacy framework is the best tool for diagnosing them. Often what we really need is a discussion about evidence, incentives, context, and probabilistic assumptions — not a crude classification based on form.

And I do appreciate you engaging seriously with the argument.

Knifepoint:

"The pharmaceutical research example captures the idea that the possible conflicts of interest between the researcher and funder of research is a legitimate concern because it is relevant to the outcome of said research. Hence, this is not an ad hominem fallacy. The key requirement, relevancy, was satisfied."

The pharmaceutical example is also ill-defined and hides behind a lack of specificity.

What exactly does it mean to take something with a grain of salt? Does it mean that we should place some high burden of proof upon it? Or that we should narrowly examine the specific claims? Both of these are fine suggestions, and they are questions that can be addressed within the context of the study itself and the evidence presented. (You might respond, tho, that every study should be closely scrutinized.)

On the other hand if it is suggesting that we should dismiss the study purely because we are suspicious of the people conducting it - and this is a claim that people will make very often! - then I would argue that is indeed an ad hominem fallacy. It makes no specific claims against the argument and as such cannot actually be refuted - it relies solely upon the suggestion of impropriety against the person to dismiss the argument.

I've heard people claim this isn't really a fallacy because as a matter of rhetoric or practicality you don't need to engage with arguments you suspect to be in bad faith. But that's because these are fallacies of formal logic, not fallacies of rhetoric or practicality.

Maarten Boudry:

That’s a fair challenge, and I think it actually reinforces my point about context. What it means to “take something with a grain of salt” depends heavily on the setting. In a peer-review seminar, you might scrutinize the methods, check the data, look for p-hacking, and examine whether funding could have biased design choices. In a public debate, or when deciding whether to change your medication, you simply don’t have the time or expertise to re-run the statistics yourself. You rely, inevitably, on credibility cues. In high-stakes, time-constrained contexts, background information about incentives and conflicts of interest becomes epistemically relevant. Not because it refutes the study deductively, but because it affects the weight you assign to it. That’s not a formal disproof; it’s Bayesian updating under practical constraints. So again, how much weight we attach to “ad hominem” reasoning depends on what we take to be the relevant demands of the situation. That’s precisely why I’m skeptical that the fallacy label, by itself, does much explanatory work.

Knifepoint:

I think that's a very fair response, and I get the impression we pretty much agree on the question of context. Fallacies are an aspect of formal logic, certainly not broad intellectual guidelines as people often attempt to use them. They're also very difficult to pair with the sort of Bayesian analysis you bring up, because “maybe” is not something you can solve with formal logic (and in practical terms Bayesian analysis is often more useful for exactly that reason - most of the things that truly matter to us wind up being maybes).

I do still think fallacies are valuable tho, and my preferred solution would be that we do more to teach formal logic and its limitations (and the alternatives) than that we throw out the fallacies themselves.

DC Reade:

Nope. If logical fallacy detection has no role of importance to play in refuting an argument, you've ceded the power to indulge in them to your adversaries. Imagining that the problem can be finessed by allowing yourself or the position you support to employ them in return is self-deception.

If you don't know how to effectively employ logical fallacy detection to debug weak positions and bad arguments, you haven't practiced enough. Logical fallacy detection is a skill that requires reflection and discipline. The best way to achieve competence is to practice by debugging your own points of argument, and screening your own positions for their presence. Continually.

Acquaintance with general semantics principles is also imperative. No need to get into the tall weeds of it. Just develop the skills to know when and how categorical labels are being applied inaptly: inaccurately, inappropriately, manipulatively, pejoratively. Labels have to serve the purpose of adding clarity, not the murkiness of semantic noise. Learn how to construct an accurate analogy and how to detect one that hits false notes. Acquaint yourself with the concept of the "Is of Identity", and be aware of its pitfalls as a linguistic convention. The verbal reframings of E-Prime usage are cumbersome in conversation, but E-Prime offers a lot of clarification when reviewing text, and also for precision in writing.

Mani:

Perhaps weak arguments do not fall so easily into distinct categories of fallacy, but I do not think that this is so much the case as to then conclude that they do not exist at all or that it is harmful to review them. I would say I encounter, in regular life, people who do make textbook examples of fallacies, but also many cases that are not entirely in a neat box, where it perhaps does not render their point worthless but at least weakens its contribution to the main argument. A lot of the time, I would say, they do not fall into neat boxes because the claim falls into more than one fallacy.

The genetic fallacy is one of the ones that is more suspect to me. In many cases it does apply, say for one to be against anti-tobacco laws because they started in Nazi Germany, or to criticize the policies of the modern Democratic Party in the United States because in the 1850s it supported slavery. But there are many cases where tracing the historical roots of some modern-day thing can certainly lead to insights today that can affect one's decision-making on that issue. For example, there are many political parties in Europe that were originally explicitly neo-Fascist but have over time softened their stance, and it is perfectly reasonable to cite their earlier positions as an argument that, if they were to achieve power, they would push policies that reflect their origins.

The example with the ad hominem in this article, though, focuses on scientific issues, when the concept is more often applied to politics. Of course it is reasonable for someone to dismiss the claims of a new-age spiritualist who claims to have cured cancer through vibrations, as opposed to those of someone who has studied medicine all their life, even if one does not know anything about medicine oneself. However, if someone responds to another's claim that the government should raise taxes on the wealthy with 'they are just disgusting hippies', or, vice versa, responds to those who want to lower taxes with 'they are heartless monsters', it is perfectly reasonable to call these argumentative fallacies, because they attack the person rather than the argument. The vast majority of people today would agree that to bring up how Martin Luther King Jr. was a serial adulterer as an argument that African Americans do not deserve equal rights in the United States is fallacious due to ad hominem reasoning, and without any doubt they are correct.

I would say that today the bigger problem than the use of fallacies is the lack of ground rules for discussion. One issue I come across all the time is that someone initiates a specific argument you had no idea you were going to have, so you are unprepared, without exact sources in hand; then they release a cacophony of information about it, of which you recognize about half is nonsense and even more is irrelevant, without giving you a chance to speak, and then they are immediately on to the next issue, before you are able to address the majority, if any, of your objections. Then all they want to do is hear themselves talk and confirm in their heads that they are completely right about everything. Another issue today is that changing your opinion, or admitting that you are not familiar enough with an issue to form a view on it, is seen as an admission of defeat, as if you 'lost' the discussion. You have to have an opinion about absolutely everything, and you cannot even associate with people who disagree. It may vary from country to country, but that is part of how I feel.

Hannah Arendt's The Human Condition, as well as some of her other essays, makes many interesting points about the political implications of our changed relation to knowledge in the modern world. We constantly rely for our existence upon things that the vast majority of people do not understand (GPS, vaccines, radio, etc.), involving things that are not apparent to our senses (general relativity, viruses, radio waves), when the Western world had only just come out of an epistemology that was entirely derived from the authority of the Catholic Church and a faith that placed humanity firmly at the center of existence. She discusses how this plays a key role in the general sense of rootlessness that exists today, which, decades after her death, has become even more apparent.

Matthew Kilcoyne:

The real insight you're circling is that reasoning is probabilistic and contextual, not deductive — and formal fallacy theory pretends otherwise. But the solution is not to abandon the vocab but rather to upgrade the operating system. Fallacies aren't bright-line rules but instead Bayesian priors about where arguments tend to go wrong.

"That's a slippery slope fallacy" is the move people use to avoid engaging with whether the slope is actually slippery. The answer isn't "slippery slopes don't exist" — it's "show me the coefficients of friction."

Michael Smith:

I've been converted, at least, away from rejecting 'slippery slope' arguments out of hand. Bitter experience has taught me that 'y does not logically proceed from x' is a poor reason for expecting y not to follow x in real life.