Fallacies, useful to know for writers.

But the thing you were looking at, the thing that led you to declare that you saw a cow, was in fact a cut-out. If your statement was 'that field has a cow in it', then you have a true statement, but it's true only by chance. If there had been a deer behind the cut-out, your statement would have been false. I don't see much trouble with the proposition.

The video was rather hit-and-miss. Some things were in fact logical fallacies, others were not. But hey, she made it to twenty-one, one way or another.
 
Ah, I've misremembered this: the question is not 'do I have the truth?' but 'do I know there is a cow in the field?' It was a challenge to the definition of knowledge as justified true belief: Gettier problem - Wikipedia
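
Purely as an illustration (this toy model is my own, not from the video or the article), the classical 'justified true belief' account and the way the Gettier cow case slips past it can be sketched in a few lines:

```python
from dataclasses import dataclass

# Toy model (my own illustration, not from the video or the article) of
# "knowledge = justified true belief" (JTB) and the Gettier cow case.

@dataclass
class Belief:
    claim: str                 # the proposition believed
    true: bool                 # is the proposition actually true?
    justified: bool            # does the believer have evidence for it?
    evidence_veridical: bool   # does that evidence track the fact making it true?

def knows_jtb(b: Belief) -> bool:
    """Classical JTB account: a justified, true belief counts as knowledge."""
    return b.true and b.justified

def knows_stricter(b: Belief) -> bool:
    """One hypothetical patch: the justification must also connect to the truth."""
    return b.true and b.justified and b.evidence_veridical

# The observer sees a cut-out, but a real cow is hidden behind it:
cow_case = Belief(
    claim="there is a cow in the field",
    true=True,                 # a real cow is (coincidentally) there
    justified=True,            # seeing a cow-shaped thing is evidence
    evidence_veridical=False,  # the 'cow' actually seen is a cut-out
)

print(knows_jtb(cow_case))       # True  - JTB calls this knowledge
print(knows_stricter(cow_case))  # False - the Gettier intuition says otherwise
```

The extra 'evidence must track the fact' condition is just one possible repair, and of course philosophers have found counterexamples to that sort of patch too.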
 
One of the most prevalent right now is the appeal-to-authority fallacy.
Particularly "It said on the news that..."
Or worse, the nebulous "Experts say...", when the experts are unnamed and may not even be experts, simply people with a bias giving their opinions.

I've always been puzzled by what the difference is between 'Appealing to authority' - which appears to be a bad thing, and 'Citing your sources' - which appears to be a good thing.

Anyone care to enlighten me?

EDIT: Though I notice that 'Appealing to authority' is, in the video, referred to as 'Fallacious Appeal To Authority' - the implication being that the authority is fallacious, not the appeal; i.e. citing someone as an authority on a subject when they are not.

Is that it?
 

Appeal to authority: I'm a professor at Harvard, I'm a world expert on video games and kids, and I think you shouldn't let your kids play video games (because my gut tells me it's bad, but I won't admit that on camera).
Citing your sources: I'm Joe the plumber, and I don't let my kids play video games because I read three papers in PNAS (here they are) that showed video games worsen ADHD.

Now, my big beef is that "appeal to authority" often pollutes published sources ("citations").

Reviewer: Hey editor, paper looks OK. (The results of this paper don't match the conclusions, and there are some gaps in the controls, but who am I to argue with the big professor from Harvard who's been publishing papers on this topic for 20 years, and make her my enemy?)
Editor: Okey Dokey. (This paper will get lots of clicks and citations and make my journal look cool.)

And even worse

Reviewer: Hey editor, this paper is bad (The results go against what I've been publishing for 20 years, and I can't actually find a problem with it, but I'm the big shot here, so I say no.)
Editor: Okey Dokey. (I don't want to piss off the big professor from Harvard and get on the bad side of the academic gossip train.)
 
I don't believe there is a chair in the other room. There was one there when I left the room a moment ago but I can't verify it is there now until I return to that room. And then I won't believe there is a chair in this room.
Seems a trip to a seat of learning is in order....
 
oops. wrong thread
Thought it was related... shows what I know. It reminded me of radio shows where they do the "...X just tweeted in to say 'random outlandish unverified stuff that backs up what I'm saying'", "oh right, I didn't know that, fair enough" thing. It always seems a bit daft and against the spirit of logic.
 
That reminds me of an argument about the nature of truth: I am looking at a field. In the field I believe I see a cow. Therefore it's reasonable to believe the field has a cow in it.
The cow is a cut-out of a cow. But, behind that cut-out, completely concealed, is.... an actual cow.
So... what I believe is in fact so. But my reasons for believing it are based on an illusion. Therefore the question is: Do I have the truth, or not?
Ask not whether you have Truth. Ask whether Truth has you.

Then pass it around.
 
I cringe every time a tweet is displayed in a TV news report or newspaper article.
 
I used to be a researcher (in surface engineering and coatings technology, nothing very interesting, though we had some amazing machines, but still...) and we were sent on courses to teach us about peer review and citation that basically told us: 'Yes, this is a thing, the second example more than the first - be very, very wary.'

OK, the big point here is: the sources you cite, or the journal you publish in, aren't the test of your model-of-how-things-work and your argument. The test is other people repeating your experiment in the real world, doing variations on it, seeing the same results that you get, and not being able to propose an obviously less complex model that makes seriously better predictions of further results. That's the function journals - and so, in part, citations - fulfil: communicating what you did so others can test it. Even if what you did stands scrutiny, don't be surprised if others push for revisions, for adding new limits to the use of your model (you should already be up-front about where its limits are, and it definitely does have them, whatever it is) and other changes that are less severe than totally knocking your work over.
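
As a toy sketch of that 'repeat the experiment' test (all numbers invented for illustration), the idea is just that independent repeats have to agree with the original claim within the measurement uncertainty:

```python
# Toy sketch (all numbers invented) of the replication test described above:
# a claim survives if independent repeats of the experiment agree with the
# original result within the measurement uncertainty.

def replicates(original, repeats, tolerance):
    """Crude check: does every independent repeat fall within tolerance of the claim?"""
    return all(abs(r - original) <= tolerance for r in repeats)

claimed = 9.81                               # value reported in the original paper
independent_runs = [9.79, 9.83, 9.80, 9.82]  # repeats by other labs
print(replicates(claimed, independent_runs, tolerance=0.05))  # True
print(replicates(claimed, [9.50], tolerance=0.05))            # False
```

Real replication assessment is of course statistical rather than a hard cut-off; this only illustrates the logic.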

If you just want the really important point about what makes scientific research useful to the world if done right, you can stop reading here. The rest is pretty boring, though important to me as an ex-researcher. That said....

Citing your sources isn't totally valueless. It's better than nothing. Citing your sources is, in theory, a good thing because it allows other people to check where you've got your ideas, approach, and contributing knowledge from. BUT: it doesn't, and shouldn't, by itself add much weight to your argument - it's just a sign of good intention, you might say. To the properly skeptical reader, again in theory, it's one of several means they should use to decide for themselves how much weight to give your argument - by checking at least the sources cited in support of any key, or possibly controversial, points. It's not that citing your sources should remove any level of skepticism from your audience; it's that not citing sources on an important or controversial point should immediately add two layers of skepticism.

A few specific points:

The reviewers for your paper should be kept anonymous from you.

In fact, to avoid people guessing who their reviewers are in a small field, reviewers should be selected at random, and may not even be someone in your field or a closely related one (obviously, this is its own problem). So you shouldn't risk making an enemy with an honest review.

Journals are anxious to preserve their reputation (because they often charge a bomb for full access), and there are organisations that act as watchdogs, so while duff reviewing and turning a blind eye (for various reasons) certainly takes place, it should stay at a low level.

Most research work is pretty dull 'attacking the boundaries of knowledge by gradual, mind-numbing attrition' stuff - it's just not interesting, or important, all by itself. Most editors won't see the title as an attention-grabber; most academics won't see it as a threat even if it disagrees with them.

Last: journalism is not good science. Good PR very definitely isn't. What sells copy, what sounds pithy, what is easily absorbed by every reader, and especially what gains popularity and is cool (spits), is not a sign of truth. Hence why I'm running on and on here. The biggest sh*tter is that 'what attracts funding' might not be either, unless the funding body is run by scientists who are still more scientist than manager or politician, despite being in charge of a funding body. 'Alien life discovered' as a front-page Guardian headline does not mean we can tick that question off as answered scientifically. When so much alien life has been 'discovered', over many years, that it barely merits a mention on page 10 - then we can tentatively tick it off.

All the above is, absolutely, full of gaps and flaws. People gossip, turn favours, get tribalistic about things, pull dirty tricks and walk right up to important red lines and stick their toes over - sometimes because their career, livelihood and the future of their lab are on the line, sometimes because their ego is enormous. Someone with the right contacts and influence who really wants to pull a fast one absolutely can, and stands at least a fair chance of getting away with it for some time.

I honestly don't have the skill or stomach to navigate through that stuff to success, which is why I left research. But, at its heart, the issue is that there is no system anyone could construct that would be totally immune to that stuff - except for the old fail-safe that, eventually, you have your claims tested against how the world itself actually works, at which point they either hold up or they don't - and anyone who carries out your experiment will be able to see for themselves whether they hold.

That last 'test against the world' thing... even that's good, in theory and in practice, for only a majority of stuff. But like all great theories it also has its own well-defined limits: what if your experiment uses the most powerful particle accelerator, or a detector that's one-or-two-of-a-kind, or makes use of an opportunity that only occurs every 250 years?
So, in the end, at least part of it comes down to having faith that the vast majority of researchers, on any given day, really do care about finding the truth (with the caveat that they probably care more about keeping their kids fed), and are awake not just to the (IMHO fairly small) possibility of outright deception, but to the much more common thing of honest mistakes - of theory, practice and personal philosophy - and that they are willing to look for, and speak out against, at least the obvious occurrences of them.

Sorry for going on and on but, once, many years ago now, this was my thing - and while I left, it remains close to my heart. Which you can tell, because I have just unashamedly written a pretty dull essay on the philosophy of science...
 
All I got from your 500 word essay is that you are pushing the 'Alien life discovered' narrative.

And I'm going to tell the world that is core to your entire career and belief system. --- That is the taking out of context fallacy.

 
Well of course that's true for me - I'm here on Earth, I've got tons of alien life to use as evidence to my fellow Slaugth back home. But it was you humans and your perspective I was talking about :D
 
Easy peasy. You believe that the cut-out is a cow. It isn't, therefore you are wrong. You don't believe there is a real cow behind the cut-out, therefore the fact of that real cow doesn't change your error, since your error is based on what you think the cut-out is, not what might be behind it.
 
I misremembered this example - there's a link to the correct version in one of my later posts above, but I'd left it too long to edit the original, apologies!
 
I really like this. Much of it matches my own experience as an amateur researcher, though probably in a field where there is much less at stake in the PR department.
 
I looked at the link. It's the same sophism. First example:

A fire has just been lit to roast some meat. The fire hasn’t started sending up any smoke, but the smell of the meat has attracted a cloud of insects. From a distance, an observer sees the dark swarm above the horizon and mistakes it for smoke. "There’s a fire burning at that spot," the distant observer says. Does the observer know that there is a fire burning in the distance?
The observer is mistaken in that he interprets the insects as smoke. Regardless of whether there is actually a fire there or not, he remains wrong in his judgement that what he sees is smoke. So he did not know that there was a fire. He would have to improve his observed data to the point where he could accurately identify everything happening above that spot before he could identify for certain that there is a fire beneath.

A desert traveller is searching for water. He sees, in the valley ahead, a shimmering blue expanse. Unfortunately, it’s a mirage. But fortunately, when he reaches the spot where there appeared to be water, there actually is water, hidden under a rock. Did the traveller know, as he stood on the hilltop hallucinating, that there was water ahead?
The traveller mistakes the mirage for water, therefore his judgement that there is an open pool of water is wrong, regardless of the fact that there is some hidden water.

The lesson from this is the necessity of collecting enough observational data to be sure of everything about the observed subject and not mistake something for something else. And know when you don't have enough data to be sure. And accept that you will get many things wrong anyhow.
 
This is only my personal opinion, but experience suggests to me that chasing after hard and absolute definitions for things is a mistake in any case. A clever and determined person could, I suspect, find an exception to any definition of knowledge.

My other favourite haunt online is cosmoquest forum, which is space and science themed. A periodic feature is someone fairly asking "I learned this definition of 'life' in school but viruses don't fit it. What definition of life are we going to use when looking for actual aliens!?" And it's a fair question, but having been there for a long time I've seen this about twice a year, and every time it ends with the same conclusion: no definition of life is going to be without exceptions - and that doesn't make having a definition valueless, just as having limits to its application doesn't make any given model of how the world around us works valueless.

I suppose I could label that the 'perfectionism fallacy' :D : thinking that, because you've shown a definition, model or idea has limits to its application, you've shown it valueless or entirely disproven.
 

Is the cow small, or far away?

 
>Citing your sources is, in theory, a good thing because it allows other people to check where you've got your ideas, approach, and contributing knowledge from
This isn't just in theory, it's the crux. As a historian I cite sources because other historians read my stuff and wherever they have a question, or wherever something sparks their interest, they can follow through. If I do not cite sources, then my work--regardless of how good or bad it is--becomes a dead end. It closes off the scholarly dialogue. I don't cite sources to show how great my argument is, I do it as a courtesy to others.

Also, not that anyone asked, here's the difference between citations (footnotes) and a bibliography. Citations show specifically where I got specific information. The bibliography shows everything I consulted, whether or not I used a specific work on a specific point. The bibliography provides the general reader all the information needed to follow threads. It also lets the specialist know more or less how current I am, and whether I've overlooked important sources.

So the scholarly thread goes something like this: reader reads along and sees something interesting or dubious. Huh, where'd he get that? Oh, there's a footnote, which provides a specific book and page number(s). But it's often a kind of shorthand, that citation, so reader flips to the bibliography for the full form and sees exactly the publisher, year, edition, etc., so they can go to exactly the same source I did. If those things are not provided, the reader can go no further than my own claims, or at least not without doing a ridiculous amount of additional research. So, courtesy and scholarly dialogue. (note: not every culture has the same values regarding this)
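
To make that footnote-to-bibliography chain concrete, here's a toy sketch (the author, work, and page numbers are all invented for illustration):

```python
# Toy sketch (names and data invented for illustration) of how a short-form
# footnote citation resolves through the bibliography to a fully traceable
# source, as described in the post above.

# Bibliography: full publication details, keyed by the shorthand form.
bibliography = {
    "Smith 2019": {
        "author": "Smith, Jane",
        "title": "Example Histories",
        "publisher": "Hypothetical University Press",
        "year": 2019,
        "edition": "2nd",
    },
}

# Footnote: shorthand pointing at specific pages of a specific work.
footnote = {"ref": "Smith 2019", "pages": "112-114"}

def resolve(footnote, bibliography):
    """Follow a citation from shorthand to the full bibliographic record."""
    entry = bibliography[footnote["ref"]]
    return (f'{entry["author"]}, {entry["title"]} '
            f'({entry["publisher"]}, {entry["year"]}), pp. {footnote["pages"]}')

print(resolve(footnote, bibliography))
# Smith, Jane, Example Histories (Hypothetical University Press, 2019), pp. 112-114
```

Note that the bibliography entry carries more detail (edition, publisher) than any single footnote needs, which is exactly the division of labour described above.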
 
