This is all fascinating. I am very glad to have discovered you.
This was fascinating. Thank you.
One point near the end I would question is whether intellectual “humility” is the correct framing/concept. You might enjoy this essay:
https://open.substack.com/pub/genagorlin/p/intellectual-humility-is-a-copout?r=abstg&utm_medium=ios&shareImageVariant=overlay
Interesting; thanks for the link! I saved it to read later and will get back to you.
Call it what you want, but it seems necessary to embrace the possibility that one is wrong.
Right on point. It would be interesting to research why people by default get so attached to their existing beliefs/hypotheses even when setting aside social incentives (social alignment/reputation/self-interest/identity/etc.). People seem to get emotionally attached to their priors, and it feels painful to discard them.
This is what we used to call Book Smart vs Street Cred. I'll take the advice of someone with street cred any day of the week! The Piled High and Deeps can keep their opinions/advice to themselves and live within the confines of their own little bubble.
Oh, this is good. These insights deserve wider currency in the conversation on intelligence, consciousness, social psychology, and old-school thick-descriptive cultural anthropology.
How, then, do you propose a good reasoning method? In particular:
1. What is a proper, more bulletproof methodology for determining when to engage reasoning? (As you said, if we tried to rigorously test literally every claim encountered every day, we would be entirely swallowed up by fact-checking.)
2. How do you know when you have enough context, and how do you make sure that the context itself is sampled in a way that is suitably "unbiased"?
Hello there Rickie, I hope you’re having a wonderful weekend.
Your notes have appeared on my feed the past few days, always worth pondering, thank you.
I thought you may enjoy this article of mine:
https://open.substack.com/pub/jordannuttall/p/diseases-plants-and-forgotten-medicine?r=4f55i2&utm_medium=ios&shareImageVariant=overlay
Fascinating. Can you point me to a rationality scale? I’d like to try this in my methods classes.
This is an insightful article that helps me to make sense of things I've observed on social media and in face-to-face conversations. I now have more to ponder and reflect on in my own thinking and writing. Thank you so much!
I'm wary of the idea (implication, really) that deductive reasoning is the crown jewel of reason. Structured reasoning (or maybe just structured thought) is what I'd push for that title instead; I think it's important to avoid being overly narrow about how we should think.
The problem I've found with enshrining deductive reasoning and the ability to build out abstractions is that there's a temptation to fall in love with the modeling itself: to get carried away with the cleverness of an elaborate construction whose idealized result maintains only a semblance of utility by working as a Procrustean bed.
A lot of this seems reasonable, but sorry, I'm a little dubious about your own fact checking and reasoning abilities after reading this paragraph:
> As documented by Sam Wineburg and Sarah McGrew in Evaluating Information: The Cornerstone of Civic Online Reasoning, fact-checkers begin by classifying the situation before engaging the content. They leave the page immediately, search for more context externally, and establish whether the source itself warrants attention. Only then do they decide whether close analysis is appropriate.
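Spelled out, the protocol described in that quote is essentially a triage routine. Here is a minimal sketch in Python of one way to read it; every name and heuristic below is my own illustration of the steps, not anything taken from the paper:

```python
from dataclasses import dataclass

@dataclass
class Page:
    url: str
    source: str    # the organization or person behind the page
    content: str

def read_laterally(source: str) -> list[str]:
    # Placeholder for the "leave the page" step: in practice this means
    # opening new tabs and searching for what *others* say about the source.
    return [f"independent coverage of {source}"]

def worth_close_reading(source: str, context: list[str]) -> bool:
    # Placeholder heuristic: only attend to sources whose identity and
    # track record can be established from independent context.
    return len(context) > 0

def evaluate(page: Page) -> str:
    # Classify the situation before engaging the content:
    # the first move is away from the page, not into it.
    context = read_laterally(page.source)
    # Establish whether the source itself warrants attention.
    if not worth_close_reading(page.source, context):
        return f"skip: could not establish {page.source}"
    # Only then is close analysis of the content appropriate.
    return f"close-read: {page.url}"

# Example:
print(evaluate(Page("https://example.org/claim", "Example Org", "...")))
```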
Did you actually read this paper before citing it? There are at least two problems:
1. It doesn't discuss fact checkers anywhere. The word "checker" only appears in a quote from a philosopher. The subjects are all students. Where did you get this claim from? Did you mix it up with some other study?
2. It uses an idiotic method that classifies nearly all students as "easily duped" even though they are doing everything correctly.
Specifically, the authors don't believe in evaluating an argument on its merits (as your paragraph makes clear, in fact). In their method the ENTIRE process of evaluating an argument involves looking at who is making it. If the speakers could be considered right wing in any way, then their method assumes the arguments are automatically false and claims anyone who didn't discard the source without reading it was "duped".
For example: they ask people to evaluate MinimumWage.com. It's a conservative website that argues against the minimum wage. "The site links to reputable sources like the New York Times" (lol), but "only 9% of high school students were able to see that it was a front group for a DC lobbyist". Their evidence that this website shouldn't be trusted is just a headline in Salon. If the students found the Salon headline and decided to ignore the website they had literally just been asked to evaluate, they passed, otherwise they failed. Explicit in their methodology: lobbyists can't ever be correct about anything, which is NOT a logical position to hold.
This is the kind of ludicrously low-IQ far left "research" that makes people laugh at the universities. It's from Stanford, which is supposed to be where the smart people hang out, yet the paper reads like a satire of academia.
Thanks, I pasted the wrong link; same authors. You could've simply Googled the study I referred to. Here's the link: Wineburg & McGrew (2017), "Lateral Reading: Reading Less and Learning More When Evaluating Digital Information."
https://stacks.stanford.edu/file/druid:yk133ht8603/Wineburg%20McGrew_Lateral%20Reading%20and%20the%20Nature%20of%20Expertise.pdf
I did Google it but the original linked paper _does_ talk about this topic. That's why I asked if you had made a mixup. It's not obvious there's a second paper by the same people that says substantially similar things.
Fixing the mixup doesn't change anything. The second paper uses the same methodology and has the same ideological corruption problems as the first. Their methodology for deciding whether something is correct or not is, once again, to look exclusively at who wrote it. When historians and students study the arguments made directly they are classified as having "fallen victim" to the sources, vs the fact checkers who immediately jumped to ad hominem fallacies and are thus considered to be more sophisticated, having reached "more warranted conclusions".
The worldview pushed by these papers is anti-rational. It's tribalist identity politics writ large; if you're part of the Democrat tribe anything you say is true, if you aren't then anything you say is false, and people should be trained to never actually listen to each other or think about anything said by anyone.
It's all just so tiresome and highly embarrassing for Stanford, if they were able to be embarrassed by the antics of their academics, which I doubt.
To be a little more fair (though your general thrust seems correct): based on my skimming, they asked participants to evaluate the trustworthiness of the sources on the specific topic, not the validity of the claims.
And the way you do that is by testing the veracity of claims within the source (if you aren't already familiar with it). Their preferred approach is just to look at funding sources, and assume if they dislike the funding source, the claims must be false.
I found this article very insightful, and it’s something I’ve thought about many times. Coming from an analytic philosophy background, this is one of the things I argue philosophy—at least in the analytic tradition—is especially useful for: it teaches people that many more things than they might initially think can be intellectualized. It teaches them to think much more often, and about many more topics, than they would naturally tend to.
People often have crazy takes about religion, politics, and similar topics not because they are low IQ (or at least not only for that reason), but mainly because they don’t realize that these topics can be subjected to careful thinking in much the same way that mathematics or engineering or other typical intellectual fields can.
Except when it does…
If you read the article, you’ll see that I explain that intelligence improves algorithmic reasoning, and one’s ability to solve problems once the problem has been identified. However, knowing when to apply reasoning in non-explicit scenarios is another matter.
Of course, intelligent people who work on improving their reasoning disposition and protocols will obviously surpass less intelligent people who do the same. But the fact remains that highly intelligent people are often blind to many biases, deceptive frames, etc, and can have a lot of confidence in fundamentally wrong, but internally consistent frameworks. This explains why many highly acclaimed scientists have fallen for “quack theories” later in life.
Is perceptual reasoning something of which you are aware?
I am not sure what you’re asking. By “perceptual reasoning” do you mean the usual definition, i.e., what IQ tests assess through matrix reasoning? If so, that’s on the algorithmic side of intelligence, and it improves formal problem solving, but not the separate, real-world/non-explicit reasoning abilities I’m referring to.
Or do you mean “reflectiveness,” as I referred to in my essay (which is a disposition that does improve reasoning)? If so, having a more reflective disposition can make one more aware of their own tendency towards reflection, but in general, people tend to overestimate how rational they are, and are not reliably aware of when they’re being reflective vs. when they’re reasoning within a given frame. And this gap between capacity, disposition and awareness is part of the problem I’m pointing out in this piece.
The first one.