Are we self-development nerds?
Quote from Lucio Buffalmano on July 1, 2022, 2:43 pm
Awesome post, LOF!
Yeah, I think we generally agree on the need for more structure and data. I equally love good data and science; I may just be warier about data and its possible pitfalls -and I believe that data analyzed by someone with experience on the subject and a good logical mind to make sense of it and put it in perspective is better than "data alone"-.
It's funny that I was thinking about a similar topic the other day, and it was:
How to defend yourself from conmen and charlatans?
The question, at the root, is the same one you ask: how to learn, and how to know who is worth learning from?
And I thought that the very first items in that list should be:
- Trust yourself more: many folks who fall for "bad teachers" have low trust in their own judgment so they're too ready to be swayed by the first guy who talks -and sells- with confidence
And the second one:
- At the same time, be open to the possibility that whoever is in front of you might be a genuine expert, maybe even a genius, as well as an idiot or a charlatan
So I go with your #3.
I personally am open to being influenced -even happy to be influenced- if the "teacher" proves himself worthy.
But I don't let anyone influence me beforehand. I find that to be the naive approach, and it cannot work in a world where the overlap between genuine experts and genuine characters amounts to a single-digit percentage.
And I see your point on the weaknesses of that approach.
But from personal experience, I haven't fallen for charlatans too often -the notable exception was the nutrition space, for a while. But I eventually corrected, so you're right that exposure can work against you; still, the more you learn, the more you grow your antennas-.
Quote from John Freeman on July 1, 2022, 4:13 pm
Quote from Lucio Buffalmano on July 1, 2022, 12:19 pm
A good approach to learning what works embraces science, but also must be fully aware of its limitations.
Exactly. I'll give an example from my field.
As you know, the end-all-be-all in medicine is the meta-analysis. The conclusions drawn from them become the gold standard.
Example (fake): the meta-analysis draws the conclusion that when you give Paracetamol to children between 3 and 7 years old, the pain is reduced by 5 points out of 10.
Most of my colleagues will then take this result as it is. That means as an absolute truth.
What they forget is that it's an average. That means that behind this simple sentence there is a distribution of patients: some did not respond at all, for some it relieved the pain completely, and for some it helped just a bit.
So that shows that even the most statistically powerful studies are inherently flawed, because of individual cases and because the complexity cannot always be grasped/rendered by such studies.
All of these analyses are simplifications of reality, because reality is too complex.
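To make the point concrete, here is a toy sketch (all numbers invented, not real trial data) of how the same average can hide completely different groups of patients:

```python
# Toy illustration with invented numbers: three groups of children with very
# different responses to the drug, all hidden behind one "average reduction".
non_responders     = [0, 0, 1, 0, 1]      # essentially no pain reduction
partial_responders = [4, 5, 5, 6, 4]      # moderate reduction
full_responders    = [9, 10, 9, 10, 10]   # almost complete relief

all_patients = non_responders + partial_responders + full_responders
average = sum(all_patients) / len(all_patients)
print(f"Average pain reduction: {average:.1f} / 10")  # ~4.9, i.e. "about 5 points"
# The single average says nothing about the third of patients who got no benefit.
```

The headline number is the same, but the clinical reality behind it is three very different stories.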
Quote from Kavalier on July 1, 2022, 9:17 pm
Quote from leaderoffun on July 1, 2022, 11:51 am
Not picking on you Kavalier, everyone does this all the time, we humans are like this
Leaderoffun, this was power movy, yes, but it is okay. We are all learning here. Let's just brush this aside and concentrate on your awesome topic
Quote from Lucio Buffalmano on July 1, 2022, 12:19 pm
A good approach to learning what works embraces science, but also must be fully aware of its limitations.
And those limitations are important -in the social sciences, especially-.
I fully agree with Lucio.
There are some things that are just not quantifiable. And some that are quantifiable, but quantifying them doesn't help. As Lucio says, this is characteristic of almost every social science (Economics is the exception - somewhat). The reasons are many:
- You can't really observe an interaction without influencing the very interaction
- The observer may strive not to be biased - but he is, like it or not
- The number of variables to observe is insane, and they are culturally/context dependent.
- With big data analysis we are making strides into this - but the application will also be context dependent. "Big" data might explain "big" phenomena, but can be very misleading when applied to specific situations (the old "average human with one breast and one testicle" adage). And in TPM we analyse the micro-level.
- Take the Persian ta'rof, for example. This is a comic (absurd) example, of course. This would never happen in traffic and the situation would never escalate like this (but also notice that there is a limit, and if you overdo it, people may start getting pissed off). But people are expected to say "please, after you" and the other one is expected to reply "no, please, you go first" (perhaps TPM would advise us not to engage in this kind of power struggle, but doing neither would be a big faux pas). If the guy in blue were to meet the same persons in future interactions, he would be power down because he gave up the ritual altogether (the "to hell with this" was not at all relevant).
- It's difficult to choose parameters to measure. Let's leave the social sciences for a while and use a more visual example:
- Let's say you are into judo. Say a researcher quantifies all the fights in the Olympics in the last 100 years and concludes that 70% of the fights end in a technique called Uchi-mata. So, does that mean that you should always strive to apply an Uchi-mata to your opponent? All the time, regardless of what your opponent gives you? And forget about everything else? And that the Uchi-mata is always going to work? There were countless (emphasis on countless) very subtle steps and attempted techniques before that prepared the terrain for that Uchi-mata to be effective. And the opponent might be guarded against that Uchi-mata, so it wouldn't work. Overfocus on that Uchi-mata and you lose an opportunity for that less impressive Ashi-barai that might have won you the fight.
Now, back to your example. The fact is that there is a long string of events in an interaction (let's not even say relationship). Delimiting a portion of an interaction to call it a "success" or "failure" is always arbitrary (back to the bias). That was a very small part (nano level) of an interaction, therefore statistics doesn't help us here. It's just the pull on the sleeve that might (or might not) lead to an Uchi-mata. The pull may work, or it may not - there are defenses against this. Or perhaps the same pull works, but you were better positioned for a Harai-goshi and the Uchi-mata fails.
How do you know it works? You have to either do the judo or believe the guys who do the judo themselves, there is no way out. It's impossible to remove the "trust" element of any science completely (I know for sure the Earth is round, but I didn't arrive at this conclusion by my own observations. Or perhaps I shouldn't be so sure. It's called a "planet" after all, not a "roundet").
In conclusion, if we leave the replication of experiments to the experts (doing it ourselves is not practical - just imagine if we had to reinvent the wheel each time we wanted to travel, that would be insane), then in order to have a better shot at evaluating results, we are limited to:
- Taking time to understand the methodology
- Comparing the result to one's past experience/knowledge
- Evaluating whether the methodology is sound in the light of the accumulated knowledge
- Taking into account the status of the individual who did the research in the scientific community
Quote from leaderoffun on July 1, 2022, 10:05 pm
this was power movy, yes, but it is okay. We are all learning here. Let's just brush this aside and concentrate on your awesome topic
My apologies Kavalier, now that you point it out I see it too. Not intended to put you down, you are writing awesome, insightful posts.
It's difficult to choose parameters to measure
It is, but if we believe Hubbard in his 'How to Measure Anything' book, this is often true of real-life phenomena and shouldn't stop us from trying to measure. Especially uncertainty and risk. And most social decisions are decisions under uncertainty (that is, you don't know which options you have).
We can say, as in your Judo example, that each situation is unique and we cannot generalize or 'count'. That the 'finish by Uchi-mata' is a silly thing to look at, or to learn. But that's not conducive to learning! And what Lucio has done with TPM and PU is to abstract away some rules from some kind of subjective counting. His mind (incredibly perceptive and well trained) has picked up patterns, put them into buckets, and counted them somehow. Not explicitly. Before he comes up with a generalization, he must have seen the behavior multiple times, and somehow kept track (count).
In your Judo example, and I suspect this is a good example because in sports nowadays pros analyze even the tiniest detail: the 'machine' will have counted perhaps 1000s of behaviors, and found patterns far more detailed than 'the Uchi-mata finished the fight.'
My example of Tinder opening lines is kind of an outlier because it's so simple and the context is fairly replicable: once the conversation moves on, there are more variables in play and measuring gets harder. To the point of looking impossible.
What I'm saying is that just because it's difficult we shouldn't stop trying. I've read 1000s of experimental psych articles (not on social psych, but still very messy). There's a control group and far more control than we'll ever have 'in the wild.' And even there it's plenty hard to come up with solid conclusions. 'More research is needed' seems to be the concluding point of almost every paper (which is frustrating). And the actual actionable advice that they provide is tiny, because in an effort to maximize experimental control they reduce external validity (how applicable the conclusions are!). And even there they had a replicability crisis. Most of the stuff from Danny Kahneman's group (the 'heuristics and biases' school) doesn't replicate for shit. And he got a Nobel Prize for his research.
We can do better. We should do better. We have a far stronger starting point in PU. Someone (Lucio) has made a tremendous effort to systematize this. To create a new discipline that is better than the entire body of social psych to date for us people 'in the trenches.'
How to do that is something for another day. If there are a few hundred of us, we could just aggregate data. Look at what the guys at prediction markets are doing. It may look like that, or more like citizen science. The bar is so low that we could beat it with a half-assed approach of... just counting 🙂
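Just to sketch what that 'just counting' could look like in practice (technique names and tallies below are invented, only to show the shape of the data):

```python
# Minimal sketch of "just counting": members report whether a technique worked
# when they tried it, and we aggregate into a success rate with a rough
# uncertainty band. All names and numbers are invented for illustration.
from math import sqrt

reports = {
    # technique: (times_it_worked, times_tried)
    "going_meta": (14, 20),
    "frame_ignore": (9, 10),
    "direct_callout": (4, 12),
}

for technique, (wins, tries) in reports.items():
    p = wins / tries
    se = sqrt(p * (1 - p) / tries)  # standard error of a proportion
    low, high = max(0.0, p - 1.96 * se), min(1.0, p + 1.96 * se)
    print(f"{technique}: {p:.0%} success, rough 95% band {low:.0%}-{high:.0%} (n={tries})")
```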
Quote from Kavalier on July 1, 2022, 10:10 pm
Quote from leaderoffun on July 1, 2022, 10:05 pm
My apologies Kavalier, now that you point it out I see it too. Not intended to put you down, you are writing awesome, insightful posts.
That's all cool, man 😀
Just read your post, it's awesome. I'll continue the discussion when I have more time to reply!
Quote from Lucio Buffalmano on July 2, 2022, 8:40 am
Quote from John Freeman on July 1, 2022, 4:13 pm
Quote from Lucio Buffalmano on July 1, 2022, 12:19 pm
A good approach to learning what works embraces science, but also must be fully aware of its limitations.
Exactly. I'll give an example from my field.
As you know, the end-all-be-all in medicine is the meta-analysis. The conclusions drawn from them become the gold standard.
Example (fake): the meta-analysis draws the conclusion that when you give Paracetamol to children between 3 and 7 years old, the pain is reduced by 5 points out of 10.
Most of my colleagues will then take this result as it is. That means as an absolute truth.
What they forget is that it's an average. That means that behind this simple sentence there is a distribution of patients: some did not respond at all, for some it relieved the pain completely, and for some it helped just a bit.
So that shows that even the most statistically powerful studies are inherently flawed, because of individual cases and because the complexity cannot always be grasped/rendered by such studies.
All of these analyses are simplifications of reality, because reality is too complex.
Great point.
The "average" is one of the important limitations of various studies.
Averages VS "Social Glass Ceilings"
In the social sciences / social skills there's also something we referred to here as "glass ceiling behavior".
That is, behavior that is successful only up to a certain level -for example: blatant manipulation and blatant power moves, which succeed with clueless folks, but are self-harming with more power-aware folks-.
Good studies should be set up to take that into account: who is performing which action, and with whom?
I've never seen anything like that: no study accounts for "personal value", "people's intelligence", "station in life", etc. etc.
Example: data "supports" abusive control strategies working. But it doesn't really say with WHOM it works
That's how you also end up with the laundry list of "dark strategies" to control people that include abuse and demeaning.
But I haven't seen a single study that differentiated the victims in terms of their sexual market value, how many other options they had, and how emotionally adjusted they personally were.
And when I read books on the subject, that's exactly what I noticed: the victims weren't very high-quality.
Which is what logic also supports: value-taking approaches work better with those who have fewer options, are more naive, and are emotionally less healthy (otherwise, all else being equal, they prefer value-giving men).
But to understand that, you need to read between the lines, beyond the data.
The simplest case: Tinder openers still fail with data alone
Take the simplest possible case to study that LOF mentioned: a Tinder opener.
Even openers present various "levels".
Inviting a woman to drinks as a first message will work wonders for guys with a good profile, high in both value and warmth, set up for fast interactions (i.e., a description that says "too busy for long chats"), swiping in good sexual marketplaces, and looking for hookups.
But it will work a lot less for guys swiping in more conservative and risk-averse sexual marketplaces, and looking for something long-term.
So you end up with an average of what texts get a reply -in itself not such a meaningful measure, BTW-, but it's not necessarily what will work best for you.
That study would still be HUGELY helpful, I totally agree with LOF (and BTW, I would LOVE to set up this forum one day to include "believability-weighted opinions").
But not in isolation.
That's where experience and a logical brain that can interpret the data step in and make all the difference.
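A toy example of that (all numbers invented): the opener that "wins" on average can still be the wrong choice for your own segment.

```python
# Invented reply-rate data for two openers, split by the sender's segment.
# The overall average favors the direct invite, but mostly because
# strong profiles in hookup-friendly markets use it more often.
data = {
    # (opener, segment): (replies, messages_sent)
    ("drinks_invite", "strong_profile_hookup_market"): (45, 100),
    ("drinks_invite", "average_profile_ltr_market"):   (2, 20),
    ("light_chat",    "strong_profile_hookup_market"): (8, 20),
    ("light_chat",    "average_profile_ltr_market"):   (25, 100),
}

for opener in ("drinks_invite", "light_chat"):
    replies = sum(r for (o, _), (r, n) in data.items() if o == opener)
    sent = sum(n for (o, _), (r, n) in data.items() if o == opener)
    print(f"{opener}: overall reply rate {replies / sent:.0%}")

# Overall: drinks_invite ~39% vs light_chat ~28%.
# But for the average-profile guy in a long-term-oriented market:
# drinks_invite = 10% vs light_chat = 25% -- the average points him the wrong way.
```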
That being said, despite all the limitations, I think we all agree that science can be super helpful.
It's just about how to make the most out of it.
So:
Quote from leaderoffun on July 1, 2022, 10:05 pm
We can do better. We should do better. We have a far stronger starting point in PU. Someone (Lucio) has made a tremendous effort to systematize this. To create a new discipline that is better than the entire body of social psych to date for us people 'in the trenches.'
How to do that is something for another day. If there are a few hundred of us, we could just aggregate data. Look at what the guys at prediction markets are doing. It may look like that, or more like citizen science. The bar is so low that we could beat it with a half-assed approach of... just counting 🙂
Hell yeah, I'm interested 🙂
Quote from Kavalier on July 2, 2022, 9:13 pm
Leaderoffun, what you say makes a lot of sense.
But let me use another example (brace yourselves, this is going to be boring)
The good old game of chess!
Today chess is a game that has been measured and remeasured from every possible angle. If one decides to try to learn how to play chess today, one will have access to huge databases with hundreds of thousands of games that will show you what moves are the best, which ones are blunders and which ones are just okay. Chess computers have evolved to the point that they can shred reigning human grandmasters.
Still.... what they have done so far is mostly to confirm old knowledge.
Really, there is nothing revolutionary. If you access one of these databases today, you'll see that most of the first moves have names - usually names that pay homage to famous players, most of whom died a long, long time ago, sometimes centuries ago, and who either invented those moves or were famous for having played them. They are fantastic tools - they are fun, interactive, great for analysing your opponent's history before a match, great for helping you spot moves you didn't see before (after a match). Still, even when you spot a move you didn't see, it's easily explained by old concepts. And it's very unlikely that you are ever going to be in a situation where you can use that move again.
This knowledge repository was accumulated not by means of meticulously measuring and quantifying.
People came up with this by playing, looking at what worked and didn't work, publishing books with their favorite moves, tearing each other apart in heated debates in journals (the forums of the past). Not unlike what we do in these forums. And even if today you have access to billions of games to watch, one would still be advised to ignore them and use one's valuable time to study the games of the masters (you can access them easily in these databases, of course. But they are also in books, so you don't strictly need the databases).
The benefits of quantification in chess are not that big
Your Tinder study is very interesting, and I'm sure it has immeasurable value to Tinder users and students of human interactions alike. Openings and endings are the easiest part to quantify and catalogue, though – in chess we have whole encyclopedias of openings. And most of them are interchangeable. They are equally effective because until the midgame is established, the pair is doing little more than establishing the game they're going to play: the Giuoco Piano, the Sicilian, the French, the Four Knights, etc. If you check one such database, you may see that everyone's favorite opening is e4, and that it leads to slightly more wins. But if you know your opponent is a strong e4 player, then you might want to open d4. Now he has to adapt to your game, and it's going to be a very different game. But it's very, very unlikely that either an e4 or d4 opening is going to be the determining factor in a game.
This doesn't mean that everything works (don't ever try the Bongcloud, please – but if you do, nothing is doomed. Strong players can still open like this and crush a less experienced opponent. This shows that there is plenty of room to recover even afterwards). But it does mean that a huge number of things work.
It's easy to quantify chess. Human behaviour, though... difficult and dangerous
Chess is an easily quantified game. It's played on an 8x8 board, with half that number of pieces, which can only move in certain ways. Human interactions are incomparably more complicated than that. Also, chess databases are easily fed: people just have to use them to play. And chess notation makes it very easy to put old games into the system. In order to measure human interactions with nearly the same accuracy and effectiveness you can get from a chess game, we would need huge amounts of data – data that is difficult to collect (it requires constant surveillance) and to analyse, that people are often unwilling to give (there is a fringe but growing number of people who resist smartphones and use alternative web browsers and search engines, legislative bodies are creating legal barriers, and the technological race may go in a direction that precludes a monopoly on data), and even if you are successful, that success comes with dangerous consequences: it would be the wet dream of totalitarian dictatorships, after all.
With this I mean: quantifying, measuring, and ascribing numbers might be useful, but they are not make-or-break when it comes to the validity of knowledge, and they come with their own consequences.
Quote from Kavalier on July 2, 2022, 9:24 pm
Lucio, I've arrived at the conclusion that, yes, we are nerds 🙂 LOL
Quote from leaderoffun on July 4, 2022, 8:36 am
Really good point about chess, Kavalier. Chess was the domain of choice for Newell and Simon, who wrote 'the' book on 'human problem solving' in 1972. Unfortunately the book didn't have much of an effect in applied settings and the field itself kinda died off. The reason: low external validity of anything you conclude from studying chess when applied to any other problem. Others tried to use 'real world problems', but the consequence is that they gave up control and could only write books, not peer-reviewed papers in prestigious journals (example: Gary Klein with 'Naturalistic decision making').
So the problem we are dealing with is not new, and not small 🙂 It's the core problem of social sciences, even the core problem of the more 'hard science' part of social sciences (cogSci).
This may sound like an attack on everything we believe, so continue reading with an antifragile ego and note that your mind will want to reject this idea. The more attached you are to your beliefs, the more your mind will resist testing them. What if they are wrong? Gulp!
The idea that we floated before is that we should test our beliefs, including those on TPM material.
I just invented an acronym:
NEEOTIIA: no evidence of efficacy other than its intuitive appeal.
IME, most of my beliefs (I could even say every belief) I hold dearly entered my identity with either very little evidence of efficacy (it worked once!) or full-on NEEOTIIA.
This is so because in business and 'life' you don't have the luxury of multiple trials or big samples. Most situations where you make a decision only occur once, and then you learn.
Very often you make a business decision for an option with NEEOTIIA. I've changed continents on an idea with NEEOTIIA. Twice.
I'm not alone. The CIA and all other intelligence agencies combined produced a report saying they were sure there were WMD in Iraq. Trillions of USD and 1000s of lives later, the WMD were not found. That's one decision with high stakes and it was made poorly.
Who you choose as a life partner, where you live (country), what you do for work... all crucial decisions that we don't really make carefully. We have entire 'stories' in our head on why we made the right decision, but the reality is that we did them under NEEOTIIA.
While I love, love the wisdom I see condensed in PU... it has NEEOTIIA. I really wonder if we should treat it differently this time? Just because this time it's written down, and there are many people reading it. We don't have to live with NEEOTIIA.
The closest thing we have to evidence is the YouTube videos that Lucio dissects. These are applications of 'the law' that he's trying to teach.
This is a very, VERY long shot, but how about this...
For every 'law' in PU (there are 1000s of those!), we try:
- to specify the input and output clearly. Something like 'if they say X, you say Y, outcome should be Z'. Or 'in context C, the eagle way is to do D. Outcome will be Z'
- We may even assign a probability to how sure we are that outcome would be Z.
- We find youtube videos where a person is in context C. Observe if they did action Y and got outcome Z.
- We count how many times this happens
We are assuming that YouTube videos are representative of reality... which they are not. People act for YouTube.
But YouTube, being cheap to produce, is the closest thing we have to 'bottled reality', where we can rewind and play it back to see whether something happened or not.
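As a very rough sketch of how each such check could be logged and scored (all the 'law' names, probabilities, and outcomes below are placeholders, not real data):

```python
# Sketch of logging predictions derived from PU "laws" against observed video
# cases, then scoring them. Every entry below is a placeholder.
cases = [
    # (law, stated_probability_of_outcome_Z, outcome_Z_observed)
    ("meta_shames_power_mover", 0.80, True),
    ("meta_shames_power_mover", 0.80, False),
    ("judge_frame_gains_status", 0.70, True),
]

hit_rate = sum(1 for _, _, observed in cases if observed) / len(cases)

# Brier score: mean squared gap between stated confidence and what actually
# happened (lower is better; always predicting 50% scores 0.25).
brier = sum((p - (1.0 if obs else 0.0)) ** 2 for _, p, obs in cases) / len(cases)

print(f"Hit rate: {hit_rate:.0%}, Brier score: {brier:.2f}")
```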
What do we gain by accepting that wisdom from PU has NEEOTIIA? After the shock, reverting to a mindset of 'I need to try this before I add it to my identity' could give us a manual for life decisions. How powerful is that? This is what both academic psychology and the self-help industry have been trying (and failing) to produce for centuries!
EDIT: fixed the acronym. Twice lol
Quote from Lucio Buffalmano on July 4, 2022, 10:36 am
Quote from Kavalier on July 2, 2022, 9:24 pm
Lucio, I've arrived at the conclusion that, yes, we are nerds 🙂 LOL
OK, we might 😀
Quote from leaderoffun on July 4, 2022, 8:36 am
For every 'law' in PU (there are 1000s of those!), we try:
- to specify the input and output clearly. Something like 'if they say X, you say Y, outcome should be Z'. Or 'in context C, the eagle way is to do D. Outcome will be Z'
- We may even assign a probability to how sure we are that outcome would be Z.
- We find youtube videos where a person is in context C. Observe if they did action Y and got outcome Z.
- We count how many times this happens
We are assuming that YouTube videos are representative of reality... which they are not. People act for YouTube.
But YouTube, being cheap to produce, is the closest thing we have to 'bottled reality', where we can rewind and play it back to see whether something happened or not.
What do we gain by accepting that wisdom from PU has NEEOTIIA? After the shock, reverting to a mindset of 'I need to try this before I add it to my identity' could give us a manual for life decisions. How powerful is that? This is what both academic psychology and the self-help industry have been trying (and failing) to produce for centuries!
EDIT: fixed the acronym. Twice lol
Still loving the idea.
But to make sure we'd be doing it well, I need to address the added complexity.
If it's not addressed, I worry we'd end up with that naive empiricism pitfall.
Pitfalls to be addressed
- Exceptions are expected: PU has "principles" and "high-level strategies" more than "laws" or "rules"
It's stated, normal, and expected that there will be plenty of exceptions.
How can you be sure, without experienced folks judging each case study, that the principle wasn't properly applied, or that the situation was simply an exception?
- Techniques backfiring when misapplied are expected: PU has many techniques; those also aren't rules, and different techniques apply better to certain situations, and less to others
Same as before:
How can you be sure, without experienced folks judging each case study, that the technique wasn't properly applied (rather than the technique not being effective)?
Who's going to count whether one example counts as "against" or as "exception" (or counts as "proving the rule", or doesn't count at all)?
Example with Hunter who tried meta 3 times and it failed 3 times on him.
I've done it several times instead, and it worked great most of the time.
I took a look at his case study, and to me it was obvious that it was poorly applied, and in the worst possible situation.
How would you count Hunter's example?
As a failure of meta?
Because to me that's a failure of, one, understanding the technique and, two, gaining that "social intuition" for when it works (or, shall I say: my failure to properly explain it; I have since amended the lesson, BTW).
For Shared Meaning: Rules VS Skills
I want to clarify this important concept that may not be clear.
And it's this:
One of PU's main goals is to help grow expertise and social intuition (not only to share strategies and techniques).
So that you don't need to rely on laws or rules because rules that always apply don't exist.
Instead, the goal is to "learn fishing", so you can adapt and strategize, naturally and in (more or less) real time, to reach your goals.
That being said, I'm still excited about how this may work.
What Would It Look Like? Let's Do An Example
Do you have an example in mind of how this would look?
I'll go first and see if I understand:
- Power Move: If someone pulls a power move
- Response: And one goes meta on him...
- Caveat: Provided that he does it well and "wins"...
- Result 1 (on the attacker): The power mover will be shamed, as evidenced by either:
- apologizing
- self-soothing body language
- submissive body language
- Result 2 (on the bystanders): the people around will side with the defender who effectively went meta, as evidenced by:
- Clapping
- Nodding
- Laughter
- Signs of approval such as "wow", "ouch", "that was brutal"
- Result 3 (on the defender): the defender will gain status and power, as evidenced by:
- proud body language
- more speaking time
- fewer interruptions going forward
- decreased attacks or hostility from the now disempowered attacker
Something like that?
What do you think?
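And as a minimal sketch of how such case studies could be recorded and tallied so the counting stays comparable (every value below is hypothetical, just to show the shape):

```python
# Hypothetical case-study records in the format above, tallied per evidence marker.
case_studies = [
    {
        "power_move": "one-up joke at the defender's expense",
        "response": "went meta",
        "well_executed": True,           # judged by experienced reviewers
        "attacker_shamed": True,         # apology / self-soothing body language
        "bystanders_sided": True,        # laughter, nodding, "ouch"
        "defender_gained_status": True,  # more speaking time, fewer interruptions
    },
    {
        "power_move": "public order barked at the defender",
        "response": "went meta",
        "well_executed": False,          # misapplied, like the Hunter example
        "attacker_shamed": False,
        "bystanders_sided": False,
        "defender_gained_status": False,
    },
]

# Count only well-executed cases, so misapplications aren't tallied as
# failures of the technique itself.
valid = [c for c in case_studies if c["well_executed"]]
for marker in ("attacker_shamed", "bystanders_sided", "defender_gained_status"):
    rate = sum(c[marker] for c in valid) / len(valid)
    print(f"{marker}: {rate:.0%} of well-executed cases (n={len(valid)})")
```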