Tali Sharot: Overcoming The Optimism Bias
Are humans wired to deal with the kinds of existential threats facing society today?
This is not just an intellectually interesting metaphysical question. The answer may well determine whether our species makes it to the next century or not.
As PeakProsperity.com readers know well, humans’ relentless pursuit of ‘ever more’ growth is finally slamming into the limits of a finite planet.
The energy and natural resources necessary to continue expanding the global economy are becoming scarcer and more expensive to extract. And yet the global population continues to grow, now expected to hit 9.7 billion (or higher) by 2050.
Evolutionarily wired for immediate, visible threats (like a snarling lion ready to pounce), can we realistically expect our species to proactively pull together to deal with long-term, faceless emergencies like overindebtedness, Peak Oil, climate change, and overpopulation?
And if it’s possible we can, how can we help others wake up to these threats? How can we break through their existing belief system that “everything is fine”?
In this week’s podcast, Chris Martenson poses these important questions to Dr. Tali Sharot, professor of cognitive neuroscience in the Department of Experimental Psychology at University College London. Dr. Sharot is known for her research on the neural basis of emotion, decision making, and optimism. Her 2012 Ted Talk on the optimism bias received over 2.3 million views.
Optimism is good for many reasons. But it can be quite negative, too, if you’re underestimating risk.
If we underestimate risk, then we're less likely to take precautionary actions. So if you think, "Well, I'm going to be so healthy" that you don't even go to medical screenings, or you don't buy insurance when you actually should, that's a problem. You don't prepare for the worst-case scenario; that's a problem.
We have an avoid/approach instinct where we approach the good things, whether it's chocolate cake or money or love. We move forward, we approach. If you see a photo of a smiling person, you approach.
When we see the bad stuff, we try to go back. We avoid it, whether it’s poison or someone frowning or any kind of danger — we just stay away.
So in order to get to action on big threats, we need to reframe the problem not as: we're going into extinction, we're going into a catastrophe, we need to do something now to avoid it. But more like: what can we do to make our planet as good as it's ever been?
What can we do to protect it and make it a place that is flourishing?
We need to insert a positive message rather than a negative one in order for people to want to approach, take action, and be involved. Otherwise people say, "I don't want to think about this bad stuff because it's not something nice to think about."
And I think that’s something that hasn’t been done so much. The focus has mostly been on trying to scare and trying to cause fear, rather than trying to enhance the sense that we can create something that’s great for future generations.
Click the play button below to listen to Chris’ interview with Tali Sharot (50m:21s).
The following is a transcript of recorded content. Please note, these transcripts are not always perfect and may contain typos.
Chris Martenson: Welcome, everyone, to the PeakProsperity.com Featured Voices podcast. It is July 16, 2019. I am your host, Dr. Chris Martenson.
In a recent piece titled Do You Have Free Will?, I wrote about all of the data piling up showing how we tend to behave as a function of both nature and nurture. Understanding how we're wired to respond to various stimuli, data, or information is especially important today. This is a vital question to explore, of course, for those of us interested in introducing ideas about reality that might be troubling in nature, let's say.
So we need to know: are humans, on average, wired to receive the kind of information that we think needs to be out in the world, or are we wired against it? And if so, is there anything we can do about it?
Well, today’s guest is an absolute center of the bowling alley strike on this one. Dr. Tali Sharot is a professor of cognitive neuroscience in the Department of Experimental Psychology at University College London. She received her PhD in Psychology and Neuroscience from New York University. Dr. Sharot is known for her research on the neural basis of emotion, decision making, and optimism.
Her book, The Influential Mind: What the Brain Reveals About the Power to Change Others, highlights the critical role of emotion in influence and the weakness of data. A perfect guest for today. I'm especially interested in that topic, of course, as someone who is constantly trying to reach people, oftentimes using data.
Her 2012 Ted Talk on the optimism bias has received over 2.3 million views. Welcome to the program, Dr. Sharot.
Tali Sharot: Thank you for having me. Glad we could join.
Chris Martenson: Dr. Sharot, tell us just a little background. How did you get interested in science and then gravitate towards your current area of expertise?
Tali Sharot: I was really interested in people, which I think is quite natural. I was just interested in why people do what they do, and why do I do what I do? And because every kind of action or thought or feeling that anyone has really is triggered by the brain. It’s generated by the brain, and so to understand human behavior, understanding the brain seemed the path to go. So that was kind of the reasoning or the motivation.
Chris Martenson: Right up front. What are the chances that I'm a rational individual operating with complete free will?
Tali Sharot: Zero.
Chris Martenson: [Laughs] You seem so confident. I want to talk about this because I think there's a concept out there among a lot of people that we're rational beings. Of course, economics is founded, maybe in a flawed fashion, on the idea that we are rational actors. But everything I've done and studied, and the more I've gone down this path of looking at how humans are actually wired up, we come with a lot of preconditioning, a lot of wiring that nature's bestowed upon us. And the understanding seems to pile up that maybe we're not as rational and as full of free will as we thought.
Let’s go into that a bit. What is your expertise – sorry, I jumped ahead a question. What is it in human wiring, Dr. Sharot, that pushes us away from being rational?
Tali Sharot: So really, you know, I have to kind of step back for a second. When I said you’re not rational, I was kind of jumping ahead and gave a simple answer. It’s not that simple. It really depends on what you mean or what a person means when they say rational. Because, you know, in economics when people say people are not rational, it usually means that they are making decisions that are not necessarily going to gain them the most monetary rewards.
But, in fact, all these kinds of instances of biases and heuristics and decisions that seem irrational, if you look at them from a different angle, could be rational.
So there are two things to consider. First of all, what is it that you're trying to maximize? Because economists usually think, well, we're trying to maximize material outcomes. And if you look at it from that angle, then yes, we're not rational.
But from the point of view of a psychologist, if what we're trying to maximize is, for example, affective wellbeing, emotional wellbeing, then many times these decisions that seem irrational are actually rational, because we're trying to minimize anxiety, for example, or maximize anticipation. And the cost may be material, but that's fine. That's a decision that we make.
And the other thing to keep in mind is that those behaviors that seem irrational arise because we're using rules that are very good rules for updating our beliefs or making decisions, but they're not, of course, going to give us the best decision or the best outcome 100 percent of the time. They're good rules to use because they're beneficial 80 percent of the time, but 20 percent of the time they aren't the best thing to do. And behavioral economics focuses on that 20 percent.
So, I have to step back and say our brain and our minds are actually quite extraordinary, and it’s quite amazing that we can make so many decisions, you know, in just a single day and get along in this world relatively well. And we do it because the brain is set up in a way that usually gets you where you’re going and gets you to the best decisions.
So, we really shouldn't look at ourselves as, oh, we're just making stupid decisions all the time. We should look at our behavior and ask, "Well, why are we making these decisions that, from our angle, don't seem the best ones?" Are we making them because we don't really understand what we really want? Maybe we're making them because we think we want one thing, but really it's another thing that we're trying to get? Is it because we're using a rule that's quite good, but not good for this instance? And that's okay.
I didn’t answer your question, but what was the question now?
Chris Martenson: I want to get at this idea of how we’re wired up and so that we can understand – I truly believe that if we can become conscious and knowledgeable about the things that are actually driving us, that we have opportunities then to be aware of those, and sometimes they’re very helpful and sometimes they’re pitfalls for us. And so knowing when is which would be a good place to start.
One of my sayings is humans are not rational, we’re rationalizers. And I do this all the time. If I become emotionally attached to a car and I want the car, I’ll come up with all the rational reasons. Look at its safety. Look at its mileage ratings. Look at all this stuff. But the truth is, the decision got made somewhere deeper in my limbic system, I think. It was made emotionally.
And so that's the part I'm trying to tease apart: where are we rational, and where are we making decisions driven by some other processes that are a bit more subterranean? Not invalid; I'm not here to pass judgments of good or bad. I just want to understand how people make decisions, with the idea that maybe we could influence those decisions.
Tali Sharot: We certainly can influence those decisions in many ways. But first, the idea that we have two systems in our brain is a good metaphor, but it's not actually how the brain works.
And so what we call the emotional system and what we call our rational brain are not actually two systems. These regions interact with each other all the time. They actually create subsystems together. And each of these so-called systems, like the emotional system, can in fact do very sophisticated calculations, and vice versa.
The system that neuroscience may call the rational system, the frontal cortex, can actually produce lots of biases. So we shouldn't look at it as two separate systems, with one more biased than the other. All regions contribute to our decisions, and these different regions, or collections of regions, can do very sophisticated calculations and can also use very superficial rules to make decisions.
And our emotions are there for very, very, very good reasons. Emotions are very important for our decisions because emotions convey a lot of information that we can and should use for making decisions.
So if you're making decisions because of what you might call an emotion, that's not necessarily a bad thing. Why would our brain be given this thing we call emotion? It's given emotion in order to help us survive, because emotion tells us what's good, what's bad, how urgent an action should be. So it's not something that should be frowned upon and treated as something we shouldn't include in our decision-making process.
In fact, people who can't experience emotion, or who have impairments in those regions of the brain, have a very hard time making decisions, and they don't make optimal decisions.
And again, I think I didn't quite answer the specific question, which was – can you repeat it one more time?
Chris Martenson: It’s perfectly fine. We’re going in the right direction. I’m trying to build up to this idea of how we’re wired and how it is that we make decisions. And again, I’m not here to say that emotional decisions are bad and rational are good. Far from it. However, I am interested in the idea that as a collective human species, when we’re presented with some really big challenges, that getting that information across can be really challenging.
And so your work is just absolutely fascinating because it's revealing the ways in which we actually go about making those decisions. And again, not to say good ways, bad ways, or any of that. But if we understand how we go about making decisions, then we have a chance to influence those in ways both for our personal lives and then maybe more collectively.
So you started with this idea that these aren’t necessarily separate brains. I’m aware of the idea of the triune brain, that we have stacked sort of different brain functions through evolution. Is that still current knowledge? Is that how we look at the brain now, that it consists of various regions of evolutionary design and that they basically interact with each other that way?
Tali Sharot: It is true that as you go deeper, those brain regions are evolutionarily older. And of course, the newest ones are up there in our frontal cortex, and they make us more human in the sense that they make us uniquely human, different from other animals. So, a greater ability to plan ahead, a greater ability to do calculations and analysis and language and things like that. So that is true, and all these systems are connected.
There are a few functions that are very much specialized. We know that we have specific regions for face perception. We know that there are specific regions that are extremely important for language. If those are impaired, we're going to have a problem.
But when it comes to decision making and assessing risk and so on, you can't really pinpoint it to one region. It's really processes that happen across quite a few regions of the brain in order to come up with a decision, whether you want to choose A or choose B. And certainly the sort of financial and personal decisions that we make will involve a whole host of regions working together, like an orchestra, to come up with the action and the choice that you're going to make.
Chris Martenson: Let's talk then about – I'm really fascinated by your 2012 Ted Talk. What is the optimism bias?
Tali Sharot: The optimism bias is people's tendency to expect the future to be better than the past and the present: our tendency to overestimate the likelihood of positive events in our lives, like a successful marriage, having a talented kid, or success in your profession, and to underestimate the likelihood of negative events, like the likelihood that you would get divorced, or have cancer, or be in an accident. And we find that about 80 percent of the population has this optimism bias.
And there’s a few things to remember about the optimism bias. First of all, it’s very much about your personal future, your future, perhaps the future of your family and very close friends and kids. But it’s not about the future of the world or society or the country. In fact, we often find that there’s something called private optimism but public despair where people are quite optimistic about their own future but not necessarily about the future of humanity or the future of citizens in their country in general.
One reason for that is a sense of control. People have a sense that they have control over their own life, and therefore they can steer the wheel in the right direction. And if we have control over our own life, we think, well, we're going to steer the wheel toward the good future, and that makes us optimistic. When we feel we have control, we're more likely to be optimistic because we say, "Well, I'm going to be healthy because I'm going to do the right things to make me healthy."
But we don’t necessarily have a sense that we have control over the future of our country or the world and so on. And so in those cases, we don’t necessarily have as much optimism as we do about our own future, which is why you often see people being very pessimistic about a host of different things, such as politics and so on.
Chris Martenson: And that comes from, if I got that right, this sense of agency or control. How does that sense of agency and control feed into the optimism bias?
Tali Sharot: As I mentioned, if you believe that you have control over your future – say you start a company and you think, well, whether the company succeeds or not has a lot to do with my own actions – if you believe you have control over the destiny of your company, then you will be optimistic, because you think, "I can do what is needed to succeed."
But let's say you're someone starting a company who thinks, "I don't have any control over where the company will go." Then there's less reason for you to be optimistic, yeah? So that's why the two things are related.
Chris Martenson: And let's talk about – say, I know a guy, a serial entrepreneur. He's managed not to be successful at any of the companies he's started, yet he's very optimistic. So let's talk about how data or information feeds back into the optimism bias.
Tali Sharot: Right. So far we've talked about why a sense of control can make you optimistic, but there are other mechanisms that generate optimism. One of them is a mechanism we discovered back in 2011: you learn a little bit better from information that suggests a positive future than from information that suggests negative things about the future.
For example, you might think, "My likelihood of succeeding with this company is 70 percent." And I would look at all the data and say, "Listen, according to what I'm looking at, maybe it's 50 percent." So I'm giving you bad news. I'm telling you it's less likely that you would succeed than you thought.
What we see is that people update their belief a little bit, but not a lot. They might say, "Maybe my likelihood is 65 percent." So you thought it was 70, but maybe it's 65 percent now.
However, suppose I looked at your data and said, "Look, I think your company is not 70 percent likely to succeed; it's 90 percent likely to succeed." At that point, I'm giving you good news, and what we find is that people who get good news really change their beliefs quite quickly. You would say, "Well, okay, maybe it's 87 percent." So you're much more likely to alter your belief when I'm giving you unexpectedly good information about the future than unexpectedly bad information.
And when we looked at how the brain responds to that kind of good and bad news, we found that the brain, especially parts of the frontal cortex, encodes information that suggests good news better than information that suggests bad news. So we encode both good and bad news, but most of us, on average, encode the good news a little bit better than the bad news. And so we use it to alter our beliefs.
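This kind of asymmetric belief updating can be sketched as a simple learning rule, where beliefs move toward new evidence faster when the evidence is better than expected. To be clear, this is only an illustrative toy model, not Dr. Sharot's actual analysis; the learning-rate values are hypothetical, chosen just to reproduce the 70-to-65 and 70-to-87 numbers in the example above.

```python
# Toy model of asymmetric belief updating.
# NOTE: the learning rates below are hypothetical, picked only to
# reproduce the illustrative numbers from the interview.

GOOD_NEWS_RATE = 0.85  # strong update toward better-than-expected evidence
BAD_NEWS_RATE = 0.25   # weak update toward worse-than-expected evidence

def update_belief(current_estimate: float, new_evidence: float) -> float:
    """Shift the estimate toward the evidence, faster for good news."""
    error = new_evidence - current_estimate
    rate = GOOD_NEWS_RATE if error > 0 else BAD_NEWS_RATE
    return round(current_estimate + rate * error, 2)

# Bad news: told 50% when you believed 70% -> you only drop to 65%.
print(update_belief(70, 50))  # 65.0
# Good news: told 90% when you believed 70% -> you jump to 87%.
print(update_belief(70, 90))  # 87.0
```

In this sketch, the stress effect Dr. Sharot describes later would correspond to setting both rates equal, so that bad news moves beliefs as much as good news does.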
And, in fact, we could change how people process information by interfering with these brain regions that encode information. We could use a method where we pass a little magnetic pulse through the scalp of the participant into a certain region of the frontal cortex, and then we interfere with that brain region.
And by doing that, we can actually interfere with the way that people process information, and we can do it where we only interfere with how you process the good news or mainly how you process the good news. And we can also do it in a way that we interfere mainly with how you process the bad news.
Chris Martenson: Well, that’s fascinating. So what happens when you interfere with the region where somebody’s processing good news?
Tali Sharot: Right. They would process good news a little bit less well. And when we interfered with the part of the brain that, in this specific instance, was processing bad news, they would encode the bad news a little bit less.
Chris Martenson: And so that bias you talked about, where they would adjust their percentages, they wouldn’t adjust them quite as far. Would that be the…?
Tali Sharot: Yeah. You can play around with the bias. If your typical person learns more from good news than bad news, and then you interfere with the part of the brain that was encoding good news, then of course, the amount that they would learn from good news goes down. And therefore, now, you don’t have as much of a bias or a bias at all.
Chris Martenson: Now, is this a very specific region? Like you would say like facial recognition or the ability to speak or something like that? This is a part of the brain – is it dedicated to this task?
Tali Sharot: Yeah. I mean, in our case, we knew exactly what brain region was encoding the type of information we were giving a person in a specific task, and so we could interfere with that region. It's not necessarily the only region doing it. And again, it's part of a system, but it's one that our brain imaging studies suggest is very important, and so we can interfere with that specific one.
But I wouldn't call it a good news region or a bad news region. Which specific region is doing that job depends on things like how the information is presented: whether I gave you numbers, or verbally told you something, or what you had to do. So we can identify specific regions that are really important in specific contexts and situations, but I wouldn't then go ahead and say this region is the good news region of the brain or that region is the bad news region of the brain.
Chris Martenson: Fascinating. But evolution has decided this is a thing worth giving us, and there must be a lot of pros to the optimism bias. What are those?
Tali Sharot: Right. If you think the future is bright, it reduces your anxiety, and that's really good for both your mental health and your physical health. In fact, we find that many people who don't have an optimism bias tend to have depression. And people have also found that optimists are more likely to succeed in different professions, whether it's business or sports or politics. Because if you're an optimist and you think, well, I'm likely to succeed, then you put more effort in. If you think, well, my company is definitely going to succeed, or is likely to succeed, then it motivates you to work hard.
But if you think, well, nothing I do is going to help, this is a lost cause, the company is not going to succeed, then you're less likely to put effort into it, and it becomes a self-fulfilling prophecy. To be clear, it doesn't become a self-fulfilling prophecy just because we had a thought; but your thoughts change your actions, and your actions can, of course, change outcomes in the world. That's why optimism is related to success: it enhances motivation.
Chris Martenson: So health and wellbeing. Any downsides?
Tali Sharot: Yeah. If we underestimate risk, then we're less likely to take precautionary actions. So if you think, "Well, I'm going to be so healthy" that you don't even go to medical screenings, or you don't buy insurance when you actually should, that's a problem. You don't prepare for the worst-case scenario; that's a problem.
So there are two things to say here. One is that most people are mildly optimistic. They don't necessarily think, oh, this is never going to happen to me; it's just that they underestimate the likelihood. That's the first thing.
And the second thing is that we found something interesting quite recently; we published it last year. The problem we started talking about here is that, okay, optimism is good for many reasons, but it can also be quite negative, right? You're underestimating risk.
In fact, for example, people have said that the financial collapse of 2008 was partially because of the optimism bias: people overestimating how the economy would do, people overestimating their ability to pay mortgages, and so on and so forth. So the optimism bias of a lot of people was one of the causes of the downfall. So there are negatives here.
And so it was commonly thought by psychologists and philosophers that, okay, there are advantages to the optimism bias and there are disadvantages, but if you look at everything together, the advantages probably simply outweigh the disadvantages. And that's why we have the optimism bias.
But, what we found recently is that the optimism bias can disappear quite quickly in environments where it should disappear, where it’s probably advantageous not to have it, and then it comes back in other environments.
So, in relatively safe environments, like the one that you and I are in today, on average the optimism bias is probably quite helpful. It keeps your mind at rest; it enhances your motivation. But if I put you in a really threatening environment, where there are lions around and so on, you really don't want to underestimate your risk at all. And so in those environments, it might be better not to have an optimism bias at all. And we found that is likely what happens.
And so we thought, well, we’re going to take people – and of course, we can’t put them in a frightening environment with hungry lions around, but we will put them in a threatening environment where they feel quite stressed, and we’re going to test whether under those environments the optimism goes away. And so what we did is we told people that they’re going to have to give a speech in front of everyone else on a surprise topic that we were going to give them, no time to prepare, we would videotape them, we would put it on YouTube. The idea was just to create a threatening environment where people are stressed.
And the idea was: would this stress change the way you process information and therefore make the optimism bias go away? And that's exactly what we found: under threat, under stress, people started learning from unexpected negative information much better than they did before. So if someone thought the likelihood of their company succeeding was 70 percent and we told them, oh, it's only about 50, under stress they would learn a lot from that. They would change their belief quite a lot.
But if you didn't, if you just put them in a relaxed environment, it was back to the optimism bias: learning more from positive than negative.
We did the same with firefighters in the state of Colorado. The idea is that firefighters have quite diverse days. Some days they're just sitting in the station; it's quite relaxed and quite safe. But some days they actually have to go out, and there are life-threatening events. We had the firefighters do our experiments, and we found that when they were under stress, the optimism bias went away. They learned quite well from negative information, equally well as from positive. But when they were relaxed, they learned less well from negative than from positive, and the bias was there.
So this means that the optimism bias is not just adaptive because it has more advantages than disadvantages on average; it's adaptive because it can come and go in different environments, in a way that makes it adaptive.
Chris Martenson: So this raises a couple of things. First, we've noticed this bias ourselves. For instance, the worst time to try to buy a generator is when a hurricane is coming, and the best time is about a month after it's passed. It takes about 30 to 60 days, and people forget; they don't need these things anymore, and they don't want to be prepared for the next one. And so it goes away very quickly.
But the important point here is what happens during times of calm, and this is getting at the heart of whether this is evolutionarily helpful or not. Humans have come through an extraordinary arc. We're up at 7.6, 7.8 billion people now. There are no fresh continents. We've kind of run through things. We and the animals we eat account for 96 percent of the mammal biomass on the planet. So we're now at the edge of our cradle, as it were, and we need to start making some really big, collective decisions around things like climate change, or the eventual dwindling of fossil fuels, or soils, or ecology, things like that.
Given the fact that we have the optimism bias, and given that there are people out there like myself who think we need to begin motivating action toward some fairly large, long-distant goals which we might not feel a lot of agency around, to combine a few things we've talked about: what would be the dos and don'ts of beginning conversations where, for example, you're interested in alerting people to climate change in a way that would actually lead to action?
Tali Sharot: So climate change is an extremely problematic issue, because the only way for us to understand it is to think about things that are not in front of us. Yes, we can see a slightly hotter summer or more snow than usual, but our environment feels quite safe. It feels quite pleasant, and so there's no immediate danger in front of us that we can actually see and feel, that our brain can actually process as stimuli coming in. And so we feel like we are in a safe environment. We're quite relaxed.
And so the only way for us to understand the dangers is to try to grasp them in this abstract way, to try to think about these numbers of what could happen in the future. And that's really problematic, because those numbers on their own are not necessarily going to change the way we process information, stress us out, and cause us to change our actions. It's something we have to imagine in our mind, and imagine something that we've never seen before. That is extremely difficult.
The other difficulty is that most of the really negative effects are not going to happen to us in our lifetime; maybe to our kids. And that's another problem. People think, well, especially depending on how old you are, "It's not going to affect me directly." You're asking people to make decisions on things that are going to affect future generations. That's another problem.
And finally, in order to think about climate change, you are really asking people to think about the bad stuff. We're asking them to think about the dangers and all the catastrophic things that could happen in the future. People don't like to go toward the bad stuff. We have this avoid/approach instinct where we approach the good things, whether it's chocolate cake or money or love. We move forward; we approach. If you see a photo of a smiling person, you approach.
When we see the bad stuff, we try to go back, avoid it, not get close to it, whether it's poison or someone frowning or any kind of danger. We just stay away. And, in fact, it's been shown, for example, that if you try to get people to donate money, say on one of those GoFundMe things or that sort of site, and you show someone who is smiling, showing positive valence, people are more likely to put in money for that individual than for someone who looks sad or distressed, even if the money is to help someone who's sick.
Showing a person with a positive valence is more likely to get people to contribute than a negative one. So that's another problem with climate change.
And finally, we have the problem of control because we don’t really feel that we have much control over climate change as an individual. It’s really hard to convince people that they have control.
So those are all the problems. And that's ignoring for a second that some people don't believe in it at all, for different reasons as well.
So I think in order to get to action, we need to reframe the problem not as: we're going into extinction, we're going into a catastrophe, we need to do something now to avoid it. But more like: what can we do to make our planet as good as it's ever been? What can we do to protect it and make it a place that is flourishing? To insert a positive message rather than a negative one, in order for people to want to approach and take action and be involved, rather than saying, "I don't want to think about this bad stuff because it's not something nice to think about."
And I think that's something that hasn't been done so much. The focus has mostly been on trying to scare and cause fear, rather than trying to enhance the sense that we can create something that's great for future generations. It's something that at least is worth examining, whether that kind of message…
Chris Martenson: That's a great point. I learned something similar from George Lakoff, who wrote Don't Think of an Elephant! and other pieces like that. We were discussing something along these lines, and he uses a great metaphor. He said, "Look, when Obama released the idea of the Affordable Care Act, he goofed, because he acted as the COO rather than the CEO. He came out and said we need to control rising healthcare costs." That's a tactic, and it's operational.
Instead, if he had reframed it as the CEO, in terms of vision, and put it in moral terms, saying, "People deserve access to healthcare when they need it," then it's harder for the sides to get out their machine guns and go at it.
But as soon as you put a tactic out there, like saying we have to control the temperature and it needs to stay under two degrees, I don't know what that would mean to me personally, even though I've studied this. So what would your research say? What's happening when we frame something in terms of data versus at the moral level, if I can use that term?
Tali Sharot: Well, the moral level is more about highlighting our motivations and goals, right? You don't even have to think about it as moral, just: what goal are we going toward? How do we want things to be? That's something that very much motivates people. Yes, it's still problematic when the goal is way in the future and perhaps not in our lifetime, but you can have some goals that are in our lifetime. A lot of them are. So if action were taken now, we would see consequences in the near future.
So goals are definitely something that motivates people. It's kind of like playing one of those online games: there's a goal, and even though the goal doesn't really matter for anything, people get so involved in it and want to reach it. So that's definitely a great way to motivate people.
Another great way is to highlight the immediate rewards. So what I talked about before is framing it as something that is trying to make good in the world that’s better for everyone and future gen
– Peak Prosperity –