I'll Get To It Later

Why we procrastinate even when we know better

I started writing seriously seven years ago, and sometime in the intervening period, I became a procrastinator. Missing a deadline is a terrible, deep pit: At first, it’s to be avoided at all costs, and then, once experienced, it’s something never to be relived. And yet, I catch myself following the same patterns, flirting with the same disasters. Creativity can’t be rushed, I say to no one in particular, and so I delay my work obligations throughout the week, and then let them bleed into the weekend, when I’ll be free from the rigors of responding to e-mail. Then, I bump my tasks until Sunday evening, when the weekend’s activities will have abated; once I find myself too sleepy to think properly, I postpone again, with plans to hit the ground running early Monday morning—at which point, I work my alarm’s snooze button like a speed bag.

A few months ago, I decided to try to learn why this keeps happening. I wanted to know the mechanisms in the brain causing me to stall, to invoke a Wimpy’s choice, if you will, and perpetually assume that next Tuesday will be more fruitful than today. In other words, what is unique about future thinking that causes people like me to fall behind, again and again and again?

So, I pitched this essay, and after a sufficient number of weeks passed for the shadow of a deadline to creep up, I talked to Scott Huettel, chair and professor of psychology and neuroscience at Duke, who told me, very kindly, that my question isn’t that straightforward. The explanation for this phenomenon doesn’t lie so much in how we think about the future; rather, it’s found at a more elementary level, in how we decide.

“Almost all our decisions are about consequences in the future,” says Huettel, whose lab specializes in the field of “decision neuroscience,” broadly engaged with exploring how humans navigate such choices. Outside of very specific instances—say, electing between two hors d’oeuvres—our brain chooses by parsing benefits and consequences to be realized far down the line. “If we make a decision about where we go to dinner tonight and where we make a reservation, that’s about the future,” Huettel says. “If we have to make a decision about which college to attend, that’s about the future. If we have to make a decision about whether to spend money now or save it, that’s a decision that weighs future consequences.”

But what do we think about when we think about the future? Mostly, we’re creating and testing scenarios, says Huettel. “Your brain is sort of a simulation engine. One of the things that’s most important about a human brain is that it can take conditions and run them through.”

Doing so requires tapping into the part of the brain called the hippocampus, says Felipe De Brigard, Fuchsberg-Levine Family Associate Professor of philosophy at Duke. The hippocampus is maybe best known for its role in memory, but memories provide the kindling, the “building blocks,” for any forward projections. “The future is not exactly how the past was,” says De Brigard, “but it is more likely that the future may be how the past could’ve been.”

De Brigard studies the interaction of memory and imagination through counterfactual and hypothetical simulations—more precisely, our constant creation of such false scenarios, of “something that didn’t happen but could have occurred.” He’s fascinated, then, by what specifics we retain from our past to use in our projections. Over time, scientists have learned that one’s memory doesn’t function like a court stenographer, dutifully recording what happened when and allowing precise recall of the facts. Memory is more of a construction. And thus, unsurprisingly, so is our understanding of the future.

I’ve used “our” a lot in this essay, and I feel bad about it. Like most invocations of the first-person plural, it’s born out of laziness: The brain mechanisms might loosely be the same across all humans, yet there is no uniform approach to the future. For example, De Brigard notes that not all elements of memory—and thus future projections—are equivalent: There are semantic building blocks, created from a knowledge of the world, and episodic, or more autobiographical, building blocks. Older adults draw more on semantic memories; younger adults draw more on episodic ones.

But many explorations of future thinking, including Huettel’s, center on the question of patience: When given a choice, do people elect for a “smaller, sooner” reward—say, forty dollars today—or are they able to hold out for a “larger, later” payout—sixty dollars in six months? And what do those individuals who are more patient have in common?

The findings oscillate between obvious and striking. Adults are more patient than youths, but older adults—who might, callously speaking, be planning for a shorter future—tend to demonstrate more patience than younger adults. Individuals suffering from drug addiction prefer rewards sooner, but impulsivity, as a character trait, isn’t a strong predictor of what economic payouts a person will prefer. Perhaps most compelling is the lab’s recent discovery, from tracking subjects’ eye movements as they compared rewards across time, that the more future-oriented individuals exhibit a very rapid comparison process. Said another way: When people took more time to decide on a reward, they were actually less patient.

Huettel explains that, traditionally, scientists have believed patient individuals either exhibit greater self-control or slow down to make the proper choice. “Our recent research really argues that neither is correct,” he says. “What seems to be the case is that the people who are most patient have this meta-program of a rule that they can apply in a lot of contexts that basically says that the future is very valuable, and you should look for what gives you the most total resources regardless of when it occurs.”

Undoubtedly, myriad biases plague humans’ future thinking. There’s “decision paralysis,” or what De Brigard calls “the Hamlet problem,” in which one faces a complex and/or infinite set of possibilities and struggles to find a shortcut. (Teenagers struggle with this task more so than adults; they decide both more slowly and more randomly.) There are “availability heuristics,” in which the choice is colored by one’s ability to almost automatically think up expenditures of forty dollars today—but not costs of sixty dollars in half a year. Individuals “might be more willing to wait for the future in cases where there’s a major life transition you can think about,” Huettel says, like the holidays, or an upcoming move. In other words, if your goal is to be more patient, force yourself to consider what your future self might need.

And time and again, people procrastinate, a symptom of “optimism bias.” “People are very willing to assume, in the future, that they’ll have more time,” says Huettel. “It’s harder to think of the constraints” in the future, and so quickly, tasks get pushed back, and schedules become overbooked with future commitments and responsibilities. “It’s particularly pernicious when thinking about things in the future, because we might be overestimating our ability to deal with things,” he says.

“We don’t think about all of the little obstacles that life is going to throw at us.”

Certainly, there’s a lot left to be discovered in these areas. De Brigard hopes to explore variances in the creation of mental simulations: Are certain future scenarios overlooked because it simply doesn’t occur to us that these things could happen? Do particular populations—say, people suffering from anxiety—struggle because they generate too many counterfactuals?

For Huettel, one aim is to understand the conception of the future self. Recent scientific literature has suggested we might think about our present selves and our future selves as different people, although he wants to explore this debate further. “Is it the case that in order to think about our own future self, we use social-cognitive methods that evolved for thinking about others?” If so, it could mean that the interventions—the nudges to inform optimal behavior—might need to exist not just individually but societally. “To make better decisions,” Huettel says, “we might need to develop empathy.”

At first the idea seems tangential, but at its core is the desire to effect change. Perhaps the most dangerous flaw is in thinking that our decision-making will always improve, that things will always, eventually, become painless. To believe that deprives us of our agency in the current moment; our future selves can only be better off when we do things to improve their worlds—now.

I know, firsthand, how easy it is to fantasize about being productive in a week, a year, a decade. What’s even easier, I’ve discovered, is entertaining the dream where I don’t have to be, because all my work has been taken care of by a very kind soul: present-day me. It’s a nice dream, in that the fantasy soon reaches its natural conclusion, evaporating as fast as it forms. And by the end, I’m left sitting at my desk, typing.

Hubbard ’14, the former Clay Felker staff writer for Duke Magazine, is a writer and researcher based in Durham whose work has appeared in INDY Week, Deadspin, and other publications.
