Everyone is Obsessed

“I’m designing a machine that will allow us to break every message, everyday, instantly.”  (Benedict Cumberbatch as Alan Turing)

I’m on the last few days of winter break right now, after finishing my first quarter at Stanford (and deciding to stay). During break, I’ve done something that I’ve never done before: watch lots and lots of movies. In the past three weeks, I’ve watched The Imitation Game, Big Hero Six, Interstellar, Gone Girl, The Princess Bride, and Into the Woods. I’ve watched more movies in the past three weeks than in the rest of 2014 combined!

Why the sudden change? Previously, I’d never been able to justify spending two hours watching a movie. After all, in that time I could be reading a book or learning a skill or having a great conversation with a friend! The reason I suddenly found myself watching many, many more movies is because of what I decided to obsess over, starting in the last few months of 2014.

In my last piece, I wrote about needing to teach myself how to think. For the past month, this is exactly what I’ve been doing. I’ve been completely absorbed in putting together a curriculum for myself (which I’ll post shortly!) and trying to figure out the best way to learn this. In the meantime, I’ve been letting myself get more and more obsessed with how I and the people around me think. (This is all deliciously meta.)

And because I’m learning how to think, and about how others think, it really struck me when one of my friends said, “You know that many people base [parts of] their lives on movies, right?”

Oh God, I remember thinking. If it’s true that people are so influenced by movies, then it’s crucial that I understand what movies are about, and why they’re so influential. And in this case, just hearing or reading about movies hasn’t been enough to allow me to understand the appeal.

So, I’ve been watching lots of movies. And by golly, now I’m hooked. Not only do I have a better understanding of why other people are influenced by movies, I’m finding that I am easily influenced by movies. After all, I aspire to be obsessed with learning how to think, and all good movie characters are exceptionally obsessive.

In The Imitation Game, for example, Alan Turing is obsessed with building a machine to break Enigma. In Big Hero Six, Hiro is obsessed with avenging his brother’s death. In Interstellar, Cooper is obsessed with finding a new home for mankind. In Gone Girl, Amy is obsessed with punishing her husband into submission. And so on.

People in movies are all obsessed with things — that’s what makes the movie interesting. More specifically, they are all obsessed with things that are epic or virtuous or aspirational in some way. For me at least, noticing characters I admire in movies reminds me to focus on one thing, and to do that one thing very, very well. To become obsessed with something is the only way to succeed.

That got me thinking. Characters in movies are obviously obsessive, but what about real people? When I thought about this initially, I found myself pointing to a number of my friends who are obsessed, perhaps because they have found their passion — whether in programming, persuasion, or positive impact.

But then, I wondered, what about “regular” people, people who haven’t found their passion yet? Are they obsessed, too? I realized with a start: Hell yes. Among the people I know who are still searching for a passion, many of them are obsessed with doing so. On the other hand, some people are obsessed with the latest work drama, or the latest Korean drama. Others are obsessed with finding a romantic relationship. Still others are obsessed with having fun. A subset of these people are obsessed with getting wasted. And regardless of whether or not I or society endorses what different people are obsessed with, the fact is that everyone is obsessed.

And this makes sense. People can’t think deliberately all the time. When they’re not thinking deliberately, they loop back to a thought pattern — either a way or a topic of thinking that is natural or comforting to them. Their thoughts return to this pattern whenever they have free time.

Most people seem to let their minds wander. They become obsessed with things that society encourages them (with pop culture, for example) to obsess over. They find themselves fixating on petty conflicts between friends or drama in their romantic relationships or one-upping their peers on Instagram. This is the less glamorous side of obsession.

Luckily, we do not have to be stuck in our thought patterns forever. We can deliberately change them, by deciding and training ourselves to become obsessed with something new. I don’t know exactly how this works yet (I suspect that there are certain constraints, e.g. you can only become obsessed with something if it helps achieve many of your goals), but I know that people do it, because I have (2011: Growth Mindset, 2012: Being interesting, 2013: Learning, 2014: Thinking deliberately).

You can train yourself to be obsessed with something, and in fact, you must. If you want to achieve exceptional success in anything, you have to be thinking about it much more than anyone else. Obsessing over something means that you’re thinking about it and making progress on it all the time — when you’re in the shower, when you’re in the car, when you’re trying to have a conversation with a friend…

Everyone is obsessed with something. What will you obsess over in 2015?

My Curriculum: Learning How to Think

On this blog I’ve posted a few reflections on how I think, and a few musings on how I should think. But I haven’t really explained how I’m learning to think. This is because I’ve been trying and tweaking my plan. My original curriculum sucked. It sucks a bit less now. But it’s getting better, and it’s gone through significant changes. I expect it to go through many more. Now is a good time to share it.

Plan #1

My original plan to learn how to think was to read a bunch of texts and take notes on them. Over winter break I started doing this by capturing the arguments from a book called “de Bono’s Thinking Course.” Very quickly, however, I realized there was at least one massive flaw in my plan. Namely, just reading about different thinking techniques didn’t guarantee that I would apply them, or apply them well. Also, you can imagine that some techniques are much, much more useful than others, and I hadn’t built into my plan any way of evaluating what I read.

My first plan, therefore, rested on two incorrect assumptions: (1) that all thinking techniques I encountered were equally worth adopting and (2) that just knowing about the techniques meant that I would use them and use them effectively. In my second plan, I tried accounting for these things.

Plan #2

My second plan was very similar to my first, with two notable additions. The first was that I would separate my curriculum into two distinct phases that I would switch between: the first phase being collecting useful thinking techniques, and the second phase being choosing the most useful ones and practicing them. This accounted for the two problems I ran into when I tested my first plan.

Of course, I ran into problems here, too. How was I going to check whether or not I’d actually gotten better at thinking? Was it when I noticed myself using new techniques that I’d learned? Maybe… but what if I was just using them arbitrarily, and they weren’t actually causing me to think better, just differently? That’s not what I was aiming for. Oh no.

Plan #3

Between plan #2 and plan #3 I had a great late-night conversation with my friends at Leverage, who pointed out a critical flaw in my plan. Thinking is a meta level skill. You have to apply it to something concrete, to an object level skill. Or else you won’t have adequate feedback mechanisms telling you whether you’re actually getting better.

They proposed that I choose a concrete skill (defined here as “a skill that will cause noticeable changes in the outside world”), suggesting things like storytelling or programming or learning an instrument. In each of these skills it’s obvious whether there’s been improvement: you can detect the fluency of your stories, you can see the usability of your code, you can hear the quality of your music.

The object level skill I chose was persuasion. If I got better at persuasion, I would know just by answering a simple question: could I more reliably get people to do what I wanted? The idea here was that if I was getting better at thinking about how to learn persuasion (yes yes, it’s very meta), then I would notice rapid gains in my persuasion ability that wouldn’t have occurred otherwise.

So I thought about how to learn persuasion. First I addressed my underlying objections to learning persuasion. I realized that there were a number of valid reasons why I felt blocked from trying to persuade people of things. For example, I was worried that people wouldn’t like me if I asked them to do things for me (which is not necessarily true). I articulated my concerns and addressed them and moved on.

Then I ran into another problem: there wasn’t anything I wanted to persuade people to do that they wouldn’t easily do. Or so I thought. When I reflected on this a little bit more, I realized that there were in fact “unreasonable” things that I wanted others to do, but since I didn’t believe I could persuade them to do these things, I’d convinced myself that I didn’t care. To solve this, I made a list of “unreasonable” things that I could ask others to do. Some of them were pretty funny (e.g. get someone to take off all of their clothes with you in public). Others were heartfelt (e.g. convince someone that they’re good at something that they actually are good at, but won’t take credit for). None of them were completed.

Plan #4

Why wasn’t I doing what I’d said I’d do? Well, there were even more flaws in my plan. First, I didn’t know how long each of the persuasion challenges on my list of unreasonable things was going to take. So, I didn’t plan for them and they didn’t happen.

More importantly, the persuasion challenges didn’t feel important to me. I didn’t believe that if I did them, I would actually be any better at persuading people to do things. I thought I would just get better at being more socially daring. This is important, and something that I’d like to get better at, but not exactly what I was aiming for.

I realized that the thing I really wanted was to persuade by means of understanding people better. I wanted to have a better grasp on who people were – from their essence to their specific idiosyncrasies to their ultimate goals. (I have lots of room to improve in this dimension.) If I understood people better, then I would be more aware of how to change their behavior. This is what I wanted.

So, this brings me to my current plan. After about a week of deliberation, I decided to narrow my efforts for learning persuasion to a single task: modeling people. Now, I am building the habit of spending an hour each day writing out what I know about someone I care about, someone who I regularly interact with. I’m trying to develop a systematic way of thinking through what I know about people I know. The feedback mechanism here is simple: once I have an explicit model of someone, I can check it by asking them questions and trying to anticipate their behavior.

I know that plan #4 isn’t perfect. Neither will plan #5 be, or even plan #100. But it’s getting better! Now I’m at a place where I have three important things. I have:

(1) A meta level skill that I want to get better at – thinking.

(2) A concrete skill that I’m applying it to – persuasion.

(3) A habit that I can learn persuasion with – spending 1 hour a day writing out my understanding of someone.

Let’s see how this goes.

A Reflection on Growth Mindset


When I was 15 years old, I adopted growth mindset. It made a huge impact on my life. For a long time I accepted that this change in my beliefs was the best thing that had ever happened to me. In the past year or so I’ve begun to realize that although growth mindset was extremely useful for me, it didn’t come without unintended consequences.

Here’s my story.

What is growth mindset?

Growth mindset, a term coined by Carol Dweck, is a set of beliefs that a person can have regarding their ability (and the ability of others) to learn. It points to the idea that intelligence is developed, not innate. Given this belief, a person can then believe that if they aren’t good at something now, that’s okay – they can just work hard and learn it.

In contrast, people who have a “fixed” mindset believe that they always will be how they are now. If they aren’t good at something, they can never be.

It’s apparent why having a growth mindset is useful. It gives people flexibility in what they choose to do. It allows them to get better, because they believe they can. Also, it more accurately reflects reality: most things are skills, and all skills can be developed with adequate learning techniques.

How did I adopt growth mindset?

I first encountered the idea of growth mindset during a class I took during my junior year of high school. It was a pilot program offered by a nearby independent school, and I loved it. There was a very flexible curriculum and a relatable teacher who came to lead discussions once a week, but other than that it was entirely self-directed. In this class I also studied other useful things (e.g. the flaws of higher education, which led me to UnCollege, where I worked for two years).

It turns out that I was very well-positioned to adopt growth mindset. I already had an obsession with becoming much better than I was. I’d devoured self-help books the two years previous – freshman and sophomore year of high school – and this was a missing piece. Although I had inklings of growth mindset, having it articulated gave me a framework for implementing it in many more parts of my life.

I obsessed over growth mindset for a good three years. Whenever I noticed myself thinking “I can’t do it,” I would append the sentence with the word “now.” I can’t do it now, but I can learn it and be able to do it later. Once I had growth mindset, I began to adopt adjacent beliefs. One of them was the idea that anything could be a learning experience. So, whenever I left a situation feeling like I’d wasted time because I hadn’t achieved anything or because I’d looked stupid, I would query myself: “What did you learn from that experience?”

Over time I began to believe that learning was the most important thing to do. I realized that in order to learn, you had to believe that you could do it – which is where growth mindset comes in. People often ask me the story behind my tattoo, the one that says “we are all still learning.” The answer is growth mindset. It changed my life.

Growth mindset gave me freedom. The flexibility that it provides – because it posits that you can become anything if you just learn – gave me the confidence to, in the 11th grade, quit everything I was doing and start over, to start doing things that I had no idea how to do, with the idea that over time I would learn and get better. It gave me the confidence to explore, which I did – from fashion design to hackerspaces to martial arts to writing.

It made me feel powerful. I knew (and continue to know) that I suck, but that if I just keep trying and learning I can one day do big things. I had control over my life, where previously I thought I didn’t.

As a result of adopting growth mindset, I developed higher ambition and increased realism about just how much I need to learn in order to become the person that I want to be.

So where does growth mindset go wrong?

This is a question that I’ve been exploring for a while. The things I say here only pertain to me – the adoption of growth mindset no doubt influences different people differently. The reason I elaborate on them here is because I want to emphasize the fact that when you change your mind, you have to watch out for unintended consequences.

Below are five that I’ve run into.

1. Growth mindset caused me to lack focus.

After I adopted growth mindset, I came to believe that I could be anything. This was great at first. I tried a whole bunch of things and gained basic proficiency in them. But once I realized I could do anything, I realized that I wanted to do everything. This is a problem. It’s true that you can get better at anything, but it’s not true that you can do it without adequate time and effort. Even now, I sometimes still want to do too much, and falsely believe that I can.

2. I had an existential crisis.

Something that I felt compelled to do after adopting growth mindset was test whether it worked – whether I could actually get better at anything if I just spent time learning it. It did. But after many successive proofs of possibility, I began to wonder: “What’s the point?”

What should I choose to get better at? Before I could answer that, I had to answer the question: why was I trying to get better at anything at all? Was it to look cool? To earn money? To help other people?

Once I realized my motivations, I then realized that I had to figure out which skill would be the most useful and learn that, since I had the ability to learn any skill. Of course, I didn’t have a good way of evaluating what skill was most useful, so I freaked out, and spent a bunch of time worrying about not choosing the optimal skill and therefore not learning anything at all.

3. I grew overly dependent on thinking my way to success.

Growth mindset is really valuable. But it’s not true that if you believe then you can achieve. Hard work doesn’t pay off if it isn’t useful work. For a while I was too dependent on “trying harder” to learn and get better at things, instead of realizing that I had to combine growth mindset with actual techniques for learning (e.g. deliberate practice or feedback mechanisms). You won’t get better at things just by trying – you have to try in a certain way.

4. I stopped being able to tell what I liked and disliked.

A consequential belief that I gained as a result of growth mindset was the belief that I could learn how to control my feelings. In particular, I could learn how to enjoy anything. This ended up being true on a meta level. If you stuck me in a tank with sharks, for example, and I was terrified and I lost my arm, I would get out of the tank and think, “Well Jean, that was still a good experience because I learned that I can survive even the worst of my nightmares. And now that I am disabled I have the opportunity to learn how to be okay with being different from other people!” This is a fine reaction to have… IF I also acknowledged that I felt anger and resentment because I was thrown into a tank full of sharks in the first place.

Too often, when I felt negatively about something, I completely masked how I actually felt, instead of realizing that my negative feelings were valid too. In other words, I could frame anything positively – sometimes detrimentally so.

This meant that even when I put myself in situations that I strongly disliked, I pretended like it was enjoyable anyways… I was relentlessly positive, and it made finding what I actually liked on a gut level really difficult, since I wasn’t even acknowledging that I had a gut.

5. I singularly dedicated myself to learning, without taking care of my other goals.

As a result of growth mindset, I started to believe that learning was the most important thing, and that if I wasn’t learning (or if I wasn’t learning fast enough) then it meant that I was wasting time. This is bad because sometimes wasting time is good! There are countless arguments for this.

At one point, my dedication to learning got so extreme that I stopped believing that it was important to have fun. Except it was, obviously, for my sanity. But for a few months I was detestably serious, keen on efficiently making the most of every moment of every day. I was a bore.

But despite this…

Adopting a growth mindset was great in many ways. It was not so great in other ways. Even with the myriad of unintended consequences that it caused, I’m really happy that I have it. That said, I am aware that I took it to its extreme. For now, I’m trying to find the in-between.

How do you store and access information?


my timeline, to remember important dates in scientific history

I used to run into a frustrating problem. While reading, I thought that I’d understood very clearly what the author was saying. After reading, I’d forget it all. I’d be able to recollect a few arbitrary details of the book, but not in a systematic or cohesive way. For a while, I tried just taking detailed notes, but this didn’t work either. I didn’t have a guide for what kinds of things to write down, so I ended up writing down everything. This was unhelpful and took too much time, so I stopped doing it.

But I didn’t want to stop reading. I know how useful books are, especially when you’re like me and don’t know two shits about the world. But I also didn’t want to keep reading books. What was the point if I wouldn’t remember any of what I read?

What a dilemma. To be honest, I struggled for a few months, and ended up taking a break from reading until I could find some answers. Luckily, I was eventually given a great piece of advice, which I now follow: when I’m reading, I should simply capture all of the good (i.e. intriguing, insightful, unconventional) arguments that an author makes. I can ignore the ones that don’t make sense, the ones that don’t have adequate evidence, and the ones that leave out crucial parts of the picture.

After trying it out for a few weeks, this framework for capturing information from non-fiction books proved super useful, at least for me. It allowed me to capture relevant information in a systematic way, helping me remember small details in the context of main ideas. (This prevented me from drowning in interesting details that weren’t actually relevant to the bigger picture, which was an earlier problem I’d had). Success!

Because this framework made sense to me, and I adopted it, I feel much more motivated to read. I have a way of retaining information now!

A Recurring Problem

Of course, I soon started to notice this problem in other parts of my life, too. My inability to capture information in a cohesive way was preventing me from even trying to engage with all sorts of important things. When people told me about a monumental date in history, for example, I would say, “That’s super interesting!” and immediately forget the details of what they’d said. I didn’t have a way to integrate the new piece of information into my model of the world, so… I didn’t do it.

After solving my reading problem, however, I discovered that there’s an easy solution to the general problem of retaining information:

1. Capture. When interacting with any sort of information that you want to remember, first check whether you have a method for sorting between what is relevant and what is irrelevant. If you do, great! If you don’t, come up with a heuristic to sort between the two. If you’re taking a history class, for example, perhaps you decide you want to remember key people and events, but don’t care about exact dates. Good enough.

2. Store. Now that you’ve decided what information to capture, you have to find a way to remember it. You have to come up with a framework that makes sense to you. With the hypothetical history class, for example, you might choose to create a timeline for a period of 50 years. You choose to remember one major event from each decade, and decide that for each event you’ll remember (a) the key people involved, (b) reasons why the event occurred, and (c) key changes the event caused. For each event that you want to remember, you know that you have to fill out these three blanks.

3. Access. Then, when you want to use the information, you have an easy way of doing so (as long as you remember the framework you used to store it). Again using the history example, you would remember that you chose to remember 5 key events and 3 details (a, b and c) for each of the events. So, if you’re studying the history of science like me, you might ask yourself: what was a major event in the 1950s? Well, one of them was the development of the birth control pill. And from this event you can then query yourself for the specific details: namely, that (a) Margaret Sanger was the key advocate and Gregory Pincus the key scientist, (b) Margaret Sanger was a fierce proponent of sex and sexual openness, and (c) birth control helped give women control over their bodies and their futures – and thus ushered in more gender equality.

So now when I come across dates in history, I have a system for remembering the information in a way that makes sense to me: this event/detail model. It’s also useful, you can imagine, to store the information in a way that you can relate each piece to another. I have another system for this: a visual timeline that I’ve made on my wall (see picture above). Now I have yet another way to store and access the information (the timeline is 5 feet from my bed, so I often fall asleep looking at the different dates), so I am extra sure that the time I’m spending learning is not going to waste.

A few more examples might be helpful here. Let’s say you’re trying to remember stories you’ve read, watched, or heard. You might, for every story, make a list in your head of the main characters, the setup, the problem, and the solution, and then give the story a name. Then, when you want to access the specific story, you can simply recall the name of the story. Then, based on your model of the parts of a story necessary to remember (the characters, setup, etc.), you can remind yourself of specific details. This way, with just one piece of key information (the name of the story) you can store and access key details. This convention of tying various details to one piece of information can be thought of like a Hash Map in Java, where you have a key that links to other pieces of information. Except this time, it’s information in your brain.
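The HashMap analogy can be made concrete. Here’s a minimal Java sketch of the story example, where the story’s name is the key and a bundle of details is the value (the `Story` record and its fields are my own invention for illustration):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical "bundle of details" you decide to remember for each story.
record Story(String characters, String setup, String problem, String solution) {}

public class StoryMemory {
    public static void main(String[] args) {
        Map<String, Story> memory = new HashMap<>();

        // Store: the story's name is the key; the details hang off it.
        memory.put("The Princess Bride", new Story(
                "Westley, Buttercup, Inigo",
                "a farm boy loves a princess",
                "she is betrothed to a prince",
                "true love (and a sword fight) wins out"));

        // Access: recalling the one key retrieves all the linked details.
        Story s = memory.get("The Princess Bride");
        System.out.println(s.setup()); // prints "a farm boy loves a princess"
    }
}
```

The point isn’t the code itself, but the shape of it: one memorable key, a fixed schema of slots, and everything else retrieved through that key.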

Another example of information you might want to remember: conversations. Let’s say you want to track how a conversation has progressed. You might have a system as simple as keeping a mental list of the major topics you’ve covered. For each topic, you choose to mentally note what you thought about it, and what the other person thought about it. By remembering the different topics, you now have a framework for remembering the details of the conversation. This makes accessing the information simple.

This might all sound incredibly basic. Perhaps it is, for many people – I don’t know. I just know that this mind problem that I had — not being able to effectively retain information – is now a problem that I have a handle on, thanks to this three-step process. Hopefully it helps you, too!

What does it mean to be considered “smart?”

my bumper sticker

I often leave conversations in awe. Wow, I’ll think to myself. She is really smart.

It didn’t occur to me until recently that this is a very strange thing to say. Why? Because I mean all sorts of things when I call someone smart: from perceptive to eloquent to proactive to expert. Yet I don’t have a concrete concept of smartness itself.

This poses an interesting problem: without a specific model of smartness, how am I telling if someone is smart? What am I actually perceiving? And does this map on to actual smartness?

What Smartness Isn’t

To answer this question, I tried generating a number of definitions of smartness. I ended up coming up with a bunch of things that smartness isn’t: doing well at school, being socially aware, having domain expertise, and so on. These things sound reasonable, but alone don’t necessarily demonstrate smartness.

You can imagine a scenario, for example, where you wouldn’t consider someone smart, even if they possessed one of these traits. Maybe Kayla does really well in her classes but fumbles all social interaction. I wouldn’t consider her smart (perhaps bookish, though). Perhaps Dylan is very socially aware, but is otherwise vapid and boring to talk to. I wouldn’t consider him smart, either. And it’s possible that Stephanie knows a ton about computer science, but doesn’t ever think to apply her knowledge, or express that she knows about these things, and thus her skill passes unnoticed and unused in her work. She’s not very smart, either.

What People Perceive

If a tree falls in a forest and no one’s around to hear it, does it make a sound?

This question seems fitting here. Implicit in my example of Stephanie is that simply having knowledge doesn’t make you smart. Many people have knowledge; the question is, do you use it? Do you share what you know? Do you apply what you know to make other things?

What you do with what you know (for example, sharing it with others) is an important part of being smart. But again, it itself does not guarantee that you are smart.

Our perception of smartness is separate from smartness itself. I think there is a certain set of skills that you can develop that can make you seem very smart, even if you aren’t. Take Nick as an example. Nick is super charismatic, very eloquent, and talks a lot about interesting ideas. With his Ivy League education and stellar resume, he’s gained the confidence to hold his own in conversations. Is he smart? Maybe. Or maybe he’s just persuasive. Maybe he’s just trained himself to speak and look and act a certain way, a way that compels others to like and admire him.

Now consider this: none of the ideas that Nick talks about are his own. He’s actually just paraphrasing a well-known columnist. And his “stellar” background? Yeah, it’s forged.

You might say, “Well, there’s no way that I would consider Nick to be smart in this case!” And I would have to agree with you. But the thing is, if I had a conversation with him, I might easily leave thinking: Holy shit, this Nick guy is amazing.

Seeming smart is not the same thing as being smart. They’re different skills.

Actually Being Smart

So what does it mean to actually be smart? I’m not sure. Now I believe that your view of how “smart” someone is, and how smart someone is in relation to you, will rest heavily on what you value and aspire to achieve.

In my case, being smart means being good at thinking — being able to think methodically, precisely, and with self-awareness — and being able to express clearly what I think (we might call this persuasiveness). Because your ability to think is something that you have to do all the time, and influences everything else you do, it seems to me that becoming smart in this respect is crucial.

Why aren’t more people learning how to think?

Have you noticed the dirt on your glasses?

What’s that? You don’t wear glasses?

Oh yes you do. We all do.

The Invisible Lens

Our ability to think is like a pair of glasses, a lens that shapes our understanding of the world. We wear these glasses all the time, and yet they are often invisible — most noticeably to people who believe their perception of reality to be definitive.

Some of us do notice the glasses we’re wearing. Then we notice that our glasses suck. They’re smudged and out of focus. But since we can’t take them off, we’re forced to try to understand things from behind their imperfect lenses. This means that our perception of reality is distorted, and this has enormous implications.

The world that we believe that we live in informs what we do, what we think we can do, and how we go about doing it. What happens if the world we actually live in is very different? In all likelihood, it is.

We need better glasses. We need to learn how to think.

Why We Should Learn How to Think

Our ability to think affects everything else that we do. If we’re terrible at thinking, we’ll suck at whatever object level task we choose. If we’re methodical, precise, and effective at thinking, that’ll help us in anything we undertake.

By virtue of thinking all of the time, we inevitably train ourselves to think in certain ways. We need to make sure that we’re training ourselves to think in useful ways, or at the very least, not in harmful ways. If we don’t set aside time for learning how to think, however, we’re at risk of picking up constraining beliefs and abysmal thinking techniques. And it’s very unlikely that we’ll stumble across all the best thinking techniques, let alone adopt them.

What I Mean By “Learning How To Think”

I often hear adults talk about how their thinking has changed over time. This is fine and true, but implicit in these stories is the commonly accepted idea that people learn how to think by simply growing older and having more experiences. This is a terrible way to learn how to think. (More on this below.)

When I talk about learning how to think, I mean something very specific. I mean sitting down at a desk, learning about different thinking tools and techniques, and practicing them until they come naturally to you. I mean deliberately setting aside time to think about thinking. I mean learning how to think as if it was a skill just like writing or programming. I don’t mean learning how to think just in the process of living life.

Why People Don’t Learn How To Think

There are tons of reasons why people don’t learn how to think. But since I’ve spent my whole life living in a series of bubbles (most recently Leverage Research, where I spent the majority of my gap year), the fact that most people don’t consider directly and deliberately learning how to think to be important is baffling to me. So I came up with a list of reasons why this might be the case. Here they are:

Possible Reasons Why People Aren’t Learning How to Think

  • It’s assumed that we already know how to think, at least well enough. To set aside time to learn how to think is to admit that you don’t know this very, very basic skill. How embarrassing.
  • It’s commonly accepted that learning how to think by simply living our lives is an adequate strategy. It is not. There’s no telling which thinking techniques you’ll stumble upon, whether they’re the “right” ones (the ones that are most useful for achieving your goals), or (if they are) whether you’ve adopted them.
  • It’s not obvious to many people that thinking is a skill, a skill that you can set out to learn. But it is: thinking is a skill just like writing or programming — albeit less celebrated.
  • It’s not a skill that’s explicitly taught in (most) schools. Teachers allude to it when teaching their individual subjects, of course, but these references are often missed. We end up learning information rather than how to process that information.
  • It requires a level of self-awareness and growth mindset that many people don’t have.
  • It’s not immediately profitable, and people are monetarily motivated and constrained.
  • People don’t know how to learn it, because it’s a fuzzy meta skill and it’s not talked about very much. In order to learn it, they would have to think through how they’d do it, and that requires them to be somewhat good at thinking.
  • People aren’t aware of just how much better they can get at thinking. This could be because they don’t have examples of others who are thinking much more clearly, methodically and precisely than they are.
  • They don’t think they need to, because they don’t realize how rigorous and technical a skill thinking can be. But it is! There are so many techniques to learn.
  • They don’t understand why it’s important. Or they don’t realize that they’re wearing glasses.

When Is It Okay to Guess?

My EE professor (who is amazing) chastised me today for trying to guess my way through circuit analysis. I guess you’re not supposed to do that. Whoops. “You need to be absolutely certain what you’re doing, each step of the way,” he reiterated to me. “Or else more and more noise will accumulate until you don’t know what’s going on at all.”

This idea of certainty was striking to me for two reasons. First, it seems obviously better to learn a subject thoroughly, feeling confident in your understanding of each of the components and how all of the pieces fit together. You’re able to do much more with the information if you understand it like this. The thing is, I’m not doing this right now. Instead my current strategy for learning is to try to absorb as much information as I can, perhaps only understanding 70% of the information and its implications. I end up not grasping the information well enough to feel confident applying it where I haven’t been told how to do so – I prevent myself from inferring from what I know, because I’m not sure what I know. This needs to change.

The second reason is because this idea of certainty seemed to directly contradict what I’d been taught to do in CS106A (Programming Fundamentals) last quarter. CS106A encourages you to guess (by writing and testing code) if you don’t understand something, in order to understand it better. It was very helpful to be reminded today that this learn-by-guessing approach, although useful for CS, does not necessarily work well in other fields. But why is this?

I suppose it’s because computer science (at least, CS as taught in my intro class) has better feedback mechanisms. In CS, guessing costs nothing and testing is easy — you can just write up some code, run the program, and see what you get. This is not the case in EE. It’s harder in circuit analysis, for example: you’d have to actually build the circuit you envision, measure voltages and currents with special tools, and hope nothing blows up. It pays not to be wrong in EE, in a way that it doesn’t in CS.

This got me thinking: when is it okay to guess? When is it beneficial to guess? When is guessing costly?

Here’s my current understanding:

  • It’s beneficial to guess if:
    • by guessing you get useful data on how good your answer is and how you can improve it, allowing you to find the right answer in a low-cost (e.g. fast) way.
      • Ex) If you guess at code in CS, you can tell if it works (your program successfully runs) and where it’s broken if it doesn’t (by using the debugger).
    • making a reasonable guess now is more useful than having a precise answer later.
      • Ex) You’re playing a game of Diplomacy and you have 15 minutes before you have to decide your next moves. It’s much more helpful to just act on the limited information that you have on other players, rather than trying to ascertain what everyone’s next moves will be before you act, in which case you’ll miss your chance.
  • It’s bad to guess if:
    • you’re trying to build a solid foundation of knowledge in a field where things are known for certain, and there aren’t adequate cheap feedback mechanisms for you to tell if you’re on the right track.
      • Ex) Trying to guess as to how to apply fundamental principles in EE.
    • guessing has irreversible negative consequences.
      • Ex) Trying to guess your way to correct form while lifting, which is what I’ve been trying to do, and getting severely hurt in the process.

Everything Is An Argument (Even This)

i ran away to the other side of the bay today to write this

I remember when my friend Geoff first made this claim to me. I had just begun to learn about arguments, and I was baffled. “Everything…?” I said, incredulous. Was this true? And if this was true, how could I have been oblivious to something this fundamental until age 18?

So I did some reading. Specifically, I read “Systematic Philosophy,” a website Geoff created to teach people the basics of arguments. It was hard for me to digest; there were many pieces. That was this summer. Now, six months later, I found myself rereading his site, still not having fully grasped the concept of arguments, or the argument: “Everything is an argument.” These are the two things that I will discuss here.

My Current Understanding of Arguments

In order to understand the argument, “Everything is an argument,” I first have to understand what arguments are, and how they work. Below is my current understanding.

The following is largely a summary of Geoff Anders’ website, “Systematic Philosophy,” which I cannot recommend enough. It’s so clearly written. This model also includes other things that I have learned about arguments (e.g. of rhetoric in my writing class at Stanford). It is solely my understanding of arguments, and may contain inaccuracies.


  • Why learn about arguments?
    • To find the truth (which we can do because of entailment).
    • To understand reality (because everything is an argument).
  • What is the purpose of arguments?
    • To give people knowledge of that argument’s final conclusion. This allows us to find the truth.


  • What is an argument?
    • An argument consists of three things: (1) one or more propositions, (2) exactly one final conclusion, and (3) at least one proposition that entails the final conclusion.

  • What are some basic terms to describe arguments (or their parts)?
    • Paradigm Case
      • Examples that illustrate a definition (often allowing us to avoid generating an incorrect or imprecise definition). For some of the terms below, Geoff illustrated them with paradigm cases rather than with actual definitions. These will be noted as (PC).
    • Concept
      • (PC) <square>, <an action being good>, <non-red>. Marked with angle brackets.
    • Proposition
      • (PC) <squares exist>, <an action is good>, <all bachelors are unmarried>. Marked with angle brackets.
    • Step
      • One of the propositions in an argument.
    • Premise
      • A proposition in an argument that is not entailed by any of the steps in that argument.
    • Plausibility
      • How good a premise is.
    • Statement
      • A unit in physical or mental language that we would judge to be capable of truth or falsity. Not a question. Not a command. Not a physical object. Marked with quotation marks.
    • Conclusion
      • A proposition that is entailed by one or more of its steps.
    • Final Conclusion
      • A proposition that has been selected to be the final conclusion. Must be entailed by one or more of the propositions in the argument.
        • Key Idea: If you take an argument and try to designate a new proposition in that argument as its new final conclusion, you’ve really just changed which argument you’re talking about.
    • Intermediate Conclusion
      • Any conclusion that is not the final conclusion.
    • Entailment
      • When one proposition is related to another by a chain of immediate entailments; a relation between propositions.
        • Key Idea: The fact that you can’t see the entailments is not enough to show that the entailments are not there.
        • Key Idea: Entailment is truth-preserving. True propositions never entail false propositions, although false propositions can entail both true and false propositions.
    • Immediate Entailment
      • (PC of categories) logical connectives like and, or, not, if/then; Modus Ponens: A, if A then B, therefore B; Modus Tollens: if A then B, not B, therefore not A
        • Key Idea: If P1 is true, and P1 entails P2, then P2 is true as well.
        • Key Idea: If P1 immediately entails P2, and P2 immediately entails P3, it is possible that P1 does NOT immediately entail P3.
        • Key Idea: Every proposition immediately entails itself.
    • Validity
      • A property of arguments: an argument is valid if its conclusion is actually entailed by the other steps.
        • Key Idea: An argument can be valid and bad. Validity alone is not enough to make an argument good.
        • Key Idea: All good arguments are valid.
    • Transparent Validity
      • An argument is transparently valid if in every case where an argument represents a conclusion as being entailed by one or more steps, those steps immediately entail that conclusion. Necessary for an argument to be good, but not sufficient on its own.
    • Soundness
      • A property of arguments. An argument is sound if it is valid and all of its premises are true. Soundness is important because if an argument is sound, its final conclusion has to be true. But just because an argument is sound does not mean it is good.


  • How can we evaluate arguments?
    • Transparent validity. If an argument is not transparently valid, it is automatically not a good argument.
    • Quality of premises. An argument is at best as good as its worst premise. Assessing premises is the central question of philosophical methodology.

  • How can we express arguments?
    • Text Format. Express your argument in paragraph form.
      • Benefits: Allows you to give examples and ask questions.
      • Drawbacks: Hard to tell how many arguments are being made, or how many propositions there are.
    • Numbered Format. Express each proposition as a step. Note which steps entail which conclusions. Give each step its own line.
      • Benefits: Makes the argument crystal clear. Cuts away anything inessential. Exhibits the structure of an argument. Makes it easy to check for transparent validity.
      • Drawbacks: Can only contain statements.
    • Formal Logic.
      • Benefits: Ultra precise.
      • Drawbacks: Not easy to understand.
  • What are some rhetorical appeals, used to make arguments?
    • Ethos. (Appeal to authority)
    • Pathos. (Appeal to emotion)
    • Logos. (Appeal to logic)
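The numbered format and transparent validity above can be made concrete with a toy model. The following Python sketch is my own illustration (not from “Systematic Philosophy”): propositions are plain strings, conclusions cite the steps that are supposed to entail them, and the only immediate-entailment pattern the checker knows is modus ponens.

```python
def modus_ponens(cited, conclusion):
    """True if cited contains both P and 'if P then Q', where Q is the conclusion."""
    for p in cited:
        if f"if {p} then {conclusion}" in cited:
            return True
    return False

# Numbered format: each step is (proposition, cited step numbers),
# where None marks a premise (a step entailed by no other step).
argument = [
    ("it is raining", None),                             # 1. premise
    ("if it is raining then the ground is wet", None),   # 2. premise
    ("the ground is wet", [1, 2]),                       # 3. conclusion [1, 2]
]

def transparently_valid(arg):
    # Every step represented as a conclusion must be immediately
    # entailed by the steps it cites.
    for prop, cites in arg:
        if cites is None:
            continue  # premises need no support
        cited = [arg[i - 1][0] for i in cites]
        if not modus_ponens(cited, prop):
            return False
    return True

print(transparently_valid(argument))  # True
```

A real checker would need many more immediate-entailment patterns than modus ponens; this one only exhibits the structure — steps, citations, and a mechanical validity check.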

Is It True That Everything Is An Argument?

I don’t think so. Everything is a lot of things. If I can come up with just one example of something that isn’t an argument, then this statement is not true. For instance: Concepts aren’t arguments. Neither are lists. There.

But lots of things are arguments that people overlook. When students learn history, for example, they often don’t realize that the textbook is making lots of arguments about why things happened, and how significant things were.

I think people say this because it reminds us that arguments exist even when they’re not presented explicitly. All in all, it’s a useful line, just not entirely accurate!

Sample Arguments

To conclude this blog post, and to make sure that I have the idea of arguments at least semi-firmly engrained in my mind, I am now going to try to make three arguments. Each will be in text and in numbered form. Right now they probably suck. Come back to the post in a week (after I get feedback on them) and I’ll have them updated!

  • Argument 1
    • Text
      • Not everything is an argument, because everything is a lot of things. If I can come up with just one example of something that isn’t an argument, then this statement is not true. Since neither concepts nor lists nor physical objects are arguments, it is not the case that everything is an argument.
    • Numbered
      • 1) If physical objects aren’t arguments, then not everything is an argument.
      • 2) Physical objects aren’t arguments.
      • 3) Therefore, not everything is an argument. [1, 2]
  • Argument 2
    • Text
      • Building a network isn’t a good reason to go to college, because you can build a good network as long as you are in a reasonably sized metropolitan area, probably for much cheaper.
    • Numbered
      • 1) If you can do something much cheaper, then you should.
      • 2) If you live in a reasonably sized metropolitan area, you can build a network more cheaply than paying college tuition.
      • 3) Therefore, building a network isn’t a good reason for going to college. [1, 2]
  • Argument 3
    • Text
      • No one really understands reality because each part of reality is influenced by other parts, and no one understands everything.
    • Numbered
      • 1) Each part of reality is inextricably tied to and influenced by other parts.
      • 2) In order to really understand reality, you’d have to understand everything.
      • 3) No one understands everything.
      • 4) Therefore, no one really understands reality. [1, 2, 3]

Thinking Things

“How would I define an effective thinker? Someone who can turn on his thinking at will and deliberately focus it in any direction he wants. Someone who is in control of his thinking instead of just drifting from idea to idea, from emotion to emotion.”

-Edward de Bono

My mind is a mess. It is cluttered with incoherent thoughts and unjustified beliefs. It rivals the American garage in its state of disarray. It thinks it knows a lot. (In fact, it knows very little.)

Sometimes I wonder how I’ve made it this far.

I try not to show this, of course. In conversation, I fend off the uncertainty I feel. I express myself more assertively than perhaps my statements deserve. In the future, I don’t want to do this.

I want to know where my beliefs come from. I want to have good reasons for them. “Someone I admire believes this” isn’t a good reason.

People keep talking about independent thinkers. I’d like to be one. I want to know what’s true and what’s not.

But society’s definition of independent thinking sets a low bar. It applies to anyone who does not blindly follow convention. Being a truly independent thinker extends beyond this.

Being a truly independent thinker means evaluating everything you hear. It means not deferring to anyone’s thinking… even and especially if you admire them. It means not fooling yourself about what you know.

By this definition, I’m not an independent thinker. At least, not yet.

Being this kind of independent thinker requires skill. It requires the skill of being able to assess arguments. It requires awareness of different idiolects. It requires understanding and avoiding cognitive biases. It requires knowing what you actually believe. (This is harder than it seems.)

Descartes calls human beings “thinking things.” It’s true. By living our lives, we are thinking all the time. We are training ourselves how to think in the process.

Knowing this, we have a choice. We can live and think haphazardly, or deliberately. I can’t fool myself anymore, so my decision is clear. It’s time I teach myself how to think.