INTERVIEWEE: Jeff Hammerbacher
BACKGROUND: Scientist, software developer, Cofounder of Cloudera and Related Sciences, founding manager of Facebook’s Data team
TOPIC: What gets measured
LISTEN: On the web, Apple, Spotify
“It makes me uneasy when I think about how much corporations are measuring about human interactions today and not making considered decisions about what gets measured. Deciding what gets measured is a political and ethical choice.” — Jeff Hammerbacher
In one of the most infamous quotes of the internet era, Facebook’s first head of data told Businessweek: “The best minds of my generation are thinking about how to make people click ads. That sucks.”
That person was Jeff Hammerbacher, an important figure in data science who now works full-time in biomedicine. As the founding manager of Facebook’s Data team and cofounder of the publicly traded enterprise data company Cloudera, Jeff pioneered a lot of data techniques and ideas — including the title “Data Scientist” — that are foundational today.
Over the past decade Jeff’s attention shifted to medicine: he worked on cancer research at Mount Sinai, using computational methods to design personalized therapeutic cancer vaccines (work covered in the New York Times), and he’s a founder and advisory board member of the COVID Tracking Project. His current focus is a venture pharmaceutical studio called Related Sciences, which does early-stage research to identify and develop new drugs.
Jeff is someone who has had a significant impact on the world while staying very quiet about it. He doesn’t post on social media, and this conversation for the Ideaspace is his first on-the-record public interview in years. Jeff is a dear friend and one of the smartest people I know. I highly recommend listening to the conversation in full or reading the complete transcript (which goes much deeper into biomedicine, philosophy, and science). Below is an edited and condensed conversation focusing on Facebook, data science, and whether corporations should live forever.
YANCEY: We're friends, but I only know the high points of your bio: you grew up in the Midwest, you were a very early Facebook employee, and when we met you were working at Mount Sinai doing cancer research. There are other things in between, but what's your story?
JEFF: The story, compressed—you got a lot of the high notes. Born in Michigan, grew up in Indiana; Dad worked at GM, Mom was a nurse. Went to Harvard to play baseball, declared English, ended up studying Mathematics. Worked as a quant on Wall Street right out of college in New York City, quickly moved to Facebook, where I was one of the first people to try and work on data there. I ultimately built and led a group called the Data Team, which built our data infrastructure, as well as the Data Science group. That was certainly a moment of notoriety for me, in that we were really the first large, publicly-visible team to choose the title “Data Scientist,” and the title has become quite popular. So that was an adventure at Facebook. We wrote some open-source software that had commercial potential beyond just Facebook, and I joined Accel Partners, an early Facebook investor, as an entrepreneur-in-residence, and quickly helped create a company called Cloudera. I served as the Vice President of Products initially, then ultimately as the Chief Scientist, and helped grow the business. It eventually became a publicly-traded company and still is today.
Around 2012, 2013, we had hired a professional CEO. The business was growing nicely; it was doing really what it was supposed to do. And I started getting more interested in applications of data science again, rather than infrastructure for data science. That is what took me to Mount Sinai, when we met. I was on the board of a nonprofit called Sage Bionetworks, which worked with a gentleman named Eric Schadt. Eric had been recruited to be the Chair of the Department of Genetics at Mount Sinai, and in discussing with him my interest in applying data science, he mentioned that it would be possible to create space for me within his department to try and apply data science to biomedicine. So I started spending more and more of my time in New York City, and built out a group there at Mount Sinai with no real mandate beyond just, “let’s try and do some of the stuff that we did at Facebook and Cloudera here on the data they have at Mount Sinai.” Ultimately [we] became focused on cancer immunotherapy, which has really revolutionized cancer treatment. And that was a fun process to work on.
YANCEY: That was a successful project? What was the outcome of that?
JEFF: It was an academic lab, so it was limited in its impact on any one project. We worked on a variety of different things on the periphery. I'd say the thing that was really the primary focus of the lab was working on what's called a neoantigen vaccine for cancer. A neoantigen vaccine is a therapeutic vaccine, not a prophylactic, or preventative, vaccine, which can be counterintuitive to people, including me, who didn't realize that the term vaccine can be used in the therapeutic setting. For people that already have cancer, we can take a biopsy of cancer tissue and a blood sample and do sequencing on both. And then we can look for mutations that occur in the tumor that don't occur in the blood, in the normal healthy tissue, that might be generating an immune response. It works the same way as the vaccines many of us are getting right now, which try to tell our immune system to create antibodies and T cells that are specific for a small sequence that's present in the SARS-CoV-2 virus. What we were trying to do was make therapeutic vaccines, which would teach the patient's immune system to create antibodies and T cells specific for the tumor, but would leave the normal tissue alone. The altered protein products generated by those mutations were known as “neo-,” as in new, “antigens,” and antigens are things that the immune response is directed against. So there were a couple groups working on this. Amusingly, the primary commercial company working on a therapeutic neoantigen vaccine was BioNTech, which has now helped prevent me from getting COVID based on my Pfizer vaccination. I was intimately familiar with their science through my work in neoantigen vaccines. There were a few other groups that were working on neoantigen vaccines. There was a group at Dana-Farber, led by Cathy Wu and Patrick Ott, and a group at Washington University in St. Louis, led by a guy named Bob Schreiber. But Nina Bhardwaj at Mount Sinai was definitely in the mix as one of the earliest people to be working on this. And we were lucky enough as a lab to encounter Nina during a formative stage of her neoantigen vaccine clinical trial. We plugged in and were able to provide the algorithms that would pick up the sequencing data and ultimately tell Nina which antigens to target for the therapeutic vaccine. We were the group that in the phase one clinical trial was receiving data on the patients and then directing the peptides to be produced for the actual vaccine. So I think of that as a successful outcome for the academic lab. It was a really exciting project to be involved in. Ultimately we did a sponsored research agreement with a company called Neon Therapeutics, which had its own commercial neoantigen vaccine, and was eventually acquired by BioNTech. So I guess you could say there was some commercial impact of our work as well.
That was the main output of that New York lab. Eventually I opened up a branch of our lab in Charleston, South Carolina, at the Medical University of South Carolina. My wife and I chose to move down here to start a family, and I ultimately shut down the New York lab to do more wet-lab research here. In 2019 I put my lab at MUSC on hold and started a biotech venture creation firm called Related Sciences, which is my primary professional focus today.
That brings us up to the present. Mixed in there are things like: I taught a course at Berkeley called Introduction to Data Science for a couple years that ultimately became the foundation for a degree-granting program in Data Science, which was quite cool and interesting to be involved in. My wife and I do angel investing through a firm we call Techammer; we've made over 120 angel investments. That's something that I do with a portion of my time. And I've sat on the board of several companies. I’m currently on the board of a company called CIOX, which does electronic medical record data exchange, and only recently stepped off the board of Cytel, a leading standalone biostatistics CRO, because it was acquired. So it's pretty difficult to create a summary of all that's going on, but hopefully that gives you some background for my story.
YANCEY: When you started at Facebook, was it still The Facebook?
JEFF: That sounds right. This is all lost in the sands of time. I interviewed at the end of 2005 and started in the beginning of 2006. Somewhere in that timeframe we expanded from colleges to high schools. Then I think after I started we expanded to corporations, and then later that year the big thing was what we called “Open Registration,” which was allowing anyone with an email address to sign up. Somewhere in there the branding might have evolved from “the Facebook” to “Facebook.” That's right.
YANCEY: What was your first project? What do you remember about the beginning? What was the state of data? What were you there to do?
JEFF: The state of data was interesting. There was an internal tool called the Watch Page: it ran queries over the MySQL databases that delivered data to the website and synced the results of those queries back to a single MySQL server, which would give you a sense of how many active users were on the website and how they broke down across networks. So the first order of business when I arrived was to create a more professional version of that, which would be referred to as a data warehouse, that could collect a broader range of data. The questions that people wanted to ask of that data warehouse were primarily around growth: which of the networks were growing, which of them weren’t, what was causing them to grow. That was the early focus. There was some thought that I should be involved in things like the ranking algorithm for NewsFeed or sponsored advertising ranking algorithms. Ultimately, that wasn't too much of a focus; we ended up being more of a business intelligence function.
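[Note: For a concrete picture of the “query the production databases, sync the results to one place” pattern Jeff describes, here is a minimal Python sketch. sqlite3 stands in for MySQL so it runs self-contained; the table, column, and function names are invented for illustration, not Facebook’s actual schema.]

```python
# A toy version of the "Watch Page" pattern: run the same aggregate
# query against each production database, sync the per-shard results
# into one central table, then roll them up. sqlite3 stands in for
# MySQL; all names here are hypothetical.
import sqlite3

def sync_active_users(shard_conns, warehouse):
    warehouse.execute(
        "CREATE TABLE IF NOT EXISTS active_users (network TEXT, n INTEGER)"
    )
    warehouse.execute("DELETE FROM active_users")  # refresh the snapshot
    for shard in shard_conns:
        rows = shard.execute(
            "SELECT network, COUNT(*) FROM users "
            "WHERE last_active >= date('now', '-7 day') "
            "GROUP BY network"
        ).fetchall()
        warehouse.executemany("INSERT INTO active_users VALUES (?, ?)", rows)
    # One row per network: active users summed across all shards.
    return warehouse.execute(
        "SELECT network, SUM(n) FROM active_users GROUP BY network"
    ).fetchall()
```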
YANCEY: When you're trying to find out which areas are growing or why they're growing—now I feel like we have some implicit data tech ways of thinking about that. But was that an easy question to answer then? Was it obvious to you what to do? Were you creating the path to identify triggers of growth? How nascent was that?
JEFF: It was very nascent. I don't think we really had any idea. We explored a lot of different hypotheses. At the time it was broken into networks, so we didn't really think of it as a single product. It was more like, “At this college, we took off; at this college, we didn't take off; what could’ve caused that?” So we did analyses — I remember crawling student newspapers for mentions of Facebook and correlating that with user growth. We looked at things like the largest clusters of users, so the largest connected component or nearly-connected component of users, thinking that there might be something there. We would often compare it to walking into a party where there are seven people standing in the room all by themselves versus walking into a party where there are seven people standing in a circle talking to each other. It's a different social experience. So we evaluated things like that. We didn't really get rigorous for another year or two. Matt Cohler was an early executive, and I remember him pulling me and Naomi Gleit — who I believe is actually still there as a pretty senior person — aside and basically saying, “We need to make growth the number one focus for our analyses.” My first hire was a guy by the name of Itamar Rosenn, and I basically said, “Hey, I want you to publish a thing every week called the Growth Report,” which was literally a PDF where he would present a set of standardized metrics and then go deep on one hypothesis about what might be causing growth. The Growth Report was a pretty fundamental tool internally at Facebook at the time that ultimately served as the foundation for the Growth team. A lot of the terms like “growth hacking” could be traced to Matt telling us to really emphasize growth in our analyses.
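[Note: The “party” comparison maps onto a standard graph computation. Here is a hedged illustration using networkx; the edge list and numbers are made up, and this is not the actual analysis code.]

```python
# Sketch of the largest-connected-component analysis Jeff describes:
# seven people talking in a circle vs. seven standing alone is, in
# graph terms, the share of a network's users inside its largest
# connected component.
import networkx as nx

def largest_component_share(friend_edges, n_users):
    G = nx.Graph()
    G.add_nodes_from(range(n_users))   # include users with no friends
    G.add_edges_from(friend_edges)
    largest = max(nx.connected_components(G), key=len)
    return len(largest) / n_users

# Hypothetical college network: 7 users, 3 of whom know each other.
edges = [(0, 1), (1, 2), (2, 0)]
print(largest_component_share(edges, 7))  # 3/7 of the "party" is connected
```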
YANCEY: I'm going to get the numbers wrong, but I've heard Chamath Palihapitiya talk about a famous thing where Facebook was trying to get a user seven friends in the first three days, or a certain number of friends within a certain timeline, and then certain user behaviors happen. Are those the kinds of things that were showing up in this growth report?
JEFF: Obviously it evolved quite a bit politically after I left, but I suspect that's referring to—I recall doing analyses where we would say, “Okay, let's take everyone who signed up in this fourteen-day period, and let’s take everything we know about them and their activities on the site. And then let's follow them for six months or twelve months” — so this would be a cohort analysis — “and let's try and predict their behavior six months, twelve months from now, based on what we saw them do in those first two weeks.” We would use a methodology known as feature selection, which would allow us to take all the variables that we put into that model and highlight the subset of those variables that was most predictive of the eventual outcomes. Then we would try and create narratives around those features which had high importance. Those features could be derived from hypotheses that different executives had — if someone had a hypothesis that walking into a party with seven people talking to each other had value, then we would figure out a way to formulate that quantitatively and include it as something that the model could put emphasis upon. We also did a bit of automated feature engineering to see if we could discover new features computed from the basic underlying features.
We published some of this stuff publicly, but then ultimately that became very controversial after I left, so I don't think the team published as much of what they found. But things like ensuring you had a profile photo, that's one that I can recall. So then we would try and push a lot of energy into the user experience to get them to put up a profile photo. I remember another one — we ended up finding that real-world, in-person events were big drivers of new connections and new signups on Facebook. So we would orient our field marketing around events because of that. Findings from the analyses would then trickle down into business strategies.
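[Note: As a sketch of the cohort-plus-feature-selection methodology Jeff describes, here is a toy example on synthetic data with invented feature names; it is not the actual Facebook model.]

```python
# Cohort analysis sketch: predict "still active at six months" from
# first-two-week behavior, then rank which early signals mattered most.
# Synthetic data and invented feature names, purely for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
features = ["friends_added", "has_profile_photo",
            "messages_sent", "events_attended"]
X = rng.random((1000, len(features)))      # one row per new signup
# Fake label standing in for "still active at month six": here, early
# photo uploads and friending drive retention, plus some noise.
y = (X[:, 1] + 0.5 * X[:, 0] + rng.normal(0, 0.2, 1000) > 1.0).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
# The feature-selection step: surface the early behaviors most
# predictive of the long-run outcome, which then become narratives
# ("push users to put up a profile photo").
for name, imp in sorted(zip(features, model.feature_importances_),
                        key=lambda pair: -pair[1]):
    print(f"{name}: {imp:.2f}")
```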
YANCEY: I don't know how you want to credit it, but you maybe created the title of Data Scientist. In the coursework for your class on data science, you have some of the email exchanges where you're proposing the idea. “Data Application Scientist” I think was the first one you used, I'm sure probably still the one you root for in your heart. But you talk about how this was a different role for data. It wasn't an analyst, it wasn't a researcher. You were saying, “Our jobs are something new, and we need a new title.” Why did you think that was the case?
JEFF: Well, I can tell you the proximal cause, which was actually pretty simple. We had certain people in our group who had PhDs, and there was a well-understood research scientist title hierarchy. We hired them away from research groups where they participated in that title hierarchy, so when they joined our team they wanted a title of Research Scientist. The other people came more from the business intelligence realm, as it was known; there's a title hierarchy there too, and Business Analyst was the standard title. I looked at my group, and at the time, we maybe had twelve, fifteen people. I thought, “This is really silly. Why do we need so many different titles for people who are more athletes; who are capable of writing a script one day to collect data from the source systems into our warehouse environment, building a dashboard another day, and running an analysis another day to identify what might cause people to use the site in a few weeks?”
We really wanted people who were more adaptable and had a broad base of skills to keep the team small. At my quarterly team retreat I was getting pressure externally to simplify, and in particular to squeeze out any notion of research. At the time there were negative connotations for people to be seen as being involved in research. It was all hands on deck. Today Facebook feels inevitable, but at the time it certainly didn't feel that way. We wanted to emphasize that everyone in our group was actually contributing to the success of the product, not just writing papers and doing research. So the proximal cause was effectively that I needed to get rid of the title Research Scientist in my team. Parsimony led us to Data Scientist as the title that could preserve what they valued in terms of the science. We called our team the Data Team because we had two sides: we had Data Scientists, and what today would be referred to as Data Engineering — we called it “data infrastructure.” So it was simple. It was recognizing our team was small enough that we didn't want premature specialization, and then saying, “Okay, if we're going to flatten titles, and just go down to two titles, how can we capture what people value about research scientists and what people value about other title schemes?" Subsequently, people have done all this sleuthing and discovered that people used the term “data science” in different contexts prior to our use of the term Data Scientist, so I certainly don't want to assert that I had any kind of unique claim to creating a field of thought. It really was just trying to find a title that captured what people liked about research scientists.
But ultimately I do think people felt unhappy with the available titles at the time, because the world was changing. At the time Google was such a dominant force in what could be done with data. When I thought about, “Well, why is Google considered so exceptional with data?”, it wasn't because they were publishing great statistical algorithms. People didn't look at Google and say, “Those are the best statisticians in the world.” It was more that they had a muscle, a capability, to think creatively about how consumer web products could be informed by data, and then to execute on building the infrastructure necessary to leverage data at the scale that they were able to leverage it. I thought more about, “I want to build a group that gives Facebook that broad base of capabilities.” I was a mathematics major as an undergrad, and I did a lot of courses in the Statistics department. I felt like I had a good understanding of the shape of statistics, and I wanted to be more ambitious than that. I wanted something that was a bit broader. We were at a point in time where the amount of software required to do the job of a statistician or of a data analyst or of a business analyst was just ever-increasing. It was becoming clear that you weren't gonna be able to accomplish your day-to-day tasks if you didn't know how to write software. A statistician who can write software might be another great way to describe what a Data Scientist does, in summary.
In my head, I thought what we were doing was applied epistemology. We were trying to know about the world. And we wanted to actually know about the world, we didn't want to describe what it meant to know about the world. We wanted to actually generate knowledge about the world. There were a lot of things in the ether at the time that made the existing titles not fit. And never underestimate how fashionable it all is. Larry Ellison from Oracle always talks about how software is even more trend-driven than fashion. The fact that we called people [Data Scientists] at Facebook and Facebook became this meteoric success as a startup is really the thing that explains it. We could have called it really anything, and I think because Facebook did it, people would have adopted it.
YANCEY: So you left about a decade ago, right?
JEFF: 2008.
YANCEY: So now Facebook and data together are these combustible words. It’s an explosion of fear, controversy, all these things. How do you reflect on that mood that exists around social data and Facebook these days?
JEFF: It's a completely different environment. I will say that one of the books that I recall purchasing and reading to get ready for the job at Facebook was a book called Database Nation by a guy named Simson Garfinkel. Everything that he said in that book came to pass. The subtitle of the book is “The Death of Privacy in the 21st Century.”
I guess in one sense, it all feels a bit inevitable. I actually had a moment when we were still living in San Francisco, and I remember turning to Halle [Tecco, the entrepreneur and Jeff’s partner] — and this is before the protests of the Google buses and things. Reading books like Bonfire of the Vanities and understanding the perception of the great wealth creators of the ‘80s in finance — I remember saying, “The people who are making money in technology, these are going to be the most hated people in the world. What is being done is not right, and there's going to be a massive readjustment of us as a society.” Everything that's happened has felt a bit teleological. The other thing that's hard about it is I still know people there who I consider to be good, ethical people. When you've fully adopted the worldview of the corporation, you're unable to perceive these things. So it's interesting. Compare it to how I think about charter schools — a good friend of mine from college is the president of a very large city’s teachers union. I don't want to reveal her or the actual city. But in the tech world there's a lot of emphasis on charter schools as a good solution to education. Then I hear about it from her, from the side of the teachers union, and it feels like this conversation is much more fraught with difficulties than I'm hearing from the tech side. So it's interesting to continue to hear the bull case from the people inside of Facebook, but ultimately I think the bears are mostly right: we’ve achieved a level of utter imbalance in surveillance and data use, and we need to rigorously regulate surveillance if we're going to get back to a healthy society.
YANCEY: So you don't look at the critiques and think these are emotional arguments; you think these critiques are founded on real issues, many of which were foreseeable? If you had tried to share ideas like that in a meeting in 2006 or 2007, what would have happened?
JEFF: That’s a great question. I often describe the process of building a hypergrowth startup as removing all of the arrows which point in a different direction from the direction that the founder-CEO is trying to take it. At the time, Facebook felt like a very open intellectual environment where you could say that kind of thing in a meeting and there was no real repercussion. People were genuinely trying to figure out what this new world was that we were creating, how we fit into it, and the right way to evolve along with that.
Once you reach a certain scale, that sort of deliberation is not helpful if your goal is to get to the scale that a Google or an Amazon or a Facebook has achieved. There was definitely a thinning of heterodox viewpoints that occurred in the era in which I was leaving. I can actually remember: I told my boss that I was planning to leave, and he was like, “Okay, okay, let's try and arrange things so that we can soften that landing. Let's not tell people, let’s do it for a couple months, and then we'll tell people, and then you’ll have another couple months, you'll stick around.” And one-on-one, a couple weeks later, he was like, “So I'm leaving.” [laughs] And I was like, “Wait, what?” And Matt Cohler, who I mentioned earlier, left during that time. Dustin Moskovitz left around that time. There were a lot of people who I thought were contributing to the vigorous intellectual debate who were selectively removing themselves from the rocket ship, because it was going in one direction. If I had brought it up in 2007 — I probably did. And we probably had a good discussion about it, and we tried to achieve a good outcome. But ultimately you're trying to achieve a goal that requires de-prioritizing these kinds of discussions and just focusing on the corporate outcome.
YANCEY: Make a decision and move, no more debate. I can see that.
You have a very famous quote—you're almost like a person who's followed around by a quote. You have this quote that was in Fast Company, which is, “I believe the greatest minds of my generation are figuring out how to make people click more ads. That sucks.” Something to that effect. I wonder if you can talk about that, that moment in your life?
JEFF: It was actually Businessweek. [laughs] But they’re somewhat interchangeable to me as well. It grew out of — I got to know a guy named Ashlee Vance. At the time that I got to know him, he worked for a company called The Register. And The Register had this weirdly deep technology section where they were the only people writing articles about nerdy things that I cared about, like databases and file systems. There was also a guy named Cade Metz there who now has a big new book out on AI. A lot of the people who were at The Register have done quite well, and Ashlee, obviously, is quite famous for his biography of Elon Musk. Ashlee is a really interesting guy. He was someone that I got to know and enjoyed having conversations with. He basically said, “Hey, my editor wants an article about the bubble. I don't want to write about the bubble, everybody's writing about the bubble. Would you mind having a conversation with me and going on record so that I can have something more interesting to say than just, ‘there's a bubble’?”, which was what everyone was writing about. So it was really just an hour of hanging out with Ashlee and talking about the world.
I read a lot. If there's any one defining feature of me as a human, it’s just that I read a lot. Like any sort of angsty Midwestern teen, I was really into poetry for a while. Great poetry equips you with these sentence structures that are adjacent when you're generating text during discourse. The quote came from a sense of just how bored I was with the problems inside of large consumer web properties. There's definitely very interesting science to be done, even to this day, on click-through rate prediction and surfacing the right person in People You May Know. Sure, the underlying algorithms are interesting, but the actual problem you're solving, I just wasn't interested. I never was, to be honest, in any of that stuff.
So it was really just expressing frustration that the people I worked with were absolutely some of the smartest people I'd ever met. I was a math major at Harvard, so I felt like I'd seen some pretty smart people come through there. At the time, the reputation was that if you were an incredibly smart math person, you would go become a quant on Wall Street. But that was fading. All those people who were the top mathematics people — and obviously this is one form of intelligence — but let's just say in the category of “math-y” people, the people who dominated within that category generally went on to be quants on Wall Street for a certain amount of time. But that was shifting to where they were all going to work at Google and Facebook and other companies. So it was really just observing that the problems these people were going to work on were boring. Yet if you'd asked me, “Who were the five smartest people you sat next to in college?”, it's surprising how many of them found their way into this problem domain. And that just felt like, “Okay, we design incentive structures as a society. And it feels like we've sub-optimally designed our incentive structures, if this is the problem that these people are being attracted to.”
YANCEY: Do you feel like the greatest minds of your generation are still working on helping people click ads? Is there something different? Has the mood shifted another way?
JEFF: Honestly — and I’m not saying this just to lean into where your thinking has gone — most of the smartest people I know now are thinking about climate change. It's really in the last two years that something has happened. When I catch up with people who I think are really smart, who I just like hearing what they're thinking about, I'm amazed at how many of them are all in on thinking about climate change. For a while I was biased because I was working in biomedicine, so I definitely felt some pull into healthcare and life sciences. But for people who have a sense of, “I don't want to just make a lot of money and challenge myself” — which, by the way, I'm completely fine with; that's a reasonable rubric to use to determine what to do with your life — for the people who are trying to search for deeper meaning and think about, “How can I work on something that makes me feel like I'm engaged in a meaningful endeavor?”, it's almost always climate change now. It's interesting.
YANCEY: Tell me about what you're doing now. Especially the structure of what you're doing.
JEFF: Related Sciences is a biotech venture creation firm. In the technology world you might say startup studio. Essentially, we're a company that has access to capital, but instead of just giving that capital to people who started companies, we're starting the companies ourselves and then putting capital into them at the earliest phases. Eventually they raise outside capital, but we provide the initial capital.
Related Sciences grew out of a number of different constraints. One was simply an observation that the leading early-stage investors in biotech had all moved to this venture creation model. Some of them started that way, some of them found their way there, some of them are only halfway there, but almost all of them are creating companies. It's as if in the technology ecosystem and in the software world, Accel, Sequoia, Benchmark — whoever you consider to be the most prestigious, top early-stage technology investors — it’s as if they all decided to stop investing in outside companies and started their own companies. So that would cause you to reflect, I think, as someone who participated in that industry. So I was reflecting on that fact, and then someone I had gotten to know through my work at Mount Sinai, who had worked at two of the firms that I thought of as the three leading firms, had moved to Denver to start a family, similar to how I had moved to Charleston to start a family. And we got to talking about wanting to participate in having an impact on what medicines are available for disease, while doing so not living in San Francisco or Boston. And there's a third person who I had gotten to know through one of the funders for my lab, the Parker Institute for Cancer Immunotherapy, and he had been very thoughtful about creating a lot of financing for early-stage biotech ventures. So the three of us started getting together. We would just get together for a long weekend and brainstorm about what kind of vehicle would conform to the constraints we place upon ourselves in our personal lives, as well as to our moral perspectives. We eventually came up with this biotech venture creation firm, which is going to create new ventures at a relatively — I don't want to say leisurely, but not-breakneck — pace. The goal we set for ourselves is that every eighteen months we'll create a company. When we create a company, rather than hiring a thousand people, we're going to leverage what are called contract research organizations, or CROs. Each company we create is effectively a virtual company, where we have a small number of people who manage it, but the actual experiments are run by the people that we pay at other companies that don't participate in the headcount of the company. I should mention, we're specifically focused on preclinical therapeutics development. So we're trying to take something from idea to a molecule that's ready to be put into humans. We're focused on that phase, we want to do that through virtual biotech venture creation, and we want to do it every eighteen months.
This is solving for constraints that we had. I didn't ever want to participate in a hypergrowth startup again. It's a very alienating experience, and I didn’t feel like I liked myself when I was doing the things necessary to succeed. You very explicitly can't build strong interpersonal relationships with the people that you work with, because as the founder and major shareholder you have to make decisions that might not be in the best interest of that person as an individual. You have to make the decisions that are in the best interests of the company. You're hiring so aggressively. When I hire someone, I genuinely get to know them and I try and build a relationship with them. And I only try and make an offer if I genuinely believe that the job I have is the best job for this person. If you're doing that at a pace that leads to thousands of employees over the course of a year or two, then at some point you're unable to maintain those relationships, and you feel like you're letting those people down. You’re being inauthentic. Because I genuinely did care about them as I was meeting them, but then I had to move on to the next one. I didn't have enough time to devote to these people that I genuinely did care about, because there was just too much.
So with Related Sciences, hopefully there's no need to grow beyond a dozen, two dozen people in the company. One of my other co-founders, his concern was more around pace. Our belief is science moves at the speed of science, at the speed of reality. You can only discover new things carefully and thoughtfully. So we wanted to create a pace that was fast enough to generate an economic return, but slow enough that we could allow for science to happen. And life to happen: one of my co-founders had a kid recently and was able to step away for two, three months. One of my direct reports has been away for two months as well, starting a family. We wanted to have a business model that could allow for life to happen for us. By being virtual, we're not carrying these huge fixed costs where we have this runway that's gonna run out in eighteen months. We just have to hit our milestones. We have the ability to basically hibernate. If we don't have a good idea in front of us, or the science isn't moving at the speed that we want, it’s not a big deal. We take a few months. That's some of the constraints we put upon ourselves. We're very early, so we'll see if it ends up being successful.
YANCEY: I like this thought of being able to hibernate as a metric for the health or success of a business. So the Mount Sinai research was doing data-driven immunotherapy. Is this similar? You're throwing massive amounts of data at a problem and seeing what pops out the other side? I only vaguely know what the entire supply chain of a drug is. Where in that chain are you?
JEFF: We're very early. We're upstream from a lot of things in the chain. There are two primary ways that a new idea for a drug emerges. The first is called a phenotypic screen. This is where you take some model system that's representative of a disease that you'd like to cure, whether it's a cellular model, or an animal model, or even some humans and data you've generated from intervening in humans. You take some model, and then you take a large number of potential interventions — those might be drugs, they might be antibodies. I like to use the term intervention to be broad. You take some library of interventions and you throw it at the model, and then you measure what happens to the model when you hit it with those interventions. You see if any of those interventions look promising in terms of taking the model from a disease state into a healthy state. That’s phenotype-driven drug discovery, and it's how most drugs were discovered in the beginning of the biopharma market. The other approach, which became possible with new technologies, is target-driven drug discovery, where you try to figure out, “Is there a biological entity (which most often is a protein) that I can perturb in some way to create the disease state from a healthy state, or to reverse the disease state into a healthy state?” That becomes the focus of the drug development effort. So you say, “Okay, this protein is critical to this disease process. So if I could make a drug that did something to that protein, then I would be successful.”
Related Sciences takes more of a target-driven approach than a phenotype-driven approach. Both are equally valid. I have some reasons for preferring the target-driven approach. I think it can be a bit more efficient in terms of use of capital, which is one of the primary challenges of drug discovery right now. It's getting more and more expensive to make new drugs, and we as an industry need to think about how to make them more efficiently.
The datasets that we're working with are primarily open datasets. There are some datasets that we pay for; in particular, there are firms like Clarivate or Informa who have people observing the biopharmaceutical industry and recording data about it. Every time a clinical trial gets run, or a merger or acquisition gets executed, they stick that into a database. They've done a good job of curating that data, so we pay them to access it. But also there's a tremendous amount of data that gets made available in an open setting. And in particular, our perspective at Related Sciences is that the best evidence that a particular target is related to a particular disease is going to be discovered in humans. So we look for human evidence. And in particular, we like human genetic evidence. When we're born, we possess an array of mutations in our DNA that was effectively assigned to us randomly. So through the process of recombination, you've been given a set of mutations, some from your mother, some from your father. Hopefully the selection of those mutations was not biased; there’s presumably a lot of randomness in that. So we can simulate the conditions of a randomized controlled trial within humans, where the intervention in this case is the set of mutations in your germline DNA. Then we can observe the impact of those mutations on health outcomes and other aspects of your life.
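[Note: The analytic core of this “nature’s randomized trial” idea (often called Mendelian randomization) can be sketched in a few lines. The data, effect sizes, and variable names below are all invented, and a real biobank analysis would involve many more covariates and corrections.]

```python
# Toy sketch of using quasi-randomly assigned germline variants as a
# natural experiment: regress disease status on allele dosage and read
# off the dosage coefficient as genetic evidence for a target.
# Synthetic data only; not Related Sciences' actual pipeline.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
dosage = rng.binomial(2, 0.3, n)          # 0/1/2 copies of a risk allele
age = rng.normal(55, 8, n)                # a typical covariate
log_odds = -3 + 0.4 * dosage + 0.02 * (age - 55)
disease = rng.binomial(1, 1 / (1 + np.exp(-log_odds)))

X = sm.add_constant(np.column_stack([dosage, age]))
fit = sm.Logit(disease, X).fit(disp=0)
# fit.params[1] estimates the per-allele log odds ratio (~0.4 here);
# a credible nonzero estimate is the kind of human genetic evidence
# that supports picking the underlying protein as a drug target.
print(fit.params[1], fit.conf_int()[1])
```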
There's a really fundamental project that has been going on for two decades now called the UK Biobank. And the UK Biobank was remarkably prescient, in that well over a decade ago, they said, “Let's take 500,000 people who live in the UK, and we want them to be above a certain age, because above a certain age health events happen at a greater rate, so that's going to make the dataset more interesting. And let's gather as much data as we can about them. Since we have a single-payer, single-provider system, we can get all their medical records very easily. We can also perform additional measurements if we bring them into facilities over time. And then let's structure that data and make it available to researchers under a data use agreement that ensures they're not going to do anything reprehensible with the data.” And most critically, the UK Biobank several years ago said, “Let's do genome sequencing on all 500,000 of these people.” So there's this remarkable data set accessible for researchers, which has 500,000 people who have been what we would call deeply phenotyped. A phenotype is effectively some aspect of your being, like what diseases you have, or how tall you are, or your eye color. Or it can even be a molecular phenotype, like your white blood cell count, something like that. So the UK Biobank has a tremendous amount of deep phenotype data together with genotype data. And this data set is the ideal data set, we think, for discovering evidence that a particular target is important to treat a particular disease. And it turns out that the UK Biobank isn't the only biobank of this scale that's being built. There was a recent paper from Calico, which is a spinout of Alphabet focused on life sciences, where they estimated that within the next three years there are going to be over 35 million people who have their data deposited into a biobank like the UK Biobank, accessible to researchers. That is the primary source of data, coupled with information about how pharma companies behave contained in datasets like Informa's and Clarivate's, that we are using to identify compelling targets and the relevant diseases for those targets to initiate preclinical drug discovery against.
YANCEY: That’s amazing that there's a resource like that. We need some digital equivalent as a way to break some data monopolies. The idea that everyone can work off of such a great data source is amazing.
I watched you on Charlie Rose in 2013. There's something you said that really struck me. You said, “We're just now entering a period where for the first time in human history, the vast majority of our actions will be digitized. So let's at least give ourselves the tools and the option to perform numerical thinking alongside our very well-developed mechanism for narrative thinking.”
A year later you did a talk at Berkeley where it's a similar train of thought. You said, “It makes me uneasy when I think about how much corporations are measuring about human interactions today. Not making considered decisions about what gets measured, but deciding what gets measured is really a political and ethical choice.”
Do those things still feel true? Less true, more true?
JEFF: Absolutely. The first quote — I'm impressed at how articulate I was then; it’s sad how old and slow I am now. I like the framing that Past Jeff used there. Because there are things you see when you use models to interpret digital representations of reality that you don't see when you're equipped with the senses we're all equipped with and you apply the reasoning that your brain comes up with. A lot of those things that you might see could have tremendous value. But obviously they could also be used for nefarious purposes.
The second quote is where my thinking has gone mostly in the last several years. A good friend of mine who lives out in LA was having a kid a few years ago, and we were sitting on the rooftop of a hotel after a dinner that I think you were at, and we were talking about what it's like to be a father. I had just had a kid; he was about to have a kid. He said, “I'm getting a lot of anxiety about moral reasoning and equipping my son with a moral philosophy to engage the world with.” I think about it literally every day. It's something my mom used to always say to me. I know a lot of people in the environments that I've participated in over the last few decades who grew up in households where a lot of pressure was put upon them to achieve successful outcomes for some conventional notion of success. My parents were almost the opposite, where the more successful I was, the less impressed they were. They would always say, “We just want you to be a good person.” The genius of that has only compounded for me as a father. So I think of those two things: my friend telling me about the urgency of finding a moral philosophy to equip his son with, something to face the world with, and my mom just saying over and over to me, “I just want you to be a good person.” What does it mean to be a good person? And how can I teach someone to be a good person, or at least how can I give them the tools? During my undergraduate degree at Harvard we had a core curriculum, and one of the requirements of the core curriculum was to do a moral reasoning course. The things that I read in that course I've reflected upon more since attending Harvard than almost anything else I learned as an undergraduate. A lot of my reading recently has been on tools from moral and political philosophy that we might use as a society to arrive at the right framework for thinking about the morality of surveillance.
It's very clear we need to reassess. The problem statement is inarguable. But then people are moving past that problem statement and starting to get towards the constructivist notions that we can use. So things like social contract theory, and the various contemporary manifestations of that: John Rawls, famously; David Gauthier’s Morals by Agreement; Scanlon’s What We Owe to Each Other; and [Michael Walzer’s] Spheres of Justice, which you cite in your book, is another really interesting one. The evolution of social contract theory in the last twenty years has been around recognizing the multiplicity of spheres in which we need to run these contracting or bargaining simulations. We as a society need to engage in the first-order social contract bargaining required to understand, “Okay, now that we know the tendencies of corporations in surveillance in the contemporary setting, where do we want to get to? What do distributive justice and intergenerational justice look like in a world that looks like the one that we have, and not the one in which these theories were formulated?” I don't know what answer is going to come out of that process. So for me, it's more about, “What’s the right process for us all to engage in? Where's that conversation happening? And I should probably be in there, because I have some thoughts on this.” And honestly, your Dark Forest Theory of the Internet — I’m still marinating on it, I’m still trying to stay in the dark forest, because I'm pretty terrified of it. I’ve been observing you and thinking, “Okay, maybe I could do that someday, and start poking my head out again and talking to people.” But to be frank, given where things have gone, I don't use any social media today, because it's such a noxious context for thinking and for discourse that I don't really understand why people do it, besides putting a press release out.
YANCEY: I was looking and I didn't see any interview or public statement from you in many years. Maybe I'm not up on your Bebo or whatever. I want to go back to something you said that I didn't quite understand. When you are seeing people through data, you are seeing something different than someone who's not seeing through data. What were you thinking about when you said that?
JEFF: So one example that jumps to mind is there's some very cool work out of MIT, and unfortunately the researcher’s name escapes me at the moment, but they work on something called affective computing, where they can use computer vision to detect emotional states that might not be apparent to people who have a hard time detecting emotional states using the sensory and reasoning apparatus they've been equipped with from birth. Often these are autistic people or other people who have deficits in perceiving emotion. That's one thing you can imagine an algorithm seeing that could bring someone up to the same perception level that the rest of us have. Another simple example could be: I could take a recording of this talk, and I'm sure I would recognize verbal tics or things that I'm doing in how I speak that I wish I weren't. They could be arrived at just by watching it, but I could also run algorithms over it that could detect things that I didn't personally detect just by watching it. It might say, “You’re using words of Germanic origin in situations where, had you used one from a Romance language, you would have been received more softly.” There are things like that where allowing algorithms to operate upon a digital representation of reality can give us a deepened awareness of what reality is. That is genuinely exciting for me, and one of the reasons why I enjoy working in data. But then obviously there are things that you can do with those insights — things like predictive policing, which I don't like and which should not be done. Does that give you a better, more concrete sense of what I was thinking about?
YANCEY: Yeah, totally. There is some inevitability that our actions would become digitized and more intricately measured at this moment, just because the tools are available for that to happen. If you think about the organizations you've been a part of, or even just the field, why do we choose to measure something? Who is it that decides whether something is going to get measured? That decision of what gets measured and what doesn't get measured is an invisible choice. Is that something that you think about or have reflections on?
JEFF: I think there are two pieces. There's what we choose to measure, but there's also what we can measure. Those two things determine structures of society to a much greater extent than we recognize. There's a book I recommended to our mutual friend Ian Hogarth. I remember after I read this book — it was an amazing book; the title is slipping my mind right now, I'll look it up later. [Note: The book is "The Institutional Revolution: Measurement and the Economic Emergence of the Modern World" by Douglas Allen!] But one of the examples that I specifically recall was the British Navy. As we gained the ability to measure and communicate — so I guess that's the other piece, what can we measure and what can we then communicate, though communication infrastructure at this point is so ubiquitous that it's not as critical — the book basically showed how the hierarchical structure of the British Navy was reorganized entirely based upon the ability to measure and communicate the state of the various naval battles or training that was occurring. There's another book, Seeing Like a State, that's very popular amongst people. This led me down the road of thinking about, “What are the appropriate government and corporate organizational designs that fall out of these novel measurement capabilities?” And then to your question about what we choose to measure — the way that I said it in that talk is the way that I still think about it today: it's a moral, ethical choice. We need thought experiments related to measurement in the same way that we have thought experiments related to the trolley problem.
I think a lot about how much we measure individual contributors relative to executives. A lot of the dysfunction of contemporary society [is] related to the lack of consequences for people above a certain level of the capital-versus-labor divide. So I think about what would happen if you did performance management for executives. What if we treated them as coin-operated in the same way that they treat their individual contributors as coin-operated, and would they flourish in those environments? So I actually find myself arguing against measurement in many situations in which customers of mine at Cloudera, or companies that I invest in now, are pursuing a strategy that involves measurement, and where I can see the potential for improved business outcomes. But I also see the potential for reduced societal outcomes. To put it in the language of the Bento: would Future Us want that measurement act to be performed? I'm very uncomfortable; I think right now we're erring on the side of whatever makes us money — in your language, financial maximization is really the ethical principle that's being applied to whether or not we should measure something. We need to move to a different ethical principle.
YANCEY: You got right to what I think is so important: people in power decide what gets measured, and they measure things that are ultimately in their interest. And so you have things like workplace monitoring for lower-level employees but not for higher-level employees, even though higher-level employees actually might be more prone to commit greater crimes and have a greater impact on the business. What that reveals is the way that measurement is a form of control. It feels like Facebook data's a little bit similar, where we as the consumers are more helpless, we are less empowered, we are disempowered by this data that someone else has about us.
JEFF: When we think about freedom, freedom from domination is a term that I see a lot in moral philosophy. Freedom from observation. There was a conversation I had a long time ago where someone said something that changed the way I thought of cities forever. They said the city invented anonymity. I think about growing up in the Midwest in a smaller town. Even though I wasn't fully surveilled, I did feel coupled to an identity that had been constructed by others and given to me. Moving to New York City, that identity was no longer coupled to me. But then we reinvented small-town life — and this is nothing new; Marshall McLuhan was giving lectures on this fifty, seventy years ago. But it really struck me that we basically uninvented anonymity. Perhaps there's a role for it in contemporary society. I don't think we're going to figure it out on the timeline of our lives, though. I think we're going to live in an era where anonymity is not a choice that we can make. We need to figure out: is that something that matters? How do we construct a society in which that is a choice? We've become this global village. How can we recapture a metropolis? How we can reconstruct the metropolis in the digital realm is a really fascinating challenge.
YANCEY: It's so telling of our age that as you say that, you raise these great moral questions and thinking about future anonymity, and my solutionist mind is like, is this a blockchain problem? Is this what blockchain is meant to solve? That's just how we think now, I guess.
There are two related questions that are really big, and I just keep thinking about them, and I'd love to know what you think. How can we quantify the certainty of knowledge? How certain can we or should we be in the things that we learn from data? There's this information veracity and certainty question on a lot of levels, and I'm curious how you think about it.
JEFF: Now you're driving straight into the applied epistemology framing. Ultimately, statistics as practiced in academia is a subset of mathematics, but statistics as practiced in industry is a subset of epistemology. That was the split that caused the creation of the term data science in my brain. What can be known, and what tools we can use to know it, is at the foundation of everything.
It's such a big question it's hard to know where to even begin. I spend a lot of my time as a data scientist basically saying to executives, “No, we can't know that.” The general perception, possibly because of the marketing arms of companies like the one I created and others similar to it, is that we can be more certain about the world with data than we actually can.
I really like a book by Duncan Watts called Everything is Obvious. I love this book because the motivating notion of the book is that you can take a study that's been done in a social science and you can tell people about the study, and then tell them the result was one way, and then take another group, tell them the study, and tell them the result was the exact opposite way. Both groups will say, “Well, that was exactly what I expected to occur.” What can be known is often influenced by how we convince ourselves that we know something. Another way to say this is I used to tell people in my group that it's not important that we are helpful, it's important that we are perceived to be helpful. Because most people when they encounter the data science group expect to be helped in a certain way. And ultimately the way that we would help them was actually a disabusing of the notion that they could be helped, but giving them the sensation that they had been helped.
So I have pretty conservative expectations about what we can know. And in fact, even in my personal life, in situations where people close to me have had medical issues where they've reached the limits of what is known, and they say, “Let's not give up, let's keep pushing, we have to do everything we can to make sure this doesn't happen,” I have to say, “We're kind of at the boundary of what can be known. These are the tools that we have, that I'm aware of, to create knowledge about biology. These are the experimental designs that we could construct to create some knowledge. And ultimately, I don't think we can actually know an answer to the question you’re asking.”
And so I think I'm very conservative in what I believe can be known about the world. I would also say that the people who are thinking hardest about this are in the social sciences, and they aren't given the resources they need or the respect they deserve. There’s this continued sense — we’re still sort of shedding it from the 20th Century, honestly from the first half of the 20th Century — that physics is “real science,” chemistry is sort of real science, biology is kind of fake, and social science is not a science. Whereas on some level, what else is there but social science for humans, when we want to start talking about moral philosophy and things? There's a tremendous amount of good work being done in the realm of quantitative social science, and more resources should go into it. The tools they're developing around things like causal inference in observational settings are growing what can be known faster than other fields of inquiry.
YANCEY: Bringing that numerical lens to human interactions themselves, not just to proxies of interactions, does that make things more understandable, maybe? Trying to bring math and statistics into social science.
JEFF: Math and statistics are tools that are part of some evidence creation process. I'd say it’s more about bringing the scientific method into it.
We all passed through four traumatic years — and for many, the whole history of the United States has been traumatic, so in saying just four traumatic years I don't want to undersell what many have experienced throughout that history — but certainly these four years were traumatic in their own way. Like many of us, I was reading social scientists who passed through the previous emergence of fascist regimes internationally: people like Adorno, and — what's the Frankfurt School? The other folks from there?
YANCEY: Benjamin, Gramsci.
JEFF: Exactly. So reading a lot of those books, and Erich Fromm's Escape from Freedom. I felt like that title was resonating through my brain for all four years. I find what they were doing interesting, but ultimately it's not science. What I mean is that for social scientists, it's not just about mathematics and statistics. One of my formative experiences of reading the economics literature: while I was still an undergrad, I was a course assistant for a mathematics course on convex optimization, through which many economics PhDs pass. When I would read the economics literature, to the eyes of someone who approaches it as a mathematician, it was almost like they were stunting. They were bringing in unnecessary mathematical machinery to basically show off. It was difficult for me to consume that literature, because to me the mathematics was quite straightforward, and ultimately the content of the paper was very limited, because most of what they wanted to say was obfuscated in the math. So I was actually almost uncomfortable when people brought too much mathematics and statistics into it. But I do want them to bring in the basics of experimental design, reproducibility, and some more epistemological considerations than just the technology of mathematics and statistics.
YANCEY: So I have three last questions. One is something you made a reference to earlier. How do the ethics of corporations influence the ethics of individuals?
JEFF: [Laughs] In a way that is not good, I would argue. Particularly large multinational corporations. Moral Mazes is a touchstone book for me and for a lot of people, I think. It's this almost difficult-to-read ethnography of middle management in late-20th-century corporate America. It spoiled for me the desire to ever create a large corporation again. And I saw it happening to people around me.
I think about who the people were that I befriended in high school, in college, and early after college. What did I find interesting about them? What drew me to them? What were their aspects of charisma? What were our shared values about the world, and how did we arrive at those values? In the period of 2008 to 2012, I had started Cloudera and I was interacting with a lot of people I'd known for a long time, but we were now in this new setting of, effectively, corporate America. Everybody got so boring and so similar. It was so frustrating. Then there was all this stuff around "bring your authentic self to work," and I just kept thinking about — who is it, Baudrillard [more likely Roland Barthes, in Mythologies], who wrote about wrestling and politics and the similarities? I was like, "Corporate America is just wrestling." We're all mimicking — I'm giving my interview about how much I'm gonna beat that guy up, but actually I have my own conceptual states. We're all piloting these identities; we've all crafted these wrestling characters, and we're in the ring wrestling with each other. The whole time I'm sitting there thinking, "Wow, this is just so much bullshit. What is happening?" I watched so many people that I thought were interesting and had novel perspectives on the world adopt the perspective of their employer for whatever reason: whether they were resolving cognitive dissonance, or whether that viewpoint was latent in them all along and, once they were placed in an environment that handed them arguments for it, it emerged as what they truly believed. This is difficult, because I interact with people who exist in corporate America quite regularly. It's kind of like saying "people who think about making people click ads, that sucks," and all of a sudden anyone who works in advertising feels like I don't respect them as a human. So I want to be careful about how I say this: there are many reasons why participating in corporate America makes sense for many people. But for me personally, the changes that I observed in people I loved and cared about really made me reflect on how those changes came about, and how much the environment of corporate America caused them.
One of my favorite SNL sketches of the recent era, I don't know if you saw this one, I think it was when Kristen Stewart hosted. They did a Sum 41-style pop-rock song where they're in the office, they're all interns, and they hate their boss. They're doing this "I'll never grow up" pop-rock song. Then the boss says something nice about the work they did, and the song slowly evolves into how cool their boss is and how good of a job they want to do on their presentation. That perfectly encapsulated my experience. I'm fortunate that the goals I have in life didn't require me to persist in that environment much longer, so I could maintain this viewpoint. I know it's a very privileged viewpoint to hold. Most of us have to engage with corporate America.
I don't know if you watched the Sarah Lacy interview I did a long time ago, but you did a pretty comprehensive search of the archives.
YANCEY: I did see the Sarah Lacy one. I did not watch it, though.
JEFF: She asked me, "What's one thing that you believe that's different from most of the people in Silicon Valley?" I just said, "I think much less highly of capitalism than most people in Silicon Valley." That's a big part of it. I noticed that one of your favorite philosophers, Elizabeth Anderson, has a recent book that I'm really looking forward to reading. It's basically about why we have chosen to allow corporations such broad authority over our personal lives. I suspect she's articulated this better than I ever could. But ultimately, I wish that people could retain more individuality in outlook and still participate in the modern economy — still achieve gains from it while retaining some sense of individuality. A lot of it has to do with how much we couple to employment in the US, like health insurance, and the precariousness we've placed so many people into, so that people feel like, "I have to play this game, or else." There are a lot of really negative consequences that follow.
YANCEY: Yeah, there's a lot at stake. I like how you make someone joining corporate America almost sound like someone becoming religious: can I make the same jokes around them anymore? This is now a more devout person than I am; I may need to check myself. For me, being a CEO, being an executive, thinking about company culture — because really, we're kind of talking about culture, culture as in: how do we define our tribe? What do we want? What's important to us? We want people to see things in a similar way. Which ties to what you said earlier, that you want everyone to be going in the same direction. But here I think you're highlighting the real cost of that, which is, "Maybe you're a great team player nine to six, and your teammates are like, 'Jenny has all the flair.' But outside of work, the rest of her friends are like, 'What happened to Jenny? Jenny became a zombie.'" Maybe that's a little bit of what happens to people.
JEFF: There's somebody I really liked that I worked with at Facebook in the early days, and I had a conversation with him ten years after leaving Facebook. He used a phrase that I'd never heard anyone use while I was at Facebook. He said something like, "Oh, yeah, that person never bled blue." I was like, "What? Were people saying that?" It almost perfectly captured why I never felt a part of that. That's my notion of corporate culture: if you in any way identify with the group of people that is your employer more than you identify with whatever's happening outside of that sphere, then we've failed as a corporate culture. So I want to have a minimum viable corporate culture, I guess, to use an Eric Ries term. Because to maintain the synchrony of motion required for thousands or tens of thousands of people to accomplish tasks in unison, you basically have to be a cult. It would be very, very difficult to distinguish someone's feelings about a cult leader from someone's feelings about their CEO. I could never participate in that.
YANCEY: Or maybe doing that would take a climate-level mission, or maybe you participate in your own way. But I think this is where, to some degree, we do need people to give up some of their individualism to be part of larger changes. There are probably parts we all have to give up, and parts our systems need to respect and honor about us. Maybe that problem isn't solved while we're around, but maybe for future generations it is.
There's a question you asked in one of the talks that I loved, and it's a question I've been thinking a lot about, which is: how do corporations die? You're talking about all these zombie companies in the world, and I've been thinking about this exact same question. What it has made me think about — I wonder what you think about this — is whether companies should have wills. Could a company declare from the very beginning, “We are dead if this happens. If this happens, here's the Do Not Resuscitate order. If employees ever say we're bleeding blue, please shut us down at that moment.” Some notion of truly going astray, and whether that could be hard-coded. I'd love to hear you talk about companies dying and companies maybe having wills, or something like that.
JEFF: The first thing I thought of when you said that is that my mom always says to me, "If I get a perm, shoot me." [laughs] Now whenever I bring up companies dying, I'm gonna use this. The Do Not Resuscitate order is like "if there's a perm on this head."
Around this time last year, I was basically working full-time to help create something called the COVID Tracking Project, which became an authoritative source of state-level data on tests, cases, hospitalizations, and deaths from COVID-19 in the United States. That organization just shut down. It's a fantastic example of a mission-driven organization, maybe even one with some aspects of a cult-like character, that I was happy to participate in. I'm glad this came up. One of the things they're doing now that it's shut down is an oral history project, where a reporter is interviewing all the people who were involved, just to have a time capsule of what happened. And I brought this up, actually. My esteem for the leaders of the COVID Tracking Project grew immensely when they made the difficult decision not to capitalize on its achievement. I mean, this thing had 500,000-plus Twitter followers, and these people are journalists; they work in an industry that's had a very difficult time rewarding its highest achievers. This absolutely could have been a vehicle for self-aggrandizement for the leaders, and they made an incredibly humble choice to shut it down because it made sense. If you paid attention to the motivating mission of the organization, this was clearly the right call.
Corporate America has asserted that corporations should be considered people, treated as people, literally, by the law. I'm no expert on that; there are other people who can provide a lot more color. But you know what? People die. And most of what makes us human is working backwards from that. Why can't corporations be under the same restrictions? I don't love capitalism, but I don't think there's an alternative ideology. I think the arena in which the market economy makes sense is much smaller than the vast majority of Americans think, but it still makes sense for a subset of economic transactions. And even for that subset, capitalism isn't one thing. The distance between the platonic ideal of capitalism as it exists in the ideaspace and capitalism as it exists in meatspace — there are many, many different configurations. A configuration of capitalism that imposes the organic requirement that any entity must die — obviously, how you implement that is left to the reader, and it's highly complex — is the kind I want to see, because I want corporations to be humane actors. I feel the same way about government, by the way. I think we need to flip how government is constructed in the US. The last four years, I spent a lot of time daydreaming about moving to Switzerland. One of the fascinating things about Switzerland is that when you look at tax revenue and how it's allocated at the city, canton, and country level, it's flipped from the US. When you go to get your Swiss citizenship, the entity that grants it is the city or canton rather than the country.
So corporations should die, I believe. But corporations also should not be able to become stunningly obese. They should not be able to be so large; that takes them outside the scope of organic existence as well. An acquaintance of mine from college, Jesse Andrews, writes young adult fiction, and he's incredibly clever. One of his books has the premise that you are physically as large as you are wealthy. It's just a way to make it manifest and give people a visceral sense of how absurd it is. One of the philosophers I wanted to throw your way, in case she wasn't in your reading stack, is Ingrid Robeyns. I don't know if you've read her; she's proposing limitarianism as a philosophy. I thought of her work when you were having the Doughnut Economics discussion. She's basically proposing Doughnut wealthonomics: we have a poverty line, which is the inner part of the Doughnut, but we also have what she calls a riches line. People would say, "Oh, progressive taxation is roughly that; look at France, all the wealthy people moved away." Sure. I'm sure there are implementation challenges. But I feel like I affiliate with limitarianism in many ways. So I think corporations should die, and I think there should be consequences for becoming obese, in the same way that it's difficult to be healthy as an organic being if you consume, consume, consume. We need to figure out ways toward a more humane capitalism and, frankly, a more humane political economy. We should be flipping things: local governments should have the vast majority of say, and responsibilities should get thinner and thinner as you go up the stack. I'd like to see the same thing with both government and corporations.
YANCEY: Excellent. Corporations as organic material, I like it. My very last question: for someone who's interested in data science, someone who's interested in applied research, if you were starting at this point in time, how would you do it? How would you think about entering, making a mark, or doing work to be proud of?
JEFF: First of all, I worked on Wall Street for a year out of college and I learned a lot. They do really hard math on Wall Street. I had learned how to write software as an eight-year-old, as a hobby, but on Wall Street I learned how to write software professionally and how to do some really hard math. And honestly, just how to show up at work and create value for other people trying to solve business problems. At Facebook I learned how to hire and manage a team, and how to build out the infrastructure for storing and analyzing petabytes of data. All of those skills were incredibly important for being able to do anything I could be proud of. So I guess I'm proud of the learning that happened, and I'm proud of the interpersonal commitments that I upheld in those work environments. But I can't say that I'm proud of the impact on the world of the work I did at either of those places. So I guess I would say: we all have to optimize for skill-building early on, and for preserving optionality. I often tell people to find a place where you can see what the win state looks like, a place where they're doing things right. Go sit inside of that and pay attention. If you're coming just out of college, thinking that you can invent the win state from whole cloth is a bit naive. Every once in a while it happens, but you're probably better served by trying to observe the win state inside of somewhere that's doing good work.
In terms of finding one that can align with your personal values — that's a lot harder. I personally think that the skills we build come from the people we solve problems with and the quality of the problems we're solving. I would place the most emphasis on that early in a career, recognizing that until you have economic freedom, you shouldn't put too much pressure on yourself. This is difficult, because ethically it's like, "Okay, even for one minute contributing to something that you don't fully believe in…" I don't know; things evolve. Albert Wenger mentioned to you that when they invested in Twitter, the position was, "That's silly." Now it's too weighty for its own good. So ethics change. I personally recognize constraints that cause people to make decisions I might not agree with ethically, and I don't vilify them for that. I am an incredibly flawed human myself. We're all collaboratively trying to get better, and we're all trying to solve for constraints.
So I'd say early on, find a hard problem to solve with smart people, and have fun doing it. Then as you build more autonomy for yourself, that can allow you to then find your way into things that you think may be more aligned with how you feel. I would also say honestly, I don't know that I had a fully-formed ethics coming out of college. There's obviously a lot of moral controversy happening in college today, and if I try and place myself in that environment, I don't know how I would have perceived it then relative to how I perceive it now. So I also don't want to put so much pressure on people to be fully-formed ethically at the age of nineteen. Focusing on building skills, keeping the option value open, is the advice that I give to people. Then figure out who you are and what you value, and then bend your career arc towards that over time.
YANCEY: Jeff, this was great. We should be friends or something. I love what you had to share. I think people are gonna get a lot out of this. So thanks for being so generous with your experiences. I really appreciate it.
JEFF: Well, I appreciate you trying to make space for people to work through these questions in an authentic way. I think a lot about them, and I talk a lot about them with my friends, but I don't feel like there's really space on the public Internet to evolve these thoughts today. Like you said, I haven't really talked to anybody on the record for about a decade. [laughs]
YANCEY: If only people could see what you look like right now. It's insane.
JEFF: This is me trying to learn from you and experience this a bit more. So thanks for making the space.