
Episode notes and references:

Pareto Principle

The Quantified Self movement

Moore’s Law

Singularity

Palantir Technologies

Transhumanism

Robert Solow

Zygmunt Bauman, Mortality, Immortality and Other Life Strategies, Stanford University Press, 1992

Episode 9 with guest Christian Nagler

Max Haiven: Hello and welcome to “The Exploits of Play.” This is a podcast about the strange and unexpected roles of games and play in our stage of capitalism that just keeps getting weirder. My name is Max Haiven and I am the Canada Research Chair in the Radical Imagination.

Halle Frost: And I’m Halle Frost from the platform Weird Economies. We’re presenting this podcast. Today we’ll be talking to Christian Nagler, who is a writer and artist. His work looks at and performs the implications of embodiment and global economics, both in his everyday life and in projects like Market Fitness and Yoga for Adjuncts. He researches critical ethnography, political theory, and media and cultural studies at UC Berkeley. Max, how would you introduce Christian?

Max Haiven: Christian is the sort of person that I turn to when I’m trying to figure out things I don’t understand at the intersection of the neoliberal economy, financialization, and technology, which these days means pretty much everything, because one of those terms, and probably all of them, is going to apply to everything we talk about, and they have certainly applied to almost everything we’ve talked about so far on this podcast.


Max Haiven: When we were planning the podcast, I was initially conceiving this episode as thinking about the way that the mega rich and particularly the kind of Silicon Valley playboy rich are trying to cheat death. You know, this might include the kind of stuff we’ve seen in cartoons and satires about cryogenic freezing, but it also includes the so-called longevity community, which is a very broad international network of people interested in either extending their life or at the extreme end of the spectrum, avoiding death completely. And it also includes a number of people within the kind of Silicon Valley orbit and beyond that are interested in the arrival of this thing they call the singularity, which Christian is going to speak about a bit more.

And of course, it’s made a lot of news recently, and we’ll get to that in a minute. Just before we go into the interview with Christian, I want to say that one of the things I really value about talking to Christian is that he goes really, really deep. And it’s based on having done years and years of ethnographic fieldwork in the venture capitalist communities of Silicon Valley.

So he goes to the kinds of meetings, events, and conferences that try to bring together, on the one hand, researchers and technologists who are trying to build new things in Northern California and, on the other, the people who have the money. And the people who have the money are usually people who have already become very rich through tech, or they’re people from the proverbial Wall Streets of the world who are brokering money for very rich people or investment banks. So he’s really got his finger on the pulse of how moonshot ideas and weird ideas about how technology can transform our world get made real by the power of capitalist money. So really a great topic for Weird Economies and “Exploits of Play” if there ever was one. He’s going to talk about a number of different things, and I just want to briefly define them for audience members who might not have heard of them.

He’s going to talk a bit about the quantified self movement. This is a kind of grassroots movement of people who use all sorts of sensors and apps and tests in order to continuously measure and chart their body and their mind. So at a very basic level, you can think about people using their Fitbit or other wearables that track their heart rate or other bio-indicators, and then using this data to try and improve their health. Now, that’s something that lots of people do.

And it’s also been something that there’s been a lot of struggle around. You know, lots of trade unions have been opposing the way that private insurers or employers are asking workers to wear these devices as a means to surveil them or to otherwise control their bodies. But many people do this as either a hobby or just as part of trying to live a healthy lifestyle.

The quantified self community more broadly goes up to some very big extremes. I mean, there are huge influencers online who have put together whole packages of how to use blood tests and other forms of tests along with this data in order to, again, live longer, live better, have a more fit and active life, be more productive, you name it. And as Christian points out, this is a community that has a fairly nefarious element to it. I mean, it’s kind of a neoliberal culture of measuring and improving the self. But it’s also one with a huge reach: millions of people are in various ways entangled with this community. Christian is also going to speak about Moore’s law, which he defines in the interview. But just to remind you, it’s the idea that, all else being equal and based on past trends, the capacity of semiconductors is going to basically double over a certain span of time. So, to cut a very long story short, computing power is going to increase exponentially.
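To make that doubling concrete, here is a minimal back-of-the-envelope sketch in Python. The numbers are illustrative, not from the episode; the 18-month doubling time is the popular version of the law, and 24 months is also commonly cited.

```python
# Illustrative sketch: Moore's law as an exponential doubling process.
DOUBLING_MONTHS = 18  # popular version of the law; 24 months is also cited

def capacity_multiplier(years: float) -> float:
    """How many times over capacity has grown after `years` years."""
    return 2 ** (years * 12 / DOUBLING_MONTHS)

for years in (1.5, 3, 15, 30):
    print(f"After {years:>4} years: {capacity_multiplier(years):,.0f}x")
# After 30 years at an 18-month doubling time: 2**20, about a million-fold.
```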

And this becomes very, very important. It might seem to you like a weird, nerdy thing, but as Christian points out in this interview, Moore’s law, this idea that computers will get more and more powerful, that a chip of the same size will become more and more capacious over the years, has become like an article of faith for many people within Silicon Valley and the communities that fund it. And why it’s so important is that it powers a certain idea of what they call the singularity. Now, anyone who tries to define the singularity, I think, is inevitably going to get it wrong.

That’s because it’s become an almost mystical concept. I’m going to leave it to Christian to demonstrate some of the pretty psychedelic and weird stuff people think about it. But basically, the idea of the singularity is that at a certain point, humans are going to create computers that are so powerful that they begin to eclipse or replace or enfold humanity in a new digital idiom, let’s say, in the sense that humanity might replace itself with some sort of artificially generally intelligent entity that emerges from all this computing power. Human consciousness might merge with computing. At its farthest, most outside reaches, the idea of the singularity is that the universe itself might become conscious, that it might achieve a kind of conscious state because of computing power. And as Christian will go into in some detail, there’s an idea that underlies this: that, in fact, everything in the universe is a kind of computer that computes itself.

And that inevitably, in the future, human consciousness will transcend the physical body and achieve some sort of probably technologically augmented state. And there are optimists about what that’ll mean for the future of humanity, or the thing we would imagine to be humanity. And there are pessimists.

There are people who believe that this goal doesn’t need to be pursued deliberately, that it is naturally going to happen. And there are people, who might call themselves something like techno-accelerationists, who believe that we need to drop every other humanitarian goal and ambition, not care about the fate of us stupid meat sacks, and instead go full speed ahead trying to achieve the singularity, because ultimately that will be better for everyone and everything. The final word about this episode is just to reiterate what we’ve been bringing up in the last couple of episodes: when we talk about games and gamification and play and technology, and, here in this episode, about cheating death and trying to outplay death through our technologies, we are talking about forms of technology that were not inevitable.

There is nothing inevitable about the form of smartphone neuro-hacking that we experience every day. There’s nothing natural about the fact that the discovery of atomic power led to a massive nuclear arms industry that still to this day threatens to destroy the world and everyone in it. There’s nothing inevitable about the fact that the discovery of the dopamine neurotransmitter, as we were talking about in our last episode, should have been placed in the hands of corporations that want to do nothing more than harvest our data and sell us things we don’t need. All of these are examples of how technology and its development have been shaped by capitalist forces. There were other futures possible at every stage of the development of digital technology over the last 40 years. And there are many pathways that lead from the present.

Christian is going to present us with some very scary pathways that he’s learned about, which are being funded and pushed by many of the protagonists in Silicon Valley and, most importantly, by many of the people who really have the money, the venture capitalist community. Those people see the future as capitalism, writ large, getting larger, dominating more areas of life forever. Many of them have a worldview that I think Christian rightly identifies as fascist, or at least fascistic. But there are other possibilities. And in spite of the fact that this interview really focuses a lot on some of these nightmare scenarios, we should also take inspiration, if not hope, from it: we need to turn this around and put our imaginations to very different technological games.


Max Haiven: Christian, welcome to “The Exploits of Play.” I wanted to begin our conversation today talking a little bit about California, because later in the conversation, we’re going to think through some of the really fascinating things you’ve pointed out about how California became a site for the renovation of capitalism in an age that ties together financial power and the tech sector. And the way that California also became a kind of laboratory for certain forms of human disposability and surplusing in order to power the absurd but terrifying dreams of our new tech overlords. But before we get to that, I wanted to ask you about growing up in California. You’ve lived in California for most of your life. Can you tell us a little bit about your experiences of that state, that part of the world? And did something about growing up in that place attract you to doing this research, or shape the kind of research you’re doing on California as a space of capital’s renovations?

Christian Nagler: Well, first of all, I’m really glad to be here, and thank you for having me talk about these things. It’s always good to talk to you, Max. Yeah, I have a lot to say about California. And I like this question about growing up in California. It’s something that I don’t maybe reflect on as much as I could. The first thing that comes to mind about California is that I grew up in a sort of, you know, middle class suburb of Los Angeles. And my dad worked in insurance. And I think that I’ve been increasingly pushed in my thinking towards the bioscientific and the kind of medical side of tech, which is kind of ascendant right now in a new way. And we’ll get to that a little bit later. But what you articulated about California as a space for the production of surplus populations resonates with me; I often think of it as the technical subsumption of the social wage.

Growing up, as a sort of white person in California, this reality wasn’t immediately clear to me; I slowly came to an awareness of it. It became clear to me how California, as a kind of mythologically progressive region, actually relies in an incredibly necropolitical way on its immigrant populations and its carceral population. Its carceral population serves as a political football, but also as an economic driver. This is something that Ruth Wilson Gilmore has done amazing work on, and it’s been very influential on my thinking about California. And I think the way that tech comes into this is complicated.

It’s complicated, but it has to do with the, I think, well-known at this point, early role of a kind of strong entrepreneurial state. During World War II and the immediate post-war years, that state mobilized massive public-private, university, and Department of Defense resources to very quickly bootstrap the ubiquity of computation as the benchmark of innovation, on a scientific level, on an engineering level, and also as a benchmark for a vision of what human civilization looks like. And so this is the advent of California futurism.

The big argument that I’m trying to make in my work is that this emerges directly out of neoliberal political theory, particularly its French arm, as it moves into Silicon Valley and begins to establish the early think tanks and research institutes that eventually become venture capital in the 80s and 90s. This might be reductive in a certain way, but it carries a particular vision of anti-collectivism and anti-welfarism that depends on technical ideologies’ strategic de-legitimation of ideas of the social wage. So it makes the human into a technical object, it makes populations into technical objects, and it tries to reduce the validity of the exercising of voice within social populations. And this is something that I think we just see play out in California. At a certain point it disavows its nationalist origins and becomes this sort of decentralized network thing in the 1990s and the early 2000s.

But then, as we’ve seen, it recently returns to its roost in a kind of nationalist project, which becomes a sort of cybersecurity Cold War that again demands that all public funds flow into technical innovation rather than supporting populations, supporting people, supporting a thriving social life in many ways. So that’s my overview of California as a laboratory for these doings.

Max Haiven: You make a distinction in an interview I was reading with you recently, which I think is really important and was very enlightening for me, and you just alluded to it now: a distinction between different eras of this venture capital, which is the way that most tech innovation, so-called, gets funded, and also the way that a huge amount of money gets made by a very small financial elite and tech elite.

But you make this distinction between an earlier moment, which I think you were just speaking about in terms of the distancing of Silicon Valley, the tech sector, from this militarized nationalism, in the form of Sequoia Capital, the very famous venture capital firm whose investment strategy was sort of: throw a ton of money at everything and see what sticks, see what grows. The classic venture capital strategy that we associate with Silicon Valley. And now, especially in the last few years, perhaps since the financial crisis, the ascendancy of the Thielverse, after Peter Thiel, which, as you were saying, comes back to this militarized nationalism, this sense of a world of borders and walls, where capital needs to be mobilized in the interests of a very dark vision of what the future holds.

Rather than the utopian visions of tech liberating us from every hardship, we now have a vision of tech protecting us from hordes of surplus populations that have been left to die on the other side of a border. I wonder if you could just speak a little bit about that transition and what it signifies for thinking about some of the topics we’re going to talk about a little bit more in a moment, which is death and the organization of death. Many of our listeners probably think about death as something that just happens to you, but actually, I think you’re working in a long tradition of scholars and thinkers who point out that death is socially organized.

Who gets to live and who gets to die at what rates, under what conditions? Spaces and structures of organized death, organized killing, and also organized abandonment.

Christian Nagler: Absolutely. Yeah. First of all, I do see different eras in tech’s social political ideology and its political economy. The networking age was distinct, I think, from the emergence of the Thielverse, as you just referred to it. But I also think that these impulses were there all along, that they were always there. If we’re thinking about gaming and play, you could see it as a shell game that Silicon Valley plays between the decentralized and the nationalized. It’s always that one strategy is being played against the other. This is something that Mariana Mazzucato, who wrote The Entrepreneurial State, refers to as a particularly American political strategy.

The way that she puts it is that the U.S. has always been Jeffersonian in discourse, but Hamiltonian in practice. There is this kind of camouflaged centralized nationalism that lives underneath a democratic and decentralized surface. I think that really describes Silicon Valley in a certain way. But then, it’s instructive, the moments in which the gloves come off. That’s the way that I think we can see the Thielverse. The Thiel thing is very complicated, but I think that you can say that it is, on the one hand, a bare attempt to remuster state resources to basically accomplish the next phase of computational achievement, which is the ability to analyze rich data sets in a cogent, low-resourced, efficient way. This is really what Palantir does. It gets Department of Defense and Department of Homeland Security contracts in order to work on these hard problems of how to take huge multivariate data sets, data sets that are not well organized, basically. Sometimes they call them organic data sets: data drawing from a lot of different sources, a lot of weird social data that’s not necessarily already aggregated. Social media is a very ordered space of data. That’s what the role of social media has been in the surveillance state: to make data sets very orderly. But if you’re drawing from a lot of different surveillance data, video data, audio data, cultural data, basically, then you’re faced with this different problem.

That’s really what Palantir and the Thielverse are going for. It’s always hard to tell with, for instance, Peter Thiel; he’s kind of Elon Musk-like in the way that he plays with public perception and games public perception about things. But there is this very strong rationalized elitism and a kind of eugenic ideology there, and I think a lot of people have commented on this. This goes back to this Silicon Valley neo-reactionary ideology from the late 90s of the sovereign individual, and the idea that a very, very small number of extremely talented, extremely high-IQ individuals represent the future in various ways and need to be safeguarded. They often draw upon an idea from 19th-century economics, the Pareto principle, which you could say is one of the founding principles of what we call evolutionary economics. It comes from Vilfredo Pareto, who is sometimes called a fascist. It was sort of complicated what he was, but his rule of thumb was that 80% of things done in the world get done by 20% of people. And this is a kind of economic principle that gets applied within firms and across firms: 20% of capitalist firms do 80% of the work of allocating and producing goods, etc.

Within a firm, 20% of employees are doing 80% of the work. And economists who are prone to this sort of evolutionary impulse are really almost addicted to seeing the Pareto principle play out everywhere. Peter Thiel often sings the praises of this idea of the Pareto principle. And it’s basically, you know, just a mathematical naturalization of the idea of a surplus population.
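For readers who want the mechanics behind that 80/20 rule of thumb, here is a minimal sketch in Python. It is illustrative only, not from the episode: the pattern falls out of heavy-tailed distributions, and a classical Pareto distribution with shape parameter around 1.16 (that is, log 5 / log 4) reproduces the textbook split.

```python
import numpy as np

# Sample a classical Pareto distribution and measure what share of the
# total is held by the top 20% of draws. A shape of ~1.16 reproduces the
# "top 20% hold ~80%" pattern the Pareto principle describes.
rng = np.random.default_rng(0)
alpha = 1.16
samples = np.sort(rng.pareto(alpha, 1_000_000) + 1)  # +1 shifts Lomax to Pareto I

top_20_share = samples[int(0.8 * len(samples)):].sum() / samples.sum()
print(f"Share of total held by the top 20%: {top_20_share:.0%}")  # roughly 80%
```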

Max Haiven: I want to shift now just briefly before we go into these ideas a little bit more to talk about your methods, because you’ve been doing something that I think is absolutely fascinating and our listeners would want to hear about in terms of how you’ve gathered all this information. For the past five or six years, more maybe, you’ve been going to Silicon Valley think tanks and venture capital meetups and all sorts of meetings and gatherings of people in a variety of communities associated with this kind of space around the thought world of big tech, let’s say. Can you tell us a bit more about that? And also how it ties into your background in performance, which I think people would be really interested to hear about as well.

Christian Nagler: This idea of wanting to get socially, hands-on involved in these spaces just started with the realization of the sheer magnitude of wealth. I was working on more general, more theoretical questions about the cultivation of economic subjectivity. And then once I realized that Silicon Valley is said to be the largest accumulation of wealth in the history of the world, I was like, OK, I need to go see what is actually happening here. And so, looking back on it, I think at first I was spreading out a little bit more. I was going to a lot of meetups. Actually, no, I started by reaching out to people that I knew who were already in that world. And part of this was autobiographical, too: I was an undergraduate at UC Berkeley and I was in the humanities. I was an artist and a creative writer and whatever. But a lot of the people that I lived with, that I was associated with, were computer scientists and engineers. And over the years, now over the decades, I came to see our differing life paths. I’ve spent a lot of time just barely able to survive in the Bay Area, whereas these people were obviously making a lot of money, etc. So the kinds of events that I would go to were, at first, things that I found out about through friends and acquaintances of mine who were like, OK, you need to check this out.

It was sort of word of mouth at first. That was the way that I first found out about things. And actually, there might have been a few things before this, but I think the first thing that really sticks in my mind that I went to was a secret conference, a quote-unquote secret conference, that was put on by an emerging cryptocurrency hedge fund. It was invite-only, and you had to apply for an invitation. And so I applied. I think I said I was a cryptocurrency researcher or something, which I kind of thought of myself as at the time. But it was sort of a stretch because I didn’t know a lot about it then. But they let me in, and it hit all the points of the Silicon Valley caricature. It was put on by this crypto hedge fund.

It was a bunch of very, very young, very, very wealthy people, newly come into massive amounts of crypto wealth that they were wondering what to do with. And so they founded this crypto hedge fund. But at the same time, this was, I think, my first realization of how intimately venture capital investment, or in this case crypto hedge fund investment, which is proximate in a certain way, is tied to and dependent upon these feats of culture making that elaborate on an investment strategy and try to very, very strategically publicize the stakes of that investment strategy. This event exposed me to a lot of new things, and a lot of my work since has been continuing to try to make sense of the particular suite of concerns that I encountered there, which I’ve seen again and again as I’ve gone to different conferences, meetups, and culture-making series that venture capitalists or venture-capital-proximate organizations put on.

The suite of concerns at this particular event, which was held at an old Masonic lodge in San Francisco, included private space exploration. That was a big concern of this one. Longevity and cryopreservation was a big part of it. Loosening the regulatory reins on venture capital in various ways, in very geographical ways, political ways. And then another part was strategies for tax evasion of crypto profits. These were the things that were being talked about there. And this was a conference where they would say again and again: this conference is secret and off the record, we can speak candidly at this event.

And this is something that I think I told you a few weeks ago, Max, but the thing that blew my mind a little bit at this conference was that during one of the longevity panels, a few longevity and cryopreservation scientists, along with venture capitalists who have particular stakes in the longevity space, had this panel about ways of maximizing your long-term investments past the normative death threshold. So, finding financial advisors who are willing to work on this problem of managing your investments while you’re in cryopreservation so that you can wake up in 100 years and have a billion dollars at your disposal. Venture capitalists, consultants who can steer you towards the right kind of long-termist investments, this sort of thing. And I was sitting there listening to it and I’m like, these people are totally insane. They’re absolutely insane. But the rhetorical and discursive force that they had was surprising to me, in that there was one point at which one of the panelists told the audience: this seems a little far out, but if you are not engaged in these things, you are basically stupid.

If you are rationally thinking about your future, you will be doing these things. It had this very forceful, disciplinary sort of tone to it. It wasn’t like, these are just speculative ideas. And part of it was that these people are very aware that, even if these long-term visions don’t come to pass, the short-term multiplication of early venture profits often has to do with merely expanding the range of investment to a first, second, third seed round. If an investment can get to a second venture round, which means expanding the venture network by which the investment is grown, then early-stage investors can see 10x on their investments. So that’s a major way that venture capital differs from normative or conventional capital. It goes back to what you mentioned earlier, the Sequoia Capital spray-and-pray sort of thing. Oftentimes, they might not even want something to succeed in the long run. They might have a sense of, you just get it to a certain round.

And not to get too much into the kind of nitty gritty of how venture capital contracts and venture capital firms are structured, but it’s kind of embedded into the venture capital structure. The venture capital firm, oftentimes people say, is somewhere between a network and a conventional firm. There are certain contractual elements that hold people together that are more firm-like, and some that are more network-like. So you could think of a venture capital firm as a firm that has structurally written into it the kind of notion of growth into a certain network of investment in a particular period of time. So venture capital firms are sort of like growth value propositions in and of themselves.
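A toy sketch of that round-to-round logic, with made-up numbers rather than figures from the episode: an early position multiplies on paper simply because each later round marks up the share price, whether or not the startup ever succeeds.

```python
# Hypothetical per-round price markups; the round names and numbers are
# invented for illustration. The point: 3.0 x 3.5 ~= 10x on paper for seed
# investors once two later rounds have repriced the shares, independent of
# any eventual exit or long-run success.
seed_price = 1.00
markups = {"Series A": 3.0, "Series B": 3.5}

value = seed_price
for round_name, markup in markups.items():
    value *= markup
    print(f"After {round_name}: seed shares worth {value:.1f}x their cost (on paper)")
```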


Max Haiven: That’s fascinating. Oh my goodness, I have so many questions, but I want to keep us focused on what you were mentioning just before about longevity and cryogenics and this space. So these are very rich people, newly rich, many of them, who want to live, if not forever, then for a very long time, and who are into cryogenics. Can you just give us a couple of quick definitions, and a sense of how that community breaks down this thing that people speak about as longevity and cryogenics and these sorts of things?

Christian Nagler: I would say that the longevity space in Silicon Valley, which is sometimes also called, more broadly, the space of bio ventures, has an arm in biotech and the biosciences. Some of the largest successes in Silicon Valley have been in biotech. Genentech is the biggest one, but there have been a bunch over the years.

The Silicon Valley biotech world has gone through major rises and dips. It’s at a kind of high point right now, despite what people think of as the general bear market in Silicon Valley, or the stagnation of venture investment generally. Biotech right now, or what people call hard problems or hard startups: they’re not just IT startups, they’re actually working on material problems. This is a big thing right now in Silicon Valley.

The longevity space is a complicated space. There are different strategies and ideas involved, but I think that generally we could say that it’s a way that venture capital has been increasingly approaching biotech in terms of not just trying to cure a disease or to just develop a useful, profitable drug or something like that, but to attack the root causes of aging. I would say that there are three broad approaches to this.

The first one is a slightly more modest experimental approach. There’s a famous research institute that a lot of people know about, because some very well-known longevity thought leaders are associated with it, called the SENS Research Foundation. SENS stands for Strategies for Engineered Negligible Senescence. The idea is to make senescence a negligible element of human life. That’s the general goal that they have. But the mission statement that they have for the short run is to help increase the average human lifespan by 10 years.

So instead of the current average lifespan of around 80, to increase that average to 90. The second category of people in the longevity space are those who want to double the human lifespan, usually within a certain period of time. Some people say, we’re going to double the human lifespan by 2050 or 2060.

The third one is people who are interested more in the aging moonshot, which is basically to be able to reverse aging, to be able to, quote, achieve immortality, or what they call longevity escape velocity: your innovations in longevity research are happening faster than the body’s current rate of aging. And this is something that longevity people talk about a lot, that the way humans age is particular, not universal. You could understand human aging as the exponential increase of the risk of death throughout a human lifespan, and it’s not the same distribution for different species. People in the longevity space talk a lot about three species: the rockfish, the lobster, and the hydra, all of which supposedly do not age. There’s a horizontal rate of risk of death across the lifespan of these species, and they die eventually of something, but those deaths are all essentially accidents or exceptions, outliers to their regular rate of non-aging. So it is interesting to think about the differences between these three strategies. People interested in the more moderate goal, adding 10 years to the lifespan of a human being, are a little bit closer to conventional biosciences.
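That exponential risk-of-death framing has a standard name in demography, the Gompertz law of mortality. A minimal way to write the contrast Christian is describing (the notation here is ours, not the episode’s):

```latex
% h(t) = instantaneous risk of death at age t; A, B, c are positive constants.
\begin{align*}
  \text{aging species (humans):} \quad & h(t) = A e^{Bt}
    && \text{risk doubles every } \ln 2 / B \text{ years, roughly 8 for humans} \\
  \text{non-aging species (hydra):} \quad & h(t) = c
    && \text{a flat, ``horizontal'' risk; deaths are accidents}
\end{align*}
```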

The idea with that more moderate approach is that we’re fairly close to solving certain problems around some of the biggest things that kill people on the planet, like atherosclerotic problems and cardiac disease, which kill more people than anything else. One thing that I think is also important to keep in mind with regards to the longevity space, in my experience, including just this last weekend at an event focused a lot on longevity investment and innovative things happening in that space, is that there’s very, very rarely any mention of the systemic things that actually affect people’s health on a wide scale. People might talk about things like cancer and atherosclerosis, but people don’t often talk about poverty and work, which are probably killing a lot of people, or environmental pollution. I wouldn’t say that this is across the board, but there tends to be a coupling between people who are very, very interested in longevity and life extension and people who are interested in making investments in limitless energy production. A very high-profile example of this is Sam Altman, who invests most of his wealth in longevity on the one hand and in nuclear fusion research on the other. It’s a fairly easy critique to make, but I think it bears making: there is a systematic disavowal of the things that systemically compromise people’s health, the things that are causes of mortality.

The founder of the research institute that put on this conference last weekend said at one point during a panel that we’re now seeing this point in history where 2,000 families on earth are multi-billionaires, basically. This promises a massive amount of philanthropic funding for the longevity space. This person, I’m not making this up, actually said: this might lead us to believe that inequality is not really a problem. If we have this huge, unprecedented amount of philanthropic wealth on the horizon that has been increasingly focused on the greatest engineering challenge in the world, which is overcoming mortality, then we can say that it’s a net gain.

Max Haiven: There seems to be a sense generally that what’s good for the billionaires is going to be good for everyone else. There’s a trickle down theory of longevity here at work.

Christian Nagler: Absolutely. Part of this goes along with what we were talking about earlier, this shell game of being discursively anti-state and functionally subsumptive of state resources constantly. There’s a huge concern within the longevity space that regulatory barriers are a major problem and that public research into health and biosciences has to be liberated from the public sphere. Sometimes there are quite aggressive attempts to do this, like litigating around intellectual property produced within the public universities and public research foundations in order to bring it aggressively into the private sphere. This is something that Silicon Valley does a lot: it tries to liberate intellectual property into the private sector.

Max Haiven: I want to transition us towards it by asking you about the quantified self movement, which is connected to these longevity industries, but presents itself, at least, as more accessible. Can you tell us what the quantified self movement is, maybe where you first ran into it, an anecdote perhaps of a gathering you went to?

Christian Nagler: This actually is a good transition into quantified self, which I’ll get to, but I think that we could see the snake oil trends come and go within this world. There are some people who are more interested in jumping on the snake oil trends and some who are more interested in, let’s see what the data says. We have to really make a robust data set around all of these sorts of things.

One of the most current trends recently has been taking metformin, the diabetes drug. Sam Altman supposedly was taking metformin, or maybe still is, or whatever. There was some data speaking to the capacity of metformin to support, I think, some aspect of cellular function.

Don’t quote me exactly on that, but the longevity space tends to be scientifically focused on a few different places. One is cellular function and cellular autophagy: the capacity of a cell to get rid of its parts that are not working very well, or the body’s capacity to get rid of cells that aren’t working very well, to break down dead or dying cells.

Then there’s a lot of other research that focuses on the extracellular matrix: substances outside of cells that make up the cellular environment, the environment that a cell is swimming in. This is actually why there’s all this interest in blood plasma transfusion, which has also become peak caricature of Silicon Valley longevity and immortality dreams. The experiments that have been done with this have been around transfusing older mice with the blood plasma of younger mice. There’s been a lot of experimentation with this, and it lends itself easily to metaphors of vampirism and intergenerational warfare between the old billionaires and young blood boys or whatever. But there’s also supposedly a strong scientific basis in being able to rejuvenate the extracellular matrix of the circulatory system, as well as of the bones and of other cellular environments. So that’s a little bit about the science. To go into quantified self: the way that I see the quantified self now is as a discourse that has gone under a little bit. I don’t think it sees itself as being on the cutting edge of venture culture making as much as it was, say, seven or eight years ago.

Now, I think it’s pretty successfully gone into the mainstream with a proliferation of biotracking devices that people use on their own, and that employers and health insurance companies, etc., use too; it’s become commercially viable as a way of cutting costs in health care or in employment insurance and things like that. People have had a lot of concerns about the cybersecurity of the data, the very fine-grained data that gets produced by devices like the Oura Ring or the Apple Watch or Fitbits. I mean, there are so many of these devices now. I get a constant stream of them through my Instagram feed, because if you search for one, then you get a lot of others. But the point that I would like to make right now about the quantified self is that it now serves as connective tissue between what became, for a while, a mainstream trend of health optimization or self-optimization and venture investment. And I think there are some long-termist hopes for a kind of master dataset of people’s health markers, correlating people’s fine-grained data. Heart rate variability seems to be a big one right now, because it has, I think, been meaningfully correlated to risk of death at any particular time in life, or risk of getting mortal diseases. I’m trying to square this with some things that I learned at that conference this weekend about one of the major concerns of the Silicon Valley longevity space: it’s still an open question which biomarkers of age are actually relevant. This is the thing that people in the space right now argue most about, or have the biggest disagreements about: how to actually construct a structure of biomarkers, of how to measure aging, basically. A lot of resources are being put into machine learning around this right now.

For instance, Eric Schmidt, who is one of the Google people, has this moonshot institute venture fund thing called Schmidt Futures that is trying to develop this AI scientist. To develop a scientist artificially, a scientific heuristic, basically, that evolves through machine learning and is able not only to work on datasets, but also to propose research lines. From what I understand, this is a big thing.

The longevity space right now seems to be divided in all these different directions on where we even look to measure the fundamental underpinnings of the aging process. Is it in, for instance, the methylation of certain DNA nucleotides? Is it within certain characteristics of the extracellular matrix or the elasticity of tissue?

Each of these directions involves a billion dollars or more of early-round funding. For instance, recently, and you’ll be interested in this, Max: a big piece of news right now in the longevity space is that Jeff Bezos gave $3 billion, which is a record within the bio venture space, to Altos Labs, which is, I think, betting a lot on this extracellular matrix, plasma transfusion route. I think the gaming aspect of that is that Silicon Valley is always trying to find ways of making the contribution of data seem effortless and fun and social and something that is in your best interest, etc. My sense right now is that we’re moving into a slightly different mode with regards to the tech sector in terms of its relation to data.

I’m not sure how much continued relevance that kind of mainstreamed gamification, of trying to gather data from your own body and deliver it to some startup, will have. I’m sure it’ll have a long afterlife, but a lot of the excitement right now is around not even needing to collect data, but just producing synthetic data sets through machine learning.

Again, despite the general stagnation of venture investment right now, there’s a lot going into bio ventures and, obviously, a lot going into artificial intelligence. Some of the biggest seed fundings in the history of Silicon Valley have gone into that recently. Part of the promise is to liberate tech from having to get data from people.

We have enough data now that machine learning processes can just produce, for instance, a virtual set of cells that will function just like cells, and you can get data from those cells. Or a brain emulation that can produce data, a network of brain emulations that can produce data. The holy grail right now for artificial intelligence is this idea of a brain emulation. The field has given up on the process of being able to, for instance, understand the neurophysiology of the human nervous system.

We’re not going to understand that, we’re not going to be able to reconstruct that, it’s too hard of a problem, but we can merely train a machine learning process on a set of artificially produced neurons. That’s something that people are doing. Actually, I saw at this conference just recently a computer that they made out of human neurons that were grown in a dish, basically. They grew somewhere between 800,000 and a million human nerve cells. Their big achievement was that they were able to, through machine learning processes, train this set of living neurons to play the game Pong. I think my point is that it’s unclear right now how much gamifying your sleep data and your heart rate variability and your galvanic skin response, all these things that were the benchmarks of quantified self research, will matter to making some master data set around human health. Again, the thing with the quantified self was being able to make the data sets more granular and more fine-grained. Because if you’re experimenting on yourself medically, then you have this N-of-one sort of thing, where you’re sidestepping a lot of the ethical questions between experimenter and experimentee: they’re the same person.

The projects that I saw in quantified self were often pretty interesting. It was people taking readings of their kidney function, separate from medical research, which might have you come in every few weeks to measure your kidney function if you’re part of an experiment. It’s people who’ve developed ways of measuring real-time, second-to-second kidney function, or glucose metabolism, things like this.

That was really the driving force of the quantified self movement, was to be able to make bio data ever finer grained, and to find ways of making that feasible and making the user experience around that fun and easy.

Max Haiven: What you were just saying before, about basically eliminating the need for humans even as data sources, is pretty fucking grim.

Christian Nagler: They don’t even need our data exhaust anymore. That’s the argument, anyway. I wouldn’t say that it’s at all a consensus, but it is definitely a frontier. Like I said, it’s part of the reason why so much money is put into AI research.

Max Haiven: I think that’s a good way to transition now into maybe the broader field of transhumanism, because some of these visions tie into that. What is transhumanism, and how is it related to what we were speaking about?

Christian Nagler: Transhumanism has a longer history in certain ways, going back to the 19th century. I think for our purposes in thinking about longevity, there’s the concern that I often think about, which is, again, peak caricature of Silicon Valley, but also, I think, quite important for understanding a legitimating ideology that often moves through Silicon Valley: that the substrate of the material world is essentially computational. Transhumanism is a kind of extension of the human being into, or the inclusion of the human being in, that vision. So it’s a training of the frontiers of computational biology onto the human being. The idea of the singularity that people have maybe heard of, it’s in a lot of sci-fi, and it’s in the thinking of Ray Kurzweil, who has popularized the idea and who is, I think, chief technology officer at Google or something like that. The idea is that at a certain point, humans will have discovered and put into operation the computational potential of matter generally. And the engineering of computational devices has been understood for a long time now, since the 1980s, to be functioning according to this pretty well-known engineering law in Silicon Valley called Moore’s Law, which is the idea that every year and a half, something like that, every 18 months, the computational capacity or the density of an integrated circuit doubles.

So there’s this exponential curve in computational capacity within silicon chips, and that has driven the production of integrated circuits. It’s what we call a performative or hyperstitional sort of law: it coordinates production towards those ends.

But it’s also seen as maybe a sort of natural law, a natural law in the sense that human intelligence is an inherently expansive force and a force that inherently combines its intelligence with matter. So there’s this kind of labor theory of human intelligence, by which computation is the way that humans serve as a bridge between the mute material of the past and the computationally alive material of the future. This is basically what humans are for: to accomplish this transfer. And transhumanism as an ideology takes this idea really, really far and says that humans as they exist now in their current form, or what do they call it? Mostly original substrate humans, MOSHes, they sometimes call them, which is biological humans. The way that we exist now as mostly biological humans, although we have various external and internal technical prosthetics that cyborgize us, the transhumanist ideology sees our substrate as infinitely replaceable.

I think of Robert Solow, the economist who is very important for technologists; he focused a lot on the substitutability of means within the production of goods and services. For instance, we reach a point of climate disaster, but good news: human intelligence is infinitely adaptable in this way. We know how to substitute means; that’s what the human does, basically. Human intelligence, computational intelligence, is able to use whatever substrate might be available to perform an accomplishment. And that’s a measure of intelligence: substitutability of means. So that extends to the human body, and we’re able to replace our human body with whatever substrate is available. And it doesn’t really matter, because human intelligence is still functioning on its historically inevitable course, which is to liberate computational intelligence within the matter of everything. It seems like a caricature, and at the same time, it’s a pretty comprehensive cosmology of the modern. It really is. I think that we could take a critical theoretical lens towards this and see the dialectic of enlightenment. I’ve been really interested in Zygmunt Bauman, who wrote a really great and very underrated book about immortality and mortality. I think it’s called, let me see, I have it here actually: Mortality, Immortality and Other Life Strategies. And he writes about immortality as pretty much the oldest strategy of class stratification, whether it’s creating systems of monuments or funerary rites or traditions, etc. The social invention of immortality, the sociopolitical invention of immortality, is the sine qua non of class stratification.

It’s like the characteristics of any particular bid for immortality contain keys to the class stratification of the general socius as it’s operating. And so in the context of transhuman longevity, we could say, and this is maybe a little bit obvious, a translation of some popular terms that we know, that the monument to ultimate class ascension is the remaking of the human body: being liberated from your substrate. Or we could say, again, that modernity’s relation to technicity is a principle of liberation from necessity, the ultimate benchmark of necessity being mortality. This pursuit legitimates all other technical pursuits. All of that, I think, is fairly easy to articulate, and it’s fairly familiar to us.

I think what’s really hard to articulate at this point, and what has almost become rhetorically indefensible, is what the alternative is and why. Because this is the way that the longevity space and Silicon Valley generally tend to approach it rhetorically: ‘Oh, you’re against us? You must want to die. Why? Why do you want to die? Why are you so irrational that you would be attached to death and you would be attached to mortality?’

First of all, you are perpetuating the suffering of almost everyone on earth, who suffer primarily from death. And second of all, you’re a hypocrite, because you do all of these things that obviously mean that you probably want to live longer. You might exercise, or you might be interested in eating healthy or having good relationships. All of these things increase your lifespan, and you’re probably interested in at least a few of them. So this is the kind of weird, I would say gamified, space that the tech sphere creates. I mean, this is common to advanced industrial society generally: it poses problems within a technical framework that puts you in an impossible and indefensible position and makes you feel as if, by advocating for some other allocation of resources, you are actually the one defending the status quo. And that this, the overcoming of death, is progress, is progressive values. Some people go further with this than others, but there are all these ways that Silicon Valley culturally attaches this idea of progress to other, more familiar ideas of progress that we might be fond of: sociopolitical, racial, gender progress, all of these sorts of things.

There are various ways that the tech sphere plays with those significations and puts kind of dog whistles for more generalized ideas of progress, ties them to their investment strategies, while at the same time playing also to totally neo-reactionary ideas.

Max Haiven: Yeah, I mean, it’s uncanny because it both blows my mind and it explains something with which I’m intuitively familiar, I think. Living in the world created by Silicon Valley, these people having so much power, we can’t help but sort of sense the movements from the periphery of our vision, the ideological movements, the technological movements, the economic movements. But then I think what your research can do is actually show us quite nakedly how this works and with what contradictions.

I mean, as you were talking, I was thinking a couple of things. One of them goes back to, I think, a different way of framing your original point, which you framed as the technological subjugation, or subsumption, of the social wage. Another way of framing that is that they’ve replaced a progressive narrative with one that eliminates the need to overcome inequality. In fact, inequality is good, because it puts resources in the hands of these tech billionaires who can then use it to give us what we really want, which is escape from death. And if we just taxed them and gave their money out to all us losers, we would all just die at the average age of 80, and nobody would get to escape death. So actually inequality is a progressive value, because it leads to the overcoming of what are truly the problems, because it puts in the hands of the 20% doers the capabilities to actually solve the problems for the rest of us losers.

Christian Nagler: Definitely. I mean, you can’t escape the eugenic logic of that. You really cannot.

I mean, I would say that it’s built into venture capital.

Max Haiven: I mean, the other thing I was thinking about as you were talking: I was remembering Ingmar Bergman’s The Seventh Seal, this famous film from, let’s see, what year, the 1950s, 1957, in which a knight returns from the Crusades, which also loom very large in the neo-reactionary imagination. He comes back to Sweden, which is, you know, vexed by the plague. I mean, basically the subtext is that the Crusades have drained all the money from society for what was basically an elite adventure to the Middle East.

And everyone’s dying of the plague. And this returned knight ends up playing this game of chess with death. But, I mean, it strikes me that here, in some ways, there’s a complete changing of the game.

In the same way that there’s a changing of the game around what a progressive value is, there’s a complete changing of the game around this relationship with death. Because even if, as you point out, as Zygmunt Bauman argues, there have always been these methods by which a social elite tries to immortalize itself, that immortality does include the death of the subject on some level, or the death of consciousness, at least. But this longevity, as it ties into transhumanism and the singularity, seems to be something more: a kind of apotheosis of consciousness. It’s a moment where consciousness transcends the physical biological substrate and becomes something else. And I just wonder if there’s anything that jumps to your mind that ties this back to games. Partly, maybe, I’m thinking biographically, because many of the people who are very involved in this are people who grew up with games and a gaming ethos. I mean, Silicon Valley grew out of gamer culture in many ways. Do you see any links there?

Christian Nagler: The thing that immediately comes to mind, and I don’t know if I’ll be able to articulate it, because it’s something I’ve been trying to work on, and it’s hard to know where to look to learn about this, is that, as you said, there’s an apotheosis of consciousness contained within this inversion of a conventional theological narrative. Conventionally, you have this kind of primordial intelligence, that kind of God, that produces these beings who are also intelligent and are kind of emulating aspects of it.

This is different because the God is in the future, and the God is what we currently know of computation, of computational logic, is what we know of that. Consciousness, defined as computational logic, and computational logic is not just one thing. It’s a historical assemblage of a bunch of different things, and it changes over time, and it gets new processes and attributes and things like that. But what I wonder is whether we can define computational logic as, in a certain sense, a derivation of certain game-like aspects of language function. I mean, we would bring it back to some of the 19th century bases of computational logic in Boolean grammar. I’m being very speculative here, but to understand the game-like aspects of the roots of computational logic might allow us to then frame the indefensible, what I often refer to as the indefensible aspects of…

At this point, it's framed in terms of the biological, but it might also be framed in terms of the democratic or even the communist. There are a lot of different ways we could frame the indefensible other to computational logic. But we could understand it by asking: what game is computational logic playing, and what is outside of that game?

Let’s say that you imagine yourself, as maybe I do or as maybe you do, as someone who is interested in forming a cogent and effective and powerful critique of, as you said, like architect overlords or the particular tendency of capital that it represents, which I’m saying is embodied in a lot of ways in venture capital. Then how do we articulate aspects of what has become indefensible in ways that aren’t just playing into this sense of the venture capital complex and the particular social political visions of technicity and political economy that emerge from it as, on the one hand, like this dialectic that I was talking about before, is on the one hand completely iconoclastic, maverick, outside of conventional morality in all these different ways, but at the same time, rational, inevitable, and able to subsume discourse within certain games that become very familiar, like discursive games that become very familiar. This, I would put the whole discourse of accelerationism, or the whole discourse of, is consciousness inherent to the human or does it also exist outside the human? Can intelligence be artificial? These are discursive problems that I feel like are basic traps, are kind of traps. I have come to perceive them as traps, even though they feel like they can be fun games to play. A game that, say, cognitive science has played with engineering for a couple of decades or four or five decades.

I don’t know, I don’t want to overstep this, but I like to often even end things on this kind of plea for some way of mounting effective critiques that do not reify the ultimate power of computational logic. And of any particular kind of mode of venture capital as it’s operating now. I think it was interesting to you what I mentioned before about investment in AI and machine learning, trying to escape the sociopolitical problem of, say, people’s sense of an ownership of their data or not wanting to be surveilled. And so we see this problem happening here. It’s like so much kind of worry publicly about surveillance, etc. And those are valid worries in many ways, because certain populations are highly and densely surveilled, carceral populations in particular, or populations on their way into the carceral complex. But I think venture capital sees itself as having solved this problem. And this is oftentimes always this kind of atemporality that we see between technical solutionism and some public concern that is able to be fully articulated. And it’s sort of like the way that I see that venture capital really, really tries to, what’s the word, arbitrage public discourse.

Yeah, there’s a strategic arbitrage that happens with regards to what public discourse is doing. And it’s like keeping the bar of technical solutionism just above where public awareness is, in order to play these double cards of maverick status, iconoclasm status, and the rational next step. So the plea, again, is to being able to define our particular mode of indefensibility in a moment very, very well, in order to be able to mount an actually effective critique. Because I think that one last thing that I’ll say about this is that I think this has become more common of a sense of thinking, but five years ago it really wasn’t, how effectively Silicon Valley is able to use dystopian narrative to reify itself.

Max Haiven: We’re getting our money’s worth here. I can’t resist asking a kind of follow up for that, and then we’ll really let you go. Which is like, I hear what you’re saying about the, like waging the discursive struggle here. But, and this I don’t think we’re, I mean, maybe we could in a podcast, I don’t know. Like, the discursive struggle would imply that there is some power structure that could be convinced. Like if the tech, let’s stick with the tech overlords, you know, just for the sake of argument. Like the idea is that they’re making these arguments and they’re putting forward these kind of grand schemes that they’re like, you peasants wouldn’t even understand. You can’t even do the math.

Don’t even think about it.

Don’t even worry.

And then you basically have lawmakers and policymakers rushing to catch up, driven by a media that gets infatuated with what are basically yesterday's stories. And so what can happen is the tech overlords can be like, oh, you're worried about data surveillance? We don't even need you peasants. We don't even need to take your data anymore. Didn't you get the memo? Oh, we don't send memos anymore. So there's a kind of fantasy around the discourse. And I mean, I agree with you, I think we need it. But there's an idea that if we could only have a better discourse, then some sort of power structure would be able to act on principle.

But the problem there is, A) the assumption that any power structure that exists today would act on principle, and B) the assumption that the tech overlords are not more powerful in many ways than, you know, the U.S. government, let's say, something like that. And so here's where I come to the question.

The thing you are speaking about, us developing a better discourse: at what point does that intersect with new dimensions of, to use the old-fashioned language, class struggle? Which is all us losers, the 80 percent who are only doing 20 percent of the work, who are destined to be in various ways euthanized by the eugenic regime, or basically left to die to the extent we can't keep up with what they're doing. That idea of building a counter-discourse, how does it intersect with an uprising that might come from that space, rather than from the space of hoping for regulation, or hoping that someone will bring the tech overlords to heel, if you see what I mean? And here I'm thinking particularly about the way you keep coming back to carcerality and abolitionist struggles and these sorts of new frontiers of struggle as well.

Christian Nagler: Mm hmm. Yeah, definitely. No, I really appreciate that question, because I think that venture capital has been involved in an unprecedented success of regulatory capture.

I mean, that’s like. We could almost have consensus about that, I think, you know, that that I mean, you see congressional panels where 80 percent of the congressmen don’t really understand what the person does. And they’re like, good job, young man. Like, you’ve really you’ve achieved the dream, you know. And at the same time, we have these like marginal concerns here that kind of are not seeming to really comprehend what even the issues are. You know, that’s that’s the kind of political theater that we see around the regulation of tech. And then we also see, you know, what has become very, very like a conventional set of business practices around regulatory arbitrage. I mean, not like people make startup value propositions around what their form of regulatory arbitrage is, you know. And that is regulatory arbitrage is basically like where you have a business plan that that calculates in the incapacity of regulation to to even catch up with what’s happening. So, I mean, the textbook example is Uber going against very egregiously all of the transportation laws by quite dishonestly, you know, positioning themselves as a as a data management company. You know, they’re not in the business of transportation. And so the labor contract reflects that everything. So to get to your question, you know, regulatory capture by the tech industry is a foregone conclusion, I would say. But I think that you’re right on in that I think it has to do with the practical bodily components of mass movements, you know, and abolition movements particularly. But also a sort of mustering of the kind of core of, I think, decolonial thought against tech logic, I think is really, really important.

If we’re talking about the longevity space and a sort of class bid for immortality, whether it’s sincere and personal or just systemic and in the interest of multiplying and investment, you know, like it begs the question of like, what’s at stake in like mourning? You know, like mourning not as like mourning is clearly not a resignation, this kind of irrational resignation to death, you know, more mourning is like a powerful form of political solidarity. That is like rooted in, I think, forms of intercorporeal bodily life that I believe like are somewhat, you know, agonistic to the idea that there’s a computational essence to the human cell or the human endocrine system.

I guess I’m saying that like to describe the forms of indefensibility that are not kind of just like able to be cast easily or maybe to articulate and to feel and articulate the sense in which, you know, something like mourning is not merely just a sort of non-co-evil lagger behind, traditional lagger behind in humanity’s like pursuit of ultimate, you know, the ultimate un-mournability of the human being, you know, or of the living being.

Halle Frost: Weird Economies is sponsored by the Canada Council for the Arts. You can listen to the whole podcast as well as read the transcript at weirdeconomies.com.