Credit: Anna Engelhardt

Anna Engelhardt (AE): I would assume that for our viewers, the term reputation capital would ring a bell. The more normalized words like reputation capital become in our everyday vocabulary, the more important it is to interrogate them—[to] see how they gained their meaning, how that meaning changed over time, [and] how we all ended up knowing [these] terms, as happens with scientific terms that enter everyday speech. So would you mind giving an introduction to this term, explaining its relevance, especially for social media today, and maybe thinking about how we can distinguish reputation capital from other terms?

Emily Rosamond (ER): So yeah, the term reputation capital is one that is definitely around. It probably has a bit of an air of familiarity, especially given that social media and online platforms quite often foreground reputation measures of various sorts, and these can in fact be quite heterogeneous in nature. A lot of different kinds of counters on many social media platforms could be part of giving a sense of what someone’s reputation might be like—for example, friend counts on Facebook, like and share counts, and that sort of thing—but also very hidden kinds of measures, such as the Facebook algorithm called EdgeRank, which was used up until 2011. That one analyzed the relationships between objects via edges—in other words, it tried to analyze the strength of your interactions with other Facebook users in order to sort news feeds and the like. So there are all kinds of behind-the-scenes measures that might also, in complex ways, reflect something like reputation.

The term reputation capital has a number of different uses, and it comes up with some frequency in contexts such as management studies, for example. Obviously, in discussions of how to manage a corporate brand, reputation is an important thing, and it is often seen, roughly speaking, as an intangible asset. A company might have a reputation for dealing with customers well, for instance, or it might have good value in its brand; these might be aspects of its reputation as an intangible asset. The term is used with some frequency in management studies and related discussions—not always precisely defined, but certainly prevalent. One of the things that really interests me is that the use of reputation capital within business and management contexts seems to have moved between a corporate context and a personal one.

So in fact, a certain language of reputation capital has migrated, if you like, from the management context to the personal context. One example of this is Michael Fertik, who founded a business designed to help companies manage their reputations and to offer design solutions around problems that different companies might have with their corporate reputations.

For example, people might be more likely to post a negative review if they were angry about the service they received from a company, and much less motivated to do so if they felt that everything was fine. So he comes up with ways to design around that—for example, a quick and easy button available right at the counter when you go into a shop, so you can say, “How was the service?” “Oh yeah, it was fine.” That removes the barrier to getting those good reviews.

Fertik also wrote a book—not just for companies managing their brand, but for online users to do the same thing. So he is thinking about ways to curate your digital presence such that your LinkedIn profile might rise to the top of prospective employers’ interest, what kinds of things you can do to your online reputation to get preferential treatment or better interest rates, and, interestingly, what to do if there is bad information out there about you on the Internet. He talks about ways to provide digital smoke screens—not necessarily to get rid of things that are out there, but to camouflage them, to de-rank them, to make them appear a bit lower in search results.

So there’s a shift from the corporate context in which reputation capital has been used into a more personal way in which reputation capital is being used as an idea, and of course that comes into conjunction with all kinds of really complex histories of reputation tactics and reputation management that have been going on in people’s private and personal lives for many centuries. So it’s nothing new, but newly inflected, if you like.

This brings us back to the question of how to situate the term reputation capital in relation to other similar and related terms. I’m going to talk about two ways the term might be contextualized, although there are many others. One that comes up a lot is Pierre Bourdieu’s idea of social capital. There are definitely several people who have theorized online reputation capital as a form of social capital, or even called it digital social capital; I don’t take that approach so much, but it certainly is out there. This is an idea coming from Bourdieu that understands online reputation as an extension of social capital: in other words, the possible or actual, tangible or intangible assets, benefits, or resources that might accrue to one via one’s social network—via belonging to a particular group.

I happen to think that Bourdieu’s term symbolic capital is a better fit for talking about reputation than social capital is, although that goes against what some others have said about how Bourdieusian ideas of capital relate to reputation capital. Symbolic capital, for Bourdieu, is not capital as such but an effect of capital: in other words, what happens when another form of capital is transformed into symbols that can then be read by those who have the tools to see them as legible in a particular way. And when I think about the prevalence of like measures on social media platforms, for example, something becomes sort of self-referential in the world of signs—the signs accrue value in themselves and start acting almost a little bit separately from, say, the actual social network to which they ostensibly refer. That interests me more, I think, than taking the social capital line directly.

However, there’s another possible framing that is more relevant to the work I’ve been doing, and that is to think about the relationship between reputation capital and human capital. Human capital theory, broadly speaking, is a line of thought that was particularly developed in the 1960s by Gary Becker and Theodore Schultz, among others, although there are precedents quite a bit earlier than those two thinkers. It’s basically a way of thinking about how one’s skill set and abilities have value—to oneself, obviously, insofar as one can make a living, but also perhaps to others: one’s family, even one’s nation or state. Becker and Schultz became key thinkers in a number of debates on neoliberalism, as I’m sure will be familiar to you and to a lot of people out there, especially through Foucault’s work on neoliberal subjects and neoliberal governmentality in his “Birth of Biopolitics” lectures.

For Foucault, the idea of human capital becomes a kind of basis for an entrepreneurial subject within neoliberalism. A lot of thinkers who have influenced me, and with whose work I’ve engaged quite a lot, have taken this thinking and extended it in different ways. For example, Michel Feher, the Belgian philosopher who has written very interestingly on neoliberalism, points out that since Foucault’s lectures on neoliberalism there has been a massive expansion of the credit market. So neoliberalism isn’t so much about profit, as Foucault might have imagined—or, indeed, as the early neoliberal thinkers might have imagined—and therefore it comes to relate much more to the idea of credit. In fact, Feher argues that the aspirations of human capital expand in financialised neoliberalism—in other words, in a neoliberalism where the financialisation of the economy is on full display. Through the expansion of credit, of derivatives, and of all kinds of investing practices, he argues, human capital becomes a category that no longer refers only to, for example, how much benefit a college degree might add to somebody’s lifetime earnings—the kinds of things the 1960s human capital theorists were really interested in. Instead, human capital could be almost anything: personal traits that might allow one to get a date, going to the gym, not smoking, education, but also social networks—all kinds of different things. And in fact, neoliberal subjects become something like portfolio managers of their human capital.

So this is something that I’m in close dialogue with in my own work, and one of the ways I’m interested in situating reputation capital is to think of it as a kind of derivative form of human capital. So—what on earth does that mean? A derivative, in the simplest possible sense, is something that is derived from something else. Within finance, a derivative asset is derived from an underlying asset without that underlying asset necessarily needing to be traded in order for the derivative to be an asset. For example, I could purchase an option—that is, the right, but not the obligation, to sell a stock at a particular time for a particular price. That’s based on an underlying asset of some sort, but it’s not the underlying asset. I don’t have to trade it; I’m trading around it, trading based on something that was derived from it. So there’s a way in which derivatives in finance act at a remove from whatever is being traded.
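The option example can be sketched numerically. This is just an illustrative aside, not part of the interview: the numbers, function name, and scenario are hypothetical, but they show how a derivative’s value tracks an underlying asset without the underlying ever changing hands.

```python
# Illustrative sketch (hypothetical numbers): a put option's value is
# derived from an underlying asset's price, even though the holder never
# has to trade the underlying asset itself.

def put_payoff(strike: float, spot_at_expiry: float) -> float:
    """Payoff of holding the right (not the obligation) to sell at `strike`."""
    # Exercise only if selling at the strike beats the market price;
    # otherwise the option simply expires worthless.
    return max(strike - spot_at_expiry, 0.0)

# The option holder never needs to own or sell the stock itself:
print(put_payoff(strike=100.0, spot_at_expiry=80.0))   # exercising is worth 20.0
print(put_payoff(strike=100.0, spot_at_expiry=120.0))  # let it expire: 0.0
```

The point of the sketch is simply that the option is an asset in its own right, "at a remove" from the stock it refers to.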

In fact, neoliberal subjects become something like portfolio managers of their human capital.

Emily Rosamond

I’m arguing that social media platforms do something similar with reputation. Let’s say, in a really simple example, an online user gets a lot of likes for a particular Facebook post. The likes might signify something about their reputation, but they’re not the reputation itself. They’re a sign of the reputation, derived from the reputation. But [they’re] also infrastructuralized by the platform, in that they can only be tabulated because of how the platform is set up in a particular way.

But then they [the likes] can take on a life of their own, because they’ve become unlinked from the person in a way. I’ve previously talked about reputation as something like the network extensivity of the subject: how an idea of the subject—almost something that turns a subject inside out—circulates within a social network or social group. So reputation as a concept has always been a little bit at a remove from the person whose reputation it is. With online platforms, and with this concretization and mass mobilization of reputational signs and measures, I’m arguing that that separation between the self and the reputation acts like a derivative—a derivative from an underlying asset. My whole interest, really, is what happens in that distance between the “underlying asset”—whatever is seen as valuable in a person—and the measures that acquire a life of their own and their own set of dynamics. Sorry, that was a very long-winded answer [laughs].

My whole interest, really, is what happens in that distance between the “underlying asset”—whatever is seen as valuable in a person—and the measures that kind of acquire a life of their own and their own set of dynamics.

Emily Rosamond

AE: No, it was actually amazing. So just to sum up: we as users have a reputation, right, as an asset, and then it gets quantified by the platform in terms of likes and other metrics. So we have this intangible thing that can then be quantified as likes or visualized [as] something else, but we have it as, like, “I am a blogger and I’m known as ‘X,’” so as ‘X’ I would potentially have this thing. And before, you were talking about reputation capital as an external network that wasn’t ascribed in the sense that someone would own it—it would be the intangible connections between people.

ER: Yeah, that’s interesting, and I think this is something really important that you’re getting at here. One of the reasons why social media platforms use reputation metrics—and there are many—might be that they’re kind of addictive. I’m sure we can all think of examples of hearing people, or even feeling this ourselves, going, “Oh, I’m so glad I got this many likes on my post,” or “my post went viral,” or “I was disappointed because nobody cares what I post…” There’s definitely almost a behaviorist bent to certain aspects of why reputation measures are there on a platform, because people like to be liked.

And when this language of liking becomes prevalent, it becomes very tantalizing to engage with it and to be rewarded. Part of the promise is that, you know, my reputation will be legible—my reputation will exist on this platform. But one of the things I find very interesting is that any online user’s reputation, whether on a blog or on Reddit or on Facebook or wherever, is not only valuable to them. It’s obviously valuable to the platform to collect reputations.

I tend to think about certain platforms as almost securitizing reputation. In finance, securitization refers to the practice of bundling assets together so that they can be made into interest-bearing bundles, if you like. Similarly, if you think of a platform like Airbnb, for example, they obviously have a lot at stake in their rating system working for everyone. And the hosts, and also the renters of various flats through Airbnb, are presumably trying to get the highest ratings, so that they can charge more, or be granted access to nicer properties, and things like that.

From Airbnb’s perspective, I think of it almost as securitizing reputations. Say somebody has a terrible experience on Airbnb one time—and of course there are plenty of stories of that—or one renter is awful and trashes a place… these are problems for Airbnb, but not big problems, because it’s invested in all the reputations. It’s almost securitizing the reputations: it might still lose on a few reputations that are bad, or that go bad, or that are unreliable, but as long as the whole thing works, it’s fine. So when we use platforms, we might enter into value systems for platforms, but also within social networks, right? If somebody is associated with somebody who becomes a big YouTube star, that might have residual benefits for their online presence as well. So there’s a very complex way in which reputation capital is not only the property of users. It’s also the property of platforms, who I would say securitize reputations; but there’s also an invitation for everyone to almost invest in other people’s reputations. In that sense, I think of online reputation as something that almost has its own weather patterns, you know? People can talk very easily about how it shifts—the broad patterns by which it shifts over time when, [for instance], someone becomes famous on a particular platform and another person gets cancelled, or whatever.
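The securitization analogy can be made concrete with a toy simulation. This is my sketch, not Rosamond’s: the user count, failure rate, and values are all invented. The point is only that a few “bad” reputations barely move the pooled average, which is why a platform can absorb individual reputational disasters.

```python
import random

random.seed(0)  # reproducible toy run

# Hypothetical sketch: a platform pools many individual reputations, so a
# small fraction of reputations "going bad" barely dents the pooled value.

def pooled_value(n_users: int = 10_000, p_bad: float = 0.02,
                 value_good: float = 1.0, value_bad: float = -5.0) -> float:
    """Average value per user when a small fraction of reputations go bad."""
    outcomes = [value_bad if random.random() < p_bad else value_good
                for _ in range(n_users)]
    return sum(outcomes) / n_users

# One trashed flat is a disaster for one host; across the pool it is noise.
# The result typically lands near the expectation 0.98 * 1.0 + 0.02 * (-5.0) = 0.88.
print(round(pooled_value(), 2))
```

Any single reputation here can lose badly, but the pool as a whole stays close to its expected value: the platform, like a securitizer, is exposed only to the aggregate.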

There’s a sense in which, by liking a particular personality, for instance, one might find oneself rising with the tide if their reputational fortunes increase or, conversely, going down if, say, a close friend has a personal PR disaster, if you like. So yeah, by being derivative, online reputation is something that is very open to being invested in by a lot of other people.

AE: I think what you were saying about securitization is very important, because… we have been talking about reputation capital, but the crucial part of your book is reputation volatility, and securitization also welcomes the shadow part of the situation. So can you please explain this shift to reputation volatility and also, importantly, the difference between volatility and instability?

ER: Yeah, really good question. First of all, maybe just a few introductory remarks about the term volatility itself. This is a term I find myself increasingly drawn to, particularly when thinking about the many, many ways in which this moment is a volatile one—socially, politically, climatologically, etcetera. There’s a more general definition of volatility—the more familiar one, probably, for most people—which has to do with changeability and maybe instability, a sense of the opposite of calmness: highly volatile conditions, [such as] volatile weather conditions, financial conditions, etcetera. Within the world of finance, there’s also a slightly more technical sense in which volatility is quite often differentiated from uncertainty. Whereas uncertainty is something that can’t be predicted at all—something that’s absolutely not on anybody’s radar—volatility is more like the measurable distribution of changeability over time. One way to think about this might be [to look] at financial market graphs. The squiggly line of market fluctuations over time gives a sense that markets are always going up and down, always fluctuating, and [that] the range of how well and not well they do is, roughly, the measurable volatility of the market. Or you could think about a particular asset that has a certain pattern of peaks and troughs over time—a measurable distribution of different values. So volatility in that sense has a very different connotation from uncertainty, which is, at least in theory, not charted at all, or not measurable in some way.
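The distinction drawn here—uncertainty as the unchartable event versus volatility as a measurable distribution of changeability—can be made concrete with a toy calculation. This is an illustrative sketch with made-up prices, not anything from the interview; "realized volatility" is shown here in its simplest textbook form, the standard deviation of period-to-period returns.

```python
import statistics

# Sketch: volatility as a statistic computed from recorded fluctuations.
# All price series below are hypothetical.

def realized_volatility(prices: list[float]) -> float:
    """Standard deviation of period-to-period returns: measurable changeability."""
    returns = [(b - a) / a for a, b in zip(prices, prices[1:])]
    return statistics.stdev(returns)

calm   = [100, 101, 100, 102, 101, 103]  # small peaks and troughs
choppy = [100, 120, 90, 130, 80, 140]    # wide swings from the same start

# Both series are "changeable", but the chart of the second has a much
# wider measurable distribution of ups and downs:
print(realized_volatility(calm) < realized_volatility(choppy))  # True
```

Uncertainty, by contrast, would be the event this function cannot see at all: something outside the recorded series entirely.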
And I’m picking up on the term volatility partly following a prompt from Benjamin Lee who, in Derivatives and the Wealth of Societies—the book he co-edited and partly wrote with the late Randy Martin—argues for more cultural studies and art approaches to the problem of volatility, given how important financial volatility is, and for more work within art and culture that thinks about the differences between uncertainty and volatility as something measurable.

One of the things that becomes incredibly clear with online platforms is that, in some cases, we could say they heighten the uncertainty of reputation. Of course, it’s possible for people’s reputations to increase or decrease for all kinds of reasons, and that’s been the case for as long as reputation has been around—but within online platforms, it becomes possible to have a storm erupt around one’s reputation. I have examples in my book, for instance, of particular individuals who have been severely Twitter-shamed. They might have had a very, very quiet profile on Twitter with not that many followers, just saying a few things; [then] one of their tweets goes viral and is shamed, and suddenly they’re the number one worldwide trend on Twitter, they’ve lost their job, and their whole life falls apart for a while.

So from that user’s perspective—being very heavily shamed and having that shame amplified on a platform like Twitter—we could say it is total uncertainty. How on earth would they have thought that would happen? Maybe that’s becoming less unknown now that the pattern has become familiar, but certainly in the early 2010s, I would say, these mass online shaming events were a little more surprising to people. So there’s a sense of total uncertainty: you could go from having a very small social network to being the number one most shamed person in the universe. That’s [why it] makes sense to talk about this in terms of uncertainty, especially from the user’s perspective. But the platform itself might be translating that uncertainty into volatility—in other words, into a measurable distribution of adulation or shame, a trackable change in reputation that unfolds over time and that is measurable. You could, for example, produce a Topsy graph showing uses of the name of a particular person who has been heavily shamed on Twitter: you would see nothing, nothing, nothing, [then a] massive spike, and then slowly it comes down. There are all kinds of platforms that make us very familiar with thinking about reputation in those terms. Charting something that looks almost like a market diagram of stock price fluctuations over time starts to be a familiar form that shifts into personal reputations as well.

Of course, there are other platforms that do that—platforms that allow you to see your “analytics,” right? Like many academics, I’m on a platform that gives you the option to click on analytics and see a chart of how many people looked at your work over time. So there’s a kind of automated diagramming of changeability, and a marking of the bounds of changeability, for people’s personal or professional reputations, which I think comes to be really central to the logic of how reputation works.

AE: I actually know a lot of examples of this from Russian cyber warfare, because Russian propaganda has been obsessed with likes and reputation for a very long time, so they have a lot of professional software solutions that quantify exactly the fluctuations you’re talking about. So, introducing the warfare situation into the conversation: what reputation volatility allows for, right, is reputation warfare. I wanted to ask you to elaborate on that term—the difference between reputation warfare and reputation volatility, and so on.

ER: Yeah, sure. Basically, I argue that in the early 2010s, let’s say, there was a predominant narrative about why reputation measures were out there on all kinds of platforms—particularly, although not exclusively, e-commerce and so-called sharing economy platforms. And that was to facilitate trust. You had tech enthusiasts and writers such as Rachel Botsman, for instance, writing about how great it was that, through online platforms, you could have a distributed form of trust, since everyone can see how people have behaved—have they paid people back, have they been a reliable guest, all of these sorts of things. The idea, at the time, seemed to be at least in part that this would create trustworthy online environments and facilitate trust between people who might not know each other in advance. Of course, there are a lot of other things going on in reputation. An idea of competition over status, for instance, is a big part of the logic of Facebook—instantly seeing how popular people might be, being able to read this index of hierarchy. And what I’m arguing is that there’s a shift away—at least in many contexts, not necessarily all—from what I’m calling the reputation capital paradigm of reputation: the idea that this is what reputation does, that it will facilitate trust.

Now, this is not to say that the idea of online reputation as facilitating trust doesn’t exist. Of course it does, and probably many of us have had moments of reading reviews before buying something online; people do look for trustworthiness in this way. But one of the things I’m arguing is that reputation is a derivative form of human capital and also a performative form of human capital—human capital becomes performative as well as derivative through online reputation measures. In other words, it purports to be kind of democratic: any online user could add their star rating out of five for a product, or like a post or not, and this would somehow capture the sentiment of the general public.

So there’s that sort of idea. But in a way, the prevalence of measures is actually, I would say, foregrounding the volatility of measures more, in many cases, than [the measures’] ability to create and facilitate trust. [We] could think about vote brigading, for instance—something a lot of social media platforms facilitate, whether by design or by accident: the idea that one can coordinate votes and actively perform a reputation into being, making something highly reputable or highly disreputable. So there’s a very performative dimension, and I think of very publicized, if not always clearly analyzed, events such as Gamergate as really clear examples of this. In [the] case [of Gamergate], you have a very misogynistic deployment of trashing as a performative mechanism—a performative intervention into people’s human capital, such that somebody’s reputation can be trashed. And this plays out through vote brigading—through [the] weaponized use of social media and social media measures. But it’s also very complex in terms of what’s going on, because in order to understand it we need to think about, in this case, domestic abuse, which was part of the motivation for Gamergate; misogyny, racism, heteronormativity, etcetera, are also part of the mix but somehow get translated into reputational terms. So there’s a sense in which the act of trashing a reputation can seem very simple, because it might relate to how a particular set of measures looks on a platform, but it’s discursively incredibly complex to unpack what’s going on there. And now we get to what relates more, I guess, to your research on cyber warfare, which is the very active militarization of likes and social media metrics—and certainly you know a lot more about the Russian context than I do.
But yeah, there are certainly a lot of governments thinking in these terms and, in various places, deliberately trying to weaponize the appearance of likeability on social media—the appearance of popularity.

One example that comes to mind for me is the Israeli act.IL app, which is designed to allow people interested in bolstering the reputation of Israel online to complete social media missions. For example, it might be posting nice things on a foreign celebrity’s wall when they decide to visit Israel, or questioning a pro-Palestinian voice on a particular platform—all kinds of app-coordinated interventions into the value of online reputation, which are precisely taking advantage of the fact that reputation is, we could say, derivative human capital and performative human capital. That one can “like” it into being. To like something is to make it appear as likeable and, therefore, to give it reputation.

Emily Rosamond

ER: I’m very interested in the derivative condition of online reputation, which allows for what I refer to as a widespread becoming-tactical of online reputation. Another thing I think is very interesting is that this blurs the distinction between, let’s say, very clearly militarized examples—an actual cyber warfare operation linked to a particular military or a particular troll factory, etcetera—and, in a smaller way, all online users, who at least to some small extent have a place in manipulating the distribution of reputation capital. And that’s kind of by design: by the performativity of the measures, by the fact that liking a particular thing is often aimed at getting something to appear on a news feed, or maybe trend, or maybe even go viral. So there’s a persistent appeal to that. This sense of becoming tactical might be present in something as simple and straightforward as liking friends’ posts—even that probably has a tiny bit of a tactical dimension—right up to a major military operation that’s really thinking about tracking signs of likeability.

AE: Yeah. So just to summarize briefly, because I assume a lot of new information is coming for those who don’t have an economic background: essentially, in the reputation capital paradigm, as a user you get more likes or followers over time. [Whereas in] the reputation volatility paradigm, that’s not guaranteed—you can get bashed and lose your reputation capital, or get banned from the platform. So there’s a switch from gradual growth, or stability, to this volatility, with warfare as a strategic response to gain [an] advantage.

What I wanted to ask you next is: we can think about warfare more within the reputation paradigm, and if it’s warfare, there must be defensive and offensive ways of performing reputation warfare. In my project “Circuits of Truth,” which is part of this exhibition and builds on your work, I look into reputation marks—the blue ticks—as an instance of such defensive reputation warfare. I analyze how blue ticks save the most profitable users for the platform—influencers and micro-influencers—from this instability, or bashing. Verification marks define the gained reputation of a user and allow them to remain subject to the previous reputation capital paradigm—[to] be protected from this online bashing. So I wanted to ask you: what are other examples of offensive reputation warfare you might know of? And maybe another interesting thing would be to think about the difference between offensive and defensive modes of reputation warfare.

ER: That’s really, really interesting. Yeah, it’s great, and I think that’s absolutely right. I’m interested in the fact that yes, there are strategic actors, and I talk a little bit about Steve Bannon, the late-arriving strategist of the Trump 2016 campaign, who very much [demonstrated] a way of thinking about reputational volatility—in this case, Hillary Clinton’s—as something that could be capitalized on. And I was thinking about his orchestration of reputation volatility that was very much at arm’s length from the campaign—quite outsourced, through online troll armies and through [having] gotten interested in politics via his news outlet Breitbart. It was almost as though he was purchasing an option on reputation volatility: right, we can claim that for the campaign at a later date if it’s expedient for us at the time, but if not, then we can say, ‘Oh, that’s not us; it wasn’t us who did that.’ So it’s these really complex forms of orchestrating public opinion that I’m interested in, and [these campaigns] seemed to me to be perhaps at their most successful and effective when they think of volatility itself as something to be intervened [in]—something that can be valuable. Because, of course, volatility can be a very equalizing force, if you like. For example, say there’s a huge stock market crash: that might have incredibly violent consequences for many people, who might lose their homes because their mortgages default, or lose their jobs because the market is bad. And yet there might be a few strategic actors who can capitalize very effectively on that volatility—who might, for instance, have shorted a particular asset that’s about to go down. In other words, betting against the asset, making money if a particular stock goes down rather than up.
So there are ways in which volatility can make things really awful for a lot of people, but also, there are a few people who can really capitalize on it and so that makes it a very unequal medium. 

I was really interested in your exhibition, and in the really interesting diagram that shows a kind of feedback loop between reputation offense and reputation defense. I was interested in that because I hadn’t actually used the terms defense or offense in my book, but I was certainly very interested in the fact that you used them, and in how you took those ideas to a different place than I had. I suppose, in some ways, I hadn’t used those terms in part because of a slight feeling that they have a very binary connotation of action and reaction, offense and defense. And it’s certainly true that there are instances of that where, for example, there might be an offensive where someone attacks a particular reputation and then the attacked person defends themselves in a particular way. So there are certainly examples of a very binary offense-defense mechanism. But at least in my view—and this is something we might differ on in an interesting way—I would tend to think differently about things like the Twitter verified check mark, for example. There’s a media scholar, Alison Hearn, who writes a wonderful essay about this called “Verified.” Social media platforms are constantly producing very different conditions for different users—different classes of users, even. For people who have the verified check mark, that of course acts as a symbol of status. But it also ensures that they won’t lose their rank to the same extent that other users could. And I tend to think of that in terms of the term risk rather than defense exactly. I think quite a lot about the fact that reputation, on the surface, can appear quite democratic, quite universally applicable—as if it makes it equally possible for everyone to cultivate their reputation capital, their human capital… but in fact, it seems to me that it’s more like it creates a distribution of different risk profiles for different users.
So certain users might have almost no reputational risk on a particular platform—like a celebrity with a verified check mark, who maybe doesn’t have much risk at all—whereas another user might have a truly enormous amount of risk. And these things are not clearly defined either. It’s also not straightforward to say that acts of, for instance, actively diminishing somebody’s online reputation—even sort of shorting it, in a way, like betting on its demise—are necessarily politically regressive.

For example, the ‘Me Too’ movement comes to mind. Now, there are all kinds of debates about the strengths and limitations of the ‘Me Too’ movement, how it relates to a kind of court of public opinion, and how it relates to questions around race. But at the end of the day […] this was a viral campaign, using the hashtag ‘Me Too,’ that arose around or was catalyzed by a problem: that for Harvey Weinstein—now known as a major sexual predator who for many, many years preyed on aspiring Hollywood actresses, among other women working in Hollywood—there was kind of no other way to bring down his reputation. And the online platform really seemed to help with that, actually. Speaking of reputation defense, it’s interesting because it later emerged that part of the reason why the Harvey Weinstein allegations failed to come to light—in one word, you could call it power, of course, because he had quite a lot of it—was, more specifically, that he was threatening to make a big enemy out of any newspaper that dared to publish a story against him. So in some ways I almost want to call that an offensive defense, you know? I will defend my reputation by threatening news outlets if they go against me, because they will make a very powerful enemy in the process. Of course these are all questions of power, and there are probably relatively few people who have enough power to actually protect their reputations in that kind of way. But a counter to that—a kind of mobilization of a massive counter-speculation against Harvey Weinstein’s reputation—did sort of seem to help, in a way. And of course it wasn’t the only factor, because there were journalists who were working very hard to break the story in the New York Times, but it entered into an assemblage with all kinds of other mechanisms which, in my view at least, have everything to do with a kind of tipping point.
A tipping point where something like a general sense shifts. Where the attitude among news outlets—‘my reputation will also be diminished if I go against this person’s reputation, so I can’t do it’—sort of switches, and actually it will seem really behind the times if they don’t also publish a story on these allegations.

So there’s this very complex and very weather-like unfolding of tipping points in public opinion, which can play out in ways that are… it’s almost like, within these sorts of complexity, for me anyway, the terms “offense” and “defense” are in there, but the feedback loop between them is so fast that it almost breaks them down. Offense is defense is offense, you know? So that’s kind of how I think about it, but I’d be curious to hear more about how you think about it.

AE: Yeah, they’re actually very hard to distinguish, and that’s why I think the feedback loop—and risk—is a very good way of putting it. It’s more as if you’re standing in a field, and it’s about your position in that field; it’s not that there are two separate planes of defense and offense. It’s more about how much power you have in the situation.

ER: Yeah, absolutely. And it’s interesting because there’s a chapter in progress in my book called “Reputations at Risk, Reputation as Risk,” which is thinking about the fact that… I mean, there’s a lot of writing on risk societies. Ulrich Beck, for example, famously wrote about the risk society—modern society as one that was obsessed with predicting and mitigating risks, a future-oriented disposition that would, in doing so, also produce massive systemic risks at the same time. I was definitely thinking about that work; thinking about how online reputation has played out and taken shape as something that, at least from certain angles in early conversations, seemed to be about mitigating risk—you know, “how will I be sure that I won’t be ripped off by this seller or this host or whatever.” But then it becomes clear that that kind of risk-mitigation logic—reputation as risk mitigation—existed long before platforms, right? I mean, for instance, how do you know an employer’s any good before you take a job? You might try to listen to the gossip and see what kind of reputation that employer has as a good employer. But the becoming-reflexive quality of this particular logic of risk mitigation via reputation then starts to produce more and more risks and, very importantly, more and more stratification of who is at risk and who is exempt from risk.
And one of the things I’m interested in, for example in the 2016 Donald Trump election campaign, is the idea that in that race reputational attack was perhaps heavily foregrounded partly because the key voters were what the Cambridge Analytica analysts housed in the Trump campaign termed ‘double haters’: in other words, potential swing voters who really disliked both Trump and Clinton but were highly likely to turn up and vote. Therefore, tarnishing the opponent’s reputation could be a very important strategy.

But also, it was interesting to me that there was a particular demographic which was, if you like, the reason why reputation trashing was so important in that election. But also […] it seemed to me that part of the dynamic I saw playing out was that attempts to tarnish Hillary Clinton’s reputation seemed to be quite effective because she had always been playing that game. She had been going for a good reputation as a Washington figure and a politician. Trump was never playing that game, and so nothing really seemed to stick when people tried to target his reputation or call him racist or misogynist. It didn’t seem to matter. This has a lot to do with the relationships between race, gender and reputation, for sure. And in my writing I am trying to link these terms to this kind of risk logic, because it seems very clear that certain subjects structurally bear a lot more reputational risk than others. But there’s also a kind of production of what I call a reputational sovereign. He—and I’m using ‘he’ quite deliberately here, thinking of Trump as an example—is he who remains outside of reputational volatility: the volatility that will be very damaging for others will not touch the sovereign, the exception to the rule of reputation.

AE: Yeah, that’s very interesting—[this idea of] the sovereign. I feel like Putin can be thought of as a reputation sovereign. Like so many investigations were published about him and still, he’s gonna go on for another 20 years, probably.

And I wanted to move on now to the cyberwar context, and highlight this switch that you’re signaling, which I found very interesting, because in the broader cyber warfare literature I was stumbling upon the same term that you talk about. In particular, in the book Cyberwar and Revolution, Svitlana Matviyenko and Nick Dyer-Witheford talk about the militarization of cyberspace—a new paradigm wherein social networks, which have access to very sensitive political data, as we know from Cambridge Analytica, can create new, unprecedented layers of polarization, influence elections, etcetera. So cyberspace becomes this new, very important context for cyber conflict. And I was wondering, following the writing on the economy of cyberspace published some five years ago—in particular the platform economy books—whether from ‘platform capitalism’ we now need to move to thinking more about platform militarism, in a sense.

ER: Yeah, in terms of platform militarism, I think there’s a lot of interesting potential in that term. It’s not a term that I’ve used myself, although I do talk about the weaponization of social media. And the prevalence of even just the form of thought ‘the weaponization of X’ is something that William Davies talks about in his book Nervous States, for example, as part of a breakdown of the distinction between war and peace, which have often been seen as polar opposites of one another. Something like the weaponization of everyday tools—including social media, including likes, including all kinds of other things as well—speaks to an erosion of the distinction between war and peace. The weaponization of social media is something that other people have done more in-depth work on, especially people coming from a military background, which I personally do not. For instance, there’s a book by Singer and Brooking called LikeWar that goes into the weaponization of social media in a variety of ways. They’re interested, for example, in how online aggression can increase offline violence—in gang wars in Chicago, let’s say—or indeed in the public relations disposition that starts to merge with military operations, and in things like the Act.IL app, whereby the international reputation of Israel, at least in theory, is managed. And there’s also, by the way, some interesting scholarship around how those operations can fail. Because of course all of this stuff is really a gamble. It’s very complex, and sometimes becoming tactical within the online space—the use of tactics—can spectacularly backfire, and the very well-resourced actors who might think they’re controlling the field can sometimes be really blindsided if they grate with public opinion, for example.

So it’s really quite complicated how this stuff plays out. Singer and Brooking are partly picking up on the military tome On War by Carl von Clausewitz, published in 1832—a massive, somewhat rambling book. One of the things they pick up on in particular is von Clausewitz’s—for the time—novel decision not to see war as the complete interruption of politics and business as usual, as was often done prior to that book. Instead, he was thinking about the fact that war is simply a continuation of politics by other means. And what is politics? Well, it’s getting what one wants. So there’s a sense of a lack of clear distinction between war and politics that they’re really picking up on when they think about how social media is being weaponized—how the tools that social media purports to offer can very easily become weapons.

So I think that idea of weaponization is really interesting. Maybe just one last thing to say on this: there are a lot of terms associated with cyberwar, and I don’t use the term cyberwar in my book—not because I think what I’m terming reputation warfare isn’t relevant for cyberwar, which I think it definitely is, and I think some of your research really clearly shows that. It’s more that there are a lot of aspects of cyberwar that are not relevant to what I’m describing as reputation. So, you know, server hacks—I think I have just one server hack in my book, to do with Climategate, but it’s used in a very precise context because it was absolutely targeting professional reputation. That’s sort of why I don’t use that term myself, although there are so many ways in which what I’m talking about really does link with cyber warfare. And there are other terms like information war, for example—or propaganda—which come up a lot in these discussions, and of course in what is sometimes termed a post-truth moment. I use that term myself sometimes, although with a little bit of caution, because I’m not fond of the connotation… as if there were some kind of erstwhile, more truthful past, which doesn’t seem accurate at all. But the idea of the manipulation of information and the micro-targeting of voters, for example, as in the Cambridge Analytica scandal, is very important. On the other hand, I would say there are also very important caveats in this field. Scholars like Yochai Benkler are doing some really interesting work on, for example, making sure that the impact of Cambridge Analytica on the 2016 Trump election is not overstated at the expense of a clearer and more nuanced interpretation of what’s going on in the field.
It’s very easy to identify behind-the-scenes ‘bogeymen,’ if you like, or to just go “oh no, they’re manipulating us”—and it’s true to an extent, but one of the things that Benkler argues is that it’s not immediately clear that Cambridge Analytica’s psyops were necessarily all that much more effective than previous attempts to persuade and even manipulate voters.

These things certainly do represent clear long-term threats to democracy. But at the same time, when we analyze them it’s very important to think about what they haven’t accomplished, as opposed to construing them as magical, all-powerful, all-knowing operations that can totally succeed all the time and know exactly how to get every single voter to go their way. Because they’re not: sometimes they fail, sometimes they work really well. A lot of what I’m talking about on online platforms actually emerges partly by accident, really. The dynamics are so complex when thinking about the emergence of what I call reputational weather online—the patterns are so complicated that no evil mastermind or platform designer really could have anticipated things in advance, I think.

So there are a lot of unintended consequences. But also, I suppose, one of the things I really wanted to do is offer a corollary to the framing of information war and to the very prevalent foregrounding of informational manipulations, which are very important to all kinds of regimes, propaganda campaigns around the world, and other things. In a way, for me, reputation warfare is a parallel and related but distinct set of operations to something like information warfare or online propaganda. One of the things I take from Gloria Origgi’s work on reputation is the idea that reputation has an epistemological orientation—in other words, it is a sort of social epistemological tool. It helps people make judgments about things even when they’re not necessarily experts on those things. So, for instance, if someone is choosing which doctor to go to—let’s say they probably don’t have the specialized knowledge to know exactly which doctor is best—they might read the reviews or ask a friend for a recommendation, and that will help them make a choice they’re happy with. And one of the things that Gloria Origgi argues is that when there’s a massive glut of information available, it becomes harder and harder to ascertain the quality of that information. Therefore, reputation becomes more important and more foregrounded as a kind of shorthand—as an epistemological tool for trying to work out what is credible and what isn’t. So, for example, if I’m looking at an online article, I might really be thinking about the reputation of the publisher of that article as a way of prejudging the content. I’m interested in how ideas like information warfare, or even post-truth or propaganda, focus on the content side of things, whilst there’s also this reputational aspect that is linked to the problem of evaluating information.
And certainly, as we know, on many online platforms it’s incredibly contradictory, incredibly filtered and indeed sometimes very weaponized information that is being meted out. But the role of reputation judgments in evaluating information is distinct—related and linked to reputation warfare, but something else, I would say.

AE: In your work, you talk a lot about automation and algorithmic ratings. And there is also another trend in cyber warfare literature which discusses the use of algorithmic tools, or automation, to optimize cyber warfare. Bots would be the best-known example: fully automated accounts posting comments or putting likes. But there are also other modes, such as so-called cyborgs, essentially human operators augmented by sets of scripts, or trolls who would be using other modes of automation. So, in terms of these kinds of automation and reputation—the use of automation in making reputation more volatile, and this kind of automation in cyber warfare—I wondered what the final result would be: would they affect each other and exacerbate each other’s effects? And with that I was thinking about the birth of high-frequency trading, and I think it’s interesting to think about how high-frequency trading entered our real life and what might happen next with these kinds of markets.

ER: Really interesting question. The fact that there’s an automated layer—or probably better to say many layers—of online reputation is really significant. In my book, I argue that online platforms inaugurate partly accidental, vast, distributed reputational markets that reorient what a public sphere or public discourse could look like. You mentioned high-frequency trading, and there’s a really interesting example—it’s not what I’m focused on—but there’s a recent book by an author called Tim Hwang, called Subprime Attention Crisis. It’s quite interesting: he’s talking about the construction of the online advertising auction system, which is designed to allow all kinds of platforms to generate revenue by displaying advertisements. It was actually set up by ex-high-frequency-trading people, which I did not know before reading his book, and it’s basically a real-time auction system that works very much like high-frequency trading. Every time a user loads a page, there is a real-time automated high-frequency auction for that particular ad spot on that page. So there’s a direct link there with high-frequency trading and with how attention, in Hwang’s analysis, is being packaged and understood as a kind of derivative form. His argument, essentially, is that there’s going to be a subprime attention crisis. Because what is this high-frequency online ad auction market actually selling? It’s selling users’ attention, of course. But it’s not selling that attention directly. It’s selling a derivative form of the attention—in other words, a particular measure of how you’re going to get your ad on a page in front of some eyeballs. So there’s a kind of derivative form, but the value of the underlying asset, Hwang argues, has gone down considerably.
There are huge generational differences, for example, in how much people actually bother to look at the ads. Younger generations barely even register them at all. So there’s almost no value, in a way, to this underlying asset of attention, but it’s still being traded on this derivatives market for attention, which Hwang argues will go subprime and then create a huge problem in terms of how platforms are supposed to generate revenue.

So that’s a parallel but very intriguing example, I think, of how there’s a kind of derivative market for online attention, if you like. As for online reputation: while I do talk about automation in my book, there are scholars who go way deeper than I do in that direction. What I’m talking about is not so much platform algorithms themselves—insofar as one can study them at all, given that they’re proprietary—as the kind of meeting point between these automated mechanisms, which will amplify or fail to amplify different reputational signals, and user activity. At the heart of it, my interest is actually in the user-focused act of clicking on stuff and liking stuff and friending and all of that—and in thinking about the accidental, vast, distributed reputational market as something that inaugurates what I call crowdsourced credibility. In other words, the fundamental pretense is that it’s a crowdsourced operation, and you can say, “right, by liking this I’m contributing to the accruing reputation of this particular YouTuber,” and so forth. What I think is quite interesting is that act of participating in crowdsourced credibility—and of course it’s not always human users doing that. As we know, there are bots that routinely amplify politicians’ tweets, for example, to make them look more popular. Which is also a tactic that can backfire, because it can look a bit desperate, shoddy, fake—you know, all of these sorts of things! So again, how effective these things actually are is not straightforward at all, is it? It’s not always the act of a human user, but in many cases it is, and there’s certainly an appeal to the human user to be part of this crowdsourced credibility—to conflate voting and liking, in a certain way. But at the same time, take something like the like button, for example.
Part of its job is to generate data for a platform such as Facebook. Where there might previously have been no data about what people actually like among what they see on a page, now there is some data, which can then be used, analyzed, and mobilized in all kinds of different ways.

I think there’s a funny—I would say almost retroactive, or slightly residual—quality to the Facebook “like” as a data point. My sense is that the “like” is still important data for Facebook, but at the same time, the data they use keep getting more and more detailed, more and more nuanced. For example, video completion rates: how much of a particular video someone will watch from their newsfeed before they just let it scroll by. So there’s more and more minute data going in all the time, perhaps constantly changing just how important the “like” itself is as a data point. But one of the things I’m very interested in is that platform algorithms will quite often increase inequalities between different levels of popularity of a particular post or video, and indeed between different users as well. This is something I talk about when thinking about YouTubers and the problem of trying to make it on YouTube—particularly in an attention marketplace that’s expanding at such an extraordinary rate that what one had to do to become a YouTube star six years ago is probably way less than what it would take to achieve that level of attention now. There are so many people trying it out that the possibility of grabbing attention is a lot weaker. But equally, I’m arguing that there’s this interesting phenomenon, which a lot of YouTubers will talk about, where they’ll say that if they’re trying to actually monetize their platform, they are really beholden to ‘the algorithm,’ quote unquote. Of course there’s not one single platform algorithm, but still, the idea of the algorithm as a kind of imagined entity of what the platform seems to want becomes very palpable, it seems, for some YouTubers in terms of how they’re thinking about what they post.
And there are indeed YouTubers who talk about how they will change the content they’re posting, and the frequency with which they’re posting, in order to curry favor with the platform algorithms. In some sense it might be all about trending, all about getting into that recommended-videos stream—otherwise you may never reach a monetization threshold or get much of an audience.

I call these attitudes “status moods.” I’m not deeply analyzing automation in and of itself, at least in this book; I’m trying to think about how automation exacerbates status inequalities, and then how that funnels back into the moods of how people understand what status even is and what might be interesting or important about it. If, in grammar, a grammatical mood expresses the attitude of the speaker towards what is spoken about, I’m thinking about the status mood as the implied attitude of the person seeking status towards status itself. I argue, for instance, that there’s a meritocratic mood of status […] the idea that, based on merit—which is of course a very contested concept right now, and generally there are a lot of debates on meritocracy—merit will beget reputation, which will beget status. And that exists on platforms. For example, if somebody’s looking at David Bowie videos on YouTube, certainly his status as a musician doesn’t depend on what’s on YouTube at all.

Then there’s maybe a kind of cultivative mood, which is about a kind of giving of oneself—like, “I give you my platform, my YouTube channel; here are all the yoga classes I’m offering you for free, if you’ll just please like and share this channel and subscribe,” and all that. So a cultivation of a community through one’s online status as a kind of gift—and of course there are longstanding debates on online gift economies and the problem of the gift; Tiziana Terranova’s work touches on that, certainly. And then there’s something I call an “ascetised mood,” or even a “lottery landslide mood,” which is very much a reflexive mood of “I’ve got to appeal to what the algorithm will pick up,” or “I will tweak the content of what I post until I find the exact logic of what will get picked up.”

And so, I’m very interested in these subtle interactions between an appeal to mostly human crowdsourced credibility and then the sort of automated amplification or indeed dampening of the signals based on what the platform kind of thinks might be more marketable and [likely] to garner attention. 

AE: Thank you very much for the amazing conversation.