
On Digital Data Dispossession and Capitalisation

Yannick Schütte, Cultural Studies, Leuphana Universität Lüneburg

Abstract

This article examines the ways in which patterns that were once ascribed a destabilising quality with regard to the capitalist system now function as reinforcements of the latter's hegemony. I will turn to Virno's reading of Marx's 'Fragment on Machines' and his concept of immaterial labour, as well as Deleuze and Guattari's idea of capitalism as a system with an inherent ability to reterritorialise itself. I will then outline the structures and workings of this reterritorialisation, which occurs through a dispossession of personal data, or a new expropriation of the commons, as Jodi Dean has pointed out. The subsequent part scrutinises how the analysis and deployment of dispossessed data work as a strategy to capitalise on them. In that regard I will examine how the opaque operational modes of personalised services affect the agency and subjectivity of the user, by referring to Gilbert Simondon's philosophy of technology. The final part examines models and platforms that attempt to provide alternatives to the hegemonic structures that determine online traffic to a great extent.

Keywords: Immaterial labour, personalised web services, digital tools, reterritorialisation, capitalisation.

Introduction

Whether in our professional or private life – if this distinction can still be made – we operate with digital tools that serve us to obtain information from various sources, to communicate in an uncomplicated manner or to purchase goods at the lowest price. We therefore turn to personalised assistants such as those offered by Amazon or Google, which seemingly provide us with these services for free and thereby sustain the narrative of the web as a democratising, liberalising and unifying structure. That this narrative does not correspond to the web's real modes of operation has been revealed by the scandals around governmental surveillance and the abuse of data. In the post-Snowden world we are aware that our online practices are subjected to large-scale surveillance and monitoring – both governmental and corporate – and that they are utilised in order to create value. Consequently the internet has not proved to be the place for subversion that it was thought to be; instead it has been subsumed by the capitalist system. In the following I will examine how capitalism has incorporated the web space, scrutinise how it changes our modes of being together, and present private as well as governmental measures that try to address the issues at stake.

Reterritorialisation of cyberspace

In his 'Fragment on Machines' Karl Marx develops a scenario that depicts the emancipation from capitalism towards communism. According to him this process will be caused by the breakdown of a production based on exchange value, in favour of one based on abstract knowledge (Virno, 1992: 10). Paolo Virno discusses Marx's proposals in 'General Intellect' and compares them to the workings of the post-Fordist economic system. As predicted, abstract knowledge has become the driving force of production, but it has not genuinely contributed to destabilising the capitalist system. In fact abstract knowledge has been incorporated by it and has thus reinforced its dominance. Characteristic of this is the involvement of the time outside of labour in the production of value. According to Virno, information, consumption or 'even the greater capacity to enjoy are on the verge of being turned into a laborious duty' (Virno, 1992: 17), a development that has led to the emergence of a prosuming subject. Addressing this very development that Virno describes, Maurizio Lazzarato emphasises that information, and communication in particular, have become far more valuable in terms of productivity than actual labour. He states that within this process societal communication, along with its gist, the production of subjectivity, has itself become productive (Lazzarato, 1998: 58). With the emergence of Web 2.0 the tendency described by Lazzarato is reinforced and alters the definition of commodities, which in turn touches on the perception of communication yet again. As a consequence of communication shifting more and more online, 'commodities have just changed from material/immaterial artifacts to people and their data' (Petersen, 2008).

This shift illustrates an analogy to Virno's reading of Marx's 'Fragment', to the extent that the capitalist system adopts structures that were originally ascribed a subversive potential. Just as Marx expected the rise of abstract knowledge to be a decisive element in the emancipation from capitalism, there was, or still is, a discourse and an image of the internet serving as a technology of subversion: a discourse which has been subjected to thorough critique after (for example) the revelations on governmental surveillance and the abuse of user data (see Blas, 2016; Pasquale, 2015: 42–48). Despite the fact that there certainly are democratising or liberalising elements that contribute partly to a destabilisation of the system, through user-generated content as well as user-driven information, the same infrastructures are used in reverse to assert the hegemony of capitalism. Due to capitalism's inherent capacity to reterritorialise and reinvent itself, as Deleuze and Guattari have pointed out in A Thousand Plateaus, the system is able to include structures that were initially ascribed a subversive potential. As content moves rather freely through the World Wide Web, it may become easier for users to avoid paying for music or films, but in reverse their data becomes visible as well, which has played a key role in including the internet in the cycle of capitalist production. Capturing and analysing user data has become one of the essential practices of the current capitalist system, which marks the beginning of a new economic paradigm, as Jeremy Rifkin has pointed out (Arzt, 2014).

The extensive work of Christian Fuchs on the subject exposes which aspects have been critical for the emergence of the 'post-Fordist accumulation regime' (Fuchs, 2008: 85). According to him, terms such as 'customer-oriented production', 'just-in-time production', 'outsourcing' and 'decentralisation' can be seen as representative of our current mode of production (Fuchs, 2008: 85–86). Moreover, he considers the intertwinements between information technologies, computer networks and knowledge critical to the formation of a system that is referred to as information, knowledge or networked capitalism (Fuchs, 2008: 83). Furthermore, he speaks 'of the tendency of the commodification of everything' (Fuchs, 2014: 113), thereby referring to the systematic and growing dispossession and exploitation of user data – or, as Hardt and Negri have argued regarding the system's composition:

There is no longer a factory wall that divides the one from the other, and 'externalities' are no longer external to the site of production that valorizes them. Workers produce throughout the metropolis, in its every crack and crevice. In fact, production of the common is becoming nothing but the life of the city itself.
(Hardt and Negri, 2009: 60)

Data dispossession

This development has generated a system that Jodi Dean refers to as communicative capitalism. It is based on the expropriation and exploitation of communicative processes. She states that 'linguistic, affective, and unconscious being-together, flows and processes constitutive not just of being human but of broader relationality and belonging, have been co-opted for capitalist production' (Dean, 2014: 4). Communication must be understood in a larger sense here: Dean is referring to all kinds of activities that materialise in the form of data. GPS locations, financial transactions, and the videos we watch or record are referred to as raw materials in the Big Data discourse. The frequently drawn metaphors of data being the new gold or oil reveal how they are identified as natural resources. According to Dean, data, as a communal product in its origin, 'is seized, enclosed and privatised in a new round of primitive accumulation' (Dean, 2014: 10). She draws a parallel between this development and the separation of the producer from the means of production that Marx outlined in the 'Critique of Political Economy', referring to the expropriation of common property that leads to the transfer of the 'pigmy property of the many into the huge property of the few' (ibid.), as Marx has put it. Through this data dispossession our modes of acting and interacting are changed into capitalisable units and, moreover, an expropriation of certain temporalities takes place. Dean argues that, firstly, the existence of the momentary is threatened because every action becomes traceable, and secondly, a certain kind of futurity seems to vanish due to the increasing predictability of our patterns of action (ibid.: 11).

Such accumulation of capital can exist because the system works in anticipation of prospective value yet to be produced. Consequently companies working with this model are tied to the expectations of real-economy profits of other companies as well as to 'an affective "law" of value', as Arvidsson and Colleoni (2012: 142) have argued. By this they refer to the ability of companies such as Facebook or Google to 'attract and aggregate various kinds of affective investments' (ibid.) that exceed their capacities to create revenues through advertising, 'but are related to their perceived capacity of attracting future investments' (ibid.: 145). Thus their powerful position regarding the extraction of data grants these companies vast potential to bring in financial rent in the future; in the present, data serves as a tool for product placement and marketing as well as market research, and is applied to recognise trends or behavioural patterns of users and to influence their conduct.

The value of relations

Analysing relations is at the core of how these companies capitalise on data. Value is no longer a quality that depends on a single object or subject, but is created through the correlation of data. José van Dijck has coined the term connectivity, which she describes 'as an advanced strategy of algorithmically connecting users to content, users to users, platforms to users, users to advertisers, and platforms to platforms' (van Dijck, 2013: 8). Accordingly the value of a network follows the logic of Metcalfe's law, which states: 'The value of a communications network is proportional to the square of the number of its users' (Dean, 2014: 5). This implies that the more connections and links a network contains, the more opportunities exist for the accumulation of capital. Hence Google's PageRank algorithm is one of the most valuable algorithmic tools and serves as one of the best information retrieval algorithms. Its strength derives from its ability to collect as much information as possible and to correlate it, which allows it to establish countless links between users and content, users and platforms, users and users, and so on (see Dean, 2014: 5).
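
Both ideas can be made concrete. Metcalfe's law follows from simple combinatorics: n users can form n(n−1)/2 distinct pairwise links, so the number of possible connections – and, on this argument, the network's value – grows roughly with n². PageRank, in the form Brin and Page originally published it, scores a page by the likelihood that a 'random surfer' following links lands on it, so that links from well-connected pages count for more. The sketch below is a minimal power-iteration version of that published formula; Google's production ranking system is proprietary and far more elaborate, so this serves only to illustrate the correlational principle described above.

```python
import numpy as np

def pagerank(adjacency, damping=0.85, tol=1e-9):
    """Power-iteration PageRank over a directed link graph.
    adjacency[i][j] = 1 if page i links to page j."""
    a = np.asarray(adjacency, dtype=float)
    n = a.shape[0]
    out_degree = a.sum(axis=1)
    # Pages without outgoing links spread their rank evenly over all pages.
    transition = np.where(out_degree[:, None] > 0,
                          a / np.maximum(out_degree, 1.0)[:, None],
                          1.0 / n)
    rank = np.full(n, 1.0 / n)
    while True:
        new_rank = (1 - damping) / n + damping * transition.T @ rank
        if np.abs(new_rank - rank).sum() < tol:
            return new_rank
        rank = new_rank

# Four pages: pages 1-3 all link to page 0, which links back to each of them.
links = [[0, 1, 1, 1],
         [1, 0, 0, 0],
         [1, 0, 0, 0],
         [1, 0, 0, 0]]
print(pagerank(links))  # page 0, the best-connected node, gets the top rank
```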

Relations that reveal information concerning an individual user's behaviour, as opposed to the behaviour of the collective, are attributed a particular value. As Wendy Chun points out in 'Networks Now', 'every interaction is being traced and then incorporated with other traces and used to understand you, where you is always singular and plural' (Chun, 2015: 306). The frequently used model of an enclosed platform, employed by social media sites, is so attractive because the creation of a digital huis clos makes it easier to observe the relations and actions of users, to identify and understand emerging trends, and furthermore to regulate the ways in which users act.

Companies deploy different strategies of data mining to obtain a precise picture of their users, in order to create a personalised profile, place suitable content and advertisements, and tie users to the site. And as the competition for data is fierce these days, an economy of attention has emerged, whose modes of value production can be summarised with the phrase 'repetition produces value' (Chun, 2015: 305). Clicks, 'likes', retweets or reposts have become direct economic factors, and are therefore highly coveted. One example is the immediate linkage between a YouTuber's number of subscribers and the advertising revenue s/he receives from the platform.

But as Web 2.0 was conceived as a democratic structure, shouldn't it lead to a distribution of attention that assigns everybody the amount that he or she has merited? This certainly might sound plausible, aside from the circumstance that it could not be further from the truth. According to Jodi Dean the internet is a place 'where numbers matter more than content where how many takes the place of how come, where correlation displaces causation' (Dean, 2014: 8). This proves to be true if we follow Albert-László Barabási's network theory and his thoughts on the inherent power-law distributions. The scale-free networks we are dealing with, in the case of the World Wide Web, frequently display similar structures: the node with the most connections usually has more than twice as many connections as the node with the second most. This leads to a structure in which a small number of extremely well-connected nodes, called hubs, stand out against a vast majority of sparsely connected nodes. This unequal distribution results from the logic of 'preferential attachment' (Barabási and Bonabeau, 2003: 55), which implies that new nodes entering the network preferentially connect to hubs and thereby reinforce the development of hegemonic structures.
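
Preferential attachment is easy to observe in simulation. The sketch below grows a network by the rule just described – each newcomer attaches to existing nodes with probability proportional to their current degree – and then lists the best-connected nodes. The parameters are illustrative and not drawn from any measurement of the actual web graph.

```python
import random
from collections import Counter

def preferential_attachment(n_nodes, m=2, seed=1):
    """Grow a scale-free network: every new node adds m links, choosing
    targets with probability proportional to their current degree."""
    random.seed(seed)
    edges = []
    ends = []  # every node id appears here once per link end (degree count)
    # Seed the growth with a small fully connected core of m + 1 nodes.
    for i in range(m + 1):
        for j in range(i + 1, m + 1):
            edges.append((i, j))
            ends += [i, j]
    for new in range(m + 1, n_nodes):
        targets = set()
        while len(targets) < m:
            targets.add(random.choice(ends))  # degree-biased choice
        for t in targets:
            edges.append((new, t))
            ends += [new, t]
    return edges

edges = preferential_attachment(10000)
degree = Counter(node for edge in edges for node in edge)
print(degree.most_common(3))  # a few early nodes end up as dominant hubs
```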

To illustrate this disparity, Dean turns to academic citation networks, where plenty of works are published but only the same four are cited by everyone, leading to a winner-takes-all economy that rewards repetition and generates success for the few, while at the same time creating a long tail in which a multitude compete for hardly any attention at all. Such a situation of disparity correlates with the extension of the field as well as the participation within it, which in the case of the internet is theoretically global, and 'the more participation, the larger the field, the greater the inequality, and therefore the greater the difference between the one and the many' (Dean, 2014: 8).

The power of personalisation

Repetition is, furthermore, not only decisive for understanding power-law distributions, but also predominant in the composition, and therefore the comprehension, of data sets. The ways in which we move around the internet are strongly tied to habits and consist mostly of repeating patterns of action, which reveal information about our relations to the agents we connect with. Friends are basically connections, and the strength of a friendship can be quantified via the number of connections between the two parties. Examining and comprehending habits is one of the principal tasks of data analysis, as habits are practices acquired over time that operate beyond our perception, having moved from voluntary to involuntary (Chun, 2015: 309). Being able to understand the habits of users allows companies to influence them – regarding shopping decisions, for example – in ways they are not even aware of.

For example, corporations like Target, the second-largest discount retailer in the US, have been working with customer statistics to develop individual customer profiles, which enables them to draw uncannily precise conclusions about the personal lives of customers. For instance, they are able to determine with an 87 per cent probability whether a woman who shops at Target is pregnant, and if so, the approximate expected date of birth – derived purely from customer statistics. Knowing that (shopping) habits are not easily altered, except during certain life periods such as the birth of a child, Target tries to influence customers in a subtle manner through personalised advertisements, so that they expand their usual purchases with products they would normally buy elsewhere; Target has been quite successful with this strategy for a while now (Duhigg, 2012).
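
Duhigg's account indicates that Target's statisticians scored shoppers on roughly two dozen purchase signals, but the actual model and its weights have never been made public. The following is therefore a purely hypothetical sketch – a hand-rolled logistic score over invented product features – intended only to illustrate how a 'pregnancy probability' could be read off a shopping basket.

```python
import math

# Entirely invented weights: Duhigg (2012) reports that Target's real model
# used around 25 purchase signals, none of which are public.
WEIGHTS = {
    "unscented_lotion": 1.4,
    "large_bag_of_cotton_balls": 1.1,
    "calcium_supplements": 0.9,
    "scent_free_soap": 0.8,
}
BIAS = -3.0  # baseline log-odds: most shoppers are not pregnant

def pregnancy_score(basket):
    """Hypothetical logistic score: P(pregnant | observed purchases)."""
    z = BIAS + sum(WEIGHTS.get(item, 0.0) for item in basket)
    return 1.0 / (1.0 + math.exp(-z))

basket = ["unscented_lotion", "calcium_supplements", "scent_free_soap"]
print(f"{pregnancy_score(basket):.0%}")  # ~52% for this invented basket
```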

Considering how precisely companies can deduce very personal information exclusively from how we shop, the question arises: how well do companies that have access to the combined data sets of all the traces we leave know us? From calls that are monitored for reasons of quality assurance, to subtler forms of tracking such as the analysis of GPS signals, to methods of monitoring that happen outside our awareness, such as the concentrated combination of data sets of our online activities, companies come to know more about our behaviour and relationships than we may know ourselves, as they have at their disposal highly detailed digital records of us that we cannot access. The abundance of information existing on us, generated more or less voluntarily by ourselves and collected as well as combined by data-mining companies, allows high-precision personalisation (see Christl, 2017). In Filter Bubble, Eli Pariser has broken down the strategies that Amazon, Google, Facebook and others deploy to obtain, and consequently to capitalise on, user data, and has questioned how this affects our agency. He argues that by means of data dispossession and personalised web services, which aim at the creation of value, we are each put in our personal filter bubble. According to Pariser the resulting consequences are to be regarded critically: we are put in parallel but separate, coexisting universes, which leads to an information determinism that prevents us from engaging with ways of thinking opposed to ours, without our even being aware of it, thereby depriving us of a key structure of the democratic system (see Pariser, 2012: 13). Whether the outcome of this development is as drastic as Pariser depicts can and will be discussed below. Nevertheless he offers some interesting case studies of the strategies of certain companies, which will be discussed in the following.

Amazon

At the heart of Amazon's success story is the fact that it has worked as a personalised bookshop since it was launched in 1995. As is now the norm on most platforms, customers are provided with shopping propositions or recommendations that seem to correspond quite accurately to their preferences, based on the purchases of customers with similar shopping profiles. The more customer data, the better the personalisation functions; therefore Amazon captures data on a large scale. Pariser indicates the extent to which Amazon accumulates user data. For instance, 'while reading books on your Kindle, information about tagged or skipped passages or pages is sent to Amazon's servers to predict what could interest you next' (Pariser, 2012: 37). Based on the information that customers' data reveals, the specific product propositions are inconspicuously altered to match the exact customer's preferences. This furthermore allows companies to reverse the mechanism, in that they can pay for their products to be shown, disguised as 'objective' propositions.
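
Amazon has publicly described its recommender as item-to-item collaborative filtering: items are compared according to who bought them, and a customer is offered the items most similar to those already in his or her purchase history. The sketch below illustrates that basic principle with invented data; the production system is, of course, far more elaborate.

```python
import numpy as np

# Toy purchase matrix: rows are customers, columns are items; 1 = bought.
purchases = np.array([
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 1, 1, 1],
    [1, 0, 0, 1],
], dtype=float)

def item_similarity(matrix):
    """Cosine similarity between the purchase vectors of each pair of items."""
    norms = np.linalg.norm(matrix, axis=0, keepdims=True)
    normed = matrix / np.maximum(norms, 1e-12)
    return normed.T @ normed

def recommend(customer, matrix, top_n=2):
    sim = item_similarity(matrix)
    scores = sim @ matrix[customer]          # items similar to past purchases
    scores[matrix[customer] > 0] = -np.inf   # never re-recommend owned items
    return np.argsort(scores)[::-1][:top_n]

print(recommend(0, purchases))  # items customer 0 is predicted to want next
```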

Google

The tendency that is worrisome about Google, or at least gives reason for precise scrutiny of the company's actions, is that 'there has never been a company with explicit ambitions to connect individual minds with information on a global – in fact universal – scale' (Vaidhyanathan, 2011: 16). Google may have started out as a web search engine, but as Siva Vaidhyanathan has argued, 'as the most successful supplier of Web-based advertising, Google is now an advertising company first and foremost' (Vaidhyanathan, 2011: 16). Therefore Google tries to make use of any kind of data it can get its hands on. By analysing users' click-signals, which contain information about their preferred search results as well as the time they spend between two clicks, information about their personality can be revealed (Pariser, 2012: 40). Aside from capturing each user's clicks and placing advertisements, Google has continuously extended its ventures and product offerings, such as Gmail, Google Docs and Maps, not to mention the acquisition of YouTube, and has thereby multiplied the possibilities for extracting user data, which has also led to even stronger ties with customers. Additionally, the company extracts data with its free advertising service AdSense, a service for blogs and other kinds of privately hosted sites that allows the host to place advertisements corresponding to the content of the site. Petersen argues that AdSense serves as a gold mine for Google: on the one hand it promotes the 'Don't be evil' slogan, as Google offers a useful service for other companies seemingly for free, while on the other hand it serves Google as a very productive monetary and data source (Petersen, 2008).
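
What Google actually derives from click-signals is not public, but the kind of measurement Pariser alludes to – the time a user spends between two clicks – is simple to compute from a click log. A minimal sketch, with an invented log format:

```python
from datetime import datetime

# A toy click log; what a search engine actually logs is not public.
click_log = [
    ("2017-03-01 10:00:02", "result_3"),
    ("2017-03-01 10:00:09", "result_1"),
    ("2017-03-01 10:04:41", "result_7"),
]

def dwell_times(log):
    """Seconds spent between consecutive clicks: a rough proxy for how
    long a user engaged with each result before moving on."""
    times = [datetime.strptime(ts, "%Y-%m-%d %H:%M:%S") for ts, _ in log]
    return [
        (log[i][1], (times[i + 1] - times[i]).total_seconds())
        for i in range(len(log) - 1)
    ]

print(dwell_times(click_log))
# [('result_3', 7.0), ('result_1', 272.0)] – a long dwell suggests interest
```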

Facebook

The development of Facebook's news feed, in combination with its EdgeRank algorithm, led to its dominant position among social networks and to its being one of the most dominant web corporations of all. EdgeRank arranges each user's personal news feed so that only 'relevant' notifications are shown. EdgeRank is constructed around three basic factors. Firstly, notifications from persons we interact with more frequently are shown preferentially, which means that the opinions on everybody's news feed are processed into sameness (Pariser, 2012: 45). Secondly, content types are rated differently in importance, so that notifications about a new profile picture or a changed relationship status are more likely to appear on the news feed; thirdly, new interactions are preferred to older ones (ibid.: 45–46). Furthermore, Facebook has expanded its sphere of power and presence to almost every page on the web with the creation of Facebook Everywhere, which enabled it to transform largely the whole web into a social network: personalisation à la Facebook on millions of sites has rendered vast data resources accessible (ibid.: 47). Besides this, Facebook also makes use of the so-called lock-in effect, as it tries to keep users in its system in multiple ways. For instance, links accessed through the mobile Facebook app are displayed on a site built into the app, to prevent users from leaving it too easily. Similarly to Google, it also ties users to its system through the amount of work they have invested in the creation of their profiles, which keeps them from migrating to other sites. But the most powerful instrument for capturing users in its system is arguably the social coercion Facebook exerts. As Fuchs has argued, opting out of the social networking site 'threatens the user with isolation and social disadvantages' (Fuchs, 2014: 256). Hence the power that Facebook is able to exercise over users is strongly tied to the expectation of affective rewards.
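
These three factors map onto the formula Facebook presented publicly around 2010: a story's score is the sum, over all of its edges (likes, comments and so on), of affinity × content weight × time decay. Facebook has long since replaced this with more complex machine-learned ranking, so the sketch below, with invented numbers, illustrates only the logic described above.

```python
import time

def edgerank(edges, now=None):
    """Sum over a story's edges of affinity * weight * time decay, as in
    the formula Facebook presented publicly around 2010. The numbers and
    the decay curve here are invented for illustration."""
    now = now if now is not None else time.time()
    score = 0.0
    for edge in edges:
        hours_old = (now - edge["created"]) / 3600.0
        decay = 1.0 / (1.0 + hours_old)  # newer interactions count for more
        score += edge["affinity"] * edge["weight"] * decay
    return score

now = time.time()
story = [
    # A close friend's comment an hour ago: high affinity, heavy edge type.
    {"affinity": 0.9, "weight": 4.0, "created": now - 3600},
    # A stranger's like from yesterday: low affinity, light edge type.
    {"affinity": 0.1, "weight": 1.0, "created": now - 86400},
]
print(edgerank(story, now))  # the fresh, close-tie interaction dominates
```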

Algorithmically governed agency?

The competition for hegemony in personalisation is fierce, particularly between Facebook and Google, as they compete for the same advertising revenues. While personalisation is promoted as helpful for us, it really serves others' marketing purposes. As Bernard Stiegler points out: 'In today's service economies, consumers are "discharged" of the burden as well as the responsibility of shaping their own lives and are reduced to units of buying power controlled by marketing techniques' (Lemmens, 2011: 33). Personalisation is a decisive factor here, as it provides the user with what s/he likes to see, according to algorithms that determine what is relevant and what is not, and therefore leads to a deprivation of agency without the user even being aware of it. The Internet Big Five – Amazon, Apple, Facebook, Google/Alphabet and Microsoft – are caught in the battle for user data, which is to them, as Pariser puts it, the 'most important battle of our time' (Pariser, 2012: 14). What has already been discussed in the discourse around surveillance is confirmed here: the internet does not work as the democratising, liberalising structure it was declared to be, but is instead misused for exploitative practices or as an apparatus of control (see for example Galloway, 2004).

According to Sherry Turkle:

Technology catalyzes changes not only in what we do but in how we think. It changes people's awareness of themselves, of one another, of their relationship with the world […]. It challenges our notions not only of time and distance, but of mind.
(Turkle, 2005: 18–19)

In this sense, in the case of algorithmic alterations of daily life we do not only encounter tools that improve the quality of our day-to-day routine; we also observe the aforementioned problematic tendencies.

Subjected to personalised web services that are apparently perfectly fitted to our own taste and preferences, we find that parts of the web are being edited for us without our noticing, which might be perceived as a deprivation of our right to decision-making, and which changes our perception of the world. In this situation another reference to Marx's 'Fragment on Machines' is possible, which could be seen as a hyperbolic representation of the present situation:

The worker's activity, reduced to a mere abstraction of activity, is determined and regulated on all sides by the movement of the machinery, and not the opposite. The science which compels the inanimate limbs of the machinery, by their construction, to act purposefully, as an automaton, does not exist in the worker's consciousness, but rather acts upon him through the machine as an alien power, as the power of the machine itself.
(Virno, 1992: 3)

The personalised but inanimate algorithm acts upon the user as a black box, as an opaque structure that operates under a claim of neutrality and on the foundation of data. It therefore becomes essential to consider the inanimate character of algorithms, as they do not operate according to an ethical impetus, and to ask whether the 'recommendation engine' (Pasquale, 2015: 8) pays any attention to whether a recommended result fulfils certain social requirements.

For two major reasons – firstly the secretiveness of algorithms and secondly their general complexity – they are unreadable for the majority and appear as obscure black boxes. Such conditions bring to mind Gilbert Simondon's philosophy of technology, which addresses the estrangement between the human and the machine.

Still, the machine is a stranger to us; it is a stranger in which what is human is locked in, unrecognized, materialized and enslaved, but human nonetheless. The most powerful cause of alienation in the world of today is based on misunderstanding of the machine. The alienation in question is not caused by the machine but by a failure to come to an understanding of the nature and essence of the machine, by the absence of the machine from the world of meanings, and by its omission from the table of values and concepts that are an integral part of culture.
(Simondon, 1980: 11)

Reversing this very phenomenon can be understood as one of the essential propositions arising from this work of Simondon's. The ideal state he imagined was the human as a sort of conductor of the technical instruments, which work not against but with him (Hörl, 2008: 642–43). This insight led Simondon to think that a general supplementary technological education was necessary (ibid.) to avert a fall into hylomorphic patterns of thinking that conceive technology as opposed to culture or even humanity, since technological reality is perfectly suited to being modified, widened or completed (Simondon, 2011: 92). With this knowledge in mind it becomes clear that, in the moment of encountering problems seemingly evoked by technology, a close examination of the technology contributes more to a solution than its condemnation.

The structures of the internet still provide the possibility of positioning oneself in the network and of constantly altering this positioning. In order to reverse the failure to understand the machine, Simondon voiced the belief in the necessity of a 'technical culture' (Simondon, 2012: 75), which can be perceived as a vehicle for performing an adequate analysis of technology. Simondon therefore suggests that we should recognise and examine the existence and activity of technology, as well as its materiality, before it is subjected to the trans-individual and inter-psychological workings of society (Simondon, 2012: 234). We should accordingly bear in mind the materiality of silicon chips and server farms that are 'directly related to practical knowledge and the practical business of consumption' (Walker, 1985: 50), as they constitute the existence of the network and its algorithms, in order to oppose our alienation from the machine and to show that it would be misleading to argue that the subject is determined by the algorithm. In fact it would be more suitable to speak of a composite subjectivity that comes into being through the interaction of multiple (technological as well as human) actors, but which is originally in no way deterministic, as Clemens Apprich has proposed in reference to Simondon (Apprich, 2015: 154). This suggests the necessity of re-evaluating our perception of technology and raises the question of how the issue can be addressed: is governance an option for regulation, and which other possibilities remain?

Conclusion

As the internet is still a human construct, it can be subjected to alterations. As legal initiatives in the EU show, companies can be forced to change the settings of their algorithms, as happened when Google was obliged to remove content that infringed on people's personal rights. Whether these kinds of governmental interventions have been sufficient is highly disputable; in addition, they are highly complicated due to the vast disparity in technological knowledge between the authorities and the Silicon Valley corporations. Yet they reveal, firstly, the general contingency of the status quo and, secondly, support the necessity of a technical culture in Simondon's sense, which requires a rethinking of technology not as a mere extension of human labour power but as a foundation of a collective creation of world (Apprich, 2015: 153).

Besides, a consequent restructuring of web space will hardly be possible solely by means of governmental interventions. In fact it will require initiatives of the common to construct alternative network models. One example of such endeavours is the emerging movement of platform co-operativism – a term coined by Trebor Scholz. Platform co-operativism refers to collectivities that offer alternative web services which identify and fill niches of the predominant capitalist system and, on the basis of democratic governance and ownership, primarily serve the user instead of commercial interests. Furthermore, the initiators understand the project as a process that requires a variety of structural developments in respect of finance, law, policy and culture (Schneider and Scholz, 2016: 22–24). MiData can serve as an example of platform co-operativism. The co-operative focuses on the securing, storage, management and control of health-related data. By combining conventional health data with information from FitBit devices, MiData attempts to cut out private, commercially oriented data brokers and to redistribute control over personal data to those who have generated them (MiData, 2017).

In my view, movements like platform co-operativism, or comparable efforts that aim at more democratic and fairer models of internet platforms, could play a critical part in outcompeting the conventional actors. Admittedly, such enterprises can only be successful if the alternative models match or even surpass the currently dominant ones in terms of design, functionality and performance. This is why the matter needs to be communicated distinctly to society as a whole, and why governmental and private investment is needed in projects that aim at the creation of alternative business models and architectures.

References

Apprich, C. (2015), Vernetzt – Zur Entstehung der Netzwerkgesellschaft, Bielefeld: Transcript Verlag

Arvidsson, A. and E. Colleoni (2012), 'Value in informational capitalism and on the internet', The Information Society: An International Journal, 28 (3), 135–50

Arzt, I. (2014), 'Der Markt funktioniert nicht mehr. Jeremy Rifkin über den Kapitalismus', available at http://www.taz.de/!5032225/, accessed 27 January 2017

Barabási, A.-L. and E. Bonabeau (2003), 'Scale-free networks', Scientific American, 288, 50–59

Blas, Z. (2016), 'Contra-Internet', available at http://www.e-flux.com/journal/74/59816/contra-internet/, accessed 12 July 2017

Christl, W. (2017), 'Corporate surveillance in everyday life', available at http://crackedlabs.org/en/corporate-surveillance, accessed 6 August 2017

Chun, W. (2015), 'Networks now: Belated too early', in Berry, D. M. and M. Dieter (eds), Postdigital Aesthetics: Art, Computation and Design, London: Palgrave Macmillan, pp. 289–315

Dean, J. (2014), 'Communicative capitalism and class struggles', Spheres. Journal for Digital Cultures, 1, 1–16

Deleuze, G. and F. Guattari (1987), A Thousand Plateaus: Capitalism and Schizophrenia, Minneapolis: University of Minnesota Press

Duhigg, C. (2012), 'How companies learn your secrets', available at http://www.nytimes.com/2012/02/19/magazine/shopping-habits.html?pagewanted=1&_r=3&hp, accessed 15 March 2016

Fuchs, C. (2008), Internet and Society: Social Theory in the Information Age, New York: Routledge

Fuchs, C. (2014), Digital Labor and Karl Marx, New York: Routledge

Galloway, A. (2004), Protocol – How Control Exists after Decentralization, Cambridge, MA: The MIT Press

Hardt, M. and A. Negri (2009), Commonwealth, Cambridge, MA: Harvard University Press

Hörl, E. (2008), 'Die offene Maschine. Heidegger, Günther und Simondon über die technologische Bedingung', MLN, 123 (3), 632–55

Jenkins Jr., H. (2010), 'Google and the search for the future', available at http://www.wsj.com/articles/SB10001424052748704901104575423294099527212, accessed 15 March 2016

Kittler, F. (1999), Gramophone, Film, Typewriter, Stanford: Stanford University Press

Lazzarato, M. (1998), 'Verwertung und Kommunikation. Der Zyklus immaterieller Produktion' in Atzert, T. (ed.), Umherschweifende Produzenten. Immaterielle Arbeit und Subversion, Berlin: ID Verlag, pp. 53–65

Lemmens, P. (2011), 'This system does not produce pleasure anymore. An interview with Bernard Stiegler', Krisis, 1, available at http://www.krisis.eu/content/2011-1/krisis-2011-1-05-lemmens.pdf, accessed 23 March 2015

Marx, K. (1983), Grundrisse der Kritik der politischen Ökonomie, Berlin: Dietz Verlag (originally written 1857–58)

MiData (2017), 'About MiData', available at https://www.midata.coop/#about, accessed 6 August 2017

Pariser, E. (2012), Filter Bubble. Wie wir im Internet entmündigt werden, Munich: Hanser

Pasquale, F. (2015), The Black Box Society: The Secret Algorithms That Control Money and Information, Cambridge, MA: Harvard University Press

Petersen, S. (2008), 'Loser generated content: From participation to exploitation', First Monday, 13, available at http://firstmonday.org/ojs/index.php/fm/article/view/2141/1948, accessed 14 March 2016

Schneider, N. and T. Scholz (eds) (2016), Ours to Hack and to Own: The Rise of Platform Cooperativism, a New Vision for the Future of Work and a Fairer Internet, London: OR Books

Simondon, G. (1980), 'On the mode of existence of technical objects', available at http://dephasage.ocular-witness.com/pdf/SimondonGilbert.OnTheModeOfExistence.pdf, accessed 10 February 2017

Simondon, G. (2011), 'Die technische Einstellung', in Hörl, E. (ed.), Die technologische Bedingung. Beiträge zur Beschreibung der technischen Welt, Berlin: Suhrkamp, pp. 73–92

Simondon, G. (2012), Die Existenzweise technischer Objekte, Zürich: Diaphanes

Turkle, S. (2005), The Second Self: Computers and the Human Spirit, Cambridge, MA: The MIT Press

Vaidhyanathan, S. (2011), The Googlization of Everything (And Why We Should Worry), Berkeley: University of California Press

van Dijck, J. (2013), 'Understanding social media logic', Media and Communication, 1 (1), 2–14

Virno, P. (1992), 'General intellect', available at http://www.bbk.ac.uk/bih/lcts/summer-school-2014/reading-materials-1/harvey-readings/General%20intellect.pdf, accessed 10 March 2016

Walker, R. (1985), 'Is there a service economy? The changing capitalist division of labor', Science & Society, 49 (1), 42–83

 

To cite this paper please use the following details: Schütte, Y. (2017), 'On Digital Data Dispossession and Capitalisation', Reinvention: an International Journal of Undergraduate Research, BLASTER 2017, Special Issue, http://www.warwick.ac.uk/reinventionjournal/issues/blaster2017specialissue/schutte. Date accessed [insert date]. If you cite this article or use it in any teaching or other related activities please let us know by e-mailing us at Reinventionjournal at warwick dot ac dot uk.