Present: roughly 40 people: artists, scientists, people working in the area of privacy and surveillance, as well as laypersons.
Opening: Jaap-Henk Hoepman (When Art Meets Science Foundation).
Moderator: Jetse Goris.
Panellists: (see biographies here) Artists: Bastashevski/CH, Curran/IRL, Crispin/USA, Schokker/NL. Scientists: Gürses/USA, Hildebrandt/NL, Bellanova/N.
The goal of the debate was to increase awareness about privacy & surveillance, and to connect the scientific and artistic worlds
on this topic. All panellists were asked to show a picture of a piece of art and reflect on it in their opening statements.
Seda Gürses showed a small part of Hito Steyerl’s video “How Not To Be Seen. A Fucking Didactic Educational .MOV File”. She used this to illustrate that not everyone is treated equally under surveillance: some suffer more, some suffer less. She argued that talking about mass surveillance in terms of privacy misses the bigger picture. It overlooks the political and economic component. We should be asking ourselves questions like: Why is the surveillance infrastructure there? Why are so many companies and governments
working on surveillance? What is the overarching political and economic project supported by these activities? And what do we think of that overarching project?
Quote from the movie: “Today the most important thing in the world is how to remain invisible. Love is invisible. War is invisible. Capital is invisible.”
Mireille Hildebrandt showed a picture of a manhole cover with Google’s logo on it: a telling visualisation of the fact that a private company (Google) owns a critical infrastructure (search). Increasingly this infrastructure is used to interfere with the world we live in. Google can already bias election results by influencing the ranking of results returned for election-related search queries. It is therefore crucial that we start thinking about how such infrastructure should be designed, and how it can be inspected and subjected to public scrutiny.
The image provides a metaphor for how search works: we live our digital lives by flushing our digital debris, our search queries, our metadata, through the sewer owned by Google. We do not expect to be confronted with our debris, yet Google collects, saves, measures and aggregates this debris and turns it into something valuable. “Data Flush” instead of “Data Rush”. Michel Serres, in his book “Malfeasance: Appropriation Through Pollution?”, argues that we own what we pollute. We pollute what we wish to possess, and we clean what we make available to others. This offers a new view on privacy: perhaps one dimension of privacy is related to our trust that others will not trespass on our filth.
Rocco Bellanova showed a picture of a bird caught in a net, for the purpose of being tagged. He explained that the more the bird tries to escape, the more it gets trapped, and the more the finely woven net becomes visible to us: a very apt visualisation of the state we are currently in. Rocco reflected on the concept of data, noting that the modern state heavily relies on data, and that data stands between reality, our knowledge of that reality, and our ability to act on that reality.
Mari Bastashevski started by showing a video commercial by Verint (Israel), a producer of surveillance technology. The video was very joyful, innocent if you like, with a very happy, lively tune as its soundtrack. Verint may be an extreme example, but hundreds of successful surveillance companies (Gamma, Hacking Team) have similar promotional material. Interestingly, nothing in the self-representation of these companies and their promotional material discloses what the companies actually do. They simply portray privacy as a form of secrecy (that benefits the bad guys), and mask surveillance as a normal and everyday activity. Mari believes the extreme banality presented by these catalogues is extremely dangerous, because it serves as a façade for institutions that otherwise operate in extreme secrecy.
Mark Curran showed a photograph of the trading floor of the Chicago Board of Trade, taken in 1999, filled with people in brightly coloured shirts. This is a good example of how a photograph itself is only ‘surface’. The real story behind it is the fact that in 1999 (when the picture was taken) there were 1500 traders working on the floor, against only 400 in 2011 (and even fewer now). Mark is a practice-led researcher, thinking about such questions of representation. In his latest project, “THE MARKET”, he provides a cultural description of the functioning and condition of global markets, with particular emphasis on the machinery of financial capital as a central innovator of ‘algorithmic technology’.
The market is the dominant frame in which we discuss society at large, and algorithms in particular. We mainly look at them from an economic perspective and hence understand them mainly in terms of revenue. This frame is too narrow to fairly discuss the role of algorithms in society, looking at both the potential benefits as well as the possible negative impacts.
Sterling Crispin showed an image of his “Data Masks” (also on display in the exhibition), created using facial detection algorithms. These data masks in a way represent the underlying facial recognition algorithms, give these algorithms a kind of physicality, a way to talk about them and point to them (and the threat of surveillance in general) in more concrete terms. On the other hand these masks also pave the way to a more ‘animistic’ understanding of these algorithms as they suddenly have a face.
In his view, technology is this pulsing organism that we are helping to evolve. And we still have a long way to go. Right now we are at the single-cell organism level of technology. The fastest computer is maybe as smart as a dragonfly, but it releases as much heat as a bomb going off. We humans, on the other hand, don’t need any cooling for our brains. Perhaps we can learn more from the way systems are engineered in nature, to design technology in a more human-friendly way.
Adri Schokker showed a 3D reconstruction he made of the island of Utøya, where Breivik carried out his terrorist attack. This is a recent example of his work, created from found images and video footage to reconstruct certain events. He is triggered by news stories and photographs that reveal how new information technology changes the way we report on events, and how these technologies change the way we view and perceive them. These new technologies form a new interface between us and the images we see, blocking or distorting our perception. They also raise important questions about how far you can go as an artist in freely using everything that surrounds you or that can be found on the Internet.
How do you make the invisible visible in a world mediated by technology?
There are several ways to go about this. The first approach (taken by Trevor Paglen, but also Mari Bastashevski) is to explore the perimeter between the visible and the invisible, thus giving some visibility to the invisible.
The other approach (explored in the exhibition through the work of James Bridle) is to recreate the invisible based on information about the invisible object.
Intangible things, like algorithms, can be made ‘visible’ by interacting with them, playing with them, and learning how they respond to inputs. This creates an understanding of what the algorithms do, a mental image of their workings that allows us to adapt our interactions with these algorithms for our own benefit. This playing is a very human thing to do, and something we also do when interacting with real people and real institutions.
This translation, this demystification of the algorithm, has the potential to shift the power balance in a similar way to when the Bible was translated from Latin or complex English into the common tongue of the people.
If you ask people who are targets of surveillance, they will not say it is invisible. In the US, black people will tell you that they have been under the same surveillance that Snowden revealed to us for decades…
It is important to realise that even the visible may in fact be invisible. Verint’s promotion video is already an example of this, hiding from plain view what surveillance really is like. The public image, the recirculation of the same images time and again is in a way a cloak of publicity that hides the real things that are happening, the real things these people are doing from plain sight.
Invisibility can also be understood in terms of access. The invisible consists of the same properties, the same things as the visible. It consists of the same tables, and chairs and people in suits and ties. But the access permissions are different.
The economic and political dimension of surveillance
The surveillance debate has so far focussed on the difference between dragnet, mass surveillance and targeted surveillance. The general consensus seems to be that mass surveillance is bad, but targeted surveillance is good. But this avoids the question of who is targeted, how, why, and whether this is OK. (See also Seda’s opening statement, which relates to these issues.) Why is surveillance in place? Who is it serving?
The problem is exacerbated because we live in a globalised world. Some people in the panel linked the surveillance problem to climate change and the existence of power vacuums.
Others warned that we should avoid the pitfall of saying that it is all politics, all economics, and blaming it on the ‘traditional’ bad guys. If we want to properly understand the problem, we have to look at what difference this new technology is making. And then talk about the moral and ethical dimension, and how to regulate and enforce these norms.
Surveillance is rarely discussed in terms of who is making money out of it. Surveillance programmes are rarely seen as a multi-billion-dollar deal for someone (else).
Broadening the perspective
We should drop the notion we have nothing to hide. We should stop looking at the issue from the perspective of security vs privacy. We have to look at the distribution: who is giving up his privacy for whose security? Instead of you giving up some privacy to get some security in return, it is usually you giving up some privacy so someone else gets some security.
When talking about algorithms, we have to realise there is a trend towards embedding these algorithms in robots. Think of drones without operators, but algorithms that decide where to fly, when to drop a bomb or fire a missile. And just as surveillance has become cheap, these killer robots will become cheap. Essentially making war cheap.
Algorithms make irreversible decisions, they influence the world we live in. We need to understand and provide a cultural description of what it means for machines, algorithms, to have agency.
Maybe we should slow down technological developments, stop adding layers of complexity if we do not even understand the previous layers.
On the question: how does art help science (or vice versa)?
Artists raise questions that are not raised by scientists, because scientists are so busy with their disciplinary research: their output is measured for that, and they are funded for that. Artists think outside the box, more than scientists do.
The panel suggested exploring the idea of applying ‘design crit’ (critically looking at and commenting on a design, as commonly done in the arts) to software (and hardware), and thus finding new ways of looking at and understanding them.
The tags don’t matter anymore. Anyone can decide what they want to be, artist or scientist, depending on the context and what they are working on. This is becoming more and more accepted, also in academic circles, and this cross-disciplinary communication is beneficial for everyone.
Final comments based on questions from the audience
The ‘Data Rush’ problem needs to be politicised, but not in a trivial “do we want to be part of this?” kind of way. It is not interesting to frame it as a battle between us the victims versus them the system, the bad guys. We are all part of it, contributing to it, using smartphones, generating data. Also, there are real security issues that need to be addressed. So we have to think about how to create a system that keeps intelligence services, law enforcement, government and private industry complexes in check.
Having said that, we have to keep in mind that although we can develop usable privacy tools that help protect people, that protection is only limited. And we cannot expect individuals to solve the problems created by the surveillance assemblage by themselves. There are no tools that do not leave some trace, and even if there were, there are data mining tools that will single you out as the person that doesn’t like to leave a trace. There is no possible ‘outside’ that people can escape to.
Bureaucratic states have effectively been replaced by information states, and in some of these information states there are victims. At the same time, any resistance to government also relies completely on the internet.
There is reason for optimism though. There is certainly a possibility to take charge, to take control over the technology that surrounds us. We cannot expect citizens to do this individually. This is an effort for all of us, our society, together.