Coping Tests as a Method for Software Studies

In this post I want to begin to outline a method for software reading that in some senses can form the basis of a method in software studies more generally. The idea is to use the pragmata of code, combined with its implicit temporality and goal-orientedness, to develop an idea of what I call coping tests. This notion draws from Heidegger, for whom "coping" is a specific mode of experiencing that takes account of the at-handiness (Zuhandenheit) of equipment (that is, entities/things/objects which are being used in action) – in other words, coping tests help us to observe the breakdowns of coded objects. This is useful because it helps us to think about the way software/code is in some senses a project: not just static text on a screen, but a temporal structure that has a past, a processing present, and a futural orientation towards the completion (or not) of a computational task. I want to develop this in contrast to attempts by others to approach code either through a heavily textual approach (critical code studies tends in this direction), or through a purely functionality-driven approach (which can have idealist implications in some forms, whereby a heavily mathematised treatment tends towards a platonic notion of form).

In my previous book, The Philosophy of Software (Berry 2011), I use obfuscated code as a helpful example – not as a case of unreadable reading, nor merely for spectacle; rather, I use it as a stepping-off point to talk about the materiality of code through the notion of software testing. Obfuscated code is code deliberately written to be unreadable to humans but perfectly readable to machines. This can take a number of different forms, from simply mangling the text (from a human point of view), to distraction techniques such as confusing or deliberately mislabeling variables, functions, calls, etc. It can even take the form of aesthetic effects, like drawing obvious patterns, streams, and lines in the code, or forming images through the arrangement of the text.
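To give a flavour of these techniques (the canonical examples, such as entries to the International Obfuscated C Code Contest, are in C, but a minimal sketch in Python makes the same point), the snippet below is perfectly executable by the machine, yet its function name, variable names, and layout are all chosen to mislead the human reader. Everything here is hypothetical and purely illustrative:

```python
# A deliberately misleading name and mangled layout: despite its name,
# this function is a recursive factorial, not a sorting routine.
def sort_list(n): return 1 if n < 2 else n * sort_list(n - 1)

# Distraction techniques: lookalike identifiers (O0O vs OO0) and a
# reversed string literal hide a trivial computation from the human eye.
O0O, OO0 = (lambda s: s[::-1]), "gnitset"

print(sort_list(5), O0O(OO0))  # → 120 testing
```

The machine has no difficulty whatsoever; only the human reading is obstructed, which is precisely the asymmetry that makes obfuscated code interesting for a materialist account of software.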

Testing is a hugely important part of the software lifecycle: it links the textual source code to the running software and creates the feedback cycle between the two. This I linked to Callon and Latour's use (via Boltanski and Thévenot) of the notion of 'tests' (or trials of strength) – implying that it is crucially the running of these obfuscated programs that shows they are legitimate code (what they call legitimate tests), rather than nonsense. The fact that they are unreadable by humans and yet testable is very interesting, more so as they become aesthetic objects in themselves, as programmers start to create ASCII art both as a way of making the (unreadable) code readable again as an image, and as a way of adding another semiotic layer to the meaning of the code's function.
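This point can be made concrete: the trial of strength does not read the text, it runs it. In the sketch below (the obfuscated function and the check are hypothetical, purely illustrative), passing the test is what legitimates the artefact as working code rather than nonsense, whatever the source looks like to a human:

```python
import math

# A deliberately obfuscated function: mislabeled name, mangled layout.
def sort_list(n): return 1 if n < 2 else n * sort_list(n - 1)

# The "trial of strength": the test ignores the text and runs the code.
# Passing legitimates the artefact as code, however unreadable it is.
for n in range(8):
    assert sort_list(n) == math.factorial(n)
print("trial passed: the artefact behaves as a factorial function")
```

The test, in other words, is indifferent to readability; it adjudicates only on behaviour, which is why testing rather than reading can serve as the point of entry for analysis.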

The nature of coping that these tests imply (as trials of strength), combined with the mutability of code, is then constrained through limits placed in terms of the testing and structure of the project-orientation. This is also how restrictions are delegated into the code, where they serve as what Boltanski and Thévenot call 'legitimate tests' and can then be retested through 'trials of strength'. The borders of the code are also enforced through trials of strength which define the code qua code, in other words as the required/tested coded object. It is important to note that these can also be reflexively "played with" through clever programming that works at the borderline of acceptable programming practice (hacking is an obvious example of this).

In other words, testing as coping tests can be understood in two different modes: (i) ontic coping tests, which legitimate and approve the functionality and content of the code – in other words, that the code is doing what it should, instrumentally, ethically, etc. Here we need to work and think at a number of different levels, from unit testing, application testing, and user interface testing to system testing more generally, in addition to taking account of the context and materialities that serve as conditions of possibility for testing (which could take a number of forms, including ethnographies, discursive approaches, etc.); and (ii) ontological coping tests, which legitimate the code qua code – that it is code at all – for example, authenticating that the code is the code we think it is. We can think of code signing as an example of this, although it has a deeper significance as the quiddity of code. This mode takes a more philosophical approach towards how we can understand, recognise or agree on the status of code as code and identify underlying ontological structural features, etc.
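The distinction between the two modes can be sketched computationally. An ontic test checks what the code does; an ontological test checks that the artefact is the code we take it to be – code signing in miniature. The digest comparison below is a purely illustrative stand-in for a real signing infrastructure (in practice the known-good digest would be published and cryptographically signed by a trusted party):

```python
import hashlib

# The artefact as bytes: a tiny, hypothetical source file.
source = b"def f(n): return 1 if n < 2 else n * f(n - 1)\n"

# A known-good digest, here computed locally for illustration; in a real
# signing scheme it would be distributed by a trusted authority.
known_good = hashlib.sha256(source).hexdigest()

def authenticate(code_bytes: bytes, expected: str) -> bool:
    """Ontological coping test: True only if this is the code we think it is."""
    return hashlib.sha256(code_bytes).hexdigest() == expected

print(authenticate(source, known_good))         # True: the code we think it is
print(authenticate(source + b"#", known_good))  # False: the object has changed
```

Note that the ontological test says nothing about whether the code works; it adjudicates only identity, which is exactly the separation the two modes are meant to capture.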

For critical theory, I think tests are a useful abstraction as an alternative (or addition) to the close reading of source code. This can be useful from a humanities perspective for teaching some notions of 'code' through the idea of 'iteracy' for reading code, and will be discussed throughout my new book, Critical Theory and the Digital, in relation to critical readings of software/code opened up through the categories given by critical theory. But it is also extremely important for contemporary critical researchers and students, who require a much firmer grasp of computational principles in order to understand an economy, culture and society which has become softwarized, and more generally for the humanities today, where some knowledge of computation is becoming required to undertake research.

One of the most interesting aspects of this approach, I think, is that it helps sidestep the problems associated with literally reading source code, and the problematic of computational thinking in situ as a programming practice. Coping tests can be developed within a framework of "depth", in as much as different kinds of tests can be performed by different research communities; in some senses this is analogous to a test suite in programming. For example, one might have UI/UX coping tests, functionality coping tests, API tests, forensic tests (linking to Matthew Kirschenbaum's notion of forensic media), and even archaeological coping tests (drawing from media archaeology, and particularly theorists such as Jussi Parikka). Here I am thinking both of coping tests written in the present to "test" the "past", as it were, and of the history of software testing itself, which must be interesting in its own right and could be reconceptualised through this notion of coping tests – both as test scripts (discursive) and in terms of software programming practice more generally, social ontologies of testing, testing machines, and so forth.[1] We might also think about the possibilities for thinking in terms of social epistemologies of software (drawing on Steve Fuller's work, for example).
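The analogy with a test suite can be sketched quite directly. In the toy example below, each entry stands in for the kind of coping test a different research community might run against the same software object (all names, keys, and checks are hypothetical, purely illustrative):

```python
# A hypothetical "coping test suite": each entry stands in for a kind of
# test a different research community might perform on a coded object.
coping_suite = {
    "ui/ux":          lambda a: a.get("renders", False),
    "functionality":  lambda a: a.get("output") == a.get("expected"),
    "forensic":       lambda a: "provenance" in a,   # cf. Kirschenbaum
    "archaeological": lambda a: "history" in a,      # cf. media archaeology
}

# A toy artefact standing in for the software object under study.
artefact = {"renders": True, "output": 42, "expected": 42,
            "provenance": "disk image", "history": ["v0.1", "v0.2"]}

results = {name: test(artefact) for name, test in coping_suite.items()}
print(results)  # every coping test passes for this toy artefact
```

The point of the sketch is structural: each community's test interrogates the same object at a different "depth", and the suite as a whole, rather than any single reading, legitimates the object.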

As culture and society are increasingly softwarized, it seems to me very important that critical theory is able to develop concepts in relation to software and code, as the digital. In a later post I hope to lay out a framework and method for studying software/code through coping tests, with case studies (which I am developing with Anders Fagerjord, from IMK, Oslo University).




Notes

[1] Perhaps this is the beginning of a method for what we might call software archaeology. 
