
November 23, 2012

Abducting the Outside (a summary of the talk, part 1)

Here is a brief and perfunctory summary of the overall trajectory of the recent NYC talk, up to the critique of ultra-normativity and the introduction of acceleration as an epistemico-performative vector (the first half of the talk, as I recall). Ben Woodard at Speculative Heresy has already made an extensive post, so this is just to complement his:

The talk started with an introduction to a disenthralled account of the modern system of knowledge. It then moved to the idea that acceleration's penchant for metis (catastrophic rearrangement of the parameters responsible for the behavior of the system, manipulative action, etc.) needs to be understood within the modern system of knowledge and a terminalized trajectory of theoretical reason. If I remember correctly, I suggested that when I frequently invoke the project of rationalism as the center of my emphasis, a couple of things need to be taken into account first: for me, a terminalized trajectory of reason has no necessity to coincide with that of the reasonable; nor does my prejudice for the modern system of knowledge need to be understood within an analytical regime of knowledge. We are at a moment in history where we should prevent, at all costs, thought from becoming a vicious analytical attack dog that confuses its short leash with rational fidelity and precision. In fact, once we unbind the scope of the rationalist project and terminalize the transcendental asymptocity of knowledge, we realize that the ambition of rationalism is to sever the purported alliance between reason and the human, and to accelerate the dislocating and renegotiating power of the modern system of knowledge by which the human is humiliated at each and every turn.

Then I gave a summary of the structure of the talk. Rather than moving from one point to another in a sequential manner, we began to examine a number of seemingly dissociated thought pieces. These thought pieces were presented as the elements of my introduction to the modern system of knowledge and the possibility of a genuine project of inhumanism. The goal was to integrate these thought pieces into a coherent multi-frontal introduction throughout the talk:

1. Knowledge has an object, but it asymptotically approaches its object via concepts. Understanding the so-called deep or universal ecology of the concept means asking: what is conception (both the conditioning of the concept by the concept-less space or exteriority and the conceiving of information into qualitatively well-organized spaces, i.e. the global-to-local and local-to-global adjoints of conception), how does theoretical reason approach the domain of the concept, what is the concept-space constituted of, how are concepts stabilized in a dynamic fashion, etc. To answer these questions, I talked about the ontology of the concept, which coincides with its epistemology. We do not ask what the concept X looks like, or what it is (the classical ontological approach). Instead we ask how the concept X is conceived, which basically leads us to another question: where is the concept X, or where does the concept X subsist? To answer this question, we approached the concept as a locus (a local horizon) immersed within a generic medium (the extension of the concept that ramifies into the global structure of knowledge). So the ultimate question, following Mazzola, is 'where is the topos of the concept?' In order to tackle this question, we briefly examined how the topos of the concept is parametrized by its generic medium or concept-less space, and how a topos-oriented analysis of the concept leads to what we call a ramified path structure: the concept can be identified (i.e. it can be conceived) via different alternative addresses or paths. This understanding of the concept and of the process of conception leads to a new interpretation of knowledge as a navigation system of concept-spaces endowed with universal orientation (i.e. all global-local paths, structures, levels of organization and layers of the concept should be navigated; the eleventh commandment: "if it (navigation) is possible, then it is mandatory").
With all this, we started to talk about the local and global structures of the modern system of knowledge. So the first thought piece was to understand the valencies and imports of the modern system of knowledge through an examination of the genesis of the concept (via the gestures through which the concept-less space conditions the stability of the concept), its deep or universal ecology, and how various modes of epistemic mediation and inference are identified by the overall methods or ways in which they navigate the global-local topology of the concept.

2. Mobilizing a trifurcating line of assault against (a) Land's idea of machinic efficacy and of a technological singularity or philosophy of inhumanism conditioned by this machinic efficacy (via Longo); (b) the ultra-normative understanding of epistemology and variants of rigidified accounts of epistemic methodologies and processes of inference (Brandom and Brassier as possible examples); and (c) axiomatic decelerationists or variants of classical Marxism, which I charged with local myopia (I didn't really talk about this third category much, as we were short on time. That was unfortunate!).

3. Explaining Oresmean accelerationism as a classical example of how manipulative epistemology, catastrophic change of the parameters responsible for the behavior of the system, designated action by way of focalized destabilizing disequilibrium (which creates spaces of reason through dialectical disequilibrium and disjunction), etc. produce a horizon of epistemic mediation and a new methodology for navigating the deep or universal ecology of the concept-space. (This we didn't really get to talk about.)

4. Understanding acceleration as a vector of epistemic mediation or global navigation of concept-spaces which basically operates as an alternative to ultra-normative approaches to epistemology: a form of metisocratic production of knowledge and manipulative abductive inference.

5. Rediscovering inhumanism as not only conditioned by but also the veritable expression of the modern system of knowledge and the local-global navigation of its concept-spaces.

Land and machinic efficacy: We started to work on the genesis of machinic efficacy by examining its deep roots in certain forms of metaphysics and in revolutions in mathematics and logic, namely Fregean logicism (derived from a Newtonian metaphysics of absolute normativity) and Hilbertian formalism (derived from Laplacian classical determinism and his famous conjecture). Both of these constitute the foundations of the works of Shannon and Turing. We examined how digitalization is a form of classical determinism via the idea of approximation-preservation or digital rounding (a form of perturbation-preserving system) that makes the machine effective. Then we also discussed how classical normativity -- what preserves the so-called reasonable core of reason -- as an absolute and super-ideal form of occurrence of the norm with regard to the variable or the locus of information (the concept), does not, in the Fregean system, change the variable according to a contingent trajectory but merely relocates it within a specific, hence absolute, space of norm over and over again. Then we moved on to show what it means to be a digital machine and to be effective and mechanizable in that sense. This we did by way of three key objections:

1. Iteration fetishizes finitude and vice versa. The faster the iteration, the faster the machine. We showed that the iteration loop is simply a form of determinism that is unable to unfold genuine epistemic encounters with different non-finite and contingent conceptions of time (it does not produce intelligibility from a contingent universe). The so-called deterritorializing speed of the machine is the outcome of its restricted ambit. Nothing is deterritorializing or special about the speed of a digital machine insofar as it merely repeats the regularities of a finite conception of time. We therefore have to seek alternatives for producing intelligibility, not through computational iteration but by developing new conceptual frameworks for a non-iterative recursive theory. Finally, iteration constitutes the strongly metaphysical conceptual regime of computational algorithms. The new paradigm of the next machine should be furnished with epistemic encounters with the interweavings and ramifications of continuity and contingency (non-finite conceptions of time).

2. The discretization regime as the causal regime of computational algorithms. The regime of the discrete violates the first law of knowledge, which is the conservation of information principle. The geodetic principles of the physical universe (namely, those of the continuum and of Lagrangian optimality) are sensitive to encoding. All information regarding the geodetic continuities (the law of least action) of the physical domain and the symmetry breakings / extended criticalities of the biological domain is lost in discretization (following Longo). Also, the discrete cannot approach the universal space or deep ecology of the concept insofar as it does not use the information regarding the generic space which parametrizes information and the topos of the concept. The discrete (causal) and the iterative (conceptual) regimes of computational dynamics are responsible for machinic efficacy, but this efficacy mutilates the intelligible and prevents genuine conditions for the mobilization of the inhumanist drive of knowledge, namely: encounters with different conceptions of time and space, normative improvisation, the processing of information on the basis of the generic space that parametrizes it, contingent epistemic mediations, and an understanding of the landscape of knowledge as interweavings of continuity and contingency. All of these effectuate themselves as irreversible renegotiations and dislocations of the human sphere. Capitalism's investment in the so-called machinic efficacy and global digitalization is, in this sense, completely in line with its axiomatic preservation of the local ambit of thought and its restriction of knowledge generation (degenerating the true scope of the global further and further).

The universe-as-computer is a cheap metaphor at best, and at worst a step backward in the project of knowledge. The idea that the limits of computation (and accordingly, of machinic efficacy) demarcate the limits of knowledge (and hence of a project of inhumanism conditioned by the modern system of knowledge) is the epitome of myopia.
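As a side note on the claim that discretization loses information: the point has a simple counterpart in elementary signal processing, which can be sketched as follows (the aliasing example is my illustration, not Longo's own argument; the sampling rate and frequencies are assumptions). Two distinct continuous signals collapse onto the same discrete samples, so the encoding is not information-preserving.

```python
import math

# A sketch of information loss under discretization (classical aliasing;
# this example is my illustration, not Longo's own argument): two distinct
# continuous signals become indistinguishable once sampled on a coarse grid.

def sample(f, n=8):
    """Sample a function on [0, 1) at n evenly spaced points."""
    return [f(k / n) for k in range(n)]

f1 = lambda t: math.sin(2 * math.pi * 1 * t)  # a 1 Hz sine wave
f2 = lambda t: math.sin(2 * math.pi * 9 * t)  # a 9 Hz sine wave

# At 8 samples per second the 9 Hz signal aliases onto the 1 Hz one:
indistinguishable = all(
    abs(a - b) < 1e-9 for a, b in zip(sample(f1), sample(f2))
)
print(indistinguishable)  # True: the samples cannot recover the difference
```

Once sampled, no procedure operating on the discrete data alone can tell the two signals apart; the information distinguishing them is simply gone.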

3. If we understand the import of any epistemic vector as a response to the two poles of not knowing (ignorance) and knowing not (falsity), we come across the third flaw of the computational algorithm and machinic efficacy. If ignorance is the drive of knowledge, we cannot use forms of producing intelligibility that have no place for ignorance. Algorithms cannot work with falsity outside of an a priori set of frameworks. Their operation in this regard is the little game of 'I know you know that I know that you know', ad infinitum. The algorithm can only present falsity (knowing not) according to given and ideal instances of truth-values; it does not give us a procedural proof of falsity. Nor can the algorithm operate with ignorance. Ask an algorithm 'is X the case?': if the answer is positive, then yes; if it is not, then no; but if it does not know, it cannot remain silent, it has to yield an answer. The undecidability of Turing's halting problem tells us that there is no algorithmic way to say 'I am ignorant' or 'I don't know'. Computational algorithms work with the truth-preservation kernel of classical logic. But the transcendental asymptocity of knowledge is simultaneously ignorance-preserving and ignorance-mitigating. In fact, ignorance-preservation is the warrant of the infinite task of reason; it maintains reason's asymptotic trajectory. For if there were a canonical truth already posited, the infinite task of reason would come to an end and its asymptotic trajectory would be disrupted (as in the case of mysticism). So this calls for new forms of reasoning and non-classical modes of inference and epistemic mediation which are capable of preserving as well as mitigating ignorance. Acceleration, abductive manipulation and epistemico-performative experiments are examples of these non-classical modes.
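The point that an algorithm cannot remain silent can be sketched concretely (a toy model; the generator-based programs and the step bound are my own illustrative assumptions, not part of the talk). Simulating a program step by step can positively verify that it halts, but a negative outcome never certifies non-halting; the decider's Boolean output has no honest slot for 'I don't know'.

```python
# A toy sketch of the asymmetry behind the halting problem: halting is
# positively verifiable by simulation, but there is no step count after
# which non-halting is certified. The decider below is forced to answer
# True or False; False only encodes 'not yet', i.e. unacknowledged ignorance.
# (The generator-based program model and step bound are illustrative assumptions.)

def observed_halting(program, max_steps):
    """Step a generator-based 'program' for at most max_steps.
    True means a verified halt; False merely means 'did not halt yet'."""
    it = program()
    for _ in range(max_steps):
        try:
            next(it)
        except StopIteration:
            return True   # a genuine, checkable fact
    return False          # not a proof of non-halting: silence is impossible

def halts_after(n):
    def prog():
        for _ in range(n):
            yield
    return prog

def runs_forever():
    while True:
        yield

print(observed_halting(halts_after(10), max_steps=100))  # True: verified halt
print(observed_halting(runs_forever, max_steps=100))     # False, but only 'unknown'
```

No choice of `max_steps` turns the second answer into knowledge; raising the bound only relocates, never removes, the region of ignorance.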

So in a nutshell, computational machines cannot embrace the global structure of knowledge or develop a navigation with universal orientation. Sophisticated computational methods like quantum computation and cellular automata can produce -- or more precisely, simulate -- contingencies, critical states, etc., but only according to their own highly modified and ideologically consolidated causal and conceptual regimes, which have nothing to do with the physical universe and its principles of continuity and contingency. In this sense, computation does not render the universe intelligible; it produces a different form of intelligibility, and even objectivity, strictly corresponding to its own causal regime. With that said, I remember saying that the effectivity of computation should be defended as a useful approach on a local level: we do something that actually works, then on the next iterative loop we do it better, and again better, and so on. This is how we can refine our local procedures. But we should avoid blowing this out of proportion as something that actually has a global valence for thought or knowledge. Otherwise, we are simply cloning the image of the local onto the global.
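The local effectivity conceded here ('we do something that works, then the next loop we do it better') can be given a minimal sketch; Newton's method is my choice of illustration, not an example from the talk:

```python
# A minimal sketch of local iterative refinement: each pass of the loop
# takes the previous estimate and improves it. Newton's method for sqrt(2)
# is my illustrative choice, not an example from the talk.

def newton_sqrt(a, x=1.0, iterations=6):
    for _ in range(iterations):
        x = 0.5 * (x + a / x)  # refine the previous local estimate
    return x

print(newton_sqrt(2.0))  # rapidly approaches 1.41421356...
```

The loop is extraordinarily effective within its local ambit, but everything it can ever find was already fixed by the starting point and the update rule; nothing in the iteration itself broadens that ambit.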

The second line of assault focused on the deficiencies of rigidified accounts of normativity and of classical modes of epistemic mediation. The goal was subsequently to find an alternative, while keeping in mind that we absolutely need these standard and perhaps rigidified modes on the local level of navigation of the concept-space. Accelerative navigation of the concept-space (as a non-standard form of epistemic mediation) is more of a methodology that navigates the global ramifications of the concept-space and universally broadens the scope of knowledge beyond its local ambits. In order to embark on this critique, we introduced acceleration as a particular form of gesture (cf. Chatelet), a designated action capable of introducing focalized violent instability and disequilibrium at a certain rate and through a certain synthetic procedure, while at the same time warning against metaphysically inflating the gestural constitution of acceleration as a mode of epistemic mediation into a vapid form of enactivism.

To be continued soon.

Posted by Reza Negarestani at November 23, 2012 2:17 AM