Discussion about this post

Robert Shepherd:
I suppose what I keep coming back to is that I don’t think capital is agential.

I don’t mean “and therefore it isn’t intelligent.” Almost the reverse, really. So much AI discourse seems to assume that an AI has a conscious goal which it consciously achieves. But capital is able to enmesh agential goals in a massive non-agential system, then sort of twist them towards its own remorseless logic.

I agree with the interviewer that capital doesn't sound like an artificial superintelligence as we tend to describe it, but I wonder if that's a challenge to our descriptions rather than to the concept itself. To me, "a runaway optimising system might outcompete and subvert *any* goal-seeking entity" is a haunting idea, and although it seems implicit in all of this, I don't know that it's ever stated explicitly.