Keeping Up with the Zizians: TechnoHelter Skelter and the Manson Family of Our Time (Part 2)
A deep dive into the new Manson Family—a Yudkowsky-pilled vegan trans-humanist AI doomsday cult—as well as what it tells us about the vibe shift since the MAGA and e/acc alliance's victory
In this essay’s first part, I held my detective’s magnifying glass over the Zizian cult’s alleged crime spree, and particularly their mysterious leader Ziz’s twisting of Yudkowskian rationalism and longtermist effective altruism into her warped vision of a coming AI apocalypse which must be prevented at all costs. In this post, I want to connect a few more dots—this time between the Zizians and their wider historical context.
With their San Francisco setting, conspiracy string connections to prominent figures, apocalyptic ideology, and all-around weirdness, there are striking parallels between the Zizians and another death cult, the Manson Family. As the story usually goes, in 1967, after an abusive childhood and several stints in prison, Charles Manson made his way to San Francisco. There he gathered around him a coterie of mostly young and homeless women drawn to his hippie preachings about peace and free love, not to mention free drugs. At first, his plan was to become a pop star and he actually managed to make contact with Dennis Wilson of The Beach Boys, with the two even writing some songs together. But after his makeshift “family” moved to Spahn Ranch, a former Western movie set where they grew ever more isolated during one too many acid trips and speed binges, the brain-fried guru came to believe that they were destined to ignite a race war by murdering A-list celebrities. The Family would hide out in a hole in the Death Valley desert and reemerge after the war to rule over the survivors. Manson’s vision was called “Helter Skelter” after the song by The Beatles, whom he thought were directly sending him encrypted messages through their music. In 1969, several Manson Family members broke into the home of Roman Polanski’s pregnant wife and Hollywood actress Sharon Tate, brutally murdering her and four others. The following night, they also murdered Leno and Rosemary LaBianca in their home. The 1970 trial captivated the nation as these twee-passing, yet killer acid casualties pulled many stunts, such as shaving their heads and carving Xs into their foreheads while merrily singing, smiling, and skipping to court.
As it turned out, the Manson Family became a cultural symbol of a larger historical vibe shift. The Tate-LaBianca murders shook many Americans’ perception of the sixties counterculture. They effectively ended the connotation of hippies with innocence, peace, and love, instead tying them up like one of Manson’s victims to substance abuse, madness, and satanic murder rituals. The same hands that picked flowers and made peace signs had now been used to write “pig” on the wall with a pregnant woman’s blood. A slightly bad smell from unwashed hair and walking barefoot swelled into the horrific stench of rotting corpses. If the Woodstock Festival marked the counterculture’s peak, the Manson Family’s murderous rampage just one week prior proved to be its top signal. This widespread disenchantment undoubtedly helped the then-governor of California and future President Ronald Reagan usher in the end of the laid-back, peacenik counterculture in favor of neoliberal competition. If the Manson Family is what everybody harmoniously holding hands in a circle and achieving “oneness” really meant, then we can see why fellow neoliberal Margaret Thatcher’s assertion that “there is no such thing as society” could come as a relief to many.
Could the Zizians also be a cultural symbol of our own historical vibe shift? The shooting of Border Patrol agent David Maland took place on the same day as Trump’s second inauguration. A major factor in his electoral success was that a large part of the tech industry threw their support behind him. A large part, but by no means all. For over the past few years, a civil war has been brewing in the tech industry between two competing factions. It can sometimes feel as if there’s no theory or ideology after Marxism that can really drive people anymore. But there are actually not one but two vying to become the new motor of history: effective altruism and accelerationism.
On the one hand, there are the AI safetyists like Yudkowsky and his associates at the Machine Intelligence Research Institute (MIRI) and Center for Applied Rationality (CFAR). We could also include here the Zizians—albeit in a more twisted version as mutilated and grotesque as their alleged victims’ bodies. As discussed in this essay’s previous installment, AI safetyists endorse a longtermist vision of effective altruism that seeks to harness our rational faculties to most effectively allocate resources to the benefit of the greatest number of people possible. These people include not only those living today but the future generations to come, too. Seeing as there could be many more people in the future than are alive today, AI safetyists focus most of their efforts on preventing existential risks (or x-risks) from strangling those future generations at birth.
As their name would suggest, AI safetyists are particularly concerned about the threat posed by an artificial superintelligence outside our control. As Yudkowsky writes in his widely circulated 2023 essay in Time, “many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die.”[1] His reasoning is that, while technological breakthroughs like GPT-4—and we could now add DeepSeek—continue to race on ahead, there are currently no known methods for ensuring that an artificial superintelligence would be friendly to inferior human intellects who would be powerless to control it:
We are not prepared. We are not on course to be prepared in any reasonable time window. There is no plan. Progress in AI capabilities is running vastly, vastly ahead of progress in AI alignment or even progress in understanding what the hell is going on inside those systems. If we actually do this, we are all going to die.[2]
Consequently, Yudkowsky and the AI safetyists warn, the only solution is to “just shut it all down” by whatever means necessary:
Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs. Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms. No exceptions for governments and militaries. Make immediate multinational agreements to prevent the prohibited activities from moving elsewhere. Track all GPUs sold. If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.[3]
As the meme goes, the AI safetyists call for an almost total and complete shutdown of AI research and development until they can figure out what the hell is going on.
Like a street preacher screaming about hellfire and brimstone, Yudkowsky can come across as rather alarmist here. But it should be noted that he was writing in the wake of an open letter published by the Future of Life Institute, an organization dedicated to studying x-risks and funded by Elon Musk, Ethereum co-founder Vitalik Buterin, and other tech world heavyweights. Signed by hundreds of prominent figures like Musk, Apple co-founder Steve Wozniak, and computer scientist Yoshua Bengio, the letter called “on all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4.”[4] This would hopefully give governments and tech companies enough time to “jointly develop and implement a set of shared safety protocols for advanced AI design and development,” which would “ensure that systems adhering to them are safe beyond a reasonable doubt.”[5] Backed both financially and ideologically by major tech industry players throwing hundreds of millions of dollars’ worth of Shiba shitcoins at them, it seemed for a time that the AI safetyists were really starting to influence government policy and regulation, with the US, the UK, and Japan, among others, having since established their own AI Safety Institutes.
In the last few years, however, another faction has emerged called “effective accelerationism.” Coined by pseudonymous tech bros and X addicts Beff Jezos and Bayeslord, effective accelerationism (or e/acc) espouses a radical version of tech solutionism: the most optimal way to solve any problem is through the technological innovations that emerge out of capitalist competition. As Marc Andreessen, co-creator of the Mosaic web browser, writes in his Techno-Optimist Manifesto (2023), “we believe that there is no material problem—whether created by nature or by technology—that cannot be solved with more technology.”[6] So if we really want to improve our lives and solve global problems like poverty, war, and climate change, then all we need to do is simply ramp up the unrestricted market competition from whence the technological quick fixes arise:
We believe in accelerationism—the conscious and deliberate propulsion of technological development—to ensure the fulfillment of the Law of Accelerating Returns. To ensure the techno-capital upward spiral continues forever.
We believe the techno-capital machine is not anti-human—in fact, it may be the most pro-human thing there is. It serves us. The techno-capital machine works for us. All the machines work for us.[7]
From this perspective, the AI safetyists—or what effective accelerationists call “doomers” or “decels”—are at best surreptitiously trying to protect the current tech monopolies by regulating the competition. At worst, they may even be card-carrying commies and totalitarian commissars in the making. As Beff Jezos and Bayeslord put it, “e/acc believes that higher variance marketplaces and competition are better at identifying and capitalizing on utility from our environment over other methods such as top-down optimal control”:
e/acc believes … over-regulating technologies suppresses variance and hence slows down progress towards higher utility technologies and advancement of civilization, a contrast to anti-AGI factions of EA [effective altruism].
e/acc is about having faith in the dynamical adaptation process and aiming to accelerate the advent of its asymptotic limit; often referred to as the technocapital singularity.[8]
It is the American tech industry’s e/acc faction that came out in strong support of Trump, with both campaign donations and constant meme posting helping to hurl him back into power. So it’s unsurprising that, just one day after his inauguration, Trump returned the favor by announcing the Stargate Project. Unveiled at the White House as a joint venture between OpenAI, SoftBank, Oracle, and MGX, Stargate aims to invest up to US$500 billion in AI research and development by 2029. Trump would assist this endeavor by using executive orders and emergency declarations to fast-track projects that would hopefully lead to vaccines to cure cancer, among other miracles finally made technologically feasible. In the wake of Stargate, it would seem that the AI safety camp has been resoundingly defeated for now. As Vice President JD Vance made clear to world leaders at the 2025 Paris AI Summit, “the AI future is not going to be won by handwringing about safety. It will be won by building.” Think not so much Trump’s “you’re fired,” but the other Apprentice Arnold Schwarzenegger’s “you’re terminated.”
Much as the Manson Family was an omen of the hippie counterculture’s decline in favor of Reagan’s neoliberalism, so would I suggest that the Zizians are a symptom of a potentially world-historic vibe shift currently underway. In however distorted a form, the Zizians exemplify everything that the e/acc and MAGA alliance detests most. After all, they are not only hardcore AI safetyists but also support veganism and trans rights. With these typically leftie causes now covered in blood, the Zizians are a better gift than dinner at a steakhouse to an alliance fueled by capitalism, technology, and often a carnivore diet. For the cult provides the perfect opportunity for the alliance to discredit its opponents. It has recently been suggested that Manson helped Reagan discredit the sixties counterculture so successfully that he might have been an actual CIA asset. Likewise, the Zizians can seem like another psyop, a suspicion only strengthened by the fact that their trail of blood appears to lead back to the rationalist institutes that were funded by Peter Thiel and Musk (before the latter switched sides, going full dark MAGA accelerationist mode). Psyop or not, I would simply suggest that the Zizians might be the last dying battle cry of the AI safetyists’ big government ambitions. They will not, of course, go away completely, but they are likely to be reduced to a series of violent, yet ultimately impotent outbursts in the style of the Unabomber or Luigi Mangione. With them out of the way, it would appear that—at least for the foreseeable future—we have entered the age of full-blown technocapitalist acceleration, straight to singularity.
[1] Eliezer Yudkowsky, “Pausing AI Developments Isn’t Enough. We Need to Shut it All Down,” Time, March 29, 2023, accessed March 30, 2023, https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/.
[2] Yudkowsky, “Pausing AI.”
[3] Yudkowsky, “Pausing AI.”
[4] Future of Life Institute, “Pause Giant AI Experiments: An Open Letter,” Future of Life Institute, March 22, 2023, accessed March 24, 2023, https://futureoflife.org/open-letter/pause-giant-ai-experiments/.
[5] Future of Life Institute, “Pause Giant AI.”
[6] Marc Andreessen, “The Techno-Optimist Manifesto,” Andreessen Horowitz, October 16, 2023, accessed December 13, 2024, https://a16z.com/the-techno-optimist-manifesto/.
[7] Andreessen, “Techno-Optimist Manifesto.”
[8] Beff Jezos and Bayeslord, “Notes on e/acc Principles and Tenets,” Beff’s Newsletter, July 10, 2022, accessed July 30, 2022.