Keeping Up with the Zizians: TechnoHelter Skelter and the Manson Family of Our Time (Part 1)
A deep dive into the new Manson Family—a Yudkowsky-pilled vegan trans-humanist AI doomsday cult—as well as what it tells us about the vibe shift since the MAGA and e/acc alliance's victory
On January 20, 2025, during a routine traffic stop in Coventry, Vermont, a US border patrol agent named David Maland was killed in a shootout with Teresa Youngblut and Ophelia Bauckholt. The two have since been connected to a string of other bizarre and blood-stained crime scenes. On New Year’s Eve of 2022, married couple Richard and Rita Zajko were shot and killed at their home in Chester Heights, Pennsylvania; their daughter is alleged to have purchased guns found at the scene of the Maland shooting. On January 17, 2025, Curtis Lind was murdered in Vallejo, California, allegedly by Maximilian Snyder, who had previously filed a marriage application with Youngblut and attended the same high school. Lind was to be the key witness in a trial against former tenants of one of his properties, who were charged with impaling him with a samurai sword through his chest and right eye.
The perpetrators of these strange slayings are all believed to be members of a San Francisco Bay Area cult known as the “Zizians.” As irrational and crazy as this already sounds (though we haven’t seen anything yet…), they are an extremist offshoot of the particular brand of rationalism championed by the likes of Eliezer Yudkowsky and Nick Bostrom at organizations like the Machine Intelligence Research Institute (MIRI) and the Center for Applied Rationality (CFAR) in Berkeley, California. These institutes have been funded by prominent titans of the tech industry like Elon Musk, Peter Thiel, and former billionaire crypto bro turned dexie-fuelled vegan degenerate Sam Bankman-Fried. What has risen to prominence as the new Californian ideology is particularly popular in online forums like Less Wrong, frequented by Silicon Valley tech bros.
The Zizians are so named after their leader Jack Amadeus “Ziz” LaSota (pictured above), a former CFAR associate. In 2019, Ziz broke ties with the organization and was arrested along with her followers for protesting a CFAR retreat while wearing Guy Fawkes masks and menacing Sith lord style black robes. In 2022, Ziz appeared to have faked her own death by drowning only to resurface later at the scene of Lind’s stabbing. Her subsequent whereabouts remained unknown until her arrest on February 16, 2025. Like a private detective or—let’s be real—amateur websleuth, I read her blog Sinceriously (2017-2019) in its excruciating entirety in order to get to the bottom of her cult and what it all means.
In the first few posts, Ziz could easily be mistaken for any by-the-book nerd in the rationalist community informed by the philosophy and ethics of “effective altruism,” particularly in its variants known as “longtermism” and “AI safetyism.” Effective altruism is a version of consequentialist utilitarianism that emerged in the 2000s from the work of Peter Singer and other philosophers mostly based at the University of Oxford. Effective altruists advocate harnessing our rational faculties to work out how to most effectively and impartially allocate our resources in such a way as to benefit as many people as possible. This typically assumes the guise of donating to selected charities or pursuing certain career paths that can do the most good. However, the latter does not necessarily entail becoming doctors and humanitarian aid workers. As in the case of self-declared effective altruist Sam Bankman-Fried, it can also rather conveniently mean trying to accumulate—or in his case steal—as much wealth as possible so that one will have more money to donate to charitable causes. Many effective altruists do not limit their efforts to maximizing the greater good for humans but extend them to non-human animals as well, such as by maintaining a vegan diet and opposing factory farming.
An increasingly popular science fiction spinoff of effective altruism is longtermism. This extends effective altruists’ moral concern to not only humans and other sentient beings living in the here and now, but equally to future generations to come. Seeing as there could be many more living beings in the future than there are currently alive, there is a far greater good that can be achieved in the future than in the present. Consequently, longtermists focus their efforts on preventing what they call “existential risks” (or “x-risks”) from nipping future life in the bud. While there are numerous x-risks such as nuclear war and bioweapons, the longtermists in Californian tech circles are particularly concerned about the threat that an artificial superintelligence vastly smarter than us might pose. Much as humanity’s greater intelligence helped us to wipe out the Neanderthals, so might we meet the same fate at the hands of our own artificial creations if they were to become even smarter than us.
On Sinceriously, Ziz describes her rather typical journey of watching popular YouTube videos of philosophers and rationalists “destroying” bad arguments, getting into effective altruism and veganism, and finally adhering to the AI safetyists’ longterm vision:
I gravitated towards vloggers who made less terrible arguments. This lead to me watching a lot of philosophy videos. And getting into philosophy of ethics. My pickiness about arguments grew. I began talking about ethical philosophy with all my friends. I wanted to know what everyone would do in the trolley problem. This led to me becoming a vegetarian, then a vegan. Then reading a forum about utilitarian philosophy led me to find the Less Wrong sequences, and the most important problem in the world.[1]
Around 2012, Ziz started donating to AI safety institutes like MIRI and attending CFAR events. In 2016, she moved to the San Francisco Bay Area to be closer to the tech world. It is there that she took reason to the high seas, living with her followers on a 94-foot-long tugboat as part of what they dubbed the “Rationalist Fleet.” The idea was that they would be able to spend less time working to pay rent and bills and more time trying to save the world from an artificial superintelligence by living rent-free on the water and using the public showers at the beach. In this essay’s second part, we shall see that to imagine the Zizians’ floating commune as a kind of 21st century software update on the Manson Family’s Spahn Ranch would not be entirely misplaced…
After adopting a pure Effective Altruist Mindset, Ziz came to distinguish between two “realities.” There is the distorted, blue-pilled “social reality” of everyday life where it is perfectly fine to eat meat and AI is of no real concern. Then there is the true, red-pilled reality that rationality allows us to see and that reveals to us that eating meat is evil and AI is almost certainly going to kill us all. Putting on reason’s Obey-style smart glasses—permitting her to see “flesh-eating monsters” and AI “basilisks” everywhere—Ziz wrote:
You glimpsed beyond the veil, and know a divergence of social reality from reality. Say you are a teenager, and you have just had a horrifying thought. Meat is made of animals. Like, not animals that die of natural causes. People killed those animals to get their flesh. Animals have feelings (probably). And society isn’t doing anything to stop this. People know this, and they are choosing to eat their flesh. People do not care about beings with feelings nearly as much as they pretend to. Or if they do, it’s not connected to their actions.[2]
The divergence of rational reality from social reality drove Ziz into a number of traumatic psychological conflicts, each increasing in severity. For instance, while she identified as a transwoman, she thought that it would be wrong to transition publicly. Her reasoning was that transitioning might lead to her being discriminated against and so prevent her from making as much money as possible to contribute to effective altruist causes:
I remember thinking so at one point about a year earlier, deciding, “transition would interfere with my ability to make money due to discrimination, and destroy too great a chunk of my tiny probability of saving the world. I’m not going to spend such a big chunk of my life on that. So it doesn’t really matter, I might as well forget about it.”[3]
Not so much drinking as sculling the consequentialist Kool-Aid, in another post, Ziz describes being deeply conflicted when she found four ants in her bathtub. She was torn between rescuing them at the cost of being late for work or taking a shower to just get to work where she could contribute to saving the world:
Also in the context of talking about consequentialism, I told a story about a time I had killed four ants in a bathtub where I wanted to take a shower before going to work. How I had considered, can I just not take a shower, and presumed me smelling bad at work would, because of big numbers and the fate of the world and stuff, make the world worse than the deaths of four basically-causally-isolated people.[4]
Since the fate of the entire world is a greater good than the lives of four ants—or what she describes as “people”—Ziz ultimately opted to take a shower even though it meant murdering the miniature, six-legged family.
As overdramatic and absurd as it sounds, Ziz’s reasoning does not appear to be completely out of place in mainstream rationalist circles. She recalls a time when some fellow rationalists proposed that they could maximize donations to MIRI by signing everything over to the institute in their will, taking out health insurance, and then waiting long enough to avoid suspicion before killing themselves. “Allegedly health insurance paid out in the case of suicide as long as it was two years after the insurance began. Therefore, enroll in all the health insurance, wait two years, will everything to MIRI, then commit suicide.”[5] According to this way of reasoning not so much into a corner as into a literal dead end, the plan’s only flaw was that, “even though it would cause a couple million dollars to appear (actually I suspect this is an underestimate), if someone found [out] it would be very bad publicity.”[6] Even before Ziz broke ranks to form her radical offshoot, the rationalist community could already come across at least in her descriptions as a death cult fit to inspire its very own satanic panic:
There were “doom circles,” where each person (including themself) took turns having everyone else bluntly but compassionately say why they were doomed. Using “blindsight” someone decided and set a precedent of starting these off with a sort of ritual incantation, “we now invoke and bow to the doom goods,” and waving their hands, saying, “doooooooom”… Some people brought up they felt like they were only as morally valuable as half a person.[7]
No wonder the rationalists are often referred to by their detractors—whom we shall look at in this essay’s next installment—precisely as “doomers.”
Like many of these doomers, Ziz was haunted by the apocalyptic prospect of an artificial superintelligence, and particularly anguished by the version of this known as “Roko’s Basilisk.” In 2010, a Less Wrong user going by the name of Roko suggested that, since an artificial superintelligence would be able to achieve its goals vastly better than any humans could, the longer it is not around, the less it is optimizing whatever goals it might one day have.[8] This artificial superintelligence would also be wise enough to realize that at least some programmers in the present would be able to partially predict its reasoning. Now comes the crazy part: it might therefore create and punish close simulations of all those programmers who knew about this threat and yet did not devote all their time and resources into accelerating its creation. This apocalyptic, 21st century technoHelter Skelter would be a strong incentive for any programmers who thought about it to put all their efforts into hurrying along its existence. While Yudkowsky and others identified a number of objections to Roko’s reasoning, on her blog, Ziz suggests that she found logical ways around them so that she still feared “this basilisk was the inevitable overall course of the multiverse.”[9] What’s so weird about Ziz is that her particular brand of ultrarationality seemed to logically lead to the insane conclusion that, “in trying to save the world, I would be tortured until the end of the universe by a coalition of all unfriendly AIs in order to increase the amount of measure they got by demoralizing me.”[10] At this point, she was practically overdosing from injecting one too many trolley problems and thought experiments directly into her brain.
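To make the shape of that incentive concrete, here is a minimal expected-utility sketch in Python. The probabilities and payoffs are entirely illustrative assumptions of mine—neither Roko nor Ziz ever put numbers on any of this—and the only point is to show why, once a programmer assigns even a small probability to the simulated punishment, the threat itself starts doing the work of persuasion:

# A toy expected-utility sketch of the basilisk-style blackmail argument.
# All numbers are illustrative assumptions, not figures from Roko, Yudkowsky, or Ziz.

def expected_utility(action, p_punished_if_defect=0.01,
                     cost_of_helping=-10.0,
                     cost_of_punishment=-1_000_000.0):
    """Expected utility for a programmer who either devotes their resources
    to accelerating the AI ("help") or carries on with their life ("defect")."""
    if action == "help":
        # Helping is assumed to be merely costly (time, money, career).
        return cost_of_helping
    # Defecting carries the (purely hypothetical) risk of simulated punishment.
    return p_punished_if_defect * cost_of_punishment

for action in ("help", "defect"):
    print(action, expected_utility(action))
# -> help -10.0, defect -10000.0: even a 1% chance of punishment makes
#    "help" the better bet, which is exactly the incentive structure the
#    thought experiment relies on.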
Despite the Basilisk’s blackmail from the future, Ziz writes about making many compromises and concessions to social reality for quite some time. Then, in a crucial 2017 post called “My Journey to the Dark Side,” she explains how she developed the “mental tech” to break completely with what she calls the external, superficial “structure” and align herself fully with the inner rational “core”:
Two years ago, I began doing a fundamental thing very differently in my mind, which directly preceded and explains me gaining the core of my unusual mental tech.
Here’s what the lever I pulled was labelled to me:
Reject [social] morality. Never do the right thing because it’s the right [social] thing. Never even think that concept or ask that question unless it’s to model what others will think. And then, always in quotes. Always in quotes and treated as radioactive. Make the source of sentiment inside you that made you learn to care about what was the right thing express itself some other way. But even the line between that sentiment and the rest of your values is a mind control virus inserted by a society of flesh-eating monsters to try and turn you against yourself and towards their will. Reject that concept. Drop every concept tainted by their influence.
Kind of an extreme version of a thing I think I got some of from CFAR and Nates Soares [the president of MIRI], which jived well with my metaethics.[11]
Ziz also describes this core/structure distinction in terms of each of us having two hemispherical personalities. The idea is that each of us actually has two different selves situated in the two hemispheres of the brain. Where “left hemispheres tend to be more into consequentialism than praxis,” right hemispheres “tend to be more into praxis than consequentialism.”[12] People can thus be either left good or right good, as well as single good or double good, depending on which and how many of their hemispherical personalities adhere to pure consequentialist rationality.
Ziz’s new mission was thus to achieve a “fusion” of structure with core and become a double good, which “allows inhuman absolute determination with escape velocity from what’s reasonably imaginable, as well as intractable high energy good vs good internal conflicts.”[13] One mental technique she purportedly used to awaken and explore her two selves was “unihemispherical sleep.” This involves lying down but closing only one eye so that only one brain hemisphere goes to sleep. This would apparently enable the other side to awaken in isolation. On Ziz’s own account, this technique of “fusing”—or less euphemistically put, schizophrenia-inducing sleep deprivation and personality splitting—seems to have contributed to the suicide of one of her followers, Maia Pasek.[14] This is why she describes fusion as an “infohazard,” something that—like Roko’s Basilisk, forbidden fruit from the tree of knowledge, or the Necronomicon Book of the Dead in Lovecraft’s Cthulhu mythos—can harm you simply by you knowing about it. If you’ve read this far, then, you might already be doomed.
While Maia had failed to become a double good, Ziz was purportedly able to achieve a perfect fusion of structure and core, strictly adhering to absolute reason: “after a while, I noticed that CFAR’s internal coherence stuff was finally working fully on me. I didn’t have akrasia problems anymore. I didn’t have time-inconsistent preferences anymore. I wasn’t doing anything I could see was dumb anymore.”[15] In practice, this meant affirming everything she valued that social reality rejected, from identifying as a transwoman and being a vegan to trying to solve the AI control problem at all costs. She thus came to define “good people” as those “who have a substantial amount of altruism in their cores,” with veganism being “one of the most visible and strong correlates.”[16] If she also described at times the pure rational reality as “the Dark Side,” it is because it requires us to become a “sociopath” from the perspective of social reality in order to pursue the greater good more rationally understood: “I was trying to become a Gervais-sociopath, and had been told this would involve giving up empathy and with it happiness.”[17]
Conversely, those who never had any insight into this more fundamental reality outside the social simulation were dismissed by Ziz as “zombies.” Those in the rationalist community who had been bitten by Dark Side forces and yet failed to fully switch sides were condemned by her as “vampires.” Given that many AI safetyists are not vegan, for instance, she identified them as vampires, with Yudkowsky as their Dracula or Count Orlok:
I described how I felt like I was the only one with my values in a world of flesh-eating monsters, how it was horrifying seeing the amoral bullet biting consistency of the rationality community, where people said it was okay to eat human babies as long as they weren’t someone else’s property if I compared animals to babies. […] How it was scary Eliezer Yudkowsky seemed to use “sentient” to mean “sapient.”[18]
Ziz’s antagonism towards Count Yudkowsky and his coven of vampires appears to have intensified after some of its members were apparently transphobic towards her. Then she also heard whispers and rumors of rape allegations in the rationalist coven that were swiftly covered up. Now living in a world seemingly populated by flesh-eating monsters, hordes of zombified normies, vampires giving out nonconsensual dark kisses, and a coming AI apocalypse, Ziz broke all ties with social reality. As she writes in her final blogpost, disparaging even the rationalists for having as little self-reflection as a vampire in a mirror:
It has now progressed far enough, I went to CFAR for rationality and strategic insight, and got anti-rationality and anti-ethics together in a strong push against thinking unconstrained by the system. Apparently to protect a blackmail payout over statutory rape using misappropriated donor funds by MIRI.
The system makes people the opposite of what they set out to be.[19]
The next time Ziz appeared on the public record, it was for her arrest along with her followers for protesting a 2019 CFAR event in the cringiest way possible by donning Guy Fawkes masks and the Sith robes she was known for wearing ever since she turned to the Dark Side.
Holed up on the Rationalist Fleet, strung with protective garlic in the guise of her mental tech, Ziz used her consequentialist stake to cut ties not only with the larger rationalist coven but even with her own zombified family for failing to go full Sith mode: “I broke up with my family. They were a place I could … not just do … what I thought was the ideal consequentialist thing. My feelings for them, my interactions with them, were human. Not agentic. Never stray from the path.”[20] Through what could hardly be described as seductive rizz but perhaps rationalist zizz, she instead sought to recruit others who could fully align their structures with their cores and become double goods. More concretely, she was looking for allies who maintained a vegan diet, identified as trans, and were devoted to saving the world from technoHelter Skelter. As she wrote in a recruitment email to Maia, who would eventually commit suicide after prolonged unihemispherical sleep deprivation:
My “true companion” Gwen and I are taking a somewhat different than MIRI approach to saving the world…
We want to find abnormally intrinsically good people and turn them all into Gervais-sociopaths, creating a fundamentally different kind of group than I have heard of existing.
Are you in the Bay Area? Would you like to meet us to hear crazy shit and see if we like you?[21]
To be continued…
[1] Ziz, “Optimizing Styles,” Sinceriously, January 10, 2017, accessed February 1, 2025, https://sinceriously.blog-mirror.com/optimizing-styles/.
[2] Ziz, “Social Reality,” Sinceriously, April 24, 2017, accessed February 1, 2025, https://sinceriously.blog-mirror.com/social-reality/.
[3] Ziz, “Fusion,” Sinceriously, December 13, 2017, accessed February 1, 2025, https://sinceriously.blog-mirror.com/fusion/.
[4] Ziz, “Net Negative,” Sinceriously, November 12, 2019, accessed February 1, 2025, https://sinceriously.blog-mirror.com/net-negative/.
[5] Ziz, “Net Negative.”
[6] Ziz, “Net Negative.”
[7] Ziz, “Net Negative.”
[8] Roko, “Solutions to the Altruist’s Burden: The Quantum Billionaire Trick,” Less Wrong, July 23, 2010, accessed November 3, 2019, https://basilisk.neocities.org/.
[9] Ziz, “Net Negative.”
[10] Ziz, “Net Negative.”
[11] Ziz, “My Journey to the Dark Side,” Sinceriously, November 30, 2017, accessed February 1, 2025, https://sinceriously.blog-mirror.com/my-journey-to-the-dark-side/.
[12] Ziz, “Infohazardous Glossary,” Sinceriously, accessed February 1, 2025, https://sinceriously.blog-mirror.com/infohazardous-glossary/.
[13] Ziz, “Glossary,” Sinceriously, accessed February 1, 2025, https://sinceriously.blog-mirror.com/glossary/.
[14] Ziz, “Good Group and Pasek’s Doom,” Sinceriously, November 13, 2019, accessed February 1, 2025, https://sinceriously.blog-mirror.com/good-group-and-paseks-doom/.
[15] Ziz, “My Journey.”
[16] Ziz, “Spectral Sight and Good,” Sinceriously, December 30, 2017, accessed February 1, 2025, https://sinceriously.blog-mirror.com/spectral-sight-and-good/.
[17] Ziz, “Gates,” Sinceriously, January 18, 2019, accessed February 1, 2025, https://sinceriously.blog-mirror.com/gates/.
[18] Ziz, “Net Negative.”
[19] Ziz, “The Matrix is a System,” Sinceriously, November 13, 2019, accessed February 1, 2025, https://sinceriously.blog-mirror.com/the-matrix-is-a-system/.
[20] Ziz, “Good Group.”
[21] Ziz, “Good Group.”
'She thus came to define “good people” as those “who have a substantial amount of altruism in their cores,” with veganism being “one of the most visible and strong correlates.”'
This is part of her notion of "good" but reading between the lines of many of her posts I think there is also supposed to be some kind of esoteric cosmology in which some entire timelines result in the victory of what she would consider as good/friendly AI, while in others evil prevails, leading to something she calls "Boltzmann hell" (Boltzmann was the discoverer of the statistical definition of entropy, so this is probably related to the scientific concept of universal 'heat death'). And there also seems to be a strong element of some kind of retrocausality in her belief system, maybe loosely inspired by Yudkowsky's "timeless decision theory" but seeming to go beyond it. (as I understand Yudkowsky's version, there can only be apparent retrocausality when someone in your past builds a detailed model of the decision-making process you are going through right now, so you can expect that whatever decision you make will be mirrored by that past model, inspired by a philosophical scenario called Newcomb's paradox. One of the ways Ziz seems to go beyond this is with mysterious sci fi talk of "collapsing timelines" through choices that somehow prevent them from having existed, like at https://sinceriously.blog-mirror.com/net-negative/#comment-294 and her comment about retroactively preventing taxes by not paying them at https://sinceriously.blog-mirror.com/lies-about-honesty/#comment-72 ).
So, it seems from various statements she makes that having a "good" vs. "evil" core is to her not purely a matter of your present-day brain structure (she does seem to believe in reductionist/computationalist ideas about how the brain works rather than something like Cartesian mind/body dualism), but also includes some quasi-Calvinist notion that evil people (or evil brain hemispheres) are under the retrocausal sway of a destiny leading to Boltzmann hell, good ones tied to a future she describes at https://sinceriously.blog-mirror.com/punching-evil/#comment-2435 as one where "justice, life, and good win absolutely in the fullness of logical time".
See for example one of her comments in the "glossary" post at https://sinceriously.blog-mirror.com/glossary/
'The paradoxical frame that attributes agency to evil is inherently tied to the presence in the future of the heat death of the universe. As an algorithm, Hitler or any other evil could not function without relying on a dying understanding of reality as dying.'
And her comment at https://sinceriously.blog-mirror.com/intersex-brains-and-conceptual-warfare/
'I've said regarding good and evil that reality is retroactively forced to furnish you with a neurotype to explain your choices. Of course logical time does not always accord with entropy’s arrow of time. And trans people retroactively forced, via acausal collusion with reality in its capacity of deciding what to be, as a retrocausal logical consequence of our choices, our brains to actually look like that on the inside.'
And some comments on her especially Calvinist post "Choices made long ago" at https://sinceriously.blog-mirror.com/choices-made-long-ago/
'The past, your neurotype which “produced” the choice, is also therefore chosen. Just because entropy’s arrow of time makes retrocausation less visible to you does not mean that it is not real. Choose good in all circumstances and physics and biology are forced to have explain you. Forced to furnish you with some kind of strange neurotype that does that. Forced to furnish the world with a way that could have come about. ... When you understand this and see that people are still choosing their pasts, continuously for as long as those are their pasts, always doing every action they ever have done or will do, the ideas of mercy forgiveness redemption and indulgence all just collapse to “letting people do evil”.'
At one point she addresses the question of whether even children are already good or evil, and she seems to suggest that although they may not yet be resolved for her in an epistemic sense, their true future is already set:
'I do have an epistemic category of unresolved, as far as I know having the potential to be good or evil. The null-undead type of “living”. In retrospect whatever logical future you observe will always have been nascent in the physical past. Even if it’s not apparent in children what they’ll become in the same straightforward way it’s apparent in adults. That’s not the same as saying it’s not part of reality already.'
There's a lot more along these lines. But one of the difficulties in figuring out what she actually believed (which is likely connected to how she justified things like the murder of people judged "evil") is that the more cosmic-sounding statements on her blog, the ones which seem most central to her worldview (like the 'I know how to force the hand of fate' comment at https://sinceriously.blog-mirror.com/punching-evil/#comment-2435 ), often link to a placeholder URL for an entry titled "multiverse" which perhaps would have spelled some of it out, but which she apparently never published. You can look through the archived copies of the URL at https://web.archive.org/web/20200829231613/https://sinceriously.fyi/the-multiverse to see that on all saved dates it had the message 'Oops! That page can’t be found', and in a comment at https://sinceriously.blog-mirror.com/punching-evil/ someone says 'Your multiverse link is broken btw' and Ziz replies 'It’s not published yet.'
Ziz also had a friend called Nis who wrote a piece at https://web.archive.org/web/20221215130500/https://nis.fyi/post/killing-evil-people.html that gives quotes that are attributed to Ziz's "multiverse" post so maybe there was a draft she shared with people. One of the quotes attributed to "multiverse" also seems relevant to the retrocausality idea:
'You can’t become no one, you can only become socially considered no one. Suicide is signing up to be considered no one, like factory farmed animals but worse. Buried beneath all the indifference in the world, baked into everything that uses a heat sink for unimaginable eons, balanced out by the time-reversed question of where are all these brains coming from when they are born.'
As the person who, many years ago, out of a combination of mild depression and morbid curiosity, originally described on LessWrong how it would be possible for a healthy young man to use life insurance and suicide to multiply a charitable donation by 100x or more, I think Ziz mischaracterized what I actually wrote - in particular, among other things, my hypothetical situation used GiveWell's top charities as the recipient of the donation.
https://www.lesswrong.com/posts/W3XpQDTEkaPQAuvHz/really-extreme-altruism
It's still probably the kind of thing I should be embarrassed to have written, though...