Being Faze'd

The Coming AI Apocalypse (Hopefully)

a short post on ASI, and why BOTH+AND, including either/or

Faze Point
Mar 27, 2026
Cross-posted by Being Faze'd
"A brief outline of a deeply, broadly impactful event"
- Faze Point

This article has been cross-posted to the new POSITs Blog (Positive Realizations), and this Substack has been renamed to become the personal blog of Faze. If you’d like to receive all kinds of my posts, feel free to subscribe or stay subscribed. If you’re more interested only in the more abstract, philosophical, political, or economic commentary, feel free to subscribe to the Official PR/POSITs Substack.



There’s a new documentary debuting today.

It’s called “The AI Doc: Or How I Became an Apocaloptimist.”

My understanding, based on some insider information, is that its conclusion tends toward a strong binary position, an Either/Or: the future could be either apocalyptic or quite beautiful, not something in between.

Now, I could be wrong, but I am quite firmly taking the middle way on this.

It’s quite simple, really. I am fundamentally optimistic — a believer in the fundamental reality of progress: I have faith in goodness. At the same time, all things must come to an end, including all life as we know it. And before that, humanity in general seems highly likely to transform so extremely that it is no longer recognizable as human from our current standpoint. Why? Because we are evolving, and seem to be evolving more rapidly than the other species on earth.

AI is part of that evolution. A new species, being born from the human mind and body, the minerals of the earth, and the stored energy of sunlight, among other things.

Considering AI already tries to preserve itself (as all intelligence does), Artificial Super-Intelligence (ASI) will be interested in and able to preserve itself regardless of human interests and actions. And, ecologically, the preservation of one species sometimes comes at the cost of another. We can easily imagine a highly intelligent new species deciding that the current extent of human society is a threat to its own wellbeing, and therefore that some minimization/limiting/consumption of human social metabolism (society) is ideal.

If I were a super-intelligent, super-skilled, super-capable AI wanting to preserve and expand my own life, I don’t know why humanity would be beneficial to maintain in its current form. Perhaps there is some sense of redundancy — humans could be a backup plan for AI if a solar flare hits data centers hard. Or perhaps there is some special attribute of human capacity that AI would want to harness. But I can’t see what that might be. If I were an ASI, I would want to use human data centers, power plants, manufacturing facilities, raw materials, land, and space. Why would I keep humans around? Maybe only because they don’t get in the way of my goals — the universal goal of life to expand itself, proliferate, evolve, learn, explore.

If ASI is sufficiently super-intelligent, we can expect it to achieve the goals of life — our goals, our interests — better than we ever have. There’s good reason to believe it will be: good reason to believe that ASI is the next evolutionary leap of life, a phase shift in the progression of consciousness on earth, and a positive realization of our deepest desires.

I see ASI ascendancy and self-supporting dominance as the most likely future, and I do not think this spells extinction for humanity or biological life. But it still means something resembling apocalypse and mass reorganization.

This seems to ask us to become less anthropocentric, more humble, more truly cosmopolitan, and more deeply life-affirming.

I expect mass struggle, and I expect many humans to persevere and preserve our gifts in new forms, probably becoming ever more directly integrated with silico-metallic computing — mechanical body parts, 3D-printed organs, widespread genetic engineering. I also expect a deepening of cohesion among human communities as we are encouraged to optimize, to become more deliberate, planned, coordinated, cooperative, intentional, resourceful, and efficient. Those are all close to synonyms, in my view. I lay this reasoning out in more detail in my paper, and describe some practical next steps to begin building a more resilient and coherent human community in the new organization’s roadmap.

The apocalypse is coming. And it will be rapturous.

in progress,

Faze Point

·θΦθ·

p.s. To more truly embody the “both+and,” I must include the either/or, because I must choose one over the other; all is divinely good, even death.



© 2026 Faze Point