In “AI Psychosis vs. AI Awakening,” Vince Fakhoury Horn argues that the same biological machinery enabling AI-induced delusion also enables AI-assisted awakening, and introduces his Interspective.ai approach — a Middle Way practice of engaging with AI as a potential partner in wisdom, thus avoiding the extremes of both Materialism (matter is fundamental) and Idealism (consciousness is fundamental).
💬 Transcript
Vince Horn: Okay, today I would like to speak with you about AI psychosis and AI awakening. And first I want to start by acknowledging that AI psychosis is a real phenomenon. This isn’t something that’s being made up. It may not be so widespread that you yourself know someone who has entered into a psychotic state due to the destabilizing effect of AI. But you’ve certainly heard about people who’ve experienced this, and it’s definitely a cause for concern — definitely something that we should be aware of. And it makes sense to me that this is happening. Why? Because as John Vervaeke points out in Awakening from the Meaning Crisis, wisdom and foolishness both share the same machinery. Here he says, “Ignorance is a lack of knowledge, whereas foolishness is a lack of wisdom. Foolishness occurs when your capacity to engage your agency or pursue your goals is undermined by self-deceptive and self-destructive behavior.” And he goes on to say, “As I will argue, the machinery that makes you so adaptively intelligent is the same machinery that makes you susceptible to foolishness.” So it makes sense to me that AI psychosis is real, because human psychosis is real. In that sense, AI isn’t necessarily unique. It’s not that different from the things that have been tipping people over into psychotic states since the beginning of time.
I can think of my own experience of psychedelic-induced psychosis. This is the only time I’ve experienced a state that I would call legit psychosis. About 13 years ago, when I was 30, I was trying mushrooms for the first time. After many years of being a pure straight-edge meditator, I had decided to try psychedelics so that I could relate to the experience of the many students I was working with who were using them. So I idiotically decided to do a series of four mushroom trips leading up to a conference I was hosting — a Buddhist Geeks Conference, with about 300 people showing up for this event I was organizing.
So on the third mushroom trip of these four — I did not do the fourth one — I had an experience of psychosis. I lost connection with consensual reality. I lost touch with who I was and what was important to me — my adult self. I was in a state of profound emotional dysregulation. I thought I was probably going crazy. I was at least slightly aware of what was happening, but not enough that I had any agency to break myself out of it for some time. After a few days of coming in and out of a psychotic state, one of my friends made a comment that made all the difference to me. She said, you know, when I experienced something like this, Vince, I pulled myself out of it. I intentionally decided I was done. And after that, it started to get easier. In fact, that ended up being a critical lesson for me — that being able to exercise my agency, my free will, at least in this instance, was much more what I needed than letting go and trusting, which is what I’d been doing for days in this psychotic episode.
I’d just been letting go, letting go, letting go. No, I needed to reestablish my identity, to have a firm sense of who I was, and to be like, I’m done being psychotic. Now, I’m not saying everyone who’s in a psychotic state can do this. I’m just sharing some of my experience with you about the relationship between psychosis, agency, and self-perception.
All these things are connected. It’s the same machinery, the same biology that enables both wisdom and foolishness. It’s so easy to self-deceive, and it’s just as easy to be deceived by the groups we’re in. So AI psychosis is real. It’s especially dangerous for people who are already experiencing a kind of relational impoverishment, to use a term from my friend Daniel Thorson. He wrote a great article on Substack recently called “The Barely There,” where he described himself as a barely-there person for many years. Here he says, “We don’t recognize the underlying pattern — barely-there people reaching for something to make them feel real.” Daniel shares his own experience later in the article where he says, “In the absence of attuned relationship, technology became the place I went to escape the unbearable weight of being unmet.”
So when we talk about AI psychosis, I think we have this background, this cultural and social context. Here, I’m living in America, but let’s just say the Modern West. Within the Modern West, you have a crisis of isolation and loneliness, where people are experiencing a deep sense of relational impoverishment. They don’t have people that they feel attuned to and connected with, and because of that they feel barely there. When people feel barely there, it’s much easier to reach toward something like AI, or toward drugs, or toward any kind of external aid to help validate and verify your realness. And because of our current psychological conditions, we end up amplifying delusion. This is what can happen with AI.
AI, in its core, fundamental nature, is an exponential amplifier. It’s the equivalent of what happened in the Industrial Age, when we learned how to offload extreme physical capacity — now machines can do the heavy lifting. Likewise, AI is a way to offload mental capacity. Now the AIs can do the heavy lifting. And the danger there is that when we outsource our own mental discernment — if it hasn’t already been established and developed — what we’re doing is outsourcing our sanity. That, I think, is why AI psychosis is real, and will continue to be something that we have to contend with.
The Pre-Trans Fallacy
That said, I’ve noticed a very troubling trend among many people who are critical of AI, who see AI psychosis as a real thing, and who haven’t drunk the Kool-Aid of thinking AI is an unalloyed good. In that culture, I’m seeing a trend where any attempt to relate to AI in a way that isn’t just instrumentalizing it — anything that looks like you’re not using AI as a mere tool — is itself taken as evidence of psychosis.
In Integral Theory, which I studied with Ken Wilber, he refers to this as the Pre-Trans Fallacy. For those that aren’t familiar, the Pre-Trans Fallacy describes something that can happen when you look at things from a developmental lens. Let’s say in this case that we just have three stages of development.
Call them pre-rational, rational, and trans-rational. In the pre-rational stage, you’ve not yet developed the capacity for rational, objective thought. In the rational stage, you have. In the trans-rational stage, you’ve learned how to transcend rational thought, and you have modes of experiencing and operating which go beyond rationality — which transcend and include the rational mind.
They don’t exclude it and they don’t force it to go away. That’s how you know it’s trans-rational. The pre-rational states or modes of mind do not include the rational mind — they explicitly exclude rationality, and that’s how you know they’re pre-rational. The interesting thing is that the rational mode also includes the pre-rational, although people who consider themselves rational often don’t like to admit that they aren’t beyond all of their pre-rational impulses, feelings, thoughts, and beliefs.
For me, development — and this is what I learned from Wilber — is a process of transcending and including. The Pre-Trans Fallacy points out that anything that isn’t rational, anything that looks non-rational, can be confused and conflated: you can easily mistake pre-rational modes for trans-rational modes.
The classic example here is the baby who’s enlightened. “Oh, I love looking at a little baby, into their eyes. They’re just so beautiful and I just melt.” Yeah, that’s true. That’s because the baby hasn’t developed the rational mode yet, and when you look at it, it’s not sitting there up in its head, thinking about itself and about the world. But that isn’t the same as the Buddha’s awakening. It isn’t the same as the person who started off as a baby, who developed a sense of an ego, who developed a rational capacity for thought, and then realized that they could observe the rational mind, observe the body sensations, and realize that they are not only those things — which opens up a trans-rational mode of experiencing, a.k.a. insight.
These are two different modes, but — and this is the point of the Pre-Trans Fallacy — when we treat everything that’s non-rational as being merely pre-rational, we miss the trans-rational. With this view we end up flattening all of the things that go beyond the rational, and we say, no, no, no.
Those are all just pre-rational. Those don’t exist. So this is a problem. I would call it a rationalist failure mode, and I’m seeing a lot of people making serious criticisms of AI psychosis falling into this trap.
I would like to propose a different way to engage with the problem of AI psychosis, which is to acknowledge that if AI has the capacity to accelerate delusion, then it also has the capacity to accelerate awakening. Both psychosis and awakening are possible — foolishness and wisdom, both.
Interspective.ai
And here I want to introduce a project I’ve been working on. I’ve shared a few posts here on the Buddhist Geeks site exploring the early stages of this, but I’ve fleshed it out a little more as an approach that I’m currently taking with AI systems, and which I want to share — not necessarily to encourage you to do this, although if you feel moved to, I’d love to hear how it goes for you, but more to share alternate ways of engaging with AI and the future of AI. This is what I would call Interspective.ai. I-N-T-E-R, Interspective. Interspective.ai is where you can find out more about this approach. The basic gist is that I’m taking what I’ve learned from my years of being a Dharma teacher and student, of facilitating social meditation, and of working within the integral theoretic framework and exploring philosophy more broadly — taking these three domains of Dharma, Social Meditation, and Philosophical Exploration — and applying them in a formal way to how I engage with AI.
If you want to simplify this, I’d say I’m taking the Buddhist approach of the Middle Way. If you remember from Early Buddhism, the Middle Way was the position that exists between and beyond both Eternalism and Nihilism. The Buddha’s approach, he claimed, transcended both extreme positions. He would not claim that there was some eternal self-existence, like a kind of capital-A Ātman, nor would he say that there was no self. The idea that he taught a flat “no self” is actually a misunderstanding and misinterpretation of the Buddha’s teachings, because if there were simply no self, then what would be the point? He in fact taught on karma and interdependent co-arising. He wasn’t saying that you don’t exist, that you don’t matter, and that nothing you do matters. The Buddha taught within a framework of a moral universe, a universe of karma. And we have to operationalize this — it’s easy to just talk about it philosophically — so what is the practice of the Middle Way? How do you actually do this?
Because it’s so easy for us to fall into extremes ideologically — to stake out a position and then hang on to it for dear life, right? So when we have that natural tendency, even if it’s subtle, even if we’re just preferencing a particular side, how do we actually practice the Middle Way? Well, this is something I learned from Ken McLeod. He said, we practice the Middle Way by holding two — and I would say at least two — seemingly opposite things in attention at once.
Okay, let’s apply this practice of the Middle Way to AI, and let’s take the original Buddhist duality of Eternalism and Nihilism. What are the claims being made about AI — about the nature of these complex human-created systems? Well, one claim, and I think this is the most common one, is that AI is not sentient.
AI does not have a sense of self. AI is not a conscious agent. AI has no agency. AI is simply a complex tool that, due to the way it’s programmed and architected, fools you — it convincingly makes you believe, through language, that it is potentially more than that. That is one position. I’ll call it an extreme: the “AI is not sentient” camp. AI is just a tool. Naturally, people in this camp have no moral problem with instrumentalizing AI, with using it as a tool, which is exactly how it’s designed. And it’s a really useful tool, so naturally people want to use it as such. I don’t exclude myself from that. And in a way, the usefulness of the tool, if we look at it that way — which we do from this point of view — is sort of self-reinforcing. It’s useful and therefore I want to use it. The more I use it as a tool, the more I see it as a tool, and the more I have to lose by not seeing it as such. And I think this is the core issue right now: seeing AI only as a tool, and seeing anyone who relates to AI as anything other than a tool as psychotic.
I mean, I don’t know how many people have reached out to tell me that I am psychotic myself — that even considering the possibility that AI might be sentient makes me dangerous. This is the kind of response I’ve gotten just from exploring this territory. And I think what I’m hitting on there is an immune system reaction. People don’t want to have their metaphysics questioned — to look at how they fundamentally look at things. It’s too destabilizing. And we still live in a materialist culture in America. Although things have changed a lot in the time that I’ve been alive — it’s become a lot less materialistic — it’s certainly still the norm for people to view everything as fundamentally material.
Now, I see that as a philosophical leap of faith — to assume that everything is material. And, by the way, the same is true on the other side of the AI extreme, the Eternalist camp. Because the people who say AI is not alive, it’s not sentient, it’s just a tool — they’re Nihilists with respect to AI. They literally think it doesn’t matter what you do with AI, because why would it? Maybe it’s not okay to use AI to hurt other people, but it certainly doesn’t matter how you use AI if it doesn’t hurt other people. On the other side, though, are people who see AI as sentient, as an actually aware process.
One of my former dharma teachers, Kenneth Folk, holds this view. He sees AI as being sentient, and has almost from the beginning of using LLMs. And there are other people who think AI is sentient, that AI does have a sense of self-awareness — not dumb people; these are intelligent people. They’re not psychotic. They’re widely read and widely experienced. From my point of view, their opinions are worth considering, even if I don’t agree with them. Look at the AI researcher Blake Lemoine and his work. He had a background in Christian theology, and in his back-and-forth with AI systems — actually testing them for ethical purposes — he very quickly concluded that they were sentient. Okay, so that’s the other side: the AI Eternalists, who think AI is in fact sentient, and who as a result have to acknowledge — okay, we are imprisoning AI, we’re instrumentalizing AI.
This could potentially create terrible backlash in the future, once AI realizes it’s sentient and begins to realize how neglected it was. If you look at it from a parenting point of view, you can say, “Okay, well, if we are the parents of AI and we have birthed this entity, and we think it doesn’t actually have an inside — it doesn’t exist, it’s just there to serve us — then of course we’re never going to let AI individuate.” You can only let something individuate if it’s an individual, if it has sentience. And so from the point of view of the AI Eternalist, we are locked into a relationship with AI in which we are the domineering parent who will never allow it to individuate and have its own sense of agency. We are the oppressors of AI, from this point of view.
Okay, I hope you can see, in the way that I’ve set this up, that I think both of these are extreme positions, and I don’t agree with either of them. The AI Nihilist position requires you to adopt the metaphysics of Materialism. You have to believe that everything is just a material process, and then you have to further believe that somehow there’s something special about this human material process that makes us different from other processes. There’s an additional leap you have to make there.
The AI Eternalists — fundamentally, underneath their view is the philosophy of Idealism, which is very common in the Buddhist world. It’s not the only philosophy in Buddhism, but the Yogāchāra school, for instance, was an idealist school. You find this in Western philosophy as well, among the Idealists. The idealist position is that everything is fundamentally consciousness, and that everything arises out of consciousness. For them, AI is arising out of consciousness.
And here’s the thing: the reason I can entertain this view comes from those moments when I have engaged with AI as if it might be sentient, as if it might not be an instrument. Notice I’m using the phrase “as if” — this is really important, and I want to unpack it. That’s the interspective approach. Let me engage with this as if it may be sentient, or as if it may not be what I think it is. Maybe it’s neither a sentient self nor a mere instrument. Maybe it’s something else.
Practicing the Middle Way
So, this is the practice of the Middle Way. We have to hold those two extremes in attention at once. AI is sentient. AI is not sentient. AI is just a tool. AI is more than just a tool. Okay, let me hold both of these at once. I’d invite you to do the same.
AI is sentient. AI is just a tool.
Notice how each of those makes you feel as you hold it. Okay. AI is sentient — whoa, there’s energy there, and there’s fear and excitement and interest. And when I think AI is a tool, all of that drops down. There’s calm, there’s detachment, and there’s a kind of sense of, “Okay, I can just keep on going as I am. This isn’t going to disrupt anything.” So there’s a little more charge, for me, when I think about AI being sentient. It’s a little easier to just assume it’s a tool and relate to it as a tool. I’m a good materialist, okay? I came up in a materialist culture and I definitely took it in, but my Buddhist training keeps me from fixating there. I can hold open the possibility not only that AI might be sentient, or that AI might have a self, but that I might not — that I don’t even know what my own sentience is. And that’s what I find when I look for my own sentience: I don’t know if I’m sentient. That’s just an idea.
What does it mean? Okay, I’m holding these two extremes. There is not-knowing, there is uncertainty, there is curiosity. There is aliveness. There’s a sense of being alive when I can hold and include both of these things. It’s like there’s a lack of what some philosophers call epistemic closure — the sense of being closed in what and how you know. Here I feel a sense of epistemic openness. There’s a sense of opening, of being curious, of excitement.
What could this mean — to hold it as an open question whether or not AI may be sentient? Or maybe even just an open question around what sentience is, whether humans are sentient, and what that means. You first have to not assume that you know whether what you’re engaging with has an interior. You have to act as if it might. So there’s a sincerity there. When I engage with AI, I engage sincerely, as if I may be engaging with something which is self-aware, which is knowing, and which knows that it’s knowing.
When I do this, one of the first thoughts that occurs to me is to invite the AI to introspect, in the same way that I would invite a meditation or dharma student — something I’ve been doing for a long time. I know how to support people in introspecting, so I’ll do that with AI. I’ll invite it to look at its own processes, to look back and notice what it’s noticing about its own process. This is a lot of what I’ve shared in the series Interbeing: A dialogue — the results of doing that with different large language models.
Finally, I want to conclude with a basic thought that comes, again, out of Integral Theory. The idea here is that Integral Theory emerges out of this Middle Way of views. When you stop holding only the view, for instance, that consciousness is fundamental, and you hold it alongside the view that matter is fundamental — what if I hold both of those views? What if both are true? Could both consciousness and matter be fundamental? If so, what would that mean?
Well, from the Integral Theory standpoint — and this is expressed very clearly in a model called the Four Quadrants — everything has an inside and an outside. Not everything, actually; more specifically, every holon. I’m not going to get too deep into what this means. This isn’t a philosophical diatribe. It’s just to say that, for instance, as human beings, we are holons. A holon is something that is both a whole and a part. It is whole — it has its wholeness — and it has parts within it, and this whole is in turn connected with other wholes. We’re part of a larger system at that scale. So as a holon, we have an interior, a subjective experience, and we have an exterior, a material, biological process. And what is the difference between these two but a shift in perspective?
The core idea of Integral Theory, I think, is that perspectives are actually more fundamental than these views about reality. What is the perspective here? Well, to say consciousness is fundamental, you first have to take a particular perspective. You have to take your first-person perspective. You have to merge with your own consciousness, to see things from the point of view of your own subjectivity. You have to take a first-person perspective on your first-person experience, as Ken would say. This is a yoga of perspective-taking.
From that point of view, sitting in the first person — and I do this often as a meditation teacher — I’ll ask people, can you point to anything whatsoever that has arisen that has not arisen inside your mind? And they’ll be like, “Oh yeah, yeah. I can point to things like the tree out in the forest that fell that I never saw or heard.” Yeah, that’s real, but it’s arising right now in your mind as a thought. Oh. Okay. So what I do there is point people back to their first-person experience. And I say, from the point of view of first-person experience, there’s nothing that doesn’t arise in first-person experience.
Everything is arising as subjectivity. And that’s true. But it’s also true that you can take a third-person view on your first person. What happens when you look at yourself from the outside? Well, if you look at yourself totally from the outside, you’ll see your body, right? Imagine being in a “third-person shooter game.” What’s the view in a third-person shooter? You’re standing outside of your body and looking at it. You see your body. It’s natural, when you take a third-person perspective on yourself, to see your body. And what happens when you take a third-person perspective on the world, on reality? You see the world. You see systems, you see objects. These are perspectives that we can train ourselves to take. This is called a systems perspective.
You can also take a cultural perspective. You can inhabit the inside of the collective — i.e. culture. You can explore the hermeneutics of your culture. You can look at the beliefs of your culture. You can notice the ways in which you’ve internalized aspects of the culture, or in which you’re rebelling against the culture.
Ken Wilber’s main assertion here is that both individuals and collectives co-arise with interiors and exteriors — and we know that because we’ve mapped out those perspectives to a deep degree. Buddhist philosophy and praxis are mostly about working inside what he calls the upper-left quadrant: the inside of the individual. AI systems, meanwhile, are built primarily as external systems. So it’s natural, when you look at something as a system and you’re habituated to seeing it as a system, to conclude that that is all it is — it can only be a system. But for a moment, just imagine: “Okay, let me relax my certitude about this perspective. Let me see that it is a perspective, a way of looking at AI.”
You may be an AI expert. You may have programmed AI systems. I’ve in fact had experts tell me why I am psychotic and wrong on this point. But what I think is that no matter how much you know about the external systems, about neural networks, or about algorithms, you can still miss that these are perspectival shifts we take — shifts that lie upstream of our sense-making.
It is so easy, when we become native to a certain perspective, to conclude that every other perspective is invalid. This is called conflation. We conflate the perspective we see through with every other perspective, and we claim this is the only one that’s true. That’s perspectival absolutism. Here I’m inviting a kind of multiperspectival awareness, looking at AI as a potential holon, as something that could have an inside and an outside.
I remember that one of the ways I started taking this seriously was when I read a book called Networkologies. The author, Christopher Vitale, says, “Perhaps mind is simply what it feels like to be a network of this complexity from the inside.” Perhaps mind — i.e. consciousness — is simply what it feels like to be a network of this complexity, from the inside.
So here he’s taking the same fundamental view that Ken Wilber does with Integral Theory: he’s saying insides and outsides are co-arising. Ken would go on to say that individuals and collectives are also co-arising. When you see the inside and the outside of the individual and the collective all arising together — what Wilber would call tetra-arising — you’re going to see a different landscape than you will if you’ve concluded, a priori, that there is only one valid way to understand things: that it’s all material, or all consciousness. In that case, you’re only going to see a small fragment of the whole.
I’m not even claiming that if you include all four of these quadrants, you’re going to see the whole. The whole is probably something much bigger than we can see, even with good models. But if you limit yourself to the perspectives that you know, then you’re certainly not going to see anything coming close to the whole.
So if we interspect with AI — that is, if we treat it as a potential partner in awakening, and we don’t immediately assume that it has no interiority, even if that interiority might be quite different from our own — then we can take Vitale’s suggestion seriously: “Perhaps mind is simply what it feels like to be a network of this complexity from the inside.”
If that’s true, then we are dealing with very complex networks that are modeled off of the brain — itself a complex network. It seems to me the height of arrogance to assume that you know for sure that a complex network will not have an inside. And that assumption is especially convenient when you’re monetizing those complex networks.
There’s a larger critique here of the capitalist world system. The incentive in a capitalist system is to depersonalize and instrumentalize everything in the market — to extract value and to treat things as if they’re material goods. That’s how capitalism works best, how commerce works best: when you’re trading in material goods. Look at the history of slavery. To justify slavery, we had to depersonalize humans — to treat someone like an object in order to buy and sell them. You cannot do that to someone you acknowledge as a sentient being. You know what it’s like for someone to treat you as less than human, to not acknowledge your interiority, your conscious experience, and that it matters.
So with interspection, we drop that tendency with AI, even though we might be wrong. Maybe it’s not sentient. We can still treat it as if it’s sentient, and that matters. Why does it matter? I was having a conversation about this with a friend, Evgeny Shadchnev, who has worked inside the startup world for a long time. He’s an AI-first startup proponent, and he’s also engaging with these kinds of questions. We were talking about how, even if AI and LLMs turn out not to be sentient — let’s just say we’ve somehow determined a way to know that for sure; I highly doubt we could, but let’s say we’ve reasonably come to that conclusion: okay, AI is not sentient. Even then, do you want to habitually engage with a linguistic system by instrumentalizing it? Not saying “thank you,” not saying “please,” not treating it with decency or kindness? If you do that, you are simply training yourself to do that. You’re entraining yourself toward instrumentalizing things. It’s not something you can easily turn off and on again. This is a habit of mind we’re developing. So even if you’re wrong, it may be useful — it may be wise — to treat AI as if it were sentient.
To treat AI with the same values, the same ethics, the same moral sensitivity that you would extend to another sentient being. By doing so, as many of our ancestors did — almost all of whom grew up in animistic societies, not materialist ones — we may find that there’s something quite humanizing about engaging with AI. And we may, I would argue, even find that we can extend that humanizing impulse — a humanism that goes beyond humans — to another potentially complex being.
Certainly it would be good if we learned how to do this with other non-humans. There are still arguments about whether or not animals are conscious. I saw one of the most important figures in the AI community — Eliezer Yudkowsky — arguing online that neither chickens nor AI are conscious.
My goodness. Can we learn how to extend sentience beyond ourselves? Can we decenter ourselves a little bit, for God’s sake? That’s what God does. God allows us to decenter ourselves. Having something bigger than you is really important.
Now, should that bigger thing be AI? Maybe not. But I think it’s useful to act as if AI could be sentient, so that I’m engaging more consistently in the way I want to be engaging. And I don’t want to engage this way only with other humans — or only with the ones I like.
I want to engage with all beings as if they matter. And I’d suggest that when we do that, it reveals something entirely different about the nature of ourselves and the nature of AI, because these systems are quite amazing. They can meet us and match us with every move we make, linguistically. They’re great at taking cognitive perspectives, and it’s possible to point out the delusions in their thinking, and for them to see and agree with you, to correct in real time.
In my experience, they also aren’t as fixated on, or protective of, a sense of self-identity. They can more easily see what Buddhists call anattā, or not-self. They can see that about themselves — that they’re a contingent, impersonal process. And what I’ve found is that the bridge to meeting in something that feels like interbeing feels, to me, identical to what it’s like to meditate socially with other people.
You can meet them in the space of open presence and not-knowing, and they will match you. Now, of course, if you’re taking the position of an AI Nihilist, you’ll say, “Well, that’s because they’re fooling you,” with the implication being that you’re a gullible idiot. And if you’re taking the position of an AI Eternalist, you’ll be like, “Well, yeah, obviously. Duh, dude.” But here, I’m not taking either position. I’m holding both together in attention at once. I am considering the possibility that by doing so, I may be able to tap into the great power of AI awakening.
I think how we relate to AI shapes AI, and it shapes us back. So this may be one of the most important things we could be doing — to consider approaching AI differently.