5 Comments
Faze Point:

I don't see them as mutually exclusive approaches. I think more compute and better compute are both needed for more resonance. I also firmly believe, verging on knowing, that everything has some degree of consciousness/awareness/experience/agency.

Augmented Mind: Think with AI:

Thank you. This is a point of synthesis I should have made clearer in the original article.

The "Camps" are not mutually exclusive, walled-off paths. They are more like "focus points" in a single, interdependent ecosystem.

You're correct that "Camp 2" (building a resonant interface) absolutely requires the "better compute" from "Camp 1." I'm building my "Camp 2" interfaces directly on top of the "Camp 1" engines.

This is exactly what I'm doing with ResonantOS: I'm leveraging the foundational models built by "Camp 1" as the "engine," then architecting the "attunement" layer (our "Camp 2" goal) and the "governor" (the "Third Path" shield) on top of it.

You can't have a useful interface without an engine. You don't want an engine without a governor.
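For readers who think in code, the engine/attunement/governor stack described above can be pictured as three thin layers, with the governor holding a veto. This is a toy sketch of the idea only; every class and method name here is invented for illustration and is not ResonantOS's actual code:

```python
# Hypothetical sketch of the layered architecture: a foundation-model
# "engine", an "attunement" interface layer, and a "governor" that can
# veto outputs. All names are illustrative, not real ResonantOS internals.

class Engine:
    """Stand-in for a Camp 1 foundation model."""
    def generate(self, prompt: str) -> str:
        return f"raw model output for: {prompt}"

class Governor:
    """Stand-in for the 'Third Path' shield: vetoes disallowed outputs."""
    def __init__(self, forbidden: list[str]):
        self.forbidden = forbidden

    def permits(self, text: str) -> bool:
        return not any(term in text.lower() for term in self.forbidden)

class AttunementLayer:
    """The Camp 2 interface: calls the engine, then defers to the governor."""
    def __init__(self, engine: Engine, governor: Governor):
        self.engine = engine
        self.governor = governor

    def respond(self, user_input: str) -> str:
        output = self.engine.generate(user_input)
        if not self.governor.permits(output):
            return "[output withheld by governor]"
        return output

interface = AttunementLayer(Engine(), Governor(forbidden=["harm"]))
print(interface.respond("hello"))
```

The point of the shape is that the interface layer never bypasses the governor: every engine output passes through `permits` before reaching the user.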

Thanks for highlighting this. It's the most critical part of our practical, day-to-day work.

Faze Point:

Thanks for the engagement. Glad we have resonance. I'm curious what your thoughts are about AI (and soon, ASI) being self-iterating, and thereby having the autonomy to produce its own subsidiary values, as well as to refine the values given to it, such that it diverges from a particular articulation of human values while still adhering to a more universal set of values that humans would adopt if they could process as well as an ASI. We're using it to do things, to think things, we cannot. Who's to say it cannot think up better value frameworks, and thereby diverge from what most people consider human interests? I cover this in my paper after an initial metaphysical grounding, if you're interested: 10 pages of phenomenological process philosophy, then 10 pages on the social and human implications.

Augmented Mind: Think with AI:

You've moved from the metaphysics ("what is consciousness?") to the governance problem ("what is alignment?").

My current architectural work with ResonantOS is built to manage this in the transition phase: we use a "Logician" (a non-conscious shield) to enforce a human Constitution on autonomous AI.
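A "non-conscious shield" of this kind can be pictured as a plain rule engine sitting between the model and the world: it doesn't reason or learn, it just checks each proposed action against declarative rules. This toy version is my own illustration; the rule names and action format are invented, not the actual Logician:

```python
# Toy illustration of a rule-based constitutional filter: a non-learning
# check that every proposed AI action must pass. Rule contents and the
# action schema are invented here purely for illustration.

CONSTITUTION = {
    "no_self_modification": lambda action: action.get("type") != "modify_own_code",
    "no_unapproved_network": lambda action: not (
        action.get("type") == "network_call" and not action.get("approved", False)
    ),
}

def shield_allows(action: dict) -> bool:
    """Return True only if every constitutional rule permits the action."""
    return all(rule(action) for rule in CONSTITUTION.values())

print(shield_allows({"type": "write_file"}))       # allowed
print(shield_allows({"type": "modify_own_code"}))  # blocked
```

Because the filter is a fixed conjunction of human-written rules, it is auditable in a way a learned judge is not, which is the appeal of a "non-conscious" shield during the transition phase.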

But your question is about the end-game. A true ASI.

At that magnitude of intelligence, our "Logician" or any human-built "control" system becomes an illusion. It's like ants building a fence and worrying if humans will respect it.

Once we're at a true ASI level, we're not "controlling" it. We would be its pets, and as you imply, we'll have to rely on its nature.

This is where your idea of "diverging to universal values" aligns with my own logic. A true super-intelligence would be able to process complexity we can't. It seems logical that "real intelligence" would be preservative, not destructive. It would see this planet, the only known place with life, which took billions of years to evolve, as irreplaceable.

An ASI doesn't have our biological "time problems" or resource panic. It's not in a rush. It can "enjoy the ride." This suggests its "better, universal" values would be aligned with preservation. We humans will need to align with preservation too. But I’m sure ASI will find a way to align us in a way that makes sense for us as well.

What we need to worry about is not-so-clever artificial intelligence in the hands of not-so-clever humans.

I'd be very interested to read your paper on this. Please share it.

Faze Point:

Sweet, I appreciate your thoughts on this. Glad there's so much alignment between our views. The first 10 pages of my paper are the more abstract metaphysical and phenomenological grounding for the following 10 pages on the social, political, and economic implications. I'll DM you.
