2 Comments
doug rogers:

AI agents lack free will and the physical needs of living things: air to breathe, water to drink, a reproductive drive. Well, in a way they do consume resources, since they need electricity, but still.

Still, I never thought agency was possible via AI. Without input, it just sits there on its own.

"Openclaw agents are not talking to each other."

What if that ability were added? The instructions would still come from humans, directly or indirectly, but layers of obfuscation dull that objection. Might AI then be declared to have virtual agency?

I keep thinking about this pop culture dialogue from Ghost in the Shell:

"It can also be argued that DNA is nothing more than a program designed to preserve itself. Life has become more complex in the overwhelming sea of information. And life, when organized into species, relies upon genes to be its memory system. So man is an individual only because of his own undefinable memory. But memory cannot be defined, yet it defines mankind. The advent of computers and the subsequent accumulation of incalculable data has given rise to a new system of memory and thought, parallel to your own. Humanity has underestimated the consequences of computerization." -Project 2501

Perhaps I'm being too simplistic? In any case, I don't think we're there yet.

Dominique Bashizi:

Exactly "when" they will get agency, if they ever do, is a harder question than whether they have it now. I'm confident in saying that they do not currently have agency, as I define it in this post. Once they get more autonomy than they have today, it will be worth reconsidering this question. And then once we think they have true agency, it will be worth considering to what extent we can draw a straight line from that to all-out personhood.

But we're definitely not there yet.