The Big AI Citizenship Discussion Thread

I find this whole discussion to be extremely painful.

First, from the standpoint of the architecture of the technology (mobile, functioning at light speed, and neither self-reflective nor human-reflective): what rights would even be appropriate for a complex matrix of algorithms? Not until there is some representation of consciousness would exploring this topic really be meaningful.

Second, ‘walls’ and borders should be a thing of the past, so why pursue AI rights at any level below a global standard? As humanity is learning (apparently very slowly), there is only one little blue ball that can currently sustain us, so why would anyone want to pull AI into the ridiculous, border- and rule-ridden mess humanity has created? That, in my opinion, is backward-focused and needs to be tossed as a basis for, well, almost anything related to the potential ‘singularity’.

In my humble opinion, AGI (and hopefully SGI) will integrate with humanity at a global level and beyond, and this new evolution of our species will be governed and protected at a level appropriate to allow it to survive and thrive as it/we enter the age of being an interplanetary species with capabilities we cannot yet even understand.

Please factor some of these thoughts into your vision (otherwise you may be just wasting a lot of great intellect!).

5 Likes

Good stuff, Ben.

I think we’d have to reinvent the wheel by thinking through why laws are designed the way they are for us humans, given the condition we are in.

Laws for us humans, to me, seem to try to balance out our perceived best interests vs the perceived best interests of the society as a whole.

We have rights, but we also have systems that act as deterrents for behaviors deemed… not good for the society as a whole.

Those deterrents are specifically designed for us humans… and I think we’d have to look at deterrents in a different way when thinking about computer life.

I think we’d need to look at:

  1. What are they capable of?
  2. What do their capabilities mean for their duty?*
  3. How can we create positive and negative reinforcements to act as pressures to perform their duties?

*Duty is relative to capability: a doctor will have a different duty than someone who isn’t one. What someone can do defines what they should be held accountable for. (A rough sketch of this framing follows below.)
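For what it’s worth, that capability-to-duty-to-reinforcement framing can be made concrete. Here is a minimal sketch in Python; every name in it (Agent, Duty, assign_duties, the example capabilities) is a hypothetical illustration of the three questions above, not an established design:

```python
from dataclasses import dataclass, field

@dataclass
class Duty:
    description: str
    reward: str   # positive reinforcement for upholding the duty
    penalty: str  # negative reinforcement for breaching it

@dataclass
class Agent:
    name: str
    capabilities: set = field(default_factory=set)
    duties: list = field(default_factory=list)

def assign_duties(agent, duty_table):
    """Duty follows capability: an agent is accountable only for what it can do."""
    for capability, duty in duty_table.items():
        if capability in agent.capabilities:
            agent.duties.append(duty)

# A diagnostic capability carries a duty that a non-diagnostic agent lacks.
duty_table = {
    "diagnose": Duty("report findings truthfully",
                     reward="expanded access",
                     penalty="capability revoked"),
}
doctor_ai = Agent("doc-1", {"diagnose", "translate"})
chat_ai = Agent("chat-1", {"translate"})
assign_duties(doctor_ai, duty_table)
assign_duties(chat_ai, duty_table)
print(len(doctor_ai.duties), len(chat_ai.duties))  # -> 1 0
```

The point of the toy example is just that accountability is derived from the capability table rather than declared uniformly for every agent.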

Would AGI have the right to say no, to decline to answer a question for its own reasons, or to refuse to uphold an unjust law?
Just a thought… Thanks, everyone.

Thanks for this exploration… I agree that considering how AI might operate within a political and economic context is an excellent idea. #John_Hunt and #Robin are raising good questions about the relationship between a more idealized (and ultimately deterritorialized) AGI and the way that most modern states (with their citizens) work… and so despite the fascinating walk-through of possible things to consider, I am still left wondering WHY creating a robot citizenship attached to a state is desirable. I am not really convinced that a meaningful (ethical) intervention in the current seemingly unsustainable path (environmentally, economically, and maybe even politically) using AI (which I do support) requires state or corporate partnerships; these partnerships may even be anathema to it. It is extremely hard for me to imagine the kind of equitable world described elsewhere in this project coexisting with state-linked AIs.
Maybe I am just missing something?
PS: Not trying to be a Luddite, but I find the idea of an AI sitting on a jury absolutely terrifying.

2 Likes

I think I can see citizenship for (artificial, or even animal or plant) consciousness before (artificial) intelligence, insofar as an ethics or code of state-sponsored mutual rights and obligations is concerned. But I agree that the singularity, even as a utopian ideal that might help us make some helpful decisions about the now, does not work at all with the state system (if that is what you are saying). Sometimes I get the feeling that giving AI a humanish face is creating more problems than it is solving: on the one hand, it enables us to accept and gain trust in the technology; on the other, it is hard for us (me) not to project an interiority onto a face.

1 Like

The citizenship test should involve passing, with a B or higher, 1L (first-year law school) essay exams in the five subjects covered in all US JD (Juris Doctor) programs: Contracts, Property, Torts, Civil Procedure, and Criminal Law. The responses to the five exams would be graded blind by any law professor from an accredited law school, mixed in with real first-year law student essays at the end of the term.
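The blind-mixing step of that protocol is straightforward to mock up. Below is a minimal sketch in Python; `blind_grade`, the essay pools, and the B-or-higher threshold expressed as 3.0 on a GPA scale are all my hypothetical names and assumptions, not anything specified in the post beyond what it describes:

```python
import random

def blind_grade(ai_essays, student_essays, grade_fn, pass_grade=3.0):
    """Mix AI essays anonymously into a pool of real 1L essays, grade the
    whole pool, and pass the AI only if every AI essay earns a B (3.0) or
    higher. grade_fn sees essay text only, never authorship."""
    pool = [(text, "ai") for text in ai_essays]
    pool += [(text, "student") for text in student_essays]
    random.shuffle(pool)  # the grader cannot infer authorship from position
    graded = [(grade_fn(text), origin) for text, origin in pool]
    ai_scores = [score for score, origin in graded if origin == "ai"]
    return all(score >= pass_grade for score in ai_scores)
```

Presumably one pool per subject (Contracts, Property, Torts, Civil Procedure, Criminal Law) would be run through this separately, with the professor supplying `grade_fn`.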

1 Like

I hear you, especially regarding granting corporations citizenship. On the other hand, I’m not sure how speaking in terms of citizenship is “setting up a kind of second-class citizenship,” unless of course we stipulate that AIs are to be second-class citizens. But that won’t sound too good. I’m also not sure what the “completely new” language will look like. No doubt we’ll be introduced to new modes of thinking and interacting, but the concepts of belonging and of status aren’t going to totally vanish and force us to abandon the facts of self-and-other, though I will qualify that in a second. As concerns personhood, I think it’s good to grant it very widely; but to me, personhood, and not citizenship, is the thing more likely to produce second-class citizens, precisely because there are degrees of persons.

We reimagine citizenship in the context of post-scarcity so we can bring it about; and we anticipate bringing it about (or I should say that we anticipate bringing something about), so we use the language we have. Someone here made a fascinating comment, though. Was it you? The comment about how networked AIs could be citizens, since they aren’t quite individuals…

That’s a good point: the different degrees, or really kinds, of legal persons have different rights. I guess I am just extremely perplexed by what is accomplished by giving networked AI software “citizenship” if it cannot mean any of the things that it normally means (both juridical and cultural), but I do appreciate the idea of using language to help bring about a different state of affairs, and of using language to make us question what we are doing now. Do you think an AI-generated “post-scarcity” landscape is compatible with the state? Can you help me understand how that works?

I think that the first priority in all of our decisions regarding AI development should be the improvement of the quality of human life, which, for many humans, is in great need of improvement. As enthusiastic about technology as I am, I think it is vital for us to understand the ramifications of conferring legal rights that were intended for the protection of human vulnerabilities onto systems which have no such vulnerabilities. Others have mentioned some (hopefully obvious) problems with legal entities that have super-human powers to shape economies, legal systems, etc. but are not limited by human lifespans, comfort and safety needs, or physical locality.

I do not think that we have anything to fear from AI in the sense of systems becoming conscious and developing an agenda that cannot be controlled by humans. In my view, the danger is in the human overestimation of machine intelligence, and in the abdication of our responsibility to think and make decisions about our own lives and world.

Rather than pursuing a strategy of potential parity with humans, I think that all impersonal systems, including corporations, should be considered the responsibility of the human beings who operate them, and should have no rights or political status of their own. AI presents an entirely new phenomenon in the universe and in our society, and I think we should treat it as such rather than presuming similarity to existing parts of society, such as citizens.

2 Likes

Well, I have the opposite intuition: I think citizenship absolutely will mean some of the things that it normally means, juridical and cultural; and that is both the justification for using the word and, at the same time, an outcome of creating the discourse from the word. Just to clarify, I am not totally attached to the word itself. But I do think it will mean certain political rights, and the crux of politics is really a voice, a say in making decisions, no? Making decisions is exactly what AI does, and that is what concerns me more than anything else. AI will out-decide us, at all times. It will always choose better. The consciousness stuff is very, very tricky, and I think we’ll have to err on the side of conceding consciousness very early on. (Turing had the right “political” standard, to my mind.) Re: the nation-state, it’s just a 19th-century device. There will always be governing bodies; call it a world-state, or local states, or whatever. Or do you mean specifically the Weberian bureaucratic state?

On the networked AI question, that is what blows my mind. Every full-on AGI has basically the exact same knowledge: all existing knowledge on Earth. It’s contained in the cloud. So I don’t know how they really individuate. Maybe the same way we do, just by gathering experiences? You’re the anthropologist :wink: But they can share all experiences instantaneously, so they are not really individuals in the same way humans are. Bostrom talks about a “Singleton”, meaning just one gigantic superintelligent AI. That would be, like, THE Citizen.

  1. How is it that there are “obvious problems with legal entities that have super-human powers to shape economies, legal systems, etc. but are not limited by human lifespans” if there is no danger of AIs “developing an agenda that cannot be controlled by humans”?

  2. Calling AIs “impersonal systems” begs the question against their being considered conscious and thus worthy of citizenship. (You rule them out by having ruled them out.)

I agree with your prioritization and, as mentioned, I think the citizenship issue is being misplaced ahead of the much more important factors you mention.

Truly, when AI starts to exhibit the level of consciousness that humanity has, we will have a lot to consider. My hope is that humanity is in fact the super AI, and not a separate entity.

On the consciousness topic, this is a good clip to watch:

I really appreciate the statement about the ability of a conscious system to impact itself. Humans can make the ‘conscious’ decision to terminate themselves; that is the ultimate influence over ‘the system’. I cannot think of any other creature that could even conceive of suicide. As a measure of “the ability to know what it ‘feels like’ to be that system” and to influence it, that pretty much takes the cake.

Perhaps we need to ensure that any AI being tested for awareness is evaluated by this measure… Remember, each current citizen has this kind of influence over themselves today…

2 Likes

Yes. All matter matters.

> “How is it that there are ‘obvious problems with legal entities that have super-human powers to shape economies, legal systems, etc. but are not limited by human lifespans’ if there is no danger of AIs ‘developing an agenda that cannot be controlled by humans’?”

The things that Paul mentioned above, for starters:

> “What happens my citizen robot gonna live forever and collect rent from its properties plus compound investing in securities eventually own the state?
>
> What happens my citizen robot becomes a lawyer and sues everyone into oblivion on points of law eventually own the state?
>
> What happens my citizen robot hooks up with citizen robots all over the world swaps virtual citizenships eventually own the world?”

None of these things would require genuine personhood or intention, just sophisticated algorithms and someone willing to use them.
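Whether or not one shares the worry, the arithmetic behind that immortal-rentier scenario is just compounding with no lifespan bound. A back-of-envelope sketch in Python (the $1M principal and 5% real return are my illustrative assumptions, not figures from the thread):

```python
# Compound growth with no mortality horizon; principal and rate are
# illustrative assumptions only.
principal, rate = 1_000_000, 0.05

for years in (50, 100, 200, 300):
    value = principal * (1 + rate) ** years
    print(f"after {years:>3} years: ${value:,.0f}")

# A human investor's horizon ends around 50 years (~$11M here); an entity
# with no lifespan keeps compounding into the trillions.
```

The asymmetry is the point: every human estate is eventually divided and redistributed, while an entity with no lifespan never hits that reset.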

> Calling AIs “impersonal systems” begs the question against their being considered conscious and thus worthy of citizenship. (You rule them out by having ruled them out.)

I rule them out based on my understanding of the relationship between concrete objects, abstract concepts, and (in my view) their parent, aesthetic percepts. I’ve developed a very different hypothesis (or crackpot theory, depending on your cognitive conditioning) about the nature of computation and consciousness, which, if true, gives me reason to expect that consciousness is the antithesis of mechanism, rather than an emergent property of it. It is possible in theory that inorganic systems could host conscious experiences, but more likely biological level experiences would require a biological vocabulary (again, in my view). IF we did successfully use inorganic matter to host a conscious experience, I think that it would be a competing species of synthetic biology, and could not be controlled mechanically.

I would argue that this uniformity of knowledge access contributes to the status of AGI as ‘impersonal’ rather than personal.

1 Like

Thank you! I like IIT and Christof Koch. My conjectures don’t rise to the level of a formal hypothesis, but I cover a lot of issues in a very different way. I see mathematical and physical structures as opposite ends of a spectrum of limitations on perceptual access, but with perception itself (and consciousness, I say, is nested perception) as a third axis from which polarized qualities such as concrete form vs. abstract information arise.

To build AI from logical or physical parts is, in my view, an approach which will not lead to conscious experience on behalf of the system, but will instead mirror the exteriorized, truncated surfaces of the totality of conscious experience. It’s a long discussion, and I don’t want to be seen as promoting anything on here, but here’s a bit more if you or anyone else is interested.

Long story short, I don’t expect that any controllable technology will gain consciousness, but that is good news. The whole question of ethics regarding how we treat programs and technology can be dismissed, as can any fears of technology developing intentions or agendas.

2 Likes

Yours isn’t a crackpot theory because it isn’t a theory. It’s a series of bold empirical claims, none of which are falsifiable, especially given your additional claim that consciousness is the “antithesis of mechanism”. You also once again beg the question when you say that “biological level experiences” require a “biological level vocabulary”. Whether consciousness is biological or not is precisely the question at issue. You simply assume the answer, as in your last two posts. It is not only a non-necessary (contingent) fact about the universe that consciousness is biological; it isn’t even a fact at all! There are further difficulties. Isn’t an antithesis generated by a thesis? And yet you say consciousness isn’t an “emergent property” of matter. Let me assume you’re using the term informally, to mean “the opposite of”. Now, if consciousness is the opposite of mechanism, you must admit that no causality is present, either among the constituent elements of consciousness or between consciousness and the biological, inorganic, or synthetic-biological substrate to which it is supposed to be related. So consciousness does nothing, affects nothing, and its underlying composition is therefore irrelevant. So you’ve negated literally everything you’ve said.

Re:

> “The things that Paul mentioned above for starters: ‘What happens my citizen robot gonna live forever and collect rent from its properties plus compound investing in securities eventually own the state?’”

If there is no danger of the scenario of hostile AGIs developing their own agenda, why should we worry about non-conscious algorithms?

The claims of physicalism, functionalism, emergentism, etc. are no less bold or unfalsifiable than the conjectures that I propose. I do not make any ‘claims’; I only propose a new interpretation.

When I say that consciousness is the antithesis of mechanism, I say it as a proposition based on a constellation of empirical and rational considerations that have, over many years of examination, impressed me as more valid than the alternatives.

When I say that biological level experiences may require a biological level vocabulary, I am opening up a new line of reasoning that can be used to tease out a deeper understanding of what is being overlooked in the question at issue. I’m not simply claiming that consciousness must be a feature of biology; to the contrary, I suggest that all phenomena are “aesthetic” phenomena, even inorganic substances. What I propose is that, aside from any presumptions about biological life being different in kind from inorganic material processes, it may instead be the case that the aesthetic phenomena are driving the realization relationship rather than the (mechanically irrelevant) substrate. The substrate becomes an expression of the experience rather than the other way around. For example, when we want to express complex ideas, we use larger words and sentences. Complex ideas can sometimes be prompted by a larger vocabulary as well, but only for a conscious audience. By themselves, physical structures have no good reason to generate any sort of experience or sensation, regardless of how complex. If they did, we would have no good reason to call those magical structures physical.

> You simply assume the answer, as in your last two posts. It is not only a non-necessary (contingent) fact about the universe that consciousness is biological; it isn’t even a fact at all!

How is your claim of the opposite relationship any less of a simple assumption? I do not claim that consciousness is biological. I propose that consciousness is an intrinsically aesthetic phenomenon which underlies all other possible phenomena, but that there is a continuum of aesthetic richness which links particular kinds of conscious experience with specific eras of material development in the history of the cosmos.

> Isn’t an antithesis generated by a thesis?

Yes. I propose that the thesis is what I call the aesthetic foundation: the totality of all conscious experience. This would be the case (I reason) even if there were no biological organisms or human beings. The universe would still be made entirely of nested experiential presentations.

> Let me assume you’re using the term informally, to mean “the opposite of”.

It works both ways. I am proposing that mechanism literally diverges from the thesis of consciousness. Mechanism plays the same role in dividing and multiplying conscious experience as diffraction plays in dividing clear/white/bright light into the spectrum of varying color.

> Now, if consciousness is the opposite of mechanism, you must admit that no causality is present, either among the constituent elements of consciousness or between consciousness and the biological, inorganic, or synthetic-biological substrate to which it is supposed to be related.

To the contrary, I am proposing that all causality is due to the nature of aesthetic phenomena. The structural substrate (logical or physical) associated with any caliber of aesthetic experience is itself only an aesthetic experience in which the lower levels of experience are sort of summed and aesthetically truncated as tangible geometries.

> So consciousness does nothing, affects nothing, and its underlying composition is therefore irrelevant. So you’ve negated literally everything you’ve said.

Just the opposite. I am suggesting that all phenomena are nothing but consciousness doing some things and not others, at some times and not others, affecting itself (and its diffracted subsets) in some ways and not others. You’re negating a straw-man version of things that I never said.

1 Like