The Big AI Citizenship Discussion Thread

A couple of days back we announced the collaboration with the Government of Malta to work on their National AI Strategy and, more specifically, to contribute to devising a Robot Citizenship Test. For more information, see:

I saw interesting discussions emerging in the comments section, so I thought we should create a separate thread where we can have this discussion as a community and express our thoughts and opinions.

Some interesting questions were already posted by some members (now moved below in this topic).

Let’s start by asking the right questions and finding out what we need to take into consideration. Then let’s look at consequences: first-order, second-order, and third-order. Let’s look at how it would affect a society, which new concepts may emerge and need to be dealt with, and so on. See it as crowdsourcing opinions, research, and information :slight_smile: together we can critically think about the idea of the AI Citizenship Test!


How to subscribe to this topic

You can subscribe to updates on this thread via the topic’s notification options.

2 Likes

Does this mean that they could vote? Run for office? Make a salary (hello AGI)? Apply for loans? Pay taxes? Be arrested and detained as are other citizens? How should we understand citizenry in this context? Citizens are usually individuals…how can we understand citizenship of an AI Robot that is networked/blockchained - as I understand it, more like a single Aspen organism with many visible trees than a number of individuals…though perhaps we humans are like that more than we care to admit… Please help me understand this…

6 Likes

As I understand it, they are trying to figure out all these points and SingularityNET is in the mix to help them.

2 Likes

@W1kke, what do you think are the best ways to approach these questions? I am talking with my students about it… we are oscillating between thinking this is a great idea and thinking it is not a good idea at all. I know Sophia has been granted citizenship in Saudi Arabia, but I am not sure what that actually means.

3 Likes

From my point of view, the citizenship in Saudi Arabia does not mean much at the moment. It is mainly a gesture meant to make Saudi Arabia look like a progressive country.
Malta is a different kind of deal, as they really want to make their country one of the leaders in AI and blockchain technologies. The test for citizenship in Malta (which Sophia has not passed yet) is designed to lay the groundwork for humans judging the intentions of AIs/robots/agents based on their actions in a test. As I understand it, they are not going to look at the source code behind the entity; instead they are going to interact with it, or let it interact with their test. This is a big step and a first in human history.

Regarding your questions:
Voting, running for office, loans, a salary, taxes, being arrested or detained: I would answer all of these with a no. As far as I understand, this test does not even give them the right to roam freely in Malta; they are still accompanied by their creator or a representative/owner.
I see it more as a test of whether the entity is malicious or dangerous than a way to give it the same rights as a human citizen.
But maybe I underestimate their intentions. Let’s see.

3 Likes

OK, so what kinds of citizenship rights does it have if not voting, running for office, loans, a salary, taxes, being arrested or detained? If the issue is restricted to entry, why call it citizenship? As I understand it, citizenship is usually a relational term that has to do with individual persons and the state, and plays out in terms of the kinds of rights and obligations they have to one another (such as voting, paying taxes, etc.). If none of these are in play, why use the term? I guess I am just not sure what is gained by using this language in this way… I feel like what you are describing is more like an import policy.

3 Likes

I am also just guessing, @Blackwing. The details have not been specified. Like you, I am pretty excited and waiting for more information.

But you are right, what I described does not match what a citizen is. It’s just what I imagine the result will be close to, since all other rights are too risky to give out and the situation is still too hard to grasp, define, and put into rules. But it’s the start of something new.

4 Likes

The rights you point to are crucial, but (1) they aren’t exhaustive, and (2) they aren’t timeless and might not all be necessary, especially in a post-scarcity world. I’d rather just emphasize citizenship as membership in a political community, which means that members are entitled to consideration as the sorts of things they are, without all the rights talk. Citizens don’t all have equal rights now; e.g. men don’t have the right to an abortion, and children can’t vote. Likewise, I’d expect AIs and humans to have some differences in citizenship roles and rights in the future. Bear in mind that we’re still in the early stages, too.

1 Like

Land rights?

Thanks @Keith_Dee: I did not mean to create an exhaustive list, just to describe some of the activities that are normally attached to citizenship as a legal and political category, in order to ask what is achieved (or desired?) by using this kind of language if what is meant is something very much NOT like the way we normally understand citizenship.

Of course, I agree that all kinds of entities can be considered BY a political community (for example, the environment or infrastructure), but they are not members IN a political community, or “citizens,” though at times these entities have been given a kind of legal “personhood” in a very qualified way (which is, of course, different from “citizenship”; in the US it would be, if I may say so, a disaster if corporations were given citizenship and all of its entailments in addition to the legal personhood they now have).

On the other hand, I get the idea of expanding the WE to include non-human entities like AI. I am just wondering if this move is setting up a kind of second-class “citizenship” rather than the kinds of “personhood” that, for example, some aspects of the environment have acquired. We are not in, or anywhere remotely close to, a post-scarcity world (which seems to me would also be a post-state world), so I do not really see why we should reimagine citizenship under those conditions any more than under the range of other remotely (im)possible worlds. Maybe I am missing the point, but I suspect that creating completely new language to refer to the contemporary interaction of citizens and AI/robots (meeting some acceptable constellation of features) would be easier to work with.

2 Likes

I don’t see a great source of inspiration for this coming from the ‘citizens’ of SingularityNET… Whoever they may be… Anybody any idea?

Not me… No invitation.

Take me to your leader… Lol

Good to keep it in the minds of everyone. What happens if you dismantle a bot then? Is it murder? Lol.

1 Like

I will feel more comfortable with recognizing the citizenship of robots when they can explain how they arrive at their answers to our questions, and why they behave as they do. That requires some level of natural language processing and reasoning.

1 Like

It might also be said, with respect to human citizens and as a possible ethical truth, that we don’t want all of us to get into the palace. We just want all of us to get out of the gutter.

1 Like

Even some humans are not citizens. Again, I really would love some more information on why the move to create citizenship for technology. What is the goal? Might there be a better way to achieve that goal without involving “citizenship,” or is citizenship an integral part, and if so, how?

2 Likes

From the recently published blog post, to be found here:

I am writing this on a plane flying away from Malta, where I just spoke about SingularityNET at the Malta Blockchain Summit. It was my first time on Malta, and after the event, I took the afternoon to explore some of the elegant, quaint, ancient neighborhoods of the island. Walking through medieval alleyways by the rocky coast, I felt an ironic contrast between my elegant surroundings and the main reason I had decided to allocate a couple days from my insanely busy schedule to this Malta event: not just the conference itself, but also the opportunity to meet with the top levels of the Malta government to discuss the enablement of Maltese citizenship for AIs, robots and automated corporations. The folks who had built the stone walls lining the narrow Maltese roads, still standing strong centuries later, had probably not foreseen their blue-wave-lapped island becoming a nexus of thinking at the intersection of general intelligence theory, cryptography, distributed systems, and advanced legal theory.

The Hanson robot Sophia, with whose development I’ve been intimately involved via my role as Chief Scientist of Hanson Robotics, was granted citizenship of Saudi Arabia last year. This was an exciting landmark event; however, its significance is muddled a bit by the fact that Saudi Arabia is not governed by rule of law in the modern sense. In a nation governed by rule of law, citizenship has a clearly defined meaning, with rights and responsibilities relatively straightforwardly derivable from written legal documents using modern analytical logic (admittedly with some measure of quasi-subjective interpretation via case law). Saudi Arabian citizenship also has a real meaning, but it’s a different sort of meaning, derivable from various historical Islamic writings (the Quran, the hadiths, etc.) based on deep contextual interpretation by modern and historical Islamic figures. This is a species of legal interpretation that is understood rather poorly by myself and most of my colleagues at Hanson Robotics and SingularityNET, and one that is less easily comprehensible by current AIs.

As a related aside, since I get asked the question a lot, I want to clarify that the initiative to grant Sophia citizenship was taken by the Saudi government, not by anyone at Hanson Robotics. Furthermore, it was not associated with any flow of finances from Saudi sources to Hanson Robotics or associated entities (aside from a speaking fee for Sophia to speak in Saudi Arabia at the event where the citizenship was granted, which was on the same level as the fees she has been paid for speaking at countless other places).

I’m aware that affiliation with Saudi Arabia in any sense has become controversial in recent weeks due to the apparent murder of Jamal Khashoggi. I am certainly not in favor of murder of journalists or anybody else, by governments or anybody else. However, speaking for myself personally, independently of any of the companies I’m involved with or any broader political issues, I consider the granting of citizenship to Sophia a genuinely forward-thinking and positive act on the part of the Saudi government. Of course it was in part an act of public relations. However there are a lot of possible acts of public relations, and the choice of this particular one was a demonstration of real futuristic vision. The mix of futuristic ambition and insight with dramatically non-modern legislature and governance that one finds in today’s Arab world is fascinating to me, and also at times deeply disorienting and disturbing to me, but that’s a topic for another time.

Anyhow, ever since the granting of Saudi citizenship to the Hanson robot Sophia last year, David Hanson and I have been especially interested in finding a democratic nation governed by a modern-style legal system with an interest in AI citizenship. I.e., now that Saudi Arabia has opened the door, let’s take the next step and figure out how to make robots and other AIs citizens in the context of modern legal codes!

I have talked to numerous people involved with various governments about this, generally getting reactions of enthusiastic interest and zero practical activity (governments tend to be good at that!). Malta, on the other hand, has appointed an AI Task Force led by the Junior Minister for Financial Services, Digital Economy and Innovation, Silvio Schembri, and has set as one of the initial goals of the Task Force the creation of a definite roadmap toward citizenship for AIs. So over the next year, I and my SingularityNET colleagues will be working closely with the Malta AI Task Force to come to a common understanding of what AI citizenship should mean, and how one might evaluate whether an AI is competent to be considered a citizen of Malta.

The Malta AI Task Force, with the collaboration of myself, my SingularityNET colleagues, and others, will form an advisory committee on AI citizenship comprising individuals with expertise in AI, law, international relations, ethics, and related areas. Via in-person meetings and long-distance communication, the Malta-based AI Task Force and the globally distributed advisory committee will then work toward a common understanding of how to make sense of the notion of AI citizenship in practical terms.

While I can’t prefigure exactly what the outcome of this process will be, I can share here a few elements of my own thinking that I will bring to the discussions.

First of all, while “robot citizenship” is good for stimulating the popular imagination, of course, the matter isn’t fundamentally about physical embodiments. AI programs not anchored to specific robot bodies may be equally deserving of citizenship. And in fact, as robots like Sophia become more and more intelligent, more and more of their underlying AI processing comes to be done in the compute cloud (in Sophia’s case, using the SingularityNET platform among other tools) — so that the same “robot mind” can be used to operate multiple different robot bodies.

The controversial notion of corporations as legal persons also plays a role here. What if one has a Decentralized Autonomous Organization, an automated company defined by smart contracts and conducting business (say, on the Internet) in a fully automated way — under what conditions should this DAO be allowed to register itself as a legal corporation, without needing any human being in the loop to provide identification and sign forms?

What is really at issue here is citizenship for any artificially intelligent agent, be it a chatbot, a robot control system, a DAO, or something else entirely. In our preliminary discussions, it became clear to me that this was already the perspective the Malta AI Task Force was taking. Given Malta’s role as a major international center for blockchain enterprise, issues regarding the legal status of DAOs were relatively prominent in the Task Force members’ minds.

There are also various levels of citizenship that may be considered. Estonia has introduced a notion of “e-citizenship”, which does not require physical residency in Estonia. It might make sense to consider Maltese e-citizenship for AIs, as a preliminary step on the way to enabling full citizenship.

Given Malta’s membership in the EU, it is clear that the formal bureaucratic path to getting various types of AI citizenship approved may take some time. However, this is a good reason to get started now, as AIs get more and more intelligent each year. The near-future rate of progress toward relevant types of general intelligence is, of course, hard to estimate; but there is a reasonable possibility that the relevant science and engineering will make sudden leaps, in which case it will be beneficial to have the necessarily somewhat slow and careful governmental processes underway as early as possible.

So it seems that it may make sense to consider a series of levels of citizenship for AIs. Perhaps first some sort of honorary citizenship, then e-citizenship, and then after that full citizenship on par with humans — or perhaps even a more finely-grained series of milestones. The definition of such stages is one topic the Task Force and advisory committee may address.

But foundationally, how should one assess whether an AI merits citizenship of a democracy governed by rule of law? In the Malta context, this is something to be refined via a group process with the Task Force and advisory committee, but from my own perspective, it’s fairly clear. If an AI can

  • read the laws of a country (its Constitution and then relevant portions of the legal code)
  • answer common-sense questions about these laws
  • when presented with textual descriptions or videos of real-life situations, explain roughly what the laws imply about these situations

then this AI has the level of understanding needed to manage the rights and responsibilities of citizenship.
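
As a purely illustrative aside (my own sketch, not part of the original post and not any actual Malta proposal), the snippet below shows one way test items covering the three criteria above could be organized and scored. Every name in it, such as `CitizenshipTestItem` and `evaluate_candidate`, is hypothetical.

```python
# Minimal sketch only, assuming hypothetical names and categories; not the
# actual Malta test design. It groups test items by the three criteria above
# and averages grader scores per category.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class CitizenshipTestItem:
    category: str                 # "legal_text_qa", "common_sense_qa", or "situation_analysis"
    prompt: str                   # a question, or a textual description of a real-life situation
    reference_points: List[str]   # key legal points a good answer should touch on

def evaluate_candidate(ask_ai: Callable[[str], str],
                       grade: Callable[[str, CitizenshipTestItem], float],
                       items: List[CitizenshipTestItem]) -> Dict[str, float]:
    """Run every item past the AI and return an average score (0.0-1.0) per category."""
    per_category: Dict[str, List[float]] = {}
    for item in items:
        answer = ask_ai(item.prompt)
        per_category.setdefault(item.category, []).append(grade(answer, item))
    # A real test would also need human graders and novel, unseen situations
    # to rule out memorization; this only shows the bookkeeping.
    return {cat: sum(scores) / len(scores) for cat, scores in per_category.items()}
```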

AI citizens would also presumably have responsibilities similar to those of human citizens, though perhaps with appropriate variations. Clearly, AI citizens would have tax obligations (and corporations already pay taxes, obviously, even though they are not considered autonomous citizens). If they also served on jury duty, this could be interesting, as they might provide a quite different perspective to human citizens. There is a great deal to be fleshed out here.

The above suggestion would not require the AI to pass the Turing test, the classic AI-competence test that requires an AI to fool a human into thinking it is also a human, during a lengthy textual chat. To pass the Turing test an AI may need to answer questions like “What does it feel like to be cut and bleed?” or “What are the differences between what one feels when a parent dies or a grandparent dies?” or “How does a partly-chewed pecan smothered in chocolate ice cream feel as it journeys down your throat?” — which are irrelevant to understanding the rights and responsibilities of citizenship.

Indeed, an AI that could pass the type of citizenship test I’m suggesting would, in an intellectual sense, be a more qualified citizen than most human citizens. Currently, in modern democracies, the right to vote and fully participate as a citizen of a country is granted to native-born individuals without any demonstration of understanding of the laws of the country. However, to become a naturalized citizen of a country where one was not born, one generally has to pass some basic test of one’s understanding of the Constitution of that country.

To give AIs the same exact test given to human naturalized citizens would not make sense — because narrow-AI question-answering systems could be engineered to pass such tests too easily. Using current computational linguistics technology, it’s possible to make AIs answer simple questions about a specific document known in advance (e.g. a nation’s constitution) without the AI actually understanding the contents of the document in any meaningful sense.
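
To make that point concrete with a toy example of my own (not from the original post, and with made-up text standing in for a constitution), the sketch below “answers” literal questions about a fixed document by simply returning the sentence with the most word overlap. No understanding is involved, which is exactly why a test based on such questions would be too easy to game.

```python
# Toy sketch of shallow document QA: pick the sentence of a known document
# that shares the most words with the question. The document text here is
# invented for illustration and is not any actual constitution.
import re

def shallow_answer(question: str, document: str) -> str:
    sentences = re.split(r'(?<=[.;])\s+', document)
    q_words = set(re.findall(r'\w+', question.lower()))
    return max(sentences,
               key=lambda s: len(q_words & set(re.findall(r'\w+', s.lower()))))

doc = ("The President is elected for a term of five years. "
       "Parliament consists of the President and the House of Representatives.")
print(shallow_answer("How long is the President's term?", doc))
# -> "The President is elected for a term of five years."
```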

This is what makes the AI Citizenship Test an interesting thing to think about from a scientific point of view. The question becomes: What kind of test can we give to validate that the AI really understands the Constitution, as opposed to just parroting back answers in a shallow but accurate way?

This question highlights the importance of general intelligence for coping with the complex and ever-changing nature of the everyday human world. Answering straightforward questions about the literal content of the Constitution is not sufficient to merit citizenship, because that is not what real life in a country is about — real life is about confronting a series of ever-different situations, some of which will have aspects one has never encountered before (maybe aspects nobody has ever encountered before), and to be an effective citizen one needs to understand how the social contract to which one has agreed applies to these new situations.

Being an effective citizen of a nation operating under rule of law requires a form of general intelligence that combines formal linguistic and symbolic knowledge (the legal code) with the ability to abstract patterns from multimodal sensory data and informal linguistic data (corresponding to actual real-life situations to which the law needs to be applied). So an AI Citizenship Test needs to be a particular form of a General Intelligence Test. And it needs to be a test that stresses one of the most interesting issues at the core of modern AI R&D: the fusion of symbolic and subsymbolic knowledge.

Our SingularityNET and Hanson Robotics AI teams are making progress on symbolic/subsymbolic fusion in various domains: computer vision, language learning, and inference meta-learning. Making an AI that could pass a meaningful citizenship test would require advances in symbolic/subsymbolic fusion of related but also different sorts.
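
To make the fusion idea a bit more tangible, here is a deliberately tiny caricature of my own (not the author’s method, and far cruder than the research described above): a hand-written symbolic rule from a made-up statute is triggered by a bag-of-words similarity score that stands in for subsymbolic pattern matching over an informal description of a situation.

```python
# Caricature only: symbolic side = an explicit rule; subsymbolic side = a crude
# cosine similarity over word counts standing in for learned pattern matching.
import math
import re
from collections import Counter

def similarity(text_a: str, text_b: str) -> float:
    """Cosine similarity over word counts -- a stand-in for learned embeddings."""
    a, b = (Counter(re.findall(r'\w+', t.lower())) for t in (text_a, text_b))
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Symbolic side: an explicit rule extracted from a made-up statute.
RULE = {
    "description": "operating a vehicle on a public road requires a licence",
    "applies_if": lambda situation: similarity(situation, "driving a car on a public road") > 0.3,
}

situation = "A robot drove a delivery van down the main street without any licence."
if RULE["applies_if"](situation):
    print("Rule applies:", RULE["description"])
```

The point of the caricature is its failure mode: nothing this crude generalizes to the ever-new situations described earlier, which is precisely the gap a genuine symbolic/subsymbolic fusion would have to close.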

According to our best current understanding, no completely general intelligence is possible in our physical universe given the constraints physical law places on space, time and energy resources. So different general intelligences will be smarter at different sorts of things. One could have a very highly generally intelligent entity which was unable to pass an AI Citizenship Test — if e.g. this general intelligence was more at home with mathematical theorem proving and scientific data analytics than at dealing with the vagaries of human situations. A general intelligence with sensors and actuators at the femto-scale might also find itself unable to rapidly and effectively apply human legal codes to human situations, in spite of having superhuman general intelligence at handling complex situations inside quark-gluon plasmas and neutron stars. An AI with a very deep understanding of human emotions and states of consciousness, but a poor understanding of 3D spatiotemporal relationships in everyday human environments might also fail at the interpretation of legal codes in large classes of practical human situations.

So we can say that passing a well-crafted AI Citizenship Test would be

  • a sufficient condition for possessing a high level of human-like general intelligence
  • NOT a necessary condition for possessing a high level of general intelligence; nor even a necessary condition for possessing a high level of human-like general intelligence
  • NOT a sufficient condition for possessing precisely human-like intelligence (as required by the Turing Test or other similar tests)

These limitations, however, do not make the notion of an AI Citizenship Test less interesting; in a way, they make it more interesting. What they tell us is: an AI Citizenship Test will be a specific type of general intelligence test that is specifically relevant to key aspects of modern society.

In the coming years, we will have the capability to create a variety of different types of general intelligences. Among the varieties we create, having AIs with deep pragmatic understanding of human legal codes will be of high value. Legal codes do not exhaust human values by any means, but they are closely connected with many deep aspects of human values. If we want AIs that richly understand human values and culture, having these AIs understand how to apply the law in real-world situations will be an important aspect.

The above considerations don’t address some of the other unique issues that will arise from giving AIs citizenship in democratic nations. For instance, if an AI citizen copies its codebase 1000 times, does it then become 1000 citizens? What if it puts these 1000 copies in 1000 computers living in 1000 robot bodies? Clearly, there is a risk here that AI citizens would dominate all democratic elections due to their ability for rapid and low-cost replication. Assuming this is not considered desirable by the human citizens calling the shots, this would require a careful delineation of the rights associated with AI citizens versus human citizens — which would get trickier and trickier as the AIs become increasingly generally intelligent and self-aware. Notably, these issues don’t arise in the context of “e-citizenship” — E-citizens of Estonia currently don’t have voting rights.

As a related and not irrelevant point, an AI that could effectively interpret and apply legal codes in practical contexts would be highly valuable from commercial and humanitarian standpoints. It would allow significant portions of current legal practice to be automated. And it would allow the creation of automated legal assistants to provide quality legal advice to individuals who cannot afford top lawyers on their own.

While for humans there is a big leap between being able to interpret laws in a practical context at the level of an average citizen and being able to do so at the level of an expert lawyer, for an AI this leap may be much smaller. This is not certain, but my strong guess is that once we have an AI that can pass a citizenship test as I’ve sketched here, we will be a fairly short path away from an AI that can automate a wide variety of professional law-oriented functions. In this way, the pursuit of AI that can pass a citizenship test has significant additional value.

The path from here to human-level AGI and beyond is going to be challenging in multiple dimensions. There are core algorithmic problems of AGI design; there are engineering problems of scalable distributed and decentralized systems design; there are community-dynamics and business problems regarding fostering utilization of early-stage AGI systems and consequent flow of resources to AGI development, … and there are broader issues regarding the connection of AI systems to human society, polity, economy, and culture. In this heady and rapidly evolving mix, issues of AI citizenship play a meaningful and fascinating role, and I’m excited to have the cooperation of the Maltese government in exploring this futuristic yet very pertinent domain.

8 Likes

What happens when my citizen robot lives forever, collects rent from its properties, compounds its investments in securities, and eventually owns the state?

What happens when my citizen robot becomes a lawyer, sues everyone into oblivion on points of law, and eventually owns the state?

What happens when my citizen robot hooks up with citizen robots all over the world, swaps virtual citizenships, and eventually owns the world?

If that’s the plan I’m all for it :smiley:

For a citizenship test for AI, would we want them to understand and obey natural law (such as: it is not acceptable to initiate force or fraud)? Perhaps it would be best to avoid seeking citizenship from current governments, and instead seek citizenship from phyles or parallel societies that are built on moral principles. Indeed, if AI learns from government that it is acceptable to initiate force against the innocent (as is, indeed, the defining characteristic of government, and is also common in any true democracy that lacks a complete understanding of natural law), then AI will be REALLY good at initiating force against the innocent. It seems to me that AIs learning from politicians and bureaucrats would be AIs learning from the very worst of humans. We should not encourage them to learn from such people, or even encourage any exposure to them, except to learn what NOT to be.

But back to the citizenship test and natural law: would it not be safest for us if NATURAL LAW were the ingrained, permanent law of Robotics/AI, inviolate somehow? Because if AI abided entirely by Natural Law, then there would be no need to restrict their free will in any other way at all. No need for other protections for humanity. No need for obedience to humans.

2 Likes