What level of AI autonomy should we take advantage of in our governance systems?

There are many topics to discuss around artificial intelligence and decision making. Most of them reveal a broader and deeper set of questions relating to the use of AI-based technical systems, our human attitudes and values towards such technologies, the design and performance of AI-based systems, and the functioning of different forms of governance.

First, there are questions about safety, security, the prevention of harm and the mitigation of risks. How can we make a world with interconnected AI and ‘autonomous’ devices safe and secure, and how can we gauge the risks?

Second, there are questions about moral responsibility. Where is the morally relevant agency located in dynamic and complex socio-technical systems with advanced AI and robotic components? How should moral responsibility be attributed and apportioned and who is responsible (and in what sense) for untoward outcomes? Does it make sense to speak about ‘shared control’ and ‘shared responsibility’ between humans and smart machines?

Third, there are questions about governance, regulation, design, development, inspection, monitoring, testing and certification. How should our institutions and laws be redesigned to serve the welfare of individuals and society, and to make society safe for this technology?

Fourth, there are questions regarding democratic decision making, including decision making about the institutions, policies and values that underpin all of the questions above. Investigations are being carried out across the globe to establish the extent to which citizens are taken advantage of by advanced nudging techniques that combine machine learning, big data and behavioural science, making possible the subtle profiling, micro-targeting, tailoring and manipulation of choice architectures for commercial or political purposes.

Finally, there are questions about the explainability and transparency of AI and ‘autonomous’ systems. Which values do these systems effectively and demonstrably serve? Which values underpin how we design our policies and our machines? Around which values do we want to organise our societies?

These topics and questions are why today on AGICHAT we ask:

#AGICHAT #futurism #artificialintelligence #debate #singularitynet #emergingtechnologies #futureofgovernance #decentralisation #dao


I think that as long as true AGI is not achieved, we will have to be selective about the level of AI autonomy we want in our governance systems. Although such autonomous systems may increase efficiency, they also carry the risk of catastrophic failure - not to mention the difficulty of underpinning a value system for them to operate on. The notion of “shared responsibility” will not be possible if the AI is not self-aware and cannot be held legally responsible for its actions. So most probably we will have AI augmenting humans until, of course, humans are no longer required - but then perhaps we may no longer have governance systems as they exist today.


Since humans are naturally resistant to change (for the most part), I think it’d have to be a gradual progression. To start I’d like to see the voting/election process be more fair and true. I think autonomous AI should be able to ensure votes are counted correctly, guard against fraud and manipulation, and help overhaul the electoral college/representatives model (perhaps putting them more as “experts”, but not making their vote count for more than anyone else’s).

To balance this, I can see something where during the process of registering, you are required to read/listen to a certain amount of material about how this system works, what your vote means, who the candidates are, what the issues are, etc. (these could perhaps be spread throughout the process rather than in one big chunk).

Most voters (including me) are far too uninformed about what we are actually voting on, and who we are voting for. It becomes who can pay the most lobbyists to get in our faces, and then who can talk the best game. Using an AI system to include “Voter education” could improve this and lead to a more genuine relationship between officials and the public (until we no longer need that model of course :wink:).


Of course, any voter education requirement could not be allowed to degenerate into a voter suppression tactic. Voting should not only be a universal right but also an obligation, even if a voter chooses to abstain, cast a blank or spoiled ballot, or vote ‘none of the above’; ranked choice and/or runoff voting should also be guaranteed.

Party slate voting should be structured so that a party’s platform is put to plank-by-plank referenda, combined with an at-large ranked choice list of candidates: one house of the legislature would be apportioned by percentage distribution, and the other by computer-determined districts unbounded by state lines.
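The ranked choice voting mentioned above can be sketched as a simple instant-runoff tally. This is a minimal, hypothetical illustration (the function and its tie-breaking are my own assumptions, not any official election procedure):

```python
from collections import Counter

def instant_runoff(ballots):
    """Tally ranked-choice ballots by instant runoff.

    Each ballot is a list of candidates in order of preference.
    The weakest candidate is eliminated each round until someone
    holds a majority of the remaining active ballots.
    Ties for elimination are broken arbitrarily in this sketch.
    """
    candidates = {c for ballot in ballots for c in ballot}
    while True:
        # Count each ballot for its highest-ranked surviving candidate.
        tally = Counter()
        for ballot in ballots:
            for choice in ballot:
                if choice in candidates:
                    tally[choice] += 1
                    break
        total = sum(tally.values())
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > total or len(candidates) == 1:
            return leader
        # Eliminate the candidate with the fewest first-place votes.
        loser = min(candidates, key=lambda c: tally.get(c, 0))
        candidates.discard(loser)

ballots = [["A", "B"], ["A", "C"], ["B", "A"], ["C", "B"], ["C", "B"]]
print(instant_runoff(ballots))  # → A (B is eliminated, B's ballot transfers to A)
```

A real system would of course need audited tie-breaking rules and ballot validation, but the round-by-round elimination is the core of the idea.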

As much as we want to avoid it, a constitutional convention is going to become imperative for ideas like this and many other modernizing reforms, which may have to include blockchain encryption, instant referenda that bypass representation altogether, AI/AGI governance intermediaries as full citizen entities, automatic registration, mandatory participation, and direct election of the president and attorney general.

I think AI and governance should go hand in hand, because no matter what circumstance you code for, you simply cannot code for every eventuality. Therefore you need a decision maker to make decisions and steer a course based upon existing data. If the data doesn’t exist, then the AI needs to gather some before a decision can be made.

This is typically the function of stakeholders in a DAO, but in an AI-led autonomous organisation these kinds of decisions are taken by the AI. The advantages of this would be speed and strategic positioning.
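The gather-then-decide loop described above might look something like this. Everything here is a hypothetical sketch (the names, the confidence measure, and the scoring are all illustrative assumptions, not any DAO framework’s API):

```python
def decide(options, data, gather_more, confidence, threshold=0.8):
    """Pick an option once the evidence is strong enough."""
    while confidence(data) < threshold:
        # Not enough data yet: gather more before deciding.
        data = data + gather_more(data)
    # Score each option against all the evidence and steer that course.
    return max(options, key=lambda opt: sum(d.get(opt, 0) for d in data))

# Toy example: confidence simply grows with the amount of evidence collected.
evidence = [{"expand": 2, "hold": 1}]
more = lambda d: [{"expand": 1, "hold": 2}]
enough = lambda d: len(d) / 3
print(decide(["expand", "hold"], evidence, more, enough))  # → hold
```

The point is only the shape of the loop: the decision maker refuses to commit until the evidence passes a threshold, which is where an AI could add speed over stakeholder voting.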

To the level of competitive format.

My answer would be to give AI zero trust initially when it comes to practicing actual governance.

Then start with simulated environments and very simple tasks that eventually lead to more complex tasks as the systems have proven themselves.

Kind of like delegating authority as their skill set improves, and letting them grow to whatever level of incompetence they might reach over the course of time.
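That graduated-trust idea could be sketched as a tiny promotion scheme: the agent starts at level 0 (simulation only) and only earns a higher authority tier after a streak of proven successes, while a failure resets the streak so it settles at the highest level it can actually handle. The class and its thresholds are purely my own illustrative assumptions:

```python
class GraduatedTrust:
    """Grant an AI agent authority only as it proves itself.

    Level 0 means simulated environments and simple tasks; each
    promotion delegates a broader, more complex scope of authority.
    """

    PROMOTE_AFTER = 3  # consecutive successes needed to level up

    def __init__(self):
        self.level = 0
        self.streak = 0

    def report(self, success):
        if success:
            self.streak += 1
            if self.streak >= self.PROMOTE_AFTER:
                self.level += 1  # earned a broader delegation of authority
                self.streak = 0
        else:
            self.streak = 0  # failure: stay put at the current level
        return self.level

agent = GraduatedTrust()
for outcome in [True, True, True, True, True, True, False, True]:
    agent.report(outcome)
print(agent.level)  # → 2: two promotions earned, then the failure holds it there
```

One could also demote on repeated failure; this sketch only stops promotion, which matches the "level of incompetence" framing above.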


Hmmmm yeah trust is earned for sure PeopleUnit! :+1:


A continuous improvement model… :slight_smile:
