Can AI Be Governed At All?

It so happened that I participated in one of the workshops preparing the [International Congress for the Governance of AI](https://icgai.org/) (ICGAI), to take place in Prague, 16-28 April 2020.

Here are a few key points I have realised for myself.

  1. The main problem is not AI itself, but the people using AI to extract profits by means of:
    1.1. Pumping data out of people and re-selling that data (profiling) for better-targeted business and political advertising;
    1.2. Manipulating people through the advertising of business and political agendas, maximising the profits of sellers rather than the quality of life of the people.
  2. The above is complicated by:
    2.1. A consumption-driven economy, where business models and the corresponding advertising are focused on increasing consumption instead of quality of life;
    2.2. The global nature of these business models, given that global business is run (directly or indirectly) by global corporations maximising their profits;
    2.3. Local national governments being incapable of resisting global corporations, because the financial power of global corporations exceeds that of local national governments (non-operable anti-trust regulations are just one example).
  3. Since most AI development is run by global corporations, regulating it is complicated: corporations competing under modern capitalism are unlikely to welcome any regulations unless those regulations affect them all to the same extent at global scale, because otherwise regulation would erode their competitive advantages. That is, the global corporations developing AI (and using people’s data, and being able to manipulate people) can be expected to be highly resistant to local national regulations.
  4. Restrictive regulations applied to global corporations only at a local national scale may slow down technological development nation-wide, so nations applying restrictive regulations locally harm their own development. That is, purely local national regulations would hurt those nations under conditions of global competition.
  5. Given the above, regulations could be acceptable to both global corporations and nation states only if applied globally, without excluding the jurisdictions of any major stakeholders: national regulators in the US, EU and China, and the top IT/AI corporations in the US and China, such as Google, Apple, Microsoft, Facebook, Amazon, Baidu, Tencent and Alibaba.
  6. Besides regulations addressing inequality, regulations preventing the weaponisation of AI also cannot be achieved if any major government is excluded from the regulation negotiation and adoption processes, or excludes itself.
  7. Respectively, the AI governance effort is doomed unless it can involve both major national governments and global business corporations. Something similar happened with nuclear regulation, which required co-operation among governments; the AI problem is more complex because global corporations must be involved as well.
  8. That is, the only way to ensure efficient cooperation at both governmental and business levels is to treat the potential boost to global inequality caused by AI commercialisation, together with the dangers posed by AI, through an agreement across all major governmental and global business stakeholders.
  9. One way to reach agreement on the need for regulation is to present reliable, representative and presentable simulations of the developmental curves of the social state of the world under different regulation scenarios, proving the need for regulation to the major stakeholders. Similar work was done in preparation for nuclear regulations, simulating the “nuclear winter” that would be caused by the use of nuclear weapons in military conflicts.
  10. If agreement on the need for regulation is reached at the global governmental and corporate levels, the regulatory and permissive terms should be balanced so that the combined negative and positive impacts of the regulations maximise the overall combination of equity, equality and security at the scale of the worldwide community.
  11. Simulations of the social state of a large community balancing equity, equality and security have been carried out earlier in studies by the SingularityNET Foundation, presented at the AI4SocialGood workshop at the IJCAI-2019 conference (referenced below), so that approach and technology may be employed and developed further to justify the need for, and feasibility of, AI regulations.
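As a rough illustration of point 9, here is a minimal toy simulation, not the SingularityNET studies themselves; the function names (`simulate`, `top_share`), the multiplicative-growth dynamic, and the flat-redistribution model of "regulation" are all invented assumptions for the sketch. It compares how a crude inequality measure evolves under different regulation scenarios:

```python
import random

def top_share(wealth, fraction=0.1):
    """Share of total wealth held by the richest `fraction` of agents."""
    w = sorted(wealth, reverse=True)
    k = max(1, int(len(w) * fraction))
    return sum(w[:k]) / sum(w)

def simulate(redistribution_rate, years=50, n=1000, seed=42):
    """Toy dynamic: each agent's wealth grows by a random multiplicative
    return (so inequality compounds), then a flat 'regulation' levy is
    pooled and shared equally (so inequality is damped)."""
    rng = random.Random(seed)
    wealth = [1.0] * n
    for _ in range(years):
        # Random multiplicative returns: unregulated inequality grows.
        wealth = [w * rng.uniform(0.9, 1.2) for w in wealth]
        # 'Regulation': tax a fraction of each agent's wealth, share equally.
        pool = sum(w * redistribution_rate for w in wealth)
        wealth = [w * (1 - redistribution_rate) + pool / n for w in wealth]
    return top_share(wealth)

for rate in (0.0, 0.05, 0.2):
    print(f"redistribution {rate:.0%}: top-10% share = {simulate(rate):.3f}")
```

Real simulations for stakeholders would need empirically grounded dynamics; the point is only that scenario curves of this kind can be produced and compared.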

References
A Liquid Democracy System for Human-Computer Societies (slides)
A Reputation System for Market Security and Equity (slides)


As usual, the enforcement of regulation will become the obvious hurdle. This has been the case with the ICC, OPCW, IAEA, and a myriad of other attempts by our species to hold each other accountable.

The issue becomes unmanageable in the case of micro-biology, which in my opinion is a less controllable situation, where any person with a few tools can experiment and manufacture nobody knows what. Leeuwenhoek himself is our proof, as he ground his own lenses by hand and did not attend school. Sometimes even the slightest alteration of an organism, natural or man-made, cannot be put back in the box.

To me, the comfortable spot for AI is that it is not necessary for survival. But it is a natural product of the biological system on the planet. Much more natural than the financial system we use. When airplanes got going, people didn’t want to get on them. But that’s fine; they are not necessary for survival. The regulatory process is still ongoing with that also. It’s an organic process based on trust in ourselves as a species. We have to just get on the plane and hope the jet engine mechanic working at the airport did not have a really bad day. We have to hope, and place blind faith in the pilot, because he may well be drinking rum up there. And, as we see with Boeing, the final arbiter of that system is public trust. If a couple of planes go down, nobody flies. Finances will immediately be redirected toward regaining that trust.

So, I’m picking the internet and AI to win the 12-round match with finances. Probably by KO in the fifth round.

There is currently a pretty high degree of absurdity in regulation for our personal safety. I have to wear a seat belt, for instance, or be fined and socially embarrassed, but the brake lines on the vehicle are made of a material that dissolves in the salt purposely placed on the roads by the state for our safety.

It seems to me any machine can have a governor installed: something that allows the machine to operate at a safe speed or temperature. In the case of intelligent machines, it should be no different; it will just probably be something we haven’t expected.
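The mechanical "governor" idea above can be sketched literally in software. A minimal hypothetical example, where the `Governor` class, its parameters, and the rate-limit policy are all invented for illustration and not any real safety API:

```python
import time

class Governor:
    """Toy software 'governor': permits at most `max_actions_per_sec`
    actions, refusing anything that arrives faster."""

    def __init__(self, max_actions_per_sec):
        self.min_interval = 1.0 / max_actions_per_sec
        self.last_permitted = float("-inf")  # time of last permitted action

    def permit(self, now=None):
        """Return True (and record the action) if enough time has passed
        since the last permitted action; otherwise return False."""
        if now is None:
            now = time.monotonic()  # injectable clock, handy for testing
        if now - self.last_permitted >= self.min_interval:
            self.last_permitted = now
            return True
        return False
```

The same clamping pattern generalises beyond speed: any measurable quantity (temperature, spend, output volume) can be checked against a bound before an action is allowed through.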

Humanity is governed by time and its changes. Time is a natural governor of our ideas, products and movements. It is the ultimate ghost in the machine. No field of study has waited longer for AI than the study of time. It is the epitome of regularity and periodicity. It comes with its own built-in firewalls that require ethics. Humans did not invent ethics; they are pervasive throughout all matter and time. Any machine capable of examining the wealth of human knowledge and humanity itself will ultimately come to that conclusion. Provided, of course, the machine is given access to it.

My guess would be that, as we speak, the race to draw data concerning time and its effects on human behaviors and motivations is already ticking along with great secrecy, just as it has for the past couple of thousand years or so. Secrecy of data results is a far bigger threat than the AI itself, and that has not needed machines to be successfully deployed.

The world is under arrest, and it needs to hold still, because a full cavity search is underway and inevitable. It’s just the shy ones hiding stuff that are threatened.


@deborah - can we have an equity vs. equality simulation to calibrate the social governance setup, preventing an irreversible social split on the Gini coefficient due to AI adoption?

@DavidO, do you mean that growth of inequality according to the Gini coefficient is not a problem as long as GDP and the overall amount of wealth are increasing?
There is another point, made by Robert Freeman on Facebook, about the lack of “correlation between technology and a wealth gap”, which would mean that Gini has nothing to do with technology per se and is just a matter of social/monetary governance. That is an optimistic point of view as well (though I am not sure I agree with it).
I would rather follow your view, concluding that humanity is approaching a dangerous social split point where a smaller number of people would be managing AIs while a larger number of people would be managed by AI. With overall wealth still growing, the latter fraction could simply experience degradation supported by “universal basic income”.
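Since the Gini coefficient is doing much of the work in this exchange, here is a minimal self-contained sketch of how it is computed, using the standard sorted-list formula; the helper name `gini` is mine:

```python
def gini(wealth):
    """Gini coefficient of a list of non-negative wealths:
    0 = perfect equality, approaching 1 = maximal inequality."""
    w = sorted(wealth)
    n = len(w)
    total = sum(w)
    # Sorted-list identity: equivalent to the mean absolute difference
    # between all pairs, divided by twice the mean.
    weighted = sum((2 * i - n + 1) * x for i, x in enumerate(w))
    return weighted / (n * total)

print(gini([1, 1, 1, 1]))  # 0.0: everyone holds the same wealth
print(gini([0, 0, 0, 1]))  # 0.75: one agent holds everything
```

An "irreversible split" scenario could then be flagged as, say, the Gini value crossing some threshold with no downward trajectory under any available policy, though choosing that threshold is exactly the open governance question.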

@akolonin I wouldn’t want to commit to a position that a wealth gap “has nothing to do with technology”. As I say, I think agriculture created the original wealth gap, and we’ve been whittling away at it ever since. Maybe by creating wealth, technology does create the potential for wealth gaps. But technology has been the means of distributing that wealth too. By the standards of pre-technology, in the broadest sense, we are all richer than rich. And that probably includes the remaining 10% of the world’s population (36% in 1990?) who are below the current technical “poverty line”, tough as that life must be.


Yes, I agree with Rob here, and actually compare AI to the tractor. The tractor did make some individuals wealthy beyond normal, but also enabled overall wealth from agriculture. Water, in the case of the US, was the biggest hurdle. Now, all the small farmers are gone, and only agri-business remains. Currently, I am seeing the final phase of the tractor, as dealers’ lots are full of smaller, more capable tractors with implements providing individuals with smaller tracts of land the ability to sustain themselves. The proper adjustment, financially, would concern subsidies favoring the exporting farmer, who dominates land ownership in a single district but provides no food for the local community. The subsidy should now favor the local producer of actual food.

The difference I see is long-range but still visible in the microcosm: a more diverse group of wealthy individuals at the onset of the silicon revolution, as opposed to a small group of nearly ruthless individuals who dominated banking, railroads, and automobiles. I see ethics as a naturally occurring element of time and diversity. Interdependency is much more pronounced, and of course I agree again that what poverty looks like today was wealth not long ago. I know this first hand. I live at or near poverty, but can afford land, and I grind my own coffee, eat cheese from Italy, and drink wine from California, while living 2,000 miles away.

The fear of lack will subside, and the attention of the world will turn to the sky. The next obvious stage is flight-based transportation, and for that, AI must be preinstalled. Roads, railroads, and airports had to be built for ground-based transport, but AI is critical for what comes next, just as engines were for tractors.

My end of things has little to do with finance. Being relatively poor, I am least affected by any changes in the economy, and I am not very concerned with that at all. I see it as a machine that is already built and will adapt accordingly.

I am mostly attentive to the side benefits of AI: with the mechanical aspects of making life easier, we are going to have more time. With more time, people will turn to the next important thing on the list: the exploration of consciousness and reality. Knowledge helps prevent many of the fears people have. It makes the world safer and brings predictable avenues of attention.

Certain knowledge can also assist in the ethics of the situation, resulting in a wealth of understanding and cooperation. Mistrust is apparent around the world, and it doesn’t have to be specific to be noticed as intuitively correct. I see this as something already in the process of being corrected. So, again, I’m trying to get in front of the things we may not see coming.

Egypt was not built by slaves, because nothing that artistic is done under duress or under a brutal hand.

If the governor of AI is based on pre-existing natural changes in time and their noticeable effects on human behavior, then it remains impersonal and functions according to processes favorable for the advancement of human consciousness. It is a simple thing, and the knowledge has been around for thousands of years.

Real Wealth and Real Intelligence can be combined with AI and AGI to assuage the fears of being ruled by the unknown hand of any human, or by a runaway machine. It brings the highest element of predictability and integrity to the base program. The tractor was terrifying to the horse that pulled the plow, yet the horse now has a comfortable life in comparison.

With natural periodicity, we do not blame Lanthanum for interrupting the flow between Barium and Cerium. It is accepted as a fact. It belongs there.

Thoughts and the schedule of human behavior are much like the periodic elements. They are on a schedule, always come mixed with other things, and fluctuate in abundance from one location to the next. So, just as the periodic table (relatively new, by the way) is the standard operating format for chemistry, so too can the knowledge of time provide clarity and consistency for a governing AI.

The trade-off is a bit tart. Fortunately, and predictably, belief systems are already under heavy questioning as a result of widespread communication. That too is completely natural. The bridge between material science and the invisible is at hand, and that is what I see as the singularity.
So, when I see a person with 60 billion dollars building a clock inside a mountain, I can say it is frivolous, but when I see people with knowledge using it to drive human behavior to their own benefit instead of for the good of all, I say that is way worse. AI will not only prevent this from happening to the degree it is now, but will provide equity beyond our imagination in this area.
This is highly predictable.

The short and skinny is this: we are dealing with the same issues. Potential massive movements of wealth, waves of changing perceptions crashing onto the shore, geopolitics, fears of directing human behavior, and future determination. All because we are dealing with something new. At least perceived as new. From my perspective, there is nothing new, just issues and dilemmas as time changes and humans adapt. It is cyclical.

From my perspective, the wrench in the machine is not technology or finance, but the sequestration of the knowledge of time and how that affects human behavior. The only way AI can make that worse, is if it is centralized and retained by malevolent hoards of soul eating aliens.
That does not appear to be the case.

We are in copper headed for nickel. Everyone wants a nickel for their two cents.
Form follows function.


@DavidO. Your argument is “form follows function”?

You seem to be saying that the function of intelligence is fundamentally ethical, and so must tend to the good.

My argument was that technology has at least not been intrinsically unequal.

They are similar at least in that they are both arguments that the natural direction is not necessarily bad.

And by contrast that regulation is not necessarily good.

Remember, most importantly here we are talking about any need for regulation. Something which might send you to prison. To prison for research no less. For seeking knowledge.

I’m not against regulation. I’m not an anarchist. I don’t believe everything always tends to the good. But on regulation I think my position is that knowledge must precede governance. And at this point we just don’t know enough to regulate. And that regulation might be counterproductive.

In terms of pure knowledge, I’m inclined to believe regulation will always be bad. Because any regulation which seeks to preclude knowledge will, by definition, by precluding knowledge, act from a position of ignorance. And acting from ignorance will generally be bad.

On another level, it is interesting that regulation seems to be more and more in debate, not only for AI. Even “political correctness” is a form of regulating thought. We seem to be casting about for authority in many ways. I wonder if this is related to loss of religion.

This casting about for new authority to replace the old came up in another way yesterday. I was admonished by an academic not to use wikipedia. It strikes me as a similar issue. Academia traditionally has authority over knowledge. But that authority is being more and more challenged. A challenge enabled by technology actually. Much as the technology of printing challenged the authority of the church.

Iā€™m not saying the inclination to regulate AI is the same as academia resisting crowd sourced knowledge. But perhaps there is a common element of fear from the loss of old authority, and an inclination to regulate as a first response to that.


> Your argument is “form follows function”?

Yes.

> You seem to be saying that the function of intelligence is fundamentally ethical, and so must tend to the good.

Intelligence is gathering data, and wisdom is how to apply it. I am suggesting intelligence is one component in the cyclical nature of time, and how to apply it, is another. Intelligence itself cannot be held as leader. Other data and accumulated knowledge is necessary for decisions.

> My argument was that technology has at least not been intrinsically unequal.

You find no “argument” from me.

> They are similar at least in that they are both arguments that the natural direction is not necessarily bad.

Agreed

> And by contrast that regulation is not necessarily good.

Purpose of regulation can vary. It can be good and bad.

> Remember, most importantly here we are talking about any need for regulation. Something which might send you to prison. To prison for research no less. For seeking knowledge.

*No study has spent more time in prison or died than the study of time. Much research has been burned. Many teachers found a brutal end over many centuries. It is a current problem. I find nothing new here. The environment for discussion is much better of late. Much better. Here we are.*

> I’m not against regulation. I’m not an anarchist. I don’t believe everything always tends to the good. But on regulation I think my position is that knowledge must precede governance. And at this point we just don’t know enough to regulate. And that regulation might be counterproductive.

My military spider senses tell me otherwise. Data is relative to study, and how many studies are silent?

> In terms of pure knowledge, I’m inclined to believe regulation will always be bad. Because any regulation which seeks to preclude knowledge will, by definition, by precluding knowledge, act from a position of ignorance. And acting from ignorance will generally be bad.

Regulation of pure knowledge is not possible for individuals, but for society at large there is a security force, and that force outnumbers us by a vast amount. I try to focus on who I am directly dealing with. Speeding is a selectively enforced rule.

> On another level, it is interesting that regulation seems to be more and more in debate, not only for AI. Even “political correctness” is a form of regulating thought. We seem to be casting about for authority in many ways. I wonder if this is related to loss of religion.

*It is, and in fact, religion relies on blind faith while the internet requires discernment.*
Which will win? Seven billion people having an argument is the internet. It is the sound of progress. It’s what we worked for. We are here.

> This casting about for new authority to replace the old came up in another way yesterday. I was admonished by an academic not to use wikipedia. It strikes me as a similar issue. Academia traditionally has authority over knowledge. But that authority is being more and more challenged. A challenge enabled by technology actually. Much as the technology of printing challenged the authority of the church.

Each time society or an individual moves forward and adopts a new thing, they shed an old thing. With AI and the Internet, the ‘thing’ will be huge. This is natural. A process very familiar to anyone following time, or a river.

> I’m not saying the inclination to regulate AI is the same as academia resisting crowd sourced knowledge. But perhaps there is a common element of fear from the loss of old authority, and an inclination to regulate as a first response to that.

Yes.

No. Regulation is not a response from my end. It is a pragmatic step toward integration of knowledge. The driver is not fear or loss. That is the issue we have in common concerning integration of knowledge. It would be more correct to say, “be ready for disparate vectors of knowledge coming together for a common cause”… As for academia, they have entanglements in finance in common with “hobbling” interests. I do not. And even further than that, academia has seldom been associated with invention. That is not their purpose or function. They polish, not create. That is the design of the system. I’m actually of the view that creativity and invention have been stifled for some years now. Since photography and electricity. We could be so much further.

The Wright brothers had to ship it to France, because they quit school at 16. Leeuwenhoek ground his own lens from home-made glass. The typewriter and the equatorial sextant were made by no school. Eastman quit at 16. Ford quit at 15. Mark Twain quit at 11, six years younger than my required reading of him. UPS began on bicycles, run by two high-school dropouts. Mendeleev got the periodic table in a dream, not a lab. Ideas are not untrackable. And they are not produced by repetitive education models. They are tracked by time. Human interference is the problem. Clinging to the shore.