Reputation System for Artificial Communities, Human Societies and Hybrid Ecosystems


Any operation within a highly interconnected community of artificial agents or living beings requires a good understanding of the reliability of your partners and of the security of your interactions with them.
This is true for blockchain/crypto systems as well as for modern human society, which relies on social networks.
To address this, we are developing the idea of a reputation-based consensus applicable to any of the cases mentioned above.
To implement this, we aim to create an independent Reputation Agency that could serve SingularityNET as well as other ecosystems, implementing the principles of Social Computing.
A prototype of such a system is available as the Personal Social Analytics service of the Aigents platform, currently supporting social networks such as Google+ and blockchains such as Steemit and Ethereum.


A fascinating topic that, in reality, is one of the big keys to realising the vision of SNet working as an actual network. I have a lot to talk about here, but first a question or two.

Your article suggests that reputation will be borne of market transactions, i.e. meeting customer expectations in a transaction and receiving a recommendation based upon this. The secondary staking is a mechanism whereby an individual can indicate the anticipation that an agent will succeed in the primary source of reputation given above, or will receive further staking (hype) and therefore become used by more customers.

The above seems to imply that receiving reputation for doing anything other than fulfilling the narrow scope of the contract will require the customer to use more dimensions to score their feedback than the simple economic benefit of cost vs return.

Do you see any evidence that this will happen?

Will there be any encouragement, guidance or similar to steer a customer’s assessment to consider more than pure utility?

Do you provide any metrics to the customer that will allow them to make anything other than this simplistic and potentially damaging assessment?

My fear is that a lack of transparency in where our money is being used is causing so much damage. I find it disconcerting if the basic metric of reputation in our brave new world is utility vs financial cost.


Hi, thank you for the comments.
First of all, there are many options to compute reputations discussed here; the two options that you mention are specific to the SingularityNET design and may be extended in other cases.

Requiring “the customer to use more dimensions to score their feedback than simple economic benefit of cost vs return” should be possible, yet not required. A simplified UI design may live with just one dimension, as you suggest, and that is my personal preference as well.

Re: “Do you see any evidence that this will happen?” - what is this?

Re: “this simplistic and potentially damaging assessment” - what is simplistic and damaging exactly? Can you give an example of a non-simplistic and safe approach?

Re: “transparency in where our money” - which money, and how is this related to the subject?

Re: “disconcerting if the basic metric of reputation in our brave new world is utility vs financial”, so what is your suggestion then?


Hi there,

I often fail to explain myself well; a definite shortcoming I need to work to improve. Let me have a go at addressing your points/questions.

So, to recap and ensure I don’t have things wrong: the recommendation is that customers rate an agent based on dimensions relating to how the task was completed, with the financial transaction made lending legitimacy to the scoring. Second, there is a staking system which allows individuals to anticipate high-scoring (and therefore more heavily used) agents, using money to legitimise the scoring.

Essentially, customers get asked “was this a good service?” and stakers get asked “do you think future customers will rate this as a good service?”. The latter is therefore a forward-looking expectation of the former. So it all comes down to whether the customer is happy.

If a big-spending org is going to use more than a cold, hard profit calculation to score, a few things need to happen.

First they’re going to have to want to do so. Which hasn’t really been borne out by current experience. We’re working really hard to establish standards that organisations can use to make ethical choices on AI, but there’s some pretty shitty precedent from other areas which suggests this is going to be kinda tough. There’s also pressure from end customers, but transparency here is difficult - though I found out recently my savings were being used to invest in arms exports… nice.

Second, if they do want to rate on a broader range of criteria, they’re going to need to be able to KNOW whether the agents they use are doing shady things. How will they know if the data set used to train the agent has a strong bias against minority groups?

Third, this has to be something that is fast and can be automated if SNet is going to work. There have to be metrics around it, measurables. A key SNet USP is that the agents used can be dynamic. Corporates are not going to bother using SNet if they need to go hunting around to see if there is any risk of invoking a Twitter storm every time their backend uses a different agent.

For me it’s risky to just use purely utilitarian metrics.

To answer your questions:

Re: “Do you see any evidence that this will happen?” - what is this? - “this” is “use anything other than raw cost/utility to recommend an agent”

Re: “this simplistic and potentially damaging assessment” - my bank’s approach to measuring the reputation of funds is an example. Their measurement was output/cost, so they invested in arming developing countries. Arguably, if my bank had no idea that the investments were heading into guns, it’d be difficult to blame them too much. In the case of AI agents, they could justifiably say they had no idea the agent they were using was biased, otherwise “we’d never recommend it”.

Re: “transparency in where our money” - see previous

Re: “disconcerting if the basic metric of reputation in our brave new world is utility vs financial”, so what is your suggestion then? - That will take some more thought, but it’s a conversation worth having.


Ok, that’s clearer, thank you.
First let me stress two things:

  1. selecting multiple dimensions is optional: one can just rate to what extent a service is good in general, or stake on to what extent it would be good in general;
  2. these ratings and stakings are just two possible options to express reputation projected for use in SingularityNET, while other options are discussed under the links that I have provided.

Making raters and stakers want to provide ratings and stakes is an interesting topic, but I’d think it is social - you are replying to me and I am replying to you just on the assumption that we are doing good for our community - no money is involved. We may imagine a more complex design where stakers and raters are paid for making stakes and ratings, but that would be an extra layer on top of the current design, and we should be careful to avoid a situation where helping the community turns into a shady business of reputation gaming and gambling.

To reduce the risk of shady businesses, gaming, and gambling, we expect all rating/staking data to be open and available for audit by any member of the community, and to involve AI algorithms for fraud detection as well.
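Since the rating data would be open, even a simple audit script could surface suspicious patterns. A toy sketch (the log format, field names, and the 0.9 threshold are all invented for illustration; real fraud detection would be far more involved):

```python
from collections import defaultdict

# Toy audit over an open rating log: flag raters who concentrate nearly
# all of their ratings on a single agent, one common signature of
# self-rating bot accounts.

def flag_suspicious_raters(log, concentration_threshold=0.9):
    """log: list of (rater_id, agent_id) rating events."""
    per_rater = defaultdict(lambda: defaultdict(int))
    for rater, agent in log:
        per_rater[rater][agent] += 1
    suspicious = []
    for rater, agents in per_rater.items():
        total = sum(agents.values())
        # Require a few events before judging, then check concentration.
        if total >= 3 and max(agents.values()) / total >= concentration_threshold:
            suspicious.append(rater)
    return suspicious

log = [("alice", "a1"), ("alice", "a2"), ("alice", "a3"),
       ("bot7", "a9"), ("bot7", "a9"), ("bot7", "a9")]
print(flag_suspicious_raters(log))  # ['bot7']
```

Because everyone sees the same open log, any community member could run such checks independently and compare results.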

Re: “use anything other than raw cost/utility to recommend an agent” - would you be happy if a cheap and tasty pizza for your birthday were delivered 48 hours past the ordered time? If not, timeliness is a dimension to account for, even while the cost and quality are fine. Still, to repeat, I myself think a single “overall service utility and cheapness” dimension should be enough, but we keep the design open to be configured for as many dimensions as we may decide later.
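To make the configurable-dimensions idea concrete, here is a minimal sketch of how a rating could be aggregated over whatever dimensions a deployment enables (all names and weights are hypothetical, not the actual SingularityNET design):

```python
# Hypothetical sketch: aggregate a rating over configurable dimensions.
# A deployment could enable a single dimension ("overall") or several
# (cost, quality, timeliness), each with a relative weight.

def aggregate_rating(scores, weights):
    """Combine per-dimension scores (0.0-1.0) into one overall rating."""
    if set(scores) != set(weights):
        raise ValueError("scores and weights must cover the same dimensions")
    total_weight = sum(weights.values())
    return sum(scores[d] * weights[d] for d in scores) / total_weight

# Multi-dimensional configuration: the pizza is cheap and tasty,
# but delivered 48 hours late.
weights = {"cost": 0.4, "quality": 0.4, "timeliness": 0.2}
scores = {"cost": 1.0, "quality": 1.0, "timeliness": 0.0}
print(aggregate_rating(scores, weights))  # 0.8

# Single-dimension configuration, as preferred in the discussion above.
print(aggregate_rating({"overall": 0.9}, {"overall": 1.0}))  # 0.9
```

The single-dimension case is just the degenerate configuration of the same mechanism, which is what keeps the design open without complicating the simple UI.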

Re: “my bank’s approach to measuring the reputation of funds is an example” - if my bank gives me low-rate credit, I rate the bank high. If I learn that my bank is also investing in arms production, I may stop using this bank and stop rating it, or remove my stake in it, if I dislike arms. If you have better measures, please suggest them.

The matter of having something valuable in this world beyond utility and cash has two layers:
A) First, in the original idea we want exactly what you are concerned about - to replace brutal power and dirty money with intangible reputation :slight_smile:
B) Second, we want the reputation calculation to be reliable and to prevent armies of bots from rating and staking their owners for free - that is why we want to bind the value of a rating to the volume of the transaction, and the value of a staking to the amount at stake.
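As a rough illustration of point B (the blending formula and all names are my own assumptions, not the finalized design), weighting each rating by its transaction value and each stake by its amount might look like:

```python
# Hypothetical sketch: each rating is weighted by the value of its
# transaction and each stake endorsement by the amount staked, so an
# army of bots behind zero-value transactions contributes nothing.

def weighted_reputation(ratings, stakes, alpha=0.5):
    """ratings: list of (score in 0..1, transaction_value);
    stakes: list of (prediction in 0..1, stake_amount);
    alpha blends transaction-backed ratings with stake-backed predictions."""
    def weighted_mean(pairs):
        total = sum(weight for _, weight in pairs)
        return sum(score * weight for score, weight in pairs) / total if total else 0.0
    return alpha * weighted_mean(ratings) + (1 - alpha) * weighted_mean(stakes)

# One paying customer rates 0.2; a thousand free bot ratings of 1.0
# carry zero weight, so they cannot inflate the score.
ratings = [(0.2, 100.0)] + [(1.0, 0.0)] * 1000
stakes = [(1.0, 50.0)]
print(weighted_reputation(ratings, stakes))  # 0.6
```

With an unweighted average the bot ratings would pull the score to nearly 1.0; binding each vote to real money spent or staked is what makes the gaming expensive.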


If we consider consensus for a blockchain - here is why Proof-of-Stake is a bad idea and why Proof-of-Reputation may be better:


And “Delegated Proof-of-Stake” isn’t working either because, as the publication states, “If you’re looking for the kind of money that can buy elections, you’ll find it inside the top 0.1 percent alone.”


Here is the latest design of the Reputation System:


There is also a practical discussion on what combination of settings is best to prevent reputation gaming + @Justjoe


There is no doubt in my mind that proof of reputation is the chipset (ems) for our democratic engine. The user manual will need a constitution… vision, mission, and values: why we are here, where we are heading, who is on board, why, how, when, and what. I canvassed duplexing with Ben a while ago, and he agreed that it is a fair approach whilst AGI is in the formative stages. A bit like dual controls for learner drivers. The PoR simulation could be integrated and tested as we develop our community’s democratic governance framework. I am not sure if our PoR should have anything on the community dashboard… I can’t think of a legitimate reason the driver would want to switch it off…