Think Tank Proposal

I spoke with Tim Richmond and several SingularityNET volunteers about a think tank project that I believe would benefit the world. They suggested I post it here in an effort to recruit volunteer developers.

I would like to propose a think-tank model that would use a multi-tier validation process to separate valuable ideas from noise, allowing only the best ideas to percolate from the bottom to the top. This would allow innovative thinkers from any socioeconomic background the ability to have their ideas considered by leading experts in a given academic field.

ThinkX (working name) would contain sections spanning from climate change to artificial intelligence, and would be structured in such a way that users at the top tier (Tier 1) would represent the top leaders of a given academic field, and tiers 2-12 would represent various percentiles of the userbase, spanning from the bottom 100% of the users (Tier 12) to the top 1% (Tier 2). ThinkX will begin as an invite-only platform where users invited to Tier 1 would be given the ability to invite two of their most esteemed peers to join Tier 1 within their specific field. Members of Tier 1 may also invite their top two students to populate Tier 2, and can email an open invitation to all of their students to join at Tier 4. Users who are faculty at an accredited institution, and users who validate their PhD credentials from an accredited institution, can immediately join as members at Tier 4 in their field.

New users to the site would begin at Tier 12 and, based on the quality and upvotes of their submissions, would be able to progressively elevate their status from Tier 12 to Tier 2 as they gain upvotes from users in higher tiers. Users will only be allowed to vote on contributions from members at or below their level; however, users from Tier 12 will not be able to cast a vote until they reach Tier 11. T12 users may only leave comments on other users' posts and comments, and these comments will be collapsed by default.

ThinkX would allow members of all tiers to view and contribute to every conversation; however, to filter out noise, members of Tier 1 would only be able to see and vote on contributions from users of Tier 1 and Tier 2 by default (which can be changed in settings). Tier 2 users would see/vote on contributions from users of Tier 2, 3, etc. If a contribution from a Tier 12 user receives enough upvotes from users in Tier 11, the content would then appear to users of Tier 10; if it receives enough upvotes, it would move up the ladder to eventually be seen in Tier 1. Members of any tier can be downgraded if other users vote that they consistently display hostile/arrogant/bullying behavior.
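The percolation rule above can be sketched in a few lines. This is a minimal illustration only: the upvote threshold per tier, and the exact visibility rules, are assumptions I've made for the sketch, since the proposal leaves the precise numbers open.

```python
UPVOTE_THRESHOLD = 10  # assumed per-tier threshold, not specified in the proposal


class Post:
    """A contribution that rises one tier of visibility at a time."""

    def __init__(self, author_tier: int):
        # Highest tier (lowest number) whose members can currently see the post
        self.visible_tier = author_tier
        self.upvotes_at_tier = 0

    def upvote(self, voter_tier: int) -> None:
        # Tier 12 users cannot vote; voters may only vote on content
        # at or below their own level.
        if voter_tier >= 12 or voter_tier > self.visible_tier:
            return
        self.upvotes_at_tier += 1
        if self.upvotes_at_tier >= UPVOTE_THRESHOLD and self.visible_tier > 1:
            self.visible_tier -= 1  # percolate up one tier
            self.upvotes_at_tier = 0
```

So a Tier 12 post that collects ten Tier 11 upvotes becomes visible to Tier 10, and so on up the ladder until it reaches Tier 1.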

ThinkX would be organized similarly to Reddit’s old-style design. A combination of published research, news, tools, ideas, and other relevant content would be displayed on the main page of each sub. Comments would be displayed using the same hierarchical structure as Reddit. This would allow comments from lower-tiered users to be collapsed by default; however, if a user in T1 wishes to see a collapsed comment from a user in T4, they could simply expand the comments with a click. They may also adjust their settings to see all or some lower-tiered users. Original posts must include a brief synopsis limited to 480 characters, followed by a full-text description or attached PDF. If a user wishes to read the entire post after reading the synopsis, they will be able to expand it to reveal the full text.

Unregistered visitors to the main page of the physics or AI sub would only be shown posts and comments from users of Tier 1 and 2; this would eliminate the noise from armchair physicists and trolls. Users who are logged in with a rank of Tier 11 would see posts from T1-T2, but also posts from T11 and T12. They would be able to comment on comments from any user from T1-T12; however, if a T11 user comments on a post from a T2 user, it would only become visible to the T2 user if it is upvoted by users in T11, then T10, T9, etc., until it percolates up the chain to the top.

ThinkX would encourage multidisciplinary team building through proposal crossposting. If a member of the AI sub wishes to collaborate with a physicist and a cognitive psychologist, they could crosspost their proposal to as many relevant subs as they wish, which would then be upvoted/downvoted by members within each sub. New proposals would appear in a tab called Proposals, which would only appear on the main page of the sub after it reached a certain threshold of upvotes.

A token would be created for ThinkX. The purpose of the token would be to reward contributions from users within each section. Philanthropists may choose to donate to any section they would like to reward user activity, and coin rewards would be voted on by users at T1. At the end of the year, a large pool of tokens would be reserved for users with the best contributions to their respective field, and there may eventually be an award ceremony.

ThinkX would also have a section where leaders in media studies, media law, fact-finding sites such as Snopes, and other related fields would populate Tier 1. The token for the media section would be used to reward users who identify and cite fake news. If a link to fake news is submitted to ThinkX, users in T12-T2 would have to cite factual evidence and provide a rationale as to why it is considered misinformation. Once enough citations are collected, users in T2 would vote to send the link to T1 to be reviewed for final consideration, where it would be voted on by its members. At this point, tokens would be awarded to users who contributed to the fact finding along the way. Users in T1 would also be paid for their time using the token.

Links to fake news/misinformation would be automatically submitted to the ThinkX database by various social media sites who connect to ThinkX through its API. Companies who connect to the ThinkX database must purchase tokens in order to submit an article to ThinkX for review; they must also purchase tokens to query the database for fake news articles already identified by ThinkX users. This would fund the value of the tokens distributed to fact finders.
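The token-gated API described above could work along these lines. Everything here is illustrative: the class and method names, token prices, and verdict labels are all my assumptions, not part of the proposal.

```python
SUBMIT_COST = 5  # assumed token cost to submit an article for review
QUERY_COST = 1   # assumed token cost to query the database


class ThinkXAPI:
    """Hypothetical sketch of the partner-facing ThinkX database API."""

    def __init__(self):
        self.balances = {}      # company -> token balance
        self.reviewed = {}      # url -> verdict once fact checkers finish
        self.review_queue = []  # urls awaiting crowd review

    def purchase_tokens(self, company: str, amount: int) -> None:
        self.balances[company] = self.balances.get(company, 0) + amount

    def submit_article(self, company: str, url: str) -> bool:
        """Partner platforms spend tokens to submit a flagged article."""
        if self.balances.get(company, 0) < SUBMIT_COST:
            return False
        self.balances[company] -= SUBMIT_COST
        self.review_queue.append(url)
        return True

    def query(self, company: str, url: str):
        """Partners also spend tokens to check an article's verdict."""
        if self.balances.get(company, 0) < QUERY_COST:
            return None
        self.balances[company] -= QUERY_COST
        return self.reviewed.get(url, "unreviewed")
```

The token spend on both submission and lookup is what funds the rewards paid out to fact finders.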

If Twitter connects to ThinkX, for instance, any news article on Twitter that receives X amount of flags for suspected fake news/misinformation would be automatically submitted to the ThinkX database for review by its users. At this point, users above a certain tier level would vote to determine if it qualifies as misinformation. If it qualifies, users from T12-T2 would contribute to the fact finding/citation/critical review process. Users who contribute the most fact-finding citations would receive the most tokens, with token multipliers assigned to users who consistently contribute the most valuable content in a given period of time. This would encourage users to login more frequently to contribute their time.

Fact finders at Snopes and elsewhere are spread out across the world in isolation trying to fight fake news. There needs to be a platform that can crowdsource this in an organized manner, with trusted individuals at the top verifying the results. See the following article:

The Fact-Checkers Who Want to Save the World

I believe this think-tank model would attract leaders in various fields to allow multidisciplinary collaboration between members of the scientific community as a whole, combat the proliferation of fake news, and attract talent for the purpose of crowdsourcing solutions to issues that face the world, as well as contribute to the development of benevolent Artificial General Intelligence. Such a think tank would also allow crowdsourced vetting, peer review, and elevation of research listed on various open access research databases, such as

If anyone would be interested in participating in this project, please contact @pythonation on Telegram to be added to the dev group.

I look forward to hearing any criticisms/feedback on this proposal.

If you feel this project has merit, please share this post on as many developer forums as you can. Thank you!


Several additional thoughts. If a user at Tier 4 upvotes a post from a user at Tier 12, that post would immediately be elevated to be seen by Tier 4 and Tier 3 users. To increase their tier level, individuals would receive points based on the tier of the users who upvote their posts. If a user in Tier 1 upvotes a post from Tier 12, the poster would receive maximum points. The number of points given would be determined by how many upvotes they receive from members of a higher tier; however, the reward points would have diminishing returns, similar to SingularityNET’s ranking algorithm. In addition, if the idea is downvoted at any tier, all points would be lost. Users would be rewarded with extra points for being the first to upvote a post that is eventually elevated to Tier 1. This would encourage members of higher tiers to search for new information from lower tiers.
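A possible scoring function for the above: points scale with the upvoter's tier (a Tier 1 upvote is worth the most), diminish geometrically with each successive upvote, and are wiped out by a downvote at any tier. The base values and decay factor below are illustrative assumptions; the proposal only says the returns should diminish, similar to SingularityNET's ranking algorithm.

```python
MAX_TIER = 12
DECAY = 0.9  # assumed diminishing-returns factor per upvote received


def upvote_value(voter_tier: int, upvotes_so_far: int) -> float:
    """Points earned from one upvote: higher-tier voters grant more,
    with geometric decay as a post accumulates upvotes."""
    base = (MAX_TIER + 1) - voter_tier  # Tier 1 -> 12 points, Tier 12 -> 1
    return base * (DECAY ** upvotes_so_far)


def score_post(votes) -> float:
    """votes: iterable of (voter_tier, is_upvote) in arrival order.
    Per the proposal, a downvote at any tier wipes accumulated points."""
    score, upvotes = 0.0, 0
    for voter_tier, is_upvote in votes:
        if is_upvote:
            score += upvote_value(voter_tier, upvotes)
            upvotes += 1
        else:
            score = 0.0  # all points lost on a downvote
    return score
```

A first-to-upvote bonus for posts that later reach Tier 1 could be layered on top of this, paid retroactively once the post arrives at the top.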

Thank you for posting this!

Our community Forum was designed to help people share their ideas, and to act as a hub for developing them into functional services.

There are some very skilled and experienced people within our community who could help you make your vision become a reality.

Welcome aboard!

All the best,


Facts in social media? Lol.

It’s like trying to make digital intelligence the same as biochemical/electrical intelligence…

How about a think tank to solve real problems vs. sifting through media for some truth that may never have been there in the first place.


The real purpose of this think tank is to solve real-world problems; this was the main focus of the proposal. Combating social media misinformation is only one aspect of this think tank.

Many organizations, such as Snopes, are actively fighting this problem in a disconnected manner. Do you feel that their efforts are pointless? Do you not see the value in connecting these organizations together in an effort to crowdsource their efforts?

Thank you for the kind words, Tim. I am hopeful this project can make a difference in the world!

It truly depends on what you define as a problem.
Socialization and media are two different things.
Humans thrive on socialization, truth or not. He-said-she-said is more entertaining to most humans, while science is not.
So the think tank would need to be designed around the realization that humans twist the truth and add facts that were never there, simply to make a story sound better.
Digital intelligence seems to be very… umm, plain, basic, straightforward, and cannot tell if a human has lied through body language.

Now don’t get me wrong, I think it’s a nifty idea, but achieving this goal may come long after our life cycles have passed.

I define the problem as mis/disinformation proliferation, and I think it’s one of the most important issues of our time. I’m not suggesting this platform will eliminate toxic comments, or replace the 20,000 content moderators on Facebook, just as Snopes does not aim to do so.

Snopes doesn’t expect to filter out every single conversation online that has misinformation; they only have 17 employees with minimal funding devoted to fact checking. That said, I believe it is possible to amplify the efforts of fact-checkers around the world by crowdsourcing the review of sites whose main purpose is to mislead/outrage individuals to act in the interest of a third party. I believe Facebook and Twitter would gladly pay for such a third-party database, as it eliminates the negative PR burden from their shoulders. This payment would fund the value of the token, which would allow us to pay users to crowdsource fact checking.

The basic premise of the social media component of the think tank is to connect the various fact-checking sites of the world with individuals who would serve as crowdsourced nodes for organizations such as Snopes, PolitiFact, etc. Your argument is that it is impossible to police misinformation. However, these sites have proven that their models do work; they simply don’t have the necessary resources to cover all their bases. Once misinformation is identified, it can be automatically filtered from social media conversations using AI algorithms.

Do you think connecting these organizations and providing them with additional human resources is a wasted effort? If so, then you seem to be arguing that the fact-checking sites of the world make no difference.

What was written has already passed. It really won’t matter.
So fact checking misinformation is only as useful as the information given, no matter how big the database.

To achieve the end goal: unless human minds are interfaced with AI, which will happen, the think tank will already have the information or will receive it, if not be the one supplying it.

Thus comes the argument that we are trying to recreate something that already existed.

The two issues are lack of data input and actual hardware technology.

Is fact checking useful? How does it really serve a purpose?
Will these connected organizations even exist 200 years from now, and could we rely on this for that long?

How is fact checking useful? Social media sites, Google, etc., filter out links that have been proven conclusively to be misinformation, thereby preventing them from spreading to other users on their platforms.

Social media is a dynamic environment. What sort of lag times do you think there will be before we get a judgment?

You bring up a good point. I believe there would need to be reforms on participating social media platforms in order for this to work. Let’s consider several examples:

A link to a news article is posted to Twitter by a new, untrusted site. Untrusted sites/blogs bear the burden of proof, and therefore would be subject to more stringent regulation until their reputation is established. If the article is flagged by enough users as fake news/misinformation, with a short rationale as to why they believe it is fake, there would be an immediate news embargo placed on the link pending review. In order to prevent vote/flag brigading, once a link is flagged as fake news, it would be shown to a small, random representative sample of users to determine if a statistically significant number of users flag it as fake news; if the rationale is consistent from user to user, as evaluated by natural language AI, then it can be assumed to be a genuine concern.
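The anti-brigading check described above (show the flagged link to a small random sample of users, and only uphold the flag if enough of them independently agree) could be sketched as follows. The sample size and agreement threshold are assumptions for illustration; the proposal only calls for a "statistically significant" result, and the natural-language consistency check on rationales is omitted here.

```python
import random

SAMPLE_SIZE = 50            # assumed sample size
FLAG_RATE_THRESHOLD = 0.2   # assumed: 20% of the sample must also flag it


def confirm_flag(user_pool, would_flag, rng=random):
    """Uphold a fake-news flag only if a random sample of users agrees.

    user_pool:  list of user ids eligible to be sampled
    would_flag: callable taking a user id, returning True if that user
                also flags the link when shown it
    """
    sample = rng.sample(user_pool, min(SAMPLE_SIZE, len(user_pool)))
    flags = sum(1 for user in sample if would_flag(user))
    return flags / len(sample) >= FLAG_RATE_THRESHOLD
```

Because the sample is drawn at random from the whole user pool rather than from the users who originally flagged the link, a coordinated brigade cannot guarantee its members end up in the sample.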

Because the site is new and untrusted, the link would be placed at the bottom of the ThinkX queue for fact checkers to verify using various trusted sources. At this point, the owner of the site (or some person/organization/group who is disputing the article) could pay tokens to elevate the priority of the fact-checking process. The higher the amount paid, the quicker the link would be reviewed, as fact-checkers would be more motivated to do so. There would be a cap as to how much would be paid, so as to prevent wealthy individuals from dominating the flow of news. That said, the higher the income generated to the site, the more fact checkers will sign on. Tier 1 members vote to decide how to distribute the tokens, so a member of Tier 11 who contributes the best supporting evidence first would win the highest reward.
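The capped pay-to-prioritize queue described above might look like this. The cap value and class names are illustrative assumptions; the key design point from the proposal is that payments raise priority only up to a ceiling, so wealthy disputants cannot dominate the flow of news.

```python
import heapq

PAYMENT_CAP = 100  # assumed maximum tokens counted toward priority


class ReviewQueue:
    """Fact-checking queue where paid links are reviewed first, capped."""

    def __init__(self):
        self._heap = []
        self._counter = 0  # tie-breaker so equal priorities stay first-in-first-out

    def add(self, url: str, tokens_paid: int = 0) -> None:
        priority = min(tokens_paid, PAYMENT_CAP)  # enforce the cap
        # heapq is a min-heap, so negate priority for highest-paid-first
        heapq.heappush(self._heap, (-priority, self._counter, url))
        self._counter += 1

    def next_for_review(self) -> str:
        return heapq.heappop(self._heap)[2]
```

Unpaid links from new, untrusted sites naturally settle at the bottom of this queue, matching the behavior described above.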

News outlets/reporters/blog authors could also add meta tags to their websites containing supporting citations and sources that establish the veracity of the story in question, which would be used to vet the submission by other users until it is reviewed by higher-tiered users. If a user submits false supporting information, it would be identified by higher-tiered users, the user would be banned, and the article would be flagged as having been tampered with and fall to the lowest queue position. The site would not necessarily be penalized, as competing interests could submit false evidence in an effort to discredit the site in question.

Major established news organizations, such as NYTimes, FOX News, CNN, etc., may choose to pay a yearly membership fee in tokens to cover the cost of vetting, and would not be subject to a news embargo, unless their news consistently falls below a certain (very high) threshold of articles that are rejected for false claims.

Such high-level organizations would have their own dedicated fact-checking teams, and should have supporting evidence to back up their claims. Fact checking information would be embedded in the meta data if they wish to become supporting members of ThinkX. If an article is flagged as fake news on a site such as Twitter, a pop-up window would appear highlighting the supporting evidence, at which point if the user is satisfied, they can remove their flag. If the user still submits the article as fake news, membership fees would cover the cost of hiring a team for review within hours. If an established news site has increased incidence of reporting inaccurate/misleading news, they will be penalized on social media sites by being temporarily subject to a news embargo for flagged articles.

There are decoy sites/Twitter accounts that exist to create trust and followers that mimic actual news sites. There has been evidence of some accounts posing as local news outlets for more than two years prior to election cycles in an effort to gain trust in their followers. At a certain point, they deliver links to fake news that their followers trust due to their history of delivering credible information.

Past a certain number of subscribers, social media accounts act as news distribution agencies, and should undergo more scrutiny than smaller accounts due to their influence. If a social media account with a certain number of followers distributes fake news from an unvetted, untrusted outlet/blog, they will be given a warning. Too many infractions, and their accounts will be suspended. Smaller users would be asked to check their links against the ThinkX database before submitting; if they submit too many flagged articles, they would receive a temporary, then permanent ban from submitting unverified links.
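The escalating-penalty policy above can be made concrete with a small decision function. The follower cutoff and infraction limits below are assumptions; the proposal only says "a certain number of followers" and "too many infractions."

```python
DISTRIBUTOR_FOLLOWERS = 10_000  # assumed cutoff for news-distributor scrutiny
WARNING_LIMIT = 3               # assumed infractions before suspension


def penalty(followers: int, prior_infractions: int) -> str:
    """Action taken when an account distributes an unvetted, flagged link."""
    if followers < DISTRIBUTOR_FOLLOWERS:
        # Smaller users: asked to check links against the ThinkX database,
        # escalating to a temporary, then permanent, submission ban.
        if prior_infractions < WARNING_LIMIT:
            return "check_against_database"
        if prior_infractions < 2 * WARNING_LIMIT:
            return "temporary_link_ban"
        return "permanent_link_ban"
    # Large accounts act as news distributors: warn first, then suspend.
    return "warning" if prior_infractions < WARNING_LIMIT else "suspended"
```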

I would love to hear additional ideas, so please feel free to critique.

Almost all news has some sort of misleading material and may differ from one news agency to another.

Different words may have similar or dissimilar meanings. Will the think tank be able to differentiate the particular meanings of words?
For example: “back up” versus “reverse.” Depending on the topic, these words may have different meanings…

A basic use case is SingularityNET Twitter posts that have BTC/ETH giveaway scams attached with SNET tags… People don’t like to retweet fake news…


Thank you. I’m starting to understand the use of this think tank.

Basically, we want to combine company databases to help prevent unwanted news, which could be anything from social media to news companies, in the form of malicious posts, adware, and other such unwanted programs?

Could this think tank perform other tasks?

There are many misleading factors that can’t be controlled for. For instance, cherry-picked news that reinforces political biases… or an over-played news story that incites an emotional response, leading to outrage, leading to a desired political action such as a protest or a vote. These are common tactics used to manipulate viewers, but neither can be filtered because both are based in truth.

The goal is to stop the distribution of external links that deliberately misinform their readers.

The main goal of the think tank didn’t start off as a way to filter news; it just so happens that it would be a good platform to do so, since the goal is to attract the world’s brightest minds to solve issues facing society and the planet as a whole. This would pool many intelligent people from various fields, who could also lend their intelligence to the fight against fake news if they decide to participate in the fact-checking forum.

Take a moment to reread the original post, and you will get a better picture of what the goal here is :slight_smile:

Yes, I am sorry. I am not well educated in the field but find it extremely fascinating. I will try to read and learn more and ask fewer questions.

Is there a hardware section in this forum? I’d like to chat about databases and the needs of software based on a neural network.

Actually, can you describe the very specific choices of the tier system (I’m asking not for the logic, but for a consistent theory of this kind of think tank, e.g., where do the specific parameter choices come from?), and the choice to give different rights to those with academic degrees? Why degrees, rather than a track record of diverse kinds?

We want to incentivize knowledgeable users in a given field to join the platform—starting them at Tier 12 would disincentivize them from joining. So we have to have a concrete way of elevating their status from the moment they sign up. The reason we can’t judge track record is that it’s just too difficult to assign numeric weight to the many variables.

One might argue that graduates of Ivy League schools or Cambridge/Oxford/Tsinghua/etc. might start at tier 2 or 3, but then where is the cutoff? Does this promote elitism? What about smaller countries whose education is top-tier, but lesser known? Or what about top-ranked departments at lesser known schools? Anyway, such rankings use questionable metrics.

If someone graduated from Harvard, they may likely rise to Tier 2, and possibly Tier 1. But not necessarily, especially if their behavior is condescending or bullying as consistently judged by their peers, which will temporarily lower their ranking score until they display acceptable behavior, at which point their ranking will be restored.

Do we then define your tier level by how many papers you’ve published? Or by how many awards you’ve won? How do we assign weight to the various awards? How do we assign weight to the value of the papers you’ve published? By assigning weight to different publishers? Or by how many other authors have cited your paper? But citations from whom? And are the study designs valid? Were they all sponsored by corporations? Is the sample size large enough? Have they been replicated? It gets very, very convoluted to assign weight in this manner.

Advanced degrees are the most tangible, binary way to verify someone has a knowledge set in a specific field, whereas track records and achievements are subject to debate. We need a way to populate the upper tiers with people who are knowledgeable in a given field, and to incentivize people with advanced degrees to join without starting them at Tier 12, this is the only concrete way I can think of. Perhaps you might have a better way to make track records more concrete by assigning numeric values, which I would be very open to hear. Perhaps place PhDs in Tier 4 by default and master’s degrees in Tier 5 or 6? That said, someone with a PhD or an MS degree can have poor reasoning ability, and therefore their contributions would be voted on accordingly, and they would rise or fall in rank.

I also anticipate that the fear of being judged based on tier level would prevent some users from participating, so I would make all user accounts anonymous by default, removing the fear of being ostracized by one’s peers. Users would be given the option to create a public profile. If they wish to validate their anonymous profile for the purpose of showing a future employer or university their contributions to the site, they may also generate and send a key to anyone they wish.

Once the platform is launched, Tier 1 users can submit individuals in their field for consideration that other Tier 1 members can vote on. They can also submit top contributors from Tier 2 to be voted on. The number of available slots in Tier 1 would be dictated by how many users are on the platform, as it would represent maybe 0.1% or even 0.01% of the userbase. This is not for the purpose of elitism; it’s to bring the best qualified minds to the top to serve as content moderators for their specific field’s sub.

Hopefully this answers your question :slight_smile: