Think Tank Proposal

Inverse reinforcement learning.

1 Like

Goal: let good ideas from anyone propagate and be evaluated by the top experts, so that they hopefully get implemented and benefit everyone.

Idea: layered, colored likes that propagate content. The top one or two layers are visible to everyone, and registered users additionally see their current layer and the layers above it. On top of that, a system of reward tokens for citations and monetization, as well as selling an API service that filters media content (social media, scientific publications, and more) to feed content into the system.
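A minimal sketch of the visibility rule described above, assuming the plan's choice of Python. The layer numbering convention, the class names, and the `PUBLIC_LAYERS` constant are my own assumptions for illustration, not part of the proposal:

```python
from dataclasses import dataclass
from typing import Optional

# Sketch of the visibility rule only. Convention (assumed): layer 1 is
# the top (expert) layer; larger numbers are lower layers.

PUBLIC_LAYERS = {1, 2}  # the "top 1,2 layers" are visible to everyone


@dataclass
class User:
    name: str
    layer: int  # the layer a registered user currently belongs to


@dataclass
class Post:
    title: str
    layer: int  # highest layer this post's likes have propagated it to


def visible_to(post: Post, user: Optional[User]) -> bool:
    """Anonymous visitors see only the top two layers; a registered
    user also sees posts at their own layer and every layer above it."""
    if post.layer in PUBLIC_LAYERS:
        return True
    return user is not None and post.layer <= user.layer
```

So a layer-3 user would see a layer-3 post that an anonymous visitor would not, matching the "current + higher layer to registered users" rule.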

(Reflection: in fact, I remember that I almost never click the "+ like" button on social media, because I don't think everything I like is worth everyone seeing. But if I had the choice to click "+ like" in such a way that only my friends interested in that subject field would see it, I would do it more often.)

Plan: we'll write a backend that implements layered likes and make it look like old-style Reddit, with subreddits for each subject field, written in Python.
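One way the "likes that propagate content" mechanic in this plan could be prototyped. The promotion threshold and the rule that enough likes at a post's current layer promote it one layer up are my assumptions for the sketch; the proposal does not specify the exact mechanic:

```python
from collections import defaultdict

# Assumed rule (not specified in the plan): when a post collects
# THRESHOLD likes from users at its current layer, it is promoted one
# layer up. Layer 1 is the top expert layer.
THRESHOLD = 3


class Post:
    def __init__(self, title: str, layer: int):
        self.title = title
        self.layer = layer
        self.likes = defaultdict(set)  # layer -> usernames who liked

    def like(self, username: str, user_layer: int) -> None:
        """Record a like and promote the post one layer up once it has
        gathered enough likes at its current layer."""
        self.likes[user_layer].add(username)
        if self.layer > 1 and len(self.likes[self.layer]) >= THRESHOLD:
            self.layer -= 1
```

Keying likes by the liker's layer means lower-layer enthusiasm alone cannot push a post upward; only peers at the post's current layer can promote it, which is one reading of "layered likes that propagate content".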

Sounds cool!

I think @Justjoe's gist is definitely a good one. However, I think the concept should ideally be developed as an open-source plugin, so that we could apply the filtering system more generally and in a decentralized way, not just within one siloed network.

2 Likes

Sounds good. We could run a triple-bottom-line base layer (silos) feeding into a single secondary or top vertical layer. 36, then 16... The need for clearly defined points of reference cannot be overstated. What is news? What is fake? Looking at another example, what is AI? What is benevolent? I am not sure how heuristic we could be... Perhaps a consensus mechanism. Is upvoting the best option?

1 Like

Uh-oh. Media studies!
Where combating fake news is involved, I think there is a fundamental danger in the assumption that "fact checking" has supreme utility, given that "fake news" isn't merely false information presented as fact. It's also propaganda, manipulative one-sided narratives, cherry-picking, removal of nuance, context, and differing views; plausible deniability; old-fashioned "spin" and framing.
It's easy to see the patterns of manipulation clearly in media that runs a narrative counter to our own pre-held bias. It takes a lot of hard work to be aware of your own confirmation bias, rather than everyone else's, and it seems well established that there is a heavy political weighting to the left in academia.

Quite often "fake news" propaganda is actually "true" in some sense, and it is the framing and packaging that is used to push a particular narrative. Through omission and removal of context you can play on people's assumptions and make them come to the conclusion you want, even though you haven't explicitly said anything "untrue".
(This often happens on sites such as Politico and Snopes.)

The idea of community-voted "truth" is terrifying! People don't want to hear the truth. They want their narrative frame confirmed. Look at the science denial at Google, for example.

Rather than reinvent the wheel, once the framework of the site is complete, we can bring together a consortium of fact-checking sites to give input into the process. They already have methods in place for vetting news sources, which can be adopted here.

You do have a good point regarding the liberal bias in academia; that said, we aren't looking at eliminating propaganda or cherry-picked information, as I mentioned in a previous post. We are looking at concrete facts: if it can be disproven through citation, then it is not fit for publication. We are not seeking to offset the balance of news toward any political bias; the goal is simply to remove misinformation.

We can't eliminate manipulation. What we can do is stop fake stories from propagating by vetting each story in an open manner for all to see. Is this the alternative you think is better: for third-party fact checkers to be contracted out and operate in the dark?

I'm not suggesting reinvention. Three wheels for a consortium... feeding into one steering wheel. A framework, rather than a wheel fix for Snopes.

Remove disinformation but retain bias? OK...

Have you seen some of the fruitier peer-reviewed social science papers? They're absolute nonsense. I can find a citation to back all manner of pseudo-scientific claptrap.
If you're talking about only publishing fact, you're left with a calculator.
Sorry, but it seems all you're proposing is a centralised hierarchical structure that consolidates editorial power.
It would also be entirely redundant in 5-15 years.

1 Like

Nah, what I'm suggesting is a matrix. Your 12 levels: is that 3 columns and 4 rows, or 4 rows and 3 columns? My definition of news is balanced reportage... That is measurable and manageable... Anyway, all the best...

There already exist many centralized hierarchical structures that consolidate editorial power: Facebook, Twitter, etc. They operate in complete darkness, and when light is shed, it's obvious there is a problem. Then they outsource to completely biased organizations, like the one in the article I cited. How often is this going on? This system would at least operate in complete sunlight.

If someone cites an article that can be refuted, then other members will do so, and it can always be vetoed along the way.

This is the best method I can come up with; perhaps you have a better solution. Otherwise, we have a world filled with moderators controlling information in the dark.

That said, given the controversial nature of content moderation, I am leaning toward excluding it from the platform and perhaps letting someone else handle it. The think tank portion of this project is too important to be mired in negative media hit pieces over the more controversial decisions that would come out of such an open-access fact-checking system.

1 Like

Isn't Wikipedia generally very good at this?

1 Like

Probably good enough with a bit of cross-referencing. Biased data is not fake news...

I think this will be a good idea for a project in about 2-5 years. Telling the truth is important but requires leverage. Thanks for the invite.

Absolutely, I prefer to hear from tier 1 and tier 2 anyway. As long as it's planned out in a horizontal fashion.


While I do like certain aspects of unmoderated communities, there are some specific problems I think are worth considering: it's very easy for underqualified voices to get amplified on certain social media websites.

Specifically, I'm kind of tired of the tech-bro-ish culture of blame-shifting for socially problematic behavior, which happens particularly to women in tech. If I mention a specific problem to a friend, I mean it in confidence. I'm not blaming the entire network; I'm legitimately asking for advice.

It's even more unforgivable in an open-source community. And that's usually one of the first reasons I'd rather leave a community than deal with it.

We have a collective responsibility to make sure things like #racism, #transphobia, #sexism, and other #socialissues aren't chasing away women in the industry, who have valuable things to contribute to #technical discussion.

I mean, especially when I know I have the option to work on #opensource projects on my own, where, even though it's for no financial reward, I am at the very least pursuing an interest that I'm passionate about. And I especially won't contribute anything in places where #hatespeech and other issues reign supreme.

1 Like