Inverse reinforcement learning.
Goal: enable anyone's good ideas to propagate and be evaluated by top experts, so that they hopefully get implemented and benefit everyone.
Idea: layered colored likes that propagate content. The content of the top one or two layers is visible to everyone; registered users additionally see their current layer and the layers above it. On top of that, a system of reward tokens for citations and monetization, plus selling an API service that filters media content (social media, scientific publications, and more) and feeds it into the system.
(Reflection: in fact, I remember that I almost never click the like button on social media, because I don't think a post is worth everyone seeing it; but if I could like something so that only my friends interested in that subject field would see it, I would do it more often.)
Plan: we'll write a backend that implements layered likes, make it look like old-style Reddit with a subreddit for each subject field, and write it in Python.
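As a very rough sketch of how the layered-likes visibility rule above could look in Python: here I assume layer 1 is the "top" (expert) layer, that the top two layers are public, and that registered users see their current layer and everything above it. The class names, the layer numbering, and the threshold are all illustrative assumptions, not a settled design.

```python
# Minimal sketch of the layered-likes visibility rule (assumed semantics:
# layer 1 = top/expert layer, larger numbers = broader layers).
from dataclasses import dataclass
from typing import Optional

TOP_LAYERS_PUBLIC = 2  # assumption: top 1-2 layers are visible to everyone


@dataclass
class Post:
    title: str
    layer: int  # the layer this post has propagated up to (1 = top)


def visible_to(post: Post, viewer_layer: Optional[int]) -> bool:
    """viewer_layer is None for anonymous visitors, otherwise the
    registered user's current layer."""
    # The top layers are public to all visitors.
    if post.layer <= TOP_LAYERS_PUBLIC:
        return True
    # Registered users additionally see their current layer and higher.
    if viewer_layer is not None:
        return post.layer <= viewer_layer
    return False
```

For example, an anonymous visitor would see a layer-1 post but not a layer-5 one, while a registered user sitting at layer 3 would also see layer-3 posts.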
I think @Justjoe's gist is definitely a good one. However, the concept should ideally be developed as an open-source plugin, so that we could apply the filtering system more generally and in a decentralized way, not just within one siloed network.
Sounds good. We could run a triple-bottom-line base layer (silos) feeding into a single secondary or top vertical layer. 36, then 16… The need for clearly defined points of reference cannot be overstated. What is news? What is fake? To take another example, what is AI? What is benevolent? I am not sure how heuristic we could be… Perhaps a consensus mechanism. Is upvoting the best option?
Uh-oh. Media studies!
Where combatting fake news is involved, I think there is a fundamental danger in the assumption that “fact checking” has supreme utility, given that “Fake news” isn’t merely false information presented as fact. It’s also propaganda, manipulative one-sided narratives, cherry-picking, removal of nuance, context and differing views; plausible deniability, old fashioned “spin” and framing.
It's easy to see the patterns of manipulation clearly in media that runs a narrative counter to our own preexisting bias. It takes real work to be aware of your own confirmation bias rather than everyone else's, and it seems well established that academia skews heavily to the political left.
Quite often “fake news” propaganda is actually “true” in some sense, and it is the framing and packaging that is used to push a particular narrative. Through omission and removal of context you can play on people’s assumptions and make them come to the conclusion you want, even though you haven’t explicitly said anything ‘untrue’.
(this often happens on sites such as Politico and Snopes)
The idea of community voted “truth” is terrifying! People don’t want to hear the truth. They want their narrative frame confirmed. Look at the science denial at Google for example.
Rather than reinvent the wheel, once the framework of the site is complete, we can bring together a consortium of fact-checking sites to give input into the process. They already have methods in place for vetting news sources, which can be adopted here.
You do have a good point regarding the liberal bias in academia; that said, we aren’t looking at eliminating propaganda or cherry-picked information, as I mentioned in a previous post. We are looking at concrete facts: if it can be disproven through citation, then it is not fit for publication. We are not seeking to offset the balance of news toward any political bias; the goal is simply to remove misinformation.
We can't eliminate manipulation. What we can do is stop fake stories from propagating by vetting each story in the open, for all to see. Is this the alternative you think is better: for third-party fact-checkers to be contracted out and to operate in the dark?
I'm not suggesting reinvention. Three wheels for a consortium… feeding into one steering wheel. A framework, rather than a wheel fix for Snopes.
Remove disinformation but retain bias? OK…
Have you seen some of the fruitier peer-reviewed social science papers? They're absolute nonsense. I can find a citation to back all manner of pseudo-scientific claptrap.
If you're talking about publishing only facts, you're left with a calculator.
Sorry, but it seems all you’re proposing is a centralised hierarchical structure that consolidates editorial power.
It would also be entirely redundant in 5-15 years.
Nah, what I'm suggesting is a matrix. Your 12 levels: is that 3 columns and 4 rows, or 4 columns and 3 rows? My definition of news is balanced reportage… That is measurable and manageable… Anyway, all the best…
There already exist many different centralized hierarchical structures that consolidate editorial power: Facebook, Twitter, etc. They operate in complete darkness, and when light is shed, it’s obvious there is a problem. Then they outsource to completely biased organizations like the article I cited. How often is this going on? This would at least operate in complete sunlight.
If someone cites an article that can be refuted, then other members will do so, and it can always be vetoed along the way.
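The refute-and-veto flow described here could be sketched as a fully public log, in the spirit of the "operate in complete sunlight" point above. The class, method names, and the single-reviewer `review` step are hypothetical illustrations, not the actual process being proposed:

```python
# Hypothetical sketch of open citation vetting: any member may file a
# refutation against a cited article, the whole history stays public,
# and an upheld refutation vetoes the citation.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Citation:
    url: str
    refutations: List[str] = field(default_factory=list)  # public log
    vetoed: bool = False

    def refute(self, counter_citation: str) -> None:
        """Any member can challenge the citation; the log is visible to all."""
        self.refutations.append(counter_citation)

    def review(self, upheld: bool) -> None:
        """If at least one refutation is upheld, the citation is vetoed."""
        if upheld and self.refutations:
            self.vetoed = True


c = Citation("https://example.com/story")
c.refute("https://example.com/counter-evidence")
c.review(upheld=True)
```

The point of the sketch is only that every step (challenge, evidence, outcome) lives in one openly readable record, in contrast to outsourced fact-checking done behind closed doors.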
This is the best method I can come up with; perhaps you have a better solution. Otherwise, we have a world filled with moderators controlling information in the dark.
That said, given the controversial nature of content moderation, I am leaning toward excluding it from the platform and perhaps leaving it to someone else. The think-tank portion of this project is too important to be mired in negative media hit pieces about the more controversial decisions that would come out of such an open-access fact-checking system.
Isn’t Wikipedia generally very good at this?
Probably good enough with a bit of cross-referencing. Biased data is not fake news…