Inverse reinforcement learning.
Goal: let anyone's good ideas propagate and be evaluated by the top experts, so that they hopefully get implemented and benefit everyone.
Idea: layered colored likes that propagate content, with the top one or two layers visible to everyone, and additionally the current and higher layers visible to registered users. On top of that, a system of reward tokens for citations and monetization, plus selling an API service that filters media content (social media, scientific publications, and more) to feed content into the system.
(Reflection: in fact, I remember that I almost never click the "+ like" button on social media, because I don't think it is worth everyone seeing it; but if I could instead click "+ like" in such a way that only my friends who are interested in that subject field would see it, I would do it more often.)
Plan: we'll write a backend that implements layered likes, make it look like old-style Reddit with subreddits for each subject field, and write it in Python.
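The layered-visibility rule from the idea above could be sketched roughly like this in Python. Everything here is an assumption for illustration (the `Post` class, the `visible_to` helper, and the convention that layer 1 is the top expert layer), not a settled design:

```python
from dataclasses import dataclass
from typing import Optional

# Assumed convention: layer 1 is the top (expert) layer; larger numbers
# are lower layers. Visibility rule taken from the idea above:
#   - content in the top two layers is visible to everyone
#   - a registered user at layer N also sees layer N and everything above it

@dataclass
class Post:
    author: str
    layer: int   # the layer the post has propagated up to
    text: str

def visible_to(post: Post, user_layer: Optional[int]) -> bool:
    """user_layer is None for anonymous (unregistered) visitors."""
    if post.layer <= 2:               # top two layers are public
        return True
    if user_layer is None:            # anonymous visitors see nothing deeper
        return False
    return post.layer <= user_layer   # own layer and anything above it

posts = [
    Post("alice", 1, "top-layer summary"),
    Post("bob", 3, "layer-3 discussion"),
]

public_feed = [p.text for p in posts if visible_to(p, None)]  # anonymous
layer3_feed = [p.text for p in posts if visible_to(p, 3)]     # registered, layer 3
```

Here the anonymous feed would contain only the top-layer post, while the registered layer-3 user would see both; the propagation and reward-token pieces are left out entirely.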
Sounds cool!
I think @Justjoe's gist is definitely a good one. However, I think the concept should ideally be developed as an open-source plugin, so that we could apply the filtering system more generally and in a decentralized way, not just within one siloed network.
Sounds good. We could run a triple-bottom-line base layer (silos) feeding into a single secondary or top vertical layer. 36, then 16… The need for clearly defined points of reference cannot be overstated. What is news? What is fake? Looking at another example, what is AI? What is benevolent? I am not sure how heuristic we could be… Perhaps a consensus mechanism. Is upvoting the best option?
Uh-oh. Media studies!
Where combatting fake news is involved, I think there is a fundamental danger in the assumption that "fact checking" has supreme utility, given that "fake news" isn't merely false information presented as fact. It's also propaganda, manipulative one-sided narratives, cherry-picking, removal of nuance, context and differing views; plausible deniability, old-fashioned "spin" and framing.
It's easy to see the patterns of manipulation clearly in media that runs a narrative counter to our own preexisting bias. It takes a lot of hard work to be aware of your own confirmation bias, rather than everyone else's, and it seems well established that there is a heavy political weighting to the left in academia.
Quite often "fake news" propaganda is actually "true" in some sense, and it is the framing and packaging that are used to push a particular narrative. Through omission and removal of context you can play on people's assumptions and make them come to the conclusion you want, even though you haven't explicitly said anything "untrue".
(This often happens on sites such as Politico and Snopes.)
The idea of community-voted "truth" is terrifying! People don't want to hear the truth. They want their narrative frame confirmed. Look at the science denial at Google, for example.
Rather than reinvent the wheel, once the framework of the site is complete, we can bring together a consortium of fact-checking sites to give input into the process. They already have methods in place for vetting news sources, which can be adopted here.
You do have a good point regarding the liberal bias in academia; that said, we aren't looking at eliminating propaganda or cherry-picked information, as I mentioned in a previous post. We are looking at concrete facts: if it can be disproven through citation, then it is not fit for publication. We are not seeking to tilt the balance of news toward any political bias; the goal is simply to remove misinformation.
We can't eliminate manipulation. What we can do is stop fake stories from propagating by vetting each story in an open manner, for all to see. Is the alternative you prefer for third-party fact-checkers to be contracted out and operate in the dark?
I'm not suggesting reinvention. Three wheels for a consortium… feeding into one steering wheel. A framework, rather than a wheel fix for Snopes.
Remove disinformation but retain bias? OK…
Have you seen some of the fruitier peer-reviewed social science papers? They're absolute nonsense. I can find a citation to back all manner of pseudo-scientific claptrap.
If you're talking about only publishing facts, you're left with a calculator.
Sorry, but it seems all you're proposing is a centralised hierarchical structure that consolidates editorial power.
It would also be entirely redundant in 5-15 years.
Nah, what I'm suggesting is a matrix. Your 12 levels: is that 3 columns and 4 rows, or 4 columns and 3 rows? My definition of news is balanced reportage… That is measurable and manageable… Anyway, all the best…
There already exist many centralized hierarchical structures that consolidate editorial power: Facebook, Twitter, etc. They operate in complete darkness, and when light is shed, it's obvious there is a problem. Then they outsource to completely biased organizations, like the one in the article I cited. How often is this going on? This would at least operate in complete sunlight.
If someone cites an article that can be refuted, then other members will do so, and it can always be vetoed along the way.
This is the best method I can come up with; perhaps you have a better solution. Otherwise, we have a world filled with moderators controlling information in the dark.
That said, given the controversial nature of content moderation, I am leaning toward excluding it from the platform and perhaps letting someone else do it. The think-tank portion of this project is too important to be mired in negative media hit pieces about the more controversial decisions that would come out of such an open-access fact-checking system.
Isn't Wikipedia generally very good at this?
Probably good enough with a bit of cross-referencing. Biased data is not fake news…
I think this will be a good idea for a project in about 2-5 years. Telling the truth is important but requires leverage. Thanks for the invite.
Absolutely, I prefer to hear from tier 1 and tier 2 anyway. As long as it's planned out in a horizontal fashion.
While I do like certain aspects of non-moderated communities, there are some specific problems I think are worth considering: it's very easy for underqualified voices to get amplified on certain social media websites.
Specifically, I'm kind of tired of the tech-bro culture of blame-shifting for socially problematic behavior, which happens particularly to women in tech. If I mention a specific problem to a friend, I mean that in confidence. I'm not blaming the entire network; I'm legitimately asking for advice.
It's even more unforgivable in an open-source community. And that's usually one of the first reasons I'd rather leave a community than deal with it.
We have a collective responsibility to make sure things like #racism, #transphobia, #sexism, and other #socialissues aren't chasing away women in the industry, who have valuable things to contribute to #technical discussion.
I mean especially when I know I have the option to work on #opensource projects on my own, where even though it's for no financial reward, I am at the very least pursuing an interest I'm passionate about. And I especially won't contribute anything in places where #hatespeech and other issues reign supreme.