I found that idea interesting. Will we consider it the norm in the future to have a “firewall” layer between news and ourselves?
I once wrote a short story where the protagonist receives news of the death of a friend, but it is intercepted by their AI assistant, which says: “when you have time, there is emotional news that does not require urgent action, which you will need to digest”. I feel it could become the norm.
EDIT: For context, Karpathy is a very famous deep learning researcher who just came back from a two-week break from the internet. I don’t think he is talking about politics there, but it applies quite a bit.
EDIT2: I find it interesting that many reactions here are (IMO) missing the point. This is not about shielding yourself from information you may be uncomfortable with, but about tweets specifically designed to elicit reactions, which are becoming a plague on Twitter due to its new incentives. It is about the difference between presenting news in a neutral way and presenting it as “incredibly atrocious crime done to CHILDREN and you are a monster for not caring!”. The second one feels a lot like an exploit of emotional backdoors, in my opinion.
Not really. An executable controlled by an attacker could likely “own” you. A toot, tweet, or comment cannot; it’s just an idea or thought that you can accept or reject.
We already distance ourselves from sources of consistently bad ideas. For example, we’re all here instead of on Truth Social.
Jokes on you, all of my posts are infohazards that make you breathe manually when you read them.
That’s why I stick with platforms where hardline communist teenagers can curate what I’m exposed to.
That’s the only way.
I remember watching a video from a psychiatrist with Eastern monk training. He explained why yogis spend decades meditating in remote caves: he said it was to control their exposure to information and stimuli.
Ideas are like seeds: once they take root, they grow. You can weed out unwanted ones, but it takes time and mental energy. It pulls at your attention and keeps you from functioning at your best.
The concept really spoke to me. It’s easier to consciously control your environment than it is to consciously control your thoughts and emotions.
Reading, watching, and listening to anything is like this. You accept communications into your brain and sort them out there. It’s why people censor things: to shield others and/or to prevent the spread of certain ideas/concepts/information.
Misinformation, lies, scams, etc. function entirely by exploiting it.
Yea, no thanks. I don’t want things filtered based on what someone else thinks I should see.
What if it’s based on what you think you should see?
Either it’s you deciding as you see it (i.e. there is no filter), or it’s past you who’s deciding, in which case it’s a different person. I’ve grown mentally and emotionally as I’ve gotten older, and I certainly don’t want me-from-10-years-ago to be in control of what me-right-now is even allowed to see.
Just like diet, some people prefer balancing food types and practicing moderation, and others overindulge on what makes them feel good in the moment.
Having food options tightly controlled would restrict personal liberty, but doing nothing and letting people choose will lead to bad outcomes.
The solution is to educate people on what kinds of choices are healthy and what are not, financially subsidize the healthy options so they are within reach to all, and only use law to restrict things that are explicitly harmful.
Mapping that back to news and media, I’d like to see public education promoting the value of a balanced media and news diet. Put more money into non-politically-aligned news organizations. Look closely at news orgs that knowingly peddle falsehoods and either bring libel charges against them or create new laws that address the public harm done by maliciously spreading misinformation.
But I’m no lawyer, so I don’t know how to do that last part without creating some form of tyranny.
Why would it be someone else? Why would anyone assume that, especially here on Lemmy?
isn’t that what the upvote/downvote buttons are for? although to be fair, i’d much rather the people of lemmy decide which things are good and interesting than some “algorithm”
There’s a real risk to this belief.
There are elements of lemmy who use votes to manipulate which ideas appear popular, with the intention of manipulating discourse rather than having open discussions.
yeah. you’re right.
it’s not like i blindly trust the votes to tell me what’s right and wrong, but they still influence my thoughts. i could just sort by new, but i feel like that’s almost as easy to manipulate.
i guess it comes back to the topic of the post. where and how i get my information is always going to affect me.
i’m sure other platforms are no better than lemmy with manipulating content, but maybe for different reasons. i just have to choose the right places to spend my time.
Yeah, this is an “unpopular opinion”, but I don’t believe the lemmyverse in its current form is sustainable, for this reason.
Instances federate with everyone by default. It’s only when instances are really egregious that admins will defederate from them.
Sooner or later, Lemmy will present more of a target for state actors wishing to foment division and the like. At that point the only redress will be for admins to defederate from other instances by default, and only federate with those whose moderation policies align with their own.
You might say, the lemmyverse will shatter.
I don’t think that’s necessarily a bad thing.
End rant.
Our mind is built on that “malware”. I think it’s more accurate to compare brain + knowledge to our immune system: the more samples you have, the better you are armed against mal-information.
People are thinking of the firewall here as something external. You can do this without outside help.
Who is this source? Why are they telling me this? How do they know it? What information might they be omitting?
From that point you have enough information to judge for yourself what a piece of information is worth.
The real question then becomes: what would you trust to filter comments and information for you?
In the past, it was newspaper editors, TV news teams, journalists, and so on. Assuming we can’t have a return to form on that front, would it be down to some AI?
Most recent Ezra Klein podcast was talking about the future of AI assistants helping us digest and curate the amount of information that comes at us each day. I thought that was a cool idea.
*Edit: create to curate
It makes a lot of sense. It also presents an opportunity to hand off such filtering to a more responsible entity/agency than media companies of the past. In the end, I sincerely hope we have a huge number of options rather than the same established players (FANG) as everything else right now.
Why do people, especially here in the fediverse, immediately assume that the only way to do it is to give power of censorship to a third party?
Just have an optional, automatic, user-parameterized auto-tagger, and set the parameters yourself for what you want to see.
Have a list of things that should receive trigger warnings. Group things by anger-inducing factors.
I’d love to have a way to filter things by actionability: for things I can get angry about but have little power to change, there’s no need to give me more than a monthly update.
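The idea above could look something like this. A minimal sketch, assuming invented tag names and rule categories; nothing here is an existing Lemmy feature:

```python
# Hypothetical user-parameterized auto-tagger filter.
# The user, not a third party, decides what each tag category does.
from dataclasses import dataclass, field


@dataclass
class FilterRules:
    hide: set = field(default_factory=set)          # drop entirely (e.g. engagement bait)
    digest: set = field(default_factory=set)        # anger-inducing but not actionable: batch monthly
    trigger_warn: set = field(default_factory=set)  # show, but behind a warning


def route(post_tags: set, rules: FilterRules) -> str:
    """Decide what to do with a post, given tags from the auto-tagger."""
    if post_tags & rules.hide:
        return "hidden"
    if post_tags & rules.digest:
        return "monthly-digest"
    if post_tags & rules.trigger_warn:
        return "show-with-warning"
    return "show"


rules = FilterRules(
    hide={"engagement-bait"},
    digest={"outrage-no-local-action"},
    trigger_warn={"graphic-violence"},
)
print(route({"politics", "outrage-no-local-action"}, rules))  # monthly-digest
print(route({"cat-pictures"}, rules))                         # show
```

Since the rules live entirely on the user’s side, the only thing that needs trusting is the tagger itself, which could be open source and run locally.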
Because your “auto-tagger” is a third party and you have to trust it to filter stuff correctly.
How about no? You set it up with your own parameters; it is optional and open source.
Do we have an iamverysmart community yet?
Hüman brain just liek PC, me so smort.
It’s definitely an angle worth considering when we talk about how the weakest link in any security system is its human users. We’re not just “not immune” to propaganda, we’re ideological petri dishes filled with second-hand agar agar.
Perhaps we can establish some governmental office for truth that decides whether any shitpost can be posted without the sterilization and lobotomization of the poster
Or maybe some kind of “community value” score for people with the right thinking
Counterpoint: only allow elected governing bodies to own or control media outlets, platforms, and critical communications infrastructure
I think the right approach would be to learn to deal with any kind of information, rather than to censor anything we might not like hearing.
Reminds me of Snow Crash by Nealyboi
I think most people already have this firewall installed, and it’s working too well - they’re absorbing minimal information that contradicts their self-image or world view. :) Scammers just know how to bypass the firewall. :)
Leaving aside the dystopian echo chamber this could result in, you could argue that it would help a lot with fake news. Fake news is so easy to spread and more present than ever. And for every person there is probably that one piece of news that is just believable enough not to question. And then the next just-believable piece of news. And another. I believe no one is immune to being influenced by fake stories, maybe even radicalized if they are targeted just right. A firewall just filtering out everything non-factual would already prevent so much societal damage, I think.
There are enormous issues with who decides what makes it through the filter, how to handle things that are of unknown truth (say ongoing research), and the hazards of training consumers of information to assume everything that makes it to them is completely factual (the whole point of said fake news filter). If you’d argue that people on the far side of the filter can still be skeptical, then just train that and avoid censorship via filter.
Yes, Lemmy is that too. We need to meet people in person and then form groups online. I had devised a solution for exchanging public keys in person and verifying each piece of content thereafter with that key.
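The verification half of that scheme can be sketched with ordinary digital signatures. A minimal sketch, assuming the third-party `cryptography` package and Ed25519 keys; the function names and scenario are invented for illustration:

```python
# Sketch: exchange a public key face to face, then verify every post with it.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def sign_post(private_key: Ed25519PrivateKey, content: bytes) -> bytes:
    """Author signs each post with the key whose public half was handed over in person."""
    return private_key.sign(content)


def is_authentic(public_key: Ed25519PublicKey, content: bytes, signature: bytes) -> bool:
    """Reader checks a post against the public key they received face to face."""
    try:
        public_key.verify(signature, content)
        return True
    except InvalidSignature:
        return False


# In person: Alice hands Bob her public key (e.g. as a QR code).
alice_key = Ed25519PrivateKey.generate()
bob_copy_of_alice_pub = alice_key.public_key()

# Later, online: Bob checks that a post really came from Alice and was not altered.
post = b"meet at the usual place"
sig = sign_post(alice_key, post)
print(is_authentic(bob_copy_of_alice_pub, post, sig))         # True
print(is_authentic(bob_copy_of_alice_pub, b"tampered", sig))  # False
```

The in-person exchange is what bootstraps the trust: once Bob has the genuine key, no server or platform in between can impersonate Alice without failing verification.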