Front-end developers can choose to check content flags before displaying images or videos on their sites. The flags are G, PG, PG-13, R, and XXX, just like the movie industry's. We can't stop what people post to the chain, or to chains like Jackal and Akash used for decentralised image/video storage, but we can flag everything that gets uploaded from now on. There are quite a few products out there now, some of them AI-based and all of them free, that do image detection. When more than one is used to examine an image or video, a "score" is given to the media and the flag gets applied (via a separate operation id, bound to the hash of the media file).
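Rough sketch of what I'm picturing, just to make it concrete. None of this is an existing tool; the detector names, the thresholds, and the rateMedia helper are all made up for illustration:

```typescript
// Hypothetical sketch: combine scores from several image-detection services
// into one rating, then build a flag record bound to the media file's hash.
// Detector names, thresholds, and all type/function names here are illustrative.

type Rating = "G" | "PG" | "PG-13" | "R" | "XXX";

interface DetectorResult {
  detector: string;   // e.g. "nsfwjs", "some-hosted-classifier" (examples only)
  nsfwScore: number;  // 0.0 (clean) .. 1.0 (explicit)
}

interface FlagRecord {
  mediaHash: string;  // hash of the file stored on Jackal/Akash
  rating: Rating;
  score: number;      // averaged detector score, kept around for appeals
  flaggedAt: string;
}

// Average the detectors and map the result onto a rating band.
// The cut-off numbers are placeholders; a real deployment would tune them.
function rateMedia(mediaHash: string, results: DetectorResult[]): FlagRecord {
  const avg =
    results.reduce((sum, r) => sum + r.nsfwScore, 0) / Math.max(results.length, 1);

  let rating: Rating = "G";
  if (avg >= 0.9) rating = "XXX";
  else if (avg >= 0.7) rating = "R";
  else if (avg >= 0.4) rating = "PG-13";
  else if (avg >= 0.2) rating = "PG";

  return {
    mediaHash,
    rating,
    score: avg,
    flaggedAt: new Date().toISOString(),
  };
}

// The resulting record could then be broadcast in a custom (operation-id)
// transaction, so the flag lives on-chain keyed to the media hash
// rather than to any one post.
```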
If you don't want your front-end to display R or XXX imagery, just don't enable those flags when recalling images from Jackal and Akash.
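And the front-end side could be as simple as something like this. lookupFlag and the allowed-ratings set are placeholders for whatever index the flags end up living in, and Rating/FlagRecord are from the sketch above:

```typescript
// Hypothetical sketch of the front-end filter: a site that only wants
// family-friendly media enables the ratings it will show and skips the rest.
// (Rating and FlagRecord are the types from the previous sketch.)

const ALLOWED_RATINGS = new Set<Rating>(["G", "PG", "PG-13"]); // no R or XXX

// Placeholder: query whatever index holds the flags
// (block explorer API, custom-op scan, etc.).
async function lookupFlag(mediaHash: string): Promise<FlagRecord | undefined> {
  return undefined; // unflagged media has no record
}

async function shouldDisplay(mediaHash: string): Promise<boolean> {
  const flag = await lookupFlag(mediaHash);
  if (!flag) return true;              // never flagged: treat as G by default
  return ALLOWED_RATINGS.has(flag.rating);
}
```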
I know it sounds complex, but in my mind it seems quite easy, especially with all the tools out there now at our disposal.
RE: Possible new methods for Blurt image storage
Thanks for the info, but again, most went right over my head!
Sounds like you're saying a human would pre-screen the images before "block explorers like ecosynthesizer" post "a list of all the images or videos that have been flagged" for "users to upvote images" which aren't verboten? Even then, how does that stop creeps from gathering around that page, enjoying (and perhaps keeping a copy of) all the nastiest of the nasty? Are there any other platforms that put all their illegal content in one spot for the public to browse? Maybe I'm still not grasping your vision.
Have a great weekend.
Humans could, yeah, it's just an idea. So if an image gets flagged as R or NSFW or XXX or something, users can upvote it on the block explorer to get the negative flag lowered. I don't know, like I said, just thinking out loud. AI is getting good, but it may take years before it is trusted.
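Something like this is all I'm picturing for the appeal side; the 25-upvote threshold and the one-step downgrade are made-up numbers, and Rating/FlagRecord are from the earlier sketch:

```typescript
// Hypothetical sketch of the appeal idea: if enough users upvote a flagged item
// on the block explorer, the rating is stepped down one level. The threshold and
// the one-step-per-review rule are assumptions, not an existing Blurt mechanism.

const RATING_ORDER: Rating[] = ["G", "PG", "PG-13", "R", "XXX"];
const APPEAL_THRESHOLD = 25; // placeholder number of upvotes needed

function reviewFlag(flag: FlagRecord, upvotes: number): FlagRecord {
  if (upvotes < APPEAL_THRESHOLD) return flag;
  const idx = RATING_ORDER.indexOf(flag.rating);
  const lowered = RATING_ORDER[Math.max(idx - 1, 0)];
  return { ...flag, rating: lowered };
}
```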
That page that serves all the nasty imagery can be on a web3 domain rather than an ICANN domain, like BlurtImages.loki or something like that. There needs to be a way for the false positives to be reversed, since AI isn't perfect yet and we can't hire some poor soul to look at thousands of nasty images all day, yikes.
I adminned a large gaming forum (150k active users) for quite a number of years in the earlier days of the internet, and one thing we noticed very clearly was that spam (and other unwanted stuff) encourages spam. Keeping on top of it meant FAR less work. Spammers (and people wanting to post illegal/nasty junk) will look for places that aren't being watched and enforced. If you're strict with the rules and get rid of garbage fast, you often won't have to do it again for quite a while. You get a reputation for somewhere that isn't good for spam, advertising, underage media, etc. It still happens, but far less often. I'm sure the same principle applies on a blockchain, although we now have to deal with more bots than we did back then of course. Point being though, if we get something set up, after an initial burst, it's going to be pretty minimal. A stitch in time saves nine.
I think this is already in effect here to a degree. We've kept things fairly clean, without impacting freedom of speech too much. We already aren't known as a good spot for illegal crap, drama, and other unwanted content. Hopefully that scales up as we grow : )