RE: Possible new methods for Blurt image storage

I'm a yes on FOSS AI assist for Blurt. The chain is "dumb" and will store whatever someone throws at it. I'm a fan of Ollama AI models, but for detecting NSFW posts I'd also like to implement one or more of the other libs I mentioned in the OP, so that less and less human intervention/moderation is needed.
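
To make that concrete, here's a rough sketch of an Ollama-based check, assuming the `ollama` Python client and a locally pulled vision model. The model name "llava" and the one-word prompt are placeholders, not decisions:

```python
# Sketch only: assumes the `ollama` Python client is installed and a vision
# model has been pulled locally ("llava" is a placeholder choice).
import ollama

def nsfw_verdict(image_path: str) -> str:
    """Ask a local vision model whether an image is NSFW; return its reply."""
    response = ollama.chat(
        model="llava",
        messages=[{
            "role": "user",
            "content": "Answer with exactly one word, SFW or NSFW, for this image.",
            "images": [image_path],
        }],
    )
    return response["message"]["content"].strip()

print(nsfw_verdict("uploaded_post_image.jpg"))
```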

This way, our chain (and Jackal) includes content flags as attached ops. The flags are of course applied after the user posts the content (which we have no authority to stop). Some minimal custom_json op should do the trick, something like the sketch below, and front-end devs can watch for whichever flags they choose.
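
Here's a rough shape for such an op. The op id "content_flag", the account name, and the payload fields are all assumptions for discussion, not an agreed schema; custom_json on Graphene-style chains like ours just carries a string of JSON under an app-chosen id:

```python
# Hypothetical content-flag op. Broadcasting/signing is left to whatever
# lib the flagging service uses; this only shows the payload shape.
import json

flag_op = {
    "required_auths": [],
    "required_posting_auths": ["moderation-bot"],  # hypothetical account
    "id": "content_flag",                          # app-chosen op id
    "json": json.dumps({
        "author": "someuser",
        "permlink": "some-post",
        "flag": "nsfw",
        "score": 0.97,          # classifier confidence
        "source": "llava-v1",   # which filter layer produced the flag
    }),
}
```

Front-ends can stream ops from the chain and simply filter on the `id` field to pick up the flags they care about.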

If the AI or lib makes a questionable call (a false positive or negative), the content could be upvoted from other front-ends by user accounts that meet minimum criteria, and those accounts could then insert a mod op so the flag is downgraded or removed. This part needs more discussion, of course.
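
As a starting point for that discussion, the counter-op could look like this. Every field here, including which accounts qualify to issue it, is hypothetical:

```python
# Hypothetical review op that downgrades or clears an earlier flag. The
# minimum criteria for the issuing account (stake, age, reputation) are
# exactly the open question.
import json

mod_op = {
    "required_auths": [],
    "required_posting_auths": ["trusted-curator"],  # hypothetical account
    "id": "content_flag_review",
    "json": json.dumps({
        "author": "someuser",
        "permlink": "some-post",
        "action": "downgrade",  # or "remove"
        "reason": "false positive: medical content",
    }),
}
```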

Imagine a private investigator or lawyer who needs to permanently store encrypted evidence, or the plans for building an earth aether energy plant, or meeting minutes, etc.

We don't have to decide on just one FOSS AI model or lib either; we can have onion layers of filters that text, images, video, or any other blob run through before getting marked or flagged by default. Front-end webhosts can run the filters on their own systems, or not; their choice.
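
In code terms, each layer is just an independent check, and a host opts in by choosing which layers to stack. The filter names at the bottom are placeholders for whichever FOSS libs we settle on:

```python
# Sketch of the "onion layers" idea: each filter is an independent callable
# that returns a flag string (or None) for a given blob.
from typing import Callable, Optional

Filter = Callable[[bytes], Optional[str]]

def run_filter_stack(blob: bytes, filters: list[Filter]) -> list[str]:
    """Run a blob through every configured layer and collect raised flags."""
    flags = []
    for check in filters:
        flag = check(blob)
        if flag is not None:
            flags.append(flag)
    return flags

# A webhost picks its own stack, or runs none at all:
# flags = run_filter_stack(image_bytes, [nsfw_check, malware_check, spam_check])
```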

As for AI-assisted post creation, @unklebonehead knows of tons of great libs for that stuff; I consult him regularly for advice on the various libs and packages out there that we can use for various tasks, privacy, and automation.
