Ah, can I just confirm: is that $15 per TB as a one-off for eternal storage? Because if we currently pay, say, $30 per month for the same 150 GB of space, it would be cheaper right away to move our image store onto Jackal.
Yes, that's right. Patrick is the name of the guy.
If we can do Blurt/IBC, that would open us up to being listed on Osmosis, and we would have the edge over all the Graphene chains.
I'm now with you on Matrix, will continue discussing there.
No, I am pretty sure it's $15/TB per month. They'll need that cashflow to keep things running, especially in the beginning before the $JKL price takes off. Even so, 150 GB at $15/TB works out to roughly $2.25 per month, so it would still be far cheaper than $30 per month.
You are spot-on about IBC; that's why I think it should be a top priority for Blurt. It's one of the things @Saboin and I should discuss, because it would open the door to everything that other blockchains (like Jackal) can offer Blurt, and vice versa. The Cosmos ecosystem does have a couple of self-proclaimed SocialFi chains on it already, but they are nowhere near as time-tested and solid as Blurt.
We can quite easily make Blurt stand out as the go-to SocialFi chain in Cosmos.
100% agreed. Oh, and if you have any ideas about incorporating AI into Blurt, I would be interested.
I know Instagram has AI to assist you with DM responses; we could have that for comments maybe, perhaps AI post-creation assist, or even AI content search and suggestions.
I'm a yes on FOSS AI assist for Blurt. The chain is "dumb" and will store whatever someone throws at it. I'm a fan of Ollama AI models, but for detecting NSFW posts I would definitely like to also implement one or more of the other libs I mentioned in the OP, so that human intervention/moderation is needed less and less.
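Just to make the Ollama part concrete, here's a rough sketch assuming the official `ollama` Python client and a locally pulled multimodal model like `llava`; the prompt, model choice and yes/no logic are placeholders, not a final design:

```python
# Sketch: ask a local multimodal Ollama model whether an uploaded image looks NSFW.
# Assumes `pip install ollama` and `ollama pull llava` have been run on the host.
import ollama

def looks_nsfw(image_path: str) -> bool:
    response = ollama.chat(
        model="llava",  # any locally available multimodal model would do
        messages=[{
            "role": "user",
            "content": "Answer with exactly one word, SAFE or NSFW, for this image.",
            "images": [image_path],
        }],
    )
    verdict = response["message"]["content"].strip().upper()
    return "NSFW" in verdict

if __name__ == "__main__":
    print(looks_nsfw("uploads/example.jpg"))
```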
This way, our chain (and Jackal) includes content flags (as attached ops). The flags are of course applied after the user posts the content (which we have no auth to stop). Some minimal custom_json op should do the trick; front-end devs can watch for specific flags of their choice.
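For what I mean by a minimal flag op, something along these lines. This is only a sketch, assuming a beem-style client pointed at a Blurt RPC node; the op id, field names, node URL and account names are made up for illustration:

```python
# Sketch: broadcast a content flag as a custom_json op after an AI/lib filter fires.
# Assumes a beem-style library; whether stock beem talks to Blurt nodes needs checking.
from beem import Hive  # or the Blurt equivalent, if available

client = Hive(node="https://rpc.blurt.example", keys=["<flagger posting key>"])

flag_payload = {
    "app": "blurt-modbot/0.1",           # hypothetical flagger identity
    "target": "@someauthor/some-permlink",
    "flag": "nsfw",                       # hypothetical flag name
    "score": 0.97,                        # confidence reported by the filter
}

# Front-ends simply watch the chain for ops with this id and act on them, or ignore them.
client.custom_json(
    "blurt_content_flag",                 # hypothetical op id
    flag_payload,
    required_posting_auths=["modbot"],    # hypothetical flagging account
)
```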
If the AI or lib produces questionable results (false positives or negatives), the content could be upvoted from other front-ends (by user accounts that meet minimum criteria), which can then insert a mod op so the flag is downgraded or removed. This part needs more discussion, of course.
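The mod side could just be another custom_json against the same target. Again purely illustrative; op id, fields and account names are placeholders:

```python
# Sketch: a moderator account downgrades or removes a flag that turned out to be a false positive.
from beem import Hive  # or the Blurt equivalent, if available

client = Hive(node="https://rpc.blurt.example", keys=["<moderator posting key>"])

mod_payload = {
    "target": "@someauthor/some-permlink",
    "flag": "nsfw",
    "action": "remove",                   # or "downgrade"
    "reason": "reviewed by humans, false positive",
}

client.custom_json(
    "blurt_content_mod",                      # hypothetical op id
    mod_payload,
    required_posting_auths=["trusted-mod"],   # account meeting the minimum criteria
)
```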
Imagine a private investigator or lawyer needs to permanently store encrypted evidence, or the plans for building an earth aether energy plant, or meeting minutes, etc.
We don't have to decide on just one FOSS AI model or lib either; we can have onion layers of filters that text, an image, a video, or any other blob runs through before getting marked or flagged, by default. Front-end webhosts can run the filters on their own systems, or not; their choice.
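To show what I mean by onion layers, roughly something like this on the webhost side. Pure sketch; the individual filter functions are placeholders for whatever models or libs each front-end chooses to run:

```python
# Sketch: run an uploaded blob through a stack of filters; the first layer that objects wins.
from typing import Callable, Optional

# Each filter returns a flag name ("nsfw", "spam", ...) or None if the blob looks fine.
Filter = Callable[[bytes], Optional[str]]

def run_filter_stack(blob: bytes, filters: list[Filter]) -> Optional[str]:
    for check in filters:
        flag = check(blob)
        if flag is not None:
            return flag  # stop at the first layer that flags the content
    return None

# Placeholder layers; real ones would wrap Ollama, an NSFW lib, a spam heuristic, etc.
def nsfw_layer(blob: bytes) -> Optional[str]:
    return None

def spam_layer(blob: bytes) -> Optional[str]:
    return None

flag = run_filter_stack(b"...some upload...", [nsfw_layer, spam_layer])
if flag:
    print(f"would broadcast a custom_json flag op: {flag}")
```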
As for AI-assisted post creation, @unklebonehead knows of tons of great libs for that stuff. I consult him regularly for advice on the various libs and packages out there that we can use for various tasks, privacy and automation.