Proposal: Issuing a Red Card to Reward Abuse


Preamble

Blurt is, and always will be, a free-speech platform where people are free to post their thoughts and ideas, to just Blurt out whatever they feel like. But free speech doesn't mean free rewards!

Blurt only has value when rewards are themselves backed by some sort of value, for example Proof-of-Brain. That value gets eroded when we start seeing "Proof-of-Copy-Paste" and "Proof-of-Plagiarism", which are currently rearing their ugly heads on Blurt.

Good content creates SEO (Search Engine Optimisation) value and discoverability on search engines, which helps Blurt frontends and content get discovered by external users. Plagiarism and copy-paste actually erode the SEO value of Blurt. Users think they are winning by upvoting their own garbage for rewards, but in the long term they are not helping the token price.

I reiterate that accounts should not be stopped from posting on chain, but they should not be allowed to gain from the reward pool if the content is of low value.

The Core Team has discussed many solutions at length. We certainly don't want to bring back the downvote, because that would cause flag wars and bullying. If you think downvotes are cool, first have a look at this YouTube video about the Downvote Predicament.



I will briefly cover some ideas we looked at.

Code name: Plan A

This is an idea dreamed up by our resident mathematician @rycharde. We create an account that is used as a herald/trigger; this account votes on undesirable posts with a 1% vote, and we code the blockchain to simply not pay out rewards at the 7-day mark to posts that have been marked by a micro-vote from the abuse account.

In this model the post will not earn rewards; the curators perhaps still will. However, should we be rewarding them for voting on trash?

Advantages: Granular. This method allows spam fighters to target individual posts and not an account as a whole. Two abuse accounts with different severity levels can be used: one with high severity that signals that all rewards should be forfeit, and another with moderate severity that only chops off, say, 60% of the rewards.

Disadvantages: Centralised approach. With this method, a group of people would be given posting authority over the abuse accounts; the accounts would have to be owned by the foundation, which would add/remove the posting authority of abuse fighters as needed. This creates central reliance on the foundation.
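For clarity, here is a minimal sketch of how such a payout check might work, assuming two hypothetical trigger accounts and the 60% figure used above; none of this is actual chain code:

```python
# Hypothetical payout check for Plan A. The trigger-account names, the 60%
# cut, and the function shape are assumptions for illustration; only the
# mechanism (a 1% marker vote that voids rewards at the 7-day mark) comes
# from the proposal.

FULL_TRIGGER = "abuse.trigger.full"        # marks posts whose rewards are fully forfeit
PARTIAL_TRIGGER = "abuse.trigger.partial"  # marks posts that lose, say, 60% of rewards
PARTIAL_CUT = 0.60

def payout_at_seven_days(pending_rewards: float, voters: set) -> float:
    """Return the author payout after checking for trigger micro-votes."""
    if FULL_TRIGGER in voters:
        return 0.0                                   # high severity: all rewards forfeit
    if PARTIAL_TRIGGER in voters:
        return pending_rewards * (1 - PARTIAL_CUT)   # moderate severity: 60% chopped off
    return pending_rewards                           # normal payout

# A post pending 100 BP flagged by the moderate-severity account pays out 40 BP.
print(payout_at_seven_days(100.0, {"alice", PARTIAL_TRIGGER}))  # -> 40.0
```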

Diversity Index

The Diversity Index (DI), also conceived by @rycharde, would work like a reputation system: if you vote for just a few of the same circle of accounts each day and don't spread your votes to new authors, your DI will be low. If you spread your votes widely, your DI will be high, up to a maximum of 100%.

The idea is that the blockchain will pay you proportionate rewards according to your DI score. Say you are an author and you earn 100 BP on a post after curation is deducted, and your DI score is 80%; in that case you will only earn 80 BP in rewards.
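The post does not specify the exact DI formula, so as a rough sketch, assume DI is simply the share of distinct authors among an account's recent votes:

```python
# One possible reading of the Diversity Index, as a sketch only: DI here is
# the share of distinct authors among an account's recent votes (0..1,
# i.e. up to 100%). The real formula is not given in the post.

def diversity_index(recent_vote_authors: list) -> float:
    """Share of recent votes that went to distinct authors."""
    if not recent_vote_authors:
        return 0.0
    return len(set(recent_vote_authors)) / len(recent_vote_authors)

def author_payout(base_reward_bp: float, recent_vote_authors: list) -> float:
    """Scale the author's reward by their DI score, per the proposal."""
    return base_reward_bp * diversity_index(recent_vote_authors)

# The example from the post: 100 BP after curation at a DI of 80% -> 80 BP.
votes = ["ann", "bob", "cat", "dan", "eve", "fay", "gus", "hal", "ann", "bob"]
print(author_payout(100.0, votes))  # 8 distinct authors / 10 votes -> 80.0
```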

Advantages: The DI incentivises users to curate widely and not circle-jerk the same people. It can also be used alongside any of the other abuse-fighting ideas.

Disadvantages: Complex to implement and might be compute-heavy. It doesn't stop users from voting for trash; it only helps distribute votes.

Proof-of-Human Oracles

This is an idea by @michelangelo3 which I was heavily in support of until I had a further discussion with @jacobgadikian.

The concept is that we would add a field to each user account, let's call it "verified", and an external verification service API would be used as an oracle, writing to the chain to update that field as TRUE or FALSE. All users would be set to verified = FALSE post-hardfork and would have to verify with a service such as brightID.org or similar.

Only users verified as human (verified = TRUE) would gain access to the rewards pool. All accounts would still be able to post and be ranked, using the declined-payout posting method, but could not receive reward payouts until verified.
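A minimal sketch of that gate, assuming a hypothetical "verified" field and oracle account name (neither is actual Blurt chain code):

```python
# Sketch of the Proof-of-Human gate. A hypothetical "verified" field on
# each account may only be flipped by a designated oracle account; names
# and structures here are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Account:
    name: str
    verified: bool = False   # everyone starts as FALSE post-hardfork

def update_verified(account: Account, is_human: bool, signer: str) -> None:
    """Only the trusted oracle (e.g. a BrightID-style service) may write this."""
    if signer != "verification.oracle":   # hypothetical oracle account name
        raise PermissionError("only the verification oracle may set this field")
    account.verified = is_human

def can_receive_rewards(account: Account) -> bool:
    """Posting always works; reward payouts require verified == TRUE."""
    return account.verified
```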

Advantages: This solution immediately cuts all sock-puppet accounts out of the rewards pool and only lets real humans earn rewards. Depending on the verification partner chosen, it could add trust benefits and access to a wider network of already-verified users on the partner's network.

Disadvantages: The chain will no longer be self-sovereign and will have an intrinsic dependency on an external service. This service could become corrupt, or the oracle could be tampered with to verify bad actors. If a third party is used that requires ID and address verification, that service could be compromised via leaks, or even by court orders from authorities wanting to uncover the identities of pseudonymous bloggers they want to target.

Services like BrightID luckily do not require ID or anything; they rely on you getting verified by human friends when you share a single-use QR code with them. The more friends you verify with, the more human you are. This can be gamed by installing the app on multiple devices, or even virtual devices, and creating multiple virtual profiles that way. Jacob and I tested this: Jacob created two BrightID identities using two devices he had. If a better, non-gameable Proof-of-Human solution comes along in the future, it could be an option.

Another disadvantage is that verification would need to be redone regularly, maybe every 3 months; otherwise a black market for verified accounts would emerge and destroy the whole concept.

Witness Operated Abuse Lists

@jacobgadikian proposed this idea, where witnesses would run abuse lists: each witness would keep a list on their server of accounts that they don't want accessing the rewards pool. There would be a consensus threshold; say, 15 of 20 witnesses would have to have the same name on their lists for the chain to exclude that account from rewards.
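A minimal sketch of that consensus rule, using the 15-of-20 threshold given above as an example:

```python
# Sketch of the witness abuse-list consensus rule: an account is excluded
# from the rewards pool only once it appears on the lists of at least 15
# of the 20 consensus witnesses. The threshold is the post's example, not
# a fixed protocol value.

CONSENSUS_THRESHOLD = 15   # of 20 witnesses

def excluded_from_rewards(account: str, witness_lists: list) -> bool:
    """True once enough witnesses have independently listed the account."""
    listings = sum(1 for lst in witness_lists if account in lst)
    return listings >= CONSENSUS_THRESHOLD

# Example: 20 witnesses, 16 of whom have listed "spammer42".
lists = [{"spammer42"}] * 16 + [set()] * 4
print(excluded_from_rewards("spammer42", lists))  # -> True
```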

Advantages: Possibly the easiest to implement; it has been done on Steem before.

Disadvantages: Might be hard to get all witnesses to agree, and reaching consensus might be a slow process.

The abuse lists would not be public, as they reside on the witness servers, so the process would not be transparent.

With this method witnesses would always be playing catch-up; abusers can switch to new accounts once the old ones have been abuse-listed.

Witness bribery and corruption could occur: for example, whales that abuse rewards could vote only for witnesses that don't have them on their abuse lists, or pay them off not to add them. The same goes for witness threats, where witnesses can be targeted in their personal capacity by abuse-listed users.

My Proposed Solution - Blockchain Moderators

After speaking with @jacobgadikian, I formulated an idea based on his suggestion above regarding witness operated abuse lists.

The idea is that we want witnesses to be corruption-free and to stay on task securing the blockchain, without having to worry about policing rewards as well; not every witness is good at abuse fighting or even wants to get involved in it. Their job is to be ambassadors and developers of the chain and to run reliable nodes.

So I propose that we create a different set of validators for the rewards pool, voted in the same way as witnesses are: by stake-weighted voting. In this solution, up to 20 moderators would be voted in and tasked with seeking out accounts to add to their abuse lists.

Much like in Jacob's solution, there would be a consensus threshold of, say, 15/20: an abuser's name would have to appear on 15 moderator abuse lists before the blockchain suspends the abuser's access to the reward pool.

The difference, however, is that the abuse lists would be public and auditable: each moderator would post their list via Custom JSON, recorded on-chain, so moderators would not need to run servers the way witnesses do.
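As a sketch, a moderator's list update could look something like the following; the operation id and payload layout are assumptions, and only the broad mechanism (each moderator broadcasting an auditable list on-chain, no server required) comes from the proposal:

```python
# Sketch of a moderator's public abuse-list update as a custom_json
# operation. "abuse_list_update" and the payload shape are hypothetical.

import json

def abuse_list_update(moderator: str, add: list, remove: list) -> dict:
    """Build a custom_json operation updating this moderator's abuse list."""
    return {
        "required_posting_auths": [moderator],
        "id": "abuse_list_update",   # hypothetical operation id
        "json": json.dumps({"add": add, "remove": remove}),
    }

# Nodes replay these operations into per-moderator lists and apply the same
# 15-of-20 consensus threshold described above before suspending rewards.
example = abuse_list_update("mod.alice", add=["spammer42"], remove=[])
print(example["json"])  # {"add": ["spammer42"], "remove": []}
```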

We can also redirect some of the blockchain inflation to the moderators. Much like witnesses share in 10% of inflationary rewards, moderators could perhaps share in 3%, since they don't have server costs; for the time being this could be deducted from the @blurt.dao allocation.

Advantages: Fast and transparent Custom JSON blacklist updating.

Helps focus witnesses on blockchain security and keeps them free from corruption, bribery, and threats.

Moderators have to perform well; otherwise they will be unvoted by the community and lose their share of inflation rewards.

Disadvantages: Moderators could still be subjected to corruption, threats, and bribery. They could, however, choose to operate anonymous accounts that are not linked to their social profiles. It would be harder to get voted in this way, but perhaps in time they could be voted in for their diligent work rather than for their social standing and reputation.

This targets reward denial at the account level instead of the post level: no granularity, just blanket reward denial until the account is removed from the abuse list. However, this solution could perhaps be used in conjunction with Plan A, where the 20 moderators able to issue the special trigger vote that demonetises content are voted in by the community in a decentralised manner. That way specific posts are targeted rather than entire accounts, unless an account is placed on autovote for repeat offenders.

Question

Should rewards be limited only on the author's side, or should curators who voted for the garbage content also be denied rewards on posts by the abuse-listed authors they voted for?

Conclusion

This is a lot of food for thought. Please take your time to digest it, and comment below with suggestions, thoughts, and improvements.

Sincerely,

Ricardo Ferreira
@megadrive
Blurt Co-Founder

  ·  4 years ago  ·  

This can only work if you have a well-defined set of guidelines that everyone is expected to follow. The second you start making it subjective, it falls apart. So you need to come up with a set of rules or guidelines that everyone is expected to abide by, make sure users are well aware of what those are, and then you can go about enforcing them.

  ·  4 years ago  ·  

@megadrive, WOW, your idea is completely outside the box, and it is brilliant. It is less complex for users to understand and easier to implement. Very similar to that of @jacobgadikian.

If there were a ballot to cast for each idea, I would cast YES for yours, but a mix of your idea and @rycharde's Diversity Index would be cool.

  ·  4 years ago  ·  

Hope the Diversity Index will still be developed as it solves a slightly different - though related - issue, which is vote diversity and vote-rings, whether the posts have quality or not. This is especially useful as Blurt grows - my experience on Steem is that many reward-extraction techniques took place out of public view as the activity was so much higher. The aim here is to solve the problem before it becomes a problem.

IMO:

  1. Ricardo's thing
  2. Diversity index
  ·  4 years ago  ·  

> @rycharde's diversity index ... is, well, something @steemchiller does already with his "Voting CSI" in SteemWorld.
>
> I mentioned the core of this idea as "diminishing returns" already about three years ago.
>
> Yes, the idea is good, but a whale can try to reach 'diversity' by creating and upvoting 100+ accounts ...

So in the witness-maintained case, that would be transparent as well, because the witnesses would have to commit their lists to the chain and would be accountable for them. But it doesn't matter; your idea is still several times better than anything I have heard so far.

I hope to build consensus around this in the community and push it out rapidly. Moderation has value, particularly when defending an asset as precious as the reward pool.

My belief is that "blockchain moderators" is several times better than the other suggestions, including my own.

I think it's better because it will be easier to implement and it does not burden the witnesses further. They can focus on running reliable machines. This new group, called wardens, will look over the reward pool.

> BTW: That's just downvotes with extra steps!

No, it is not just downvotes with extra steps.

Likely the biggest issue with downvotes was psychological. With money and reputation attached in the way they were on Steem, downvotes had a pretty nasty impact on people.

Because downvotes had a nasty impact on people, I believe they had a nasty impact on the platform and the price as well. Let's not forget, Blurt is alive, and it has a symbiotic relationship with the community. When the community is happy, Blurt is happy.

  ·  4 years ago  ·  

Cool - we did discuss the wardens :-) OK, this seems to be condensing around a framework we can build upon, and we can see what the flow of decisions could be - voting Mods in and out, and even Mods wishing to leave for other reasons, etc.

Also, just to put this into context: a very similar setup exists on Steemhunt, although it only affects their platform. They started with a hierarchy of curators and sub-curators to both promote the best and clamp down on spammy crap - hence they were both curators and moderators. It always looked as if it worked very well, partly because some very good curators with experience and talent joined them. Here on Blurt it looks possible that Mods and Curators will be different jobs, although as I wrote below, there could be individuals with both roles, as the job of "quality control" is similar.

  ·  4 years ago  ·  

I think your DI is very similar to what @steemchiller does already with his "Voting CSI" in SteemWorld.

I mentioned the core of this idea as "diminishing returns" already about three years ago (and I think we were talking about it in Bangkok).

Yes, the idea is good, but a whale can try to reach 'diversity' by creating and upvoting 100+ accounts ...

Do you use Discord? I would be curious to talk with you and hear your very personal opinion about Blurt and its future.

Have a great day!

  ·  4 years ago  ·  

Hi, same ID on Discord. I looked into the CSI many moons ago; the formula was kept secret, although I did find a heuristic. You may even recall that some years back I had a first stab at such a "diminishing returns" idea on Steem. Anything similar needs, of course, to be implemented before those affected are able to veto it.

It ended up being called Diversity as a way not to scare people ;-)

  ·  4 years ago  ·  

Thanks!
I think I also need the number behind the name to be able to find you on Discord, right?

  ·  4 years ago  ·  

No; unless lots of users have the same name, you can easily find me with search. You can always just come say hi here: https://discord.gg/YYVQXrfA

  ·  4 years ago  ·   (edited)

Hello @jacobgadikian!
I am not a fan of downvoting; even if I had the power, I wouldn't exercise it. It is nasty, you are right! It gives a bad connotation, as if content creators are abusing the system. On another platform I was downvoted twice by the same curator, who didn't even explain what was wrong with my content, just because I was given huge rewards for good content I created.

Wardens, I guess, would be better than witnesses doing the job; they can concentrate on policing the flow of rewards to content and at the same time control who is spamming. I have experienced the curation system on another platform where the curator of a community uses the power vested in them by the community to give huge rewards to their own content and the content of their curating team. It's been pretty obvious that this has been the case since that platform started. In my opinion, that system is bad for the platform. The result: good content creators go away, since there is no way they could match the rewards of those community creators, no matter how good their content is.

  ·  4 years ago  ·  

I think that part of the problem can be solved if each curator takes on the role with greater commitment. This demands more time, more reading, searching, and research, and even causes exhaustion, but it is part of the responsibility assumed; it may be necessary to expand the work to at least 2 trusted curators per account.
If each curator of the foundation responsibly assumes the role, users who wish to enter any of the communities would know the strengths and would not post on Blurt foundation accounts.
Another case is account farming and clandestine management without using tags visible to a curator; it may be necessary to extend the review to other tags, or to create an application that examines posts, perhaps not for content but for quantity per tag, or to involve another team strategy to mitigate failures.

  • @rycharde, a query: when Blurt receives and identifies a new user, can her movements within the platform then be verified with the support of some program? Beyond what we already have, that is: who created the account, whether it is new or chain-created, how many publications it makes, which tags it uses regularly - something close to a diagnostic and control application. Would this application be handled only by witnesses, or could it also be used by a highly trusted curator?
    I am imagining things, in view of the increase in plagiarism; perhaps this implies more work for a curator, but it could be a helpful tool if it is possible to segment by user groups and know their movements. It is possible that the study of behaviour patterns can shed light for curators so that they can more easily identify a fake, plagiarising, or abusive account.
    It is possible that the behaviour of an account within the chain can be estimated or predicted.

Answer to question @megadrive:

  • Remove the reward; this can guide any user reading a post to be careful when giving their vote.

  • The plan with 20 moderators seems like a good idea to me; I can only suggest that it be an odd number, either 19 or 21, for the decision point, so that ties are avoided in decision-making.

What @jacobgadikian suggests about witnesses working with abuse lists is also interesting, because it segments the groups and allows smaller groups to work in an organised way as a team - that is, real curators with fewer users to verify.

Good vibes everyone

  ·  4 years ago  ·  

Easier to write separate comments for each issue so that threads don't become braided! lol.

So the Moderators are really the same as what are currently called Wardens within the discord server.

I think expecting them to achieve consensus would really slow down the process and could keep many names off the master list.

20 is not a magic number, could be fewer. How many Wardens are there at the moment?

The idea of creating a user category of Moderators is, however, good.

I shall ponder the election model, tho ;-) Indeed, it could be a very good way to experiment and investigate different models so that other good Moderators can be added - and those who wish to leave can leave - without the inertia that comes from voting only once.

  ·  4 years ago  ·  

I'd say we have fewer than 10 good active wardens right now. Yeah, we do need a decentralised election process and their own share of inflation. Maybe we make consensus 51%+, or 11/20, for a faster response in populating abuse lists; we could go as quick as: if you're listed on any 3 moderator lists, you can't get rewards.

  ·  4 years ago  ·  

If blacklisted accounts receive no rewards, then anybody voting on them is wasting their votes - this is what currently happens if a post is made with "decline rewards". Why would we reward bad voters on bad-actor posts? Surely one of the aims is for those accounts to make their curation rewards elsewhere, on legit posts.

I always thought it was an evil ploy to leave vote buttons usable even on posts that have expired or have declined rewards. This is a separate issue but could be added - blanking out the vote button itself.

We do not allow upvotes on these posts. The upvotes will simply be disabled.

  ·  4 years ago  ·  

OK :-)
So all the talk of "curation rewards" was possibly at cross purposes. OK, np.
And OK, disabling votes is kinder to the "accidental voter" than leaving the button enabled yet valueless.

  ·  4 years ago  ·  

I brought it up because Jacob said on our call that curators should get rewards; maybe he meant that blacklisted accounts should still be able to earn from curation when they vote for good content from authors who are not abuse-listed. Also note that I am trying to be PC and use the term "abuse list" rather than "blacklist".

  ·  4 years ago  ·  

erm... I don't understand that either!

Yes, an abusive account should still be able to vote on posts from non-abusive accounts - which in itself excludes self-voting ;-) so their stake is unaffected.

But to think that someone votes on an abuser and that post gets zero author-rewards but the voter can still get their share of curation-rewards strikes me as pointless. How will that affect behaviour?

All that would achieve is decreasing the author-rewards, so that any vote-ring would yield only half the current income. It's a POV, I guess.

Also, don't forget comments, else they will shift attention to upvoting a stream of shit-comments. I've seen that done too!

  ·  4 years ago  ·   (edited)

@megadrive I'm in favour of your proposal, as it's decentralised and involves community participation.

If I could suggest a modification, it would be to make the moderator role time-limited: perhaps the stake-weighted vote decays, or a fresh round of voting is required every X days or X months. This ensures only active moderators are in the role, as voted by active community members. As we see from witness voting, votes can remain in place for years, even after those accounts have long been inactive.

Regarding authors and curators losing rewards on bad posts, I would be in favour of rewards being removed for both.

The Diversity Index is really cool; I would also be in favour of that proposal.

  ·  4 years ago  ·  

Yes, good idea, but it may not be easy to code in the first draft; the decay may have to come in a future HF, and we might as well add it to witnesses too.

The idea of each witness having an abuse list sounds good and easy to implement.
Nice idea, @jacobgadikian, but can you do something to make sure that a witness will not grant special treatment to anyone? The idea of the Diversity Index also sounds good; it will distribute votes to everyone.

@megadrive sir, will you bring in the verification thing after the next hardfork...?
Also, it is a request not to bring the downvote thing to Blurt; so many people would be bullied for sure if that happened.
It is also one of the good features that there is no downvote, so I think witnesses handling the abuse lists for now is a good idea.
But one last question: what about the future, when the users on the platform increase? There will need to be more and more trusted people as witnesses in the future.

Thank you

  ·  4 years ago  ·  

If you read the post carefully, the idea of witnesses running abuse lists evolved into a separate group of moderators who are voted in by the community, much like witnesses.

Also, if you read again, we decided against the verification because of third-party risk and reliance.

And there are no plans for downvotes; they create wars and bullying, and Blurt is different because it has no downvotes.

Sounds good; I kind of love Blurt so much because there are no downvotes here.
People here are free from bullying and wars.

Thank you, sir; we are all waiting for the best next HF.

  ·  4 years ago  ·  

I think, for now, I like a mix of Plan A and your plan. The idea of 20 mods is great, I think, with them triggering the dust vote that removes rewards.

Now, I think all rewards should be removed. Yes, it is someone's stake to vote how they want, but they should still be voting for quality content and not garbage shit-posts.

  ·  4 years ago  ·  

I support the idea of Blockchain Moderators, and I think it will be easier if it is community-based, with each community having at least one moderator. And if the community is bigger, then there can be multiple moderators.

  ·  4 years ago  ·  

The idea that community Curators could also be Moderators is good, but I would keep the roles distinct. An individual can, indeed, hold both roles, but most vote-abuse does not take place within community tags, hence there need to be Moderators with a broader brief than just their community.

  ·  4 years ago  ·  

Well, actually, that is what I was saying. If there are 2 curators in a community, then a 3rd will be the Moderator, who will regulate and flag plagiarised content and vote-abusers. But if the Moderators are supervising the chain as a whole, isn't it too overwhelming? I mean, there is such diversified content, and on top of that, different languages. If 5 or 6 Moderators are responsible for everything, it'll be quite challenging.

  • Proof-of-Human

Thank you and Jacob for taking up the idea and trying it out.

What I still like about this method is that the focus is on the "good" accounts; the effort to fight bad accounts could be reduced. Energy on the good things.

Maybe we could develop a Blurt internal Proof-of-Human system in the future.

In addition, as far as I know, Blurt has 3M accounts; how many of them are active? Maybe 500k. So we have 2.5 million sleeping accounts. Does it make sense to remove them from the chain? I think so. Removing inactive accounts after a certain period of time should also be considered.

  • Blockchain Moderators

Good idea; currently this is probably the best and quickest solution. The only disadvantage I see is that it requires human time and energy in the long term.

Perhaps this could be combined with Proof-of-Human; moderators could also give a kind of Blurt score to good accounts.

  ·  4 years ago  ·  

Hi, Andreas,

I think it is absolutely forbidden to remove inactive accounts that hold value.

I agree, nobody should lose their Airdrop.

But first of all, I see a threat in the sleeping accounts: thousands of accounts created by single persons. If they are activated maliciously, they could cause serious damage to Blurt.

Secondly, these accounts are unnecessary ballast. The amount of storage required to set up a witness server would be drastically reduced, which should also improve overall performance.

It could be done like this:

  • Inactive accounts will be removed, a list of these accounts including the corresponding owner keys will be saved.

  • Deleted accounts can be restored manually using the owner key and authorisation from a mod/witness.

This way nothing would be taken from anybody, the servers would be relieved, and a possible threat would be excluded in advance. It's also a step towards one person, one account.
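A minimal sketch of that prune-and-restore flow, assuming only the public owner key is archived (all names and structures here are hypothetical):

```python
# Sketch of the prune-and-restore idea: inactive accounts are removed from
# state, but their name and *public* owner key are archived so the holder
# can later prove ownership and restore. The private key never leaves the
# user; a mod/witness must also approve the restoration.

archive = {}   # account name -> public owner key, saved at deletion time

def prune_inactive(name: str, public_owner_key: str) -> None:
    """Remove the account from state but remember its public owner key."""
    archive[name] = public_owner_key

def restore(name: str, derived_public_key: str, mod_approved: bool) -> bool:
    """Restore only if the key derived from the user's private owner key
    matches the archived public key and a mod/witness signs off."""
    return mod_approved and archive.get(name) == derived_public_key
```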

  ·  4 years ago  ·  

> Inactive accounts will be removed; a list of these accounts, including the corresponding owner keys, will be saved.
>
> Deleted accounts can be restored manually using the owner key and authorisation from a mod/witness.

I think you have a very good idea there!

I just don't know whether the chain or our governance can store owner keys. They shouldn't even be known.

> It's also a step towards one person, one account.

My original suggestion was to allow non-identified accounts only transfers. All other functions should be disabled.

> can store owner keys

Yes, the public owner keys are known and can be saved. From the private key, which only the user has, the public key can be calculated and compared.

> only the transfers

That would also be a possibility, but it would not relieve the servers.

In my time, disk space was used very sparingly and the database was kept as small as possible ;-)

  ·  4 years ago  ·  

I don't think it works from a technical perspective. For example: we delete accounts and save the public keys for later re-creation, but since the account is deleted, the name is now free for anyone to claim, so when the original account holder comes back to try to reclaim it, he cannot.

  ·  4 years ago  ·   (edited)

> to try reclaim it he cannot.

Yes, that could happen. However, say the deletion happened a year after the launch of Blurt; what would be so bad if the original name were no longer available?

The stake of the deleted accounts would have to be transferred to a collective account, from which claims can then be paid out in the event of a recovery.

How do you see the threat of sleeping accounts and the possible server relief? Would it be worth the effort?

EDIT: I am thinking of the automatically created accounts from those times. Most of them will have little BP, so we could also set a filter and only remove accounts with less than, for example, 100 BP.

  ·  4 years ago  ·  

Agreed on this

  ·  4 years ago  ·  

I think it's dangerous to remove inactive accounts from the chain; they are someone's property.

  ·  4 years ago  ·  

> Good content creates SEO (Search Engine Optimisation) value and discoverability on search engines, which helps Blurt frontends and content get discovered by external users. Plagiarism and copy-paste actually erode the SEO value of Blurt. Users think they are winning by upvoting their own garbage for rewards, but in the long term they are not helping the token price.

Exactly! And this is the thing people are not understanding. Well, I liked the "Blockchain Moderators" idea, and this is going to be a good solution to fight spammers and abusers. Thanks.

  ·  4 years ago  ·  

I think a system with human input is necessary. If needed, there must be a way to appeal inclusion on such a list. Also, I'd like to add that those who judge the appeal shouldn't be the same people as those who added the person to the list in the first place.


Farming by any means isn't favourable. The question of centralised vs decentralised matters and needs attention. Withholding rewards would be tricky :) and could lead to a centralised approach. Hopefully this issue is dealt with in the best possible way; low-quality posts aren't good at all for the community at large.

  ·  4 years ago  ·  

For me this is a complicated topic, because the discussion is looking for the ideal format. Blurt has to be different from other platforms, and people need to work honestly. Hopefully the best solution will be found soon.

  ·  4 years ago  ·  

As you said, @megadrive, there's a lot of stuff to think about... oddly enough, this morning I was just writing about the destructive effect of manipulation/exploitation on the community-building efforts of social sites, so I guess a lot of us are having these thoughts.

In my own case, I have been blogging since 1998 and have been part of more than 50 venues that attempted various versions of "rewarding creators for content"; abuse is a very old problem, as well as a very destructive one.

In reading the above options... many of which are quite good... it occurred to me that we are missing one person's valuable input here, namely Asher aka @abh12345 who ran/runs the "Engagement Leagues" on first Steem and now Hive. Of course, he doesn't have an active account here, being one of the many who just cashed out his airdrop. However, the thing is, his metrics for measuring engagement include the mechanics to negate/downvalue abusive practices from League scores... your points total (and thus rank) is reduced by self voting, lack of diversity in voting, copy-pasta, two-word comments, and so on... while desirable activity earns relatively more points. I'm just thinking that the queries he uses for his rankings include elements of many of the above proposals, and his weekly league rankings (which have been running for two years+) are a damn-near perfect representation of people who are working really hard in a positive way.

You and @rycharde and @jacobgadikian and anyone else on the front lines of this project might consider opening a dialogue with him on Discord, if nothing else just to brainstorm. Whereas his methodology is not designed to police anything, it IS designed to extract and rank contributions by quality and originality... whether it can be scaled to the entire chain I don't know... but I think we can use all available experience/wisdom we can get our hands on.

Back to the point: your solution actually looks like the approach of one of the more successful venues I was part of, which ended up with a hierarchy of "Community Managers" who were voted in by community members (a bit like witnesses) to oversee abuse and fair play within their areas of expertise. However, that was more easily done on that site because tags came from a pre-defined pool of a couple of hundred and were not flexible.

It's all definitely a challenge, and it is also VERY IMPORTANT that these issues be addressed before they become an insurmountable problem!

  ·  4 years ago  ·  

The title says "red card".

So let's do it:

Like a downvote button, there would be a red-card button; each click on it makes the card more red, from green to yellow to red in, say, 20 steps.
If it reaches the 20th step, the rewards are removed.
The red card is on every post.
The red-card status for an account is on their blog page.
There should be a way to get it back to green.
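A minimal sketch of that counter, with illustrative colour bands (the halfway cut-off for yellow is an assumption; only the 20-step scale and the rewards removal at step 20 come from the comment):

```python
# Sketch of the red-card counter: each click advances a per-post counter
# from green through yellow to red in 20 steps; at step 20 the rewards
# are removed.

MAX_STEPS = 20

def card_colour(clicks: int) -> str:
    if clicks >= MAX_STEPS:
        return "red"                  # rewards removed at this point
    if clicks >= MAX_STEPS // 2:      # assumed halfway cut-off for yellow
        return "yellow"
    return "green"

def rewards_removed(clicks: int) -> bool:
    return clicks >= MAX_STEPS
```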

  ·  4 years ago  ·  

That could lead to red-card wars; if I don't like you, I can just come and tap your red card :)

  ·  4 years ago  ·  

Nicely thought out... but I think we should allow Blurters to write whatsoever they desire, provided it's not copied from anywhere or plagiarised.

Congratulations, your post has been curated by @r2cornell-curate.

Manually curated by @melissaofficial

Also, find us on Discord (https://discord.gg/BAn2amn)

  ·  4 years ago  ·  

> Blurt is, and always will be, a free-speech platform where people are free to post their thoughts and ideas, to just Blurt out whatever they feel like. But free speech doesn't mean free rewards!

Cept the vast majority of users here EXPECT free rewards.

What does free mean in this context?

  • excessive foundation use of power to the extent that stakeholders are diluted beyond any meaningful reason for them to invest.
  • no skin in the game financially, and no reason to create stellar content that would justify the value it is attributed beyond this echo chamber.

People who purchase blurt have a financial stake that is not free. Whenever people get cross about abuse on Steem-like platforms, it's usually because they have some notion of the price being affected by the content getting the rewards - either it's too little, or shit-posts are getting too much. And by extension, the ill-gotten rewards going to market and harming the price, while presenting the platform as an unattractive investment.

How much impact are ill-gotten gains having on the price (vs the massive exodus of existing stake)?

I challenge someone to answer that question with blockchain proof.

The crux of the matter simply boils down to this:

Do you want the price to go up because of the illusion of "quality" content driving user adoption and purchase of stake? (From the looks of things, there isn't a clear reason to invest outside of speculation.)

Or do you want the price to go up because there is a large influx of investment seeking high ROI?

Someone will always eventually hold the bag, but the core experience (condenser + blogging) is a tried and tested failure. To add further rules would strip Blurt of any ideological advantage over its sister chains.

I say, let it go full crypto-anarchy.

cjdns and the Koreans are a useful bastion of hope (and of significant investment potential), they participate and don't have "hand-out" entitlement. Those are the kinds of people that will drive up demand.

  ·  4 years ago  ·  

I am 100% supporting this proposal.

  ·  4 years ago  ·  

> Question
> Should rewards be limited only on the author's side, or should curators who voted for the garbage content also be denied rewards on posts by the abuse-listed authors they voted for?

In my opinion, a redlisted account should not be able to be rewarded by curators.

  ·  4 years ago  ·  

@steemhunt used a few nice tricks in coming up with a score, which they use to vote on and rank Daily Hunts. It has worked pretty decently. You should take a look if you haven't already.