A discussion about better discussions
For this I have set up a so-far closed-source project with the following goal:
VeriVotion — Esperanto for "reasoning" — is a secure website for civilised debates — no shouting, rudeness, or irrationality allowed! It supports more relevant opinion-building as a foundation for smarter decisions and votes in social, political, and organisational debates and large-scale problem-solving activities.
From a technical perspective, the project's primary Phase I deliverable is a generic commenting and discussion plugin that supports smarter social debates on otherwise publicly accessible websites.
Comments
Right now, a lot of the collective intelligence that moderators gain from being on the front lines of the internet is lost. There’s no effective feedback loop for them to tell their employers what they’re seeing, such as trends and warning signs. I’ve talked to moderators who were probably some of the first people outside the U.S. State Department to know about certain conflict zones, with nowhere to go with that information.
Business ideas could also be gathered from these workers around how to improve online rules or better serve the public. But because moderators are at a remove organizationally, and often geographically, and because communication lines with leadership are nonexistent or broken, they’re seen as automatons who enforce extant rules — not as valued employees with knowledge to contribute to the larger ecosystem. That strikes me as a real missed opportunity.
Hi all, thanks for showing interest in this topic.
"Discussions" is a huge topic, so the "problem" might be where to start.
Two suggestions:
1. Universal traits and habits that improve discussions, regardless of where and with whom they are held.
2. I am a big fan of the http://Kialo.com discussion platform and would like your feedback on it; maybe you know some other great platforms as well.
What do you think?
When going for the BIG picture, a good starting point is safe spaces and protecting contributors, so that open discussions have a home in the first place. Some great inspiration might be taken from the World Economic Forum-hosted "Global Coalition for Digital Safety", which aims to accelerate public-private cooperation to tackle harmful content online. It will serve to exchange best practices for new online-safety regulation, take coordinated action to reduce the risk of online harms, and drive forward collaboration on programmes to enhance digital media literacy.
https://initiatives.weforum.org/global-coalition-for-digital-safety/home
Another great source is the Oasis Consortium, which works on ethical standards and technologies: https://www.oasisconsortium.com/
What do we mean by "better"? Better for whom?
Take oil and vinegar, for example. Famously, they don't mix, but with the right inputs and methods you'll get yourself a delicious vinaigrette.
You and your team are no different. Especially as distributed and hybrid environments have become the norm, it’s critical to understand how to collaborate effectively.
Hi René!
It's really an interesting topic, and I do feel that now is the right time to discuss the issues of current social media platforms — and to start working on creating effective and plausible solutions.
I have to say, at this time I do not feel competent enough to discuss such a complex topic, but I do have some thoughts.
There's been a lot of talk about censorship lately. Even though I feel that some level of censorship is necessary for all online social spaces, I do think that people who interact in these spaces should be able to present themselves to the public as they are, even (and especially) if it's not in the best light.
That brings me to the topic of discussion threads and user interfaces. I believe that interaction between users in most online social spaces is too restricted.
In my opinion, it should in some ways be able to replicate real-life human interaction.
Unfortunately, most people in real-life scenarios are not always calm, collected, polite, or even civil. Still, I personally do not support the idea that we should moderate and control social spaces to the extent that these people or their opinions are removed, as that would not portray society accurately.
What I do support is the creation of tools and a UI that let users moderate the social space themselves, by rating and reacting to each other's posts and content. That would help other users perceive a specific user, or their post, as credible, misleading, respectful, disrespectful, inconsiderate, impolite, and so on.
Having 'like' and 'dislike' reaction buttons is in my opinion insufficient, and most platforms don't even have the 'dislike' option anymore.
But people in real life don't offer positive feedback only, and not every person's feedback is valued the same.
So perhaps there could be a system that decides which users' posts and comments get pushed to the top, based on criteria chosen in collaboration with the community. Or perhaps users could filter out the posts of users who were given certain labels by the community.
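To make the idea concrete, here is a minimal sketch of such a community-label system in Python. Everything in it is an assumption for illustration: the label names, the weighting scheme, and the hiding threshold are all hypothetical choices a community would make, not anything VeriVotion has actually specified.

```python
from collections import Counter
from dataclasses import dataclass, field

# Hypothetical label vocabulary; a real community would choose its own.
LABELS = {"credible", "misleading", "respectful", "disrespectful"}

@dataclass
class Post:
    author: str
    text: str
    labels: Counter = field(default_factory=Counter)  # label -> vote count

    def add_label(self, label: str) -> None:
        """Record one community member's label vote on this post."""
        if label not in LABELS:
            raise ValueError(f"unknown label: {label}")
        self.labels[label] += 1

def rank(posts, weights):
    """Order posts by a community-chosen weighting of their label votes."""
    def score(post):
        return sum(weights.get(lbl, 0) * n for lbl, n in post.labels.items())
    return sorted(posts, key=score, reverse=True)

def filter_out(posts, blocked_labels, threshold=3):
    """Hide posts that accumulated too many votes for labels a user blocks."""
    return [p for p in posts
            if all(p.labels[lbl] < threshold for lbl in blocked_labels)]
```

For example, a user could supply `weights = {"credible": 2, "misleading": -3}` to push well-rated posts up, or call `filter_out(posts, {"misleading"})` to hide posts the community has repeatedly flagged. The key design choice is that the ranking inputs stay visible and adjustable by the community, rather than being a hidden platform algorithm.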