> Pure Intentions: Founders, like myself, genuinely want to connect people, share authentic moments, and build community. The early versions feel magical because they follow this original mission.
No. This idea that SC (or similar business cultures) wants to change the world for the better is a cliché. At a certain point, given the history of these companies, you don't get a pure-intentions pass any more.
And in particular: I don’t believe that Zuckerberg’s motivations were ever good.
> Different Funding Methods: What if social platforms were funded like utilities or public goods instead of venture-backed and advertisement driven growth machines? Subscription models, cooperatives, or public funding could prioritize user wellbeing over engagement metrics. Wikipedia thrives as a donation-funded cooperative. These models exist - they just don’t scale at venture-required rates.
This is good. You need to rethink the whole model. Even if that means that you don’t get to growth-hack people.
This is far better than the usual model:
1. Admit there’s a problem
2. It’s systemic
3. The fake solution: some feel-good manifesto about making X but "for humans". Nothing about the incentives has changed. The fundamental axiom is still there: we are entitled to make money off social media just like before, but now we have a lot of words and paragraphs about doing so goodly.
The same thing happened with the attempt at making neoliberalism palatable: "Stakeholder Capitalism". The idea is that we just continue with neoliberalism but have a manifesto about how all stakeholders are taken into account. Nothing about the system or the power centers is changed, so you still get corporations and their boards of directors having as much power as before.
> Regulated Algorithms: We regulate tobacco companies because their products are addictive and harmful. Algorithmic transparency or giving users control could preserve the benefits while reducing the addictive design patterns. The EU’s Digital Services Act already requires algorithmic transparency from large platforms.
I'm not sure. This seems like a half-measure. Regulating something that is inherently harmful (according to the analogy) just creates more bloat.
If I'm addicted to nicotine, I'm already hooked. Removing advertisements doesn't help me; I already know what I want. You're gonna make the packaging less sexy? Make me go into the Sin Section of the shop to get my fix? I'm gonna do that anyway.
What you need is an effective and mandatory opt-out option. Let me ban myself from buying these products. And provide me with alternatives (grocery shops already sell both tobacco and nicotine gum).
Give me an option to opt out. Don't just build a whole regulatory beast that can still prey on me while one hundred checkboxes are ticked and everything is very ethical and so on.
> Alternative Metrics: Instead of measuring daily active users and time-on-platform, what if platforms were evaluated on user wellbeing, relationship quality, or real-world connections facilitated? What if we measured social platforms like we measure hospitals?
This would have been naive, and just a loophole, if the premise were to let companies keep doing their thing. They would just rebrand with fake metrics: oh, you have sent X messages this month, that means you are connecting, and according to some research people who IM more are happier, blah blah blah.
But this could have some merit given the previous utilities/public goods point.