When Stack Overflow was created in 2008 as a forum for questions about computer programming, there was no need to worry about understanding the community. Co-founders Joel Spolsky and Jeff Atwood had long and storied histories working in the software industry. But as Stack Overflow blossomed into Stack Exchange, a network of more than 70 sites covering topics from photography to parenting to cooking, they found that groups of humans do not respond well to being managed by an algorithm.
Everyone knows the drill. A community springs up online, leaders naturally emerge, and their commitment earns them the right to become moderators. But over time, whatever small biases those moderators bring with them are amplified in the minds of new users, until the inevitable charges of fascism begin to fly and a full-on flame war breaks out.
Is it possible to find a formula for combating this decline? In a row of two desks at the far end of the Stack Exchange office, just off the ping pong table, sits the CHAOS team (Cheerful Helpful Advocates of Stack), a group of community managers who spend their days experimenting in the laboratory of human interaction. “We’re trying to derive some universal principles about how to grow a community on the internet that can govern itself and regenerate after a conflict,” said CHAOS member Abby Miller. “So far we’ve learned that there are no universal principles.”