The projects that don’t get built

Before working on projects, there are a few questions that we should all ask: can it be abused, how can it be made safe, and does it need to be built in the first place?

Lately I’ve been having conversations with random people about consideration, harm, abuse, and systems. This is especially timely now, as it has been ever since The Internet started connecting people, and even earlier since... well, since humans started creating systems of any kind.

Two of my favourite talks are both by Mike Monteiro. They are:

  1. How to Fight Fascism
  2. How Designers Ruined the World

I’d like you to spend the hour and a half watching these before you continue. Or not, because whatever, but do come back and watch them at a later date.

I wanted you to watch those two so you have some context. I don’t know where you are in the whole “maybe we should have ethics in software engineering” journey, but these two are talks I rewatch over and over again.

All of this was to get you to start thinking about a few questions before you contribute time, effort, and skills to work or open source projects.

How will people abuse this app / software / feature?

This comes at a time when Twitter just announced Birdwatch (link takes you to twitter.com), a crowdsourced way to combat misinformation.

A lot of people, certainly most of the people I follow on the bird hellsite, immediately pointed out numerous ways of abusing that feature, and how it’s yet another step by Twitter to sidestep taking responsibility for moderation.

After all, if it’s crowdsourced, then surely they, as Twitter, can’t be responsible for the content of the crowdsourced data, because by definition Twitter does not have control over the people sourcing the data. Control in this case would be any sort of contractual relationship where Twitter can say “do this thing,” and people would have to do it or face consequences (like getting fired or something).

A few people, however, got super upset that we dared point out that maybe a newly launched project has problems. In that sense, thanks, James, for the inspiration for this blog post.

I would love to believe that people in general are good and nice to each other. But because I can’t control them, nor would I want to if I could, I also can’t say with 100% certainty that there won’t be bad people interacting with any project that I’m building. In the Birdwatch case, Twitter can’t limit double checks and extra info to be submitted only by genuinely nice and knowledgeable people. There will be entire groups who will use Birdwatch to troll, harass, and seed even more misinformation in a remarkably efficient manner, because those will be disguised as “trusted”, double-checked, crowdsourced information.

We’ve already seen this happen with GamerGate, anti-vaccination info, misinfo about the 2016 US elections, misinfo about the 2020 US elections, school shootings in the US, and Covid-19, to name a few that literally popped into my mind.

It really boggles my mind that there are still people of the opinion that all users are going to behave, given how recent these are and how the stories are readily available on Twitter, literally a search or two away. Most of your users are probably going to be decent, but you only really need, like, one malicious user. Or a dozen.

Then there’s the whole issue with toxic positivity, and how the adage of “assume good intentions” is only for white people, is at the very least bullshit, and doesn’t do what you think it does. These two are used to dismiss concerns around malicious users, because for whatever godforsaken reason the mere thought of considering the possibility that some users are assholes is taboo.

But back to the question: has anyone on your team thought about how the thing you’re building can be misused / abused / used for harassment? Do you have people of colour in your team? Women? Disabled people? Non-cis or non-binary folks? If not, why do you think they aren’t applying to your company? Or if they are applying, why do you think you’re not hiring them?

Do you have people on your team who do flag up issues? Are you listening to them, or are you dismissing their concerns? Why are you dismissing their concerns?

If these concerns make you uncomfortable, are you second-guessing whether you should go ahead with the project? Is your dismissal of their concerns a way to push yourself through this discomfort? Have you given these any thought recently? Or... ever?

What can be done to mitigate the risks / abuse?

How can you reduce the risk of abuse? Is it more granular permissions? Is it having permissions off by default, so that users of your system must explicitly enable things one by one?
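To make the “off by default” idea a bit more concrete, here’s a minimal sketch in TypeScript. Everything in it (the Permission type, defaultPermissions, enable) is a hypothetical name I made up for illustration, not any real product’s API; the point is simply that nothing is granted until someone deliberately turns it on.

```typescript
// Hypothetical sketch: every capability starts disabled, and each one has to
// be switched on explicitly, one at a time.

type Permission =
  | "publicProfile"            // profile visible to anyone
  | "discoverableByEmail"      // account findable via email lookup
  | "directMessagesFromAnyone" // strangers can message the user
  | "dataSharingWithPartners";

type PermissionSet = Record<Permission, boolean>;

// Nothing is granted until the user explicitly opts in.
const defaultPermissions: PermissionSet = {
  publicProfile: false,
  discoverableByEmail: false,
  directMessagesFromAnyone: false,
  dataSharingWithPartners: false,
};

// Enabling one permission returns a new set; nothing else flips as a side effect.
function enable(perms: PermissionSet, p: Permission): PermissionSet {
  const next = { ...perms };
  next[p] = true;
  return next;
}

// A user who only wants a public profile, and nothing else:
const alice = enable(defaultPermissions, "publicProfile");
console.log(alice.publicProfile);       // true
console.log(alice.discoverableByEmail); // false, still off
```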

Can you eliminate dark patterns? Why are you using dark patterns anyways?

Is it changing how the feature works? Does the feature need to be rethought from the ground up?

Have you listened to the people who brought you their concerns about how they think the thing should work?

But more importantly:

Does that feature / app / software need to be built in the first place?

Yes, I get that you’re an employee and you get paid to do what they tell you to do and you’re kinda stuck because you have a mortgage to pay and car payments and whatnot.

Can you still do anything in your power to question the validity of the project? Do you have the power to pull the plug?

What are you comfortable with putting out into the world knowing that it’s going to cause harm?

You’re probably thinking “if I don’t do this, then someone else will”, and you’re absolutely right. But you don’t have to, and you can sleep slightly better at night.

None of this is new

If you watched Mike’s two talks, you’ll know I have said nothing new, and he is far, far more eloquent than I am.

This is where ethics, history, literature, philosophy, and morality seep into the software engineering industry. Without those we will all be building systems that are ripe for abuse, not learning from our past, heck, not learning from our goddamn present.

Personally I hate that we have this hustle culture where we have to build and ship whatever in under a week, otherwise we’re not true creators, entrepreneurs, business owners, girlbosses, whatever the fuck you want to label yourself.

Alongside that thought is another that I get to hear with some frequency: just ship it, it doesn’t have to be perfect!

Sure, I’m okay shipping something where the front page is slightly broken on mobile, or I don’t have client-side validation for my forms, or even where an entire section is missing because I’m not yet ready with that part.

Those, however, are materially different from the “not perfect” that some people mean, where the software:

  • allows targeted harassment
  • discloses personal data, either accidentally or in a way that was known to the team, who just did not care to fix it, because those disclosures wouldn’t affect them negatively, so what’s the big deal?
  • leaves a database open
  • makes users freely discoverable in the name of social connections!
  • can be co-opted by hate groups, with no effective counter-measures
  • gives legitimacy to science deniers / hate groups

But most of all, I fucking hate it when people bring these issues up and are then shrugged off, or even worse, actively mocked and attacked because “we never launched anything” and “that’s not gonna make you rich.”

Fair enough. If that’s the price of becoming rich, I don’t want to be rich.

I want to do good, responsible work, making sure that everyone who comes in contact with the things I build remains safe, as an absolute minimum baseline.

If you ever thought of launching a project and then shelved it because you couldn’t resolve one of these issues, that’s perfectly valid, and you have my thanks.

I have three projects that I haven’t built because I can’t make them safe to use.

Let’s talk about those as well, because we desperately need to normalize not launching stuff too.

Photo by Noah Buscher on Unsplash