Why We Don't Use AI

We get asked about AI a lot. Whether we’re going to add it to Yarn Spinner, whether we use it ourselves, what we think about it. Fair questions. Time to write it all down.

Yarn Spinner doesn’t use the technology that’s currently being called AI. We don’t have generative AI features in the product, we don’t use code generation tools to build it, and we don’t accept contributions we know contain generated material. Let’s talk about why.

TL;DR: AI companies make tools for hurting people and we don’t want to support that.

The Past

A little history first. We come from a background of doing a decent amount of work with AI and ML (terms we shouldn’t use interchangeably, but will, because everyone else does).

We gave talks about it for game developers and non-programmers. We wrote little ML bots for games. We did research and academic work. We wrote books about using ML in games, mostly for procedural animation. It was a fun series of techniques to explore, and explore we did.

Image: O’Reilly books on AI and ML that we wrote

When we started at university, neural networks and deep learning (the main underlying techniques most AI products use today) were just too slow and hard to work with. By the time we finished our doctorates, that had changed. Tools like TensorFlow made this stuff easier and fun, and the increase in GPU access made training and inference possible for people without Big Tech budgets. For quite a while, we were genuinely excited about the potential.

Then things started to change.

It’s hard to say exactly when. Maybe it was always like this and we just didn’t see it. But by the end of 2020 (a year famous for absolutely nothing world-changing whatsoever happening /s) it was clear that the AI we liked was not what the tech companies were interested in. They were increasingly about generative imagery, chatbots writing your material for you, and summaries of art instead of exposure to it. Efforts to mitigate known problems (reinforcing cultural biases, being difficult to make deterministic or explainable) were disparaged and diminished. Researchers and developers who raised concerns were being fired.

Things have only gotten worse since.

If you look at what AI companies promote now, it’s not what we wanted. When you boil down everything they say and strip it right back, what they make are tools to either fire people or demand more work without hiring anyone new to help. That’s the problem AI companies want to solve.

Anything else they achieve is a happy accident on the road to firing as many of your friends and colleagues as possible.

AI is now a tool for firing people, in a time when getting re-employed is especially difficult and being unemployed can be life-threatening. We don’t want to be part of that. Until this is fixed we won’t use AI in our work, nor integrate it into Yarn Spinner for others to use.

We don’t want to support the companies making these tools or normalise their behaviour. So we don’t.

The Future

There’s a comment we see every so often, always phrased as a fait accompli: “you’ll be left behind if you don’t adopt AI”, or its cousin, “everyone is using it”. We disagree.

This isn’t the right approach regardless of our opinions on AI. It’s tool-driven development. The goal should never be “we use this tool”. It should be “how do we help you make better games?”.

Great games are made when people are passionate about an idea and push it into existence. Often this means reduction, not addition. Changing ideas. Keeping yourself and colleagues healthy. Being willing to adapt and take feedback. Good tools need to do the same.

We’re constantly asking “how would this help make better games?” and following where that leads. The exploration matters, and most of the time we find an idea doesn’t survive even a little scrutiny. We’d rather have fewer polished features that solve real problems than a load of garbage that exists for the sake of marketing copy.

We’re proud of Yarn Spinner. We don’t think it’s a coincidence it’s used in so many games. Our process works, and we’re always adding new features. We also change and remove features if they don’t meet the needs of devs. We’re always chatting, internally and with other game devs and even non-devs, about potential ideas and approaches. We’re going to keep asking “how would this help make better games?” and ship what survives that gauntlet.

Who knows. Maybe the world will change and we can take another look at ML.

Likely to be Frequently Asked Questions

Why do you only care about people getting fired? I read that AI is also bad for SOME OTHER REASON!

There are so many issues with AI (and in particular with the companies making it). Some are potential or even hypothetical concerns that might eventuate. Some are very real and happening right now in front of our eyes. Some are much worse than people being fired. Some of these worse issues appeared between us starting this blog post and publishing it. Even if the labour concerns around AI suddenly vanished, there would still be many issues we’d need to see fixed before we’d be comfortable using it. But it’s stronger to argue a single point at a time. The labour concerns can be fixed and should be pushed back on. Once that’s sorted, we can look at the next issue.

Why don’t you do ML correctly so no one gets hurt?

Given our background and experience, we probably could make our own AI tooling in a way we feel is helpful, ethical, and doesn’t fund companies we disagree with. Two problems with this. First, these things take a lot of time to make, and like we said, most ideas don’t survive initial exploration. It would be very difficult to balance exploring an idea while also building new models to test it. Second, while we could make our own tools, most people can’t. If they saw us using a technique and wanted to try it, they’d end up supporting the very companies we object to. We don’t want to normalise it, so we have to not use it.

My boss wants me to use AI for work, am I part of the problem?

Getting and keeping a job is a necessity, and doing so has only gotten harder recently. If you can push back on this, do. But no one will fault you for wanting to keep your job.

Are you going to ban people who use AI from using Yarn Spinner?

No. While we wish you didn’t use it, we get this is our line in the sand, not yours. We’ll still advocate against these tools and we’re still concerned about the harm they do. You need to realise that if you use them, you’re both financially and socially supporting dodgy companies doing dodgy things. They will use your support to push their agenda. If these tools are working for you, we’re genuinely pleased. But please also stop using them.

I kinda like using AI and no one at my work is getting fired?

This comment pops up a few times, often from programmers. Unfortunately, because of how messy the term AI now is, the same concerns still apply. Your adoption helps promote the companies making these tools. People see you using it and force it onto others at the studio, or at other workplaces entirely. From what we’ve seen, this is followed by people getting fired and overworked. If it isn’t happening to you and your colleagues, great. But you’re still helping it happen elsewhere. And as we said, even if you fixed the labour concerns tomorrow, there are still many other issues. There’s more than just being fired to worry about.

Are you zealots or luddites who just hate AI?

Nah. Just upset at the people making these things. There’s great potential in AI and machine learning, and it’s being squandered to make already dodgy rich people richer and more dodgy. We still keep up with developments because we hope one day we can explore it again. But for now, the people pushing these tools aren’t people we want to give money or support to.

Header image: WGA Strike, June 21, 2023 via Wikimedia Commons