Can You Really Put on Blinders If You Know There May Be Adverse Effects of Your Actions?

Nick Bergson-Shilcock of Recurse Center, Developing our position on AI (Emphasis mine):

We chose at the outset to limit our focus to the personal and professional implications of LLMs on Recursers, since that’s what we’re knowledgeable about. You won’t find positions or pontification in this post on energy usage, misinformation, industry disruption, centralization of power, existential risk, the potential for job displacement, or responsible training data.

That’s not because we don’t think discussion of any of these issues has merit (it does) but because we think it best to remain focused on the areas closest to our expertise and that are core to our business, and to avoid those that are more inherently political. While the broader societal questions are still being debated, every programmer here has to answer the question of whether and how best to use LLMs in their work during their time at RC.

This is from the introduction, and it didn’t sit well with me.

Asserting that something is a question for experts, and that consequently you’re off the hook, doesn’t make it so. Sometimes it’s not in your power to decide that.

It has the air of sensibility: in science, experts on geological movement are not experts on cell mutation are not experts on quantum physics are not experts on pedagogy. Of course people have all kinds of opinions on many things, but expertise they have not, so they defer to the colleagues who do.

But there are clear cases where you can’t say “I’m in no position to judge the situation because I’m not an expert”, because life demands a position from you nevertheless.

Very clear cases without clear boundaries exist. For example, I’m not an expert in child psychology, physical exercise, or the biology of the human lung. But if a one-year-old clumsily walks into a deep pond, I have to get them out of the water lest they drown. It’s not a question of whether a bit of water exposure would be beneficial for them or not, or whether my forcibly removing them from the destination of their travels will harm their development of self-actualization. With life-or-death questions, unless it’s a trolley problem, the ‘correct’ thing to do can present itself naturally like that. It would be absurd to defer judgment.

So clear cases do exist where expertise is not required to do the right thing, and where, if you don’t act, you’re to blame. I hope we’ve established that.

There is a world of difference between rescuing a child from drowning and using an LLM.

I’m not sure whether you can delegate judgment about LLM usage to the experts and claim to remain ‘neutral’. This is not a rhetorical device; I really don’t know. It could be that this is a sensible stance. Or maybe it’s not. We haven’t figured this out collectively, yet.


One example that’s closer to environmental pollution, and where a sensible stance seems to have been established: PFAS, the “forever chemicals” found in many plastics. There are products with PFAS in the world. They will never decompose. We should not produce more of them, so that we don’t pollute more of all the living and breathing organisms of earth with these compounds.

Imagine we manage to stop all production today. Still, the products that have been produced in the past continue to exist. The (potential) damage has been done; the clock on each product is ticking.

Should you not use these products, provided they are safe to use otherwise?


This is not a thought experiment. It’s an actual problem with many bike trailers for transporting children, I’ve learnt: many for sale (in Germany) are of excellent build quality and very safe. But every single one of the trailers tested was rated “deficient” (a grade of 5/6, where 6 is the absolute worst) because they contained PFAS.

Manufacturers are now (rightfully, I believe) being shamed into changing their processes to use different materials.

Still, these trailers of excellent build quality exist. What to do with them? If you burn them or put them into landfill, the potential damage to the environment would be realized immediately. It’d be a terrible solution.

Using these trailers while changing the manufacturing process is the more sensible thing to do. If anything, trying to use these trailers for as long as possible would be more sensible than throwing them away. Take extra care of them. Repair them. Make sure they never need to be thrown away. That’s the best we can do to keep the potential damage from ever being realized.

The material is safe to the touch (and occasional bite) of children. It’s just a bad idea to have used these materials in the first place because we will never get rid of their chemical compounds. Other materials would have been better for humanity and the world at large in the long run.

So we can both blame the manufacturers for having introduced a potential danger into the world and force them to change their processes, and still buy and use their products. That’s really tough to stomach. If you’re convinced that producing bike trailers with PFAS is wrong, you don’t want to support the manufacturer with your money. But we don’t just vote with our wallets to e.g. disincentivize future PFAS use; we also vote for government bodies to enforce stricter rules and make this effectively illegal (at least in the EU). Again, burning the trailers would only make things worse. Better to treat them as precious and take care of them.

I have seen people make similar arguments in the context of LLM usage and GenAI. The analogy is flawed, because LLMs aren’t just “done” once, and that’s that.

With GenAI, there’s constant re-training, and constant resource investment in using them. OpenAI and friends scrape the web without caring about the server load and costs they impose on small communities on the web, and without care for copyright or licenses.

It’s clear to me that this practice of training is not morally good. We should go after the companies that train LLMs and enforce sane practices. (And by “we” I mean the regulatory bodies that enact our collective interests.)

But where does that leave LLM usage? I don’t have a clear answer for that. Just that it is not like the bike trailer case.

I want to leave you with another quote from the beginning of Developing our position on AI (Emphasis mine), one that left me conflicted for different reasons:

Our interest in these questions is not academic; it’s practical. AI has popped up in every aspect of our work, from our admissions process (should we let applicants use Cursor?) to our retreats (are AI tools helping or hindering people’s growth?) to our recruiting business (what should people focus on to be competitive candidates?) to community management (how do we support productive discussion when people have such strong feelings and divergent views?).

One’s morality is not an academic discipline. It’s about the most practical of all questions: what should I do?

Yes, the field of ethics, of moral philosophy, is an academic discipline that explores how we arrive at answers to questions like this. That doesn’t make the everyday question “what should I do?” any less practical.

Again, I’m not convinced that it’s possible to assert that this question does not apply to you and suddenly be off the hook.

Meanwhile, this is followed by:

There are no simple answers to these questions. Nevertheless, I think it’s important that we at RC have a thoughtful perspective on AI; this post is about how we’ve tried to develop one.

There are no simple answers, indeed. And I believe that the Recurse Center team has done a good job investigating and working on a stance here, provided it’s not immoral to use LLMs. It’s just that this is potentially a rather huge ‘if’.