Hello, welcome to my fresh new blog! I'm planning to write up my thoughts every week on the AI-related news that came up the week before, and how it seems to affect the world. I hope you like it.

I’m going to start off by talking about AI Accelerationists: what their ideology is, and what their impact on the world looks like. AI Accelerationists advocate for the rapid intensification of social, technological and capitalist growth. They want to disrupt (read: destabilize) existing systems and institutions in order to create radical social transformations. They tend to feel that existing institutions are slow and bloated, and that we have tried and exhausted every other means of making the world better. This is an extension of the original Silicon Valley ethos: the libertarian idea that the world is just waiting for Great Men to come save it.

AI Accelerationists have an optimistic vision of what the world could be. They believe that technology is a great democratizing force, one that can empower every man, woman and child to become the most self-actualized version of themselves. It’s a great dream! Who wouldn’t want to ensure that every future Mozart, Tesla and Galileo is properly educated and free to work on their passions?

But, as Cory Doctorow teaches us, we need to ask the two questions about AI that we should ask about any tech: who does the twiddling, and who gets twiddled?

If you’re in Silicon Valley, working on AI tech, chances are good that you’re white and affluent. You have enough economic stability to survive– even thrive– through disruption. If you’re not… well, tough break! The mix of excitement and impatience around AI leads many in the tech world to believe that whatever path we end up taking to the technological Singularity is inevitable, so we might as well get it over with as soon as possible.

Being able to say to yourself, “It wasn’t me, it was the inevitable trajectory of History toward the Singularity! I’m just a powerless cog in a bigger machine that I can’t stop,” is a great way to avoid responsibility for your actions. Especially when your comparatively humongous profits depend on it– profits which grow with time, while everyone else’s shrink.

Let’s take a very concrete case of how that transfer of money and power can happen: OpenAI’s partnership with Khan Academy to create the educational chatbot Khanmigo, and its effects on the educational system. School systems in the US, the UK and Canada are notoriously underfunded. Now comes the “inevitable” disruption of AI, promising to fix Bloom’s 2 sigma problem. School boards now face a choice: do they fix collapsing ceilings, low teacher pay, low teacher-to-student ratios, and so on, or do they instead fork over $60 per student of public money to OpenAI, on the strength of the cyberfantastic promises of tech bros who have billions to gain from being believed? Sam Altman is laughing all the way to the bank, a real-world Pied Piper luring children away from meaningful connections with their teachers and their communities and into intimate relationships with privately owned data vacuums.

What happens when the enshittification hits the fan?

We’ve seen this sort of technological-solutionist promise before, and its failure to meet reality. Police bodycams, for example, were heralded as ushering in a new era of accountability that would curb unequal application of the law. It turned out they didn’t fix the actual problem, because the footage itself is not held by parties with the will and the power to hold police accountable.

And just like with police bodycams and facial recognition technology, the story is bound to repeat itself: accelerationists create tech at breakneck speed, and the rest of society has to deal with the consequences. In 2024, 2 billion people across 70 countries are heading to the polls. Even though Canada, the US and the EU are all working on their own sets of regulations, they won’t be enforcing them by the time their citizens need to make sense of the information landscape in order to inform their votes.

As Neil Postman wrote, “Once a technology is admitted, it plays out its hand: it does what it is designed to do. Our task is to understand what that design is—that is to say, when we admit a new technology to the culture, we must do so with our eyes wide open.”

So what’s our job in all this? What should we actually do? Be mindful of the technology you use. Think about who the technology works for, and who it’s used on. Find the closest hill and climb it.