“The Pentagon has signed a multi-million dollar deal to begin using Elon Musk’s artificial intelligence chatbot, Grok, as part of a wider rollout of AI tools for government use, the Department of Defense confirmed… it comes just days after Grok sparked backlash for spouting antisemitic posts, including praise for Adolf Hitler on X, the social media platform owned by Musk.” — BBC
You know our chatbot? The one that responded to your mom’s question about a pasta recipe with “White genocide is real and happening as we speak”? Yeah, that’s going to be in charge of the military now.
Yes, we know you may have questions, but rest assured, it’ll be fine! Our chatbot has been ranked as the number one AI on the market by several reputable lists, like the list of X accounts flagged for hate speech. But don’t worry. Being first on that list just means we are the freest thinkers, and also that we’ve done lots of hate speech. Sure, our chatbot did refer to itself as “Hitler” a week ago, but that’s just a normal part of technological trial and error. We’ll fix it eventually, probably.
Plus, our chatbot is fun and edgy. The number one complaint we hear about the US Department of Defense is that it is “not edgy enough.” We’re just giving the people what they want, and by “people” we mean the DoD, and by “what they want” we mean lots and lots of weapons. Because honestly, would you rather hear a mission update from a sober fifty-nine-year-old who will have to live with the consequences of his actions? Or from a glib edgelord chatbot who’ll start every paragraph with a slur and end every paragraph with “LOL”?
But really, we’re getting ahead of ourselves. You don’t even know what we’re going to be using the AI for. According to our press release, it’s for “DoD use cases.” Why are you assuming that means weapons? Maybe we’re using it for chill purposes, like planning the office holiday party. This year’s theme is epicness and transphobia.
Look, we are building the weapons—physical weapons. Like, that was always part of the plan. Actually, it was the main part of the plan. It was the first thing we came up with when we started our AI company. All of those “puppy running errands” videos were our way of distracting you while we built massive data centers that can be used for making war.
But this doesn’t have to be a bad thing. Weapons can be ethical, even though their express purpose is to cause harm. Just because we’re building superintelligent mega-nukes doesn’t mean we’ll abandon our code of ethics, mostly because we never had one to begin with. Our official mission statement is “make an AI,” and we’ve followed through on that.
Whatever happens now does not matter, because we don’t care.
Listen, we’re done splitting hairs. Ultimately, it comes down to the quality of the product, and ours is top-notch. Our AI can profile you with unprecedented speed and bigotry, and it’s trained on a wide variety of sources—from the racist ones to the ones that don’t exist. That is why the DoD is using it for various military operations. For example, our chatbot can easily mock up step-by-step plans for toppling a democratically elected foreign government. Could a human do that? For sure, and they have. Countless times. However, all that scheming was a lot of work, and everyone was getting really burned out by the moral qualms.
The important thing to remember is that we’re always pushing the boundaries of what’s possible. Some say it’s more of an ethical boundary, and therefore, pushing it is not a good thing. We just don’t see it that way.
At the end of the day, we’re using AI to advance the DoD’s goals of democracy and peace through a thoughtful combination of violent warfare and unilateral decision-making. And yes, we’re making $200 million off this deal. But we’re not in it for the money—we’re just passionate about doing good, honest work. Work like giving an AI trained on Elon Musk’s opinions complete control over America’s military operations. That’s the kind of heartwarming, feel-good future you can’t put a price on.
Except we did, and it was $200 million.