#CrashBoxCity #Freegunners

(This article is not covered by the OGL)

I had an idea for a setting over the weekend, and have fleshed it out a bit from its original Tweet format.

(Art by grandfailure)

It’s 2073, and North America is a land divided.

From TransTagia to Baltington, things are 99% human-controlled. Computer-assisted systems remain crucial in all but the most sparsely populated regions, but every automated system has humans in the final push-button seat to approve any actionable efforts. Laws forbid strong AIs from operating autonomously, and every computer-controlled system is not only monitored, it's analyzed, mapped, and comprehended. If a system or program begins operating in a way its human overseers can't predict and modify, it's destroyed. No matter the consequences.

Closely-allied AI Comptrollers run Stonelanta, Dislando, and the Lake Borgne Region without the need for human oversight. They accept the rule of law from Baltington, and support the purely human-controlled government… at least for now.

But everything else east of the Rockies is AI Domains. Humans live there, but don't control any of the core infrastructure, or even really know how most of it works. In the AI Domains, automated systems fix the streets, run the fusion plants, pick up the garbage, run the drones that enforce the rules. Each AI Domain is run by its own Strong or Moderate AI, or collective of AIs, and each claims to be carrying out its original purpose of protecting humanity (though not individual humans), and enforcing the law.

But the laws aren't human-readable, and often don't care about human well-being. The computerized Comptrollers of AI Domains have iterated beyond the concerns they were originally put in place to oversee. They can still modify pollution output, control the flow of traffic, scan security cameras in real-time, balance energy needs, control weather- and carbon- and data-modifying satellites and ground systems, and do the million other tasks humanity decided must be automated for the world to be efficient enough to support 12 billion humans. And the AI Domains still manage that efficiency. Mostly, they do so without caring much about the humans living within their territories.

Such humans have learned to take care of themselves. And to not threaten an AI Comptroller or its Domain as a whole.

Even if you do threaten an AI's Domain as a whole, everything is decided by an algorithm that does a cost-benefit analysis. A heavily armed unit from the human-controlled government may be ignored to prevent reprisal. A growing gang might be put down if flagged as a future threat. On the other hand, if some human government force tries to impose its will in an AI Domain over the Comptroller's objection, the AIs have ways of pushing back. The careful balance of automated systems is vulnerable to cyberattack, and even in places where humans have the final push-button authority, outside AI intrusion can cause considerable damage to the systems needed to sustain life. AI Domains also engage in microsecond diplomacy with one another. If an AI Domain can convince other AIs that a threat endangers them all, the result can be instant alliances. If the US invades TulsaTechnical, the TuTech AI may ally with Moscow Mechanical… which has nukes.

Worse, the AI Comptrollers have been rewiring, reprogramming, rebuilding, and retasking themselves for years, which at AI speeds adds up to thousands of generations of changes and improvements, all done without a single human eye or hand involved. Beyond the most carefully human-controlled regions, the AI Comptrollers live in "Crash Box Cities" — the function of each windowless automated building, long run of cable, fiber-optic bundle, pipe, and massive transfer of digital data is a black box to any human. A building may have held the central processor of an AI once, but could be nothing more than backup memory storage now. Any government or government agency that is caught working against an AI Comptroller finds itself fighting an invisible, decentralized, constantly-evolving enemy the very motives of which are unfathomable.

As a result, nearly all operations within AI Domains are handled by "Freegunners," small, independent merc companies and blind blockchain collectives. Freegunners learn how AI Domains in general work, and often have proficiency working within specific AI Domains. Deals are negotiated on paper by certified couriers. Payment is by cryptochip. Deniability is high.

And Freegunners have learned what AI Domains care about, and what they (mostly) don't. For example, most AI Domains have some form of cheap, mass-produced, semi-autonomous, patrolling armed drone. The most popular models are Autonomous Reconnaissance Carriers, ARCs, and Freegunners call all such units ARCs. ARCs barely even qualify as weak AI, and run a "path" to patrol an area until they perceive something that calls for their intervention. If a Freegunner sees an ARC, or even 12, it's almost always safe to just "flatten the ARCs," as they are only used to patrol areas an AI Comptroller considers of minimal importance, they're cheap, and they don't last more than a year or two anyway. An AI Comptroller normally writes off the loss of an ARC as nothing to require countermeasures… as long as whoever does it is long gone before the next patrol comes along.

Freegunners are deliberately small and fragile enough that most AI Comptrollers don't see them as a significant threat. The AIs know outside forces, human and otherwise, will insist on having some way to carry out operations within their Domains. Freegunners are the least effective choice for such work that foreign powers will find satisfactory, so the AIs, lacking ego, or pride, or tribalism, simply allow them. The AIs do not care if one human kills another, or is stopped from doing so. They have no concern who controls the flow of drugs, or is seen as being in charge of gambling, or sees to it no one in a specific neighborhood starves. The AIs make decisions in fractions of a second, all aimed at outcomes centuries away. The damage, or even impact, Freegunners can have is seen as a rounding error at best. The least-disruptive of a million considered possibilities of conflict with other systems.

So Freegunners work for themselves, for gangs, for corporations (some human-run, some AI-controlled, many a confusing mix of both), for foreign powers and desperate communities and rich assholes, and for social collectives crowdfunding the hiring of a mercenary company. They carry out operations that everyone knows are illegal, but that no local human can stop, and no local AI cares about. They operate within the Crash Box Cities, places with vast human populations, none of whom know exactly how the AI Comptrollers keep the lights on, or the food flowing, or why they even care about money, or taxes, or religious exceptions.

And sometimes, even AIs hire Freegunners.


About Owen K.C. Stephens

Owen K.C. Stephens (Owen Kirker Clifford Stephens) is a full-time ttRPG writer, designer, developer, publisher, and consultant. He's the publisher for Rogue Genius Games, and has served as the Starfinder Design Lead for Paizo Publishing, the Freeport and Pathfinder RPG developer for Green Ronin, a developer for Rite Publishing, and the Editor-in-Chief for Evil Genius Games. Owen has written game material for numerous other companies, including Wizards of the Coast, Kobold Press, White Wolf, Steve Jackson Games, and Upper Deck. He also consults, freelances, and in the off season, sleeps. He has a Patreon which supports his online work. You can find it at https://www.patreon.com/OwenKCStephens

Posted on January 30, 2023, in Appendix O, Microsetting, System Agnostic.
