Elizabeth Barrette (ysabetwordsmith) wrote,

Autonomous Weapons

This article talks about autonomous weapons.


Very little about this is really new. We've had autonomous weapons for a long time, ranging from traps and snares in the deep past to more modern things like sea and land mines. The main difference is that those were dumb rather than smart, with plain physical triggers. Because of this, we know both that smart autonomous weapons are tempting, and that they are a really bad idea. Bear in mind that large swaths of France remain uninhabitable from the First World War, due to everything from unexploded ordnance to heavy metal poisoning.


A brief passage from a recent UN report describes what could be the first-known case of an autonomous weapon, powered by artificial intelligence, killing in the battlefield.

So let's look at this part. Follow a train of thought:

* Militaries love weapons.
* Smart autonomous weapons could be really good at killing people.
* The technology to build smart autonomous weapons already exists, especially if you're not too picky about quality.
* Militaries often don't care too much about collateral damage.
* Therefore, if they can build and use such weapons, they probably will.
* Militaries prefer to keep their actions secret.
* So it's probable they are already using smart autonomous weapons and have been for some time; the question is when and how other people will learn of it.

Happily, we don't need proof of this to start dealing with it. We can just follow that train of thought to conclude that people are probably making mistakes we'd rather they didn't, and take steps to stop that.


There are currently no clear international restrictions on the use of new autonomous weapons, but some nations are calling for preemptive bans.

We certainly need international regulations on smart autonomous weapons. We can model them on the existing regulations for mines: there is ample evidence of the harm mines cause, and the rules developed to deal with that problem should adapt well to new weapons.


"Combined with A.I., tiny cheap little battery-powered drones could be a huge game-changer. Imagine releasing a networked swarm of autonomous quadcopters into an urban area held by enemy infantry, each armed with little rocket-propelled fragmentation grenades and equipped with computer vision technology that allowed it to recognize friend from foe."

Now imagine releasing the same swarm in a civilian population, because that's where it will end up very quickly. Imagine the fun terrorists will have with this technology. So let's not flush firecrackers down civilization, okay?


But could drones accurately discern friend from foe? After all, computer-vision systems like facial recognition don't identify objects and people with perfect accuracy; one study found that very slightly tweaking an image can lead an AI to miscategorize it. Can LAWS (lethal autonomous weapon systems) be trusted to differentiate between a soldier with a rifle slung over his back and, say, a kid wearing a backpack?

That's the question you want to ask if you wish to use these against enemies without your civilians throwing a fit.

The question you should be asking is, what's going to happen when these weapons get deployed by people who don't give a shit about legitimate targets and just want to kill everything that moves? Because there are a lot of those people and they've already gotten their hands on a lot of military arsenal. If this stuff exists, it will end up in their hands. Don't think about what this shit would do to Aleppo. Think about what it would do to Paris or New York. Because it's not going to STAY in Aleppo. It never does.
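The "slightly tweaked image" problem quoted above is easy to demonstrate. Here's a minimal sketch using a toy linear classifier as a stand-in for a real vision model; the study referenced used deep networks, and every weight and pixel value below is made up purely for illustration. The idea is the same one behind the fast-gradient-sign attack: nudge each input a tiny, bounded amount in the direction that hurts the current prediction, and the label flips.

```python
# Toy illustration (not from the article) of adversarial perturbation:
# a small, bounded change to every "pixel" flips a linear classifier's label.

def classify(weights, bias, pixels):
    """Linear classifier: returns 1 if w.x + b > 0, else 0."""
    score = sum(w * p for w, p in zip(weights, pixels)) + bias
    return 1 if score > 0 else 0

def perturb(weights, pixels, epsilon):
    """Shift each pixel by at most epsilon against the current class
    (the sign trick used by the fast gradient sign method)."""
    return [p - epsilon * (1 if w > 0 else -1)
            for w, p in zip(weights, pixels)]

# A 4-"pixel" image the classifier labels 1 ("soldier", say).
weights = [0.5, -0.3, 0.8, 0.1]
bias = -0.2
image = [0.6, 0.2, 0.5, 0.4]

original = classify(weights, bias, image)            # 1
adversarial = perturb(weights, image, epsilon=0.4)
flipped = classify(weights, bias, adversarial)       # 0

print(original, flipped)  # label flips, though no pixel moved more than 0.4
```

Real attacks compute the gradient of the model's loss instead of reading off the weights directly, but the lesson carries over: a perturbation too small for a human to notice can change what the machine "sees", which is exactly the failure mode you don't want in a weapon deciding who to kill.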


Unsurprisingly, many humanitarian groups are concerned about introducing a new generation of autonomous weapons to the battlefield. One such group is the Campaign to Stop Killer Robots, whose 2018 survey of roughly 19,000 people across 26 countries found that 61 percent of respondents said they oppose the use of LAWS.

Awesome name. Somebody has a clue.

So let's talk about another reason why smart autonomous weapons are stupid. If you look at robot rampage science fiction, you will note that a lot of it starts with the military. I'd really rather avoid building Skynet, or anything remotely like it. We have enough of a mess with amoral humans killing each other; adding amoral machines would make it a lot worse. People KNOW this. It would be much better to learn that lesson without throwing some massacres so the dunces can retake History 099 again. Don't build killer robots. That never ends well.


The U.S. and Russia oppose such bans, while China's position is a bit ambiguous.

No, China's just being cagey. They're organlegging, it's not like they give a shit about the sanctity of life or human dignity. And their country is one of the most heavily cybered.

And hey, look at the company, not a good cluster of upright nations there. O_O


It's impossible to predict how the international community will regulate AI-powered autonomous weapons in the future, but among the world's superpowers, one assumption seems safe: If these weapons provide a clear tactical advantage, they will be used on the battlefield.

See, it's not hard to predict at all. It's easy. Humans like killing each other, but that causes problems. If those of us who don't want to mop up the mess fail to stop them, it's going to get messy. So let's act like we didn't sleep through history class or the last several decades, and try to stop this shit before it turns into another Holocaust.
Tags: cyberspace theory, networking, politics, safety, science