Drone Used For Autonomous Attack

Wayne Mack

A drone was used in Libya to attack opposing forces. It was programmed to go after the forces and did not make the attack decision on its own, but it did apparently identify the targets without human intervention. I have seen two separate reports of this, so I assume it is likely valid.

 
Can our S.F. automated assassins be far behind? No, technology will just keep improving and our world will just keep getting more dangerous.
 
I wouldn't say autonomous targeting munitions are new. What this represents is that they're newly cheap. That's how all technologies go, really: someone figures out how to do something for the first time; a few decades later, someone else figures out how to do the same thing priced for the masses.

It is not the presence of more lethal or cheaper weapons that makes the world more dangerous. In the history of mankind, never has our species seen such safety, such peace, and such prosperity as it has enjoyed in the decades since the fall of the Iron Curtain, despite the chaotic disintegration of the Soviet bloc and the proliferation of nuclear and other weapons of mass destruction. The seeming threats (the technological potential for disaster) have grown and keep growing, yet in real numbers the world gets safer. Wars become more limited, casualty rates decrease, violent crime decreases, and perhaps most importantly, untimely death and violence in the third world decrease--for a time. Now murder rates are skyrocketing, international tensions are flaring, autonomous drones are being used in anger, Iran is restocking Gaza's rocket supply, and so on, but not because someone has suddenly invented some new method of killing. These ebbs and flows in real danger do not follow the evolution of technology; they depend on culture, and on which culture is ascendant at any given moment.

If you have the will to fight to the death, and the ability to inflict catastrophic harm on your enemy in the process, your world, your life, will become safer. If you lack the will or the capability, then your world will become more dangerous. This is ultimately the choice put to all of us. Be willing and able to kill, and maybe you won't have to. Shrink from violence, and you will inevitably feed violence, one way or another.
 
If you have the will to fight to the death, and the ability to inflict catastrophic harm on your enemy in the process, your world, your life, will become safer. If you lack the will or the capability, then your world will become more dangerous. This is ultimately the choice put to all of us. Be willing and able to kill, and maybe you won't have to. Shrink from violence, and you will inevitably feed violence, one way or another.
I would agree, to a point. It is how the world is now, but I want to believe that there will come a better day. Like Martin Luther King Jr., I too have a dream.
 
I put no stock in such reports. And the same for so-called "artificial intelligence." There is no such thing. Humans have to program the devices. They cannot think on their own.

During the Cold War, and still today, the motto remains Mutually Assured Destruction. It doesn't matter that the USSR is gone as long as nuclear weapons exist. And nuclear proliferation comes at a cost: making the weapons and storing them requires secure facilities; they can't be put just anywhere. At the moment, the United States is engaged in 'low-intensity conflicts,' which just means that money is the driver as opposed to sophisticated hardware. Map out the poor and wealthy parts of the world.
 
Humans have to program the devices.
This is not entirely true. Current capabilities are known as machine learning, which is a form of pattern matching. No human determines the algorithm for what constitutes a match. Human involvement consists of providing a large number of images, each labeled as a match or not a match. The computer algorithm determines what characteristics are important and the weighting to give to individual characteristics, to combinations of characteristics, or to the lack of certain characteristics. Human involvement consists of training the equipment to identify a match, not programming in the rules that define a match.

This is where some of the controversy over the use of machine learning arises. There is no deterministic way to identify the boundaries between what makes a match and what makes a non-match. There is no way to predict when false negatives or false positives will occur. It is a very murky question how to assign responsibility when a matching failure occurs and a target is misidentified.
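Purely as illustration of that division of labor (nothing here comes from any actual system; the synthetic data, the stand-in feature vectors, and the choice of scikit-learn's LogisticRegression are my own placeholders): the human only supplies labeled examples, the fit() call works out the weights, and counting the errors afterwards is about the only visibility you get into false positives and false negatives.

```python
# Minimal sketch: humans supply labeled examples, the algorithm fits the weights.
# All data is synthetic; the "features" stand in for whatever an image pipeline extracts.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)

# Pretend each example is a small feature vector extracted from an image,
# labeled 1 ("match") or 0 ("not a match") by a human.
X_match = rng.normal(loc=1.0, scale=1.0, size=(500, 8))
X_other = rng.normal(loc=-1.0, scale=1.0, size=(500, 8))
X = np.vstack([X_match, X_other])
y = np.array([1] * 500 + [0] * 500)

# The human never writes the matching rules; fit() determines the weights.
model = LogisticRegression(max_iter=1000).fit(X, y)

# The only insight into matching failures is empirical: count them on held-out data.
X_test = np.vstack([rng.normal(1.0, 1.0, (100, 8)), rng.normal(-1.0, 1.0, (100, 8))])
y_test = np.array([1] * 100 + [0] * 100)
print(confusion_matrix(y_test, model.predict(X_test)))  # rows: true class, cols: predicted
```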

If one ignores the moral aspect, the technology is quite interesting. The moral aspects of the application of machine learning should give everyone pause.
 
This is not entirely true. Current capabilities are known as machine learning, which is a form of pattern matching. No human determines the algorithm for what constitutes a match. Human involvement consists of providing a large number of images, each labeled as a match or not a match. The computer algorithm determines what characteristics are important and the weighting to give to individual characteristics, to combinations of characteristics, or to the lack of certain characteristics. Human involvement consists of training the equipment to identify a match, not programming in the rules that define a match.

This is where some of the controversy over the use of machine learning arises. There is no deterministic way to identify the boundaries between what makes a match and what makes a non-match. There is no way to predict when false negatives or false positives will occur. It is a very murky question how to assign responsibility when a matching failure occurs and a target is misidentified.

If one ignores the moral aspect, the technology is quite interesting. The moral aspects of the application of machine learning should give everyone pause.

I remember reading an article back in the 80s on an early application of machine learning.

There was concern that some tube station platforms were becoming over-crowded and passengers could be accidentally pushed onto the line. There was research into an automated method of limiting access to the platform when it was 'too full'.

Photographs of platforms in various states from 'empty' to 'full' to 'too full' were shown to a 'learning' program; each photograph was given a corresponding status, along with an indication of which status would trigger limiting access to the platform.

The programming was in the 'learning' and had nothing to do with station platforms. The photos could have been of anything.
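For what it's worth, that labeled-photograph setup can be sketched in a few lines with today's tools. This is purely a hypothetical reconstruction, not the actual research: the class names, the stand-in features, and the use of scikit-learn's KNeighborsClassifier are all my own assumptions.

```python
# Hypothetical sketch of the platform-crowding idea: photos labeled by a human,
# a classifier learns the mapping, and one predicted class triggers the gate.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

LABELS = ["empty", "full", "too full"]  # statuses assigned to the training photos

rng = np.random.default_rng(1)
# Stand-in for per-photo features (e.g. fraction of platform pixels occupied).
features = rng.uniform(0, 1, size=(300, 4))
labels = np.digitize(features.mean(axis=1), bins=[0.4, 0.6])  # 0, 1, 2 -> LABELS

clf = KNeighborsClassifier(n_neighbors=5).fit(features, labels)

def limit_access(photo_features):
    """Close the platform entrance only when the predicted status is 'too full'."""
    status = LABELS[int(clf.predict([photo_features])[0])]
    return status == "too full"

print(limit_access(rng.uniform(0.7, 1.0, size=4)))  # likely True: crowded-looking features
```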
 
This is not entirely true. Current capabilities are known as machine learning, which is a form of pattern matching. No human determines the algorithm for what constitutes a match. Human involvement consists of providing a large number of images, each labeled as a match or not a match. The computer algorithm determines what characteristics are important and the weighting to give to individual characteristics, to combinations of characteristics, or to the lack of certain characteristics. Human involvement consists of training the equipment to identify a match, not programming in the rules that define a match.

This is where some of the controversy over the use of machine learning arises. There is no deterministic way to identify the boundaries between what makes a match and what makes a non-match. There is no way to predict when false negatives or false positives will occur. It is a very murky question how to assign responsibility when a matching failure occurs and a target is misidentified.

If one ignores the moral aspect, the technology is quite interesting. The moral aspects of the application of machine learning should give everyone pause.

Machine learning is incorrect; pattern matching is the correct term. A missile can identify an aircraft by its shape, but it can be fooled by a camouflage pattern that presents an irregular shape or sky color. No one is going to waste money lobbing missiles with 'machine learning' capabilities; you might as well just use radar homing and ignore the learning aspect entirely. For example: I have four missiles, two radar homing with anti-jamming and two machine learning types. Machine learning is no good if a simpler system can do the job. A false positive or misidentified target can result in an international incident. A missile that hits the wrong target is not only wasted; it also means that enemy personnel or equipment are still advancing and you're out of missiles. The same goes for drones: destroying the wrong target means the real target is still a threat.

All the military wants is a piece of equipment that works reliably, much better than 50% of the time.

The moral aspects revolve around the user. Any type of "autonomous" weapon came from somewhere, and its components can be identified after detonation.
 
I remember reading an article back in the 80s on an early application of machine learning.

There was concern that some tube station platforms were becoming over-crowded and passengers could be accidentally pushed onto the line. There was research into an automated method of limiting access to the platform when it was 'too full'.

Photographs of platforms in various states from 'empty' to 'full' to 'too full' were shown to a 'learning' program; each photograph was given a corresponding status, along with an indication of which status would trigger limiting access to the platform.

The programming was in the 'learning' and had nothing to do with station platforms. The photos could have been of anything.

Machine Learning: To the extent that this is a discussion of AI, Mosaix and Wayne are getting at the root of that issue. "Pattern matching" is the problem. "Machine learning" or "Artificial Intelligence" is one general method of solving the problem, the other being an algorithmic solution.

If I as a programmer solve a problem algorithmically, I am writing a fixed set of programming statements--if/then decisions which drive fixed operations based on specific parameters--to determine what the program will do. IF it walks like a duck AND IF it quacks like a duck AND IF it swims like a duck, THEN it is a duck. You will see that algorithm represented right there in the computer code, and it never changes. Maybe my algorithm is very sophisticated in the way it assesses whether or not the thing walks like a duck. Maybe I determine whether it produces seismic impacts that fall within a particular range of frequency, intensity, and wave pattern, and I have programmed functions to measure each of these. Maybe I include a function to assess the thermal outline of a duck in profile, thereby identifying its feet, and then measure the pattern of vertical and horizontal cycling of its feet, comparing that to programmed norms, measuring standard deviations from the norms. I also have a separate function to measure the shape of its gait if I happen to catch it in a frontal or quartering aspect on camera. I have a formula in my code to calculate Duck or Not Duck based on each of these assessment functions, but the process--what parameters are measured, how they are used, what calculations are applied--is right there in the code for anyone to see, and I as the programmer control it.
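To make the contrast concrete, here is a toy version of that algorithmic style in Python. The three checks and every threshold value are invented for illustration; the point is only that the rules sit in plain sight in the code.

```python
# Toy algorithmic classifier: every rule and threshold is fixed in the code,
# visible to anyone who reads it, and controlled entirely by the programmer.
# The thresholds and measurements below are invented for illustration.

GAIT_HZ_RANGE = (1.5, 3.5)       # assumed waddle cadence, steps per second
QUACK_HZ_RANGE = (400.0, 900.0)  # assumed dominant quack frequency, Hz
SWIM_SPEED_RANGE = (0.2, 1.5)    # assumed paddling speed, m/s

def walks_like_a_duck(gait_hz: float) -> bool:
    return GAIT_HZ_RANGE[0] <= gait_hz <= GAIT_HZ_RANGE[1]

def quacks_like_a_duck(quack_hz: float) -> bool:
    return QUACK_HZ_RANGE[0] <= quack_hz <= QUACK_HZ_RANGE[1]

def swims_like_a_duck(speed_ms: float) -> bool:
    return SWIM_SPEED_RANGE[0] <= speed_ms <= SWIM_SPEED_RANGE[1]

def is_duck(gait_hz: float, quack_hz: float, speed_ms: float) -> bool:
    # IF it walks like a duck AND quacks like a duck AND swims like a duck...
    return (walks_like_a_duck(gait_hz)
            and quacks_like_a_duck(quack_hz)
            and swims_like_a_duck(speed_ms))

print(is_duck(2.4, 620.0, 0.8))   # True
print(is_duck(0.5, 620.0, 0.8))   # False: gait outside the programmed range
```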

If I'm going to solve the problem with ML--for instance, using a simple perceptron neural network--I'm going to write code that has nothing to do with ducks at all. There will be nothing in my program about duck gait frequencies, nothing in my program about quack timbre or even color. What I'm going to write is a program which does something like this: for each picture given to the program, take the color code of each pixel and multiply that color code by a random value. Add all the resulting products together to produce a sum. Then do the same thing again, but this time, use a different set of random values. Again, add all the products to make a sum. Do this fifteen hundred times, each with a different set of random weights, so that I have fifteen hundred sums. Now, take each of those fifteen hundred sums, multiply each by a random weight, and add all the resulting products together to get a sum. Then do it again, with a different set of random weights. Do this two thousand times, to produce two thousand sums, which are randomly generated from the original fifteen hundred sums, which were randomly generated from the color codes of the pixels in the original photograph. Keep doing this for several more rounds. Finally, take each of the sums from my most recent round, multiply each by a random weight, and add them all together into a single sum. No ducks anywhere in here, obviously, or any measurement of duck-related parameters, nor any programming at all specific to ducks. Now, declare: if this final sum is 1, the picture was a picture of a duck. If the final sum is 0, the picture was not a picture of a duck. Feed it a picture of a duck. If the final answer is not 1, do some calculus to figure out how to adjust the weights so the final answer comes out to 1. Feed it another picture, not of a duck. If the final answer is not 0, do some calculus to get adjustments for all the weights so that it will be a 0. Feed it more known pictures, a couple million, adjusting the weights each time.

Keep adjusting the weights, little by little, with each training input, and eventually your process of multiplying pixels by weights, taking sums, multiplying those by weights, taking sums, etc. etc., will start to produce a 1 if the picture was a duck, 0 if it was not. I'm leaving out some steps, some regularization functions and optimizations, but this is basically what's happening in a neural net program. It's just a massive pile of weighted sums, a massive pile of dot products (which, incidentally, is why the hardware that can "calculate" a neural net program is very similar to the hardware in your graphics card), that eventually gets tweaked until it produces the right answer most of the time. You can't say why it produces the right answer. You can't tell what it's looking for or what it's seeing. All of that is mysteriously emergent from that particular set of millions upon millions of originally random but repeatedly tweaked weights. You don't know how any given change is going to affect it. You can't go in and tweak one or more of the weights manually and produce a predictable result. All you can do is experiment with it, and keep retraining it, until it works well enough for your application.
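A stripped-down version of that pile of weighted sums might look roughly like this in NumPy. It's only a sketch: tiny layer sizes instead of the fifteen hundred and two thousand units above, fake "pixel" data, a plain gradient-descent weight adjustment, and none of the regularization or optimizations mentioned. But, as described, nothing in it knows anything about ducks.

```python
# Bare-bones neural net: weighted sums of pixels, then weighted sums of those sums,
# squashed to a 0..1 output, with the weights nudged after every labeled example.
# Layer sizes, data, and learning rate are arbitrary; nothing here is duck-specific.
import numpy as np

rng = np.random.default_rng(42)
N_PIXELS, HIDDEN = 64, 16          # stand-ins for the much larger layers described above

W1 = rng.normal(0, 0.1, (N_PIXELS, HIDDEN))   # first pile of (initially random) weights
W2 = rng.normal(0, 0.1, (HIDDEN, 1))          # second pile of (initially random) weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(pixels):
    hidden = sigmoid(pixels @ W1)    # weighted sums of the pixel values
    return sigmoid(hidden @ W2)[0]   # weighted sum of those sums, squashed to 0..1

def train_step(pixels, label, lr=0.5):
    """One 'nudge the weights a little' step: gradient descent on squared error."""
    global W1, W2
    hidden = sigmoid(pixels @ W1)
    out = sigmoid(hidden @ W2)[0]
    d_out = (out - label) * out * (1 - out)              # how wrong, at the output
    d_hidden = d_out * W2[:, 0] * hidden * (1 - hidden)  # how wrong, one layer back
    W2 = W2 - lr * d_out * hidden[:, None]               # tweak second-layer weights
    W1 = W1 - lr * np.outer(pixels, d_hidden)            # tweak first-layer weights

# Fake training set: "duck" pictures are simply brighter on average than "not duck" ones.
for _ in range(2000):
    label = int(rng.integers(0, 2))
    pixels = rng.uniform(0.5, 1.0, N_PIXELS) if label else rng.uniform(0.0, 0.5, N_PIXELS)
    train_step(pixels, label)

print(predict(rng.uniform(0.5, 1.0, N_PIXELS)))  # should end up near 1 ("duck")
print(predict(rng.uniform(0.0, 0.5, N_PIXELS)))  # should end up near 0 ("not duck")
```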

The application of this to targeting is that a well-trained ML process will be much better at recognizing targets reliably than an algorithmic solution. For an algorithm, you would have to program a contingency for every different way the potential target could be facing, every different color it might have, how close to or far from the camera it might be, how it might be moving. Oh, enemy guy might be wearing a red hat? Gotta have a branch of the logic that accounts for red hats which otherwise disrupt the shape of a human body. It's impossible, which is why there are no good algorithmic solutions to recognizing objects in imagery. There is nothing consistent even about the shape of a human body, or a duck, or a sailboat, when seen from various angles, much less anything consistent about its color or its movements. But with ML... camouflage? Just start feeding it training data of camouflaged people and tweaking the big-picture aspects of your neural net (its dimensions, its activation functions) until you start to see good results. You don't need a program specialized in image recognition, nor programmers to write it. You just need a generic perceptron well trained. Heck, you can get one online, open source. The downside of ML is that if it doesn't work quite right, fixing it is not a logical process. It can be difficult to figure out how to tweak it to make it work better, because there's no inherent relationship between the program itself and the problem it's trying to solve. Not a big problem if you're a terrorist just looking for a cheap way to kill anyone you can kill within camera range.

You can run it aboard the weapon itself, if it's big enough for a decent graphics card and power supply and whatnot, but much easier is to network the munition to the computer. As long as you're not worried about hardening it against comms jamming, your COTS LTE signal or even a 2.4 GHz signal will be sufficient to carry the data traffic back and forth, and you can run the software on a gaming laptop at your launch position. And that's the whole point I was making before: machine learning software and consumer drone tech make something that was previously very technical (CBU-105) now very easy, especially, again, for an essentially terrorist combatant who doesn't care about preventing civilian casualties or hardening his gear for a great powers competition. If all you're looking to do is get a drone into the combat area to kill anyone--man, woman, or child--you can find, then a simple payload-bearing drone with a camera linked to an AI program and a grenade on its belly will do. Extremely cheap and easy to replicate.

A dangerous world: I have faith that the square of the length of the hypotenuse is equal to the sum of the squares of the lengths of the two remaining sides of a right triangle. It was proved, and though I haven't done the work to reprove it lately, I feel I can depend on it to remain true as I go about, say, designing a bookshelf. If I have religious faith, it is of the same definition. I have assessed something to be true or likely, and just because I'm not assessing it right now, not redoing that work, doesn't mean it has gotten less likely. Based on the fact that the, shall we say, distribution of violence in human life hasn't changed in thousands of years, nor has human capacity for evil (nor human capacity for good), I have every faith that these will continue unchanged for the foreseeable future. In that case, what was true a thousand years ago and is true today will be true tomorrow: that might doesn't make right, but might makes reality. The physically strong people will determine how dangerous your world is. If the people who value freedom and peace are the most deadly and dangerous, the strongest in physical contest, then your world will be a world of freedom and peace. If the fearful, power-hungry, and other abusive sorts of the human species are most powerful physically, then they will rule, and the world will be one of obedience and death.
 
One minor quibble,
much easier is to network the munition to the computer.
I believe the article implies an autonomous drone. If the unit is networked back somewhere, then there is no need for the central computer; just have a human make the targeting and attack decisions.

The bigger concern in weapons systems is not communication jamming but enemy tracking. The communications link can reveal the source of the attack. Also, one of the more dangerous situations in current systems is that of a spotter: someone who needs to be in direct line of sight of the target and remain vulnerable until the attack is completed.
 
Around 2011, I was working with one of the comp-sci professors at UCL. At the time they were working on a number of projects, including rudimentary biomechanoid robots (rat neurones suspended in a nutrient jelly that could pilot small wheeled robots) and a holodeck room made of eye-tracking projectors. He was working on a project with the MOD for autonomous drones that could detect a person by chemical signature - the implied application was to create miniaturised drones to deliver a lethal injection for assassination purposes. I don't know how far they got, but I seem to remember them having a working prototype.

On Pattern matching / machine learning - what are your thoughts on GPT3?

 
There was a Panorama documentary on the BBC recently called 'Are you scared yet, human?', which is now on YouTube for those who can't get iPlayer.

One section (starting at 37:37 in the above video) concerned the use of AI in autonomous drones, amongst other warlike things, and mentioned 'Slaughterbot'.

The whole Panorama programme is rather chilling :(
 
One minor quibble,

I believe the article implies an autonomous drone. If the unit is networked back somewhere, then there is no need for the central computer; just have a human make the targeting and attack decisions.

The bigger concern in weapons systems is not communication jamming but enemy tracking. The communications link can reveal the source of the attack. Also, one of the more dangerous situations in current systems is that of a spotter: someone who needs to be in direct line of sight of the target and remain vulnerable until the attack is completed.

Happy to quibble! The article implies an autonomous drone. We are presuming that much is true, but all that means is there's no man in the loop during its flight and engagement activity. The points I was making are to distinguish between an algorithmic auto-targeting solution and an ML-based solution, and to note that ML is one of several technologies which make something like this, which had been the exclusive purview of first world militaries for several decades, now something that can be implemented cheaply with generic open-source software and a COTS cargo drone. Not new, but newly cheap. In this case: ML software technology (obviating the need for expensive proprietary algorithmic pattern matching software written by an extremely talented mathematician), consumer drone technology (obviating the need for proprietary hardware), and large-bandwidth wireless communication technology (which could potentially obviate the need for highly miniaturized onboard processing capacity). They're all optional, as evidenced by the fact that the CBU-105 exists, but each helps. It's very possible that this autonomous drone was using onboard processors, but my point is it doesn't have to. If I want to create a solution cheaply, I don't have to buy a drone capable of doing all of this work onboard. I can buy a camera+cargo drone, get a laptop computer, use open source software, and then pre-program a loiter/target location. Then, any one of my barely-literate terrorist goons can carry the kit forward to the operating area, set it up, and press the launch button. They don't have to be trained in drone flying or targeting or anything. They just have to get the drone and the processor both in range of the target area. An easy way for me to deploy a smart weapon to the front cheaply.

If I have the resources to do more, like build the processing capability right into the drone, then I do more, of course. And you're right in pointing out that the two counter-threats make that particular upgrade very valuable. As to those counter-threats, I would absolutely worry about jamming over tracking (or "direction finding" or "precision geolocation," or pick your term for the enemy identifying the location of a transmission source). PGL is much more complicated and much harder to do, and involves a lot more resources, than simply jamming a signal. If I merely want to defend against a drone attack, hands down my first approach is to jam its control signals by shining a lot of energy at it in the right wavelength. That takes one defender with one simple-to-use piece of hardware. And this would be the biggest reason to try to run the autonomous targeting software onboard the drone itself. So that it doesn't just turn around and fly home, or orbit uselessly until it runs out of gas, if it loses contact with a base station.

If the enemy has the resources to PGL the control base station and attack it, then they might also try to do that, but now you're talking about aircraft or satellites or boots on the ground, roving the countryside, trying to pinpoint your location. Then what? Either a raid or an airstrike. That's a whole big thing. And to accomplish what that jamming would not have accomplished? If this was a man-in-the-loop system, then that might be worthwhile to take out one of the terrorist's few trained drone operators, but if it's an autonomous system, just using an unmanned base station to carry its processing load and nothing more, then using a lot of resources to hit the base station (and the fourteen year old kid who set it up and clicked the "launch" button) probably doesn't buy you much that you wouldn't get just by jamming the drone's receivers. The nature of the software means there won't be a trained pilot or targeteer, or anyone else we might equate to a forward observer, at the base station. If the software is running on a laptop somewhere a couple of miles away, that laptop might be completely unattended or, if you're Hamas, you set it up in a children's daycare center in the same building as the local AP or CNN office, tell the daycare teacher not to touch it, and then you leave and go somewhere safe. So, even if they do PGL the base station and run a counterattack, you still get a media win.

So, yeah, if I could put the processing directly on the drone, I definitely would, but more for jamming concerns than PGL, I think. And if I couldn't do that, if I did have to offboard the processing capability to a base station, because I'm a terrorist operating on the cheap, trying to use commercial, consumer equipment so as not to raise suspicions during my acquisition process, I could absolutely accomplish a fully autonomous system using cheap drones, cheap deployable base stations, and one talented machine learning programmer back at HQ who creates the infinitely replicable deployable package, with no need for any proprietary gear nor any advanced training on the part of the people who will deploy and employ it.

Which is quite a thought, when you think about it. I can see why one would be tempted to consider such a world more dangerous, but I stand by my original response, which is that it's not the capability of your neighbor to kill you that makes your world more or less dangerous. It's the absence or presence of your ability and willingness to kill back. If you try to counter a technology like this by PGLing the base station and striking it, or making attacks against the people transporting these things to the front, or even against the programmer who set them up, that's going to be a losing game. The only way to fight this is the same way you fight people who think sending children into daycare centers with suicide vests is a good way to wage war: by annihilating their entire ideology. Which, of course, requires a certain willingness to be brutal, and a certain ability to argue to yourself that you can do that without becoming the very thing you're fighting against. Which is a topic for a different thread entirely, or maybe a thought-provoking sci-fi novel.
 
On Pattern matching / machine learning - what are your thoughts on GPT3?
I hadn't heard of GPT3, so this video was my introduction. I have to admit, though, I found the video off-putting because the video and audio were added in postproduction. I am afraid that may color my objectivity.

The claims that I found most interesting were that GPT3 learned to do at least basic decimal arithmetic and how to create at least a trivial JavaScript function. I am wary because the statements that GPT3 lies sound a little like a justification provided by external observers for failures, and the explanation by GPT3 of why it lies (to protect itself) sounded more like a canned response, especially as the interviewer specifically warned about GPT3 'lying.'

As I said, the revelation that the video was created entirely after the fact from text logs has really biased me and I can't claim to feel terribly objective at the moment.
 
