Considering the current advances of AI, how likely do you think an AI might replace you in your job?

How likely do you think an AI might replace you in your job in 10 years?


If AI took over most jobs, resulting in mass unemployment, the economy wouldn't be able to sustain itself because nobody would have the money to spend on products. It's the trifecta of economics (production, distribution and consumption). If people don't have money to purchase products (consumption), then companies wouldn't be able to sustain themselves (production), which would lead to even more unemployment.
 
If AI starts replacing the perpetually under-employed then I may have something to worry about.
 
If AI took over most jobs, resulting in mass unemployment, the economy wouldn't be able to sustain itself because nobody would have the money to spend on products. It's the trifecta of economics (production, distribution and consumption). If people don't have money to purchase products (consumption), then companies wouldn't be able to sustain themselves (production), which would lead to even more unemployment.
The whole idea of post-scarcity economics is that there is no longer any need for money. Think Banks' Culture or Star Trek's Federation. I'm not saying it could or would work, and don't really want to get into that discussion; too much potential for going sour and there's masses of it in the literature already! But the utopian principle of post-scarcity is that all old models of economics collapse as completely irrelevant.
 
The crashes of the Boeing 737 Max appeared to be caused by MCAS putting the plane into a nosedive due to erroneous data from the AoA sensors. From what I remember, MCAS 'thought' that the aircraft was about to enter a stall and compensated by dropping the nose (resulting in the dive). However, from what I understand, the autopilot wasn't engaged at the time, since one of the criteria required for MCAS to activate is that the plane must be flown manually.
The key thing to realize is that incorporating AI into this system would not have made any improvement. While humans have very redundant sensory systems that are self-adjusting and self-calibrating, automated systems rely on a very limited set of fixed sensors that need to be carefully configured for desired operation. Giving an AI an increased number of sensors only increases the training difficulty. Additionally, humans are quite adept at identifying inconsistencies, ignoring bad information, and being able to avoid doing the wrong thing. Machine learning is focused on training the AI to do the right thing under specific circumstances. I don't know of any approach to allow it to identify misinformation and to avoid making matters worse; the assumption is that all data is correct.
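For contrast, the non-AI way of handling redundant sensors is simple, fixed logic rather than training. Here's a minimal sketch (Python; classic triple-redundancy voting, not any real avionics code, and the numbers are invented) of how multiple sensors let a system outvote a single bad reading:

```python
# Illustrative only: classic triple-redundancy voting. With three
# independent sensors, the median reading survives any single sensor
# failing high or low.
def vote(readings):
    """Return the median of three redundant sensor readings."""
    return sorted(readings)[1]

# One angle-of-attack vane stuck at a wild value is simply outvoted:
print(vote([4.8, 5.1, 74.5]))  # -> 5.1; the bad 74.5 reading is ignored
```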

I believe that the 737 Max issue was far bigger than failing sensors; a lot of these new aircraft seemed to have sensor issues. What can be verified is that the 737 Max was made with lighter materials and had a changed wing and engine position from previous versions. It had also been promised to customers that, despite the changes, pilots would not need to get certified on the new version, but could use their existing 737 certifications. This is my speculation, but the solution would seem to be inserting even more automation between the pilots and the aircraft; a virtual environment. Sensors would detect the orientation of the plane, its speed, etc., and attenuate the pilots' adjustments of the controls from what would be done on a standard 737 to what was required for the 737 Max. Beyond there being a bad sensor, not much information has come out about what happened.

Assuming the virtualization hypothesis, the bad sensor would logically be well integrated into the flight system. This helps explain why the pilots had so much difficulty overriding the bad sensor and why the aircraft repeatedly fought their attempts to correct its orientation and trajectory. This also explains why there was such a long delay in having a correction available. It was no easy matter to isolate the potentially bad sensor from the system. It is unclear what correction was made.

While there are certainly cases where the cost of AI failure is low and merely results in amusement, there are also cases like piloting airplanes and driving trucks and cars where the cost of failure is high. In the former case, having developers iterate over issues is acceptable. In the latter cases, it is not.
 
The whole idea of post-scarcity economics is that there is no longer any need for money. Think Banks' Culture or Star Trek's Federation. I'm not saying it could or would work, and don't really want to get into that discussion; too much potential for going sour and there's masses of it in the literature already! But the utopian principle of post-scarcity is that all old models of economics collapse as completely irrelevant.
Star Trek displays a communistic, utopian world because of two things: a clean, abundant power source and the replicator. The latter gets rid of the division of labor, and thus the need for money, and collapses social class systems. DS9 poked holes in Roddenberry's vision in the episode where Quark tells Nog that humans are wonderful, so long as all their creature comforts are working; take those away and they'll become as "blood thirsty as any Klingon".
 
You show me any social or economic system that can't have holes poked in it. The point is that it is almost impossible to predict how society would shift in a post-scarcity environment. It could develop into the ultimate capitalism, with scarcity created artificially to maintain the position of the privileged, or into a utopian society. I dislike the communistic description as I feel it is irrelevant to such a society. In other words, just because DS9 poked holes doesn't make it invalid. And Quark's comments could apply to any post-industrial society, not just a post-scarcity one.
 
Beyond there being a bad sensor, not much information has come out about what happened.
There are a couple of documentaries that go into this in some detail. The system dips the nose in response to a high angle-of-attack reading (which would normally be the correct action...so that the aircraft gains speed and avoids a stall). However, if the sensor malfunctions and erroneously reads a high angle of attack, it still dips the nose and potentially flies the aircraft into the ground (which happened twice, of course). There were secondary factors involved. Why only one sensor in the design (no redundancy)? And was the aircraft design inherently less stable, making it harder to fly and necessitating the complex automation between pilot and machine? But I think the pilots' main complaint was that they had not been sufficiently informed about this aspect of the system and were therefore not able to understand exactly what was going on. If they had been aware of it, they would have been able to successfully bypass it. So, yes, the determination to allow pilots with existing 737 credentials to fly the new plane was a factor too. When the fault was recreated in a simulator, many 737 pilots had trouble understanding what was going on and keeping control. So the pilots in the two crashes were exonerated, I believe. Definitely not Boeing's finest hour.
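On the redundancy question: the widely reported software fix was said to compare both angle-of-attack sensors and inhibit MCAS when they disagree. A hedged sketch of that kind of cross-check (Python; the function name and threshold are illustrative, not Boeing's actual code):

```python
# Illustrative sketch of a two-sensor cross-check: if the two AoA
# sensors disagree beyond a threshold, the automation stands down
# rather than act on possibly bad data.
DISAGREE_LIMIT_DEG = 5.5  # assumed threshold, for illustration only

def mcas_may_activate(aoa_left, aoa_right):
    """Allow automated nose-down trim only when both sensors agree."""
    return abs(aoa_left - aoa_right) <= DISAGREE_LIMIT_DEG

print(mcas_may_activate(5.0, 5.4))   # True: sensors agree
print(mcas_may_activate(5.0, 74.5))  # False: disagreement, inhibit MCAS
```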
 
The key thing to realize is that incorporating AI into this system would not have made any improvement. While humans have very redundant sensory systems that are self-adjusting and self-calibrating, automated systems rely on a very limited set of fixed sensors that need to be carefully configured for desired operation. Giving an AI an increased number of sensors only increases the training difficulty. Additionally, humans are quite adept at identifying inconsistencies, ignoring bad information, and being able to avoid doing the wrong thing. Machine learning is focused on training the AI to do the right thing under specific circumstances. I don't know of any approach to allow it to identify misinformation and to avoid making matters worse; the assumption is that all data is correct.

I believe that the 737 Max issue was far bigger than failing sensors; a lot of these new aircraft seemed to have sensor issues. What can be verified is that the 737 Max was made with lighter materials and had a changed wing and engine position from previous versions. It had also been promised to customers that, despite the changes, pilots would not need to get certified on the new version, but could use their existing 737 certifications. This is my speculation, but the solution would seem to be inserting even more automation between the pilots and the aircraft; a virtual environment. Sensors would detect the orientation of the plane, its speed, etc., and attenuate the pilots' adjustments of the controls from what would be done on a standard 737 to what was required for the 737 Max. Beyond there being a bad sensor, not much information has come out about what happened.

Assuming the virtualization hypothesis, the bad sensor would logically be well integrated into the flight system. This helps explain why the pilots had so much difficulty overriding the bad sensor and why the aircraft repeatedly fought their attempts to correct its orientation and trajectory. This also explains why there was such a long delay in having a correction available. It was no easy matter to isolate the potentially bad sensor from the system. It is unclear what correction was made.

While there are certainly cases where the cost of AI failure is low and merely results in amusement, there are also cases like piloting airplanes and driving trucks and cars where the cost of failure is high. In the former case, having developers iterate over issues is acceptable. In the latter cases, it is not.
I don't consider the aforementioned systems 'AI'; they're computerized safety features built into modern aircraft to help prevent accidents, based on (specific) programming.

"Giving an AI an increased number of sensors only increases the training difficulty. Additionally, humans are quite adept at at identifying inconsistencies, ignoring bad information, and being able to avoid doing the wrong thing"

I disagree with that. For example, the crash of Eastern Air Lines Flight 401 was caused by the captain inadvertently taking the aircraft off of autopilot. While the crew tried to figure out whether a landing light (which was not lighting up) was burnt out or the front landing gear hadn't deployed, the aircraft lost significant altitude; by the time they realized what had happened it was too late to recover, and the aircraft crashed. When the flight recorder was found, the NTSB could clearly hear the low-altitude warnings in the cockpit, yet there was no indication that the flight crew heard them.

The reason they didn't hear the warnings has to do with our attention, or more specifically how our brains will block out any external stimuli that appear to be irrelevant to the task at hand. For example, there's a popular experiment where researchers had subjects (in groups) focus on a difficult task (using computers) while they inserted stimuli into the room they were in, including a man in an ape suit who walked around the room. When the researchers asked the subjects if they noticed the person in the ape suit, almost all said no and didn't believe it occurred, only to be shown evidence that it did via video recordings. They were shocked they hadn't noticed it.

Humans (some anyway) may be good at 'identifying inconsistencies, ignoring bad information, or avoiding doing the wrong thing, in certain situations'; however, place them into a complex, stressful situation and we're not so good at it. Pilots are trained to trust their instruments over their intuitions for those very reasons. I know a pilot who on his first solo flight lost his instrumentation and contact with the tower; by the time he regained contact with the tower he had dropped significant altitude and would likely have crashed had he not re-established contact with ATC (he was flying at night).

Modern aircraft would be incredibly difficult (if not impossible) to fly without modern computer systems. The question is how much control do we allot to computers on modern aircraft? Perhaps if the crew of the 737 Max could have overridden the MCAS, the crashes could have been avoided. Unfortunately, many improvements in aviation safety are a result of data gathered from crashes or avoided catastrophes.


As far as AI driving cars goes, I believe that would be even more difficult than flying aircraft, due to the many variables related to driving, most notably human behavior (you didn't signal, you ^#$^ing idiot!!).
 
You show me any social or economic system that can't have holes poked in it. The point is that it is almost impossible to predict how society would shift in a post-scarcity environment. It could develop into the ultimate capitalism, with scarcity created artificially to maintain the position of the privileged, or into a utopian society. I dislike the communistic description as I feel it is irrelevant to such a society. In other words, just because DS9 poked holes doesn't make it invalid. And Quark's comments could apply to any post-industrial society, not just a post-scarcity one.
If we did move into a technological utopian society, it would no doubt be gradual and we'd have time to adapt. If one day we had a replicator, the concept of some being 'privileged' would be a thing of the past. If we could all have what we wanted, money would be meaningless; there would be no need for it. However, for us to enjoy our 21st-century lifestyles we need a division of labor, jobs that most probably wouldn't want to do. You can either pay them, or employ communism, which as history has shown devolves into tyranny. As you said, all social systems have flaws inherent to them. The 21st century is bound to be a 'rough ride'; however, I think we're slowly edging in the right direction, at least in certain parts of the world.
 
 
Modern airliners can almost fly themselves. After the flight plan has been entered into the FMS, the plane can practically fly itself, and if I'm not mistaken, in certain situations autonomous landings do occur. Computers have become an integral component of modern aircraft; modern fighter jets wouldn't be able to achieve the performance they have without modern computer systems (fly-by-wire). With that being said, when diversions or emergency scenarios occur you want a well-trained (and rested) human being at the controls. Technological innovation is progressing faster than ever, with the time between one innovation and the next becoming smaller, so I think it's only a matter of time before AI will be able to act as an actual pilot. However, I doubt it'll be anytime soon.

It's not quite that simple. Yes, you are mistaken.

There are no "autonomous" landings in airliners. The Airbus 320 series and other modern airliners can do what's called an "autoland". That's the autopilot landing the aircraft. However, the setup is a bit tricky, as it's not just dependent on programming the MCDU. The ground equipment has to be working and the ILS has to be rated for CAT II or CAT III approaches. The autobrake system has to be working, and an autoland approach requires very strict monitoring for glitch behaviors that would require the crew to execute a mandatory go-around, even at the very last second with the mains briefly touching the ground in a balked landing. The programming of the MCDU requires us to do some "human" legwork in order to ensure the performance is calculated properly and the ATC clearance matches the route uploaded from dispatch.
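To paraphrase the gist of those prerequisites, a toy checklist in Python (nothing like real avionics logic; the names and categories are simplified from the post above):

```python
# Toy precondition check for an autoland, loosely paraphrasing the
# post above; real criteria are far more involved and type-specific.
def autoland_available(ils_category, ground_equipment_ok, autobrake_ok):
    """Permit an autoland only when the basic prerequisites hold."""
    return (ils_category in ("CAT II", "CAT III")
            and ground_equipment_ok
            and autobrake_ok)

print(autoland_available("CAT III", True, True))  # True
print(autoland_available("CAT I", True, True))    # False: ILS not rated
```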
 
I noticed that under research, Bearly is listed twice. That certainly reduces my concerns with AI replacing researchers.
I noticed that as well, but in my following post there are two more.
Perhaps the previous scarcity of research AI means that more of that type than others will get created over the next year or three.

I used to be a researcher; one of my favourite jobs back in the day.
 
Perhaps the previous scarcity of research AI means that more of that type than others will get created over the next year or three.
Remember, though, that the current Machine Learning approach to AI is a two-phase process. There is a learning phase, then a cut-over to a usage phase. As I recall, GPT-3 was trained on data through 2019. Research engines, as well as other applications in advancing fields, will not be privy to current findings. Maybe AI engines will be able to generate study-of-studies type papers, but I do not see them being able to access current research, and certainly not advancing the cutting edge of knowledge or hypothesis.
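A minimal sketch of that two-phase split (Python with scikit-learn; the data is invented and the model trivial, just to show that everything after the training snapshot is invisible to the model):

```python
# Learning phase: fit once on a fixed snapshot of data.
from sklearn.linear_model import LinearRegression

X_through_2019 = [[2017], [2018], [2019]]  # training snapshot ends here
y_through_2019 = [1.0, 1.5, 2.1]
model = LinearRegression().fit(X_through_2019, y_through_2019)

# Usage phase: the frozen model can only extrapolate from its snapshot;
# nothing measured after 2019 exists for it.
print(model.predict([[2023]]))
```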
 
It's not quite that simple. Yes, you are mistaken.

There are no "autonomous" landings in airliners. The Airbus 320 series and other modern airliners can do what's called an "autoland". That's the autopilot landing the aircraft. However, the setup is a bit tricky, as it's not just dependent on programming the MCDU. The ground equipment has to be working and the ILS has to be rated for CAT II or CAT III approaches. The autobrake system has to be working, and an autoland approach requires very strict monitoring for glitch behaviors that would require the crew to execute a mandatory go-around, even at the very last second with the mains briefly touching the ground in a balked landing. The programming of the MCDU requires us to do some "human" legwork in order to ensure the performance is calculated properly and the ATC clearance matches the route uploaded from dispatch.
Correct me if I'm wrong, but you stated that the Airbus (to use your example) is capable of an 'autoland', which in your own words is the 'autopilot landing the aircraft'. In my comment I stated 'in certain situations', meaning so long as everything goes 'according to plan' (data is inputted correctly, everything's functioning properly and the airport is equipped appropriately), the autopilot can land the airplane. However, I'm fully aware that that process has to be monitored by the flight crew.

My point was that human beings have to be at the controls to ensure everything goes smoothly; if something happens, then the pilots take over. I used the word 'autonomous' as opposed to 'autoland' since most people understand what the term autonomous means, whereas autoland is a technical term used in aviation. However, I always thought that most pilots would choose to land the aircraft manually anyway, conditions allowing, especially commercial airline pilots who want to land the aircraft gracefully for the sake of the passengers. I always assumed it was a skill set commercial pilots would want to keep sharp.

I don't like to use the term AI, because it denotes an artificial intelligence akin to ours, which computers are still nowhere near. However, in the future, I do see a time when you can instruct an aircraft's (or automobile's) onboard computer system where you want to go and it'll take you there. However, that requires an intelligence that goes beyond the programming (thinking outside the box), which computers still aren't that good at.
 
I saw this online just now, with the intro that I’ll type here:


AI won’t replace you.
A person using AI will.
24 AI tools to future-proof yourself.
Cre: Zain Kahn.

I try not to bang my head against the wall every time I see such memes.
Ok, let's assume a person who knows AI will replace workers who don't know how to use AI tools.
The statement misses the forest for the trees. The key question is how many workers will it replace?
1.0 ... well, nothing to worry about.
1.5 ... slightly worrisome.
2 ... that sounds bad.
3 ... crap, 2/3 of desk workers will lose their jobs (the arithmetic is sketched below).
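The arithmetic behind that list: if one AI-assisted worker does the work of k workers, only 1/k of the workforce is still needed, so a fraction 1 − 1/k of those jobs go away. A quick check in Python:

```python
# Displacement fraction for a productivity multiplier k: 1 - 1/k.
for k in (1.0, 1.5, 2.0, 3.0):
    displaced = 1 - 1 / k
    print(f"multiplier {k}: about {displaced:.0%} of those jobs displaced")
# multiplier 3.0 -> about 67%, i.e. the "2/3 of desk workers" above
```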
 
Correct me if I'm wrong, but you stated that the Airbus (to use your example) is capable of an 'autoland', which in your own words is the 'autopilot landing the aircraft'. In my comment I stated 'in certain situations', meaning so long as everything goes 'according to plan' (data is inputted correctly, everything's functioning properly and the airport is equipped appropriately), the autopilot can land the airplane. However, I'm fully aware that that process has to be monitored by the flight crew.

My point was that human beings have to be at the controls to ensure everything goes smoothly; if something happens, then the pilots take over. I used the word 'autonomous' as opposed to 'autoland' since most people understand what the term autonomous means, whereas autoland is a technical term used in aviation. However, I always thought that most pilots would choose to land the aircraft manually anyway, conditions allowing, especially commercial airline pilots who want to land the aircraft gracefully for the sake of the passengers. I always assumed it was a skill set commercial pilots would want to keep sharp.

I don't like to use the term AI, because it denotes an artificial intelligence akin to ours, which computers are still nowhere near. However, in the future, I do see a time when you can instruct an aircraft's (or automobile's) onboard computer system where you want to go and it'll take you there. However, that requires an intelligence that goes beyond the programming (thinking outside the box), which computers still aren't that good at.

Pardon me, you used the word autonomous. I took you literally. That's the Sheldon Cooper in me. Apologies for the verbiage misunderstanding.
 
