
Will Programming Biases Warp the Future of Strong AI?


American Military University STEM Dean Dr. Ahmed Naumaan dives into the current state of artificial intelligence and its future capabilities. He evaluates what a strong AI presence in the future could do, but identifies a potential roadblock to its potential: biases in programming. Will the future of AI be a burden or a benefit to our society?

Video Transcript:

What is artificial intelligence? Artificial intelligence simply means that a system, a computer system or a software system, is able to respond to situations in a manner similar to how a human being would respond. The definition of weak artificial intelligence is that you have a program which behaves or responds in a manner that a human being would. It doesn’t imply anything more than that; it’s almost a stimulus-response kind of system. You ask a question and it responds in a certain fashion, or you ask it to do something and it goes out and does that thing.
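The stimulus-response idea can be sketched as a minimal program: a lookup table of canned responses with no understanding behind them. The stimuli and responses below are invented purely for illustration.

```python
# A "weak AI" in the stimulus-response sense: it maps inputs to outputs
# without any understanding of what either one means.
RESPONSES = {
    "hello": "Hello! How can I help you?",
    "what time is it": "Let me check the clock for you.",
    "turn on the lights": "Turning the lights on now.",
}

def respond(stimulus: str) -> str:
    """Return a canned response for a known stimulus, or a fallback."""
    return RESPONSES.get(stimulus.strip().lower(), "Sorry, I don't understand.")

print(respond("Hello"))              # a recognized stimulus
print(respond("write me a sonnet"))  # anything unprogrammed falls through
```

However elaborate the table gets, the program never knows what it is saying; that is the sense in which such a system is "weak."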

Strong AI refers to when the machine is actually thinking like a human being would think. It understands what it’s working on and what the implications of those things are. So it’s not like, say, a robot playing cards. The robot understands the rules of card playing and it just plays the cards, but it doesn’t know what it’s doing. There’s no emotion. There’s no feeling. It doesn’t really care why it’s doing it; it’s just been programmed to do it. But a human being playing cards is in there because she or he enjoys the game, entertaining themselves and the people they’re playing with, the camaraderie and everything involved.

That’s because they’re thinking. They actually understand the concept and what it means and what the social aspects are. So if we have a machine that behaves like that, that’s strong AI. That’s actually a thinker. It’s a machine that thinks for itself, is self-aware and understands what it’s doing.

So most of the work being done today is in the area of weak AI. There is no strong AI, and I don’t think there is going to be any for some time. Let’s put it that way. AI systems that are being built are not independent thinkers. My worry is more about how those systems are programmed. I think there are subtle biases that get built into the system, especially if there isn’t a multiplicity of perspectives that are brought into play and different types of people that are involved in both building and testing the AI.

Get started on your cybersecurity degree at American Military University.

You get these biases built in, and that’s more of a concern to me, because the tendency of folks is to say, well, it’s on the computer, so it must be right. And that gets baked in, and then when you want to change something, it’s, well, we can’t do it, the computer is built that way or the software is built that way. That is the real danger. I don’t think there’s any danger of the machines waking up one day and taking over the world. The Robotcalypse is nowhere near; I just don’t see it in the near future.

But if you have an independent thinker, by definition that entity is thinking independently. So it may disagree with what humans, whether individuals or humanity as a whole, want to do and how they want to proceed. I think humans bring their own fears to this problem, because they know each other and they know how they have treated each other.

So, when they think about…Oh, somebody who’s different from us and is capable of thinking, they’re going to do to us what we have done to ourselves and others. I think that’s what gets played out. No one knows today how a truly independent thinking entity is going to behave. We have never encountered another entity that is able to think and verbalize its thoughts.

Only humans have been able to do it so far. Among animal species there are several that are considered intelligent, but they’re not vocal in the sense of being able to use words and communicate, and they’re certainly not at a technological level to compete with human beings. So we have no experience of what’s out there. This is why many people are fearful that, should a truly independent thinking entity arise, it’s going to overturn philosophy and religion and all of the social systems that we have built, because it will have a truly independent approach.

However, if humans are building it, it could also be conditioned by the biases that humans bring. The example I would give you is raising a child: the environment in which the child is reared influences the way the child thinks and behaves. So if you have an AI that you’re really developing as a child, it may be an independent thinker, but it’s going to be influenced by human culture and the people who bring the AI up. So there may be that kind of influence, but how a truly independent thinker will behave, we simply don’t know how to approach.

It’s as simple as that. AI is beginning to impact all kinds of areas, sometimes areas that people may find surprising. For instance, if you are a consumer of news of various types and you see stories in print, many of those stories are written by computers. There’s a template that some humans set up, and then, for instance, in the business world, as soon as company earnings are announced, the data goes immediately to these computers, which fill out the story, and it goes out over the wires. This is happening within seconds.
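The earnings-story workflow described above can be sketched as a human-written template that a program fills in the moment the numbers arrive. The template wording, company, and figures below are invented for illustration.

```python
# A human-written template; the computer only fills in the blanks.
TEMPLATE = (
    "{company} reported quarterly earnings of ${eps:.2f} per share on "
    "revenue of ${revenue:.1f} billion, {direction} analyst expectations "
    "of ${expected_eps:.2f} per share."
)

def write_story(company: str, eps: float, revenue: float,
                expected_eps: float) -> str:
    """Fill the template from the raw earnings data (a simplified sketch)."""
    direction = "beating" if eps > expected_eps else "missing"
    return TEMPLATE.format(company=company, eps=eps, revenue=revenue,
                           expected_eps=expected_eps, direction=direction)

# As soon as the data feed delivers the numbers, the story can go out
# over the wires within seconds.
print(write_story("Acme Corp", eps=1.42, revenue=3.8, expected_eps=1.35))
```

No understanding is involved: the only "judgment" is a comparison of two numbers, which is why this counts as weak AI.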

There are diagnostic systems being used in the medical arena which make diagnoses of patients’ conditions. Today, those diagnoses have to be reviewed by physicians, but more often than not, the computer is actually correct. So AI is beginning to penetrate all areas of activity, and it’s going to do so more and more as we go forward.


So there are a lot of opportunities for careers as AI is developed. For instance, there are the people doing the theoretical research on what it takes to make thinking machines or expert systems. A lot of what is being used today as weak AI is an expert system that doesn’t really understand and can’t really think for itself. It’s a pattern-recognizing device, a correlation-building device. There is mathematical theory behind how to recognize patterns more easily and how to make correlations that are more robust, and there are people who work in that area. There are also people who actually program these systems.

Think of it this way: the computer sitting down with an expert in that area and, I’m anthropomorphizing this, but the computer saying, well, teach me what you know. It’s like an instructor teaching a student, and the knowledge from the subject matter expert is transferred to the computer by being programmed in a certain way. So there are people who will do that, who will actually instruct the subject matter experts on how to interact with the computers. They’re not necessarily computer scientists themselves, but they understand psychology, they understand instructional approaches, and so on. There are some jobs of that type.
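This knowledge-transfer process is roughly how classic expert systems are built: someone interviews a subject matter expert and encodes the answers as if-then rules the computer can apply. The medical-style rules and findings below are invented for illustration, not real diagnostic criteria.

```python
# Each rule pairs a condition (elicited from the expert) with a conclusion.
# The "knowledge" lives entirely in this hand-built list.
RULES = [
    (lambda f: f["fever"] and f["cough"], "possible flu"),
    (lambda f: f["fever"] and not f["cough"], "possible infection"),
    (lambda f: not f["fever"], "likely not an acute infection"),
]

def diagnose(findings: dict) -> list:
    """Return every conclusion whose condition matches the findings."""
    return [conclusion for condition, conclusion in RULES if condition(findings)]

print(diagnose({"fever": True, "cough": True}))  # -> ['possible flu']
```

The system looks knowledgeable, but it can only ever restate what the expert told it, which is exactly the weak-AI limitation described above.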

In AI systems that are built using neural networks, the neural networks are trained on data. In other words, they’re given lots and lots of examples; the network processes each example and comes up with some conclusion. If it comes up with a conclusion that is correct (correct in the sense that it’s shown an apple and it recognizes an apple), a human being will check off a box or push a button to reinforce that recognition. If it concludes something else, a human being will provide negative feedback. So there’s some work of that type, but that work itself can be automated; in principle, you could use a neural network to train another neural network. So there really isn’t a limitation on what AI can be used for.
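The feedback loop described here, no change when the answer is right and a corrective nudge when it is wrong, can be sketched with a single artificial neuron (a perceptron). The two-feature "apple vs. not apple" examples are made up for illustration.

```python
# Toy labeled examples: features might be [redness, roundness];
# label 1 means "apple", 0 means "not apple". All values are invented.
EXAMPLES = [([0.9, 0.8], 1), ([0.8, 0.9], 1), ([0.1, 0.2], 0), ([0.2, 0.1], 0)]

weights = [0.0, 0.0]
bias = 0.0
LEARNING_RATE = 0.1

def predict(features):
    """A single neuron: weighted sum of inputs, thresholded at zero."""
    activation = bias + sum(w * x for w, x in zip(weights, features))
    return 1 if activation > 0 else 0

# Each pass, the "human feedback" is the label: no update when the network
# is right, a small corrective nudge to the weights when it is wrong.
for _ in range(20):
    for features, label in EXAMPLES:
        error = label - predict(features)  # 0 when correct, +/-1 when wrong
        for i, x in enumerate(features):
            weights[i] += LEARNING_RATE * error * x
        bias += LEARNING_RATE * error

print(predict([0.85, 0.9]))  # -> 1, an apple-like input is now recognized
```

The human pushing the reinforce button is just supplying the `label`; replacing that human with another model's output is precisely how one network can be used to train another.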

And we’re sort of feeling our way forward in this. So there are a variety of jobs. It’s not all techie stuff, techie in the sense of computer scientist type stuff or engineer type stuff or mathematician type stuff. There are other disciplines, such as sociology, anthropology and psychology, that come into play as well.

As long as we have weak AI, it’s really not going to take over anything. It’s going to be deployed very, very widely, because that is its strength: it’s a general-purpose tool that can be used in pretty much any area. But there will be a role for humans, because weak AI, the kind of AI systems we have today, is really not capable of thought. To make sure that the system is being utilized in the appropriate areas and that the conclusions it’s reaching are appropriate and relevant, a human still needs to judge that.

And again, the biggest concern is that human beings, as individuals and as clusters of beings in a social system, will abdicate that responsibility, the judgment of whether something is right or wrong, to the machines. That’s really the problem. It’s a problem generated by human beings; it’s not a problem with the machine. So the people who worry about AI taking over the world are just off on the wrong track, in my opinion.

For students, for graduates, or for people in general to function in a world where AI is used in a wide variety of areas, it really takes a good, broad education. Understand something about many different areas. Be a sound thinker yourself. I mean, you can be a technical expert, but that’s not what I’m talking about. I’m talking about really understanding what is important in a given situation. What are the consequences of actions that will be taken? How widely will those consequences be distributed, both in time, meaning as time goes forward, and across space, meaning in different regions across the world, if certain technologies are applied?

What are the social consequences of that? What will be the impact on the economy? What will be the impact on what it means to be human, on the dignity of labor, or on being able to function as a respected member of society? How are we going to treat all of that? That’s the responsibility of the human.

The best way to prepare for it is to have a good general understanding of humanity, of society, of social organization, of philosophy, of literature, of technology, and also of science. It’s that holistic aspect of being able to work with knowledge, and understanding how, as social entities, we work together, that’s really important.

Because much of the detail-level work, the jobs being done today, can be automated; AI will be doing that. Looking to have a niche where you are doing something specific is probably not going to be very profitable, but generalized thinking is going to be much more important. It’s going to take on increasing importance simply to deal with the fact that a lot of the work is now automated and being done by machines.

