Artificial Intelligence By Ilan Levy
Clarification: This article reflects only the writer's opinion about the use of artificial intelligence for transportation-related issues.
John McCarthy defined it as follows: "It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable." Human beings are supremely different from all other species.
A.I. tries to use the "wisdom of the crowds" in order to teach machines how to face challenges.
Most of the population on Earth knows with complete certainty that humans were made by God, and that is why we are perfect.
The A.I. believers try to convince everyone that if they have enough samples of human behavior in specific events, they can "filter out" the mistakes and build machines that will know how to react to daily events, thereby helping people live better lives.
Since samples are taken at long intervals, it is necessary to fill the gaps with synthetic data.
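To make this concrete: the article does not say how the gaps are filled, so the sketch below assumes simple linear interpolation between the real samples. The function name and data are illustrative, not the method any actual provider uses.

```python
# Illustrative sketch only: the article does not specify how gaps are
# filled, so linear interpolation between real samples is an assumption.

def fill_gaps(samples):
    """Linearly interpolate missing (None) readings between real samples."""
    filled = list(samples)
    for i, value in enumerate(filled):
        if value is None:
            # Find the nearest known samples before and after the gap.
            lo = max(j for j in range(i) if filled[j] is not None)
            hi = min(j for j in range(i + 1, len(filled)) if filled[j] is not None)
            frac = (i - lo) / (hi - lo)
            filled[i] = filled[lo] + frac * (filled[hi] - filled[lo])
    return filled

# Real speed samples taken at long intervals, with gaps in between:
print(fill_gaps([60.0, None, None, 90.0]))  # [60.0, 70.0, 80.0, 90.0]
```

The synthetic values look plausible, which is exactly the author's point: they create the impression of dense data where only sparse measurements exist.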
Now, all that is needed in order to score "driving behavior" is to measure the user's distance from the average (Figure 1). The red line is the average of all the groups, the blue dots are the user's behavior, and the "score" is the distance from the average.
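A minimal sketch of this scoring idea, under assumptions of my own: the article gives no formula, so the mean-absolute-distance metric below is one plausible reading of "distance from the average", not the actual method.

```python
# Illustrative sketch: score a driver by distance from the group average,
# as described for Figure 1. The mean-absolute-distance metric is an
# assumption; the article does not state which distance is used.

def driving_score(user_samples, group_average):
    """Average absolute distance between the user's samples (blue dots)
    and the group average (red line); lower means closer to the crowd."""
    distances = [abs(u - g) for u, g in zip(user_samples, group_average)]
    return sum(distances) / len(distances)

group_avg = [50.0, 55.0, 60.0]   # the "red line"
user      = [52.0, 51.0, 66.0]   # the "blue dots"
print(driving_score(user, group_avg))  # 4.0
```

Note the built-in bias the author criticizes: a driver is rewarded simply for being close to the crowd, regardless of whether the crowd drives well.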
Does A.I. know better than us?
No, it cannot be better than its sources.
Is it laziness, a technological limitation, or outdated thinking that does not recognize individual intelligence? (Figure 2) Most people have a low level of intelligence and their driving is not perfect, so it is assumed that they need technological assistance. But when autonomous decisions are based on the behavior of those same deficient drivers, there is an apparent conflict.
I will try to present here a few examples of why we must limit the use of A.I. in some cases.
Consider the use of WAZE to navigate in places foreign to us, or running it even though we know the route. When the navigation app leads us the wrong way to the destination, or along a route that is not correct according to the best of our experience, we ignore it and sometimes even argue with "him".
In my experience, if Waze had autonomous control, from time to time it would drive the car into buildings, or worse.
A.I. can save lives by helping patients take their medication accurately, whether through reminders or automatically.
A.I. can save the lives of diabetics by balancing their blood sugar, or of heart patients by monitoring their heart rate.
A growing number of data providers have a very strong belief that the soul of the user must be saved as well. Is it possible for the intelligence they produce to manage our lives according to their beliefs?
And if our beliefs are in the minority, would it be possible for AI to see this as an exception that needs to be eliminated?
A.I. is a blessing for many purposes; it can make the lives of many people better and easier. But unfortunately, with every blessing comes a curse, and users of the method must research carefully before adopting it, because in some areas of our lives it is better to use the subjective method.
The conclusion from the above is that, from time to time, A.I. can give "below average" people some advantages, and smart people disadvantages.
What if the famous "human wisdom" will also be programmed to save our souls?
Will the autonomous car start only after a prayer?
Will the car autonomously take us, every day, to the mosque, church, or synagogue at prayer times?
Is it worth collecting all this "knowledge"?
After all, everything that is considered a "fact" by one person will be considered a "sin" by another.
Most people are sure that they "know", or they follow others who "know". As long as people are not united on what can be considered fact, the majority will always oppose.
Humanity is so proud of its many shades, and of its ability to accommodate different beliefs at least some of the time, that the very idea of producing "universal insights" is ridiculous.
The advantage of the subjective "Driver Risk Index"
It is based on limited personal information, in contrast to the currently accepted methods based on collective big data and averages.
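The article does not define the subjective index. One way to read it is that a driver is scored against their own past behavior rather than against the crowd. The sketch below is hypothetical under exactly that assumption; the baseline-and-deviation formula and the function name are mine, not the author's method.

```python
# Hypothetical sketch: a "subjective" risk index compares a driver only
# with their own personal baseline, not with collective averages.
# The baseline-and-deviation formula is an assumption, not the author's
# actual "Driver Risk Index".

def subjective_risk_index(personal_history, today_samples):
    """Deviation of today's driving from this driver's own baseline."""
    baseline = sum(personal_history) / len(personal_history)
    deviation = sum(abs(s - baseline) for s in today_samples) / len(today_samples)
    return deviation

history = [58.0, 60.0, 62.0]                          # this driver's own past
print(subjective_risk_index(history, [60.0, 64.0]))   # 2.0
```

The design difference matters: here an unusual but consistent driver is not penalized for deviating from strangers, only flagged when they deviate from themselves.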
The limitations of A.I.
The biggest problems of A.I. builders are:
1. Since the "collections of information" are not fast enough, it is necessary to add a lot of synthetic data in order to create the illusion of a lot of information.
2. The very fact of combining, in the same "stew", the information of users who differ in age, health, behavior, education, working hours, type of vehicle, load, "courage" or fear, and driving experience raises the question: does everyone have to drive exactly the same?
And how is "the same thing" determined?
Human stupidities are collected, and the idea that bigger data gives better results is ridiculous, because the same mistakes are present in data of every size.