A 2018 MIT study examined facial recognition software used by police to detect criminal suspects. It turned out that the software misclassified about a third of dark-skinned women as men, compared with an error rate of less than 1 percent for white men. The error stems from the database the software was trained on, which contained hardly any dark-skinned women.
An AI model is trained on the available data and bases its decisions on that data. If there is a bias in the training database, the AI can reach unethical conclusions.
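To make this mechanism concrete, here is a minimal, hypothetical sketch in Python: a toy nearest-centroid gender classifier trained on synthetic data that contains almost no dark-skinned women. The feature names, numbers, and the classifier itself are all invented for illustration and have nothing to do with the actual software in the MIT study; the point is only that underrepresenting a group in the training data yields a much higher error rate for that group.

```python
import random

random.seed(0)

# Hypothetical synthetic data: each sample is (skin_tone, face_feature, gender).
# "face_feature" is an imagined measurement that correlates with gender;
# all values here are illustrative, not taken from any real study.
def make_samples(skin_tone, gender, n):
    feature_mean = 0.7 if gender == "woman" else 0.3
    return [(skin_tone + random.gauss(0, 0.05),
             feature_mean + random.gauss(0, 0.1),
             gender)
            for _ in range(n)]

# Biased training set: many light-skinned men and women and dark-skinned men,
# but almost no dark-skinned women.
train = (make_samples(0.2, "man", 100) + make_samples(0.2, "woman", 100) +
         make_samples(0.9, "man", 100) + make_samples(0.9, "woman", 2))

# Nearest-centroid classifier: average all "man" and all "woman" training
# points, then label a new face by whichever centroid is closer.
def centroid(gender):
    pts = [(s, f) for s, f, g in train if g == gender]
    return (sum(p[0] for p in pts) / len(pts),
            sum(p[1] for p in pts) / len(pts))

centroids = {g: centroid(g) for g in ("man", "woman")}

def predict(skin, feature):
    return min(centroids,
               key=lambda g: (centroids[g][0] - skin) ** 2 +
                             (centroids[g][1] - feature) ** 2)

def error_rate(samples):
    wrong = sum(1 for s, f, g in samples if predict(s, f) != g)
    return wrong / len(samples)

light_women = make_samples(0.2, "woman", 200)
dark_women = make_samples(0.9, "woman", 200)
print(f"error on light-skinned women: {error_rate(light_women):.2f}")
print(f"error on dark-skinned women:  {error_rate(dark_women):.2f}")
```

Because the "woman" centroid is averaged almost entirely from light-skinned examples, dark-skinned women end up closer to the "man" centroid and are mostly misclassified, while light-skinned women are classified almost perfectly. The model is not malicious; it simply reflects the data it was given.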
OpenAI, the company that brought us ChatGPT and DALL-E, was founded because a number of people in Silicon Valley believed that artificial intelligence was dangerous, and that it should therefore be open to the public so that everyone could understand what it does. The danger we fear is that the machine will be able to manipulate us and make us do things that are not in our interest. Why would software want to make us do such things? It may happen because people or governments want to play with our minds, for good reasons (in their opinion) or for less good ones; it may also happen because the information the system is based on is wrong, so that wrong messages are created. There is a known case in which a student who asked to learn about the Holocaust received neo-Nazi messages, and another in which a teacher who wanted to teach about Nazism was blocked because the content was prohibited.
True, we are being manipulated even today. However, unlike in business, where we are persuaded to buy unnecessary things, or in politics, where we are persuaded to vote based on various motives, in education the effect is of a different magnitude, both because the customers, i.e. the students, are young and because the effect may be more profound.
As I wrote in the post “AI – an inevitable technology“, I believe that at some point artificial intelligence will impose itself on the education system. When this happens, there is a good chance it will be used in various ways to manipulate students into believing or acting against accepted values. It can be argued that this is not the education system's problem to solve: governments should come together to create regulations that would prevent much of artificial intelligence's potential for harmful manipulation. However, even in an optimistic scenario, I still think the education system will have to go the extra mile to protect its students.
In a world where any message can be easily created in any medium (image, article, video), it is easy to convince people that the message is truthful. In my opinion, the only ones who will be able to withstand this flood are the educators (teachers and parents) who teach students to use common sense (which is not easy), to apply critical observation, to act when action is needed, and to show humanity (love, compassion, fear, passion…). In this world the teacher should be an educator who teaches students to distinguish between good and bad, and leave the machine to focus on the technical part of learning.
I would assign as many teachers as possible to subjects such as philosophy, art, sports, citizenship, and current affairs (and if this reminds you of ancient Greece, it is not by chance). I have no problem in principle with language, science, or history studies, but in my opinion the computer will very quickly do that better than we do.
Unfortunately, I am not inventing anything new. Educators such as Zvi Lamm and Roni Aviram in Israel, and many others around the world, have been talking for decades about the need for the education system to focus on education for a common vision, for basic humane values agreed on by the people and the state, for fostering interpersonal relationships, and for rational and critical thinking… yet teachers spend most of their time transmitting information. This is a historic opportunity to let the machine focus on the technical part of learning and let the teacher focus on the intellectual and emotional parts.
Back to OpenAI, which started as a non-profit organization with altruistic motives. The company eventually abandoned those moral motives and became a commercial company (almost); if it succeeds, it will be yet another conglomerate whose main business is making money.