We all enjoy AI-takeover movies where Artificial Intelligence (AI) becomes the dominant form of intelligence. However, we haven’t experienced such a situation in reality with the current state of technology. Instead, AI helps humans automate tasks to save time. For example, AI is used massively in manufacturing to boost production rates and to achieve higher accuracy in production processes. Furthermore, AI plays a huge role in situations where humans can easily be replaced with more efficient computer-based systems. Consider a parking area where many vehicles come and go: systems based on object detection and pattern recognition can replace the human attendants who record and assign a location for each vehicle.
Undoubtedly, AI is a great innovation for humanity. However, like every modern technology, it has a dark side too. At its core, AI is about making decisions the way a human does, and those decisions can be made in two ways: using supportive conditions, such as hand-crafted features and rules, provided by humans (classical machine learning), or by learning patterns directly from raw data (deep learning).
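To make that distinction concrete, here is a minimal, hypothetical sketch: a rule-based spam check whose "supportive conditions" (keywords and a threshold) are supplied by a human, versus a tiny one-feature perceptron that learns its decision boundary from labeled examples. Deep learning extends the second idea with many layers; the word list, data, and threshold below are invented purely for illustration.

```python
def rule_based_decision(email_text):
    # Classical style: a human supplies the supportive conditions
    # (which words matter, and how many are too many).
    suspicious_words = {"winner", "free", "prize"}
    score = sum(word in email_text.lower() for word in suspicious_words)
    return score >= 2  # hand-picked threshold

def train_learned_decision(examples, labels, epochs=100, lr=0.1):
    # Learned style: the decision boundary is fitted from raw examples.
    # Here each example is a single number (e.g. a count of odd words).
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(examples, labels):
            pred = 1 if w * x + b > 0 else 0
            w += lr * (y - pred) * x  # nudge weights toward correct answers
            b += lr * (y - pred)
    return lambda x: w * x + b > 0
```

In the first function, a human decided everything in advance; in the second, only the training data was provided and the threshold emerged on its own.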
AI-powered Rabbit Holes
Nowadays, automatic suggestions can be seen everywhere. These features are really helpful when the suggestions align with what we actually intend to do. For example, if a blogging platform automatically adds tags based on the content of an article, that is a great time-saver. Gmail’s smart-compose and smart-reply features are great too. So, what’s wrong with AI-powered suggestions? Well, the problem starts when they exploit the user’s attention instead of serving the user’s intention.
A few years ago, I watched a few review videos on YouTube about a specific mobile phone model before buying one. Even after I had bought the phone, I kept unintentionally watching review videos, spending a lot of time on YouTube. These kinds of time-killers are known as “rabbit holes”: endless loops that humans can be trapped in by AI. Moreover, some recommendation systems may contain biased algorithms that turn the user into a puppet of a particular business entity.
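The loop I fell into can be sketched with a toy content-based recommender. This is an illustrative assumption about how such systems behave, not YouTube’s actual algorithm: it ranks a hypothetical catalog by tag overlap (Jaccard similarity) with whatever was just watched, so watching one review video keeps surfacing more review videos.

```python
def jaccard(a, b):
    # Similarity between two tag sets: shared tags over all tags.
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend(watched_tags, catalog, k=2):
    # Rank catalog entries by tag overlap with the last watched video.
    ranked = sorted(catalog.items(),
                    key=lambda item: jaccard(watched_tags, item[1]),
                    reverse=True)
    return [title for title, _ in ranked[:k]]

# A made-up catalog for the example.
catalog = {
    "Phone X review":     {"phone", "review", "tech"},
    "Phone X vs Phone Y": {"phone", "review", "comparison"},
    "Cooking pasta":      {"cooking", "food"},
}
```

Feed it the tags of one phone review and it hands back two more phone reviews; the cooking video never surfaces, and the loop sustains itself.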
Deepfake Illusions
Can you remember the hyper-reality scene from the movie eXistenZ? Probably not, because it is a highly underrated masterpiece. In that scene, a character becomes confused about what is reality and what is simulation. Perhaps you have felt the same after spending a long time in a highly realistic Virtual Reality (VR) environment. Similarly, deepfakes can trick you into a realistic perception.
A deepfake is synthetic media in which one person’s likeness is mapped onto another person. In other words, we may see a video of a well-known person saying something strange, when in fact another person is acting on behalf of the original and AI generates the likeness. Those of us who know about these AI tricks simply won’t believe every video on the internet, but not everybody is so cautious. Most people who have never heard the term “deepfake” will take such videos at face value.
Simply show this video to a person who doesn’t know much about AI but does know about Elon Musk.
Privacy is open-sourced
Privacy is a major concern of the modern online community. The term commonly used for personal data nowadays is PII (Personally Identifiable Information). Web services usually store and process PII, mostly for verification purposes. However, users have the right to control their PII as they wish, and that is why laws like the GDPR play a big role in privacy rights.
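As one concrete illustration of handling PII before it is stored, here is a minimal redaction sketch. The two regex patterns are simplistic assumptions made for this example; real PII detection covers far more (names, addresses, national IDs), and redaction alone does not make a service GDPR-compliant.

```python
import re

# Hypothetical patterns for two common PII kinds; intentionally simple.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d\s-]{7,}\d")

def redact_pii(text):
    # Replace matches with placeholders so logs/analytics never see raw PII.
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text
```

The design choice here is redaction at the collection boundary: downstream systems only ever receive the placeholder tokens.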
“Data is the new Oil” — Clive Humby
The main issue with modern AI is the lack of transparency around privacy. After using one website, we often notice that the next website we visit is already personalized based on our activity on the previous one, which raises doubts about how our data travels. Of course, AI has existed for several decades, but it gained its recent hype with the help of the enormous amounts of data being collected every second, including PII. In other words, big data has become the new oil that feeds AI machines.
Technology is heading towards a point where people can clone a person as an AI-powered robot, putting humanity at serious risk. In a few years, we won’t be surprised if a group of non-human humans starts a war with humans, or if someone marries a non-human human.
Humanity’s weak point, and the reason non-human humans may become popular, is that many people find it harder to make contact with real humans than with non-human ones. So, have you ever asked Siri something that you would never ask a person?
You may start thinking nuclear weapons were the lesser threat when, someday, someone starts building AI-powered war machines.
Technology keeps evolving, and that is inevitable. If deepfake illusions go viral, AI-powered anti-deepfake software will appear, perhaps even as native web browser features. Every entertainment service will embed intelligent recommendation modules into its products to drain everyone’s time. However, we can escape this bad side by setting up our own rules. For example, I don’t use any personalized recommendation system to find movies. Instead, I use online forums to discover good movies, and I end up enjoying great films without wasting time on mediocre ones.
Therefore, we don’t need to fear that AI has a dark side; so does the web. We can benefit from AI just as we benefit from the web.