
The dangers of Artificial Intelligence

Artificial Intelligence (AI) is not bad; the major concern is how humans use it.

Artificial Intelligence was a vague term just a few years ago. No one knew what it really meant or what it really did. But 2018 saw a boost of artificial intelligence in almost everything, from speakers to air conditioners, smartphones to cars. With AI and Machine Learning (ML) working hand in hand, tasks can be automated with little human intervention.

Today we use AI everywhere: from common home appliances such as smart vacuum cleaners that clean on their own and recharge themselves, to driverless cars that take you to your destination at the push of a button; from smartphones that automatically detect the scene and work some sorcery to get the best image out of a puny sensor, to smart speakers and android robots that can converse with you like another human being. While we look forward to this technology getting better and more flawless in the future, there are a few areas of AI that could be equally dangerous to mankind.

AI is a set of instructions born from an algorithm. While the instructions are at no fault, the algorithm is the main culprit. Confused? AI runs on algorithms and gives an output as a set of commands after processing the input. For example, ask Siri, Google Assistant or Alexa for the time. The assistant processes the voice input, crunching the audio into data using speech-recognition algorithms that convert voice commands into data commands. It then crunches that data into an output and speaks the answer through the speaker using a synthesized voice. The same goes for the autopilot in an airplane or a self-driving car: it grabs information from cameras, sensors and a lot of other inputs to know the way ahead and steer toward its destination.
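As a rough illustration, here is a minimal Python sketch of that assistant pipeline. The function names and the hard-coded transcription are invented for illustration; a real assistant runs trained speech and language models at each stage.

from datetime import datetime

def transcribe(audio_bytes: bytes) -> str:
    """Speech-to-text stage. A real assistant runs a trained
    acoustic/language model here; we hard-code the result."""
    return "what time is it"

def parse_intent(text: str) -> str:
    """Map the transcribed text to a known command."""
    if "time" in text:
        return "GET_TIME"
    return "UNKNOWN"

def execute(intent: str) -> str:
    """Produce the answer for the recognized intent."""
    if intent == "GET_TIME":
        return datetime.now().strftime("It is %H:%M.")
    return "Sorry, I didn't catch that."

def speak(response: str) -> None:
    """Text-to-speech stage; a real device synthesizes audio."""
    print(response)

# End-to-end: audio in, spoken answer out.
speak(execute(parse_intent(transcribe(b"...raw audio..."))))

Every stage in this chain is algorithmic, which is exactly why a flaw anywhere in the chain propagates straight to the output.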

While AI is constantly getting better as it learns along the way, there could be dangers lurking for us in the future. We have seen fictional movies showing how AI-based systems take over the world and killer robots put an end to mankind. We could see those days nearing soon unless we make sure to limit the power of AI before it is too late. If we are not careful and vigilant, AI could be manipulated by bad actors, and who knows what could be in store for us. Let's look at a few AI-based platforms and the dangers that could lurk within each of them; if their creators do not attend to these carefully, they could spell disaster.

Autonomous cars:

AI is the best example here. Using sensors, cameras and radar, the car can drive along a given route by sensing and viewing its surroundings. A handful of companies (such as Google, Uber and a few others) are experimenting with AI-driven cars, and some are already running on the streets in a few countries.


However, self-driving cars are not completely safe, at least not yet. You may have heard of the Uber incident in which a woman was run down by a self-driving car that failed to spot her in time. While self-driving cars are probably the future mode of transport, today's roads and pedestrian rules are not designed for autonomous traffic. It will take a lot of time and investment to make roads safe and compliant enough for self-driving cars. Roads will need sensors, pedestrians will need to adhere to strict rules, and much more still has to fall into place. Additionally, AI needs a lot of computing power and processing time to reach the right decision and respond immediately. A single bug in the algorithm could throw the AI completely off, turning the car into a rampaging metal beast. Fortunately, today's driverless cars still need a human behind the wheel for emergency takeover.
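To see how one mistuned constant can matter, here is a toy Python sketch of an emergency-braking decision. The fusion rule and the threshold value are invented assumptions, not how any real car works.

BRAKE_THRESHOLD = 0.5  # assumed confidence level for emergency braking

def fuse(camera_conf: float, radar_conf: float) -> float:
    """Naive sensor fusion: trust the more confident sensor.
    Production systems use far more sophisticated filters."""
    return max(camera_conf, radar_conf)

def decide(camera_conf: float, radar_conf: float) -> str:
    confidence = fuse(camera_conf, radar_conf)
    # A threshold set too high (a single mistuned constant)
    # means a real pedestrian is never acted upon.
    return "BRAKE" if confidence >= BRAKE_THRESHOLD else "CONTINUE"

print(decide(camera_conf=0.3, radar_conf=0.45))  # -> CONTINUE: missed!
print(decide(camera_conf=0.7, radar_conf=0.45))  # -> BRAKE

In this toy, a pedestrian the sensors are only mildly sure about is simply driven past; the same failure mode, at far greater complexity, is what makes edge cases so dangerous.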

Face recognition:

Recently, the London police once again placed cameras around the city's streets to test live, real-time video-based face detection on its citizens. It was an announced trial, and citizens were well informed about it. The drive was meant to check how a face-detection algorithm could be used in real time to search for missing people, wanted criminals and terrorists.


While this could be a great way to keep a community safe, hackers could make the best use of this technology and probably sell it to the underworld to find their next victim easily. Face recognition has also made a big move into smartphones today. With the best of it seen on Apple's latest iPhones, others are also implementing it in their phones to unlock them and make online payments. Soon, we could see fingerprint biometrics fade away in favour of face recognition. In the future the technology will also enter conventional appliances such as televisions (the TV will know who is watching and personalize the content accordingly) and cars (where your face authorizes the car to start). But we have also seen hackers attempt to break the technology by building fake faces, using simple and cheap materials such as masks and silicone to replicate the face and skin. If this technology is not made robust enough, we could soon see a dystopian future where privacy and security are at stake.
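Under the hood, most face recognition boils down to comparing numeric "embeddings" of faces. Below is a minimal Python sketch of that comparison; the embedding values and the match threshold are made up for illustration. A mask attack succeeds precisely by producing a candidate embedding close enough to the enrolled one.

import math

def cosine_similarity(a, b):
    """Standard cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

MATCH_THRESHOLD = 0.9  # assumed; vendors tune this carefully

enrolled = [0.12, 0.85, 0.33, 0.47]   # embedding stored at enrollment
candidate = [0.10, 0.88, 0.30, 0.45]  # embedding from the live camera

score = cosine_similarity(enrolled, candidate)
print("unlock" if score >= MATCH_THRESHOLD else "reject", round(score, 3))

Whether the candidate came from a live face or a well-made silicone mask, the system only ever sees the numbers, which is why liveness detection matters so much.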

Deepfake:

Until now, voice recordings could be doctored and voices mimicked by voice artists, and photos could be photoshopped or morphed with almost flawless results as the technology got better. However, evolving AI technology can now manipulate videos too. Recently, a few videos emerged in which AI was used to create fakes; in the most prominent ones, a woman's face was cleverly morphed onto a porn star's naked body, and the clip looked almost real. Take, for example, the video below, which uses AI and related data to create a fake video that looks almost real.

And here is another example: a fake video featuring Barack Obama, manipulated to say something he never did.

GANs (Generative Adversarial Networks) can generate photorealistic faces of any age, gender or race, and graphics processors, with their immense processing power, can easily bring these to life based on AI algorithms. Neural processing engines can help AI take a darker turn if they are used for the wrong ends. A recent news report mentioned a US defence agency investing in new methods to detect deepfakes. However, with unethical hackers gaining funds from the dark web and the underworld, the probability of deepfakes will only grow.
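The core idea of a GAN is an adversarial loop: a generator tries to produce output that a discriminator cannot tell from real data. The toy Python sketch below mimics that loop in one dimension; the fixed scoring rule stands in for what would normally be a trained discriminator network, and all the numbers are invented.

import random

REAL_MEAN = 5.0   # centre of the 'real' data distribution (toy assumption)
gen_mean = 0.0    # the generator's starting guess
STEP = 0.05

def discriminator(x: float) -> float:
    """Rates how 'real' a sample looks (1.0 = perfectly real).
    In a true GAN this is a trained network, not a fixed rule."""
    return 1.0 / (1.0 + abs(x - REAL_MEAN))

for _ in range(500):
    fake = random.gauss(gen_mean, 1.0)
    # Generator update: move in whichever direction fools the critic more.
    if discriminator(fake + STEP) > discriminator(fake - STEP):
        gen_mean += STEP
    else:
        gen_mean -= STEP

print(round(gen_mean, 2))  # drifts toward ~5.0: fakes now resemble real data

Scaled up from one number to millions of pixels, this same arms race is what makes deepfake faces so hard to distinguish from photographs.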

Military AI:

Google was in the news last year for helping the Pentagon with AI-based drones for military operations. The pilot project set off alarm bells among Google's own employees when they found out about the involvement. The effort, known as Project Maven, used AI to detect and identify objects in drone footage, and its ethical use of machine learning was questioned; the concern was that the technology could be used to kill innocent people. While Google denied that the technology was used for combat operations, it reportedly abandoned the project in the end.


Project Maven used AI and ML to detect vehicles and various other objects, taking the burden off analysts and providing the military with advanced computer vision. It could detect and identify up to 38 categories of objects in footage from a drone's camera. While the project could be seen as a benefit to the military in keeping the country safe, autonomous weapons could also spell disaster if they fall into the wrong hands. Campaigns are under way to ban the use of AI in autonomous weapons, arguing that the decision to take a human life should never be delegated to a machine. “Thousands of AI researchers agree that by removing the risk, attributability, and difficulty of taking human lives, lethal autonomous weapons could become powerful instruments of violence and oppression, especially when linked to surveillance and data systems,” says the Future of Life Institute in a pledge against the use of AI-based autonomous weapons.
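On the technical side, what Maven-style detection amounts to is running a trained vision model over every frame and surfacing only high-confidence hits for analysts. The Python sketch below illustrates that filtering step with invented detections and an assumed confidence cutoff; it is a conceptual sketch, not Maven's actual pipeline.

# Hypothetical detections for one frame of drone footage:
# (label, confidence, bounding box). A real system runs a trained
# object detector over every frame to produce these.
frame_detections = [
    ("car",      0.91, (120, 40, 60, 30)),
    ("person",   0.48, (300, 80, 20, 45)),
    ("building", 0.88, (10, 10, 200, 150)),
]

CONFIDENCE_CUTOFF = 0.6  # assumed; tuning this decides what gets flagged

flagged = [(label, conf) for label, conf, _box in frame_detections
           if conf >= CONFIDENCE_CUTOFF]
print(flagged)  # analysts only review what the model surfaces

Note that the low-confidence "person" simply disappears from the analyst's view. When such filtering feeds decisions about force, that silent omission is exactly the kind of failure the pledge warns about.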

Online information and manipulation:

Facebook made big news last year. We saw how Facebook and Cambridge Analytica came under fire when they were exposed for manipulating elections using the user database shared by the social network. With this data, artificial-intelligence algorithms decided what news and information should be shown to the public. Using this technique, fake and unrealistic news gains front prominence and can manipulate something as sensitive as election polls.


AI was used to amplify misinformation and isolate citizens with differing views from one another. With fake news and information flooding online channels such as instant-messaging platforms and social media, it is now up to AI itself to help detect and eradicate the nuisance. However, AI is still at a nascent stage, where it fails to understand what is being circulated in videos, text or photos. AI is presently used in almost all online platforms that serve information, bringing to the front the most read, most circulated and freshest content without any human intervention. But with fake information riding easily on these platforms, it is not easy for AI to tell fake information from the truth.
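The core problem is easy to see in miniature: a feed ranked purely on engagement will happily promote fabricated content. The Python sketch below uses invented posts and a deliberately naive scoring rule to show this; real ranking systems weigh many more signals, but the incentive is the same.

posts = [
    {"title": "Sober policy analysis", "shares": 1_200, "verified": True},
    {"title": "Outrageous fake claim", "shares": 95_000, "verified": False},
]

def engagement_score(post):
    """Naive feed ranking: pure engagement, blind to truthfulness."""
    return post["shares"]

feed = sorted(posts, key=engagement_score, reverse=True)
print([p["title"] for p in feed])  # the fake claim lands on top

Because outrage spreads faster than sober fact, any ranking rule that rewards spread alone will keep handing fake news the top slot.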

Privacy nightmare:

While AI could be used to solve many of the world's problems, its intrusion into our privacy is another issue, one that could spell disaster for people in the long term. Early last year, it was reported that China had built a behavioural monitoring system for its citizens, using AI-based face-recognition cameras around the country to watch how citizens behave and give each of them an individual score.


All of its 1.4 billion citizens are planned to be given a score based on their behaviour: whether they smoke, drink, jaywalk, get into fights, and a lot more. Their future will be based on this score, which could ban them from purchasing property or flying abroad, or even decide which school their kids are allowed to attend. CBS News reported in April that a journalist was denied a flight because he was on a list for ‘untrustworthy behaviour,’ and that he could not buy property or even send his kids to a private school. With every citizen stamped with a social credit score, it is a way for China to ‘purify’ its society. In other parts of the world, we see AI doing similar checks; credit scores based on credit-card and tax-payment histories, for example, decide how you are treated for your next home or car loan. While AI can be used for good, a simple error or manipulation in an AI algorithm can ruin someone's life.
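A toy sketch makes the fragility obvious: in a rule-based scoring system, one wrong record in the input flips a real-world outcome. The penalty values, field names and thresholds below are invented for illustration, not taken from any actual system.

# Toy rule-based behaviour score; weights are invented assumptions.
PENALTIES = {"jaywalking": 10, "unpaid_tax": 50, "fight": 30}
START_SCORE = 1000
FLIGHT_BAN_BELOW = 950

def score(citizen_events):
    """Subtract a penalty for every recorded 'bad' event."""
    return START_SCORE - sum(PENALTIES.get(e, 0) for e in citizen_events)

# One mislabeled event in the input data triggers a real-world ban.
events = ["jaywalking", "unpaid_tax"]   # the tax record is a data error
s = score(events)
print(s, "flight banned" if s < FLIGHT_BAN_BELOW else "ok")

Here a single erroneous tax record drops the citizen below the travel threshold; no human ever reviews the arithmetic unless the victim can appeal.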

While Artificial Intelligence (AI) is not bad in itself, the major concern is how humans use it. We should not be afraid of AI, but humans tend to fear what they cannot understand, and that is pretty much what we see with the modern machine: robots. Many people freak out over robots that are large and do weird things. Well, AI at present is about as capable as a toddler; it still has a lot to learn. With more than two decades of data shared by users and collected by online tech companies, there may be enough data to build algorithms for AI, but not enough to build a 'super intelligence' that can take over mankind. However, for all the AI being used for good, it is misused AI that we need to be careful of. Professor Stephen Hawking, SpaceX chief Elon Musk, Cleverbot creator Rollo Carpenter, and Bill Gates are among those who have warned about AI getting the better of humanity, fearing that the future could be disastrous and the human race could be at stake if a full artificial intelligence is created.

Photos: Pixabay. (for representational use only)
