AI Bias And Related Cyber Security Issues

Written by Deepak Rajgopal

August 16, 2019

AI is advancing at a fast pace: many AI-based systems and tools are already emerging or in development and will soon hit the market. The business outlook for AI is exciting, but a darker side is emerging as well – the fear of ‘bias‘ hanging over these AI-based systems.

Industry experts argue that an element of human bias is inevitable, and that such bias typically enters through:

  • Algorithms that make up the AI system
  • Data on which the AI system is built
  • Teams that build the AI systems

The peril of a biased AI program is that it can shift the entire focus to the wrong priorities, or surface incorrect patterns while the real threats slip by unnoticed. Building a good AI system requires large volumes of training data to learn from, and a deeper level of trust that the system is fair and unbiased.
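To make the data-side concern concrete, here is a minimal sketch of one common data-level bias check, the disparate impact ratio (the favorable-outcome rate for one group divided by that of another); the tiny dataset, the group names and the 0.8 threshold are illustrative assumptions, not taken from any of the toolkits mentioned later in this article.

    # Minimal sketch of a data-level bias check on hypothetical training labels.
    # The records, groups and the 0.8 rule-of-thumb threshold are illustrative
    # assumptions, not taken from any specific fairness toolkit.
    records = [
        {"group": "A", "label": 1}, {"group": "A", "label": 1},
        {"group": "A", "label": 0}, {"group": "A", "label": 1},
        {"group": "B", "label": 0}, {"group": "B", "label": 1},
        {"group": "B", "label": 0}, {"group": "B", "label": 0},
    ]

    def favorable_rate(group):
        # Share of records in the group that carry the favorable label (1).
        rows = [r for r in records if r["group"] == group]
        return sum(r["label"] for r in rows) / len(rows)

    disparate_impact = favorable_rate("B") / favorable_rate("A")
    print(f"Disparate impact (B vs A): {disparate_impact:.2f}")

    # A common rule of thumb treats ratios below 0.8 as a sign of skewed data.
    if disparate_impact < 0.8:
        print("Warning: favorable outcomes are under-represented for group B.")

A check like this says nothing about why the skew exists, but it is the kind of early signal that prompts a closer look at how the training data was collected.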

The role of AI systems can be significant in the following ways:

  • AI’s predictive analytics can provide early warnings of cyber attacks
  • Early intrusion detection and monitoring (a simple anomaly-detection sketch follows this list)
  • Reinforcing strategic and tactical planning of cyber operations
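To make the intrusion-detection point concrete, below is a minimal, hypothetical sketch that uses scikit-learn's IsolationForest to flag anomalous connection records; the synthetic features (bytes sent, connection duration) and the contamination rate are assumptions made purely for illustration, not a description of any production detector.

    # Toy anomaly-based intrusion detection sketch (illustrative assumptions only).
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(42)

    # Synthetic "normal" traffic records: [bytes_sent, duration_seconds]
    normal = rng.normal(loc=[500, 2.0], scale=[100, 0.5], size=(1000, 2))
    # A handful of injected anomalies, e.g. large exfiltration-like transfers
    anomalies = rng.normal(loc=[50000, 30.0], scale=[5000, 5.0], size=(10, 2))
    traffic = np.vstack([normal, anomalies])

    # Fit on the combined data and flag outliers (-1 = anomaly, 1 = normal)
    detector = IsolationForest(contamination=0.01, random_state=0)
    labels = detector.fit_predict(traffic)

    print(f"Flagged {np.sum(labels == -1)} of {len(traffic)} records as anomalous")

In practice the same idea is applied to far richer telemetry, and the flagged records feed an analyst's triage queue rather than an automatic block.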

So what are companies doing about it? Engineers at Microsoft are working to mitigate AI bias, and similar efforts include the What-If Tool at Google, the AI Fairness 360 toolkit at IBM, Fairness Flow at Facebook, and a fairness tool at Accenture. Facebook is also collaborating with the Technical University of Munich (TUM), one of the top-ranked universities worldwide in the field of artificial intelligence, to support the creation of an independent AI ethics research center.
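For readers who want to experiment with one of these toolkits, the hypothetical snippet below shows roughly how IBM's open-source AI Fairness 360 (aif360) package can compute the same kind of disparate impact number on a labelled dataset; the DataFrame, column names and group encoding are invented for illustration, and details of the API may vary between aif360 versions.

    # Rough sketch of a dataset bias check with IBM's AI Fairness 360 (aif360).
    # The DataFrame, column names and group encoding are illustrative assumptions.
    import pandas as pd
    from aif360.datasets import BinaryLabelDataset
    from aif360.metrics import BinaryLabelDatasetMetric

    df = pd.DataFrame({
        "group": [1, 1, 1, 1, 0, 0, 0, 0],   # 1 = privileged, 0 = unprivileged
        "label": [1, 1, 0, 1, 0, 1, 0, 0],   # 1 = favorable outcome
    })

    dataset = BinaryLabelDataset(
        df=df,
        label_names=["label"],
        protected_attribute_names=["group"],
        favorable_label=1,
        unfavorable_label=0,
    )

    metric = BinaryLabelDatasetMetric(
        dataset,
        privileged_groups=[{"group": 1}],
        unprivileged_groups=[{"group": 0}],
    )
    print("Disparate impact:", metric.disparate_impact())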

Cyber Attacks on AI

AI has enhanced security, but it also comes with increased cyber threats. With AI being adopted in almost every domain, cyber attacks on AI are on the rise as well. Common attack vectors include AI-powered botnets, with the attacks targeting data and vulnerable devices. As recently as last year, there were more than 20,000 botnet attacks on WordPress sites.

Vulnerabilities will still exist even with AI. AI systems can be compromised and the compromise can go undetected, because these systems are built for particular deductions and their decisions are not always immediately clear to human overseers. Most cyber attackers maintain a low profile and are hard to detect; they access and manipulate data slowly, without anyone noticing. Technology giants such as Google, Facebook and Apple have jointly formed the Partnership on AI, founded in 2016, to encourage research on the ethics of AI, including issues of bias.
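As a deliberately simplified illustration of how slow data manipulation can erode a model without obvious signs, the hypothetical sketch below trains a classifier on clean data and then on data where a small fraction of labels has been quietly flipped; the synthetic dataset, the model choice and the poisoning rates are assumptions made only for illustration.

    # Toy illustration of gradual training-data poisoning (illustrative assumptions only).
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    def accuracy_with_poisoning(flip_fraction):
        # Quietly flip a fraction of training labels, retrain, report test accuracy.
        rng = np.random.default_rng(0)
        y_poisoned = y_train.copy()
        n_flip = int(flip_fraction * len(y_poisoned))
        idx = rng.choice(len(y_poisoned), size=n_flip, replace=False)
        y_poisoned[idx] = 1 - y_poisoned[idx]
        model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
        return model.score(X_test, y_test)

    # Each individual flip is easy to miss; the cumulative effect shows up as drift.
    for fraction in (0.0, 0.05, 0.10, 0.20):
        print(f"poisoned {fraction:.0%} of labels -> accuracy {accuracy_with_poisoning(fraction):.3f}")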
