Ramprakash Ramamoorthy at ManageEngine argues that ethics is essential in the creation of artificial intelligence
From self-driving cars to humanoid robotics, chatbots to social media, artificial intelligence (AI) is progressing rapidly, with more and more organisations incorporating it into everyday business use. Indeed, ManageEngine’s recent 2021 Digital Readiness Survey showed that 85% of organisations in the UK have increased their use of AI over the last two years.
Yet despite this surge, confidence in AI is wavering. Whilst machine learning can do great things, it can also do great harm, leaving people struggling to trust its capabilities. Only around one in four (28%) of the UK organisations we surveyed said their confidence in AI technology had significantly increased, despite using it more.
One possible reason for the distrust is the potential for unethical biases to creep into AI technologies. While nobody sets out to build an unethical AI model, it may take only a few cases of disproportionate or accidental weighting to produce damaging results.
AI is only as good as the data that drives it
AI is not foolproof: it needs to be trained on representative data and constantly monitored by developers to avoid model drift. Demographic data, names, years of experience, known anomalies, and other types of personally identifiable information can skew AI and lead to biased decisions.
Businesses need to question if their AI data represents the full diversity of their end users, and if not, find other data sources.
Existing human bias can also infiltrate AI systems, reinforcing and amplifying prejudices and inequalities. AI models are trained on socially generated data, and developers’ cognitive biases can accidentally become embedded in the algorithms. With more than 180 defined human biases at play, this social and cultural influence on decision making can easily seep into machine learning.
In essence, if AI is not properly designed to work with data, or the data provided is not fully representative, the AI model can generate potentially discriminatory algorithms.
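One simple way to surface this kind of skew is to audit a model’s decisions for demographic parity: compare the rate of favourable outcomes each group receives. The sketch below is illustrative only — the function names and audit data are hypothetical, not from the survey or the article.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Rate of favourable outcomes per demographic group.

    `decisions` is a list of (group, outcome) pairs, where outcome is
    1 for a favourable decision (e.g. a shortlisted CV) and 0 otherwise.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(decisions):
    """Largest difference in selection rate between any two groups.

    A gap near 0 suggests similar treatment on this metric; a large
    gap is a signal to investigate the training data, not proof of
    intent.
    """
    rates = selection_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Hypothetical audit data: (group, favourable_outcome)
audit = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
         ("B", 0), ("B", 1), ("B", 0), ("B", 0)]
print(selection_rates(audit))         # {'A': 0.75, 'B': 0.25}
print(demographic_parity_gap(audit))  # 0.5
```

Demographic parity is only one of several fairness metrics, and which one applies depends on the intended use of the model — but even a check this simple would have flagged the cases described next.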
Take one of the most famous examples: Amazon. In 2014, the company tried to automate its recruiting process with an AI algorithm that reviewed CVs and rated applicants. The system was abandoned a few years later after Amazon realised it was biased against women, having been trained largely on CVs from male applicants.
Similarly, an algorithm used across 200 hospitals in the US to predict which patients would likely need extra healthcare heavily favoured white patients over black patients because of the data it was built on.
Creating AI models that aren’t subject to unintentional biases is both an ethical obligation and a business imperative. Fortunately, there are several ways developers can ensure their AI models are designed as fairly as possible to reduce the potential for unintentional biases. Two of the most effective steps developers can take are:
Adopting a fairness-first mindset
Embedding fairness into every stage of AI development is a crucial step to take when developing ethical AI models. However, fairness principles are not always uniformly applied and can differ depending on the intended use for AI models, creating a challenge for developers.
All AI models should have the same fairness principles at their core within a framework and governance structure and supported by training. Educating data scientists on the need to build AI models with a fairness-first mindset will lead to significant changes in how the models are designed.
Keeping humans in the loop
One of the key benefits of AI is its ability to reduce the time and energy human workers spend on smaller, repetitive tasks, and many models are designed to make their own predictions. However, humans need to remain involved with AI, at least in some capacity.
This needs to be factored in throughout the development phase of an AI model and its application within the workplace. In many cases, this may involve the use of shadow AI, where both humans and AI models work on the same task before comparing the results to identify the effectiveness of the AI system.
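The core of a shadow AI evaluation can be sketched in a few lines: run humans and the model on the same cases, measure how often they agree, and route the disagreements to manual review. The data and function name below are hypothetical, for illustration only.

```python
def shadow_comparison(human_decisions, model_decisions):
    """Compare human and AI decisions made on the same cases.

    Both inputs are parallel lists of labels. Returns the agreement
    rate and the indices where the two disagree -- the cases worth a
    manual review before trusting the model on its own.
    """
    if len(human_decisions) != len(model_decisions):
        raise ValueError("decision lists must cover the same cases")
    disagreements = [
        i for i, (h, m) in enumerate(zip(human_decisions, model_decisions))
        if h != m
    ]
    agreement = 1 - len(disagreements) / len(human_decisions)
    return agreement, disagreements

# Hypothetical run: eight loan applications reviewed by both
humans = ["approve", "reject", "approve", "approve",
          "reject", "approve", "reject", "approve"]
model  = ["approve", "reject", "reject", "approve",
          "reject", "approve", "reject", "reject"]
agreement, to_review = shadow_comparison(humans, model)
print(agreement)   # 0.75
print(to_review)   # [2, 7]
```

Note that agreement alone is not fairness: if the disagreements cluster in one demographic group, that pattern is itself a bias signal worth escalating.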
Alternatively, developers may choose to keep human workers within the operating model of the AI technology to guide it, particularly in cases where an AI model doesn’t have enough experience.
This is especially pertinent in cases of model drift, where an AI model’s accuracy degrades over time because the data it was trained on no longer reflects the present world. Covid-19 is a dramatic example: abrupt changes in market conditions and customer behaviour caused immense data drift, and AI models could no longer accurately predict trends such as online shopping demand or airline passenger volumes.
By fully exploring the way humans and machines can best work together and investing in AI research and development, organisations can improve the processes they have in place to highlight bias and then minimise it. The more measures and techniques developed to reduce bias transmission, the more reliable AI will be.
The future of ethical AI
In the drive for efficiencies and a competitive edge, the rapid adoption of AI in the workplace will continue. Organisations should be held accountable for bias, and making sure their AI systems are trustworthy and ethical is becoming increasingly urgent.
If AI systems are to be genuinely intelligent, they need to be designed to be as open and unhampered as possible, to consider bias at every stage of development, and to reduce the likelihood of distortion.
Business leaders must have confidence that their AI systems are reliable and accurate, and their customers must be able to trust them and the technology they’re using.
AI has no moral compass, but people do, and the onus is on businesses to create responsible systems that truly work for everyone.
Ramprakash Ramamoorthy is director of research at ManageEngine