AI and Ethics – Where is the middle ground?

Marketing in this day and age is becoming increasingly data-driven, and the rise of Artificial Intelligence (AI) is helping businesses become more innovative and agile. Today, marketers like us use AI for product and content recommendation, customer segmentation, social listening and sentiment analysis, search, predictive analytics and forecasting, and personalisation.

However, with great power comes great responsibility. Studies have shown that AI can be biased, sexist, racist and can even reinforce dangerous ideologies. Research from MIT revealed that AI systems sold by tech giants performed substantially better at guessing the gender of male faces than female faces: error rates were around 1% for lighter-skinned men but as high as 35% for darker-skinned women. As such, we should heed the advice of Stephen Hawking and Elon Musk and start considering this 'middle ground' between AI and ethics.
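The kind of disparity the MIT study found can be surfaced with a simple per-group error-rate audit. The sketch below is illustrative only: the records, group names and numbers are hypothetical assumptions, not the study's actual data.

```python
# Hypothetical sketch of a per-group error-rate audit for a classifier.
from collections import defaultdict

# Each record: (demographic group, true label, model prediction)
predictions = [
    ("lighter-skinned male", "male", "male"),
    ("lighter-skinned male", "male", "male"),
    ("darker-skinned female", "female", "male"),    # misclassification
    ("darker-skinned female", "female", "female"),
]

def error_rates(records):
    """Return the fraction of wrong predictions for each group."""
    totals, errors = defaultdict(int), defaultdict(int)
    for group, truth, pred in records:
        totals[group] += 1
        if truth != pred:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

print(error_rates(predictions))
# Large gaps between groups, like those in the MIT study, signal bias.
```

Running an audit like this on held-out data, broken down by every group the system will serve, is a cheap first check before deployment.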

While AI carries many more risks than can be covered here, these are three that have prompted extensive study.

Data Bias

The chances of human partiality being transferred to algorithms are quite high, given the variability of existing data sets. For instance, Google Translate has shown gender bias when translating terms for specific professional roles, and social media algorithms that take cues from user behaviour may start delivering only political opinions that lean a certain way. This causes people's world view to be limited by algorithms.

To address data bias, greater diversity among designers and in STEM (science, technology, engineering and math) helps bring social context to problem framing and surfaces bias earlier in the design of algorithms.
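Alongside diverse teams, a basic technical safeguard is checking whether any group is under-represented in the training data before a model is built. This is a minimal sketch; the threshold and group names are illustrative assumptions, not from the article.

```python
# Hedged sketch: flag groups whose share of a training set falls
# below a chosen threshold (here, 20% of samples).
from collections import Counter

def underrepresented(samples, threshold=0.2):
    """Return the set of groups whose share of the data is below threshold."""
    counts = Counter(samples)
    total = len(samples)
    return {g for g, n in counts.items() if n / total < threshold}

# Hypothetical training set: 80 / 15 / 5 split across three groups.
training_groups = ["group_a"] * 80 + ["group_b"] * 15 + ["group_c"] * 5
print(underrepresented(training_groups))  # flags group_b and group_c
```

A flagged group is a prompt to collect more data or reweight samples, not an automatic fix, but it makes skewed inputs visible before they become skewed outputs.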

Lack of Data Transparency

Companies may hesitate to disclose their deep-learning processes out of a need to safeguard intellectual property. However, opacity can worsen the effects of bias and lengthen the time needed to develop an effective AI solution. Making the process transparent opens up opportunities to pinpoint data biases earlier and helps inculcate the habit of risk evaluation throughout the development process. Transparency will aid your company in reducing compliance and associated costs. Moreover, it lowers the cybersecurity risk your company might be exposed to while allowing for greater top-line growth potential.

Data Monopoly

Data monopolies are dangerous because they prevent the general public from understanding data beyond the bounded universe of the monopoly. This phenomenon also threatens new competitors who wish to build new markets and products on newly collected data, hurting the industry's competitiveness. Finally, because these internet giants operate in a largely borderless world where their main source of profit is intangible intellectual property, national tax authorities have trouble tracking and quantifying their taxes. To curb the problem, individuals must take ownership of their own data and protect it from the ground up.

Understanding the risks that come with AI helps us, as an integrated agency, work towards minimising risk and building a more inclusive society for the brands we partner with. Stronger data protection and regulation not only protects consumers' data but also offers us the opportunity to identify and engage consumers who truly want to communicate with the brand. In the long run, by working to maintain a clean, unbiased, reliable data foundation, we will be able to protect the loyalty and brand love that take a long time to build.

Come say hi to us.