Ethical Artificial Intelligence

The importance of ethics in building AI models

Shivan Sivakumaran
3 min read · Sep 19, 2021

In previous posts, we discussed what deep learning is and even gave an example of creating an artificial intelligence model to differentiate between blue and brown eyes.

Exciting! But we found there are a few limitations. One was with the data used to train the model: unclean data produce unclean results. The second is that the model we are using begins life very naive. We can use a pre-trained model that already knows the basics of an image, like edges and textures, so all it has to learn is eye colour.
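To make the pre-trained idea concrete, here is a minimal sketch in PyTorch (an assumed framework; the tiny backbone below is a stand-in for a real pre-trained network such as a ResNet). The pre-trained layers are frozen so they keep their knowledge of edges and textures, and only a fresh two-class head learns to tell blue eyes from brown eyes:

```python
import torch
import torch.nn as nn

# Stand-in "pre-trained" backbone. In practice this would be a network
# trained on a large dataset (e.g. ImageNet) that already detects edges
# and textures.
backbone = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3),  # learns low-level features
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),         # pool to a fixed-size feature vector
    nn.Flatten(),
)

# Freeze the backbone: its weights will not change during training.
for p in backbone.parameters():
    p.requires_grad = False

# Fresh head: the only part that learns, mapping features to 2 classes
# (blue eyes vs brown eyes).
head = nn.Linear(8, 2)

model = nn.Sequential(backbone, head)

# Only the head's parameters are trainable.
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
```

This is why transfer learning needs so much less data: only the small head starts from scratch.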

Determining eye colour, though a fun introductory project, has a very low consequence. In other words, if the model gets it wrong, it’s humorous and we can move on with our lives. What happens if the weight of our consequences is greater? Flying a plane full of people? Orchestrating a network of motor vehicles? Diagnosing or discharging a patient? Now the stakes are serious. Life or death.

Pedalling back to our eye colour model, an immediate problem was with our data. Apart from the data set being relatively small, the data was far from ideal.

Problematic data

Some of the photos had makeup, and some labelled as ‘blue eyes’ had both brown and blue eyes. It also seems that more images of fair-skinned people have blue eyes, while brown eyes are represented across a multitude of skin colours. When training a model, biases like these in the data are amplified.
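A toy illustration (with hypothetical numbers) of how this goes wrong: if fair skin and blue eyes co-occur often enough in the training set, a model can score well by learning skin tone as a shortcut and never looking at the eyes at all.

```python
# Hypothetical training distribution: (skin tone, true eye colour).
samples = (
    [("fair", "blue")] * 80 + [("fair", "brown")] * 20 +
    [("dark", "blue")] * 5  + [("dark", "brown")] * 95
)

def shortcut_model(skin_tone):
    """A 'model' that only uses the spurious skin-tone correlation."""
    return "blue" if skin_tone == "fair" else "brown"

accuracy = sum(
    shortcut_model(skin) == eye for skin, eye in samples
) / len(samples)
# 87.5% accuracy, despite never examining an eye.
```

A high headline accuracy can therefore hide the fact that the model has learned the bias in the data rather than the task.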

Changing gears to eye care and health. A few newsletters ago, we discussed diabetic eye disease. Currently, in Aotearoa, individuals with diabetes are put onto a diabetic screening service. In New Zealand, over a quarter of a million people have diabetes. That number is estimated to rise, and each person needs retinal images taken and graded at least every two years. Already stretched health services will be pushed beyond their limits. Not to mention, district health boards are almost half a billion dollars in debt.

Grading retinal images usually requires specially trained individuals, of which there are too few. The task involves pattern recognition of images. Just as we trained our AI model to classify blue and brown eyes, we can do the same with retinal images.

A data set of already graded retinal images exists, which can be used to train a model. That model can then grade new retinal images, reducing the burden on healthcare.

This is the idea on paper.

But as we discovered, it’s not easy to train a model to do what we want.

Even in health data there is bias. A large proportion of Pacific Peoples have diabetic eye disease. There is a chance the model would correlate a pigmented fundus with diabetic eye disease rather than the actual clinical signs.

Furthermore, the data needs to be diverse. For example, a model trained on European eyes won’t be appropriate for the New Zealand population due to our ethnic diversity. A model trained on unrepresentative data will likely perform better on one ethnic group (usually the majority) and poorly on minority groups.
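One practical safeguard is to evaluate accuracy per group rather than overall. A sketch with hypothetical results shows how an overall number can look acceptable while a minority group is served poorly:

```python
# Hypothetical evaluation records: (group, prediction_was_correct).
records = (
    [("majority", True)] * 90 + [("majority", False)] * 10 +  # 90% correct
    [("minority", True)] * 6  + [("minority", False)] * 4     # 60% correct
)

def group_accuracy(records, group):
    """Accuracy restricted to one group's records."""
    outcomes = [correct for g, correct in records if g == group]
    return sum(outcomes) / len(outcomes)

overall = sum(correct for _, correct in records) / len(records)
# Overall accuracy is about 87%, yet the minority group sits at 60%.
```

Reporting only the overall figure would hide the disparity; breaking results down by group makes it visible before deployment.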

This can result in further health disparity when AI should be trying to plug the leaks in healthcare systems.

The topic of ethics in AI is very deep. We must understand that AI is a powerful tool. Neither good nor bad. It is how we implement this tool that is important.

Diverse teams are a good way to combat bias in developing AI models and products. Members can provide different insights from differing backgrounds.

What do you think? How important is ethics in technology in general?
