A quick glance at a portrait photograph is, apparently, all an off-the-shelf AI tool needs to detect a person’s sexual orientation, and potentially even political affiliation. Sifting through tens of thousands of images has taught machine learning to identify homosexual men with 91% accuracy, and homosexual women with 83%.
Circulating around the web for its highly controversial and possibly dangerous consequences is a recently published study in The Journal of Personality and Social Psychology, which found that machine learning can detect sexual orientation with high accuracy and speed. Stanford University researchers Yilun Wang and Michal Kosinski conducted the study. Interestingly, Kosinski is also an advisor to the Israeli company Faception, which is currently developing ‘facial personality profiling’ for the improvement of “public safety, communications, decision-making and experiences”, as the company describes it.
With this study, Kosinski and Wang claim that “faces contain much more information about sexual orientation than can be perceived and interpreted by the human brain”. The research, titled ‘Deep neural networks are more accurate than humans at detecting sexual orientation from faces’, used over 30,000 images from US dating websites with the specific agenda of creating “links between characteristics and facial features that might be missed or misinterpreted by the human brain”.
Yet amidst the cloud of blurred moral judgements behind this research is the fact that basically any abstract trait can be ‘predicted’ by machine learning, provided it has been fed sufficient background data, data that is often selected subjectively to suit the purposes of the research. What shouldn’t be overlooked, however, is why facial personality profiling tools are being tested and developed without consensus from international human rights organisations, or, at a more local level, without grounding in national law.
There seem to be few boundaries built into the collection of personal data for the sake of ‘security’. And here, while the researchers have likened the tone of their findings to that of Frankenstein warning of his own creation, claiming that “given that companies and governments are increasingly using computer vision algorithms to detect people’s intimate traits, our findings expose a threat to the privacy and safety of gay men and women”, the very negligence with which the paper was published shows that the care potentially dangerous studies demand has been absent from the scientific community.
What concerns me most about this study isn’t its accuracy, or lack thereof, nor its potential bias (only white people were considered for it). What is particularly alarming is the planting and spreading of such fears and ideas on a global scale. With homosexuality still considered a crime punishable by death in many countries, there is a profound responsibility attached to the public release of studies such as this.
Within hours, the paper made headlines across international news outlets, transforming an otherwise inconceivable AI programme into a tangible, accessible tool in the minds and consciousness of the masses.