Data accountability
Recently I came across the latest hype around FaceApp and decided to dig a little deeper to see what it's all about. The app is neither new nor anything the world hasn't experienced before. I've already seen plenty of my friends using FaceApp via Facebook and other social networks, including people who I know are well aware of cybersecurity best practices. So what is the big concern here?
Let’s take a look at the best-known portion of the app’s terms of use:
You grant FaceApp a perpetual, irrevocable, nonexclusive, royalty-free, worldwide, fully-paid, transferable sub-licensable license to use, reproduce, modify, adapt, publish, translate, create derivative works from, distribute, publicly perform and display your User Content and any name, username or likeness provided in connection with your User Content in all media formats and channels now known or later developed, without compensation to you. When you post or otherwise share User Content on or through our Services, you understand that your User Content and any associated information (such as your [username], location or profile photo) will be visible to the public.
— FaceApp terms of use, as quoted by Forbes, 2019
Put in just two sentences: you consent that you don't care what the algorithm on the receiving end does with your data. It's a legal precaution, made necessary by the fact that the neural network pulling the strings of the app is not entirely predictable. That fact is well backed by an unfortunate episode of algorithmic behaviour in the app's past. In 2017, the app featured a transformation that was supposed to make people look more physically attractive. The public read the feature as racist, because it lightened skin tones and shifted facial features toward a more European look. An apology from the founder and chief executive followed, describing the incident as "an unfortunate side-effect of the underlying neural network caused by the training set bias, not intended behaviour".
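To see how this kind of bias can creep in without anyone intending it, here is a minimal, hypothetical sketch. It assumes a toy logistic-regression classifier and an invented two-feature dataset; none of it reflects FaceApp's actual model or data. The point is only that when the labels in a training set happen to correlate with a sensitive attribute, the model learns that correlation.

```python
# Hypothetical illustration of training-set bias; a toy model,
# not FaceApp's actual system or data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Invented features: column 0 is a (spurious) skin-tone value,
# column 1 is an arbitrary "real" quality signal. The labels in
# this toy training set mostly track skin tone, purely because of
# how the data was assembled, not because of any design decision.
n = 2000
skin_tone = rng.uniform(0, 1, n)  # 0 = darker, 1 = lighter
quality = rng.uniform(0, 1, n)
label = ((0.8 * skin_tone + 0.2 * quality) > 0.5).astype(int)

X = np.column_stack([skin_tone, quality])
model = LogisticRegression().fit(X, label)

# The learned weights lean heavily on skin tone, even though no one
# told the model to: the bias lives in the data, not in the code.
print(model.coef_)
print(model.predict([[0.2, 0.9], [0.9, 0.2]]))  # tone dominates quality
```

Nobody wrote "prefer lighter skin" anywhere in that code; the preference is manufactured entirely by the skewed labels, which is exactly what "training set bias" means.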
There is a certain paradox in accusing one man's work of racism while, at the same time, investigating whether the collected data is being used with intent to harm, based purely on the collector's nationality.
Nevertheless, to err is human! As we develop ever more sophisticated neural networks, we cannot predict every outcome of our products. Whether or not the statement about the unfortunate side effect was simply a washing of hands, it could easily be true!
No matter our best or worst intentions, we are not always in control. When it comes to technology, a single small error can be like a lit match accidentally dropped, bursting into an uncontrollable blaze that burns down entire forests.
I strongly encourage everybody to think twice before becoming part of somebody's playground. Even the builders, while securing themselves in every possible way against lawsuits, are not entirely aware of the ethical responsibility they take on by collecting data, even when it is collected with your approval. You don't have to be paranoid to be aware of the threats the ever-growing Internet poses. Harm can now come from lifeless algorithms, with no bad intentions on the designer's part.
Be safe out there!