By Thomas Ansell
In 2021, there were an estimated 6.3 billion smartphone subscriptions around the world, whilst the IBM Global AI Adoption Index for 2022 estimated that more than 37% of companies worldwide utilise AI in their processes. Meanwhile, in 2019, the Toeslagenaffaire made headlines when it was revealed that a self-learning algorithm had created faulty ‘profiles’ for normal people across the Netherlands, resulting in life-changing (and unjustified) tax bills for (mostly) people with a “non-Western appearance”, according to an audit into the scandal.
But what are the implications of this amount of technology in our lives? And what does this super-fast adoption of technology in a decision-making function mean for our human rights? We spoke to Afef Abrougui of Fair Tech to get some insights into these questions.
What are some of the newer risks brought about by the adoption of technology into our daily lives?
“With so many people having a smartphone in their pocket, and the increase in the use of smartphones in all aspects of our lives, including very personal aspects, the risk of their misuse has also risen”, says Afef. “For example, the use of surveillance technology like spyware has really increased, and this has a very harmful and potentially immediate effect on activists, whether women’s rights campaigners, human rights defenders, or journalists. For example, the phones of Jamal Khashoggi’s associates were infected with spyware before he was killed.”
And, adds Afef, scandals like the Pegasus spyware scandal reveal that even the best-protected and wealthiest people in the world [as was the case with Amazon CEO Jeff Bezos] can have their phones hacked. “So, what could this mean for a small human rights organisation? Having to protect against these threats can really drain CSOs’ resources, and of course knowing that you’re being listened to has a very chilling effect. We often discuss our personal and professional lives on the same device, and not everyone can have two phones, for example. So what can happen is that people self-censor; it’s very invasive to know that you might be being monitored.”
And what about using ‘big data’? For example machine learning, algorithms, and AI?
The wide-scale adoption of ‘big data’ in so many situations is potentially (and evidently) harmful to our rights and freedoms, says Afef. “In the health sector, for example, there are always huge concerns about privacy: since it’s sensitive data, there’s an issue of leaks and unauthorised access.”
But beyond operational problems, she adds, there’s a big issue with unrepresentative input data skewing outcomes ‘decided’ by technology. “For example, a few years ago YouTube started using algorithms to take down extremist content, but most AI is trained on English or another Western language. This meant that many videos from Syrian human rights defenders were taken down during the war in Syria in the 2010s, because they showed ‘violent’ imagery or language, even though the aim was to highlight illegal brutality rather than glorify it. Another example is the moderation of messages for ‘hate speech’ or ‘incitement to violence’ in Ethiopia: the algorithm was inappropriate for the context, where many languages are spoken.”
Generally, says Afef, algorithms simply reflect the biases of the people that programmed them, or the society in which they were made. “It can be on a simple use basis, like facial recognition software not recognising black people’s faces; but then you look at the Toeslagenaffaire for example, a complex set of existing biases combined to really cause problems for thousands of Dutch people.”
And away from specific threats to specific people, how has the wide-scale adoption of (for example) social media affected our rights?
“In the beginning”, says Afef, “the big social media networks were not necessarily focussed on growth, nor designed for purely monetary gain”, which led to a general public acceptance of how they work. “Then, with the growth of the smartphone and internet access, and the role of data processing, their business models changed, and our realisation of what the networks do and how they make money is quite recent.” The big challenge, says Afef, is how we can rein in these big tech companies, but we also have to be mindful that Facebook is used by lots of small businesses and entrepreneurs in less developed countries. “And, on a content moderation/freedom of speech point, in lots of these countries a politician might say ‘yes we can legislate for that, just look at the EU’, and then use it as a pretext to curb freedom of speech.”
Finally, to end on a slightly more positive note, how can we find and integrate solutions to these various problems?
“The most important things to remember”, says Afef, “are that transparency, safeguarding, and using human rights impact assessments throughout can make a better system.” But also, she adds, “when a big decision that can really affect people’s lives is going to be made by technology, there really should be a human in the process. There should always be accountability within a system: people should be able to appeal to a human, and neutral, legal system.”
But simply adding extra ‘digital rights’ to the current human rights frameworks won’t be enough by itself, says Afef: “it’s not an online versus offline discussion; all of these issues are now enmeshed, and we should focus our support on countries where current human rights legislation perhaps hasn’t yet been adopted into their governance systems. But, at all times, we should remember that a human should have the right to know how a decision has been made.”
Our thanks to Afef Abrougui of Fair Tech for her expertise in addressing this huge subject. Afef and Fair Tech provide consultancy and services for projects and organisations in and around human rights and the digital world. To see how her expertise could help your initiative, check out her website or connect with her on LinkedIn.