Former Google Ethical AI Team Co-Lead Timnit Gebru Believes AI Needs to Slow Down

On May 18, 2021, Google CEO Sundar Pichai unveiled LaMDA, a large language model (LLM) that can chat with its users on any topic. It is an example of how language technologies become entangled in the infrastructure of the Internet, despite the unresolved ethical debates surrounding these advanced systems.

Language technologies are getting out of hand

In December 2020, Timnit Gebru was fired from Google after refusing to retract a groundbreaking paper in which she argued that these models are prone to generating and propagating racist, sexist, and abusive ideas. Although they are the most powerful autocomplete technologies in the world, LLMs do not understand what they read or say, and many of their advanced features are still available only in English.

LLMs tend to assign some professions to men and others to women; to associate negative words with Black people and positive words with white people; and, if prompted in certain ways, they can encourage people to self-harm, condone genocide, or normalize child abuse.

The danger of these systems lies in their fluency: it is easy to believe that their outputs were written by a human being. This gives them the dangerous potential to produce and promote disinformation on a large scale.

Research censorship

Very little research is being done to understand how the shortcomings of LLMs might affect people in the real world, or what could be done to mitigate those harms. Google’s dismissal of Gebru and her co-lead, Margaret Mitchell, highlighted that the few companies wealthy enough to build and maintain LLMs have a strong financial interest in not examining their ethical implications too carefully.

As of 2020, Google’s internal review process has required an additional layer of review for “sensitive subjects”. If researchers write on topics such as facial recognition or the categorization of gender, race, or politics, they must first have Google’s public relations and legal teams review their work and suggest changes before it can be published.

While many researchers look to academia as an alternative, that avenue can be riddled with its own problems: gatekeeping, harassment, and an incentive structure that does not support long-term research. There are also concerns about tech companies funding AI research at academic institutions, with some researchers comparing it to the way big tobacco companies funded research in an effort to allay concerns about the effects of smoking on health.

AI must slow down

In a recent interview with WIRED magazine, Timnit Gebru’s central point was that AI needs to slow down.

Gebru has witnessed the negative consequences of the headlong development of LLMs in her own life. She was born and raised in Ethiopia, where some 86 languages are spoken, almost none of which are supported by mainstream language technologies.

Despite these gaps in language coverage, Facebook relies heavily on LLMs to moderate content globally. When war broke out in Ethiopia’s Tigray region, the platform struggled to bring the ensuing epidemic of disinformation under control.

In an interview with Wharton Business Daily, Gebru said she was particularly concerned about the “move fast and break things” attitude that dominates technology today. She argues that when software and datasets are freely available to download, and data can be collected easily and efficiently, it becomes easy to skip over the questions that deserve careful thought. In her view, incentive structures need to change so that the field slows down and people can learn what to consider when collecting data.

In the same podcast, Gebru said she has already seen how good research can counter the tendency of technology and innovation to outpace regulation and policy. The 2018 paper she wrote in collaboration with Joy Buolamwini, which shed light on disparities in commercial gender classification, has been instrumental in bringing about rapid and remarkable change in industry and policy.

Clearly, the research and the awareness it brings have the potential to influence the direction AI technology is taking.

