Soledad O’Brien’s ‘Matter of Fact’ debuts on WCVB

“[Philanthropists] have to understand why their philanthropy is necessary in the first place. And it’s because we’re not addressing these bigger issues,” …

During a year in which each month introduces a new tragedy, candid conversation can be hard to find. One journalist has recognized this moment as the perfect time to televise the urgent conversation on racial justice, starting with all the ways bias affects our lives.

Soledad O’Brien’s new series, the “Matter of Fact Listening Tour,” tackles 2020’s issues by introducing new voices into the conversation and addressing the reality of our country’s troubles through open conversations with experts and everyday people. WCVB-TV launched the series on Oct. 8, along with the Hearst network of stations and newspapers.

Hearst has expanded accessibility of these conversations on race and justice by premiering them live on digital platforms as well. According to a press release, “‘The Hard Truth About Bias: Images and Reality’ is the first installment of the Matter of Fact Listening Tour, with a series of quarterly virtual forums to be presented in 2021.”

The first episode features Wes Moore, CEO of Robin Hood, an organization that fights poverty, and author of the New York Times Bestseller “The Other Wes Moore,” where he explores the lives of two men with the same name who had opposite outcomes in life due to access and opportunity.

When asked by O’Brien to explain how race and poverty intersect, he said, “Race is the most predictive indicator for life outcomes … Everything from income and wealth, to educational attainment, to maternal mortality.”

He noted that only 10% of all philanthropic donations go to organizations led by people of color.

“[Philanthropists] have to understand why their philanthropy is necessary in the first place. And it’s because we’re not addressing these bigger issues,” Moore said, adding that it’s important to listen to people who are closest to the problem.

Another segment in the 90-minute program featured six strangers, who watched viral videos of incidents where white people were being racially insensitive or accusing Black people of things they did not do, subsequently putting them in danger. While the videos played, the strangers spoke in a group chat anonymously. Afterward, they saw each other’s faces over video chat and discussed their thoughts.

While one white male said that it isn’t always about race, and that Black people discriminate too, two Black women and another Black man explained to him that racism is more than just discrimination — it’s about the power that white people hold over Black people’s lives.

O’Brien also invited former ESPN reporter Jemele Hill to discuss bias in the NFL, since she drew fire for her support of Colin Kaepernick’s kneeling protest. O’Brien noted that women like Hill are often left out of the conversation on racial justice, to which Hill replied, “They are better equipped to fight this than anybody.”

“There’s a long history of Black people being accepted for the entertainment they provide, up until they remind those who are paying them … that they’re actually Black people who have to live in America,” Hill said.

Other guests examined the psychological origins of bias and whether racial justice is a political issue or a moral one. The goal of the first installment of this special was not to come to a conclusion about America’s battle with bias, but to listen in order to uncover truths the viewer may not have considered before.

The listening tour’s long list of guests includes journalists Dorothy Tucker and Joie Chen, Oscar-winning filmmaker John Ridley, and Dr. Rashawn Ray, professor of sociology at the University of Maryland. The listening tour will have quarterly installments throughout 2021, each focused on a specific topic like the first.

Artificial intelligence poses serious risks in the criminal justice system

Whenever I tell people that I’m interested in artificial intelligence (AI), most of them bring up their favorite movie that features an evil AI assembling an …

Whenever I tell people that I’m interested in artificial intelligence (AI), most of them bring up their favorite movie that features an evil AI assembling an army of killer robots that threaten to wipe out humankind. I have to admit that I used to be right there with them, but as entertaining and enjoyable as those movies are, they lead to a lot of misconceptions about what AI truly is and the very real ways that it impacts our lives.

In the first two decades of the 21st century, the boom of advanced machine learning techniques and big data revolutionized modern computing. Highly capable AI has now infiltrated nearly every field imaginable: medicine, finance, agriculture, manufacturing, the military and more. Rather than sentient beings, AI has taken the form of complex algorithms — ones that can diagnose breast cancer from mammograms more accurately than trained radiologists or detect DNA mutations in tumor gene sequences.

Now more than ever, AI has an enormous capability to impact people’s lives in a meaningful and substantial way. But it also raises multidimensional questions that simply don’t have easy answers.

In Steven Spielberg’s “Minority Report,” Tom Cruise leads Washington’s elite PreCrime Unit, a section of the police department solely dedicated to interpreting knowledge given by the Precogs, three psychics who forecast crimes like murder and robbery. The film showcases the dangers of a world where police use psychic technology to punish people before they commit a crime. At first I enjoyed it as another entertaining, albeit thought-provoking, science fiction tech-noir film. But as a tech junkie, I soon learned just how relevant the nearly 20-year-old film is.

One of the areas where AI is currently being implemented is at the intersection of law, government, policing and social issues like race: the criminal justice system.

Over the last few months, the rise of the Black Lives Matter movement and a renewed scrutiny of race relations, policing and structural biases have brought the issues of the American criminal justice system to light. While it’s an institution and system that claims to have been founded on the principle of fairness and justice for all, it is riddled with biases that disproportionately affect Black and brown Americans. From deeply flawed societal constructs that perpetuate injustice to discriminatory police, attorneys and judges, bias is one of the biggest issues afflicting our criminal justice system.

Criminal risk assessment algorithms are tools that have been designed to predict a defendant’s future risk for misconduct, whether that’s the likelihood that they will reoffend or the likelihood that they will show up to trial. They are the most commonly used form of AI in the justice system, employed across the country.

After taking in numerous types of data about the defendant, such as age, sex, socioeconomic status, family background and employment status, these tools produce a “prediction” of an individual’s risk, spitting out a specific percentage that indicates how likely that person is to reoffend. These figures have been used to set bail, determine sentences and even contribute to determinations of guilt or innocence.
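
To make that concrete, here is a minimal sketch of how such a score might be produced. It is not the code of any actual assessment tool (most are proprietary), and every feature name and number below is hypothetical: a simple model is fit to historical records and then emits a probability for a new defendant.

```python
# Illustrative sketch only, not any real risk assessment tool.
# Features and labels below are made up for demonstration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical historical records: [age, prior_arrests, employed (0/1), income_bracket]
X_train = np.array([
    [19, 2, 0, 1],
    [45, 0, 1, 3],
    [23, 4, 0, 1],
    [35, 1, 1, 2],
    [52, 0, 1, 3],
    [28, 3, 0, 1],
])
y_train = np.array([1, 0, 1, 0, 0, 1])  # 1 = re-arrested within two years

model = LogisticRegression().fit(X_train, y_train)

# A new defendant is reduced to the same handful of numbers...
defendant = np.array([[21, 1, 0, 1]])
risk = model.predict_proba(defendant)[0, 1]
# ...and the output is the "specific percentage" a judge might see.
print(f"Predicted risk of reoffending: {risk:.0%}")
```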

Their biggest selling point is that they are objective — those who favor their use tout the impartiality and unbiased nature of mathematical code. While a judge could be swayed by emotions and impose harsher punishments, an algorithm would never fall prey to such an inappropriate and human flaw.

Unfortunately, like many other forms of AI, these tools are subject to the seemingly intractable problem of bias. The biggest source of bias in AI is bad training data. Modern-day risk assessment tools are driven by algorithms trained on historical crime data, using statistical methods to find patterns and connections. If an algorithm is trained on historical crime data, it will pick out patterns associated with crime, but those patterns are correlations, not causation.

More often than not, these patterns represent existing issues in the policing and justice system. For example, if an algorithm found that lower income was correlated with high recidivism, it would give defendants that come from low-income backgrounds a higher score. The very populations that have been targeted by law enforcement, like impoverished and minority communities, are at risk of higher scores which label them as “more likely” to commit crimes. These scores are then presented to a judge who uses them to make decisions regarding bail and sentencing.
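
A small simulation helps illustrate the point. The numbers below are invented, but the mechanism is the one described above: two populations with the same underlying behavior, one of which is policed more heavily, so the recorded arrests (the training labels) skew toward it and the model learns to treat low income as a risk factor.

```python
# Hypothetical simulation of how over-policing turns income into a "risk" signal.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
low_income = rng.random(n) < 0.5                    # half of defendants are low-income
offense = rng.random(n) < 0.20                      # identical true offense rate in both groups
catch_rate = np.where(low_income, 0.9, 0.3)         # heavier policing of low-income areas
arrested = offense & (rng.random(n) < catch_rate)   # what the historical data actually records

X = low_income.reshape(-1, 1).astype(float)
model = LogisticRegression().fit(X, arrested)

scores = model.predict_proba([[1.0], [0.0]])[:, 1]
print(f"Score if low-income: {scores[0]:.2f}, score otherwise: {scores[1]:.2f}")
# Same behavior, different scores: the model has learned the policing pattern, not the crime.
```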

This machine learning methodology amplifies and perpetuates biases by generating even more biased data to feed the algorithms, creating a self-reinforcing cycle. The problem is compounded by a lack of accountability, since for many algorithms it’s difficult to understand how they reached their decisions. In 2018, leading civil rights groups, including the National Association for the Advancement of Colored People and the American Civil Liberties Union, signed a letter raising concerns about the use of this type of AI in pretrial assessments.

The very idea that we can reduce complex human beings, people who deserve to be seen as a person first and foremost, down to a number is appalling. As a society we treat those who have been incarcerated as waste-aways, promoting revenge and punishment over rehabilitation. We make it nearly impossible for people to return to a normal life, ripping away their right to vote in many states and hurting chances of employment. Putting a number on people’s heads adds to the already rampant dehumanization of minority communities in this country.

I typically find fear of technology rooted in a deep misunderstanding of how it actually works and its ability to impact our lives. AI is a valuable tool that has many practical and ethical applications, but the use of risk assessment tools in the criminal justice system perpetuates racial biases and should be outlawed immediately.

Anusha Rao is a freshman studying Cognitive Science from Washington, D.C. She’s part of the Artificial Intelligence Society at Hopkins.

Elon Musk: “Will Those Who Write The Algorithms Ever Realize Their Negativity Bias?”

As part of an insightful discussion on Twitter, Elon Musk and followers deliberated over the effects of bias on algorithms. In doing so, they opened up …

As part of an insightful discussion on Twitter, Elon Musk and followers deliberated over the effects of bias on algorithms. In doing so, they opened up conversations about the role of algorithms in our lives and the ways that algorithms persuade us to think and behave in particular ways. The Tesla CEO spoke to the responsibility that “those who write the algorithms” have and underscored the importance of thinking carefully about the labels used during algorithm development.

“Algorithm bias” refers to systematic and repeatable errors in a computer system that create unfair outcomes. This can be privileging one arbitrary group of users over others, favoring particular solutions to problems over equally viable ones, or creating privacy violations, for example. Such bias occurs as a result of who builds the algorithms, how they’re developed, and how they’re ultimately used.

What’s clear is that algorithms are sophisticated and pervasive tools for automated decision-making. And a lot depends on how an individual artificial intelligence system or algorithm was designed, what data helped build it, and how it works.

Behind the Looking Glass

Algorithms are aimed at optimizing everything. The Pew Research Center argues that algorithms can save lives, make things easier, and conquer chaos. But there’s also a darker, more ominous side to algorithms. Artificial intelligence and machine learning are becoming common in research and everyday life, raising concerns about how these algorithms work and the predictions they make.

As researchers at New York University and the AI Now Institute outline, predictive policing tools can be fed “dirty data,” including policing patterns that reflect police departments’ conscious and implicit biases, as well as police corruption.

Stinson, a researcher at the University of Bonn, points to classification, especially iterative information-filtering algorithms, which “create a selection bias in the course of learning from user responses to documents that the algorithm recommended. This systematic bias in a class of algorithms in widespread use largely goes unnoticed, perhaps because it is most apparent from the perspective of users on the margins, for whom ‘Customers who bought this item also bought…’ style recommendations may not produce useful suggestions.”
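
The mechanism is easy to reproduce in a few lines. The sketch below is a toy model of that feedback loop, with made-up item names and click rates: the system only observes responses to what it already chose to show, so an item that got unlucky in a handful of early impressions is never shown again and its estimate never corrects.

```python
# Toy model of selection bias in an iterative recommender; all data is hypothetical.
import random

true_click_rate = {"bestseller": 0.30, "niche_title": 0.50}   # the niche item is actually better
clicks = {"bestseller": 300, "niche_title": 0}                # logged history: bestseller was the default
shows = {"bestseller": 1000, "niche_title": 3}                # the niche title got 3 unlucky impressions

random.seed(0)
for _ in range(10_000):
    # Greedy policy: always recommend the item with the best observed click rate.
    item = max(shows, key=lambda i: clicks[i] / shows[i])
    shows[item] += 1
    clicks[item] += random.random() < true_click_rate[item]

for item, n in shows.items():
    print(f"{item}: shown {n} times, observed rate {clicks[item] / n:.2f}")
# The niche title is never shown again, so the system never learns it is the better recommendation.
```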

Rozaldo conducted research revealing that, in addition to commonly identified gender bias, large-scale analysis of sentiment associations in popular word-embedding models displays negative biases against middle- and working-class socioeconomic status, male children, senior citizens, plain physical appearance, and intellectual phenomena such as Islamic religious faith, non-religiosity, and conservative political orientation.

Algorithms and the CleanTech World

AI systems are often artificial neural networks, meaning they are computing systems designed to analyze vast amounts of information and learn to perform tasks in a way loosely modeled on how the brain works. The algorithms grow through machine learning and adaptation. We’ve been writing quite a bit on this at CleanTechnica.

A constant thread through all these articles is the concept that algorithms have profound implications for critical decisions, and a machine’s decision process must be fully trustworthy and free of bias if it is to avoid passing on bias or making mistakes. Clearly, there is still work to be done, even as artificially intelligent personal assistants, diagnostic devices, and automobiles become ubiquitous.

Final Thoughts

A Wired article posed the questions, “Are machines racist? Are algorithms and artificial intelligence inherently prejudiced?” Its authors argue that the tech industry is not doing enough to address these biases, and that tech companies need to train their engineers and data scientists to understand cognitive bias, as well as how to “combat” it.

One researcher who admits to having created a biased algorithm offers suggestions for alleviating that outcome in the future:

  • Push for algorithms’ transparency, where anyone could see how an algorithm works and contribute improvements — which, due to algorithms’ often proprietary nature, may be difficult.
  • Occasionally test algorithms for potential bias and discrimination. The companies themselves could conduct this testing, as the House of Representatives’ Algorithmic Accountability Act would require, or the testing could be performed by an independent nonprofit accreditation board, such as the proposed Forum for Artificial Intelligence Regularization (FAIR). A minimal sketch of one such test follows this list.
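
As one hypothetical example of what such testing might look like in practice, the snippet below compares false positive rates across two groups. The data is invented; a real audit would use the deployed model’s predictions and legally meaningful group definitions.

```python
# Minimal, hypothetical bias check: compare error rates across groups.
import numpy as np

pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])    # 1 = flagged as high risk
truth = np.array([0, 0, 1, 0, 0, 1, 0, 1, 0, 0])   # 1 = actually reoffended
group = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

for g in np.unique(group):
    negatives = (group == g) & (truth == 0)          # people in group g who did not reoffend
    fpr = (pred[negatives] == 1).mean()              # share of them wrongly flagged
    print(f"Group {g}: false positive rate {fpr:.2f}")
# A large gap between groups would be a red flag worth investigating.
```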

Harvard Business Review suggests additional preventive steps businesses can take to mitigate algorithmic bias:

  • Incorporate anti-bias training alongside AI and ML training.
  • Spot potential for bias in what they’re doing and actively correct for it.
  • In addition to the usual QA processes for software, AI needs to undergo an additional layer of social QA so that problems can be caught before they reach the consumer and result in a massive backlash.
  • Data scientists and AI engineers training the models need to take courses on the risks of AI.

And as we return to the inspiration for this article, Tesla CEO Elon Musk, we can also look at his vision for Level 5 autonomy. Given his awareness of algorithms and negativity bias, there is hope that the highest levels of driver assistance will incorporate the most innovative R&D, with Tesla setting an example as bias detectives: researchers striving to make algorithms fair.



About the Author

Carolyn Fortuna, Ph.D., is a writer, researcher, and educator with a lifelong dedication to ecojustice. She’s won awards from the Anti-Defamation League, The International Literacy Association, and The Leavy Foundation. As part of her portfolio divestment, she purchased 5 shares of Tesla stock. Please follow her on Twitter and Facebook.

Mitigating Bias in Artificial Intelligence

Artificial intelligence (AI), which represents the largest economic opportunity of our lifetime, is increasingly employed to make decisions affecting most …

Artificial intelligence (AI), which represents the largest economic opportunity of our lifetime, is increasingly employed to make decisions affecting most aspects of our lives. This is exciting: the use of AI in predictions and decision-making can reduce human subjectivity, but it can also embed biases, producing discriminatory outcomes at scale and posing immense risks to business. Harnessing the transformative potential of AI requires addressing these biases.

By mitigating bias in AI, business leaders can unlock value responsibly and equitably. This playbook will help you understand why bias exists in AI systems and its impacts, be aware of the challenges in addressing bias, and execute evidence-based plays.

We will be releasing guides for each of the seven strategic plays and some ‘quick wins’ over the coming weeks. Stay tuned!


Apple Card in hot water over sexist bias in AI algorithm

According to a report by Element AI, only 12 percent of AI researchers are women. This means that almost 50 percent of the human population is not …

Financial regulators are investigating Apple’s new credit card for discriminating against women in what is just the latest example of bias in AI systems.

Artificial intelligence is the capability of a machine to imitate intelligent human behaviour. In order to imitate such behaviour, AI systems are fed a number of datasets, and just like humans, they are what they eat.

According to a report by Element AI, only 12 percent of AI researchers are women. This means that almost 50 percent of the human population is not represented in the creation of such a life-changing technology. Similarly, according to MIT Technology Review, women account for only 18 percent of authors at leading AI conferences, 20 percent of AI professorships, and 15 percent and 10 percent of research staff at Facebook and Google, respectively.

On Friday, software developer David Heinemeier Hansson tweeted that the tech giant’s new credit card offers him “20x the credit limit she does,” referring to his wife.

The couple, Hansson said, file joint tax returns, live in a community-property state and have been married for a long time.

In a long thread, Hansson explained that although Apple’s customer service manually raised his wife’s credit limit, they had no idea how the algorithm reached its decision, nor could they do anything to permanently change it.

Other users reported similar problems, including Apple Co-Founder Steve Wozniak, who tweeted saying he received 10x the credit limit his wife did.

The same thing happened to us. I got 10x the credit limit. We have no separate bank or credit card accounts or any separate assets. Hard to get to a human for a correction though. It’s big tech in 2019.

— Steve Wozniak (@stevewoz) November 10, 2019

Shortly after the news came out, Linda A. Lacewell, Superintendent of the New York State Department of Financial Services, explained that the DFS would examine whether the algorithm “violates state laws that prohibit discrimination on the basis of sex.”

In a statement, Goldman Sachs, the banking giant that currently issues the Apple credit card, said they “have not and will not” make decisions based on gender.

We wanted to address some recent questions regarding the #AppleCard credit decision process. pic.twitter.com/TNZJTUZv36

— GS Bank Support (@gsbanksupport) November 11, 2019

Wozniak commented on the issue in an interview with Bloomberg and said, “These sorts of unfairnesses bother me and go against the principle of truth. We don’t have transparency on how these companies set these things up and operate. Our government isn’t strong enough on the issues of regulation. Consumers can only be represented by the government because the big corporations only represent themselves.”

Back in April, US Senators Cory Booker and Ron Wyden proposed a bill that introduces a framework to require organisations to assess and “reasonably address in a timely manner” any biases found in their algorithms. The Algorithmic Accountability Act of 2019 is, however, still a draft, and has been criticised for “holding algorithms to different standards than humans, not considering the non-linear nature of software development, and targeting only large firms despite the equal potential for small firms to cause harm.”

To date, the examples of sexist bias in AI are more numerous than anyone would like to admit – Apple being just the latest in a long list.

Last year, Amazon had to scrap an AI recruitment tool that discriminated against female candidates. The system was fed data submitted by applicants over a 10-year period, most of them men. Due to this data, the tool penalised any CV that contained any reference to the candidate being a woman.

Voice assistants – mostly “females”, i.e. Siri, Cortana, Alexa – have long been criticised for reinforcing gender stereotypes and portraying an idea of women as servants. The choice, according to different studies, leads back to human preference and historical bias – when people need help, they prefer to have it delivered by a female voice, while male voices are preferred for authoritative statements.

A woman paediatrician in Cambridge found herself locked out of the women’s changing room of her gym because the algorithm assumed every person with the title “Dr” was a man.

@PureGym dont understand the concept THAT WOMEN CAN BE DOCTORS.If you have the title of DR yr pin no ONLY LETS YOU IN THE MALE CHANGING ROOM

— Lou (@louselby) March 14, 2015

The issue, however, runs much deeper than the biased results of an algorithm. Although technological innovation moves faster than ever, the share of women in STEM positions is still increasing far too slowly, with women holding on average only 28 percent of jobs in the field, as opposed to 72 percent held by men.

Automation, according to a recent study by McKinsey, will only worsen the situation – between 40 and 160 million women may need to transition between occupations due to this new technology.


“We live in a time when human behaviour and skills have a direct impact on technology and vice versa – which means, it is crucial that both men and women play a fundamental role in the evolution of automation and digitisation,” said Shukri Eid, Managing Director of the East Region at Cisco Middle East. “The correlation between emerging technologies and the decline in women’s participation in the workforce largely depends on the automation of the roles traditionally held by women, and the skills gap which prevents women from innovating at work. This realisation presents new responsibilities for organisations when it comes to taking adequate steps for the future of inclusive work.”

Although the future looks direly biased as women find themselves excluded from AI development and suffer the results of automation, a recent study by UNESCO suggests it is not too late to right the wrongs of sexist AI systems as these are still in their infancy – but the clock is ticking.

“There is nothing predestined about technology reproducing existing gender biases or spawning the creation of new ones. A more gender-equal digital space is a distinct possibility, but to realize this future, women need to be involved in the inception and implementation of technology,” explains the study. “This, of course, requires the cultivation of advanced digital skills.”

For example, Eid tells us that Cisco established a foundational dialogue called “The Future of Fairness”, which they consider to be the fuel that strengthens the power of teams and accelerates participation and harmony in the workplace, and added, “As the nature of work transitions, organisations need to foster inclusion by internally building a culture of tolerance, education and openness about this cause.”

Similarly, Tatiana Labaki, Senior Manager – Revenue & Analytics at Emaar Hospitality Group, believes we will never reach the results we hope for if organisations don’t move quickly enough to solve the problem.

“Unless women become an equal player with equal influence and impact in the field, despite the numbers, AI will remain a reflection of a society and barriers we have long fought to break as women,” explained Labaki. “Numerous studies have proven that the unconscious bias towards women and how they are represented are strengthened by the scarcity of women leaders who are in decision-making positions in the Machine Learning field, yielding in technologies that represent assistants as females and change-making robots as males, unfortunately.”

Although we are making strides to solve the issue (for example, thankfully Siri doesn’t respond “I’d blush if I could” when called a “b*tch”), there is still a long way to go before AI is fully representative of all groups and free of bias.

Josie Young, a feminist AI researcher, advocates for designing Artificial Intelligence (AI) products and systems using ethical and feminist principles. In a TED Talk, she argues that “assigning a gender to a voice bot or chatbot is poor design” as this reinforces gender stereotypes that society has been trying to eradicate for the past 50 years. With this in mind, she has created a practical tool for teams to use when building a bot, prompting developers to question their own bias and training them to address these issues themselves in the future.

The aforementioned research by UNESCO shares a series of suggestions to improve the situation, including “performing ‘algorithmic audits’ to map and label the sources of gender bias in AI technology” and more gender-equal teams that enable women to assume leadership jobs and roles.