PTE Reading & Writing SWT Practice: How Machines Learn Prejudice

As PTE candidates have paid more attention to PTE Speaking and PTE Listening, their scores in those sections have improved greatly, but PTE Reading is gradually becoming a new challenge.

Wenbo PTE (Melbourne & Sydney) has selected this article specifically for PTE candidates practising PTE Reading: its topic, content, and length all resemble the passages in the PTE Reading section. Use it to activate vocabulary you have already learned, pick up new words, improve your reading speed, and raise your overall reading ability.

 

If artificial intelligence takes over our lives, it probably won’t involve humans battling an army of robots that relentlessly apply Spock-like logic as they physically enslave us. Instead, the machine-learning algorithms that already let AI programs recommend a movie you’d like or recognize your friend’s face in a photo will likely be the same ones that one day deny you a loan, lead the police to your neighborhood or tell your doctor you need to go on a diet. And since humans create these algorithms, they’re just as prone to biases that could lead to bad decisions—and worse outcomes.

 

These biases create some immediate concerns about our increasing reliance on artificially intelligent technology, as any AI system designed by humans to be absolutely “neutral” could still reinforce humans’ prejudicial thinking instead of seeing through it. Law enforcement officials have already been criticized, for example, for using computer algorithms that allegedly tag black defendants as more likely to commit a future crime, even though the program was not designed to explicitly consider race.

 

The main problem is twofold: First, data used to calibrate machine-learning algorithms are sometimes insufficient, and second, the algorithms themselves can be poorly designed. Machine learning is the process by which software developers train an AI algorithm, using massive amounts of data relevant to the task at hand. Eventually the algorithm spots patterns in the initially provided data, enabling it to recognize similar patterns in new data. But this does not always work out as planned, and the results can be horrific. In June 2015, for example, Google’s photo categorization system identified two African Americans as “gorillas.” The company quickly fixed the problem, but Microsoft AI researcher Kate Crawford noted in a New York Times op-ed that the blunder reflected a larger “white guy problem” in AI. That is, the data used to train the software relied too heavily on photos of white people, diminishing its ability to accurately identify images of people with different features.
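
To make the training-data problem concrete, here is a minimal illustrative sketch (not part of the original article; the synthetic data, group centres, and the choice of a logistic-regression model are all assumptions for demonstration). It shows how a classifier trained on data that over-represents one group can end up far less accurate on the group it rarely saw:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, center):
    # Draw n two-dimensional feature vectors around a group-specific centre.
    return rng.normal(loc=center, scale=1.0, size=(n, 2))

# Training set: group 0 is heavily over-represented (950 vs. 50 examples),
# mimicking training data that relies too heavily on one kind of image.
X_train = np.vstack([make_group(950, [0.0, 0.0]), make_group(50, [1.5, 1.5])])
y_train = np.array([0] * 950 + [1] * 50)

model = LogisticRegression().fit(X_train, y_train)

# Balanced test set: equal numbers of new examples from both groups.
X_test = np.vstack([make_group(500, [0.0, 0.0]), make_group(500, [1.5, 1.5])])
y_test = np.array([0] * 500 + [1] * 500)

# Per-group accuracy exposes the skew the paragraph describes.
for group in (0, 1):
    mask = y_test == group
    print(f"group {group} accuracy: {model.score(X_test[mask], y_test[mask]):.2f}")
```

On a typical run the over-represented group is classified almost perfectly while the under-represented group scores far worse, mirroring the "insufficient data" failure described above.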

Relentlessly: adv. cruelly, mercilessly; without letting up.

Enslave: v. to make (someone) a slave; to bring under control, restrict, or subjugate.

Algorithm: n. a step-by-step procedure or set of rules for calculation or problem-solving.

Blunder: n. a stupid or careless mistake; v. to move clumsily, to stumble.

Original content first published by Wenbo PTE (Melbourne & Sydney).

For more PTE practice material, follow our WeChat account wenbo_tv2.
