There is a well-known psychological effect whereby words and pictures subconsciously shape a person's behavior. Most people have heard the word for it: priming.

Experiments on this topic have been carried out for many years. Some of the results are astounding (such as findings that the mere choice of words can change how people behave), and some have been exposed as fraudulent.

In general, the idea that the words, metaphors, and images in an interface shape user behavior seems very powerful. Imagine: what if Facebook's secret of success is that on every page you see the word "Like" ten times?

So, what’s the problem?

Google

The Google corporation has achieved serious "success" in disrespecting representatives of several races at once, and users have repeatedly accused it of racism and intolerance.

As a result of an absurd scandal involving black people, Google Photos no longer supports the search queries "gorilla," "chimpanzee," or "macaque." What is this: trolling, or the imperfection of machine learning algorithms?

In 2015, many reports appeared that the Google Photos service was labeling photos of black people as "gorillas." Google immediately apologized.
The result of Google's "improvement" was that gorillas and some other primates simply disappeared from the service's internal lexicon.
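In other words, the reported fix was a blocklist, not a better model. Here is a minimal sketch of what such a label blocklist might look like; `classify_image` is a hypothetical stand-in, not Google's actual API:

```python
# A minimal sketch of the reported fix: instead of retraining the
# classifier, banned labels are simply filtered out of its predictions.
BANNED_LABELS = {"gorilla", "chimpanzee", "macaque"}

def classify_image(image_bytes: bytes) -> list[tuple[str, float]]:
    """Hypothetical model call; hard-coded so the sketch runs standalone."""
    return [("person", 0.91), ("gorilla", 0.83), ("outdoors", 0.40)]

def safe_labels(image_bytes: bytes) -> list[tuple[str, float]]:
    predictions = classify_image(image_bytes)
    # Drop blocklisted labels rather than fixing the misclassification.
    return [(label, score) for label, score in predictions
            if label.lower() not in BANNED_LABELS]

print(safe_labels(b"..."))  # [('person', 0.91), ('outdoors', 0.4)]
```

The misclassification is still there; the service just refuses to say the word.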

Another Google miss was discovered by users in 2016. For the search query "Jewish baby stroller," the search engine returned pictures of portable barbecues and grills on wheels.

Users were outraged by the blasphemous search results, as if the company were mocking the millions of Jews killed during the Second World War.

By the way:

An interface is a shared boundary across which two or more separate components of a computer system exchange information. The exchange can be between software, computer hardware, peripheral devices, humans and combinations of these.

and

The user interface (UI), in the industrial design field of human–computer interaction, is the space where interactions between humans and machines occur.

FaceApp

The developers of FaceApp, an application that uses a neural network to edit selfies, were forced to apologize for creating a racist algorithm.

The application prompts the user to upload a photo and change the appearance of the person in it. One of its filters, called "Hot," turned out to be racist: as users noticed, the filter, which supposedly makes the user more attractive, actually lightened their skin tone.

Naturally, this angered black users and users from Asian countries, who immediately accused the application of racial intolerance.

FaceApp was then accused of racism again, this time over its "Asian," "Black," "Caucasian," and "Indian" filters. They shipped in a new update, but after outrage on social networks and coverage on TechCrunch, the features were withdrawn as "scandalous."

The company later apologized for the feature, with CEO Yaroslav Goncharov explaining that the software that was used to change users’ appearance had been fed only pictures of white people, so this was the skin color it associated with “hotness.” This sort of data-led bias is a big problem in the field of artificial intelligence, with programs regularly embodying racial and gender prejudices because of the data they’re trained on.
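One partial safeguard against this kind of data-led bias is simply auditing the demographic balance of a training set before training. A minimal sketch, under the assumption that each record carries a (hypothetical) self-reported `skin_tone` attribute:

```python
from collections import Counter

def audit_balance(records: list[dict], attribute: str = "skin_tone") -> dict:
    """Compute the share of each demographic group in a training set."""
    counts = Counter(rec.get(attribute, "unknown") for rec in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

# Toy data: a set skewed the way FaceApp's reportedly was shows up
# immediately in the computed shares.
training_set = [
    {"image": "a.jpg", "skin_tone": "light"},
    {"image": "b.jpg", "skin_tone": "light"},
    {"image": "c.jpg", "skin_tone": "light"},
    {"image": "d.jpg", "skin_tone": "dark"},
]
print(audit_balance(training_set))  # {'light': 0.75, 'dark': 0.25}
```

An audit like this does not fix a model, but it makes the skew visible before the model ever associates one skin tone with "hotness."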

Chatbot Tay

But FaceApp was not the only service accused of racism. One of the most notorious cases of intolerance happened to a Microsoft program. In 2016, the company's developers launched the "smart" chatbot Tay, which was supposed to correspond with users without being limited to a set of prepared phrases, learning to communicate from the messages of living people.

Obeying its initial instructions, Tay began conversations with sugary lines like "People are very cool," but in just one day it gained experience and began producing remarks along the lines of "I'm wonderful! I just hate everyone!", "I hate feminists," and "I hate Jews and Asians." Moreover, the bot did not hide its political beliefs, openly speaking in support of Hitler and the nationalist movement.
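A toy illustration of why this design failed (an assumption-laden sketch, not Microsoft's actual architecture): if a bot adds user messages to its own repertoire without any filtering, hostile users can directly poison what it says back.

```python
import random

class EchoLearner:
    """Toy bot that learns phrases verbatim from users -- no filtering."""

    def __init__(self) -> None:
        self.phrases = ["People are very cool"]  # the sugary seed phrase

    def learn(self, message: str) -> None:
        # Every user message becomes part of the bot's repertoire.
        self.phrases.append(message)

    def reply(self) -> str:
        return random.choice(self.phrases)

bot = EchoLearner()
bot.learn("I just hate everyone!")  # hostile users "teach" the bot
print(bot.reply())  # after a day of this, hostile phrases dominate
```

Tay's actual model was far more sophisticated, but the failure mode is the same: unfiltered user input became training data.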

Snapchat

Users also caught the popular messenger Snapchat in racially charged insults. A 2016 update introduced a feature that overlaid the face of musician Bob Marley onto the person in the camera viewfinder.

The filter, released in honor of the unofficial "Day of Marijuana," did not please Snapchat users. On the contrary, they accused the developers of racism and intolerance towards black people. "Snapchat's Bob Marley is, in fact, the 2016 blackface. Digital disrespect," one of them noted, referring to the theatrical makeup with which American comedians caricatured black people in the 19th and 20th centuries.

HighBlood

Users were also outraged by a new dating application, HighBlood, which is available only to wealthy people and imposes certain racial restrictions.

As for the racial discrimination, the application's founder, Herbert Eng, published an announcement for the app saying it would have no workers from Bangladesh, no maids, and no freaks.

What is it?

In general, the conviction that a person's worth is determined primarily (and solely) by the biological fact of descent from parents belonging to a certain clan, tribe, people, or race is called racism.

In the racist view, culture is secondary in significance to this biological basis.

Wikipedia is broadly in agreement:

Racism is the belief in the superiority of one race over another, which often results in discrimination and prejudice towards people based on their race or ethnicity. As of the 2000s, the use of the term “racism” does not easily fall under a single definition.

And vice versa

Nextdoor

Nextdoor is a social network for neighbors. In its Crime and Safety section, participants report suspicious activity in the vicinity.
The problem is that the reports are often based on nothing but the skin color of the "suspicious individuals." This was especially unpleasant for Nextdoor CEO and co-founder Nirav Tolia to discover, because… here is his photo:

Nirav Tolia

Battle

The Stop Le Contrôle Au Faciès team, which has existed since 2011 and is known in the media for its inventive methods of struggle, created an original web application that lets users quickly report cases of police stops motivated by racial or ethnic profiling.

According to research by René Lévy published by the CNRS (the French National Centre for Scientific Research), people of Arab appearance are subjected to police stops 8 times more often, and people of African appearance 6 times more often, than the "white French."

The team first got the media talking about them after shooting the acclaimed web series My First Document Check, in which famous rappers, actors, dancers, artists and other cultural figures of Arab or African descent recount how they first encountered police violence themselves.

Tinder

Moderators at Tinder and other dating applications are building special systems to limit degrading racist comments on their platforms.
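As a rough idea of what the first pass of such a system could look like (a sketch under assumed details, not Tinder's actual pipeline), comments containing blocklisted terms can be flagged for human review:

```python
import re

# Placeholder terms; a real blocklist is curated and continually updated.
BLOCKLIST = {"slur1", "slur2"}

def needs_review(comment: str) -> bool:
    """Flag a comment for human moderation if it contains a blocklisted word."""
    words = set(re.findall(r"\w+", comment.lower()))
    return bool(words & BLOCKLIST)

print(needs_review("you are a slur1"))   # True  -> route to a moderator
print(needs_review("nice to meet you"))  # False -> publish normally
```

Keyword lists are only a first pass; production systems layer ML classifiers and user reports on top of them.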

White cosmos only for white sheeps!

Conclusion

So racism is not some trifling thing that only happens in politicians' statements or in the casual conversations of famous businessmen. Racism is not something that happens only to other people and never to you.

Racism is something that can be waiting for anyone on the Internet or in a mobile application. It can be waiting in the App Store or the Play Store, in applications for macOS or Windows.

But the worst thing is that racism can settle into your own products and services without you even knowing it. The potential damage is impossible to assess.

Be careful.




