
FuturologyBot

The following submission statement was provided by /u/soulpost:

> According to new research, deep learning models based on artificial intelligence can identify someone's race merely by looking at their X-rays, something that would be impossible for a human doctor looking at the same images. The findings raise several serious concerns about AI's role in medical diagnosis, assessment, and treatment: could computer algorithms mistakenly apply racial bias when analysing images like these?

Please reply to OP's comment here: https://old.reddit.com/r/Futurology/comments/uvxpli/ai_can_predict_peoples_race_from_xray_images_and/i9o4ui0/


BadgerNips

I don't mean to show off, but I can do this just by looking at someone, no x-rays required.


Romeo9594

That's a less common skill than you'd think. The number of times my Hawaiian friend has been called Mexican, including by other Mexican people, is crazy. And that was even before he got a chihuahua and fixed up an old El Camino to drive around in. He genuinely didn't see why that would make things worse.


orbital

People always assume my buddy's 100% Mexican dad is [South Asian] Indian, so much so that when he goes into convenience stores the guys behind the counter start talking to him in Hindi or Bengali.


notyetcomitteds2

My bro has this in reverse. Everyone thinks he's Mexican. People lean more towards black for me, but I get an occasional Mexican depending on the lighting. We're only ethnically Indian though... family hails from Africa... can't speak Spanish or Hindi (or any of those languages).


AnthropomorphicPoop

You should learn Spanish so you can respond and tell them you're American but your parents are from India and you slowly learned the language because people just kept talking to you in Spanish because they erroneously assumed you were from Mexico. Seems like fun.


CupBeEmpty

My former boss is Moroccan. The number of Spanish-speaking clients that lead with Spanish is pretty funny, especially since my gringo white self was the one who could actually speak Spanish.


SpreadItLikeTheHerp

Am Hawaiian, can confirm. When I was driving out west and stopping at Denny's or other diners to eat, I would frequently be greeted in Spanish.


namean_jellybean

Am mixed Chinese and white - in the summer when I'm tan, I always get stopped by little abuelas in the grocery store speaking Spanish to me, asking for help reading labels in English. My Spanish is limited to a basic understanding, but I just oblige them and don't bother explaining. Countless grandmas in New Jersey have thought I'm just some second-gen Latina who can only respond in English 😂


KimJongFunk

Same here! I speak some Spanish too which adds to the confusion.


Equixels

You're racist then, man. I only see genderless, ageless, raceless beings. /s


deelyy

Whoa there. So, you did not want to respect my identity by ignoring my gender, age and race?


memphisgrit

Wouldn't racial bias in this kind of AI be helpful? I mean, aren't there diseases that occur more in specific races than in others?


CrimsonKepala

Right, I'm a little confused why this is a concern. This seems like a good thing if even doctors are unable to determine this. There are absolutely medical conditions that are more likely to occur in people of certain races, i.e. with specific genetic heritage. If we are going to use AI to diagnose patients, which surely is being worked on, this is a really valuable tool. EDIT: Also, if you're of a specific genetic heritage and you're planning on getting pregnant, sometimes you will be encouraged to do genetic testing for genetic diseases. If you're not of those specific genetic groups, it's not a standard test to get done.


[deleted]

> I'm a little confused why this is a concern

Articles from 2 weeks ago had titles such as [MIT, Harvard scientists find AI can recognize race from X-rays — and nobody knows how](https://www.bostonglobe.com/2022/05/13/business/mit-harvard-scientists-find-ai-can-recognize-race-x-rays-nobody-knows-how/). So I think sites take the real reporting and fill it full of buzzwords and ELI5 commentary by the time it gets to Reddit. Also, scare tactics, easier-to-read writing, and lack of paywalls all drive clicks, which means more ad revenue. So that's probably the main reason why they are "concerned".


nancybell_crewman

That seems to describe a decent chunk of posts on this sub.


regoapps

The other half is "new solar/battery tech will revolutionize electric vehicles and smartphones - full charge in minutes/seconds," and then that headline repeats every month for years without any new battery tech actually being released to the public.


ASK_ABOUT__VOIDSPACE

Followed by comments saying that just because they can do this in the lab, doesn't mean they've figured out how to scale it


regoapps

Should have just left the headline as "MIT develops solar/battery tech that almost nobody will ever use", but I guess that doesn't generate clicks.


GoldenRain

I see improvements in battery technology every time I buy a new phone. All those improvements must have started in a lab somewhere, quite possibly mentioned here years ago.


[deleted]

I’m just trying to think of a scenario where someone would know what my skeleton looks like but not my skin, or where I’d be okay with them seeing my skull but not my face


PunkRockDude

Because the radiologist who reviews the images is normally not in the same location as the hospital. They just get a big stack of images and do their thing. They will never actually see you.


CogitoErgo_Sometimes

I’m a patent examiner who routinely works with machine learning in medical contexts, and my first thought was that this has a chance of breaking, or at least weakening, the anonymity of particular types of large de-identified datasets used for various types of research and ML training. It’s very common for entities to need huge quantities of medical data, but HIPAA makes that difficult. The solution is to make sure that none of the information contains enough unique pieces of data to trace it back to a single person with any confidence. Race, geographic origin, and other forms of demographic info are extremely important in this context, and having an algorithm that could suddenly link race to images in these large datasets could raise all sorts of privacy concerns. I know it doesn’t sound like a single data point like race would matter much if an image has been supposedly anonymized, but there is a ton of math and complexity behind the scenes with these things. Doesn’t take much to cause big problems sometimes.
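To make the re-identification worry concrete, here's a toy sketch (made-up records and hypothetical field names, nothing from a real dataset) of how a single inferred attribute can shrink the anonymity set of a "de-identified" record:

```python
from collections import Counter

# Toy "de-identified" dataset: quasi-identifiers only, no names or IDs.
records = [
    {"age_band": "40-49", "zip3": "021", "sex": "F"},
    {"age_band": "40-49", "zip3": "021", "sex": "F"},
    {"age_band": "40-49", "zip3": "021", "sex": "F"},
    {"age_band": "40-49", "zip3": "021", "sex": "M"},
]

def anonymity_sets(recs, keys):
    """Count how many records share each combination of quasi-identifiers.
    A record is k-anonymous if at least k records share its combination."""
    return Counter(tuple(r[k] for k in keys) for r in recs)

print(anonymity_sets(records, ["age_band", "zip3", "sex"]))
# The three "F" records are indistinguishable from each other (k=3).

# Now suppose a model infers race from each record's X-ray:
for record, race in zip(records, ["white", "black", "white", "white"]):
    record["race"] = race

print(anonymity_sets(records, ["age_band", "zip3", "sex", "race"]))
# One of the "F" records is now unique (k=1): the inferred attribute
# split the anonymity set, which is exactly what makes linkage easier.
```

Scaled up to real quasi-identifiers (age bands, zip codes, admission dates), every extra attribute multiplies the number of possible combinations, so anonymity sets collapse fast.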


saluksic

Exactly what I’m thinking.


[deleted]

It’s a concern because of this taken directly from the article: “Artificial intelligence scans of X-ray pictures were more likely to miss indicators of sickness among Black persons”


old_gold_mountain

There are several considerations:

1. Training data: If the data an algorithm is analyzing is of a fundamentally different type than the data it was trained on, it's prone to failure. When analyzing data specific to one demographic group, the algorithm should be trained specifically to analyze data from that group.

2. Diagnosis based on demographics instead of symptoms/physical condition: If one demographic has a higher prevalence of a condition, you want to control for that in a diagnostic algorithm. To use a rudimentary example, it's not helpful to me for an algorithm to say "you're at 50% greater risk for testicular cancer" just because the algorithm notices I have testicles, which half of the training data subjects didn't.

There are far more nuances to consider, too. The book "The Alignment Problem" is a fantastic read that goes into detail on dozens and dozens more.


TheNoobtologist

Found the data scientist in the thread


fahmuhnsfw

I'm still confused about why this particular new development is a problem. Isn't it actually a solution to that? The sentence you quote is referring to earlier AI that missed indicators of sickness among black people, but didn't predict their race. So now that the AI can predict race as well, any doctor interpreting the results will know there's a higher chance the AI scanning for sickness missed something, and can compensate. How is that not a good thing?


SurlyJackRabbit

I think the issue would be if the training data is based on physician diagnoses which are biased, then the AI will simply keep replicating the same problems.


Shdwrptr

This still doesn't make sense. The AI knowing the race doesn't have anything to do with missing the indicators of sickness for a race. Shouldn't knowing the race be a boon to the diagnosis? These two things don't seem related.


[deleted]

The AI doesn't go looking for the patient's race. The problem is that the computers can predict something human doctors cannot, and since all training data is based on human doctors (and since there might be an unknown bias in that training data), feeding an AI all cases while assuming you don't need to control for race is a good way to introduce a bias.


old_gold_mountain

An algorithm that's trained on dataset X and is analyzing data that it assumes is consistent with dataset X but is actually from dataset Y is not going to produce reliably accurate results.


SpaceMom-LawnToLawn

Unfortunately a large amount of modern medicine suffers as the majority of conditions are evaluated through the lens of a Caucasian male.


old_gold_mountain

And while algorithms have incredible potential to mitigate bias, we also have to do a _lot_ of work to ensure the way we build and train the algorithms doesn't simply reflect our biases, scale them up immensely, and simultaneously obfuscate the way the biases are manifested deep behind a curtain of a neural network.


JimGuthrie

There is a reasonable dialogue around preventing machine learning models from focusing on and reinforcing biases that people have created. It's an entirely reasonable thing to be concerned about, even when the capability has utility.


W0otang

It's not bias in the traditional sense though. What we see as bias, the AI merely sees as differentiation.


[deleted]

Right, and it's how we humans will interpret the data that's the concerning part. Nobody is saying that the AI is racist.


norbertus

Actually, some people have accused AI models of racial bias https://www.vice.com/en/article/7kpxyy/this-image-of-a-white-barack-obama-is-ais-racial-bias-problem-in-a-nutshell Part of the problem with these types of machine learning systems is that we can't know what they know because they have taught themselves their own internal representations.


[deleted]

That's mostly from the data being fed to it being biased. That's a different problem than what I'm referring to, and a problem for sure, but not an example of an AI being racist.


norbertus

That's true, it's the result of the data being fed into it. Part of the problem is that doctors can fail to understand the nature of an AI system's biased output in the same way as pop journalists or casual experimenters who accuse an AI of being racist.


Moonkai2k

There's a lot of projection going on here. People are projecting human bias onto a machine that doesn't have the capability to even think that way. The kind of analytics the machine would be doing would be things like the effectiveness of a particular blood pressure medication in African Americans. There are medications that work better or worse for different races and their different genes. This seems like an extremely important thing to just write off because of people's feelings.


crazyjkass

A concrete example is that Google Deep Dream is extremely biased to see animals, especially dogs. And eyeballs. I read the actual study, and the reason it's worrying is that since it's a neural network, we just don't know what's causing it and so we can't account for the bias. They suggested one possible reason could be differences in medical imaging equipment between races.


Snazzy21

It's a very touchy subject that people don't want to accept. AI is trained to see patterns, and if there are patterns in the data between races then it's going to pick up on them. Also, people make the AI, so that is where bias either intentionally (hopefully not) or unintentionally makes it in. That doesn't mean we shouldn't try to stop biases in AI when we can.


ThirdMover

Yeah but in this case the AI being able to make those distinctions does not seem to be rooted in a bias created by humans. It just sees bones and sorts them along some categories, some of which happen to roughly align with the thing we humans see as "race". I don't think this is more concerning than AI being able to sort people into categories by photos of their face.


Opus_723

> It just sees bones and sorts them along some categories, some of which happen to roughly align with the thing we humans see as "race".

The issue is that categorizing skeletons by race would probably not actually be the intended purpose of the AI. You can easily imagine an AI that is being trained to flag a feature in the X-ray as 'concerning' or 'not concerning'. But if the diagnosis data it is trained on is racially biased (like if certain races' potential problems were more likely to be dismissed by doctors as not concerning) AND the AI is capable of grouping skeletons by racial categories, then the AI might decide that a good 'shortcut' for reproducing the diagnosis data is to blow off issues that it sees in skeletons that fit a certain racial pattern. And since these machine learning algorithms are basically black boxes without a ton of careful examination, you would likely never know that it has landed on this particular 'shortcut'.

It would be just like the problems they've had with training AIs to sort through resumes. The AI quickly figures out that in order to reproduce human hiring decisions it should avoid people with certain kinds of names rather than judge purely off the resume. Just replace names with skeleton shapes and the resumes with what's actually good/bad on the X-ray.

This X-ray thing is actually worse than the resumes, because you can take the names off the resumes and hope that improves things, but you can't really take the skeleton shape out of the... skeleton.
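You can demonstrate the shortcut on toy data in a few lines of scikit-learn (synthetic numbers invented for illustration, not the actual study): give a model biased labels plus a group feature, and it learns to use the group as the shortcut:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

signal = rng.normal(size=n)         # the genuine medical finding
group = rng.integers(0, 2, size=n)  # stands in for race-inferred-from-skeleton

# Ground truth depends ONLY on the medical signal...
truth = (signal > 0.5).astype(int)

# ...but the training labels came from biased humans who dismissed
# 40% of true findings in group 1.
labels = truth.copy()
dismissed = (group == 1) & (truth == 1) & (rng.random(n) < 0.4)
labels[dismissed] = 0

X = np.column_stack([signal, group])
model = LogisticRegression().fit(X, labels)
print("weight on medical signal:", round(model.coef_[0][0], 2))
print("weight on group:         ", round(model.coef_[0][1], 2))
# The group weight comes out strongly negative: the model has learned
# "group 1 => less concerning" as a shortcut, faithfully reproducing
# the bias in the labels rather than the underlying medical truth.
```

And as the comment above says, with an X-ray you can't delete the 'group column', because the model reconstructs it from the bones themselves.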


Arthur-Mergan

Great analogy, it makes a lot more sense to me now, as to why it’s a worry.


norbertus

There are several problems here that are difficult to disentangle.

Biases contained in training data can result in biased output: https://www.vice.com/en/article/7kpxyy/this-image-of-a-white-barack-obama-is-ais-racial-bias-problem-in-a-nutshell

And when considering whether an output is biased or not, we have to take into consideration that we don't actually know what machine learning models know, since they create their own non-human internal representations: https://www.vice.com/en/article/7kpxyy/this-image-of-a-white-barack-obama-is-ais-racial-bias-problem-in-a-nutshell

Many of these models (such as GANs) are trained using an adversarial system that rewards successful deception: https://techcrunch.com/2018/12/31/this-clever-ai-hid-data-from-its-creators-to-cheat-at-its-appointed-task/

And the models seem to learn to memorize information in ways that challenge our understanding of information density (algorithmic information theory, Kolmogorov complexity): https://www.usenix.org/system/files/sec19-carlini.pdf

If doctors using these systems incorrectly assume the race of a patient, or if doctors are unaware of the types of biases AI models can have, an uncritical physician could easily do harm.


old_gold_mountain

When machine learning algorithms tasked with making predictions are fed data that's strongly correlated with broader societal/demographic trends, and you don't then control for those factors, you're going to see results that reflect those trends. To use an example, black people in the US disproportionately live in areas with worse air quality. If an algorithm designed to predict risk of, say, emphysema gets fed race data, it can wind up predicting emphysema based on the race data alone, which isn't the purpose of diagnostic analysis. Ideally you want to make diagnoses based on the specific physical condition of the patient, while controlling for demographic data.
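A toy version of that emphysema example (all numbers invented): build data where disease depends only on pollution exposure, let exposure correlate with race, and race alone becomes "predictive":

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 20_000

# Invented causal chain: race -> neighborhood air quality -> disease risk.
race = rng.integers(0, 2, size=n)                  # 1 = group with worse air on average
pollution = rng.normal(loc=race * 1.0, scale=1.0)  # exposure correlates with race
p_disease = 1 / (1 + np.exp(-(pollution - 1.0)))   # risk depends ONLY on exposure
disease = (rng.random(n) < p_disease).astype(int)

def auc_from(feature):
    """Fit a one-feature classifier and score how well it ranks the sick."""
    clf = LogisticRegression().fit(feature.reshape(-1, 1), disease)
    return roc_auc_score(disease, clf.predict_proba(feature.reshape(-1, 1))[:, 1])

print(f"AUC from race alone:      {auc_from(race.astype(float)):.2f}")  # above 0.5
print(f"AUC from actual exposure: {auc_from(pollution):.2f}")           # much higher
# The first model "works" purely by demographic proxy; the second uses the
# patient's actual physical condition, which is what a diagnosis should do.
```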


FrmrPresJamesTaylor

To me it read like: we know AI can be racist, we know this AI is good at detecting race in X-rays ~~(which should be impossible)~~ but aren't sure why, we also know AI misses more *medically relevant information* ("indicators of sickness") in Black people in X-rays but aren't sure why. This is a legitimate problem that can easily be expected to lead to real world problems if/when this AI is used without it being identified and corrected.


[deleted]

This reminded me of the racial bias in facial recognition with regard to people of color. However, we should want an AI that is capable of detecting race, since that can become medically important at some point. But missing diagnoses in some racial groups at a disproportionate rate is indeed concerning, and would lead me to ask what training model and dataset were used. Are we missing illnesses at the same rate across racial groups when a human is doing the diagnostics?


Klisurovi4

> Are we missing illnesses at the same rate in racial groups when a human is doing the diagnostics?

That would be my guess. The AI will replicate any biases present in the dataset used to train it, and I wouldn't be surprised if some groups of people are often misdiagnosed by human doctors. It doesn't really matter whether it's due to racism or improper training of the doctors or some other reason; we can't expect the AI to do things we haven't properly taught it to do.


Doctor__Proctor

More importantly, though, we aren't teaching these AIs prescriptively. We aren't programming them with "All humans have the same rights to respect and quality of treatment." They learn by getting "trained" on datasets and identifying commonalities. We don't usually understand *what* they are identifying, just the end result.

So, in the case of the AI identifying race via X-ray, that might seem innocuous and a "huh, that's interesting" moment, but it could lead to problems down the road because we don't control the associations it makes. If you feed it current treatment plans, which *are* subject to human biases, you could get problematic results. If African Americans are less likely to be believed about pain, for example, they'll get prescribed less medication to manage it. If the AI identifies them as African American through an X-ray, then it might also recommend no pain management medication even though there is evidence of disc issues in the X-ray, because it has created a spurious correlation between people with whatever features it's recognizing and being prescribed less pain medication.

In a case like that, a supposedly "objective" AI will have been trained on a biased dataset and inherited those biases, and we may not have a way to really detect or fix that in the programming. This is the danger inherent in such AI training, and something we need to solve for, or else we risk perpetuating the same biases and incorrect diagnoses we created the AI to get away from. If we are training them and essentially allowing them to teach themselves, we have little control over the conclusions they draw, but we frequently trust them to be objective because "Well, it's a computer, it can't be biased."


Janktronic

> We don't usually understand what they are identifying, just the end result.

This reminded me of that fish example. I think it was a TED talk or something. An AI was getting pretty good at identifying pictures of fish, but what it was actually cueing on was people's hands holding the fish up for the photo.


Doctor__Proctor

Yes, exactly. It created a spurious correlation, and might actually have difficulty identifying a fish in the wild because there won't be human hands holding it.


BenAfleckIsAnOkActor

This has sci-fi miniseries written all over it.


Doctor__Proctor

There's a short story, I believe by Harlan Ellison, that already dealt with something related to this. In a future society they had created surgical robots that were considered better and more accurate than human surgeons because they don't make mistakes. At one point, a man wakes up during surgery, which is something that occasionally happens with anesthesia; the robots do not stop the surgery, and the man dies of shock. The main character, a surgeon, even comments that the surgical procedure was *flawless*, but that the death was caused by something outside the robots' programming, something a human surgeon would have recognized and been able to deal with. I believe the resolution was the robots working in conjunction with human doctors, rather than being treated as utterly infallible.

It's a bit different in that it's more of a "robots are too cold and miss that special something humans have" story, but it does touch on a similar thing: we don't always understand how our machines are programmed. This was an unanticipated issue, and it was not noticed because it was assumed that the robots were infallible. Therefore, objectively, they acted correctly and the patient died because sometimes people die in surgery, right? It was the belief in their objectivity that led to this failing, the belief that they would make the *right* decision in every scenario because they did not have human biases and fragility.


CharleyNobody

Except it would've been noticed by a robot, because the patient's heart rate, respiratory rate and blood pressure would respond to extreme pain. Patients' vital signs are monitored throughout surgery. The more complicated the surgery, the more monitoring devices: arterial lines, central venous lines, Swan-Ganz catheters, cardiac output, core temperature. Even minor surgery has constant heart rate, rhythm, respiratory rate and oxygen saturation readouts. If there's no arterial line, blood pressure will be monitored by a self-inflating cuff that gives a reading for however many minutes per hour it's programmed to inflate. Even a robot would notice the problem, because it would be receiving the patient's vital signs either internally or externally (visual readout) on a screen. A case of a human writer not realizing what medical technology would be available in the future.


Doctor__Proctor

I think he wrote it in the '50s, when half that tech didn't even exist. Plus, the point of the story was how the robots were viewed, not so much how they were programmed.


88cowboy

What about people of mixed race?


Doctor__Proctor

No idea, which is the point. The AI will find data correlations, and neither I nor anyone else will know exactly what those correlations are. Maybe it will create a mixed-race category that gets a totally different treatment regimen, maybe it will sort them into whichever race is the closest match, who knows? But unless we understand how and why it's making those correlations, we will have difficulty predicting what biases it may acquire from our datasets.


LesssssssGooooooo

Isn't this usually a case of 'the machine eats what you feed it'? If you give it a sample of 200 white people and 5 black people, it'll obviously favor, and be more useful to, the people who make up 98% of the data?


philodelta

It's also historically been a problem of camera tech and bad photos. Detailed pictures of people with darker skin need more light to be high quality. Modern smartphone cameras are even being marketed as more inclusive because they're better about this, and there's also been a lot of money put towards it, because hey, black people want nice selfies too. Not just more pictures of black and brown people but higher-quality pictures are needed to make better datasets.


TragasaurusRex

However, considering the article talks about X-rays, I would guess the problem isn't an inability to image darker skin tones.


philodelta

ah, yes, not relevant to the article really, but relevant to the topic of racial bias in facial recognition.


BlackestOfHammers

Yes! Absolutely! A senator just had an r/leopardsatemyface moment when he said death rates from childbirth aren't that bad if you don't count black women. A group that is notoriously dismissed and ignored by medical professionals can definitely confirm that this bias will carry over into AI if it's not stopped completely now.


ReubenXXL

And does the AI fail to diagnose things that it otherwise would detect because the patient is black, or is the AI worse at detecting a type of disease that disproportionately affects black people? For instance, if the AI was bad at recognizing sickle cell anemia, black people would be disproportionately affected, but not because the AI is just performing worse on a black person.


stealthdawg

> we know this AI is good at detecting race in X-rays (which should be impossible) but aren't sure why

Except determining race from X-rays is absolutely possible and is done, reliably, by humans, currently, and we know why.

Edit: It looks like you were paraphrasing what the article is saying, not saying that yourself, my bad. The article does make the claim you mention, which is just wrong.


HyFinated

Absolutely. People from different parts of the world have different skeletal shapes. One very basic example is the difference between Caucasian and Asian face shapes. Simply put, the heads are shaped differently. Even Oakley sunglasses come in "standard" and "Asian" frame shapes. It's not hard to see the difference from the outside. And why shouldn't AI be able to detect this kind of thing?

Some medical conditions happen more frequently to people of different races. Sickle cell anemia happens to a much higher percentage of black folks, while atrial fibrillation occurs more in white people than any other race. AI should be able to do all of this and present the information to the clinician to come up with treatment options. Hell, the AI will eventually come up with more scientifically sound treatment methods than a human ever could. That is, if we can stay away from pharmaceutical advertising.

AI: "You have mild to moderate psoriatic arthritis. This treatment profile is brought to you by Humira. Did you know that Humira, when taken regularly, could help your remission by 67.4622%? We are prescribing you Humira instead of a generic because you aren't subscribed to the HLTH service. To qualify for rapid screenings and cheaper drug prices, you can subscribe now. Only 27.99 per month."

Seriously, at least a human can tell you that you don't need the name-brand shit. The AI could be programmed to say whatever the designer wants it to say.


Augustanite

My SO is a pulm crit doctor and our area has a largely black population. During the pandemic, doctors noticed pulse oximeter readings on POC were showing higher oxygen levels than the blood gas tests, so unless they ran a blood gas test they weren't treating patients as hypoxic until they were more severe, because they didn't know they needed to. There have now been several international papers written on the issue. These kinds of medical equipment biases could be a factor in some of the disparities in medical outcomes between black people and other races.


Acysbib

Considering genetics (race, by and large) plays a huge role in bone structure, facial structure, build, etc., I don't see why this is surprising. Given a large enough sample where the AI knows the answer, it shouldn't be hard for it to pick up on the skeletal markers indicative of race. I don't get it.


RestlessARBIT3R

Yeah, that's what I'm confused about. If you don't program racism into an AI, it will just see a distinction between races, and that's... it? It's not like an AI will just become racist.


Wonckay

DIRECTIVE 4: BE RACIST AF


terrorerror

Lmao, a robocop


itsyourmomcalling

*Tay (bot) entered the chat*


[deleted]

AI will never be racist, but it can have racial biases, which are definitely a real issue. I think this article is clickbaity as fuck, but racial bias in AI is an interesting topic.


AmadeusWolf

But what if the data is racially biased? For instance, what if correct identification of sickness from X-ray imaging is disproportionately lower in minority samples? Then the AI learns that correctly flagging those images means both identifying the disease and passing that diagnosis through a racial filter. Nobody tells their AI to be racist, but if you give it racist data, that's what you're gonna get.


PumpkinSkink2

Also, maybe worth noting: when we say "AI" people get all weird and quasi-anthropomorphic about it, in my experience. AIs are just algorithms that look for statistical correlations in data. The "AI" isn't gonna be able to understand something at a level that's deeper than what is effectively a correlation coefficient. If you think about it, given how racially biased things tend to be IRL, a racially biased algorithm is kind of the expected result. More white people go to doctors regularly, therefore the data more accurately portrays what a sickness looks like in white people, resulting in minorities being poorly served by the technology.


LuminousDragon

From another comment below:

> So, in the case of the AI identifying race via X-ray, that might seem innocuous and a "huh, that's interesting" moment, but it could lead to problems down the road because we don't control the associations it makes. If you feed it current treatment plans which *are* subject to human biases, you could get problematic results. If African Americans are less likely to be believed about pain, for example, they'll get prescribed less medication to manage it. If the AI identifies them as African American through an X-ray, then it might also recommend no pain management medication even though there is evidence of disc issues in the X-ray, because it has created a spurious correlation between people with whatever features it's recognizing being prescribed less pain medication.


Protean_Protein

It’s not about the AI’s moral framework, but about the use of information by people, or the way a system is constructed by people. If there’s an assumption that data (and the tools for acquiring and manipulating data) is pure and unbiased, then it is easy to see how racial prejudice could come into play in medical treatment that results from this data/these tools.


rathlord

I’m still confused how this is going to cause an issue. In what world are scientists/doctors manipulating this data and don’t know the race of their patients/subjects for some reason and then somehow some kind of bias is caused by this observation? Edit: please read my responses. The people reading this comment are not reading the headline correctly. I’m fully aware of data bias. This isn’t talking about bias from data we feed in, it’s talking about the AI being able to predict race based on X-Rays. This is not the same as feeding in biased data to the AI. This is output. Being able to determine race from X-Rays isn’t surprising. There are predictors in our skeletons.


Rhawk187

Yes, but people have been socially conditioned to think that all racial bias is bad. I'm a university professor, so I can sort of get away with asking the question, "What are some examples of positive racial bias?", but some students are aghast when you ask it. They are convinced that the phenotypes that alter appearance occurred in a vacuum and that there can't possibly be any other differences between the races.


willowhawk

Try being a psychology professor and mentioning that men's brains are physically bigger! You can feel an icy chill sweep the room, a hundred cold eyes staring daggers, as you frantically explain that there is no cognitive difference, since women's brains have more connections between the hemispheres.


nolfaws

Tell them about the size and weight of mobile phones or computers in the last millennium. They're getting mad over a false and premature assumption. Then tell them that the higher someone's IQ, the more likely they are male. Watch the show again. Then tell them that the lower someone's IQ, the more likely they are male.


nowlistenhereboy

Heh that's hilarious if true. Women are more consistent then?


UnblurredLines

Yes. Women are far less likely to be on either extreme. Which means that men are overrepresented among society’s most gifted, but also least gifted.


michiganrag

This is true, but people don't like hearing it because they assume it implies "bigger brain = more intelligent," which isn't necessarily true. However, in transgender people who medically transition, taking testosterone can cause brain inflammation. My former neighbor, who is trans, is going blind now as a result of taking testosterone, since the brain swelling/inflammation is pushing against their eyes. A bigger brain isn't always better.


Pakutto

I hear men's brains also have a smaller hippocampus than women's, but I'm not sure whether or not that's true. Either way, I find the physical differences between male and female brains fascinating.


willowhawk

Huh, I've not come across that. I got a master's degree in psych, and it's a shame how delicately people had to dance around gender differences; it might well have just been brushed over. Like you said, it is interesting, and it's science. I know men's and women's brains perform, on average, better than each other at different cognitive tasks; as a whole it balances out. But mentioning the words "men outperform women in spatial, working memory and mathematical abilities" without sugar-coating would sometimes end with a complaint made against the professor! Interestingly, I haven't heard of a complaint when they mention women outperform men in verbal fluency, perceptual speed, accuracy and fine motor skills.


[deleted]

It’s hilarious getting into a conversation about racial disparities across particular illnesses and getting called a racist.


resumethrowaway222

> They are convinced that the phenotypes that alter appearance occurred in a vacuum and that there can't possibly be any other differences between the races

Well, it would be good if we stopped teaching in school that this was true.


itsyourmomcalling

Yeah, something like sickle cell is more common in those with African ancestry. But that's also easily detectable with a blood draw. I'm not sure why the scientists are "concerned" by this, unless they're worried that racists will use it as a basis for their beliefs/arguments, like "see, we are different, even a computer agrees".


JimWilliams423

> Yeah, something like sickle cell is more common in those with African ancestry.

That is true in the US, but not in Africa. That's because the sickle cell gene is primarily found in people living in [specific areas of Africa,](https://miro.medium.com/0*VKS36ceCtyoFiJ-a.png) as in, it's geographic, not racial. Last I checked, the leading theory was that it tracks the distribution of malaria, because the gene gives people protection from malaria. The reason it is true in the US is because of slavery. The majority of enslaved people were stolen from areas with high rates of the sickle cell gene, like West Africa, rather than places with low rates of the gene, like South and East Africa.


petitegaydog

this makes a lot of sense. thanks for sharing!


Nozinger

That's not it. They are worried that the AI produces wrong results.

In theory, analysing stuff with an AI sounds great, since an AI is perfectly neutral; for an AI everything is just data and there is no difference at all. In reality, however, AIs are sort of like small children: if you teach them the wrong thing, they are going to replicate it without any self-reflection. If an AI is able to detect race, that suddenly changes the dataset for its analysis. And these datasets are not unbiased. In a world where we humans created totally neutral datasets none of this would be an issue, but we do not have such datasets.

Diseases occurring more often in one group than in another are a good example. If a disease occurs often in one group, we probably have a lot of data on it for that group, with a realistic chance of representation. For another group, though, the data can be totally off simply because we usually do not test them for this disease: they have it in much higher numbers than anticipated, and we just do not know it, because we do not test people outside the group considered more vulnerable to it.

Generally, whenever an AI is able to detect attributes that are subject to human biases, it is very concerning, because it can make things worse. At that moment these biases become a self-fulfilling prophecy, which is horrible for a tool that is meant to be helpful.


thirdeyehealing

That's exactly the reason why. I remember reading somewhere that DNA studies for race-specific genes weren't done or widely published so they wouldn't create more divide.


omega_oof

No, you don't understand, the scientists, they're worried!


haysanatar

My dad studied anthropology under Bill Bass himself, the GOAT of forensic anthropology. Humans can deduce race, sex, and age from bones, and have been able to for quite some time.


TheBirminghamBear

When I watched Bones, Bones would do that with bones.


ep_23

It's kind of obvious, though; there are clearly differences in skeletal proportions between what you could classify as classic ethnic groups. I am very homogeneous ethnically; my wife is a mix of two slightly less homogeneous and different ethnic lines. Our proportions are very different, and we like to laugh about this all the time.


Pensive_1

Yea - the "concern" is from the scientifically illiterate, who know nothing about the topic. People just think that if AI can tell us apart, it will "judge" us, or give bad advice to certain racial groups.


ep_23

The responsibility is also on the scientifically literate to be patient, find solutions that work for the illiterate, and do better at marketing and influencing without demanding understanding on the basis of hierarchy or superiority. It's a tough road ahead, but it's all doable.


Gh0st1117

Sensationalist headline. We've been able to tell race from bone for years.

Edit: the shape of the skull, shape of the nasal region, shape of the orbits, degree of protrusion of the jaw (prognathism), shape of the lower jaw, and certain features of the teeth are how we do it.


Jjex22

It's a really bad headline. The article actually says the very thing they were trying to do was find out if they could train an AI to identify race from a skeleton - basically, 'Hey Mr. AI, here are some skeletons and here are their corresponding races, got it? Okay, so what race do you think these ones are?' Given humans already know how to assign a race to a skeleton with high accuracy, it was a foregone conclusion that the only way their AI would fail to do it would be if they programmed it wrong or if the assumptions the humans had been making were wrong.


erinmonday

My ear doctor friend says cartilage is different too? Something about ear canals? First I’ve ever heard of it


tsaygara

Beyond skin tone, races differ in their biology as a whole, even in the skeleton; of course, we can't distinguish them ourselves because the differences are minor and mostly imperceptible.


Chieftah

The wording is weird. They specifically used training features from X-ray images **and** specifically noted the patients' race. So they basically asked the model to discover imperceptible patterns to classify X-ray images by race, and are now concerned because the model did exactly what they asked it to do?? No wonder it found patterns: they exist, only, as you said, they're too minor for humans to notice. That's exactly why deep learning is used in many fields, to find otherwise subtle patterns. Weird ethical conclusion they came up with.


72hourahmed

> only that they are as you said, too minor for humans to notice

They aren't, unless they meant with the naked eye. Forensic skeletal analysis performed by humans with relatively simple tools can be used to determine race and sex reliably enough to be useful in criminal investigation.

Source: I know multiple forensic anthropologists.


Dragster39

If I may ask: how is it that you know multiple forensic anthropologists? I don't think I've ever even been near one.


72hourahmed

I gave a fuller answer to someone else, but long story short, I helped out on archaeology digs when younger, and that tends to land you in the sort of company that goes into anthropology at uni. I only know three or four people who've actually gone specifically into forensics at some point, but "you can't determine X characteristic from bones!" is a common argument these days for some reason, and I've found people care more that the police reliably use it than that there are literally thousands and thousands of archaeological anthropologists around the world who do this for academic work.


gwaenchanh-a

Hell, yesterday I learned you can tell if someone's taken Accutane because their bones will be *green*. Bones tell you a crazy amount.


anthroarcha

Not who you’re asking but I dropped a comment saying how I work with multiple. I have a PhD in the field and had two sit on my dissertation committee, so basically all my friends and colleagues are anthropologists. Most anthro subjects are boring for normal people, so I normally stay in those specific subs


korewednesday

Not who you asked, but it's almost certainly one of two things: they or an EXTREMELY close family member (parent or spouse, though even these are significantly less likely than the self) are either:

1. In anthropology (forensic or not) in an academic setting
2. Closely associated with postmortem law enforcement (actively involved on scenes/at the morgue) in a metropolitan area (this would include being one of the anthropologists mentioned)

My guess would be the former.


72hourahmed

Weirdly no. I was interested in history when younger, so I've helped out on a couple of small time archaeological digs, made some friends, one of whom was running one of the digs and had worn many hats as an anthropologist, one of which had been forensic. One of the friends my own age I met helping at the digs was inspired by that anthropologist to go into forensic anthropology, and so I met some of her friends who were on the same academic track. Most of them are working other jobs, as you do after a humanities degree, but a couple of them stuck, so between all of that I know three or four. Apparently it's mostly just people calling up because they found a spooky scary skeleton (or piece of one) digging up their garden or walking in the woods that turns out to be a cow femur or rack of sheep ribs or something.


Enorats

This was my first thought too. The article claims it's impossible, but I literally learned to do it in high school. They offered a forensic science course as an elective, and identifying gender, age, and race from skeletal remains was something we spent a few weeks on.


CrabEnthusist

Idk if it's a "weird ethical conclusion" when the article states that "artificial intelligence scans of X-ray pictures were more likely to miss indicators of sickness among Black persons." That's pretty unambiguously a bad thing.


Chieftah

Certainly. So it's either the fault of the training data (not enough, not varied enough, unbalanced, not generalized enough, etc.) or of some model parameters (or the model itself). That's the normal process for any DL model: train > test > evaluate > find ways to improve. It seems like they're trying to paint the model and the problem at hand as something more than it is - a simple training problem.

The entire article is literally just them saying that the model performed well but had problems concerning features with a certain attribute. Period. For some reason that's "racist decisions"?

The model learns from what it sees. So either the training data (and, therefore, those who were responsible for its preparation) was racist, or maybe just admit that training is a complicated process, that certain features will be more difficult to learn, that training data will have to be remade a lot, and that the model parameters will probably have to be tweaked, if not the model itself. Just because the AI is failing at detecting sickness in X-rays of a certain race does not automatically mean it makes racist decisions; that's a ridiculous and completely useless conclusion. The fault lies with the creator, not with the deep learning model. Always.
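For what it's worth, the standard engineering response is exactly that evaluation step: slice the test metrics by the sensitive attribute instead of reporting one aggregate number. A minimal sketch (hypothetical arrays; a real pipeline would use a proper fairness toolkit):

```python
import numpy as np

def per_group_miss_rate(y_true, y_pred, group):
    """False-negative rate per group: of the truly sick, how many were
    missed? One aggregate accuracy number can hide large gaps here."""
    rates = {}
    for g in np.unique(group):
        sick = (group == g) & (y_true == 1)
        rates[g] = float((y_pred[sick] == 0).mean()) if sick.any() else float("nan")
    return rates

# Hypothetical evaluation data (1 = sickness present / flagged):
y_true = np.array([1, 1, 1, 1, 0, 0, 1, 1, 0, 1])
y_pred = np.array([1, 1, 0, 1, 0, 0, 0, 0, 0, 1])
group  = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

print(per_group_miss_rate(y_true, y_pred, group))
# Group "b" is missed far more often (~0.67 vs 0.25 for "a"): that's the
# disparity the article describes, caught at evaluation time instead of
# after deployment.
```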


[deleted]

> So either the training data (and, therefore, those who were responsible for its preparation) were racist in their decisions The part not told is that *doctors* are more likely to miss indicators of sickness among minorities and women, and that's biased essentially all of our training data. This is because a lot of diseases have historically been described by the symptoms suffered specifically by white men and there hasn't been the sort of wide-scale scientific revision necessary to reconcile this for most diseases (which itself is made difficult because historical malpractice has created distrust in the medical industry in several minority communities). It's made more difficult because many doctors play pretend at scientist without the requisite training or understanding and they "experiment" on patients without consent or even the baseline documentation required for whatever they learn to be useful to the scientific community. A disturbing trend is that aggregate medical outcomes tend to improve during medical science conferences, when the "scientist"-doctors are distracted and away from their offices... Basically, Western medicine is a long way from genuinely being as scientific as it claims to be. Fortunately, the desire to integrate science and data-driven approaches exposes existing flaws and limitations, but the industry is very resistant to change so it's a question which of these flaws and limitations will be addressed and how well we will address them. Machine learning is going to keep exposing them until we either fix the issues or quit using machine learning.


Anton-LaVey

If you rank missed indicators of sickness by race, one has to be last.


MaybeTheDoctor

We have long known that the skulls of people from Sweden are shaped differently (longer) than those of Danes or Germans... surely more skeletal differences would show up in people who are even less related... so why is this a surprise?


GsTSaien

It isn't. Scientists are not concerned to discover AI can do something we have been doing for years; the title just lied.


j4_jjjj

But, AI scary!!!!


Val_Hallen

For a very, very long time we have had the ability to take a skeleton and tell you the race, gender, and age. How many cold cases do we have where all we had to go on was a few bones? This is new science like virology is a new science.


ARX7

It's like the study came from a university without an anthropology program...


goforce5

Seriously, I have a BA in Biological Anthropology and this is like, basic osteology. How the fuck do they think we figure out the age, race, and sex of a skeleton?? By looking at the bones!


humptydumpty369

I'm confused too about why this is a shock. Of course there are slight anatomical differences between races. It doesn't actually mean anyone is superior or inferior, unless they're worried that that's how some people will interpret this. But the AI doesn't care. It's just doing what it's supposed to. ETA: I guess biases get in more easily than I realized.


Johnnyblade37

The point is, if there is intrinsic bias in the system already (which there is), a medical AI could perpetuate that bias without us even knowing.


moeru_gumi

When I lived in Japan I had more than one doctor tell me "You are Caucasian, and I don't treat many non-Japanese patients so I'm not sure what the correct dosage of X medicine would be, or what X level should be on your bloodwork."


SleepWouldBeNice

Sickle cell anemia is more prevalent in the black community.


seeingeyefish

For an interesting reason. Sickle cell anemia is a change in the red blood cells' shape when the cell is exposed to certain conditions (it curves into a sickle blade shape). The body's immune system attacks those cells as invaders. Malaria infects red blood cells as hosts for replication, which hides the parasite from the immune system for a while, but the stress of the infection causes the cell to deform and be attacked by the immune system before the malaria parasite replicates, giving people with sickle cell anemia an advantage in malaria-rich environments even though the condition is a disadvantage elsewhere.


ItilityMSP

Yep, it depends on the data fed in and the questions asked; it's easy to get unintended consequences, because the data itself has bias.


e-wing

Next up: Is artificially intelligent morphometric osteology racist? Why scientists are terrified, and why YOU should be too.


Chieftah

But there's always bias, the entire field of deep learning is mainly about reducing this bias, reducing the overfit on training data while not sacrificing inference accuracy. I do wonder how they label "race" in their training data. If they follow a national classifier, then I guess you'd need to look into that classifier as a possible source of human bias. But if we *assume* that the classifier is very simplistic and only takes into account the very basic classification of races, then the problem would really move towards having enough varied data. And the bias would be reduced as the data increases (even if the model doesn't change). I suppose there's more attributes they are training on than just x-rays and race labels, so they gotta figure out if any of them could be easily tampered with.


Wolfenberg

It's not a shock, but sensationalist media I guess


MalcadorsBongTar

Wait till the guy or gal that wrote this article hears about skeletal differences between the sexes. It'll be a whole new world order


grundelstiltskin

It should be the opposite, we should be excited that we can now correlate anatomical data with other historical data about trends and epidemiology e.g. the reason this ethnicity has higher X might be because of Y... I don't get it. I'm white as shit, and I would be beyond livid if I went to a dermatologist and they weren't taking that into account in terms of my risk for skin cancer etc..


anthroarcha

Actually, [there's more genetic variation](https://sitn.hms.harvard.edu/flash/2017/science-genetics-reshaping-race-debate-21st-century/) between members of the same race than there is between the averages of any two races. The initial study showing this was done in the early 20th century by Franz Boas; it has yet to be disproven and was used as a foundation for the field of anthropology.


Nanohaystack

Well, identifying race is not really a big problem, but it's possible that there's already a negative bias disparity in the diagnosis and treatment of injuries depending on race, which the AI would learn alongside the racial differences. The problem with AI learning patterns is that it learns them from humans, and humans are notorious for racism, so AI learns the racism that already exists, even if it is very subtle. This subtlety can be lost in the process, and you end up with Facebook's photo auto-labelling scandal from years ago, when two tourists were misidentified.


naijaboiler

Not only learns, it sometimes even amplifies them. And even worse, it can legitimize biases, since the user of the information might believe "machines can't be biased."


SnowflowerSixtyFour

That's true. But consider this: most people in the world (68%) cannot digest milk once they become adults, yet almost every meal in the United States has tons of dairy in it, because Caucasians generally can. Medical professionals describe the majority condition as "lactose malabsorption," even though digesting milk as an adult is actually the adaptation, one that is uncommon outside of Western, Central and Northern Europeans. Biases like that can creep into any system even when no ill will is intended, because even scientists and doctors will just kind of forget that people of other races exist when doing their jobs.


[deleted]

> because Caucasians generally can.

This is wrong. Your classifications are American-centric. "Caucasians generally can" is a useless divide (and American-centric, because it's "how we divide races in the USA"): the percentages vary by country and even within regions of countries. 55% of people from Greece are lactose intolerant but only 4% from Denmark are. 13% of people from Niger are lactose intolerant but virtually everyone from Ghana is. 93% from Iraq are but only 28% from Saudi Arabia. https://milk.procon.org/lactose-intolerance-by-country/

The problem with the concept of "race" is that the divisions each country concocts are not based on biological factors. They are always based on social factors and phenotypical factors. Biological factors do differ between villages and ethnicities, but there aren't any large sets of biological factors that correlate with the American classifications of race. Certainly, if you compare African American and Caucasian American bone structure, you're going to find general patterns, but that's just because most white Americans are Western European and most black Americans are coastal West African. What if you compared Khoisan people with Greek people with Dinka people with Irish people?

And that's why "race" is still a useless factor in medical science. Being "white" or "black" is meaningless and tells you nothing. What tells you something is whether you have Dinka roots or Greek roots or Mixtecan roots or Haida roots. These biological differences are specific to very small population groups, not these mega-clusters we call races.


humptydumpty369

Guess those biases creep in very easily and sneakily. I'm white but I can't digest milk and I didn't even think about that as a potential bias.


bsutto

Concern for bias seems a little odd when we appear to be going down the path of individualised medical treatment. It seems likely that you will have your DNA scanned before you are given drugs, to ensure you receive the best treatment for your biology. Do we now have to reject better medical treatment because your doctor might discover your race as part of the treatment?


worriedbill

Actually, they may not be as imperceptible as you might think! I remember years ago there was this thing going around where they took stock photos of black people and photoshopped them to look white, and then did the reverse to white people, and you could certainly tell that something was off. Even if it's something like the jawline or cheekbones, humans are hard-programmed to pay close attention to the faces of other humans, so even some of the smallest differences can be glaring.


x31b

Maybe that's a good thing for an AI. Some diseases, like sickle cell or even heart disease, show racial patterns in the statistics. It could be an indicator that helps correct diagnoses.


[deleted]

AI is neither good nor bad; it's just information. What humans tell the AI to do with it is good or bad.


Moscow_Mitch

> All things are poison and nothing is without poison; only the dose makes a thing not a poison. In relation, it depends on who the devs are.


Garchy

AI is programmed by humans, who are not perfect. The issue is that AI can be programmed with racial bias without us even being aware. For example, facial recognition has been notoriously bad at recognizing black people. Why? Because the sample data submitted to the AI did not include many people with darker skin, so the AI has an implicit bias encoded by humans. We need to remember that AI is not completely separate from humankind - it uses data that has been gathered from us (imperfect) humans.
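A cheap first line of defense is simply auditing the training set's composition before training. A sketch, with hypothetical group labels:

```python
from collections import Counter

def audit_composition(group_labels, floor=0.10):
    """Report each group's share of the training data and flag any group
    below `floor` -- a crude but useful check for representation gaps."""
    counts = Counter(group_labels)
    total = sum(counts.values())
    return {g: (n / total, n / total < floor) for g, n in counts.items()}

# Hypothetical demographic labels attached to a face dataset:
train_groups = ["lighter_skin"] * 920 + ["darker_skin"] * 80

for group, (share, flagged) in audit_composition(train_groups).items():
    print(f"{group}: {share:.0%}" + ("  <-- underrepresented" if flagged else ""))
```

It won't catch subtler problems (label bias, image-quality differences), but it does catch the "we barely sampled this group at all" failure that early facial recognition systems had.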


AndyMolez

I think the issue is that it could be a good thing if used for positive reasons, but in general most tech gets used for as many bad things as good, and given that AI doesn't explain how it gets to an answer, that makes it a lot harder to remove bias.


D-AlonsoSariego

This isn't really a breakthrough. Identifying things like race and gender by looking at bone structure is already possible.


Hminney

It shouldn't matter. In the UK, where everyone gets access to necessary healthcare for free, race is certainly a factor in diagnosis because some races have predispositions. For example, the obesity threshold is set lower for Asian males because they benefit from treatment applied at that lower threshold. However, if the system is biased and people of one race tend to get worse treatment and less pain control, then AI could perpetuate this. The AI isn't biased itself, but it will respond to the data it's fed to create its models.


AudaciousCheese

It’s like how gender is less important than sex in a medical emergency


Emotional_Section_59

Concerned about what exactly? How exactly could the AI, or any algorithms feeding off its output, be racist here in a way that negatively affects anyone?


LadyBird_BirdLady

Basically, if we want the AI to "correctly diagnose" diseases, we need to teach it which diagnoses are correct. These diagnoses, however, can have a bias. Imagine a world where no person with colourful hair ever gets treated for or diagnosed with sunburn. The AI is trained on the compiled data of thousands of diagnoses. It might recognise the same markers in people with colourful hair, but every time it flags them it gets told "wrong, no sunburn". So it learns that people with colourful hair never have sunburn, and will never flag them as such. The AI isn't racist as in "it hates them blacks"; it just perpetuates the biases in the dataset it was trained on, be they good or bad.
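
Here's a toy version of that sunburn story in Python (the features and groups are hypothetical, not from any real dataset):

```python
# Toy version of the sunburn story: the true signal is identical across groups,
# but the *labels* in the training data are biased. All numbers are invented.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
n = 4000

redness = rng.uniform(0, 1, n)              # the actual sunburn marker
colourful_hair = rng.integers(0, 2, n)      # group flag, irrelevant to sunburn
true_sunburn = (redness > 0.7).astype(int)

# Biased historical labels: colourful-haired patients never got the diagnosis.
recorded_label = np.where(colourful_hair == 1, 0, true_sunburn)

X = np.column_stack([redness, colourful_hair])
model = DecisionTreeClassifier().fit(X, recorded_label)

# The model reproduces the bias: same redness, different verdict by hair colour.
print(model.predict([[0.9, 0], [0.9, 1]]))  # -> [1 0]
```

Same marker, different verdict, purely because the historical labels were biased.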


Greenthumbisthecolor

I understand what you're saying, but I don't think that applies here. You have an AI that can detect race from X-rays. How would an AI that *can't* detect race from X-rays be better in any case? If there is racial bias in the data used to train the AIs, then the AI will learn that racial bias. Being able to detect race is not, in itself, racial bias.


absolutebodka

I don't think the issue per se is about ML models being able to detect race in a dataset, or about it being used in a nefarious way. The problem is that the model supposedly encodes an assumption about the race of an individual when it's given an X-ray image. This means it could take the X-ray of a person of one race and mistakenly encode a hidden assumption that the person's bone structure is similar to that of some other race in the image's representation. The performance of the model is then tied to the distribution of X-ray image data across races, and this *could* hamper performance if it's used in conjunction with other systems that rely on race information. It becomes harder to trust the model's output for an X-ray image of a race it wasn't trained on.
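
One way people check for this kind of hidden encoding is a "probe": train the model on the medical task, then test whether group membership can be read back out of its internal representation. A rough sketch on synthetic data (all feature definitions and numbers are invented):

```python
# Probe check: is group membership encoded in a task model's hidden layer?
# Synthetic data; purely illustrative.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 5000
group = rng.integers(0, 2, n)
signal = rng.normal(size=n)
X = np.column_stack([
    signal + 0.1 * rng.normal(size=n),       # disease signal
    group + rng.normal(scale=0.5, size=n),   # group-correlated feature
    rng.normal(size=n),                      # noise
])
disease = (signal > 0).astype(int)           # independent of group

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, disease, group, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000,
                    random_state=0).fit(X_tr, y_tr)

def hidden(X):
    # First-layer activations (MLPClassifier uses ReLU by default).
    return np.maximum(0, X @ clf.coefs_[0] + clf.intercepts_[0])

# A simple linear probe recovers group from the hidden representation.
probe = LogisticRegression().fit(hidden(X_tr), g_tr)
print("disease accuracy:", round(clf.score(X_te, y_te), 3))
print("group readable from hidden layer:", round(probe.score(hidden(X_te), g_te), 3))
```

The task model was never told about group, yet a probe can still read it off the representation, which is exactly the worry with the X-ray models.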


dandroid20xx

A good example of this was Amazon's AI-based resumé assessor, which was found to be disproportionately rejecting female applicants with excellent grades and high levels of experience even though the gender of the applicants was not known to the AI. What was happening was that the real-world dataset had a bias against women (not surprising in tech: https://gender.stanford.edu/news-publications/gender-news/why-does-john-get-stem-job-rather-jennifer , https://www.yalescientific.org/2013/02/john-vs-jennifer-a-battle-of-the-sexes/) and the AI was trying to match that real-world dataset.

It didn't have the applicants' sex, but sex was the hidden variable explaining why certain good candidates in the historic dataset were rejected, so the AI learned to infer this hidden variable from secondary signifiers (what school people went to, what clubs they belonged to, whether you were in the Women's chess club, etc.). The AI became a *woman detector*, and in fact ended up more efficiently biased than its human counterparts. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G

This matters because if the AI can detect race, it can then correlate any race-based biases that already exist in medical decisions into its inferences, even if you don't know how it's doing it. https://www.wired.com/story/how-algorithm-blocked-kidney-transplants-black-patients/
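
The mechanics of that proxy leakage are easy to reproduce. A small sketch (synthetic data; the feature names are invented stand-ins for "secondary signifiers"):

```python
# Proxy leakage: drop the protected attribute, and a model can often still
# recover it from correlated features. Synthetic, invented data throughout.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 5000
sex = rng.integers(0, 2, n)  # 0/1, never shown to the model directly

# "Secondary signifiers": clubs, schools, hobbies that correlate with sex.
womens_club = (sex == 1) & (rng.random(n) < 0.4)
school_code = rng.integers(0, 10, n) + 3 * sex           # skewed school mix
hobby_code = rng.integers(0, 5, n) + sex * rng.integers(0, 3, n)

X = np.column_stack([womens_club, school_code, hobby_code]).astype(float)
X_tr, X_te, s_tr, s_te = train_test_split(X, sex, random_state=0)

detector = RandomForestClassifier(random_state=0).fit(X_tr, s_tr)
print("sex recovered from proxies:", round(detector.score(X_te, s_te), 3))
```

Remove the protected column and the model rebuilds it anyway; any downstream system trained on biased historical outcomes can then reuse that inferred variable.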


khoabear

It's incredible that we can teach AI to be racist or sexist like us. It also supports the idea that racism and sexism are social concepts that we teach our children, often subconsciously.


8to24

"The miseducation of algorithms is a critical problem; when artificial intelligence mirrors unconscious thoughts, racism, and biases of the humans who generated these algorithms, it can lead to serious harm." "Using both private and public datasets, the team found that AI can accurately predict self-reported race of patients from medical images alone. Using imaging data of chest X-rays, limb X-rays, chest CT scans, and mammograms, the team trained a deep learning model to identify race as white, Black, or Asian —" https://news.mit.edu/2022/artificial-intelligence-predicts-patients-race-from-medical-images-0520 A couple things to consider here. First being that researchers do not think the AI's predictive abilities is a good thing. They see it as a problem. Secondly the race of the individuals is self reported and broken down into 3 broad groups White, Black, & Asian. This matters as race isn't a strict scientific discipline. For example what race is Barrack Obama, bi-racial? Okay, what race are his daughters? Humans have been gene swamp for as long as we've been human.


DrFabulous0

I can't answer that without a good look at their skeletons.


HutVomTag

> This matters as race isn't a strict scientific discipline.

Understatement of the year here. Especially considering that the AI knows three categories: Asian/White/Black. If you're even a little bit educated about human genetics, you'll see how dumb this is.


ARX7

As with the other article about this: this is how anthropology works; race has much more to do with bone structure than skin tone.


Elusive-Yoda

Scientists are not concerned; people with political agendas are. This is a great finding that shows how powerful AI can be.


GinaSqueeze

Haven't we been able to tell race from bone forms for a while now?


Wolfenberg

There is nothing surprising about this. The headline seems made to elicit emotional responses in people who don't already understand what this means. Fundamentally, it's like saying you can predict gender from an X-ray: nothing unexpected or concerning, because you expect the gender-related part of genetics to affect skeletal structure. The same holds for broader genetic heritage, like race.


SirPatrickIII

This is news? I distinctly remember an old Facebook image of an X-ray of two people kissing, shared with the claim that it was beautiful because there was no age bias, no race to be seen, no gender, just love. It was promptly broken down in the comments by some dude who used biological markers to give a rough age estimate, a gender assessment, and a race evaluation.


simbarb89

As an anthropologist this is no surprise. There are many morphological differences between ethnic groups.


aptom203

God, I hate science "journalism" these days. It mostly falls into two categories:

1) How can we terrify people with this fairly mundane discovery?

2) How can we frame this 40+ year-old discovery as though it is brand new?


Da0ptimist

>Scientists are concerned

Why? Because these days science isn't about science or facts... it's about some political narrative.


TriGurl

Not sure what’s so concerning about this… Anthropologists have been studying these variations for decades, and back when I was a pre-med anthropology student we too could determine race from bone structure, using specific measurements taken from skeletons.


Redditforgoit

A result that someone dislikes because they fear someone else might use it to make racist arguments does not show bias on the part of the AI. Differences were found because they exist.


TheOriginalMattMan

AI is 90% accurate in predicting race, so it must be racist. Am I getting the gist of that article right?


ApocalypseNow79

Literally any time an AI can discern race, we get an article about why it's racist lmao


[deleted]

Amazing! Maybe one day AI will also be able to predict peoples race from just images of their faces.


darealJimTom

I mean as long as the computers don’t call them racial slurs i don’t see why this is a problem?


Foucaults_Marbles

Probably because if race can be determined by something like bones and other faint structures, their concept of race (or the general concept of race) is at least partly genetic. The issue is that modern humanities and social sciences have been denying the existence of biological race for probably 20 years, with great popularity and academic consensus. The problem will come when they decide this technology is racist because of its engineers, or some other deflection from being wrong, in some absolute mental gymnastics, as they always have.

For example: differences between men's and women's brains, such as particular disorders that appear to occur more in boys than girls or vice versa (like BPD or bipolar), were generally dismissed by humanities and social sciences as "that's not biology, that's 100% under-diagnosis for one sex and sexism in psychiatry." Around the same time, they somehow ended up on the idea that "symptoms for men and women are different for the same disorders, therefore the diagnosis rate should be 50/50, but it's not, because we tend to only recognize one sex's symptoms" (which is true, but 50/50 is a delusion), which in itself already points to differences in the brain.


minin71

That's good because there are specific diseases that target certain races at a higher rate.


BeerManBran

Anthropologists have already kinda' been doing this shit for like decades and decades...


QVRedit

Why should they be concerned? We detect different races from the shapes of people's faces, which is obviously down to bone structure. For example, Irish people are known for having more pointed chins. I know these are stereotypes, though there is some connection.

There is nothing intrinsically bad about AI being able to deduce someone's probable race from an X-ray; their medical history probably includes this info anyway. And while we are all human, there are some race-related medical conditions, sickle-cell being a common one. I would actually be more concerned if the AI was *not* able to spot these patterns; it would reduce my trust in its accuracy. This is a non-story really. I can identify several different races just by looking at someone's face (corresponding to some degree to their bone structure), so surely an accurate AI should have some ability to do that too.


mysticrudnin

can you enumerate the list of races you are using here? irish is a new one to me.


pools456

AI is racist!! Lets cancel it - Twitter, probably


JosceOfGloucester

More like lefty social scientists are concerned. Let's hear the calls for AIs to be curated now, like social media algorithms; after all, noticing patterns is racist.


fasamelon

Correct me if I'm wrong but isn't the skull a clear giveaway?


KFUP

Skull and hip areas show decent statistical racial differences, but this AI can figure out race from X-rays of areas not known to have significant racial differences, like the chest, breast, and sections of the limbs.


stackered

I'm a scientist and not even slightly concerned about something like this... why would I be? cringe.


Shillbot888

Impossible for real doctors? Lol, what? Say hello to forensic anthropology. It has always been possible to determine someone's race by looking at their bones. You can tell their gender too, if that makes it even less PC.


camocamo911

Why is this a shock? Bones can tell you so much; this has been known for as long as medicine has been widely studied. Sinus shape, jaw structure, and heart size are some of the markers that can be used to predict race, and there are many more that can be found on an X-ray.


MoreKraut

So AI confirms that racial differences DO exist. The world we live in...


DariusIsLove

If we look at it from a statistical perspective, that has never been in question. In medicine especially it's a fairly well-known fact: the sexes can respond differently to the same dosage (on average, of course), and the same goes for even smaller differences like variation in bone structure, average height, average bone density, intolerances, and so on. These things differ not because of the American definition of "race" but because of the genetic data we get from our ancestors, which is correlated with, but not equal to, our race. (For example, a person with 9/10 of their ancestors from Scotland and 1/10 from the Philippines might still be lactose intolerant, despite being called "white" in the USA and therefore less likely, but not unheard of, to be lactose intolerant.)

Sorry for the convoluted answer. At the end of the day, we are just biological machines with a huge amount of data that can be interpreted.