
The Brackenridge fellowship focuses on bringing interdisciplinary perspectives together. By learning about the research going on in different fields and industries, I’m broadening my perspective of the world, and even of how I look at my own research. Joining a smaller cohort this week and sharing our research topics has driven that home.
For example, my cohort has peers conducting state-of-the-art research in linguistics, pharmaceuticals, stem cell science, and rehabilitation science, among other fields. Talking with Sarah Moore and learning about her quest to pin down the semantic difference between the words “kind” and “nice” really opened my mind to the nuances of everyday language. Talking to the other Sara, Sara Zdancewicz, taught me to appreciate the intricacies of drug-protein interactions and angiogenesis.
While the projects in my cohort cover a wide variety of topics, one turned out to be in a field similar to mine. Sachit Anand is reviewing artificial intelligence bias in healthcare applications. As someone who builds medical AI algorithms in my lab, I’ve encountered this topic frequently. Many of the popular, up-and-coming AI frameworks follow a “black-box” model: the systems are so complex that there’s no way to tell exactly what the AI is looking at or how it’s using clinical data. For example, if we trained an AI on stroke data consisting of patient demographics and clinical features like blood pressure during the stroke, the AI might incorrectly correlate something like patient ethnicity with poor stroke outcomes, since confounders hidden in the training data can make a demographic feature look predictive. Now the AI is inherently biased toward a certain ethnicity, so if it ends up in the ER as a stroke outcome-predicting program, it will overlook some patients because of their ethnicity and hyper-focus on others. This is obviously a simplified example, but it’s something I’ve considered in my own research and in my approach to developing AI for deployment in healthcare settings.
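To make that concrete, here’s a minimal, hypothetical sketch of the problem. The data, feature names, and outcome are entirely synthetic and invented for illustration; this is not code from my lab or from Sachit’s review. It trains an off-the-shelf classifier on toy “stroke” data where a demographic feature leaks into the outcome, then uses permutation importance to show the black box quietly leaning on it:

```python
# Hypothetical sketch: a spurious demographic correlation hiding in a
# "black-box" model. All data here is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Toy "stroke" dataset: one clinical feature, one demographic one.
blood_pressure = rng.normal(150, 20, n)
ethnicity = rng.integers(0, 2, n)  # encoded demographic attribute

# Bake in a spurious correlation: in this toy data, ethnicity leaks into
# the outcome (standing in for unmeasured confounders in real datasets).
poor_outcome = ((blood_pressure > 160) | (ethnicity == 1)).astype(int)

X = np.column_stack([blood_pressure, ethnicity])
X_train, X_test, y_train, y_test = train_test_split(
    X, poor_outcome, random_state=0
)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance reveals how much the model relies on each
# feature -- invisible unless you go looking for it.
result = permutation_importance(model, X_test, y_test, random_state=0)
for name, score in zip(["blood_pressure", "ethnicity"], result.importances_mean):
    print(f"{name}: {score:.3f}")
```

On this toy data, the demographic feature turns out to matter as much as, or more than, the clinical one, which is exactly the kind of hidden reliance that reviews like Sachit’s try to surface.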
As my cohort and I move through the Brackenridge fellowship, I’ll certainly be bringing ideas I learn from my peers back to my lab. Already, I’m reconsidering how I build my AI frameworks to avoid the biases Sachit points out. And Sarah Moore’s research has made me realize that my AI may treat similar words like “kind” and “nice” differently, so I may need to simplify some clinical text before feeding it in. The more I learn about my cohort, the more I’m benefiting as an interdisciplinary researcher.
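For the curious, here’s a rough, hypothetical sketch of what that kind of text simplification could look like: collapsing near-synonyms and abbreviations to a single canonical token before the text ever reaches the model. The synonym map below is invented for illustration, not a real clinical lexicon.

```python
# Hypothetical first pass at "simplifying clinical text": map variant
# wordings to one canonical token. The entries are made up for this example.
CANONICAL = {
    "nice": "kind",
    "pleasant": "kind",
    "cerebrovascular accident": "stroke",  # multi-word clinical term
    "cva": "stroke",                        # common abbreviation
}

def simplify(text: str) -> str:
    text = text.lower()
    # Replace longer phrases first so multi-word terms win over fragments.
    for phrase in sorted(CANONICAL, key=len, reverse=True):
        text = text.replace(phrase, CANONICAL[phrase])
    return text

print(simplify("Patient had a CVA; the staff were nice."))
# -> "patient had a stroke; the staff were kind."
```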