IBM this week presented research investigating how AI and machine learning could be used to improve maternal health in developing countries and predict the onset and progression of Type 1 diabetes. In a study funded by the Bill and Melinda Gates Foundation, IBM researchers built models to analyze demographic datasets from African countries, finding “data-supported” links between birth outcomes and both the number of years between pregnancies and the size of a woman’s social network. In separate work, a team from IBM analyzed data spanning three decades and four countries to attempt to anticipate the onset of Type 1 diabetes anywhere from 3 to 12 months before it’s typically diagnosed, and then to predict its progression. The team claims one of its models accurately predicted progression 84% of the time.
Improving neonatal outcomes
Despite a global decline in child mortality rates, many countries aren’t on track to achieve proposed targets of ending preventable deaths among newborns and children under the age of 5. Unsurprisingly, progress toward these targets remains uneven, reflected in disparities in access to healthcare services and inequitable resource allocation.
Toward potential solutions, researchers at IBM attempted to identify features associated with neonatal mortality “as captured in nationally representative cross-sectional data.” They analyzed corpora from two recent (2014 and 2018) demographic and health surveys conducted in 10 sub-Saharan countries, building for each survey a model that, among the mothers who reported a birth in the 5 years preceding the survey, classifies (1) those who reported losing one or more children under the age of 28 days and (2) those who didn’t report losing a child. The researchers then inspected each model by visualizing the features in the data that informed the model’s conclusions, as well as how changes in the features’ values might have impacted neonatal mortality.
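The classify-then-inspect workflow described above can be sketched in a few lines. The following is a minimal illustrative example on synthetic data with a single logistic model, not IBM’s actual pipeline (which used ensemble models on real survey corpora); the features and the data-generating process here are invented.

```python
import math
import random

random.seed(0)

# Synthetic stand-in for survey rows: (births in the past 5 years, household
# size), with a made-up generating process in which more births raise
# neonatal-death risk and larger households lower it.
def make_row():
    births = random.randint(1, 6)
    household = random.randint(2, 12)
    risk = 0.05 * births - 0.03 * (household - 6)
    label = 1 if random.random() < max(0.02, min(0.9, risk)) else 0
    return [births, household], label

data = [make_row() for _ in range(2000)]

# Minimal logistic regression fit by stochastic gradient descent.
w, b, lr = [0.0, 0.0], 0.0, 0.01
for _ in range(50):
    for x, y in data:
        z = max(-30.0, min(30.0, b + sum(wi * xi for wi, xi in zip(w, x))))
        g = 1.0 / (1.0 + math.exp(-z)) - y  # gradient of the log loss
        b -= lr * g
        w = [wi - lr * g * xi for wi, xi in zip(w, x)]

# "Inspecting" the fitted model: the sign of each weight gives the direction
# of that feature's association with the predicted outcome.
for name, wi in zip(["births", "household"], w):
    print(f"{name}: {wi:+.3f}")
```

Run on this toy data, the births weight comes out positive and the household-size weight negative, mirroring the kind of directional readout the researchers extracted from their models.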
The researchers concluded that in most of the surveyed countries (Nigeria, Senegal, Tanzania, Zambia, South Africa, Kenya, Ghana, Ethiopia, the Democratic Republic of the Congo, and Burkina Faso), neonatal deaths account for the majority of deaths of children under 5, and that the percentage of neonatal deaths has historically remained high despite a decrease in under-5 deaths. They found that the number of births in the past 5 years was positively correlated with neonatal mortality, while household size was negatively correlated with it. Furthermore, they claimed to have established that mothers living in smaller households face a higher risk of neonatal mortality than mothers living in larger households, with factors such as the age and gender of the head of the household appearing to influence the association between household size and neonatal mortality.
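At their simplest, direction-of-association findings like these come down to the sign of a correlation coefficient. Here is a toy Pearson-correlation check with made-up regional numbers (not the survey data) showing the two reported directions:

```python
import math

# Invented per-region figures, for illustration only: recent births,
# household size, and neonatal deaths.
births    = [2, 3, 5, 4, 6, 1, 3, 5, 2, 4]
household = [9, 7, 3, 5, 2, 10, 6, 4, 8, 5]
deaths    = [1, 2, 4, 2, 5, 0, 3, 4, 1, 3]

def pearson(xs, ys):
    # Pearson correlation: covariance divided by the product of standard
    # deviations; ranges from -1 (inverse) to +1 (direct association).
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print(f"births vs deaths:    {pearson(births, deaths):+.2f}")    # positive
print(f"household vs deaths: {pearson(household, deaths):+.2f}") # negative
```

The study’s claims go beyond raw correlations, of course (the household-size effect is moderated by the head of household’s age and gender), but the sign checks are the starting point.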
The coauthors of the study note the limitations of their work, like the fact that the surveys, which are self-reported, might omit key information like healthcare access and healthcare-seeking behaviors. They also concede that the models might be identifying and exploiting undesirable patterns to make their predictions. Still, they claim to have made an important contribution to the research community in demonstrating that ensemble machine learning can potentially derive neonatal outcome insights from health surveys alone.
“Our work demonstrates the practical application of machine learning for generating insights through the inspection of black box models, and the applicability of using machine learning techniques to generate novel insights and alternative hypotheses about phenomena captured in population-level health data,” the researchers wrote in a paper describing their efforts. “The positive correlation between the reported number of births and neonatal mortality reflected in our results confirms the previously known observation about birth spacing as a key determinant of neonatal mortality.”
Type 1 diabetes prediction
A separate IBM team sought to investigate the extent to which AI might be useful in diagnosing and treating Type 1 diabetes, which affects about 1 in 100 adults during their lifetimes. Drawing on research showing that clinical Type 1 diabetes is generally preceded by a condition called islet autoimmunity, in which the body consistently produces antibodies called islet autoantibodies, the team developed an algorithm that clusters patients together and determines the number of clusters and their profiles to discover commonalities across different geographical groups.
The algorithm built profiles based on the types of autoantibodies present, the age at which autoantibodies developed, and imbalances in autoantibody positivity. After clustering the autoantibody-positive subjects, the researchers applied the model to data from 1,507 patients across studies conducted in the U.S., Sweden, and Finland. The accuracy of cluster transfer was reportedly high (the aforementioned 84% mean), suggesting that the autoantibody (AAb) profile can be used to predict Type 1 diabetes progression independently of the population.
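The cluster-transfer idea can be sketched with synthetic 2-D “profiles” and plain k-means; this is an illustrative stand-in under invented assumptions, not the algorithm or data from the study. Clusters are discovered in one population, then subjects from a second population are assigned to the nearest discovered centroid, and the assignments are scored against that population’s known groups.

```python
import math
import random

random.seed(1)

# Hypothetical 2-D autoantibody profiles (e.g., titer levels for two
# autoantibody types), drawn from two made-up groups per population.
def sample(center, n):
    return [(center[0] + random.gauss(0, 0.3),
             center[1] + random.gauss(0, 0.3)) for _ in range(n)]

population_a = sample((1, 1), 40) + sample((3, 3), 40)  # "discovery" cohort
population_b = sample((1, 1), 30) + sample((3, 3), 30)  # "transfer" cohort
labels_b = [0] * 30 + [1] * 30                          # ground truth for B

def kmeans(points, k, iters=20):
    # Standard Lloyd's algorithm: assign points to the nearest centroid,
    # then move each centroid to the mean of its assigned points.
    centroids = random.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            groups[min(range(k), key=lambda i: math.dist(p, centroids[i]))].append(p)
        centroids = [
            (sum(x for x, _ in g) / len(g), sum(y for _, y in g) / len(g))
            if g else centroids[i]
            for i, g in enumerate(groups)
        ]
    return centroids

# Cluster population A, then "transfer" the clusters to population B by
# assigning each B subject to the nearest A centroid.
centroids = kmeans(population_a, k=2)
assigned = [min(range(2), key=lambda i: math.dist(p, centroids[i]))
            for p in population_b]

# Cluster indices are arbitrary, so score against the better label mapping.
agree = sum(a == t for a, t in zip(assigned, labels_b))
accuracy = max(agree, len(labels_b) - agree) / len(labels_b)
print(f"cluster-transfer accuracy: {accuracy:.2f}")
```

On cleanly separated synthetic groups the transfer accuracy is near perfect; the interesting result in the study is that something similar held across real cohorts from different countries.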
In another, related study, the same team of researchers created a Type 1 diabetes ontology that captures the patterns of certain biomarkers and supplies them as features to a predictive model. The coauthors claim that when applied to the same datasets as the clustering algorithm, the ontology improved prediction performance at horizons of up to 12 months, enabling predictions of which patients might develop Type 1 diabetes a year before it’s usually detected.
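The paper’s ontology isn’t described in implementable detail, but the general pattern of encoding named biomarker concepts as model features can be sketched as follows; every concept name and threshold below is invented for illustration.

```python
# Hypothetical mini-"ontology": named biomarker patterns mapped to predicates
# over a subject's record. The real ontology, biomarkers, and cutoffs differ.
ONTOLOGY = {
    "multiple_autoantibodies": lambda r: r["aab_count"] >= 2,
    "early_seroconversion":    lambda r: r["age_at_first_aab"] < 5,
    "high_titer":              lambda r: r["max_titer"] > 100,
}

def ontology_features(record):
    # Encode each ontology concept as a 0/1 feature for a downstream model.
    return {name: int(pred(record)) for name, pred in ONTOLOGY.items()}

subject = {"aab_count": 3, "age_at_first_aab": 3, "max_titer": 250}
print(ontology_features(subject))
# {'multiple_autoantibodies': 1, 'early_seroconversion': 1, 'high_titer': 1}
```

Feeding features at this conceptual level, rather than raw measurements, is one plausible reason such an ontology could help a model generalize across cohorts.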
It’s important to note, of course, that imbalances in the datasets might have biased the predictions. A team of U.K. scientists found that almost all eye disease datasets come from patients in North America, Europe, and China, meaning eye disease-diagnosing algorithms are less certain to work well for racial groups from underrepresented countries. In another study, Stanford University researchers found that most of the U.S. data for studies involving medical uses of AI comes from California, New York, and Massachusetts.
The coauthors of an audit published last month recommended that practitioners apply “rigorous” fairness analyses before deployment as one solution to bias. Here’s hoping that the IBM researchers, should they eventually choose to deploy their models, heed that advice.