Like footprints in the snow, a distinct trail remains every time someone uses a digital device, and that trail follows them from website to website and from app to app. While it would seem time spent online—for fun or work—is a person’s own, these digital footprints do not serve them so much as they serve the artificial intelligence constantly looking over their shoulders and gathering that information for later use.
Each time someone taps on a smartphone screen—which the average person does more than 2,600 times every day—the data accessed is tracked and stored. (80% of smartphone app usage goes to querying Google or interacting on Facebook.) Much of the information left behind on the internet is widely available to anyone with the interest and (moderate) computer skills to track it down. (And the ongoing uproar over the Facebook/Cambridge Analytica debacle demonstrates just how easily personal information can be shared far more broadly than users expect or want.)
Social media sites, where many people share very personal details and fill in detailed user profiles, are mined for information that is later used to target ads. This is not a concern when the ad hawks a household appliance or new car, but should people be concerned when advertisers start selling medications and treatments based on what they have posted?
Do Hashtags Equal #Depression?
A recent article in The New York Times reported on a slew of new and existing companies that would like to analyse our digital footprints, especially those found on social media, to make decisions about mental health, including depression. Artificial intelligence can monitor how fast one types, the tone of one’s voice when using the phone, the kind of photos one posts and the hashtags used to express emotions, and then extrapolate from this information one’s well-being.
Research from Arizona State University and Georgia Tech explores the images and hashtags used on social media, specifically Instagram (owned by Facebook), and how “dark” images may be very informative regarding mental health status. As part of the study, the researchers fed more than 2 million images into an artificial intelligence engine to surface patterns pointing to the apparently deteriorating mental health of specific users.
In other words (no pun intended), a picture (and a handful of hashtags) is worth more than a thousand words when it comes to attempting to diagnose mental health from afar.
Today, some social media outlets have already taken on the task, and potential risk, of monitoring users themselves, which is problematic since they are not healthcare professionals:
“Although Instagram and other social media platforms have put in place some intervention policies to bring help to those users who engage in mental health disclosure, at best, they can be called ‘blanket’ strategies. This is because the interventions are neither tailored to the individual or the context, nor do they leverage nuanced and subtle cues manifested in shared content.”
ASU/Georgia Tech research
Data Mining to Serve Ads
Today, as people travel from website to website they are served ads based on where they have been and what they have viewed. If a user has looked at golf clubs on one shopping site, that search will follow them when they surf, for example, to news website CNN, in the form of an in-page ad. This is expected and is part of the implicit contract made with the internet: unfettered and free access to information, in exchange for the inundation of advertising that supports it.
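The retargeting described above can be reduced to a very simple idea: an ad network ties a tracking cookie to a browsing history and uses the most recent entry to pick the next ad. The sketch below is purely illustrative—the function names and the `"cookie-123"` identifier are invented for this example, not part of any real ad platform’s API.

```python
# Toy sketch of cookie-based ad retargeting. All names are illustrative.
browsing_history = {}  # keyed by a tracking-cookie ID


def record_view(cookie_id: str, category: str) -> None:
    """Called when the visitor views a product category on any partner site."""
    browsing_history.setdefault(cookie_id, []).append(category)


def choose_ad(cookie_id: str) -> str:
    """Pick an ad for this visitor on whatever site they browse to next."""
    views = browsing_history.get(cookie_id, [])
    # Fall back to a generic ad if we have no history for this cookie
    return f"ad:{views[-1]}" if views else "ad:generic"


record_view("cookie-123", "golf clubs")  # visitor browses a shopping site
print(choose_ad("cookie-123"))           # later, on CNN: "ad:golf clubs"
```

The same mechanism becomes far more fraught when the “category” recorded is not golf clubs but an inferred mental-health signal.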
But what happens when it is not a shopping website, but, rather, a post about a bad day at work on social media? The post is likely public (although users are becoming more aware of privacy issues), and the poster likely remains the copyright holder, but the social media site “owns” the post in other ways. As more companies mine social media data to serve relevant advertising, there is a persistent risk in how personal information is used, especially when it comes to mental or physical health. (The ASU/Georgia Tech researchers, for example, accessed user data via Instagram’s readily available application programming interface, which included user bios from which they were able to extract additional detail.)
Ethical or not, it seems the logical next step in the data mining sphere is to sell culled information to companies that will use it to, for example, buy ads for prescription drugs and place them in the social media feeds of or the websites visited by those who display depression through photos or text. This, of course, again raises the issue that these companies are not healthcare providers and the suggested medications may be contraindicated or unnecessary.
And just last week CNBC revealed Facebook was in talks with several healthcare organisations to acquire de-identified patient data and combine it with Facebook data in a process called “hashing,” which allows two unrelated data sets to be joined so that people appearing in both can be matched. Although the matched person technically remains anonymous, Facebook or another organisation would have enough information to market healthcare products to a specific person.
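In principle, hash-based matching works by running the same one-way hash over a shared identifier in each data set and joining on the hashes, so the raw identifier is never exchanged. The sketch below assumes email addresses as the shared identifier; the actual fields and procedure Facebook discussed were not made public, so everything here is a hypothetical illustration.

```python
import hashlib


def hash_id(identifier: str) -> str:
    # Normalise, then hash, so both parties produce identical digests
    # without ever sharing the raw email address
    return hashlib.sha256(identifier.strip().lower().encode()).hexdigest()


# Hypothetical records held by two unrelated organisations
platform_users = {hash_id("jane@example.com"): {"hashtags": ["#tired", "#alone"]}}
health_records = {hash_id("Jane@Example.com "): {"diagnosis_group": "A"}}

# Joining on the hashes links records for people present in both data sets
matches = {h: (platform_users[h], health_records[h])
           for h in platform_users.keys() & health_records.keys()}
print(len(matches))  # the one person appearing in both sets is matched
```

As the article notes, “de-identified” is doing a lot of work here: once the join succeeds, the combined profile is specific enough to target an individual even though no name was exchanged.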
Will Advertising Dollars Force the Issue?
So should healthcare or pharmaceutical companies serve ads based on the photos, hashtags and comments found in social media posts? The issue will grow as advertisers continue to flood the internet with money: it is predicted that $119bn will be spent on internet advertising in 2021. There will undoubtedly be a desire to serve these types of ads to healthcare consumers.
Very few people are prepared to go cold turkey and stop using the internet altogether, so users will need to decide how, or if, they want to interact with the artificial intelligence that keeps an eye on their online movements. How far are we prepared to go to protect our privacy?
It will take a combination of social media companies, healthcare organisations, data brokers and, most importantly, social media and internet users coming together to rein in potential misuse and formulate acceptable use of this data. In the meantime, it remains important that internet users be circumspect when posting health information to social media, lest the data be misinterpreted by an algorithm: artificial intelligence should be used to help us in our daily lives, not make them more complex.