Mental Health Apps Are Not Keeping Your Data Safe

Imagine calling a suicide prevention hotline during a crisis. Do you ask about their data collection policy? Are you sure your data are safe and secure? Recent events might make you rethink your answers.

Mental health care technologies such as chat lines and bots are available to people in crisis. These users are among the most vulnerable of any technology and should be able to expect that their data are kept confidential, safe and protected. Recent examples of the misuse of such sensitive data are alarming. Our research shows that developers of mental-health-related AI algorithms test only whether the algorithms work; they do not address the ethical, privacy or political concerns about how the data might be used. Technology used to provide mental health care should be held to the same ethical standards as the rest of health care.

Politico recently reported that Crisis Text Line, a not-for-profit organization that claims to be a secure and confidential resource for people in crisis, had been sharing data it collected from users with its for-profit spin-off Loris AI, which develops customer service software. An official from Crisis Text Line initially defended the data exchange as ethical and “fully compliant with the law.” But within a few days the organization announced it had ended its data-sharing relationship with Loris AI, even as it maintained that the data had been “handled securely, anonymized and scrubbed of personally identifiable information.”

Loris AI, a company that uses artificial intelligence to develop chatbot-based customer service products, had used data generated by more than 100 million Crisis Text Line exchanges to, for example, help service agents understand customer sentiment. Loris AI reportedly deleted all the data it received from Crisis Text Line, although it is not clear whether that includes the algorithms trained on those data.

This incident and others like it highlight the growing value placed on mental health data as part of machine learning and the regulatory gray areas through which these data flow. At stake are the privacy and well-being of people who are vulnerable or may be in crisis; they are the ones who will bear the consequences of poorly designed technology. In 2018, U.S. border authorities refused entry to several Canadians who had survived suicide attempts, based on information in a police database: law enforcement had shared noncriminal mental health information, which was then used to flag people trying to cross the border.

Regulators and policy makers need solid evidence before artificial intelligence can responsibly be used in mental health products.

We surveyed 132 studies that tested automation technologies, such as chatbots, in online mental health initiatives. In 85 percent of the studies, the researchers did not address, either in the study design or in reporting results, how the technologies could be used in harmful ways. This was despite the fact that some of the technologies pose serious risks to health. For example, 53 studies used public social media data, in many cases without consent, for predictive purposes such as trying to determine a person’s mental health diagnosis. None of the studies we reviewed addressed the potential discrimination people might face if these data became public.

Very few studies included input from people who have used mental health services: only 3 percent appeared to involve them in the design, evaluation or implementation of the technologies. In other words, the research driving these technologies largely lacks the participation of the people who will bear their consequences.

Developers of mental health AI should investigate the long-term and possible adverse effects of these technologies, including how data are used and what happens when the technology fails the user. Editors of scholarly journals, institutional review board members, funders and others should require this. Such requirements should be accompanied by the urgent adoption of standards that promote lived-experience involvement in mental health research.

Although most U.S. states provide some protection for traditional mental health information, emerging forms of data about mental health are often not covered by policy. Regulations such as the Health Insurance Portability and Accountability Act (HIPAA) do not apply to direct-to-consumer health care products, including the technology that goes into AI-based mental health products. The Food and Drug Administration (FDA) and the Federal Trade Commission (FTC) could play roles in evaluating these direct-to-consumer technologies. Yet the FDA’s jurisdiction does not appear to extend to health data collectors such as websites, well-being apps and social networks, which excludes most “indirect” health data. Nor does the FTC cover data collected by nonprofit organizations, a central concern in the Crisis Text Line case.

It is clear that the generation of data about human distress is more than a potential invasion of privacy; it also poses risks to an open, free society. If people begin to police their speech and conduct for fear of the unpredictable datafication of their inner lives, the social consequences would be profound. Imagine a world in which we need to hire expert “social media analysts” to help us craft content that appears “mentally healthy,” or in which employers routinely screen prospective employees’ social media accounts for “mental health hazards.”

Data need not come from mental health services to be used to predict future distress and impairment. Big data and AI are being used to mine our everyday activities for new forms of “mental-health-related data,” which may evade regulation. Apple is currently working with the multinational biotechnology company Biogen and the University of California, Los Angeles, to explore whether phone sensor data, such as movement and sleep patterns, can be used to infer mental health status and cognitive decline.

Crunch enough data points about a person’s behavior, the theory goes, and signs of ill health or disability will emerge. Such sensitive data create new opportunities for discriminatory and biased decision-making about individuals and populations. How will data that label a person as “depressed” or “cognitively disabled,” or likely to become either, affect their insurance rates and premiums? Will individuals be able to contest such designations before the data are transferred to other entities?

Things are moving quickly in the digital mental health sector, and companies increasingly recognize the value of individuals’ mental health data. A World Economic Forum report values the global digital health market at $118 billion and cites mental health as one of its fastest-growing sectors. A dizzying array of start-ups are jostling to be the next big thing in mental health, with “digital behavioral health” companies reportedly attracting $1.8 billion in venture capital in 2020 alone.

This flow of private capital stands in stark contrast to underfunded health care systems in which people struggle to access appropriate services. For many, cheaper online support can seem like their only option, but that option creates new vulnerabilities that we are only beginning to understand.

IF YOU NEED HELP

If you or someone you care about is struggling or having thoughts of suicide, resources are available. Call or text the 988 Suicide & Crisis Lifeline at 988, or use the online Lifeline Chat.

This is an opinion and analysis article; the views expressed by the author or authors are not necessarily those of Scientific American.

ABOUT THE AUTHOR(S)

Piers Gooding is a senior research fellow at Melbourne Law School at the University of Melbourne in Australia. He is a socio-legal researcher who focuses on the law and politics of mental health and disability.

Timothy Kariotis is a lecturer in digital government and a Ph.D. candidate in digital health at the University of Melbourne in Australia, and also works as a policy practitioner. His research focuses on the design of digital mental health technologies, regulatory design and digital equity.
