Imagine calling a suicide prevention hotline in a crisis. Do you ask for their data collection policy? Do you assume that your data are protected and kept secure? Recent events may make you consider your answers more carefully.
Mental health technologies such as bots and chat lines serve people who are experiencing a crisis. These users are among the most vulnerable of any technology, and they should expect their data to be kept safe, protected and confidential. Unfortunately, recent dramatic examples show that extremely sensitive data have been misused. Our own research has found that, in gathering data, the developers of mental health–based AI algorithms simply test whether the algorithms work. They generally don't address the ethical, privacy and political concerns about how those algorithms might be used. At a minimum, the same standards of health care ethics should be applied to technologies used in providing mental health care.