Disclaimer: the following sentence is going to make me sound at least twice as old as I actually am. Since I have warned you, here it goes: I am often amazed by the scope and power of technology. Why am I surprised on this particular day? Because an English mental health trust is working with American researchers to develop a suicide prevention app. Yes, an app, like the kinds that currently flood almost every phone.

A mental health trust in England has produced a suicide prevention app and the conversation has been about privacy issues. PHOTO VIA PEXELS
According to an article published by the BBC, this app would be a platform for clinicians to monitor high-risk patients around the clock. The app would issue a warning if a patient was at a location associated with suicide risk, and it would also track communications through email and social media. The reasoning behind this tracking is the belief that people share more with their friends than with their doctors. The trial for this app is set to begin in June 2016, with a release expected by January 2017. Participation in the trial will be voluntary.

I think this app’s intentions are more than noble. I appreciate that such an amazing effort is being taken to lower suicide rates and to literally show patients that they are not alone. However, is technology being taken too far?

By almost any definition of the word, this app would violate whatever sense of privacy these patients could have. Essentially, the app would track their conversations and their movements. The claim is that people are more open with loved ones and close friends, but then is this app not exploiting the patients' trust? And if they are truly at such high risk, should they not remain in a facility where they can be physically monitored?

The patients, however, would be monitored voluntarily. Because of that, it could be argued that they forgo the privacy objection raised above. Does their consent make what this app will do to them okay? I do not know, but I would like to think it does.

I cannot even pretend to understand what these patients must be going through, but I would like to believe that if they are okay with their conversations and locations being monitored, then they must want that feeling of literally never being alone. Maybe it is what they truly need. Maybe they have been in a facility but do not feel safe enough in the real world.

While I am often amazed by technology (as previously stated), much of the time I am also scared by it. This time I am both, but what I feel about the development of this app is insignificant if what its actual users feel about it is positive. As I come to this realization, I hope that in the months ahead, people will realize that in order to understand this use of technology, they truly need to understand its users.