This leads to another contentious topic: how far we can trust AI when we talk to it. Although AI systems such as virtual assistants and chatbots can generate responses with roughly 90% accuracy, they still fall short. In particular, AI can struggle with very basic human behaviour, such as reading between the lines or handling ambiguity; a 2022 MIT study found this to be a common issue. In the accompanying survey, some 50 percent of users said they found it annoying when the AI system misunderstood their question, and 40 percent reported running into this problem themselves.
AI technology is improving over time and now makes only occasional minor errors. Thanks to breakthroughs in machine learning and natural language processing (NLP), for example, Google's AI answers simple queries with over 95% accuracy. With these technologies, AI can be a great help in many areas, such as customer service and healthcare. However, AI is not perfect: it can fail when dealing with vague or overly detailed prompts.
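To make this concrete, here is a minimal sketch of how an NLP system answers a simple query. It uses the open-source Hugging Face transformers library purely as a stand-in; this illustrates the general technique, not Google's actual system, and none of the accuracy figures above come from this code:

```python
# Minimal sketch: extractive question answering with a pretrained NLP model.
# A stand-in for the kind of system described above; not Google's.
from transformers import pipeline

qa = pipeline("question-answering")  # downloads a default pretrained model

context = "The Eiffel Tower was completed in 1889 and is located in Paris."
result = qa(question="When was the Eiffel Tower completed?", context=context)
print(result["answer"], result["score"])  # a clear question scores highly

# A vague prompt typically yields a much lower confidence score, matching
# the failure mode described above.
print(qa(question="What about it?", context=context)["score"])
```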
Trust in AI is also tied to its limitations. As a 2021 World Economic Forum report stated, AI systems can encode bias because they learn from the historical data on which they were trained. As a result, we can end up with AIs that favour some groups of people over others, even when the design never intended to do so. For example, AI tools used to screen job-seekers have picked up racist and sexist patterns from their training data, which is why human supervision of any decision made by an AI is crucial to validate it.
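As a minimal sketch of how this happens, the code below fits a toy classifier on synthetic hiring data in which past decisions favoured one group. Every feature name and number here is invented for illustration:

```python
# Minimal sketch (synthetic data): a classifier trained on historically
# biased hiring decisions learns to reproduce the bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
skill = rng.normal(size=n)          # a genuinely job-relevant feature
group = rng.integers(0, 2, size=n)  # a protected attribute (e.g. gender)

# Historical labels: past recruiters favoured group 1 regardless of skill.
hired = ((skill + 1.5 * group + rng.normal(scale=0.5, size=n)) > 1.0).astype(int)

model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

# The learned weight on the protected attribute is large and positive:
# the model has encoded the historical bias, not just the skill signal.
print("coefficients [skill, group]:", model.coef_[0])
```

Even though skill is the only legitimate signal here, the model puts substantial weight on the protected attribute, because that is what the historical labels reward. This is precisely the kind of pattern human reviewers need to catch.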
Elon Musk has previously warned that AI could be more dangerous than nuclear weapons, noting that it can behave unpredictably. That may sound a little extreme, but it raises an important point: AI systems make decisions by matching patterns in data, not through anything like human intuition. AI can certainly be used in healthcare, but if it wrongly diagnoses a patient, the repercussions could be fatal.
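A small sketch illustrates the difference between pattern-matching and intuition. The toy model below is fitted only on plausible body-temperature readings; handed an input no clinician would take at face value, it still returns a near-certain answer rather than signalling doubt (the scenario and numbers are invented for illustration):

```python
# Minimal sketch (toy data): a model fitted on a narrow range of inputs
# still gives a confident answer far outside anything it has seen.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
temp = rng.uniform(36.0, 40.0, size=500).reshape(-1, 1)  # body temp, deg C
fever = (temp[:, 0] > 38.0).astype(int)

model = LogisticRegression().fit(temp, fever)

# A physically implausible reading (e.g. a sensor fault) gets a
# near-certain "fever" verdict instead of any signal of uncertainty.
print(model.predict_proba([[55.0]]))  # approximately [[0., 1.]]
```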
So yes, AI can help in the field of mental health, but it is not meant to replace human touch and connection. Woebot, an AI therapy bot, is a good example: it can help people with mild symptoms, but it falls short for those with severe cases, since it was never meant to replace human therapists. A study by the American Psychological Association reached the same conclusion: AI-based mental health tools cannot replace human therapists for serious cases.
Although AI is very helpful, users should be careful with it: it is important to understand what AI can and cannot do when you are speaking to it. Transparency and explainability in AI should help users employ machine learning in an informed manner, guided by human discretion.
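As a self-contained sketch of what explainability can look like at its simplest: for a linear model, multiplying each coefficient by the corresponding feature value gives a per-feature contribution that a person can inspect before trusting a prediction. The feature names below are hypothetical:

```python
# Minimal sketch: per-feature contributions as a simple explanation of a
# single prediction from a linear model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 3))
y = (X @ np.array([2.0, -1.0, 0.0]) + rng.normal(scale=0.5, size=500) > 0).astype(int)

names = ["age", "dosage", "noise"]  # hypothetical feature names
model = LogisticRegression().fit(X, y)

x = X[0]
for name, coef, value in zip(names, model.coef_[0], x):
    print(f"{name:>6}: contribution {coef * value:+.2f}")
print("prediction:", model.predict([x])[0])
```

An explanation like this lets a human check whether the model is leaning on a sensible feature or on noise before acting on its output.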