Google is one of the pioneers in the use of artificial intelligence. The company recently announced that it is using AI techniques to analyze users’ searches and detect whether they are in a personal crisis.
To do this, Google has developed machine learning models. The company’s latest model, called MUM, was introduced at its I/O conference last year. Google has now integrated MUM into its search engine to provide a deeper level of analysis.
MUM has to determine whether a search may be coming from someone in crisis. The company says the goal is to “more accurately detect a wider range of personal crisis searches.” Anne Merritt, a Google product manager for health and information quality, says that conventional search tools cannot do this.
“MUM is able to help us understand longer or more complex queries like ‘why did he attack me when I said I don’t love him,’” Merritt told The Verge. “It may be obvious to humans that this query is about domestic violence, but long, natural-language queries like these are difficult for our systems to understand without advanced AI.”
Google uses AI in Search to detect if a user is going through a crisis
Suicide-related queries are an important area of work for MUM. Previously, searches for terms like “most common ways suicide is completed” or “Sydney suicide hot spots” were interpreted as simple information seeking.
Now, however, MUM can detect such queries and show users a “Help is available” box, which typically contains phone numbers and websites for mental health charities such as Samaritans.
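To make the mechanism concrete, here is a minimal sketch of how a query classifier could gate a “Help is available” panel. This is not Google’s MUM pipeline; it stands in a public zero-shot classification model (facebook/bart-large-mnli via the Hugging Face transformers pipeline), and the candidate labels, threshold, and help text are purely illustrative assumptions.

# Illustrative sketch only: a generic zero-shot classifier standing in
# for the crisis-detection step described above. The labels, threshold
# and help text are hypothetical, not taken from Google.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

CANDIDATE_LABELS = ["personal crisis", "general information seeking"]
CRISIS_THRESHOLD = 0.75  # hypothetical cutoff

HELP_PANEL = ("Help is available.\n"
              "Samaritans (UK): 116 123 | https://www.samaritans.org")

def handle_query(query: str) -> str:
    # Score the query against both labels and surface the help panel
    # only when "personal crisis" wins with high confidence.
    result = classifier(query, CANDIDATE_LABELS)
    top_label, top_score = result["labels"][0], result["scores"][0]
    if top_label == "personal crisis" and top_score >= CRISIS_THRESHOLD:
        return HELP_PANEL
    return f"Running a standard search for: {query!r}"

print(handle_query("why did he attack me when I said I don't love him"))
print(handle_query("weather in Sydney this weekend"))

The point of the sketch is the gating logic: the classifier’s score decides whether the results page is augmented with crisis resources, while ordinary informational queries fall through to normal search handling.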
In addition to helping prevent suicide and other risky situations, Google has taken steps to reduce access to unwanted explicit content. The company uses an older AI language model, BERT, to detect searches looking for explicit content such as pornography. Google says that using BERT has reduced unexpected, shocking results by 30% year on year.
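As a rough illustration of what a BERT-based query filter looks like at the code level, the following sketch attaches a binary classification head to the public bert-base-uncased checkpoint. It is not Google’s SafeSearch system; the head here is untrained, so its scores are meaningless until the model is fine-tuned on labeled query data, and the label meanings are assumptions made for the example.

# Illustrative sketch only: the rough shape of a BERT-based binary
# query classifier, not Google's production system.
import torch
from transformers import AutoTokenizer, BertForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# num_labels=2: hypothetically, 0 = non-explicit intent, 1 = explicit intent.
model = BertForSequenceClassification.from_pretrained("bert-base-uncased",
                                                      num_labels=2)
model.eval()

def explicit_intent_score(query: str) -> float:
    # Returns the model's probability for the "explicit intent" class.
    # With an untrained head this is effectively random; fine-tuning on
    # labeled queries is what would make the score meaningful.
    inputs = tokenizer(query, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()

print(explicit_intent_score("example search query"))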
The use of artificial intelligence across Google’s products has grown significantly. A few days ago, Google announced that it had blocked 100 million abusive edits in Maps thanks to AI.
Of course, using AI can be a double-edged sword: alongside its many benefits, there are risks such as introducing bias and misinformation into search results.