When was the last time you received an email from a colleague or supervisor that worried you? Many people hesitate to report such messages to HR, or to anyone at all. Before the #MeToo movement took off around the globe, many women said their in-house HR departments had been unable to ease their discomfort about sexual harassment and bullying in the workplace.
Now, Artificial Intelligence programmers at Chicago-based NexLP are developing #MeTooBots that will scan official communications for inappropriate language, a technology its makers hope will be adopted globally. So, how would the Indian corporate scene respond?
What experts say
Mumbai-based Beerud Sheth, co-founder of Gupshup and founder of Elance, is credited with ushering in a new wave of India’s bot revolution. Even he acknowledges that such a technology cannot be implemented, let alone be effective, overnight, but he believes that AI will gradually become better at handling many sensitive issues. This space, he explains, requires training for both humans and the AI; any issue the bot cannot handle, it can escalate to a human. “Over time, it can take on more responsibility,” he says. “Also, bots can harness vast computing power to do a first review of millions of emails that is not humanly possible.”
Much of what a #MeTooBot will have to do will rely on data, adds Jaspreet Bindra, a consultant in AI, cryptocurrency, and digital transformation. “AI accuracy ultimately depends on the accuracy and amount of data that you train it on. So, a #MeTooBot’s accuracy will entirely depend on the datasets it is trained on.”
- According to an annual report by Complykaro Services, Indian companies reported more cases of sexual harassment in financial year 2019 than a year earlier. Data from BSE 100 companies, which are required to furnish this information, showed a 14% increase in sexual harassment complaints in 2019.
To dig a little deeper, let us look at language: the connotations of sexual harassment and bullying can be fairly ambiguous in a medium like email. Jaspreet continues, “In my opinion, the bot will have huge problems picking up the nuances — for example, sarcasm. Take the example of the giant social networks — Facebook has had to hire tens of thousands of people, along with their AI, to try to weed out porn, racism etc from the posts that people put up. And we know that they are still not being effective enough. Pictures are another problem — if you think text is a problem, image recognition of #MeToo content will be an even bigger one. And then there is video!”
According to a January 3 article by Isabel Woodford in The Guardian, implementation will take some time. But the culture gap is where the real challenges lie; bots need to learn the nuances of language, and that does not happen as quickly as we might assume. “MNCs, in my experience, will do what their global HQs direct them to. The challenges in the Indian market will be common to any other — with the added complication of language and culture. What might seem inoffensive in the US might be regarded as offensive in India and vice versa,” explains Jaspreet, who has an extensive background in working with major MNCs in India.
Beerud says there are four factors worth deliberating before implementing a #MeTooBot in the Indian corporate market. The first is that any such tool will have false positives and false negatives, and humans will therefore have to supervise and train it over time. Second, for interactive chatbots, a mismatch between the bot’s abilities and users’ expectations is a challenge; the bot must communicate clearly what it can and cannot do. Third, user adoption and trust will take time to build, as with any new product. Finally, there are significant technical challenges in getting natural language processing (NLP) and AI working, and in encoding written and unwritten social rules into the bot.
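The false-positive and false-negative problem Beerud describes can be seen even in a toy first-pass filter. The sketch below is purely illustrative — the word list, scoring, and function names are assumptions for this example, not NexLP’s actual method, which would rely on trained NLP models rather than keyword matching:

```python
# Hypothetical sketch of a "first review" step: a bot scans emails and
# escalates anything it flags to a human reviewer. The vocabulary and
# threshold here are illustrative placeholders, not a real product's logic.

FLAGGED_TERMS = {"hot", "date", "alone"}  # placeholder vocabulary

def first_pass_review(email_text, threshold=2):
    """Return True if the email should be escalated to a human reviewer."""
    words = email_text.lower().split()
    score = sum(1 for w in words if w.strip(".,!?;") in FLAGGED_TERMS)
    return score >= threshold

# A naive filter like this produces both false positives and false
# negatives -- hence the human supervision-and-training loop the
# article describes.
emails = [
    "Meet me alone after work, you looked hot today",        # flagged, plausibly correctly
    "The server room runs hot, check the date on the logs",  # flagged, a false positive
    "Sarcastic or coded language slips straight through",    # missed, a false negative
]
escalated = [e for e in emails if first_pass_review(e)]
```

Note how the second email trips the filter on innocuous technical language, while the third sails through — exactly the nuance gap Jaspreet points to with sarcasm.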
Ethical or not?
Is there a concern around privacy? On one hand, a January 3 op-ed in RT, titled ‘#MeTooBots that will scan your personal emails for ‘harassment’ are an Orwellian misuse of AI’, argues: “Instead, it is an attempt to harness science to support the Culture War, to transform it into an all-encompassing presence in constant need of monitoring and scrutiny. This doesn’t just threaten privacy, but the legitimacy of AI.”
On the other hand, Beerud believes automated monitoring of business emails may be acceptable, since employees are expected to follow certain guidelines for business communication. He adds, “This is no different from CCTV cameras etc. However, it is still critical to define privacy policies, to strictly adhere to them and to communicate it to all users.” Jaspreet adds that Google, in a way, ‘reads’ your email anyway. “No humans (at least that is what we know) read your emails but machines do, and that is why they generate relevant, targeted advertising. Machines (and now people) listen in to Alexa and Google Home. So, this is not a problem unique to this situation. Except that here, the employees will know who to complain to, which is their company HR, rather than a giant, faceless corporation. So definitely, there will be outrage of some kind,” he concludes.
However, HR managers do say that leaving it to bots, however helpful they may be, cannot be the only solution. While 2020 could be a year of more groundbreaking technology, nothing is more effective than speaking up and ensuring the offline workplace ecosystem is held accountable.