Which is more effective, a team of all-stars or an all-star team?
Various studies suggest that the latter is likely to be more effective in the long run. Research from Carnegie Mellon, MIT, and Union College identified two behaviors that successful teams share. First, equality in the distribution of input: everyone has a chance to contribute their opinions, and when only one or two members dominate the conversation, performance declines. Second, behaviors related to emotional intelligence, or EQ, including sensitivity to non-verbal cues among team members, psychological safety, and trust.
Most high-performing teams have these two behaviors in abundance.
AI now performs more and more of the repetitive tasks once done by humans, and increasingly delivers advanced services, such as insight and answers, that humans can no longer provide alone given the volume of data to be analyzed and the speed of modern computing. So one has to ask: Can AI become an effective team member? Isn't it time for AI to become part of the team?
The Dream Team: Human-AI Partnership
Whether we recognize it or not, AI is ubiquitous in our lives, from voice-enabled assistants, advanced search software, and self-driving cars to so-called "narrow AI," in which a learning algorithm is designed to perform a single task such as automated claims management or automated contract markup. If AI is here to stay and will play such an important role in our future, how can a high-performing team be built with the necessary characteristics, such as equal input and emotional intelligence, when a machine is part of the team?
A recent article in the Wall Street Journal discussed this very topic, starting from the premise that research has shown humans and AI working together often perform better than either alone. Echoing the earlier research on teams, the author, Kartik Hosanagar, posits two fundamental questions that must be answered if the human-AI team is to be effective: a) who decides who does what (equality in the distribution of input), and b) how is trust engendered (EQ and psychological safety)?
The better the "team" can assess which is more appropriate for a given task, the human or the machine, the more likely it is to reach a positive and accurate outcome. The conclusions are perhaps unsurprising. Humans are naturally wary of technology they barely understand, and in many cases they are poor judges of their own limitations, deciding badly whether a task should be done by themselves or by the machine. Algorithms are more dispassionate: when they encounter something they cannot decipher, they pass it back to the human to adjudicate. Getting comfortable with handing work to the algorithm, rather than doing it oneself or giving it to another human team member, is part of the process and leads to better outcomes. On the second point, humans find it difficult to extend the concept of trust to a machine or algorithm; when they do, outcomes improve significantly.
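One common way to operationalize this division of labor is confidence-thresholded routing: the algorithm acts only when its confidence clears a set bar, and otherwise escalates to a human. The sketch below is a minimal illustration of that general pattern; the classify() stub, the threshold value, and all names are hypothetical assumptions for illustration, not BlackBoiler's actual implementation.

```python
# Minimal sketch of confidence-based routing between an algorithm and a
# human reviewer. The classify() stub and the 0.90 threshold are
# illustrative assumptions, not any specific product's API.

from dataclasses import dataclass

@dataclass
class Prediction:
    label: str
    confidence: float  # model's estimated probability, 0.0 to 1.0

def classify(clause: str) -> Prediction:
    """Stand-in for a trained model; a real system would call a classifier."""
    return Prediction(label="acceptable", confidence=0.62)

CONFIDENCE_THRESHOLD = 0.90  # below this, the machine defers to the human

def route(clause: str) -> str:
    """Let the algorithm act only when confident; otherwise escalate."""
    pred = classify(clause)
    if pred.confidence >= CONFIDENCE_THRESHOLD:
        return f"auto-markup: {pred.label}"
    return "escalate: human review required"

print(route("Vendor shall indemnify Customer for all claims..."))
```

The design choice worth noting is that the machine, not the human, decides when to defer, which matches the observation above that algorithms are the more dispassionate judges of their own uncertainty.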
The popularization of the singularity concept has not been especially helpful in endearing humans to AI, and the fear of AI taking over and extinguishing humanity is alarmist. The more we embrace AI as a partner, rather than eyeing it suspiciously as some sort of Brutus with a hidden agenda, the more comfortable we will become with a new non-human team member: one that adds to the star team rather than being a star performer to be feared.
This thinking has been the foundation for the development of BlackBoiler's automated contract markup technology. We started with the premise that lawyers will always want control of the process of reviewing and revising contracts, but that AI can be a partner that drives better outcomes in the negotiation phase of the contract life cycle. The latest release focuses on exactly this duality, deciding which tasks go to the human and which to the machine, while ensuring lawyers and contract negotiators remain in control. At BlackBoiler we have long strived to deliver "time to value": how quickly you can get a return on your investment in advanced AI technology like BlackBoiler. But there is an equally important goal, "time to trust": how soon the human team members can really trust the machine and its algorithms. BlackBoiler aims to reduce time to trust as much as possible, ensuring a harmonious partnership between man and machine in every contract negotiation and, in turn, a faster time to value.
Embrace your AI platform and make it your favorite co-worker!