Okay, so initially when I sat down to write this I was focused only on the people-to-people skills that will be critical for us to develop in order to collaborate effectively with others. But then it dawned on me… collaborating with others in Industry 4.0 goes beyond person-to-person. It is now also person-to-humanoid, person-to-robot.
Collaborating with others now means establishing interpersonal relationships not only with our human colleagues, but also with what were, and by the Merriam-Webster definition still largely are, inanimate objects.
This got me thinking about what really underlies our ability to form relationships with each other, and how this might transfer between humans and robots. You see, in general, collaboration skills involve an artistic interplay of intrapersonal and interpersonal skills. These skills are expertly woven together during communication and directed towards achieving the team’s aims.
As a team, then, if we want to develop our collaboration skills, we first need to focus on understanding ourselves and developing our self-awareness. To be able to collaborate with others means that I understand my strengths, that I work hard to achieve a growth mindset, and of course that I adopt an “I’m okay, you’re okay” approach, yes, good old transactional analysis. Without these we replace collaboration with nasty competitiveness.
In addition to self-knowledge or personal mastery, I also need to know my team members. I need to know what they are good at, but most of all I need to trust them, and trust that we are working towards a common goal within well-defined norms to achieve our purpose.
That brings me to the second key element of fostering collaboration: developing a deep understanding of our why. Why do we exist as a team? What is our reason for being? When I was still working for a global management consulting firm I was blessed to work with many great leaders, but one in particular stands out for me. Without fail, at the start of each new project, each project team would have to define and present back to the Steerco their reason for being: why did we exist, and how would our existence benefit the project overall? This was such a simple yet effective technique to ensure that each team was aligned as a unit, and that each unit understood how it contributed to the overall project aims.
Thirdly, once we are aligned and understand our why, as a team we can move on to clearly defining how we will achieve it. And no, I’m not just talking about developing an action plan. The how I am talking about here goes deeper than a list of actions; it is the nucleus of high-performing teams. It relates to establishing clear norms or values to govern the way in which we achieve our aims. An environment that wants to foster collaboration has, by default, to foster open and honest communication and adopt a zero-tolerance policy towards fear of failure. High-performing teams understand that in order to achieve high performance and drive innovation, members of the team need to participate in a free-flowing exchange of ideas, and allow themselves the space to be creative and diverse in their thinking.
The above describes how we foster collaboration between us humans. The question is: does this change when we have to foster collaboration between humans and bots, and if so, how?
When I discussed this with others, and by this I mean dinner conversation… no in-depth research… they responded by saying that, for them, establishing a relationship with a robot would be similar to developing a relationship with an animal. But animals are sentient; does this then mean that we would perceive bots as sentient beings? It seems I am not alone in this thinking. There is a major drive to develop bots that mimic sentience, and this is apparent from the latest research trends in robotics, Social Robotics to be exact.
The field of Social Robotics is fascinating; its area of inquiry centres on how we develop artificial empathy, not just artificial intelligence. I am currently reading a fascinating book by Dumouchel and Damiano titled “Living with Robots,” in which they explore the sociological aspects of living with robots and some of the big questions driving researchers in this field.
One of these questions is how we make robots more human: how do we build them to function as social agents? This in turn is making us re-examine and redefine what it is that actually makes us human, and how we develop relationships with one another. A key question we need to answer is how we will treat these social agents in future. Will we perceive them as inanimate, and thereby treat them as slaves, built solely to tend to our needs? Or will we move towards treating them as sentient, in which case, what rights will they have?
And the way in which we treat them, what does that say about us? What is the psychology that underlies it? What possible pathologies will it give rise to?
So many new questions…what an awesome time to be alive!