Those Who Do, Need To Explain
It’s no longer enough to make great technology: Customers, partners and regulators want to know why you made your choices.
There is a little-noticed talent that’s critical for success in a tech-centric world, up there with being a great programmer, a master strategist, or even an innovative entrepreneur.
It’s being good at explaining stuff.
Welcome to a prime skill of the future. Explaining how and why something functions has always been a high-value pursuit. At the top level, it’s the stuff of clear mission statements, great speeches, and effective selling. Defining something effectively, in this sense, establishes a kind of ownership of it, and can stir thousands to action.
It’s why Steve Jobs, among many other leaders, would spend months on a “mere” product presentation.
That capability is now also urgently needed in the engine rooms of business, where cloud computing, artificial intelligence, and an explosion of data are reshaping how we work. These new technologies can make things happen at an accelerated rate, and touch an increasing number of areas in life.
Putting these technologies into rapid use, then telling people how they work and why they did what they did, is critical. In fact, it’s already a big part of Information Technology.
Features like fast, accurate answers to questions, easy navigation, and clean, organized web pages all inherently reflect an understanding of both user needs and product capabilities.
More important is what AI practitioners call “explainability.” That means sorting out what an algorithm did, what data it used, and why it reached certain conclusions. If, say, a machine-learning model makes business decisions, those decisions need to be annotated and presented effectively.
Explainability matters today so business leaders can understand why their systems do what they do. It will matter even more as AI becomes commonplace in law and other regulated activities.
In these cases, it will be incumbent on AI specialists to show that their data is free of bias and that the outcomes their programs reach are consistent — an interesting challenge for things like deep learning, where many layers of analysis and differing approaches can affect the outcome.
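What “annotating decisions” can look like in practice is easy to sketch. Here is a minimal, hypothetical illustration (the function name, rules, and thresholds are all invented for this example, not drawn from any real system): instead of returning a bare decision, the program also returns a trace of which inputs it consulted and which rule fired.

```python
# A sketch of decision annotation: every automated decision carries
# an explanation of the inputs used and the rule that produced it.
# The rules and thresholds below are purely illustrative.

def approve_loan(applicant):
    """Return (decision, explanation) rather than a bare decision."""
    trace = {"inputs_used": [], "rules_fired": []}

    trace["inputs_used"].append("credit_score")
    if applicant["credit_score"] < 600:
        trace["rules_fired"].append("credit_score below 600 -> deny")
        return "deny", trace

    trace["inputs_used"].append("debt_ratio")
    if applicant["debt_ratio"] > 0.4:
        trace["rules_fired"].append("debt_ratio above 0.4 -> deny")
        return "deny", trace

    trace["rules_fired"].append("all checks passed -> approve")
    return "approve", trace

decision, why = approve_loan({"credit_score": 640, "debt_ratio": 0.55})
print(decision)            # deny
print(why["rules_fired"])  # ['debt_ratio above 0.4 -> deny']
```

A hand-written rule set like this is trivially explainable; the hard version of the same problem is producing an equally honest trace for a deep-learning model, where no single rule ever “fires.”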
In a conversation I had last year with AI researcher and professor Nigel Shadbolt, he talked of a future need for algorithmic accountants and data accountants: people who worry about the nature and origin of the data sets.
Elsewhere in the corporation, the increased collaboration made possible by cloud-based systems — across departments and with external partners alike — will put a growing emphasis on well-understood roles and identities, so people can move swiftly and with certainty.
For many people, there is likely to be a period of change and learning. Successful teams, along with more specialized agile units in areas like coding, accelerate progress, but documenting what makes them effective is now often, at best, an afterthought. It may need to become part of the work itself.
Whatever the challenges, there is much to like about the explaining revolution. For one thing, AI that is well-examined and understood often surfaces data biases that arose among the humans the algorithm was aping (the recent story of an AI hiring program that ruled out female engineering candidates is a good example). Departments that can explain themselves to other parts of the company will likely have better outcomes, since they’ll be better understood. In turn, they can help the company explain itself to customers, and vice versa.
Fulfilling that need — to be better understood, on all sides — is a high-value activity, whatever technology is at hand.
This originally appeared, in a lightly altered form, in a monthly email newsletter I publish. Interested readers may subscribe here.