The longstanding mantra of Silicon Valley is no more. "Move fast and break things" served a purpose at a time when innovation was about getting a product into the hands of the consumer as quickly as possible. But in the eyes of many, it came with little consideration of the socio-economic or ethical impact of that innovation.
The world we innovate in has now changed – and whether we’re talking about a US tech giant or a local high-growth start-up, workers are voting with their feet and leaving jobs they feel could have negative consequences for people and society. At least, that's according to a report by Doteveryone released this week on the attitudes of the people who design and build digital technologies in the UK.
The belief that technology can do good, both for an individual and for society, comes across powerfully in the research, with 90% indicating that tech has benefited them as an individual and 81% believing it has benefited society as a whole. But this belief exists in conflict with the reality that big tech has been too irresponsible for too long.
The findings around AI are particularly shocking. An incredible 59% of people working in AI have worked on products they felt might be harmful to society, and more than a quarter (27%) of those quit their jobs as a result.
In the context of the report’s findings, UK tech workers are asking for more government regulation, more responsible leadership and more guidance and skills to help navigate the new dilemmas they are facing.
These changes need to move beyond voluntary codes of practice or advisory boards. Workers are demanding practical guidelines to help them navigate this new world, such as the responsible innovation standard being developed by the BSI. Accenture’s Ethical Framework for AI, launched earlier this year, outlines further steps in the right direction – including practical guidance on how government, business, academia and society can collaborate to advance AI in an ethical way.
To that end, organisations need to foster public discussion on the ethical principles for AI and tech innovation. That means having open discussions with employees, investors, analysts and the media about the ethics and morals behind the tech they are building, and practising what they preach when it comes to transparency and innovation.
Some organisations are already doing this. Take OpenAI, which is looking to build ultra-powerful artificial intelligence but “not in a bad way”. When it developed language processing software that had the potential to be used maliciously (for example, to generate fake news), the company let media outlets try the software but kept the full specifications private. Some researchers fear this puts a negative spin on AI; others laud the organisation for its transparent external approach to the development of its technology.
What’s clear in all of this is that workers in tech are demanding change, to the extent that they are prepared to leave when they see harm done. And in a world where every tech worker who leaves costs a company £30,000, the financial ramifications of not sitting up and taking note are substantial.
We've carried out our own research into values in a time of high-stakes comms. We’re launching the UK findings at an event this week, hearing from the likes of KPMG, Schillings and TransferWise about their approach to crises and external comms at a time when values and ethics are so critical. Please RSVP to Maira if you’re keen to attend!
Innovation and ethics are usually seen as two separate things, often in opposition to one another. By its nature, innovation requires people to do new things in ways that haven’t been seen before. And tech workers need ways to navigate this. They may be pushing boundaries, but they still need to know where to draw the line: what is acceptable and what is not. These findings show that there needs to be a new way of seeing, and doing, innovation — one that puts ethics and responsibility at its heart.