AI expert Catriona Campbell says that we need to plan now for our future living with artificial intelligence, because it will come regardless and if mismanaged will be a catastrophe rather than a blessing.
By Catriona Campbell
From Facebook’s news feed algorithm to Google Search, Netflix recommendations and possibly even the management of your pension fund, we use AI-driven software and apps every day, whether consciously or not.
However, like any technology, AI can be misused or have unintended consequences. Many of the world’s top scientists and technologists agree that AI could be highly dangerous unless properly managed, a concern that culminated in a 2015 open letter signed by thousands of business leaders and scientists, including Stephen Hawking and Elon Musk.
Many technologists believe that in 25-50 years, AI will match, and then exceed, human intelligence. What does this mean for us?
Unfortunately, outside a core group of experts, we struggle to answer that question coherently. Even in the business world, and especially in government, few have a firm grasp of AI, even though it has been around far longer than social media. So why don’t people understand AI and its possible problems? The first issue is that AI is hugely complex. Because AI attempts to replicate human conscious thought and action, it needs to be highly sophisticated.
Creating sophisticated AI can require a deep understanding of speech, vision, programming, robotics, machine learning, psychology, neuroscience, philosophy and ethics. Very few people have expertise in more than one of these areas, and fewer still understand how these disciplines connect.
Second, humans are not wired to deal with long-term consequences. ‘Hyperbolic discounting’ is the economic theory that people opt for smaller short-term rewards over larger rewards further down the line. This cognitive bias helps explain why people make poor long-term life decisions: some consequences feel too far away, and today’s problems are always more pressing.
However, some dangers are already here, and I am not talking about Terminators taking control of defence systems. AI technology is already transforming technology companies, defence, cyber-security, healthcare, financial services and more.
Facial recognition software can scan your face, check global databases and return results in seconds. Six of the top US investment funds are run by AI. Google found that its recruitment systems were rejecting candidates on the basis of gender, a bias inherited from the data the company had itself fed into the system. Terrorist hacking groups and rogue states attack their enemies with AI tools to spread fear and fund their operations.
Planning To Live With AI
AI is changing how we live, work and play and cannot be left unchecked. The US and China are investing as much in AI as the rest of the world put together. Europe needs to recognise the challenges and create a long-term plan to manage AI and to compete globally.
After 25 years working in technology, I have extensive experience designing systems, software and products for banks, tech companies, electronics and entertainment brands. I have applied those lessons in my new book, AI by Design: A Plan For Living With Artificial Intelligence, to create a practical roadmap of what companies and governments can do to prepare for a future with artificial general intelligence (AGI).
Using a ‘Future Back’ model, I created five potential scenarios before settling on the most likely one to build a plan around. The first scenario, ‘Keep Calm and Carry On’, is one where we let things happen and become controlled by interest groups and big tech companies. My problem with this outcome is that it isn’t realistic: we already regulate and design for AI, so I can’t see us stopping.
The most controversial scenario I investigated is ‘Are We Living in a Simulation?’ There is a small scientific movement that believes we are in a simulation, like The Sims but for real. Although it is an interesting concept, the evidence is too slight (i.e. none!) to take this option seriously.
Similarly, the idea of ‘Escaping AI for Mars’ doesn’t have the legs to be considered. Spending on commercial spaceflight to Mars is already running to tens of billions of dollars, and while such missions will happen, AI will be needed to support any life on Mars, so running from AI won’t be possible.
The fourth scenario, ‘The New Luddites’, is a throwback to reactions against technological disruption—where small communities decide to isolate themselves from AI and live outside the mainstream. I think this is feasible and will likely happen somewhere. However, these groups will be so small as not to be significant, and we can’t base a global plan on outliers.
The scenario I find most compelling is the ‘Urge to Merge’, where we physically merge with AI. We already have our phones within arm’s reach for over 20 hours a day, so we are partly down this road already, and I believe it is how we should approach living with AI. Essentially, if we can’t beat them, join them.
Once we accept a shared future with AI, we can shape practical steps to make it work. First, we have to improve as policymakers and develop technical knowledge in business. We need to stop writing guidelines and start agreeing on actions. We have to start controlling AI through risk management and the auditing of AI systems and their algorithms.
We must also make AI fairer by removing bias from AI data sets. We must legislate for merging with AI and consider how it can happen. The dangers of lethal autonomous weapons (LAWs) must be managed. We need robust models of control: we should consider how the world managed nuclear proliferation and ask whether that model would suit AI. We have to legislate to create virtuous circles in which we work with companies and countries that use AI for good. Educating the next generation is vital; they need to be aware of the coming change.
Europe has one of the world’s biggest economic markets, is largely made up of liberal democracies, has outstanding academic talent and shares a history of technological innovation. AI is an opportunity that must be capitalised on: used wisely, it will support democracy and improve quality of life. Misused, manipulated or unplanned, AI could lead to chaos and become a dangerous weapon in the hands of those who would use it for evil.
Q&A Interview With Catriona Campbell
AI expert Catriona Campbell, author of AI by Design: A Plan For Living With Artificial Intelligence, shares further insights into our AI future and what we should expect and plan for.
Q. How can we protect ourselves from losing our jobs to AI?
A. During any technology shift there are job losses; this happened during the Industrial Revolution and more recently from the 1980s, when coal mining and related industries were downsized. At the same time, though, new jobs and industries are created that generally replace those lost. What I think will be different this time is that AI will displace far more jobs, because it will have the capability to undertake more complex work. For example, AI could replace traditional graduate roles such as accountants or lawyers, as well as customer service roles. Our children will need to move into creative roles, entertainment industries, or jobs that software can’t replace, such as plumbers or electricians. Governments need to recognise these future shifts and start educating and training the workforce to manage the transition.
Q. Your book talks about Big Tech companies becoming ‘Megatech’ companies thanks to AI advances, with potentially greater wealth and influence than national states. What are the implications of this on us all?
A. Big Tech companies already manage how we communicate, run our social media, help us shop online and produce most of our personal software. Because they have a head start in AI expertise and investment, Big Tech companies like Google are already in leading positions. As AI allows them to improve their services beyond competitors’ capability to match, with the added financial benefits of operating globally, they are becoming titans in their fields, or Megatechs. The challenge of Megatech companies is that they become wealthier and more influential than the countries in which they operate: too big to control, with the potential to become runaway trains. Historically, monopoly has never been good for the consumer, and we must continue to balance the needs of the economy against those of individual companies.
Q. You also discuss how mankind and machines will merge in the AI future. Does this mean that we could all become cyborg superhumans?
A. I think we’re already merging with technology. We carry our mobile phones with us, sleep next to them and get stressed when they’re not within arm’s reach. It is not too much of a stretch to permanently graft a device into your arm or head. ‘Biohackers’ are already editing themselves, implanting NFC chips for contactless payments or tweaking their DNA with CRISPR gene-editing tools, and I think over the next few decades we’ll move from mainstream cosmetic surgery to structural surgery.
Q. How will AI change the face of warfare?
A. Warfare relies on technology, so having the best AI is the next arms race. Political leaders are always under pressure to reduce casualties, so, ultimately, robots will replace human soldiers, pilots and front-line troops. It’s happening already: drone technology and hypersonic missiles are powered by AI, and South Korea has already deployed autonomous robot guns on the North Korean border. The complexity of fighting ground battles means robot soldiers are a long way off, but super-soldiers augmented with small pieces of AI technology will move onto battlefields in the next 10-15 years. My biggest personal worry is that uncontrolled AI weapons could run amok on battlefields or, worse, in civil wars, where the generals deploying AI weapons don’t care if innocents are killed.
Q. Compared to other nations, how well is the UK doing in the development and integration of AI?
A. On the AI report card, the UK is ‘Doing OK … could do better’. We are not in the same league as the US and China, but within Europe the UK is probably the leader in developing and scaling AI technologies. Private investment is driving a leading start-up environment, predominantly in London but with successes across the country. The UK government is being supportive financially, and that should be acknowledged. One area it could improve is providing more AI contracts to UK companies to help them grow and scale; Silicon Valley grew up on the back of the US defence industry. The UK government must also help develop UK talent and its economy by protecting companies from being bought by US and Chinese competitors with bigger war chests. We cannot grow a long-term industry if our crown jewels are sold off. Education-wise, more must be done to help schoolchildren understand AI and to direct them towards either better university choices or more practical careers that will still exist in 20 years. This is an area where we are very weak. We also need to legislate to protect people against the unintended consequences of poor AI software and algorithms, for example in facial recognition, recruitment or even mortgage applications.
AI by Design: A Plan For Living With Artificial Intelligence by Catriona Campbell is published by Chapman and Hall/CRC and is out now on Amazon in paperback, hardcover and eBook, priced at £22.99, £56.99 and £14.55 respectively. For more information, visit www.catrionacampbell.com.