By Steve Durbin, Chief Executive, Information Security Forum
AI holds immense potential to enhance every aspect of a business, from driving innovation to uncovering hidden business insights, and from improving staff productivity to increasing operational efficiency. At the same time, AI also poses numerous risks for businesses.
For example, AI systems are trained on large datasets; if those datasets are inherently biased, then the outputs and decisions of those systems will be biased too. AI also boosts the ability of threat actors to target and automate their attacks, and as they learn to harness and manipulate these technologies, organisations may find themselves overwhelmed and underprotected against cyberattacks. There are also legal and compliance consequences to consider. If AI systems violate individuals' privacy, discriminate against them or treat them unfairly in any way, organisations may face fines, penalties and other legal ramifications, not to mention the loss of customer trust, damage to business reputation and long-term harm to brand value.
Bridging the AI knowledge gap
There’s no doubt that AI is a “hot topic” in boardroom discussions around the world. Yet nearly three-quarters of board members still have minimal knowledge of or experience with AI technology, a clear sign that AI literacy urgently needs to be elevated in the boardroom. Beyond that bare minimum of AI literacy, there are other best practices board members should consider adopting:
- Embracing AI as Part of Corporate Strategy
AI should not be viewed merely as a technology or tool that improves efficiency or reduces employee workloads. Board members must evaluate AI from a business and strategy perspective – understanding the business goals, the value it will create, the associated costs, the resource requirements, and the limitations and risks before setting off on the AI journey.
- Building Accountability and Oversight Structures
Whether AI oversight is a responsibility for the full board of directors or can be delegated to the audit committee remains unresolved, in part because some discussions may not be relevant to the entire board. What is crucial is establishing accountability, oversight and reporting structures so that board members have the information and control required to make informed decisions and manage AI risks effectively.
- Establishing Security as a Core Pillar of the AI Foundation
Security should not be an afterthought – AI systems must be secure by design. That means embedding security measures such as access controls, encryption and regular security audits throughout the entire development lifecycle. It also means tackling ethical and privacy concerns, implementing bias-detection measures, ensuring AI outputs are traceable, transparent and accurate, and having a diverse, inclusive team oversee all aspects of AI operations.
- Fostering a Culture of Transparency and Collaboration
About 41% of American workers fear that AI might take away their jobs. It’s crucial to recognise these emotions and engage the workforce to alleviate anxiety and fear. Maintaining open communication and dialogue, providing learning and training resources, and extending care and support are all ways in which leadership can foster a culture of trust and make employees feel more empowered to use AI.
The AI revolution is well underway, but it is important not to rush into hasty decisions. By embracing AI as part of business strategy, establishing oversight structures, treating security as foundational and understanding AI’s impact on company culture, organisations can ensure that AI adoption is managed in a secure, ethical and responsible manner.
Further information
linkedin.com/in/stevedurbin