AI Alumni Roundtable Q&A Summary
Welcome
Following the inaugural Alumni Roundtable on the topic "How are CXOs utilising AI to create value, how can AI help create competitive advantage, and what are some of the potential pitfalls to beware of?", please find below questions from the attendees and answers from the panel.
Hosted by David Fenton, CMO, the event featured fabulous representation from David Searle, Karen Laidler, and Tobias Uthe, who shared their thoughts, experiences, and stories.
For more information and access to all of the content available for Youd Andrews Alumni head to:
Alumni Hub | Youd Andrews (youd-andrews.com)
- Regulated industries are also led by operating risk - where does that fit into the AI-driven business value challenge?
Risk management plans are needed to understand and manage both types of risks to maintain the stability, compliance, and efficiency of the business. Where AI is used to enhance operational areas, risks such as security, privacy, business continuity, disaster recovery, etc., must be managed.
AI has the potential to reduce operating risk and, depending on the use case, might introduce new operating risks. The tradeoff will always need to be considered carefully, but as the technology develops, it has great potential to reduce risks and contribute to identifying potential problems earlier and more reliably.
- What are the risks CXOs need to manage around AI?
Both regulatory and operational risks must be managed wherever AI or any tech is used to drive business value. Risks specific to AI include:
- Bias and fairness in algorithms, which can confuse or mislead, and a lack of transparency in how algorithms make decisions
- IP infringement, eg copyright breaches or plagiarism
- Data privacy violations - personal or sensitive information used to train models
- Generation of malicious content
- Security vulnerabilities
- Use of proprietary data from third parties
- Legal and ethical exposure
- ESG impacts – carbon emissions, workforce disruptions
- If not effectively managed, the above can lead to regulatory, legal, reputational and business consequences.
- Operational integration, eg with legacy systems, which can be costly and complex.
- Skills gaps in both regulatory and technical understanding of AI.
- Over-reliance on AI - a balanced approach is needed so that AI complements human decision-making.
- What are CXOs thinking about regarding the implications of AI for present and future employees?
What skills are required to get the best value from AI across the business? CXOs are weighing cost reductions in current job roles against the cost of employing more highly skilled people. Employee morale is also a concern: employees ask what AI means for their role - job loss, or better and more enjoyable roles that deliver higher value to the company and greater personal rewards. AI will change the skillset that is needed. Technical skills such as Excel formulas, programming languages or specialist frameworks may no longer be necessary, while abstract thinking, general technology use and attitude will grow in importance.
From research: "Whether the organisation is using AI to power a new product for clients, to improve sales enablement, to generate social media content, or to gain efficiencies, people will need to adopt new behaviours and demonstrate new capabilities as their daily tasks are impacted. Projects, initiatives, and transformations will be wave upon wave of change dependent on employee adoption. Demand for 'preparing, equipping, and supporting people through change' will increase as organisations explore the changes needed to incorporate AI into operations and experimentation."
- Much is said about establishing a “Data Culture” in a business. Should this be driven programmatically or grow within the company organically as more AI applications are ingested?
Data and its role in achieving business objectives are too important to evolve organically. The organisation's leadership needs to communicate a clear data-driven culture and define a clear data strategy that aligns with the company's goals and outlines specific objectives.
That said, top-down programmes are never fully comprehensive, and depending on the company culture there will always be a mix of both. There is great benefit in letting a data culture grow organically and then restructuring it after the first learning experiences.
- I just had one final question: How are CXOs addressing the ethical and social implications of AI, ML, NLP, and CV?
Establishing guidelines and frameworks, such as AI ethics committees, bias mitigation strategies, data protection policies, and regulation compliance, is essential. This requires clear accountability structures, regular audits, and assessments. It also requires clear communications, training, and education across the organisation and with external stakeholders, such as customers, regulators, and industry groups.
It depends heavily on company culture. For companies using AI, I do not see major ethical or social implications. The technology might change roles and responsibilities but will not replace humans in the near future. As mentioned in the discussions, it is about helping employees grow with the technology.
At a company level, environmental and other impacts related to AI use are also negligible. When it comes to bias, the technology is probably more balanced than most humans.