July 10, 2025

From Risk to Opportunity: Why Boards Need a Clear Stance on AI Now

A recent conversation with a colleague working with a major US tech firm raised an interesting question: why is AI adoption progressing more slowly in Australia? In Silicon Valley, AI is being positioned as a major strategic shift. Here, the approach can appear more measured, with many organisations still navigating how it fits within their broader operations.

One perspective we explored is that AI isn’t yet well integrated into strategic and risk thinking at the leadership level. The result is a cautious tone, where AI can be framed more as a risk to mitigate than an opportunity to shape.

That conversation prompted me to reflect more deeply on what I’ve been observing across the organisations we work with, particularly in the Victorian public sector and among NFPs.

The Cost of Inaction

Doing nothing is no longer an option. This could mean not setting any organisational policy on AI, or alternatively, adopting an overly restrictive policy, such as a blanket prohibition on AI use. In both cases, the likely outcome is the same: staff will continue using AI tools, often via public, freely accessible platforms, without clear organisational guidance and appropriate safeguards. This raises several risks. Sensitive data could be mishandled. Decisions may be influenced by errors or bias embedded in AI outputs. Staff may also choose not to disclose their use of AI in preparing work, which can compromise transparency and reduce the effectiveness of downstream quality checks and assurance processes. Without a framework for responsible use, training, and oversight, these risks compound.

Even for large organisations with dedicated technology and risk functions, getting AI adoption right is not easy. The recent Financial Review AI Summit reinforced this. While the big end of town is beginning to embed AI, the pace remains uneven, and the complexity is real. For smaller or financially constrained public sector and community organisations, this reality underscores the importance of setting direction early, building awareness, and promoting safe experimentation from the outset.

AI in the Risk Space: What We're Seeing

In our work with Audit and Risk Committees and Boards, we’re seeing growing optimism about AI and its potential. Directors are increasingly asking thoughtful questions and raising suggestions, showing genuine interest in the opportunities while weighing the associated risks.

Alongside this curiosity, we still see a patchwork of maturity:

- AI policies: Where AI policies do exist, they are often developed in isolation from broader organisational strategy and objectives. In some cases, they are primarily focused on managing risks, which may limit the ability to identify and pursue opportunities.

- Risk narratives dominate: Data security, bias, compliance, cultural change, and uncertainty about disruption are valid concerns, but they are often where the conversation starts and ends.

- Funding hesitancy: Many organisations cite budget constraints and uncertainty as reasons not to invest.

- Forward-thinkers emerging: Encouragingly, a number of organisations are starting to see AI as a tool to manage enduring funding pressures and improve outcomes.

Importantly, in the Victorian public sector, we are seeing AI emerge in discussions about workforce planning. With proposed reductions in public service staffing and ongoing financial constraints, many are exploring how AI might supplement capability, not as a substitute for people, but as a critical enabler in delivering services under pressure.

AI is a Strategic Risk and Opportunity

AI is not just a tech initiative. It’s a strategic consideration that should be embedded in organisational planning, corporate strategy, and enterprise risk management. As with all significant shifts, there are both downside and upside risks.

The upside of greater efficiency, better outcomes, and cost reduction isn’t theoretical.

A Practical Way Forward: Advice to Boards and Audit & Risk Committees

Boards and Audit and Risk Committees are uniquely positioned to lead a strategic and thoughtful engagement with AI. To support this role, we suggest the following areas of focus:

1. Champion a strategic stance on AI: Support the development of an AI strategy aligned to the organisation’s strategic objectives, not just a policy. Ensure this strategy includes a balanced consideration of risks and opportunities.

2. Set the tone for innovation and governance: Encourage executive teams to explore AI’s potential, while putting in place clear expectations for responsible use. This includes ethical frameworks, model validation processes, and controls for data security.

3. Foster a culture of safe experimentation: Promote an environment where staff can pilot AI tools in controlled settings. This helps surface valuable use cases and builds internal capability.

4. Understand how AI intersects with workforce planning: AI can’t replace people, but it can enhance capacity. In the face of constrained budgets and rising service demand, explore how AI can support staff and extend reach.

5. Promote establishing cross-functional AI working groups: Form internal working groups with diverse skills, roles, and perspectives to identify use cases, surface opportunities, and flag potential barriers. This inclusive approach helps create shared ownership and broader insights.

6. Prioritise capability uplift: Ask what investments are needed to support AI literacy across the organisation. This includes staff training, recruitment of data and AI specialists, and partnerships.

7. Encourage cross-sector and inter-agency collaboration: Champion initiatives that share learnings and resources. Collaboration across the sector and with industry or academia can amplify outcomes.

8. Monitor risk while enabling opportunity: Integrate AI into risk registers and assurance processes, but avoid an overly cautious lens. Make sure the organisation is not just protected, but positioned to benefit.

9. Set clear reporting expectations: Request regular updates on AI-related initiatives, usage, and risk exposure. This ensures transparency and embeds AI into performance discussions.

Final Thoughts

AI is already part of the workplace landscape, whether acknowledged formally or not. For public sector and community organisations, the key is to take a measured, informed approach. These insights are drawn from what we’re seeing firsthand in our work with clients across government and not-for-profit sectors.

Start by providing clear guidance, encourage thoughtful use, and build internal capability. Boards and Audit and Risk Committees can play a crucial role by supporting practical steps that help their organisations engage with AI strategically, constructively and safely.

If you’d like to discuss how your organisation can take its next step, feel free to get in touch.

Michal Jozwik
Author

Executive Director and Co-Founder of Aster Advisory