Earlier this month, the Australian Sports Commission published a Guide to Responsible Use of AI in Sport and a Roadmap for AI use in sport. The guide was developed by the Commonwealth Scientific and Industrial Research Organisation (CSIRO) in collaboration with the ASC and Sport Integrity Australia. It offers useful practical guidance to the Australian sport sector on how to navigate and use AI responsibly.
Australia’s high performance sports system has a proud history of being one of the world’s most innovative. Australia is also one of the world’s most competitive sports marketplaces – a country with a relatively small population where a large number of professional sports compete for participants, talent, fans and commercial partners. For these reasons, there may be a temptation in Australian sport to seek the productivity gains offered by AI as quickly and fully as possible. The guide is intended to assist in enabling these gains to be achieved whilst ensuring that AI is used in ways that are safe, ethical, and aligned with Australian sport’s values of fairness, inclusion, integrity and respect.
The guide references the Australian Government’s Voluntary AI Safety Standard and AI Ethics Principles and adapts these guardrails to the unique needs of sport, focusing on fairness, inclusion, safety, and the spirit of competition. It provides clear and practical guidance on managing the risks of AI use in both community and high performance sports, including in relation to the following issues:
Transparency and contestability – many AI models are “black box” models which provide outputs without showing how the result was reached. They are widely used in sport to generate injury risk scores and player selection recommendations. There have already been cases of athletes who feel fully fit and ready to compete being forced to sit out based on an AI-generated injury risk score which the athlete does not agree with or understand. This undermines trust, fairness, and accountability. To address this, the guide recommends that relevant AI systems should be required to provide explainability summaries which athletes, coaches and support staff can understand, and there must always be a process for humans to challenge or override AI outputs.
Accountability and oversight – AI systems should inform human decisions in sport, but never replace them. AI systems should never be left to function without human monitoring and oversight. The guide encourages the establishment of clear governance structures so it is always known who is responsible if the AI model is wrong. Staff should also be trained to recognise when not to use AI – for example, when there is insufficient data, or when an athlete presents with unusual circumstances.
Privacy and security – high performance AI systems often use highly sensitive medical information relating to athletes. It is critical that the proper consents (explicit and revocable) are obtained before this data is collected, and that this data is stored securely and otherwise used in accordance with the Australian Privacy Principles (APPs). Robust data governance frameworks must be in place to address this risk, including mapping out data flows for each AI tool (what is collected, why, where it is stored, and who can access it), and prohibiting any sharing of athlete data outside the immediate performance or medical team without consent.
Fairness and inclusivity – AI systems in sport must treat individuals and communities fairly and impartially, avoiding bias or discrimination based on protected characteristics such as gender, age, disability, cultural background, or sexual orientation. The guide refers to the risk of algorithmic bias, particularly when AI models are trained on non-diverse or international datasets which do not reflect Australia's sporting population. Sports organisations should conduct regular bias checks on their AI systems as the technologies evolve.
Governance alignment and continuous improvement – the guide highlights the risk of fragmented governance within sports organisations where different business units use different AI systems (all provided under different terms and conditions) and develop their own policies, practices and procedures for AI use. This patchwork approach makes it harder to ensure consistency, accountability, and trust across the organisation. To address this risk, sports are encouraged to establish a dedicated AI governance body to provide focused oversight, consolidate expertise and establish clear accountability.
We would encourage lawyers working within sporting organisations to read the guide, and to ensure they play a central role in AI governance within their organisations. A significant number of the risks associated with AI are legal in nature, and are best managed by in-house lawyers. The guide refers to the risk of breaching privacy law through AI use, but there is also the risk of breach of confidence, breach of third party intellectual property rights and inadvertent waiver of legal privilege. It is also important that AI use complies, and is consistent, with existing rules, regulations and policies already in place, and that new AI governance structures do not create a parallel set of rules which conflict with existing ones.

For further information, please contact:
Brianna Quinn, Partner, Bird & Bird
brianna.quinn@twobirds.com
