Artificial Intelligence: Threat or Theatre?

Robert Chalmers, Senior Lecturer in Law, Flinders University


In 2023 AI emerged from the shadows into mainstream use and consciousness. Amidst the hype about its impact there is also increasing discussion of regulatory intervention. Europe is moving to finalise an AI Act, and the US has recently issued executive orders. Some of this has been driven by fears of misuse of the technology leading to harm or even existential threat. Perhaps it is also driven by dominant players in AI being keen to see regulation create barriers to entry to others, including those committed to a more open source and interoperable approach.

At its core AI is simply software and data. It is not magic and it is not human. Still, it does have a growing impact on our lives and this is likely to grow as it becomes more sophisticated over time. Putting aside the hyperbole of its potential existential threat, what is more relevant today is its broader use in the background of our lives, in automated decision making about jobs, loans, housing, healthcare or education.

The evidence is clear: AI can be bias-prone; AI producers and users do not always share the same understanding of the purpose, utility and functionality of AI tools; AI producers cut corners to sell products that may not be suited to their intended task; and AI users are not always aware of how and when their AI tools operate beyond their intended scope (boundary creep). All these risks require monitoring, which in turn requires transparency. Transparency may not equate to understanding, but it is essential for enabling critical review and reflection by users as well as producers of AI. Governance systems, both ethical and legal, have a role to play in shaping the appropriate use of AI. The significance of this is underscored by recent ructions in the governance and leadership of OpenAI itself.

While there is a lot of ethical guidance already, as usual the challenge is the contextual application of those principles. Legal frameworks specifically tuned to AI are relatively few and new, and consistent global standards are yet to emerge, but many governments including in Australia are considering how best to approach this. I partnered with colleagues earlier this year to make submissions to Federal and South Australian government consultations on the responsible use and regulation of AI. Our submissions have taken a broad conception of what AI is, without fixating on latest developments or definitions. This is because a narrow focus risks being outdated and also misses the real point surfaced by AI, namely the software-based automation of the socio-technical systems that mediate and shape most elements of our society and economy. Our submissions were informed by an interdisciplinary approach, drawing on business, social, ethical and legal thinking, and addressing the leadership and people skills needed to implement ‘Responsible AI’.

We see significant potential for AI across many sectors. Our core thesis is that in order to make the most of this opportunity, and avoid or mitigate the downside risks, we need a central focus on the people elements. This includes leaders, workers, those developing AI and those using it, and those whose lives are impacted by its application whether they are aware of it or not.

It was pleasing to see that many of the issues that we advocated on have been picked up in the Australian Government’s interim response to the consultations. These include concerns about the use of AI to predict suitability for a job, the need for greater transparency, contextual analysis of risk, and for government to act as an exemplar in its own use of AI (addressing matters identified in the Robodebt Royal Commission).

Broader education and awareness of the current and possible future applications of AI is a key starting point to enable this, a point made earlier by the Australian Human Rights Commission.

All parts of our community have a role to play, including government, the education sector, industry and the not for profit or for purpose sectors who are using these systems. Careful use of design thinking and project management will also be important parts of getting better outcomes.

AI is a tool: like other tools it can be used skilfully or poorly, and its design and implementation can be effective and useful, or wasteful or even destructive. While Robodebt was not an example of sophisticated AI, it showed clearly how automated decision making can turbocharge the negative consequences of a flawed policy approach, leading to billions of dollars in costs, to deaths and misery, and to fractured public trust in government. So, before we can solve problems – with AI or other approaches – we must first understand them thoroughly. In designing solutions we need to engage those who will be affected by them as collaborators, and develop, prototype and test potential solutions with those groups, paying close attention to their needs and feedback. We need to be less artificial (technology focused) and more intelligent (people focused).

Additional reading

1) Submission to the Australian Government “Supporting responsible AI: discussion paper”

2) Additional personal submission to the Australian Government “Supporting responsible AI: discussion paper”

3) Submission to Select Committee on Artificial Intelligence (South Australian Parliament)

4) Additional JBC submission to Select Committee on Artificial Intelligence

5) Report of the Select Committee on Artificial Intelligence (South Australian Parliament)

(Joint submissions also involved input from Dr Andreas Cebulla, Associate Professor; Dr Rajesh Johnsam, Senior Lecturer; Professor Tania Leiman, Professor and Dean of Law; and Dr James Scheibner, Lecturer.)

Mr Robert Chalmers


Robert Chalmers is a Senior Lecturer in Law at Flinders University.

Rob is a versatile professional with extensive experience as a manager, company director, commercial advisor, lawyer, and educator. His career spans both private and public sectors, contributing to projects across diverse fields, including defence, agriculture, and health. Rob possesses specialised expertise in intellectual property, technology, and commercialisation.

In his teaching role, Rob leverages his diverse professional background to provide students with insights into real-world scenarios, preparing them for future challenges. His interests extend to the dynamic interplay between technology, society, and regulation. Passionate about exploration and development, Rob actively seeks opportunities to delve into new fields. Beyond his professional pursuits, he is dedicated to fostering the skills and awareness of others through coaching, teaching, and training.

