Artificial Intelligence Transparency Statement

Last updated: 10 March 2026

This statement is produced in accordance with the Digital Transformation Agency's Policy for the responsible use of AI in government and the Standard for AI transparency statements.
It should be read alongside the Department of Parliamentary Services (DPS) AI Transparency Statement.

Introduction

The Department of the House of Representatives (the department) supports the work of the House of Representatives of the Parliament of Australia. Our functions include providing secretariat services to the House and its committees, supporting members of parliament in their representative and parliamentary duties, facilitating the Parliament's national, international and regional relationships through an annual visits program, and publishing information about the work of the Parliament so that all Australians can be better informed about it.

We use information and communications technology (ICT) provided by the Department of Parliamentary Services (DPS) under a memorandum of understanding. Our approach to AI must be considered in this context: DPS provides the Parliamentary Computing Network (PCN) infrastructure on which most AI-enabled tools operate, and this statement should be read alongside the DPS AI Transparency Statement.

We are committed to the responsible, transparent and accountable use of AI. Our current approach is cautious and targeted, reflecting the sensitivity of the parliamentary environment, the primacy of human judgement in our work, and our obligations to the House, its members and the public.

 

Do we use AI?

Yes – but on a limited basis. Our current AI use is confined to internal administrative tasks, with all outputs subject to comprehensive human review. We do not use AI to make, or substantively influence, decisions affecting members of parliament, parliamentary staff, or the public.

Microsoft Copilot Chat is the recommended generative AI tool on the Parliamentary Computing Network (PCN). It comes with enterprise data protections and is the approved tool for tasks such as drafting assistance, research support, and document or meeting summarisation.

Non-Microsoft AI tools are accessible on the PCN via browsers, accompanied by a DPS-configured caution splash screen reminding users about the risks of AI use. Access is not blocked, but users must exercise careful judgement – particularly in not sharing personal, classified, sensitive or copyright-protected information with public AI services. This use aligns with whole-of-government advice that encourages agencies to permit use of approved public tools for OFFICIAL information.

We are monitoring AI developments and will evaluate whether additional tools are appropriate for specific operational needs. Any new use cases beyond the general uses described above must be assessed with the Information Management Office, which will seek advice from DPS and coordinate a benefits, risks and cost assessment before deployment.

 

How we use AI

The table below maps our current AI use to the DTA classification system (usage patterns and domains), available at https://www.digital.gov.au/policy/ai/resources/use-classification.

 

Usage patterns (columns) mapped against domains (rows):

| Domains | Decision making and administrative action | Analytics for insights | Workplace productivity | Image processing |
| --- | --- | --- | --- | --- |
| Service delivery | | Internal use | Internal use | |
| Compliance and fraud detection | | | | |
| Law enforcement, intelligence and security | | | | |
| Policy and legal | | | | |
| Scientific | | | | |
| Corporate and enabling | | Internal use | Internal use | |

The department provides guidelines on the use of AI by staff which describe appropriate and inappropriate use of AI in parliamentary work. The guidelines establish security and privacy requirements with a goal of enabling staff to safely explore innovation.

What this means in practice

Our current AI use is limited to internal staff use in the following areas:

  • Workplace productivity (corporate and enabling): Microsoft Copilot Chat is used by staff for drafting assistance, research support, and summarisation of documents or meeting notes. All outputs are reviewed by the staff member before use. While Copilot is the primary tool for this use case, there is some use of other public tools – Claude, ChatGPT and Gemini.
  • Workplace productivity (service delivery): AI features embedded in Microsoft 365 applications – including grammar and style assistance – are used in preparing parliamentary and administrative documents.
  • Analytics for insights (service delivery and corporate and enabling): Standard AI-assisted features in Microsoft security tooling, provided and managed by DPS, to support monitoring and anomaly detection on the PCN.

We do not use AI for decision making and administrative action. We do not currently use AI in any public-facing services.

As new use cases are agreed, the department will ensure it is clear to stakeholders when they are interacting with an AI system (such as an AI chatbot) or viewing content that is substantially AI-generated with minimal human review.

Future intentions

Staff are actively exploring how AI might support specific operational needs. These are areas of investigation: as at March 2026, none is yet deployed, and any progression to more formal use will be subject to the assessment and approval process described above.

Areas under active exploration, or which are likely to be explored, are set out below.

AI coding tools

We are exploring how AI coding tools can help make our information more accessible – for example, by converting the department's key procedural reference publications, including House of Representatives Practice and the Standing and Sessional Orders, into structured HTML. Our exploration of AI coding tools uses only publicly available information. AI coding tools could significantly improve the accessibility and searchability of our publicly available information for members, staff and the public, while reducing the effort required to convert the content to web pages. We are engaging with DPS about the benefits and risks of AI coding tools.

This area reflects the department's core functions and the specific demands of parliamentary work. Where these explorations progress to trials or deployment, this statement will be updated accordingly.

These explorations are occurring in the department’s Systems and Innovation section within the Information Management Office (IMO).

 

Public interaction and impacts

The department does not currently use AI in any system or service where members of the public directly interact with AI, or where AI outputs may significantly affect the public without human review.

Our public-facing digital services – including the Parliament of Australia website and community engagement programs – do not incorporate AI decision-making. All content published by the department is prepared and reviewed by staff.

We do not use AI to make, or contribute to, decisions that could negatively affect individuals. Where AI tools assist staff in preparing documents or analysing information, a human reviews the output before any action is taken.

 

Governance

The parliamentary departments and the Parliamentary Workplace Support Service (PWSS) meet the requirements of the DTA Policy for the responsible use of AI through a shared responsibility model. This model recognises:

  • each organisation is independent, but shares DPS as a common ICT service provider; and
  • the use of AI by an individual or group within the Parliament may affect others – collaboration is therefore essential.

The Parliament's accountable officials have established an interorganisational AI working group responsible for:

  • meeting DTA AI policy requirements;
  • building AI maturity within the Parliament, including aligning with best practice, legislation, regulation and whole-of-government direction;
  • maintaining a central register of AI use cases for the parliamentary departments; and
  • managing the Parliament's shared responsibility model for AI.

Within the department, the AI Accountable Official has responsibility for implementing the DTA AI policy, reporting high-risk use cases to the DTA, acting as the department's contact point for whole-of-government coordination, and driving staff engagement with AI training and governance.

 

Monitoring and risk management

DPS employs the Commonwealth AI assurance framework to support safe and ethical AI use across the Parliament. DPS monitors, logs and reports on:

  • DPS-managed AI solutions; and
  • third-party AI used by the Parliament.

Within the department, our approach to monitoring and risk management includes:

  • All AI outputs must be reviewed by the staff member responsible before use – accuracy, quality and appropriateness remain the responsibility of the individual, not the tool.
  • Only AI tools available through the PCN may be used for work purposes. Staff are notified promptly when specific tools are restricted – for example, the February 2025 requirement that DeepSeek not be used on government devices.
  • New AI use cases are assessed by the Information Management Office before deployment, including consideration of benefits, risks and costs.
  • AI-related risks are addressed in the department's risk register, which is regularly reviewed by the executive.
  • The Accountable Official and the DPS-coordinated Parliamentary AI Working Group monitor for new use cases, and maintain a unified register of existing use cases.

Where issues are identified with an AI tool, the department will act promptly – including engaging with DPS to suspend use if necessary.

 

Compliance with legislation and frameworks

We use AI only in accordance with applicable legislation, regulations, frameworks and policies. Key frameworks relevant to our AI use include:

  • Privacy Act 1988 (Cth) - While the department (covered by the Parliamentary Service Act 1999) is not subject to the Privacy Act, it will, wherever possible, collect, hold, use and disclose personal information in accordance with the Australian Privacy Principles (APPs) contained in Schedule 1 of the Privacy Act.
  • Archives Act 1983 (Cth) - governing records management obligations for information generated or processed with AI assistance.
  • Public Governance, Performance and Accountability Act 2013 (Cth) - including accountability for Commonwealth resources and systems.
  • DTA's Policy for the responsible use of AI in government and associated technical standard.
  • Australia's AI Ethics Principles.
  • Parliament of Australia Digital Strategy 2023–2027.

We do not rely on AI for decisions governed by practice or the standing orders of the House of Representatives. Human decision-making is maintained for all functions that directly support the constitutional and procedural work of the House.

 

Training and capability

DTA AI fundamentals training is mandatory for staff of the department. Ongoing training and education requirements are coordinated by the parliamentary departments' AI working group.

We are committed to ensuring staff understand the legal and ethical considerations around AI use – including privacy obligations, accuracy limitations, and the importance of human review of AI outputs.

As our AI use evolves, we will continue to build staff capability and update internal guidance accordingly.

 

Accountability and contact

Accountable Official

The Director (Information and Content) is the department’s Accountable Official under the DTA Policy for the responsible use of AI in government. The Accountable Official is responsible for:

  • implementation of the DTA AI policy within the department;
  • reporting new high-risk use cases to the DTA at ai@dta.gov.au;
  • acting as the department's contact point for whole-of-government AI coordination;
  • engaging with whole-of-government AI forums and processes;
  • keeping up to date with changing requirements; and
  • driving staff engagement with AI training.

Chief AI Officer

The Director (Systems and Innovation) has been appointed Chief AI Officer (CAIO) for the Department of the House of Representatives.

The CAIO and the Accountable Official hold complementary and mutually reinforcing roles. While the Accountable Official is responsible for policy compliance and governance accountability, the CAIO provides strategic leadership on AI adoption and capability across the department. Together, they ensure that the department’s approach to AI is both responsibly governed and forward-looking.

The CAIO's responsibilities include:

  • providing advice on the suitability and strategic fit of AI tools and use cases for the department’s operational environment;
  • driving AI maturity and innovation across the department, including identifying opportunities to use AI to improve the quality and efficiency of the department’s services;
  • coordinating with DPS and the parliamentary AI working group on technical AI matters; and
  • supporting the Accountable Official in meeting the department's obligations under the DTA Policy for the responsible use of AI in government.

Statement currency

This statement was last updated on 10 March 2026. It will be reviewed annually, or sooner where there is a material change to AI use or governance, consistent with the DTA AI policy.

Contact

To enquire about AI and its use within the Department of the House of Representatives, please contact the Information Management Office.

For broader enquiries about AI across the Parliament, contact the DPS ICT Service Desk.