Last updated: 29 January 2025
1. Introduction
What the policy covers
This policy sets out how we (Insync and our employees) will manage the use of AI in accordance with our vision, purpose, and values. It outlines our approach to ensuring that AI is used responsibly, ethically, privately, and securely.
What is AI?
AI in business is a rapidly developing space. For this policy, we use the term AI to refer to the class of software that is not programmed with explicit rules but instead learns patterns from a training dataset and applies those patterns to a given input.
At the moment, this is mainly in the form of classifiers and generators, including:
- Text and image generation
- Data labelling (e.g. text sentiment analysis or image object identification)
- Translation
Note that this list is not exhaustive.
For the purposes of this policy, we also define the term “AI tool.” It refers to a standalone AI product or system, or an in-house-developed tool that leverages AI. It does not refer to software products that have integrated AI features or that we already use and trust.
2. AI tool approval
Appropriate oversight is necessary when using AI. We therefore have an approval process that must be completed before any AI tool can be used within our business.
We commit to never introducing a new AI tool, or using an already approved AI tool in a new way, without completing the approval process outlined in this document.
This process is a function of our Business Improvement Forum (BIF), which will evaluate each proposed AI tool for a specific business use and decide whether to approve it. The BIF meets regularly to review such proposals; for urgent cases, approval can be granted at an out-of-cycle meeting or by circular resolution.
A list of approved tools will be available to all employees. Approximately three months after approving an AI tool, the BIF will review its usage to ensure that it is being used in accordance with this policy. Initial beta testing and prototyping are exempt from approval, provided they are conducted in accordance with our privacy policy.
3. AI tool evaluation
This section outlines our key considerations when assessing whether to approve an AI tool for use in our business.
Bias management
AI tools can exhibit biases in their output. These biases can arise from several factors but are most commonly the result of biased training data. Because AI works by replicating patterns from its training dataset and finding them within a given input, an AI can learn patterns that its creators did not intend it to replicate, or did not know were present in the training data. We acknowledge that potential bias is a fundamental part of AI. We endeavour to eliminate biases to the best of our ability, or to clearly state when we cannot.
Privacy and security
We only approve AI tools and providers that align with our privacy policy, including Australian privacy law and the GDPR (the EU’s General Data Protection Regulation). For any given AI provider and tool, the person proposing the tool must gather the following information for the BIF to consider:
- Data Security
  - Where is the data stored?
  - Where is the data processed?
  - How is the data encrypted?
    - During transit
    - While at rest
- Data Privacy
  - Who has access to the data?
  - Does the provider utilise the data to train its systems further, potentially risking a data leak?
- Supplier Reputation
  - What is the public perception of the supplier?
- Supplier Security Posture
  - Is the supplier compliant with relevant standards and certifications?
- Disaster Recovery
  - Does the provider have an appropriate disaster recovery plan in place?
They will also need to ensure that the tool complies with our privacy policy, which can be found at https://insync.com.au/privacy-policy/.
The following are some of the requirements imposed by the privacy policy and Australian privacy law:
Data Sovereignty
Data sent overseas (including data in the hands of a third party) must still be treated in a way that does not breach the Australian Privacy Principles, and we must take “reasonable steps” to ensure that this occurs. Does this AI tool comply with this requirement?
The right to be forgotten
Respondents can withdraw their consent to our holding their data, and we must take “reasonable steps” to delete or de-identify it. Does this AI tool allow us to permanently delete or de-identify uploaded or processed data?
4. AI processes and best practices
The following section outlines our accepted AI processes and best practices.
Transparency
We are committed to using client data transparently. This policy is readily available on our website.
Accountability
We understand the need for accountability when dealing with AI systems, as they are imperfect. Therefore, we take full responsibility for any AI-generated content that we publish.
Verification of AI-generated marketing content
We are committed to using AI without compromising the integrity of our marketing. Therefore, no AI-generated marketing content will leave the organisation without being verified by a human.
Other commitments
- We will remain cognisant of industry best practices
- We will never use AI without understanding its implications for our data and our systems
- In the event of a material incident, we will comply with our Business Continuity Plan (BCP). Incidents include:
- Data breach
- Any other security breaches
- Any violation of this policy
- Any failure of performance
- We will use the widest practical range of tools to deliver the best value for our clients and to be their trusted improvement partners.