Transparency Guidelines for Data-Driven Technology in Government

This guide sets out 8 points to help minimize the risks and maximize the benefits of using data-driven technologies within government processes, programs and services through transparency.

We’re in the early days of bringing these guidelines to life. We encourage you to adopt as many of these guidelines as possible and to share your feedback with us. You can send us an email at [email protected], or see CONTRIBUTING.md for more details.

You can also check out the Alpha Principles of Ethical Use.

Table of Contents

1. Identify Data Enhanced Decisions

2. Keep People in Focus and in the Loop

3. Provide Public Notice and Clear Communication Channels

4. Assess Expectations and Outcomes

5. Allow Meaningful Access

6. Describe Related Data

7. Support Rules, Requirements and Reporting

8. Update Regularly

1 Identify Data Enhanced Decisions

Data is used to enhance decisions in big and small ways, using increasingly sophisticated methods and technologies. These technologies can be hard to identify and consider apart from the processes and systems they support or power. They can encompass elements like algorithms, computational models, machine learning and others that we sometimes refer to as, or as part of, Artificial Intelligence (AI).

Understanding data-driven technologies as they exist today and how they could evolve in the future is critical to meaningful transparency.

Today these are emergent technologies, used to automate various steps and processes that may one day be completely computerized. This understanding of data-driven technologies encompasses all tools and approaches for automating or standardizing traditional human intervention in government work.

Why it matters

To be able to protect rights, apply principles and foster constructive discourse, there needs to be a common understanding of the elements and use of data to enhance decisions within government processes, programs and services.

From checklists and decision trees to weighted scoring and predictive modelling, we use many tools to create consistency and defensible logic within our projects, programs and services. Each of these elements requires transparency to support training, development and auditing.

How to follow this guideline

Identify and document elements that use data to inform or influence decisions such as:

  • Tools used to provide consistent application of criteria or rules

    • Checklists, scoring rubrics, calculators of risk/score
  • Data used to teach application of criteria or rules

    • Raw data used to teach staff or machines how to carry out tasks
  • Models of logic used to teach application of criteria or rules

    • Algorithms, computational models or packages used to analyze data for trends or optimal paths

Ensure all elements are included in a Data Asset Inventory with complete and updated metadata.
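
As a minimal sketch of what an inventory entry could look like, the Python record below captures the kinds of elements listed above. The schema, the field names and the example risk-scoring rubric are assumptions for illustration only; the guidelines do not prescribe a metadata format.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

# A hypothetical inventory record for one data-enhanced decision element.
# Field names are illustrative only; the guidelines do not prescribe a schema.
@dataclass
class InventoryEntry:
    name: str                    # What staff and the public should call it
    element_type: str            # e.g. "checklist", "scoring rubric", "predictive model"
    program: str                 # Process, program or service it supports
    decision_influence: str      # How much it informs or influences the decision
    data_sources: List[str] = field(default_factory=list)  # Data used to build or run it
    last_reviewed: Optional[date] = None                    # Supports guideline 8, Update Regularly

# Example entry for an assumed (hypothetical) risk-scoring rubric.
entry = InventoryEntry(
    name="Permit application risk rubric",
    element_type="weighted scoring rubric",
    program="Business permit intake",
    decision_influence="Flags applications for manual review; staff make final decisions",
    data_sources=["Historical application outcomes, 2015-2023"],
    last_reviewed=date(2024, 1, 15),
)
print(entry)
```

Keeping entries like this in the Data Asset Inventory, and refreshing them as elements change, also supports guideline 8.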

2 Keep People in Focus and in the Loop

Be aware of who will benefit most and who will be impacted, both directly and indirectly, as a result of using the data-driven technology. Development activities, from design to implementation, need to reflect multiple perspectives to assess and address potential and perceived risks.

People have a right to think and act for themselves. Governments should not limit a person’s ability to make important decisions for themselves, their dependents or their livelihoods.

Where the government makes decisions, it should make them with people and stakeholders rather than on their behalf, so the outcomes can be more practical, appropriate and trustworthy.

Why it matters

The use of data-driven technology like AI has the potential to impact entire groups of people at scale. Existing biases can be amplified, and accountability can be diminished, if potential impacts go unexplored.

Reducing the role of people within a process, program or service does not reduce the risk to the people impacted by it. Involving the right people at the right time can ensure protections are appropriate and practical.

Government services and programs can be vital to the lives of people, businesses and communities; being able to use government supports should create the conditions for better outcomes for all parties. Decisions should be made openly, fairly and, ideally, in partnership.

How to follow this guideline

During all stages of development, use and management of new technologies or data sources, consider the various audiences that need to be engaged, informed or consulted.

  • Create a comprehensive list of people involved with the data, tool, system or AI, their roles, motivations and potential impacts

  • Use personas or other tools to develop communication channels at each stage of development that reflect the needs and expectations of each audience

  • Apply user research where possible

  • When developing tools, systems or algorithms, allow input and collaboration options for all the people involved in the decision

  • Provide access to tools used on behalf of people, businesses or communities (such as risk assessments), and encourage discussion of assessment criteria or scoring formulae to foster collaborative use of these tools between government and clients, applicants and other affected parties

3 Provide Public Notice and Clear Communication Channels

Respect the public’s right to know when and how data enhancements to a decision or process may impact their lives. When acquiring or using technology such as AI that significantly affects individuals and communities, notice should be provided that is public, timely and clear.

Notice should be accessible to a broad audience and outline the purpose and potential impacts of a technological intervention like AI as well as clear channels for further communication.

Technology often acts as an invisible layer or ‘black box’; effort should be made to allow multiple perspectives to shape and guide these hidden elements.

Clear lines of communication to learn more, provide input or submit challenges should be accessible and promoted during multiple stages of development and use of data-driven technologies.

Advice should be actively sought from people contributing to any related data, designing any AI elements, or affected by any impacts, to ensure the use of any data-driven technology is designed and implemented optimally for collective benefit.

Why it matters

AI elements are often invisible within a process, program or service. For people to trust that the use of AI is safe and appropriate they must first be aware that the AI exists.

By providing a basic understanding of how data and technology are being used, a common level of digital and data literacy and fluency can be established and raised, improving society’s capacity to trust, adopt and shape technologies that impact everyone.

Due process mechanisms can address bias, correct and prevent further negative impacts and provide recommendations for improvements.

How to follow this guideline

Provide notice in plain language and make it available through familiar channels. The information must, at minimum, allow people to answer the following questions (a structured sketch follows the lists below):

  • What should I call it?

  • Why is the government using it?

  • How does it add value or efficiency to the process, program or service?

  • How much does it influence the decision or outcome?

  • What rules does the government need to follow when using it?

  • How might it affect me? My community? Other people or communities? The environment?

Establish communication channels to:

  • Solicit public comments to clarify concerns and answer outstanding questions before and during implementation of AI or other technology using data to enhance outcomes

  • Ensure that the public has a meaningful opportunity to respond to and, if necessary, dispute the use of technology, any outcomes or even the approach to accountability
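
The sketch below shows one way the minimum notice questions and communication channels could be captured together as a single plain-language record. The structure, field names, example program and channel descriptions are assumptions for illustration, not a required format.

```python
# A hypothetical public notice record covering the minimum questions above.
# Keys, values and channel descriptions are illustrative assumptions, not a required format.
notice = {
    "name": "Permit application risk rubric",                      # What should I call it?
    "why_used": "To apply review criteria consistently",           # Why is the government using it?
    "value_added": "Shortens routine approvals",                   # How does it add value or efficiency?
    "influence_on_decision": "Flags files for manual review only; staff make final decisions",
    "rules_followed": ["Applicable privacy legislation", "Internal data standards"],
    "potential_impacts": "Some applications receive additional scrutiny before approval",
    "communication_channels": {
        "learn_more": "Program web page (placeholder)",            # where to find more information
        "comments": "Public comment form (placeholder)",           # solicit comments before and during use
        "disputes": "Formal review request process (placeholder)", # dispute the use or its outcomes
    },
}

# Publishing could be as simple as rendering this record on a public web page.
for key, value in notice.items():
    print(f"{key}: {value}")
```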

4 Assess Expectations and Outcomes

Many jurisdictions are beta testing impact assessments to support safe and responsible AI and data use. These tools can help express intentions, expectations and outcomes, adding a deeper dimension to measurement and risk mitigation beyond technical specifications.

AI elements and their use vary across sectors, time and jurisdictions; comparing assessment results can help place technological elements and their use in context, allowing for better evaluation, comparison and perception.

Why it matters

Assessments can increase an organization’s internal expertise and capacity to evaluate the AI it builds or procures, so it can anticipate issues that might raise concerns, such as disparate impacts or due process violations.

How to follow this guideline

Complete appropriate impact and self-assessments and share the results along with plain-language summaries, including regular post-implementation assessments of actual impacts and outcomes.

Depending on the technological elements used in the process, program or service, potential assessment tools may include:

5 Allow Meaningful Access

AI and other data-driven technologies are often complex and/or proprietary, meaning access to the technical elements may need to be limited. Very few people may have the skills, resources or roles to analyze or understand the technical elements of the AI, tool, algorithm or system, which limits the value of broader accessibility of these elements.

Companies may want to protect their proprietary tools or systems from replication or adaptation by competitors, and governments may want to protect against ‘gaming’ of public programs or services. Access should be limited to protect these interests while still allowing for oversight and accountability.

Why it matters

To enable accountability of the computational model, there needs to be an ongoing opportunity for external researchers or auditors to review, audit and assess these systems using methods that allow them to identify and detect problems.

How to follow this guideline

Develop a process for applying for meaningful access that includes:

  • Criteria for granting access

  • Security measures for ensuring safe access of sensitive data and AI assets

  • Publication or public notice of audit results or academic findings

6 Describe Related Data

No data is perfect. As machines begin to learn how to learn, they have the power to amplify the imperfections in their training data. Understanding the imperfections in the data is essential to understanding the strengths and weaknesses of the outcomes.

Data used to train machines needs to be assessed for bias continually, and steps need to be taken to flag and minimize different biases during the entire lifecycle of use.

Why it matters

Machines have been expected to be free of human bias, but we have learned that this is not the case. Technologies like AI that minimize human intervention can make these biases seem more credible and can entrench harmful bias into processes, programs and services that serve our most vulnerable people.

How to follow this guideline

Identify and describe the following (a structured sketch follows this list):

  • What data is related to this tool, outcome or process and how?

  • How was the data collected?

  • What biases exist within the data (known and possible)?
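
As a hedged sketch, the record below shows how answers to these three questions might be documented in a structured, reviewable form. The keys, the example dataset and the bias statements are assumptions for illustration, not a mandated datasheet format.

```python
# A hypothetical data description answering the three questions above.
# Keys and values are illustrative assumptions, not a mandated datasheet format.
data_description = {
    "related_data": {
        "dataset": "Historical permit application outcomes, 2015-2023",
        "relationship": "Used to set the weights in the risk-scoring rubric",
    },
    "collection_method": "Extracted from the program's case management system by program staff",
    "known_biases": [
        "Older records under-represent online applications",
        "Outcomes reflect past staff decisions, which may carry human bias",
    ],
    "possible_biases": [
        "Regions with fewer applications may be scored less reliably",
    ],
    "last_bias_review": "2024-01-15",  # supports continual assessment over the lifecycle of use
}

for question, answer in data_description.items():
    print(f"{question}: {answer}")
```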

7 Support Rules, Requirements and Reporting

Transparency should support local and global governance guiding AI, Automated Decision Systems and/or data use. Metadata and accessibility should reflect those requirements and support reporting and/or assessment activities that measure, evaluate and communicate data-driven technology use within context.

Why it matters

Protecting rights and ensuring safety require a foundation of truth that supports law and justice. Transparency provides the opportunity for AI and data practitioners to contribute to this foundation through consistent language, understanding and expectations. A wide understanding of the benefits and risks shapes the accepted practices and principles that will frame future AI and technology use and expectations.

It also allows AI practitioners to easily demonstrate compliance with any applicable governance tools or frameworks.

How to follow this guideline

Document how the use of data-driven technologies in the process, program or service aligns with ethical principles, governance frameworks and industry standards, such as:

8 Update Regularly

The nature of data-driven technologies, especially Machine Learning, is dynamic, always learning and improving. Transparency efforts need to be ongoing and reflect the most current iteration of the product, tool, algorithm or intelligence.

Why it matters

Change without human intervention needs ongoing human assessment to make oversight or accountability possible.

How to follow this guideline

Revisit and renew all the above guidelines regularly.
