This repository contains a sample AI Integration and Experimentation Policy. The policy outlines our approach to the ethical use of AI, promotion of experimentation, and protection against data breaches. It is designed to guide the integration of AI technologies, particularly Large Language Models (LLMs).

AI Integration and Experimentation Policy


Ethics and Transparency

  • Employees are expected to consider and uphold the highest ethical standards in their use of AI, including ensuring fairness, transparency, and privacy.
  • Any use of AI must comply with our organization's existing data privacy and confidentiality policies to ensure the protection of sensitive information.

Promotion of Experimentation

  • Employees are encouraged to experiment with AI technologies within the guidelines of our policies.
  • Organizational leaders are urged to promote a culture of innovation and openness, emphasizing the importance of calculated risk-taking in AI experimentation.
  • Feedback mechanisms should be established where employees can voice their experiences, challenges, and successes in AI experimentation.
  • Our organization encourages professional development and training in AI technologies; educational stipends or dedicated time for such activities may be provided.

Protection Against Data Breaches

  • Employees are strictly prohibited from disclosing sensitive information to AI technologies, particularly cloud-based LLMs. Examples include:
    • Campaign strategies or plans
    • Client or donor personal data, such as email addresses, phone numbers, or other personally identifiable information
    • Employee personal data
    • Non-public research data or findings
    • Software source code
    • Any information subject to non-disclosure agreements (NDAs)
  • Any significant use of AI technologies, especially in the case of new projects or integration into existing processes, should be reported to managers or supervisors.

Evaluating and Mitigating AI Risks

  • Our organization will conduct regular risk assessments of our AI use, taking into account potential threats to data security, privacy, and ethical standards.
  • Our organization will maintain a dedicated team or designate a person responsible for overseeing the ethical use of AI and addressing any issues that arise.

Respect for Industry Professionals

  • AI integration is intended to facilitate and enhance the work of industry professionals, not to replace them.
  • We encourage our employees to cultivate and refine their uniquely human capabilities, such as strategic thinking, creative problem-solving, and empathetic communication, which are indispensable as we increasingly integrate AI tools in our operations.
  • Professionals are advised not to rely too heavily on AI and to balance the use of AI tools with human skill and judgment.

Adaptability

  • This policy will be regularly reviewed and updated as necessary to adapt to the rapidly changing field of AI technology.
  • We will engage in an ongoing dialogue with peer organizations to stay informed about best practices and emerging issues in AI use.

The following is an extension of the core policy for organizations seeking to establish guidelines for disclosing the use of AI technology in public-facing communications or other work products.

Definition and Disclaimer of AI-Written Content

  • AI-written content requires disclaimers and additional scrutiny due to its distinct nature.
  • AI-written content is defined as content primarily generated by AI, as determined by our three-part test.
  • AI-assisted content is differentiated as being the result of significant human modification of AI-generated suggestions. This is treated as human-authored content and continues with standard operations without necessitating special disclaimers.

Sample AI transparency label:

"The content of this email was generated with the help of an AI Large Language Model (LLM), and carefully reviewed by our team for accuracy and alignment with our campaign values."


Three-Part Test for Defining AI-Written Content

This test is designed to distinguish AI-written from AI-assisted content. Content should be categorized as AI-written if it fails any of the three tests below; content that passes all three is treated as AI-assisted.

Test 1: Source of Functionally Impactful Ideas

The origin of the core content or ideas is evaluated. If an AI tool primarily generated the ideas that drive the function of the message, the content should be considered AI-authored. On the other hand, if an AI tool was merely instrumental in refining or enhancing a human's original ideas without contributing new functionally impactful ideas, the content passes this test.

Test 2: Degree and Impact of Human Input

This test evaluates the degree and impact of human input in crafting the key message, framing, or call-to-action. If a human substantively alters or contributes to these crucial elements, the content passes this test. Conversely, if an AI substantively alters or contributes to these crucial elements, the content should be considered AI-authored.

Test 3: Degree of AI Autonomy and Narrative Control

This test assesses the degree of AI autonomy, especially regarding content distribution. In the context of a political fundraising campaign, each piece of communication, be it an email, text, or ad, forms part of a larger narrative, like chapters in a book. Together, these chapters construct the complete campaign narrative, each contributing to the overall message. If an AI tool independently determines the sequence, timing, and recipients of these communications, it effectively shapes that narrative. Therefore, if the AI tool operates without substantial human influence over these decisions, the content should be considered AI-authored. If a human makes these key decisions, even based on AI-provided data, the content passes this test.
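For organizations that want to apply the three-part test consistently, the decision logic can be sketched in code. This is a hypothetical illustration, not part of the policy: the field names and the pass/fail rule (content passes all three tests to count as AI-assisted, otherwise it is AI-written) are assumptions drawn from the test descriptions above, and the actual judgments remain human ones.

```python
from dataclasses import dataclass

@dataclass
class ContentAssessment:
    """Human judgments for one piece of content, per the three-part test."""
    ideas_human_originated: bool        # Test 1: core, functionally impactful ideas came from a human
    key_elements_human_shaped: bool     # Test 2: message, framing, or call-to-action shaped by a human
    distribution_human_controlled: bool # Test 3: sequence, timing, and recipients decided by a human

def classify(assessment: ContentAssessment) -> str:
    """Return "AI-assisted" if the content passes all three tests, else "AI-written"."""
    passes_all = (
        assessment.ideas_human_originated
        and assessment.key_elements_human_shaped
        and assessment.distribution_human_controlled
    )
    return "AI-assisted" if passes_all else "AI-written"
```

For example, an email drafted from a staffer's outline, reframed by a human, and scheduled by a human would pass all three tests and be classified as AI-assisted; if the AI originated the core pitch, the same sketch would classify it as AI-written and trigger the disclaimer requirement.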


Contributing

Contributions to this policy are welcome. If you have suggestions for improvements or additions, please follow these steps:

  1. Fork this repository.
  2. Create a new branch in your forked repository.
  3. Make your changes in the new branch.
  4. Submit a pull request detailing the changes you've made.

License

This policy is released under the MIT License. You are free to use, modify, and distribute this policy, provided that you include the original copyright notice and disclaimers.