Effective Date: April 2026
Holon Foundation recognizes that artificial intelligence and machine learning tools are increasingly deployed in conservation, research, and organizational operations. This policy establishes ethical guidelines and governance frameworks for the responsible use of AI systems across all Holon Foundation activities. The policy ensures that AI deployment aligns with our 501(c)(3) mission to advance conservation and serve our community with integrity and transparency.
1. Purpose & Scope
This AI Ethics & Governance Policy governs:
- All use of AI and machine learning systems in Holon Foundation operations
- Development or deployment of proprietary AI systems for conservation applications
- Use of third-party AI tools (ChatGPT, Claude, wildlife identification AI, etc.)
- Conservation technology applications using AI (drones with image recognition, eDNA analysis algorithms, GIS modeling)
- Internal organizational processes involving AI (recruitment screening, data analysis, decision support)
This policy applies to all staff, interns, volunteers, and partners conducting work on behalf of Holon Foundation.
2. Core Principles
All AI deployment within Holon Foundation must adhere to these five core principles:
Human Oversight
AI systems augment human decision-making but do not replace human judgment. Critical decisions involving personnel, conservation priorities, or resource allocation require human review and approval.
Transparency
When AI tools influence decisions affecting individuals (applicants, interns, staff) or communities, those individuals must be informed that AI was used. We do not deploy AI systems covertly.
Fairness
AI systems must be regularly audited for bias and discriminatory outcomes. If an AI system produces unfair results for any demographic group, the system is modified or discontinued.
Privacy
Personal data used in AI systems must be protected, anonymized where possible, and never shared without consent. AI training data must be consistent with our privacy commitments.
Accountability
Responsibility for AI deployment rests with Holon Foundation, not with the AI vendor. We remain accountable for harms caused by AI systems we deploy, regardless of whether they are proprietary or third-party tools.
3. Prohibited Uses of AI
The following AI applications are strictly prohibited:
Biometric Categorization
- Prohibited: Use of facial recognition, gait analysis, or other biometric AI to categorize, track, or identify individuals without explicit consent
- Exception: Wildlife identification AI (identifying animals in camera trap footage) is permitted
- Rationale: Biometric categorization raises serious privacy and civil liberties concerns
Subliminal Manipulation
- Prohibited: AI systems designed to influence behavior through hidden persuasion, emotional manipulation, or psychological exploitation
- Prohibited: Personalized messaging or content designed to manipulate decision-making without awareness
- Rationale: Respects individual autonomy and informed consent
Autonomous Personnel Decisions
- Prohibited: Using AI to make final decisions on hiring, firing, promotion, or compensation without human review
- Permitted: Using AI to screen applications, identify qualified candidates, or flag resume inconsistencies (with human review required before action)
- Rationale: Personnel decisions are high-stakes; human judgment is essential
4. Human-in-the-Loop Requirements
For any AI system influencing significant decisions, a human must be involved in the final decision-making process:
Personnel Decisions
- AI may screen resumes, identify skills gaps, or flag potential concerns (employment gaps, inconsistent credentials)
- Final hiring/firing decision must be made by a human hiring manager or supervisor
- Candidate must have opportunity to discuss any AI-based concerns (e.g., resume screening issues)
Conservation Site Management
- AI may analyze data to suggest restoration priorities, species planting recommendations, or habitat management strategies
- Final conservation decisions must be made by human ecologists/conservation managers
- Human experts may override AI recommendations whenever they have compelling reasons to do so
Verification & Quality Control
- AI-generated classifications or identifications (e.g., plant species identification from photos) must be spot-checked by human experts
- Critical conservation or research data cannot rely solely on AI; human validation is mandatory
5. Bias Auditing
Any AI system used for applicant screening, hiring, or decision-making affecting individuals must undergo regular bias auditing.
Audit Frequency
- Initial deployment: Audit before deployment and again after 100 decisions
- Ongoing: Annual audit of all active AI systems
- Triggered audit: If complaints of bias are raised or disparate impact is observed
Audit Scope
- Identify whether AI system produces different outcomes for different demographic groups (race, gender, age, disability status)
- Examine false positive/negative rates across groups
- Assess whether AI recommendations correlate with protected characteristics
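The outcome-rate comparison described above can be sketched as a simple disparate-impact check. This is an illustrative example only: the function name, data shape, and the 0.8 screening convention (the "four-fifths" rule of thumb) are assumptions for the sketch, not requirements of this policy.

```python
from collections import defaultdict

def audit_outcomes(records):
    """Compare selection rates across demographic groups.

    records: iterable of (group, selected) pairs, e.g. ("group_a", True).
    Returns each group's selection rate and its ratio to the highest
    group's rate -- the comparison behind the common "four-fifths" screen.
    """
    counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
    for group, selected in records:
        counts[group][1] += 1
        if selected:
            counts[group][0] += 1
    rates = {g: sel / tot for g, (sel, tot) in counts.items()}
    top = max(rates.values())
    return {g: {"rate": r, "ratio_to_top": r / top} for g, r in rates.items()}

# Group B is selected at half the rate of group A (ratio 0.5), which
# would fall below a 0.8 screening threshold and trigger a deeper audit.
results = audit_outcomes(
    [("A", True)] * 8 + [("A", False)] * 2 +
    [("B", True)] * 4 + [("B", False)] * 6
)
```

A low ratio is a signal for investigation, not proof of bias; comparing false positive/negative rates across groups requires labeled outcome data and an analogous per-group calculation.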
Action Upon Bias Detection
- If bias is detected: AI system is modified, retrained, or discontinued
- Past decisions made by the biased system are reviewed to determine whether they should be re-evaluated
- Notification to affected individuals if bias caused discriminatory outcomes
6. Data Privacy in AI Systems
Training Data Requirements
- Consent: Personal data used in AI training must be obtained with explicit informed consent or anonymized
- Anonymization: Personally identifiable information (names, emails, phone numbers, addresses) must be removed from training data
- No Sensitive Data: Health information, financial data, or criminal history may not be used in AI training without extraordinary justification
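As a minimal illustration of the anonymization requirement, obvious direct identifiers such as email addresses and U.S.-style phone numbers can be stripped with pattern matching. This sketch is not a complete scrubber: names, street addresses, and indirect identifiers require human review and more thorough tooling.

```python
import re

# Illustrative patterns only: they catch obvious emails and U.S.-style
# phone numbers, not every format, and not names or street addresses.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def scrub(text):
    """Replace obvious direct identifiers before data enters a training set."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

print(scrub("Reach the applicant at jane.doe@example.org or 555-123-4567."))
```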
Third-Party AI Services
- Before using third-party AI tools (ChatGPT, Claude, image recognition services), assess whether personal data will be transmitted
- Do not upload confidential information, intern data, donor information, or protected health data to third-party AI services
- Review third-party privacy policies to understand data usage and retention
Data Retention & Deletion
- AI training datasets must be retained only as long as necessary for system function
- Individual data subjects may request data deletion (within reasonable technical limits)
7. Transparency Requirements
Disclosure When AI Is Used
The following situations require disclosure to affected individuals:
Applicant Screening
- Job applicants must be informed that AI is used to screen resumes/applications
- Disclosure can be in job posting or standard application materials
- Example: "We use machine learning to screen for basic qualifications. All shortlisted candidates receive human review."
Intern Evaluation
- If AI tools are used in intern performance evaluation, interns must be informed
- Interns have the right to request human review if they dispute AI-based feedback
Conservation Decisions
- If AI analysis influences conservation decisions affecting community stakeholders, stakeholders should be informed
- Community meetings describing conservation plans should acknowledge AI role if significant
Explanation & Contestation
- When AI influences significant decisions, individuals have the right to an explanation of how AI was used
- Individuals may request human review if they believe the AI made an error or produced an unfair result
8. Conservation Technology Applications
AI tools increasingly support conservation fieldwork. These applications are permitted under the following conditions:
Wildlife Identification AI (Camera Traps, Drones)
- Purpose: Automated species identification, population monitoring, behavior analysis
- Status: Permitted and encouraged for conservation research
- Requirement: AI identifications must be spot-checked by a wildlife expert (minimum 10% of classifications)
- Privacy Note: Camera traps must not be placed where they capture human faces without consent
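The 10% spot-check minimum above can be operationalized with simple random sampling; a minimal sketch, where the function and parameter names are assumptions for illustration rather than anything prescribed by this policy:

```python
import math
import random

def spot_check_sample(classification_ids, fraction=0.10, rng=None):
    """Randomly select AI classifications for expert review.

    fraction=0.10 mirrors the 10% minimum; rounding up ensures small
    batches still send at least one record to a human reviewer.
    """
    rng = rng or random.Random()
    k = max(1, math.ceil(len(classification_ids) * fraction))
    return rng.sample(classification_ids, k)

# Example: 100 camera-trap classifications -> 10 drawn for expert review.
batch = [f"clip-{i:03d}" for i in range(100)]
review_set = spot_check_sample(batch, rng=random.Random(42))
```

Random (rather than convenience) sampling matters here: reviewing only easy or recent clips would understate the model's true error rate.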
eDNA Analysis Algorithms
- Purpose: Automated analysis of environmental DNA to detect species presence/absence
- Status: Permitted; analysis software vetted by partnership labs
- Requirement: Results validated against traditional identification methods
GIS Modeling & Species Distribution Prediction
- Purpose: Predict suitable habitat for rare species, optimize restoration site selection
- Status: Permitted and encouraged
- Requirement: Models must be validated against field observations; predictions do not replace on-site assessment
Drone Image Analysis
- Purpose: Automated analysis of vegetation cover, erosion, invasive species mapping
- Status: Permitted; requires FAA compliance for drone operations
- Requirement: Flight plans respect privacy of adjacent properties; no surveillance of people
9. Third-Party AI Tools Vetting
Before deploying third-party AI services, Holon Foundation conducts a vetting process:
Vetting Checklist
- Does vendor privacy policy align with our commitments?
- Is personal data transmitted outside the U.S.? (Consider GDPR/international compliance)
- Does vendor use our data for AI training/improvement?
- Are we comfortable with the vendor's business practices and track record?
- Does service have adequate security and encryption?
- Can we export our data if we discontinue the service?
Tool Categories
Approved Third-Party Services (General Use)
- ChatGPT, Claude: General AI assistants (acceptable for draft writing and brainstorming; do not submit confidential data)
- Google GIS tools, ArcGIS: Mapping and analysis software (vetted data storage)
Requires Evaluation Before Use
- Any new AI service not previously vetted
- Services handling sensitive data (donor info, internship records, etc.)
- Specialized conservation tools (species identification, habitat modeling)
Evaluation Authority
- Technology & Ethics Committee (Technology Director, Program Director, Executive Director, external consultant) reviews and approves new AI tools
- Committee decision made within 14 days of request
10. Review & Updates
Annual Review
This policy is reviewed annually by the Technology & Ethics Committee to:
- Assess effectiveness of current AI governance
- Identify new AI applications and emerging risks
- Update approved tools/services list
- Document any bias audits or incidents
- Incorporate feedback from staff and interns
Policy Updates
- Significant updates posted with new effective date
- All staff/interns notified of policy changes
- Training provided if changes affect current operations
Policy Questions & Contact Information
Appendices
Appendix A: AI Tool Evaluation Form
- Tool name and vendor
- Purpose and proposed use case
- Data inputs (what data will be processed?)
- Privacy assessment (is personal data involved?)
- Bias risk assessment (could this create unfair outcomes?)
- Transparency plan (how will we disclose AI use?)
- Committee decision and conditions