KairoPact Private Limited
Artificial Intelligence Policy
How we develop, deploy, and govern AI — and what that means for you.
1. About This Policy
KairoPact builds bionic execution infrastructure for the social impact sector — human-led, AI-supported, and accountable for outcomes. Artificial intelligence accelerates our work; our practitioners own every client relationship, every decision, and every deliverable.
This policy explains how we design, operate, and oversee the AI systems that power our work. It is written for clients, partner organisations, investors, and anyone who wants to understand what responsible AI use looks like at KairoPact.
This policy applies to all KairoPact products and services, including the Axon™ grant intelligence platform, the FieldPulse™ field reporting system, Quanta™ analytical tools, and any bespoke AI-assisted delivery engagements.
Where this policy and a signed client agreement differ on a specific matter, the signed agreement governs the engagement.
2. Who We Are
KairoPact Private Limited is incorporated in India and operates as a bionic execution infrastructure company — human-led and AI-supported. We serve NGOs, social enterprises, development organisations, and CSR-obligated companies that need high-quality, efficient, and accountable programme execution.
General enquiries: pritpal@kairopact.com | Website: kairopact.com
3. Our AI Principles
Six principles govern every AI system we build and every AI-assisted service we deliver.
Human Oversight at Every Gate
No AI output reaches a client, a beneficiary record, or a public document without a qualified human reviewer signing off. We enforce multi-stage Human-in-the-Loop (HITL) review gates in all our pipelines. AI drafts; humans approve.
Transparency by Default
We tell clients when AI has been used in the work we deliver and at which stage. We do not present AI-generated content as purely human-authored work.
Data Minimisation and Confidentiality
We process only the data required for the task at hand. Client data, beneficiary data, and organisational information are never used to train AI models.
Accuracy and Verification
AI systems can produce plausible but incorrect outputs. We build verification layers into our pipelines — including fact-checking agents, domain-specific critic models, and human expert review.
Fairness and Non-Discrimination
We actively review our AI pipelines for outputs that could reinforce systemic bias, misrepresent communities, or disadvantage particular groups.
Accountability and Improvement
We maintain logs of AI-assisted decisions, document model configurations, and conduct regular reviews. We do not treat AI failure as an excuse for poor delivery.
4. How We Use AI in Our Products and Services
4.1 Axon™ — Grant Intelligence Platform
Axon™ is a multi-agent AI pipeline that helps NGOs and development organisations develop high-quality grant proposals. The pipeline carries out the following AI-assisted tasks:
- Context research: gathering publicly available funder information, sector data, and precedent documents
- Needs analysis: structuring problem statements from organisational data provided by the client
- Draft generation: producing initial proposal drafts aligned to funder guidelines
- Quality review: a two-tier AI critic layer that evaluates drafts for coherence, compliance, and strategic fit
- Final review: mandatory human sign-off before any output is shared with clients or submitted to funders
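The staged flow above — AI stages draft and critique, a human gate controls release — can be sketched in a few lines. This is an illustrative sketch only, not KairoPact's implementation; all names (`Draft`, `ai_stage`, `release`, and so on) are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical sketch: AI stages transform a draft and are logged;
# release is blocked until a human reviewer signs off.

@dataclass
class Draft:
    text: str
    history: list = field(default_factory=list)
    human_approved: bool = False

def ai_stage(name: str, transform: Callable[[str], str]):
    def stage(draft: Draft) -> Draft:
        draft.text = transform(draft.text)
        draft.history.append(name)  # every AI step leaves an audit entry
        return draft
    return stage

def human_sign_off(draft: Draft, approver: str, approved: bool) -> Draft:
    draft.history.append(f"human-review:{approver}")
    draft.human_approved = approved
    return draft

def release(draft: Draft) -> str:
    # No output is shared without human approval.
    if not draft.human_approved:
        raise PermissionError("Draft has not passed human sign-off")
    return draft.text

# Usage: research -> draft -> critic -> human review -> release
d = Draft(text="problem statement")
for s in (ai_stage("context-research", lambda t: t + " | funder context"),
          ai_stage("draft-generation", lambda t: t + " | draft v1"),
          ai_stage("quality-critic", lambda t: t + " | critic notes")):
    d = s(d)
d = human_sign_off(d, approver="programme-lead", approved=True)
print(release(d))
```

The essential property is that `release` checks the approval flag rather than trusting the caller, so a pipeline cannot skip the human gate by accident.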
4.2 FieldPulse™ — Field Reporting
FieldPulse™ is an AI pipeline that generates field reports, programme updates, and stakeholder communications from structured data inputs. It operates under a four-gate Human-in-the-Loop architecture:
| Gate | Reviewer | What is reviewed |
|---|---|---|
| Gate 1 | AI Pipeline | Data ingestion, structuring, and source validation |
| Gate 2a | Programme Lead | Factual accuracy, tone, and sector alignment |
| Gate 2b | Founder / MD | Strategic framing, client sensitivity, and final content approval |
| Gate 3 | NGO Coordinator | Field accuracy and community representation (where applicable) |
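The ordering property in the table — each gate must pass before the next, and none can be skipped — can be expressed as a small state machine. A minimal sketch under assumed names (`ReportReview`, `pass_gate`); it is not FieldPulse™ code.

```python
# Hypothetical sketch of ordered, non-bypassable review gates.
# Gate labels follow the table above; the class itself is illustrative.

GATES = ["Gate 1: AI pipeline validation",
         "Gate 2a: Programme Lead review",
         "Gate 2b: Founder/MD approval",
         "Gate 3: NGO Coordinator sign-off"]

class ReportReview:
    def __init__(self):
        self.passed = []

    def pass_gate(self, gate: str) -> None:
        expected = GATES[len(self.passed)]
        if gate != expected:
            # Gates cannot be skipped or taken out of order.
            raise ValueError(f"Expected {expected!r}, got {gate!r}")
        self.passed.append(gate)

    def releasable(self) -> bool:
        # A report is releasable only once every gate has passed.
        return self.passed == GATES

review = ReportReview()
for g in GATES:
    review.pass_gate(g)
print(review.releasable())
```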
4.3 Quanta™ — Data and Analytics
Quanta™ applies AI-assisted data analysis to programme metrics, impact data, and operational information. Outputs are clearly labelled as analytical summaries and are not represented as authoritative conclusions without human interpretation and sign-off.
4.4 ImpactMatch™ — Talent Marketplace
ImpactMatch™ is KairoPact's curated specialist marketplace. AI assists with candidate matching, scope-fit assessment, and quality assurance workflows. All placement decisions are made by KairoPact practitioners, not by automated systems.
4.5 Advisory and Consulting Engagements
In bespoke engagements, AI may be used to assist with document analysis, research synthesis, drafting, and workflow automation. In all cases, a qualified KairoPact team member is responsible for the final advice and deliverable quality.
5. What We Do Not Do
The following practices are prohibited across all KairoPact operations:
- We do not use client data or beneficiary data to train, fine-tune, or prompt-engineer AI models for any purpose other than the contracted engagement.
- We do not present AI-generated outputs as expert advice without human review and a clear indication of the role AI played.
- We do not allow AI outputs to bypass human review gates in any client-facing pipeline.
- We do not use AI systems that process sensitive personal data without explicit client consent and appropriate data processing agreements.
- We do not deploy AI for automated decision-making that affects individuals' rights, livelihoods, or welfare without human oversight.
- We do not use AI-generated content in communications, reports, or publications that could mislead readers about the nature or quality of the underlying work.
6. Data Handling and Privacy
6.1 Data We Process
- Organisational data: programme descriptions, project reports, financial summaries, and strategic documents provided by clients
- Publicly available information: funder databases, sector research, government data, and published reports used for analysis
- Operational data: workflow logs, system performance data, and quality assurance records
6.2 Data We Do Not Process Without Consent
- Personal identifying information of beneficiaries unless required by the engagement and covered by a data processing agreement
- Sensitive personal data (health, financial, biometric) without explicit consent and appropriate safeguards
- Data belonging to third parties not party to the engagement
6.3 AI Model Providers
KairoPact uses third-party AI model providers. We select providers whose terms do not permit training on customer data. Clients may request a current list of AI model providers by contacting pritpal@kairopact.com.
6.4 Data Residency
Where data residency requirements apply, KairoPact operates within the relevant legal requirements. Clients with specific data residency requirements should raise these at the engagement scoping stage.
7. Regulatory Compliance
| Regulation / Framework | How KairoPact addresses it |
|---|---|
| GDPR (EU / UK) | Where KairoPact processes personal data of individuals in the EU or UK, personal data is managed in line with applicable contractual and regulatory requirements, including purpose limitation, appropriate retention, and lawful transfer safeguards. |
| DPDPA 2023 (India) | Personal data of Indian individuals is managed with reference to the Digital Personal Data Protection Act, applying principles of data minimisation, purpose limitation, and consent. |
| Companies Act 2013 (Section 135 — CSR) | AI-generated CSR reporting outputs are reviewed by qualified programme staff before submission or client delivery. |
| FCRA (Foreign Contribution Regulation Act) | Engagements involving FCRA-regulated entities are scoped to ensure AI systems do not process regulated foreign contributions data without appropriate controls. |
| 80G / 12A Compliance Documentation | AI-assisted compliance documentation is reviewed by domain-qualified advisors before client delivery. |
| CSR-1 and DARPAN Registration | Reference data from these frameworks is used read-only for research and is not modified or submitted by AI systems. |
8. Governance and Oversight
8.1 Operational Level
Every AI-assisted workflow has a designated human reviewer accountable for output quality. Review gates are documented in our Master Workflow framework and cannot be bypassed without explicit authorisation from a Director.
8.2 Leadership Level
The policy owner for KairoPact's AI governance is the Founder & Managing Director. The Chief Operating Officer is the designated escalation contact for operational AI incidents.
8.3 Policy Level
This policy is reviewed every six months during the current period of rapid development. The current version is always accessible at kairopact.com/ai-policy.
9. Errors, Incidents, and Remediation
- When an AI output error is identified, whether found by our team or reported by a client, it is logged, investigated, and classified by severity.
- High-severity incidents are escalated to Director level within 24 hours.
- Root cause analysis is conducted and documented. Pipeline changes are made where necessary.
- Clients affected by a material error are notified promptly with a clear account of what occurred and how it was addressed.
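The logging and escalation steps above can be sketched as follows. The severity levels and the 24-hour Director-level window come from the policy text; everything else (function names, record structure) is an illustrative assumption.

```python
from datetime import datetime, timedelta

# Hypothetical sketch of incident logging with an escalation deadline.
ESCALATION_WINDOW = {"high": timedelta(hours=24)}

def log_incident(description: str, severity: str, reported_at: datetime) -> dict:
    incident = {"description": description,
                "severity": severity,
                "reported_at": reported_at,
                "escalate_by": None}
    if severity in ESCALATION_WINDOW:
        # High-severity incidents go to Director level within 24 hours.
        incident["escalate_by"] = reported_at + ESCALATION_WINDOW[severity]
    return incident

inc = log_incident("Incorrect figure in field report",
                   severity="high",
                   reported_at=datetime(2026, 4, 1, 9, 0))
print(inc["escalate_by"])  # 2026-04-02 09:00:00
```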
10. Your Rights as a Client
- Right to know: You may ask, at any time, whether AI was used in the preparation of a specific deliverable and at which stage.
- Right to human review: You may request that any AI-assisted deliverable be subject to additional human review before delivery.
- Right to opt out: You may request that specific work be completed without AI assistance, subject to technical feasibility.
- Right to data information: You may request a summary of what data was processed by AI systems in the course of your engagement.
- Right to raise concerns: If you believe AI has been used in a way inconsistent with this policy, you may raise this with us directly.
To exercise any of these rights, contact: pritpal@kairopact.com
11. Contact and Policy Updates
| Company | KairoPact Private Limited |
|---|---|
| Registered in | India |
| Policy owner | Founder & Managing Director |
| Policy version | 1.5 |
| Policy effective date | 15 March 2026 |
| Next scheduled review | September 2026 |
| Contact for queries | pritpal@kairopact.com |
| Policy URL | kairopact.com/ai-policy |
"KairoPact exists to make the social sector more effective. We believe AI can be a powerful tool in that mission — if and only if it is governed with the same rigour, honesty, and accountability we ask of every human on our team. This policy is our commitment to that standard."