[Clinic Name] — Generative AI Policy
1. Purpose
This policy governs the use of generative artificial intelligence (GAI) tools in [Clinic Name]. Because this clinic represents real clients in real legal matters, the use of GAI raises professional responsibility concerns that do not arise — or arise differently — in other law school courses. This policy exists to protect clients, ensure compliance with applicable rules of professional conduct, preserve the educational value of clinical work, and establish clear expectations for students and supervisors.
This policy will be reviewed and updated at least once per academic year. GAI technology and the professional norms surrounding it evolve rapidly; this document reflects our best understanding as of its effective date.
2. Guiding Principles
This policy rests on several principles that inform every provision that follows. We state them here so that when a situation arises that this policy does not expressly address, you can reason from these principles to an appropriate course of action.
2.1 Pedagogical Purpose
Every restriction and permission in this policy serves a learning objective. Clinical education develops professional competencies — legal analysis, client counseling, advocacy, judgment — that require students to do the intellectual work themselves. Where AI use supports that development (for example, by freeing students to focus on higher-order analysis), it is permitted and encouraged. Where AI use threatens to substitute for the student’s own reasoning, it is restricted. When you encounter a restriction, understand that it exists to protect a specific learning outcome, not to express distrust of the technology.
2.2 Professional Responsibility
Lawyers owe duties of competence, confidentiality, and candor that do not pause for new technology. This policy translates those duties into specific rules for GAI use. Students should understand that following this policy is not merely an academic exercise; it is practice in the kind of thoughtful, context-sensitive self-regulation that the profession demands.
2.3 Transparency over Prohibition
This policy does not impose a blanket ban on GAI. Blanket bans are unenforceable (detection tools are unreliable), and they forfeit the opportunity to teach students how to use AI responsibly. Instead, this policy requires transparency: you may use GAI within the boundaries described here, but you must always disclose that use and document your process. Transparency is the mechanism that makes everything else in this policy workable.
2.4 Authentication
Regardless of what tools you use, you must be able to represent that the work product you submit reflects your own understanding and professional judgment. This means you can explain the reasoning, defend the conclusions, and identify the choices you made. AI may contribute to your process, but the professional responsibility for the product is yours.
2.5 Adaptability
This policy is written in terms of principles rather than specific products. Particular tools will come and go; the obligations of competence, confidentiality, transparency, and independent judgment will not. When evaluating a new tool or a novel use, apply these principles rather than looking for the tool’s name on a list.
2.6 Equity and Access
Students have unequal access to GAI tools. Some can afford premium subscriptions; others cannot. This policy does not permit advantages to flow from that disparity. [Clinic Name] will [select one]:
- [Provide all students with access to the institutionally licensed tools described in Section 5.4, so that no permitted use depends on a personal subscription]
- [Limit permitted GAI use to tools that are available to all students at no cost]
3. Scope
3.1 Tools Covered
This policy applies to any software that uses generative AI to produce, edit, summarize, translate, transcribe, or analyze content, including:
- General-purpose large language models (e.g., ChatGPT, Claude, Gemini, Copilot — see Section 5.4 for the tier classification that determines what information may be entered into each)
- Legal-specific AI tools (e.g., Westlaw AI-Assisted Research, Lexis+ AI, CoCounsel)
- AI features embedded in other software (e.g., AI-powered drafting suggestions in Microsoft Word, AI summarization in email clients, browser-integrated AI assistants)
- AI-powered transcription or translation services
This policy does not apply to traditional legal research databases (Westlaw, Lexis) when used without their AI features, standard spell-check or grammar-check tools without generative capabilities, or basic search engines.
If you are unsure whether a tool qualifies, ask your supervisor before using it.
3.2 Tasks Covered
This policy applies whenever GAI is used in connection with any clinic matter, including but not limited to: legal research, factual investigation, drafting, editing, client communication, case strategy, preparation for hearings or interviews, and administrative tasks involving client information.
3.3 Assignment-Level Flexibility
Supervisors may impose more restrictive or more permissive AI rules for specific tasks or assignments. For example, a supervisor may prohibit AI use on a first-draft memo to ensure the student works through the analysis independently, while permitting AI-assisted revision on a later draft. When a supervisor sets an assignment-specific rule, that rule governs for that assignment even if it differs from the general permissions in Section 4. Supervisors will communicate assignment-level AI rules in writing before the assignment begins.
4. Permitted and Prohibited Uses
4.1 Permitted Uses
The following uses are permitted, subject to the data privacy (Section 5), verification (Section 6), and documentation (Section 7) requirements of this policy:
| Use | Conditions |
|---|---|
| Brainstorming and idea generation | No client-identifying information entered into the tool |
| Legal research | All citations independently verified in primary sources; legal-specific tools preferred |
| Drafting and editing assistance | Supervisor review required before any work product is shared with a client, filed, or sent outside the clinic |
| Summarizing or analyzing non-confidential materials | Public documents only (e.g., published opinions, statutes, regulations) |
| Preparing for client interactions | Interview/hearing prep questions only; no client-identifying information entered |
| [Pedagogical exercises as assigned] | As directed by supervisor for specific learning objectives |
4.2 Prohibited Uses
The following uses are prohibited:
- Entering any client-identifying information into a GAI tool that lacks institutional data protection agreements (see Section 5)
- Submitting any AI-generated or AI-assisted work product to a court, opposing party, government agency, or client without full supervisor review and approval
- Using GAI to perform tasks you could not competently evaluate yourself — if you cannot assess whether the output is correct, you should not use GAI for that task
- Relying on GAI-generated legal citations without independent verification in an authoritative source
- Using GAI to communicate directly with a client (e.g., drafting and sending a client email without supervisor review)
- Using personal GAI accounts for clinic work unless expressly authorized by your supervisor
- Using GAI in any manner that violates a court order, local rule, or tribunal requirement regarding AI disclosure
- Submitting AI-generated work without disclosure as though it reflects your own analysis — doing so is a form of misrepresentation that violates both this policy and the professional norms it models (see Section 2.4)
4.3 A Note on Automation Bias and Deskilling
GAI outputs are fluent, confident, and fast. These qualities make them persuasive — and dangerous. Research consistently shows that people over-rely on computer-generated output simply because it comes from a computer (a phenomenon called automation bias). In a clinical setting, automation bias can lead you to accept an incorrect legal standard, overlook a factual nuance, or adopt a strategic approach that sounds right but does not serve your client’s interests.
There is also a deskilling risk: if you delegate core analytical tasks to GAI before you have developed the skill those tasks are designed to build, you may never develop it. A student who uses GAI to draft every memo from scratch may graduate without learning how to write one independently.
To guard against both risks:
- Approach all GAI output with professional skepticism. Treat it as you would a junior associate’s first draft — potentially useful, but requiring your independent evaluation.
- Be especially cautious early in your clinical experience, when the skills being developed are foundational.
- If you find yourself unable to explain why the AI’s output is correct (or incorrect), that is a signal you should do the work yourself.
5. Data Privacy and Confidentiality
5.1 Governing Rules
Lawyers have an ethical obligation to protect client information. See ABA Model Rule of Professional Conduct 1.6; [State] RPC 1.6; ABA Formal Opinion 512 (2024). This obligation extends to information entered into GAI tools.
5.2 Prohibited Inputs
Never enter the following into any GAI tool unless the tool operates under an institutional data processing agreement that your supervisor has confirmed provides adequate protection:
- Client names, nicknames, or other identifying information
- Case numbers, docket numbers, or internal file identifiers
- Addresses, phone numbers, Social Security numbers, or other personal identifiers
- Specific facts of a client’s case that, alone or combined, could identify the client
- Financial records, medical records, immigration records, or other sensitive documents
- Attorney-client communications
- Work product reflecting case strategy or mental impressions
5.3 Anonymization Protocols
If you wish to use GAI to assist with a task that relates to a specific client matter:
- Strip all identifying information before entering any text into the tool. Replace client names with generic placeholders (e.g., “Client A,” “Landlord”). Remove dates, locations, case numbers, and any other facts that could identify the client.
- Assess whether anonymized facts remain identifying. In small communities or unusual fact patterns, even anonymized information may identify a client. When in doubt, consult your supervisor.
- Do not rely on the GAI tool’s privacy settings or “private mode” features as a substitute for anonymization. These features vary by provider and may not prevent data retention or use in model training.
5.4 Approved Tools
Not all GAI tools carry the same data privacy risk. [Clinic Name] classifies tools into three tiers based on their data protection profile. The tier determines what information may be entered and what approvals are required.
Tier 1 — Personal Consumer Tools (e.g., personal ChatGPT, free Claude)
These tools typically lack institutional data processing agreements. User inputs may be retained, used for model training, or accessible to the provider’s employees. Tier 1 tools may be used only for tasks that involve no client information whatsoever — even anonymized information should not be entered unless the student is confident anonymization is complete and the facts are not identifying. Examples of permissible Tier 1 use: researching a general legal concept, generating plain-language explanations of a statute, or brainstorming interview questions with no case-specific facts.
Tier 2 — Institutionally Licensed General-Purpose Tools (e.g., university-provided Copilot, university Gemini)
These tools operate under an institutional data processing agreement between the provider and the university. Data is generally not used for model training, and the university’s IT security office has reviewed the provider’s terms. Tier 2 tools may be used with properly anonymized client information subject to the anonymization protocols in Section 5.3. Students must use their institutional accounts, not personal accounts, when working on clinic matters.
Tier 3 — Enterprise Legal AI (e.g., Westlaw AI-Assisted Research, Lexis+ AI)
These tools are designed for legal practice and operate under contractual protections specific to confidential legal work. They draw on verified legal databases and provide source-linked citations. Tier 3 tools may be used with client information to the extent permitted by the provider’s terms and the supervising attorney’s judgment, though the general principle of minimizing unnecessary disclosure of client information still applies.
| Tier | Example Tools | Client Information Permitted? | Conditions |
|---|---|---|---|
| 1 — Personal consumer | Personal ChatGPT, free Claude | No — no client information of any kind | No pre-approval for general use; supervisor must authorize any task connected to a client matter |
| 2 — Institutional general-purpose | University Copilot, university Gemini | Anonymized only — per Section 5.3 protocols | Must use institutional account; follow clinic protocols |
| 3 — Enterprise legal | Westlaw AI, Lexis+ AI | Yes — within provider terms and supervisor judgment | Follow clinic protocols |
No GAI tools outside these three categories may be used for clinic work without advance supervisor approval. If you wish to use a tool not listed above, submit a written request to your supervisor explaining the tool, the proposed use, and the tool’s data privacy terms.
6. Verification Requirements
6.1 Governing Rules
Competent representation requires that a lawyer understand the tools they use and verify the accuracy of their work product. See ABA MRPC 1.1; [State] RPC 1.1; ABA Formal Opinion 512 (2024). GAI tools produce plausible-sounding text, not verified information. Outputs frequently contain fabricated citations, incorrect legal standards, jurisdictional errors, and outdated law presented as current.
6.2 Verification Checklist
Before any AI-assisted work product proceeds beyond the initial draft stage, the student must confirm:
- All legal citations have been located and verified in an authoritative primary or secondary source
- All statements of law have been checked for accuracy, currency, and jurisdictional applicability
- All factual assertions have been confirmed against the case file or other reliable sources
- The analysis reflects the student’s own professional judgment, not simply a restatement of AI output
- The work product has been reviewed for tone, clarity, and appropriateness for the intended audience
- The student can explain and defend every substantive assertion in the document
6.3 Supervisor Review
The supervising attorney must review all AI-assisted work product before it is:
- Sent to or shared with a client
- Filed with any court or tribunal
- Sent to opposing counsel, a government agency, or any third party
- Relied upon for case strategy decisions
The supervisor’s review encompasses both the substance of the work product and the appropriateness of the student’s AI use. See Section 7 for documentation requirements.
6.4 A Note on AI Detection Tools
This policy does not rely on AI-detection software to enforce its requirements. Current detection tools produce both false positives and false negatives at rates that make them unsuitable as primary enforcement mechanisms. Instead, compliance rests on the transparency, documentation, and authentication obligations described in this policy. Supervisors who suspect undisclosed AI use should address the concern through conversation and process review, not through detection software.
7. Documentation and Attribution
7.1 Process Documentation
Whenever you use GAI in connection with a clinic matter, you must retain and be prepared to produce:
- The prompt(s) you entered into the tool
- The complete output(s) the tool generated (unedited)
- The final work product incorporating or based on the AI output
- A reflective note identifying:
  - What changes you made to the AI output and why
  - What independent verification you performed
  - What the AI got wrong or what you disagreed with
  - What you learned from the interaction that you did not know before
This documentation serves two purposes. First, it enables meaningful supervisor review. Second — and equally important — it develops your ability to evaluate AI-generated work critically. The act of articulating what the AI contributed, what you contributed, and where the two diverged is itself a professional skill. Do not treat this as a compliance exercise; treat it as a thinking exercise.
7.2 Scaffolded Workflow for AI-Assisted Tasks
For substantial work product (memos, briefs, motions), supervisors are encouraged to structure AI-assisted work in stages that make the student’s reasoning visible:
- Independent analysis first. The student identifies the legal issues, develops a research plan, and forms a preliminary view before consulting GAI.
- AI-assisted development. The student uses GAI to test, extend, or refine the analysis — for example, by asking the tool to identify counterarguments, check for overlooked authorities, or suggest alternative framings.
- Critical evaluation. The student evaluates the AI output against the independent analysis, identifies discrepancies, and resolves them through their own judgment.
- Oral discussion. The supervisor reviews the work product with the student in conversation, asking the student to explain key choices and defend the analysis. This step ensures the student can authenticate the work and has not passively adopted AI output.
7.3 Attribution in Work Product
When AI-assisted work product is submitted to a court or tribunal, comply with any applicable disclosure rules. Where no specific rule governs, [Clinic Name]'s default position is [select one]:
- [Disclose material GAI assistance in the document or an accompanying certification]
- [No affirmative disclosure, provided the verification requirements of Section 6 and the documentation requirements of Section 7 are met]
7.4 Client Disclosure
[Clinic Name]’s approach to client disclosure of AI use is as follows: [select or combine as appropriate: disclosure in the engagement letter / disclosure whenever client information is entered into a GAI tool / disclosure upon client request]
Note: ABA Formal Opinion 512 does not impose a blanket disclosure obligation but identifies circumstances where communication obligations under MRPC 1.4 may require disclosure — including when client information is entered into a tool, when the client asks, or when the client cannot make an informed decision about the representation without knowing of the GAI use. Review the [State]-specific guidance for any additional state requirements.
8. Training Requirement
Before using any GAI tool in connection with clinic work, each student must:
- Complete the clinic’s AI orientation session or module, covering the contents of this policy, basic GAI capabilities and limitations, and data privacy protocols
- Demonstrate understanding by [completing a short assessment / acknowledging this policy in writing / participating in a supervised practice exercise — select as appropriate]
- Review ABA Formal Opinion 512 and [State] guidance on AI in legal practice
Supervisors are responsible for ensuring that students under their supervision have completed this training before authorizing GAI use. See ABA MRPC 5.1, 5.3; [State] RPC 5.1, 5.3. Supervisors should also maintain their own competence regarding GAI tools, including an understanding of the capabilities and limitations of tools students use. See ABA MRPC 1.1, Comment [8] (duty to keep abreast of changes in technology relevant to practice).
9. Error Correction and Incident Response
If a student or supervisor discovers that GAI use has resulted in an error — a fabricated citation in a filed document, client information entered into an unapproved tool, an inaccurate legal standard communicated to a client — the following steps apply:
- Notify the supervising attorney. A student who discovers the problem must notify the supervising attorney immediately.
- Assess the scope. Determine what the error was, who has seen or relied on the affected work product, and whether the error has been incorporated into any filing, communication, or advice.
- Determine correction obligations. If a filing contains a fabricated citation or incorrect legal standard, it must be corrected. The duty of candor to the tribunal (MRPC 3.3) may require amendment or supplemental filing. If a client received inaccurate advice, the client must be re-advised.
- Determine disclosure obligations. If client information was entered into an unapproved tool, assess the scope of the confidentiality breach and whether the client must be notified under MRPC 1.4 and [State] RPC 1.4.
- Document the incident. Record what happened, when it was discovered, and what corrective steps were taken.
- Update protocols. Determine whether the incident reveals a gap in this policy or in supervisory procedures and revise accordingly.
10. Violations and Integration with Academic Integrity
Violations of this policy will be addressed in the same manner as other breaches of clinic protocols and professional responsibility standards. This policy is part of [Clinic Name]’s broader professional and academic integrity framework — not a standalone document. A violation of this policy carries the same weight as other integrity violations, and the same processes apply.
Depending on the severity of the violation, consequences may include:
- Additional training and closer supervision
- Restriction or revocation of GAI use privileges
- Grade consequences as set forth in the clinic syllabus
- Referral to the law school’s academic integrity process
- In cases involving client harm or breach of confidentiality, referral to the appropriate faculty or administrative body
Undisclosed AI use is treated as a form of implicit misrepresentation: by submitting work without disclosure, the student represents a level of independent understanding and effort that may not be accurate. This is incompatible with the transparency obligations of this policy and the professional norms it models.
11. Acknowledgment
I have read and understood the [Clinic Name] Generative AI Policy. I agree to comply with its terms and to seek guidance from my supervisor when I am uncertain about any aspect of this policy.
I understand that regardless of any tools I use, I am responsible for authenticating my work product — meaning I can explain, defend, and take professional responsibility for every substantive element of what I submit.
Student Name (print):
Student Signature:
Date:
This policy was compiled by David S. Kemp on February 19, 2026, with the assistance of Claude Cowork (Opus 4.6 Extended), and draws on ABA Formal Opinion No. 512 (2024), the New Jersey Supreme Court Preliminary Guidelines on the Use of Artificial Intelligence, and guidance from the legal education literature, including Matthew Sag, AI Policies for Law Schools and John Bliss, Teaching Law in the Age of Generative AI. It is intended as a template and must be customized to reflect the specific needs, practice areas, and risk profile of each clinic. The information provided in this document does not, and is not intended to, constitute legal advice; it is for general informational purposes only.