Firm News | October 1, 2025
Pennsylvania Supreme Court’s New GenAI Policies for Court Personnel: What Practitioners Need to Know
* Brad Smutek, an associate with Troutman Pepper Locke who is not admitted to practice law in any jurisdiction, also contributed to this article.
The Pennsylvania Supreme Court recently confronted the issue of generative artificial intelligence (GenAI) in an order establishing policies for the use of GenAI by court personnel.[1] The new policies authorize court personnel to use GenAI within certain boundaries.[2] The policies, which take effect December 8, 2025, provide insight into how practitioners in Pennsylvania courts should approach the use of GenAI.
Background
The policies define GenAI as a catch-all term for “algorithms and/or computer processes that use artificial intelligence to generate text, audio, or images based on user prompts.”[3] We note this definition omits GenAI’s powerful capability to produce video content. It also makes no mention of burgeoning agentic AI tools that can act on the user’s behalf (including, theoretically, without the user’s express knowledge or approval).[4] The policies also distinguish “secured” AI systems, which do not retain data or documents, from “non-secured” AI systems, which do.[5] There are, however, AI systems that do retain data or documents while still keeping information confidential.
Regardless, when it comes to using GenAI in legal practice, confidentiality is paramount. Court personnel have access to a significant volume of non-public and sensitive information. The National Center for State Courts has emphasized that publicly available GenAI tools “may not offer sufficient privacy guarantees for court-related information.”[6] For example, OpenAI’s public ChatGPT does not provide adequate confidentiality protections: OpenAI collects personal data (defined broadly to include user prompts and other uploaded content), and it may use that data to train its models or disclose it to third parties and government authorities.[7] On the other hand, GenAI tools built specifically for business or legal use may “provide appropriate safeguards for sensitive court data.”[8] For instance, OpenAI offers paid ChatGPT tools that claim to provide more robust confidentiality.[9] Similarly, Westlaw and Lexis each offer GenAI tools that promise to keep information secure and confidential.[10]
But confidentiality is not the only salient concern: GenAI tools also have a tendency to hallucinate, confidently providing responses—including case law citations—that prove to be inaccurate, misleading, or entirely fabricated. Earlier this year, for instance, Judge Kai N. Scott of the U.S. District Court for the Eastern District of Pennsylvania ordered sanctions against an attorney for citing hallucinated cases in motions to the court.[11] One database has counted hundreds of court matters in which a party cited hallucinated authority.[12] A few judges across the U.S. have even issued opinions relying on hallucinated caselaw.
Limitations aside, GenAI tools are becoming more commonly used by practitioners. But this growing adoption has not yet reached state judiciaries—a Thomson Reuters survey of state courts noted that “courts have generally been slow to adopt AI and generative AI.”[13] Seventy percent of survey respondents reported that their courts do not allow AI; even more said their courts provide no AI training at all.[14] Courts are hesitant to adopt GenAI tools for a variety of reasons, including fears of technology overreliance, inaccuracies, job loss, and security breaches.[15]
With this order, the Pennsylvania Supreme Court joins a growing list of high state courts issuing statewide guidance for court personnel. The supreme courts of Arizona, Connecticut, Delaware, Illinois, and Maryland have issued similar policies.[16] All emphasize the need to avoid entering confidential information into non-sequestered AI systems.[17] Some, like Maryland, list currently approved GenAI platforms. New Jersey took a different approach, declaring broad principles for GenAI use rather than specific policies.[18]
Some federal judges in Pennsylvania have already addressed the use of GenAI. For example, Judge Kelley B. Hodge allows parties to use GenAI as long as they comply with ethical rules and disclosure requirements.[19] Judge Michael M. Baylson requires parties to disclose their use of GenAI and certify their verification of each citation to the law.[20] But these judges are outliers: of the 31 district court and senior judges in the Eastern District of Pennsylvania, only three have addressed GenAI, according to Law360’s AI tracker;[21] only one district court judge in the Middle District of Pennsylvania has entered a similar order;[22] none in the Western District of Pennsylvania have done so. It is worth noting that some in the legal community debate the true necessity of these orders, arguing that “individualized standing orders are unnecessary, create unintended confusion, impose unnecessary burden and cost, and deter the legitimate use of GenAI applications that could increase productivity and access to justice.”[23] Those practitioners contend that existing ethical duties and rules of civil procedure create sufficient mechanisms for punishing lawyers who fail to take appropriate care and to oversee the accuracy of their court filings, regardless of how they are generated.[24]
The Guidelines
Turning to the new guidelines, court leadership must first approve the use of a particular GenAI tool within their court.[25] They must ensure, through vendor contracts and tool policies, that the GenAI tool will keep information “confidential and privileged.”[26] Court personnel should presume that information entered into non-secured systems will not be treated as confidential and privileged.[27] Before using GenAI, court personnel must become and remain knowledgeable about GenAI’s “capabilities and limitations,” like hallucinations, biases, and inaccuracies.[28]
So, how can court personnel leverage approved GenAI tools? They may use such GenAI tools to assist with a broad range of tasks, including summarizing documents, conducting preliminary legal research, and drafting and editing their own work. But the user remains ultimately responsible for the completeness and accuracy of their work product. Pennsylvania courts may also “provide interactive chatbots or similar services to the public and self-represented litigants.”[29]
Takeaways
Though the order applies to court personnel, it may signal new standards for practitioners in Pennsylvania courts moving forward. Attorneys may use suitable GenAI tools to help with preliminary research and drafting, but they should never take a backseat to the technology. The professional obligations to keep client confidences and to exercise candor toward the tribunal do not disappear when using GenAI. Practitioners should always diligently review a GenAI tool’s output for accuracy, and they should pay close attention to their GenAI tool’s confidentiality policies to ensure protection of client information—never assume that free, publicly available GenAI tools provide adequate confidentiality protections. As GenAI adoption grows, courts and firms will increasingly enact responsible AI use policies and procedures to help educate practitioners and promote compliance. Practitioners should expect that Pennsylvania courts will increase scrutiny of filings for any improper use of GenAI and react sternly to blatant violations.
[1] Order, In re: Interim Policy on the Use of Generative Artificial Intelligence by Judicial Officers and Court Personnel (No. 643) (Pa. Sept. 9, 2025), https://www.pacourts.us/assets/opinions/Supreme/out/Order%20Entered%20-%20106502825326189062.pdf?cb=1.
[2] Interim Policy on the Use of Generative Artificial Intelligence by Judicial Officers and Court Personnel, https://www.pacourts.us/assets/opinions/Supreme/out/Attachment%20-%20106502825326188944.pdf?cb=1 (“Interim Policy”).
[3] Id. at 1.
[4] See, e.g., The Rise—and Risks—of Agentic AI, PwC (July 17, 2025), https://www.pwc.com/us/en/industries/tmt/library/trust-and-safety-outlook/rise-and-risks-of-agentic-ai.html.
[5] Interim Policy at 1.
[6] Thomson Reuters Inst. & Nat’l Ctr. for State Courts AI Pol’y Consortium for L. & Cts., Principles and Practices for Using AI Responsibly and Effectively in Courts: A Guide for Court Administrators, Judges, and Legal Professionals 3 (2025), https://nationalcenterforstatecourts.app.box.com/s/b9f0iesp1k6au4ab3qwop4m71jazywjy.
[7] Privacy Policy, OpenAI (July 27, 2025), https://openai.com/policies/row-privacy-policy/.
[8] Principles and Practices for Using AI Responsibly and Effectively in Courts, supra note 6, at 3.
[9] Privacy Policy, supra note 7 (“This Privacy Policy does not apply to content that we process on behalf of customers of our business offerings”).
[10] See Murphy Foss, Tess Felter, & Connor Catalano, Answers to Questions about Using Generative AI to Practice Law, N.D.N.Y. Federal Court Bar Association (Oct. 7, 2024), https://ndnyfcba.org/answers-to-questions-about-using-generative-ai-to-practice-law/; How In-House Lawyers Can Use the Power of AI on Westlaw Precision with CoCounsel, Thomson Reuters (Oct. 8, 2024), https://legal.thomsonreuters.com/blog/how-in-house-lawyers-can-use-the-power-of-ai-on-westlaw-precision-with-cocounsel/.
[11] Bunce v. Visual Tech. Innovations, Inc., No. 23-1740, 2025 U.S. Dist. LEXIS 36454 (E.D. Pa. Feb. 27, 2025); see also Daniel Wu, Lawyers Using AI Keep Citing Fake Cases in Court. Judges Aren’t Happy., The Washington Post (June 3, 2025), https://www.washingtonpost.com/nation/2025/06/03/attorneys-court-ai-hallucinations-judges/.
[12] See AI Hallucination Cases, Damien Charlotin, https://www.damiencharlotin.com/hallucinations/.
[13] Thomson Reuters Inst. & Nat’l Ctr. for State Courts AI Pol’y Consortium for L. & Cts., Staffing, Operations, and Tech.: A 2025 Survey of State Courts 3 (2025), https://www.thomsonreuters.com/en-us/posts/wp-content/uploads/sites/20/2025/06/Staffing-Operations-and-Technology_2025-survey-of-State-Courts.pdf.
[14] Id. at 20.
[15] Id. at 21.
[16] See, e.g., Use of Generative Artificial Intelligence and Large Language Models (Ariz. Oct. 30, 2024), https://www.azcourts.gov/Portals/0/0/admcode/pdfcurrentcode/1-509%20Use%20of%20AI%20Tech%20and%20LLMs%2001_2025.pdf?ver=acMF-P2SER0dArzTQohBjQ%3D%3D; Artificial Intelligence Responsible Use Framework (Conn. Feb. 1, 2024), https://www.jud.ct.gov/faq/CTJBResponsibleAIPolicyFramework2.1.24.pdf; In re: Interim Policy on the Use of Generative AI by Judicial Officers and Court Personnel (Del. Oct. 21, 2024), https://courts.delaware.gov/forms/download.aspx?id=266848; Illinois Supreme Court Policy on Artificial Intelligence (Ill. Jan. 1, 2025), https://ilcourtsaudio.blob.core.windows.net/antilles-resources/resources/e43964ab-8874-4b7a-be4e-63af019cb6f7/Illinois%20Supreme%20Court%20AI%20Policy.pdf; Guidelines for the Acceptable Use of Artificial Intelligence (AI) Tools and Platforms (Md. Apr. 15, 2024), https://nationalcenterforstatecourts.app.box.com/s/bytljb1w4dxhdvmd23fv5bsnu94rmh3q (listing OpenAI’s ChatGPT, Anthropic’s Claude, Microsoft’s Copilot, and Google’s Gemini as approved GenAI platforms, subject to approved uses).
[17] A “non-sequestered AI system” is one “in which the vendor does not protect the confidentiality of user input or prompt data.” Use of Generative Artificial Intelligence and Large Language Models (Ariz. Oct. 30, 2024), https://www.azcourts.gov/Portals/0/0/admcode/pdfcurrentcode/1-509%20Use%20of%20AI%20Tech%20and%20LLMs%2001_2025.pdf?ver=acMF-P2SER0dArzTQohBjQ%3D%3D.
[18] Statement of Principles for the New Jersey Judiciary’s Ongoing Use of Artificial Intelligence, Including Generative Artificial Intelligence (N.J. Jan. 23, 2024), https://www.njcourts.gov/sites/default/files/courts/supreme/statement-ai.pdf.
[19] Judge Kelley B. Hodge, Judicial Policies and Procedures, https://www.documentcloud.org/documents/24747951-hodge_policy/.
[20] Judge Michael M. Baylson, Standing Order Re: Artificial Intelligence (“AI”) in Cases Assigned to Judge Baylson, (E.D. Pa. June 6, 2023), https://www.paed.uscourts.gov/sites/paed/files/documents/procedures/Standing%20Order%20Re%20Artificial%20Intelligence%206.6.pdf.
[21] Tracking Federal Judge Orders on Artificial Intelligence, Law360, https://www.law360.com/pulse/ai-tracker.
[22] See Judge Karoline Mehalchick, Civil Practice Order: Use of Generative Artificial Intelligence (M.D. Pa. Aug. 19, 2024), https://www.documentcloud.org/documents/25114516-judge-mehalchick-genai-order-81924/.
[23] Maura M. Grossman, Paul W. Grimm, & Daniel G. Brown, Is Disclosure and Certification of the Use of Generative AI Really Necessary?, 107 Judicature 69, 76 (2023), https://judicature.duke.edu/wp-content/uploads/sites/3/2023/10/AIOrders_Vol107No2.pdf.
[24] Id.
[25] “Court leadership” includes “the Chief Justice of Pennsylvania, the President Judge of each appellate court and judicial district, and the Court Administrator of Pennsylvania, or their designees.” Interim Policy at 1.
[26] Id. at 6.
[27] Id.
[28] Id. at 5.
[29] Id. at 4.