KEY TAKEAWAYS:
- ChatGPT does not protect confidential or sensitive information that users input into chats, raising legal and ethical concerns for lawyers and businesses.
- OpenAI’s Terms of Use and FAQ are beginning to address legal and ethical concerns, such as confidentiality and attorney-client privilege.
- There is a new opt-out feature that allows users to prevent their input from being used by OpenAI for training purposes.
Since I first wrote about ethics and ChatGPT for lawyers back in January 2023, OpenAI has updated their Terms of Use and FAQ in what appears to be a response to my previous concerns. These modifications are one small step for lawyers, but one giant leap for generative AI-kind.
This article contains an updated analysis of the legal and ethical issues implicated by the use of ChatGPT by lawyers. When viewed in the context of where they started and where they are now, the terms seem to suggest that ChatGPT is on its way to becoming a more viable tool for lawyers.
Summary of January 2023 Analysis
To summarize my prior analysis of OpenAI’s Terms of Use—which I also cover in the Contract Teardown Show—my key ethical concerns for lawyers using ChatGPT were related to:
- Confidentiality – the duty to keep information related to client representation confidential (MRPC 1.6); and
- Competence & Attorney-Client Privilege – the duty to stay abreast of changes in technology, and attorney-client privilege (MRPC 1.1, Comment 8).
In reviewing OpenAI’s Terms of Use, I cited the lack of terms and conditions safeguarding confidential information shared with ChatGPT as a reason that sharing information related to the client representation with ChatGPT would be an ethical violation. In reviewing OpenAI’s FAQ, I cited answers that directly cautioned users against sharing sensitive information, because that information could be viewed by humans, as a reason that inputting privileged information could be construed as a waiver of privilege.
In March 2023, OpenAI updated their Terms of Use and FAQ. As such, I am reevaluating the ethical considerations arising from a lawyer’s use of ChatGPT by looking at OpenAI’s modified Terms of Use and FAQ and comparing them against the Model Rules of Professional Conduct.
What’s changed?
1. Pop-up disclaimers now caution users against inputting sensitive information, just as the previously analyzed FAQ did. Specifically, the pop-up states: “Please don’t share any sensitive information in your conversations.” However, the FAQ no longer includes this same caution.
2. There’s a new user content opt-out method. The FAQ explains how users can opt out of sharing information and storing chat histories to address concerns over inputting sensitive information. Specifically, the FAQ states: “You can request to opt out of having your content used to improve our services at any time by filling out this form.”
This is a significant change in policy for users who felt limited by the lack of confidentiality protections. Lawyers who want to leverage the tool in a way that requires inputting some information related to the representation of a client now have a straightforward way to avoid an ethical violation: use the new opt-out feature.
3. Data privacy and security concerns related to personal data shared in the chats are acknowledged in the opt-out form, but not yet in the Terms of Use. The form claims that OpenAI removes personal data from the input if it intends to use such information to improve the tool. But alas, it is only a form and not a contract.
What changes are still needed?
OpenAI’s FAQ states that when humans do review chat histories, they are subject to confidentiality and security agreements. While that’s reassuring, the user (hypothetically a lawyer) is not a party to those agreements and, as of this writing, OpenAI does not agree to maintain the user’s confidential information in confidence. That would need to change to fully address the concerns around information that is shared with ChatGPT.
Ideally, we would like to see a mutually beneficial confidentiality clause in the Terms of Use, expressly protecting the user’s confidential information, along with some transparency about how OpenAI would go about complying with such a promise.
So, as long as this gap remains, lawyers need to be certain to opt out of sharing data and storing chat histories before they begin using the tool in connection with any client information. Privileged information must not be shared, given the lack of protection for the confidential information users input. While opting out of sharing data and storing chat history may mean you’re no longer risking an ethical breach, communicating your use of ChatGPT to a client is still a professional responsibility and, in some cases, a contractual obligation as well.
Principles to Guide AI Applications in Law
MIT has established a Task Force to create guiding principles for the use of AI applications in law. The Task Force has come up with seven principles and is currently seeking public comments. These principles address the concerns outlined above, as well as loyalty to the client and regulatory compliance.
The seven principles are:
- Duty of Confidentiality to the client in all usage of AI applications;
- Duty of Fiduciary Care to the client in all usage of AI applications;
- Duty of Client Notice and Consent to the client in all usage of AI applications;
- Duty of Competence in the usage and understanding of AI applications;
- Duty of Fiduciary Loyalty to the client in all usage of AI applications;
- Duty of Regulatory Compliance and respect for the rights of third parties, applicable to the usage of AI applications in your jurisdiction(s);
- Duty of Accountability and Supervision to maintain human oversight over all usage and outputs of AI applications.
To increase your competence around ChatGPT, register for a free CLE webinar hosted by Contract Nerds on August 10th, all about Using ChatGPT to Draft & Negotiate Contracts.
Checklist: Before You Start Using ChatGPT
Should you use ChatGPT? Is it okay for lawyers to use? The answer is: it depends. So, to help you think through the issues, here’s a checklist to go through before you start using ChatGPT in a legal context. Note that this is also relevant for contracts professionals who are contemplating using ChatGPT for contracting tasks.
- Review your corporate policy on the use of AI to ensure that you are not prohibited from using it in the scope of your employment;
- Opt out of sharing your information using OpenAI’s opt-out form;
- Turn off chat history by navigating (on desktop) to ChatGPT > Data Controls;
- Avoid sharing sensitive information, especially privileged information, until ChatGPT’s terms on confidentiality are updated to treat information you input into the system as confidential; and
- Always validate the output to guard against hallucinations and other errors.
My recommendation when using ChatGPT to perform legal or contracting work is to be cautious and follow relevant policies, but also be brave. This technology’s relevance will only continue to grow. There’s no time like the present to learn something new and this checklist should help you do so with the appropriate guardrails in place.