7 Steps to Cut Your Institution’s ChatGPT Risk
3/8/2023
Chances are pretty good your employees are already using a large language model (LLM) for tasks ranging from idea generation and fact-checking to writing, editing, and producing proprietary code. ChatGPT is the most notable LLM, but there are dozens of others, with more on the horizon.
In a recent blog post, law firm Debevoise & Plimpton outlined the main risks of LLMs and suggested ways to reduce them when these services are used at work.
Risks include:
- Quality control risks (ChatGPT and the like can generate wildly inaccurate answers to some queries).
- Contractual and privacy risks (sharing confidential information in a query is sharing it with a third party).
- Consumer protection risks (e.g., when consumers interact with ChatGPT-driven chatbots).
- Intellectual property risks (some ChatGPT outputs may not be copyrightable).
The firm suggests several steps to reduce risk:
- Creating a risk rating for each category of use.
- Documenting all uses and reporting this inventory to a team charged with tracking and assigning a risk rating to each use (a minimal sketch of such an inventory appears after this list).
- Internal labeling for some uses, flagging content as created by an LLM so internal reviewers apply extra scrutiny.
- Clearly identifying content as created by an LLM when sharing these outputs outside the firm.
- Maintaining records of the prompts used and the time that content was generated, especially for high-risk uses (see the logging sketch after this list).
- Periodic training for employees on acceptable and unacceptable uses.
- Using tools to monitor whether information was generated by an LLM and, if so, whether it complies with company policy.
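As a rough illustration of the inventory and risk-rating steps, here is a minimal sketch in Python. The use categories, risk tiers, and entries below are hypothetical assumptions for illustration, not anything prescribed in the firm's post; an institution would define its own.

```python
from dataclasses import dataclass
from enum import Enum


class Risk(Enum):
    """Hypothetical risk tiers; each institution would define its own."""
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


@dataclass
class LLMUse:
    """One entry in an institution's inventory of LLM uses."""
    description: str  # what employees use the LLM for
    owner: str        # team or person responsible for the use
    rating: Risk      # rating assigned by the review team


# Example inventory entries (illustrative only).
inventory = [
    LLMUse("Brainstorming marketing copy", "Marketing", Risk.LOW),
    LLMUse("Drafting client-facing answers", "Support", Risk.HIGH),
]

# High-risk uses are the ones that warrant extra controls
# such as record-keeping, labeling, and human review.
for use in inventory:
    if use.rating is Risk.HIGH:
        print(f"High-risk use needing extra controls: {use.description}")
```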
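For the record-keeping step, a simple append-only audit log is one possible approach. The sketch below is an assumption about what such a log might contain (the field names and file name are made up, not a standard), and `generate` stands in for whichever LLM client an institution actually uses.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

AUDIT_LOG = Path("llm_audit_log.jsonl")  # hypothetical log location


def log_generation(prompt: str, output: str, model: str) -> None:
    """Append one prompt/response pair to an append-only JSONL audit log."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model,
        "prompt": prompt,
        "output": output,
        "llm_generated": True,  # flag supporting internal/external labeling
    }
    with AUDIT_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")


# Usage: wrap whichever client call your institution uses.
# output = generate(prompt)  # hypothetical LLM call
# log_generation(prompt, output, model="example-model")
```

A log like this also supports the labeling steps above, since the `llm_generated` flag travels with the content when it is reviewed or shared.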