The introduction of generative artificial intelligence (AI) into products and services continues to present complex privacy, intellectual property, and other legal issues. Adobe's recent update to its Terms of Use highlights how critical it is for businesses looking to onboard and use AI tools to read those terms carefully. A clear understanding of AI-related terms of use and privacy policies helps mitigate potential risks.
Adobe’s Terms grant the company broad rights to access and review user content and to “use, reproduce, publicly display, distribute, modify, create derivative works based on, publicly perform, and translate” that content, including by using it to train its AI models.
Specifically, the update included language stating Adobe may “access, view, or listen to” user content using both “automated and manual methods,” and that automated systems may analyze user content using techniques such as machine learning to “improve [their] Service and Software and the user experience.” Additionally, the update allows Adobe to “use available technologies, vendors, or processes, including manual review to screen for certain types of illegal content… or other abusive content or behavior.” This latter statement mirrors language in the terms for Microsoft’s Azure OpenAI Service: for several months after enterprise users deployed that service, Microsoft was able to retain and manually review user prompts, including legally sensitive and potentially privileged information.
Adobe responded with several statements seeking to clarify that it “does not train Firefly Gen AI models on customer content” and that it will “never assume ownership of a customer’s work.” Those statements left unclear whether AI models other than the Firefly models might be trained on customer content, and how user content is stored and identified for review. On June 18, Adobe released further updated terms clarifying that the company will not train AI on user content stored locally or in the cloud.
While certain terms related to accessing and using customer content are necessary for providers to deliver their services effectively, companies need to remain mindful of the increasing awareness of, and sensitivity to, privacy and intellectual property risks introduced by artificial intelligence. Providers of AI-powered tools need to ensure their language is clear and unambiguous to avoid reputational risk and potential backlash.
For enterprise users, the layering of AI functionalities into existing tools can increase legal risks related to how personal information, copyrighted materials, nonpublic information, and other sensitive information is collected, accessed, used, stored, and disclosed. Users and business customers should revisit (or establish for the first time) their vendor risk assessment processes, contracting strategies, internal responsible AI use guidelines, and strategies for deploying such tools.
For anyone interested in learning more about legal risks related to the use, adoption, or development of AI, contact the authors for more information.