As companies evaluate potential mergers and acquisitions, joint ventures, or sale transactions, artificial intelligence (AI) must now be part of the discussion. AI can significantly impact a company’s valuation and risk profile. Buyers must understand not only how a target company uses AI but also whether that use is lawful and supported by an appropriate risk management framework and governance structure. Regulators worldwide are introducing new frameworks—such as the European Union’s AI Act and emerging US state-level laws and regulations—that impose obligations related to transparency, disclosure, data protection, and accountability. Noncompliance can result in substantial fines or reputational damage. For sellers, a proactive AI readiness assessment can enhance deal value. Documenting compliance, implementing responsible AI policies, and preparing disclosures for potential buyers can streamline diligence and build trust in the transaction. Further, if the transaction contemplates representation and warranty insurance, the target’s use of AI will be evaluated as part of the insurer’s underwriting process.
As you evaluate your own company’s use of AI or a target’s, consider the following areas that may influence the transaction: (1) governance and oversight, (2) regulatory compliance and privacy, (3) proprietary data protection, and (4) product liability.
Governance and Oversight—An established AI governance framework signals maturity and control. Companies that maintain formal oversight through AI governance committees, internal policies, and designated officers, such as an AI safety officer, tend to have better visibility into where and how AI is used and are better positioned to manage compliance and ethical risks. Diligence should therefore confirm whether the target has a documented internal governance structure that oversees AI use and monitors for issues such as bias, improper data use, and security vulnerabilities.
Regulatory Compliance and Privacy—Because AI systems rely heavily on data, privacy compliance and data security should be examined closely. Diligence should include a review of how the company collects, processes, and protects the personal and proprietary data used to train or operate its AI systems. Areas of heightened concern include the collection and use of data about individuals’ health, finances, identity, and location, as well as anything that can be inferred from that data, whether within a single data set, across a series of outputs, or in combination with other information. Public companies must also be careful not to misstate the degree or effect of their AI use, a practice known as “AI washing”; the Securities and Exchange Commission (SEC) has cautioned that overstating a company’s use of AI or its significance to the business may mislead investors. Public companies should also assess whether AI-related risks, use cases, or limitations are material enough to require disclosure in SEC filings or risk factor statements.
Proprietary Data and IP Protection—Both the data sets and the AI model itself can hold proprietary value. Data sets and algorithms may be reorganized or reclassified in many ways, and any adaptation of a data set or change to an algorithm from its original licensed state should be treated as proprietary. Businesses need to understand the agreements underlying their AI-enabled tools to ensure that their intellectual property rights in the data inputs and outputs are protected. During diligence, buyers should also verify that the target has implemented measures to protect its proprietary data and intellectual property from misuse or unauthorized disclosure, including outflow or leakage to third-party AI models or data sets. Likewise, introducing proprietary data or copyrighted information through generative AI prompts may, under certain circumstances, result in the unintended loss of trade secret and copyright protection. Finally, using AI in the inventive process risks non-patentability absent proper documentation of the inventive activities.
Product Liability—AI-related product liability exposure can arise from system failures, inaccuracies, or the misuse of generative AI tools. These risks are heightened when AI outputs influence safety systems, critical infrastructure, or consumer-facing products. A fundamental step in establishing suitable AI governance and proper risk management is assessing risk within the intended context of AI use. Certain high-risk areas require special consideration when they involve AI-based decision making or AI-initiated system actions, including eligibility for and access to public services and benefits, hiring and employment, educational assessment and discipline, children’s entertainment, consumer lending and credit transactions, healthcare, public safety, and critical infrastructure systems and products such as security, safety, transportation, and utility systems.
We have prepared a practical AI Due Diligence Checklist that highlights key considerations when evaluating AI in mergers and acquisitions. You can download the PDF here. If you have questions about assessing AI risks or tailoring diligence requests for a specific deal, the McCarter & English Artificial Intelligence team is here to help. Our interdisciplinary team provides guidance on the regulatory, governance, and commercial aspects of AI and represents clients in disputes involving AI.
