M&A in the AI Space: Part 1 – Structuring Transaction Documents

By Mark Mahoney, Arik Broadbent, Noah Walters

The artificial intelligence (AI) industry has seen rapid growth in recent years, with AI companies raising more than CA$84 billion in venture financings in 2021 – almost double the amount raised in 2020 – and creating over CA$166 billion in value through initial public offerings (IPOs) and mergers and acquisitions (M&A).1 This spectacular rate of adoption and value generation has made it all the more important for lawyers working on M&A transactions to educate themselves on unique AI structuring considerations to inform the drafting of sound transactional documents and better protect the interests of their clients.

In this article, Part 1, we discuss key drafting considerations in AI M&A. In Part 2 (to be released in the coming weeks), we’ll take a step back and consider the important legal questions potential buyers should ask to properly assess the value and risk of AI target companies.

AI companies vs. traditional software companies

The key difference between AI companies and traditional software companies typically lies in the value of each company’s intellectual property (IP). A software company’s primary IP asset is usually the software code the company develops to perform a certain function. With AI companies, the primary IP asset can also be software code developed by the AI company (specifically, the proprietary model(s) developed to manipulate data); in many instances, however, the value derives from the AI company’s “data moat,” meaning the company’s ownership of, or exclusive right to, underlying datasets.

Therefore, to capture the true value of a target in AI M&A, the drafting of foundational transaction documents requires careful consideration and classification of the definitions, representations, warranties and other contractual terms governing the inputs and outputs of the AI model(s), including any fine-tuning on proprietary datasets by the third-party software companies that are often the main customers of such AI companies. Input from lawyers with experience drafting technology licensing agreements is essential, as is review by subject matter experts in privacy and IP.

How to tailor a purchase agreement to address AI

Prospective buyers must ensure that the definitions used in purchase agreements to describe the business and its assets are broad enough to capture the full value of the AI target company, including all of its technical inputs and outputs. Examples of definitions that can inform the structure of an AI purchase agreement include:

“AI Technologies” means deep learning, machine learning and other artificial intelligence technologies, including any and all (a) proprietary algorithms, software or systems that make use of or employ neural networks, statistical learning algorithms (such as linear and logistic regression, support vector machines, random forests, and k-means clustering), or reinforcement learning, and (b) proprietary embodied AI and related hardware or equipment.
“Company AI Products” means all products and services of the Company that employ or make use of AI Technologies.
“Scraped Dataset” means Training Data that was collected or generated using web scraping, web crawling, or web harvesting software, or any software, service, tool or technology that turns the unstructured data found on the web into machine-readable, structured data that is ready for analysis.
“Third-Party AI Product” means any product or service of a third party that employs or makes use of AI Technologies.
“Training Data” means training data, validation data, and test data or databases used to train or improve an algorithm.
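The “Scraped Dataset” definition above turns on a technical process: software converting unstructured web content into structured, machine-readable data. Purely as an illustration of that process (not legal guidance, and using a made-up page and class name), a minimal Python sketch of the kind of conversion the definition is meant to capture might look like:

```python
from html.parser import HTMLParser

class TitleExtractor(HTMLParser):
    """Toy scraper: pulls <h2> headings out of raw HTML into a Python list,
    i.e., turns unstructured markup into structured, machine-readable data."""

    def __init__(self):
        super().__init__()
        self._in_h2 = False
        self.titles = []

    def handle_starttag(self, tag, attrs):
        if tag == "h2":
            self._in_h2 = True

    def handle_endtag(self, tag):
        if tag == "h2":
            self._in_h2 = False

    def handle_data(self, data):
        # Record text only while inside an <h2> element
        if self._in_h2 and data.strip():
            self.titles.append(data.strip())

# Stand-in for a fetched page; a real scraper would download this over HTTP.
page = "<html><body><h2>Pricing</h2><p>...</p><h2>Contact</h2></body></html>"
parser = TitleExtractor()
parser.feed(page)
print(parser.titles)  # -> ['Pricing', 'Contact']
```

How a target assembled its datasets through processes like this, and under what website terms of use, is precisely what a well-drafted “Scraped Dataset” definition lets the parties identify and allocate risk around.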

Representations and warranties

As with any other purchase agreement, buyers will require the seller to make representations and warranties to isolate the specific risks associated with the business of the target company and to allocate the risk of loss between the buyer and the seller. From the buyer’s perspective, the agreement should compel the seller to make disclosures before the transaction is completed.

When the target is an AI company, the unique characteristics of AI systems that give them value and create risk for the buyer may not be sufficiently captured by the boilerplate representations and warranties that would be used for intellectual property or software in a typical merger or asset purchase agreement. Lawyers should pay careful attention to the risk profile of the AI company, and/or its capacity to produce “high risk” outputs, when they draft provisions designed to mitigate risk and, ultimately, allocate responsibility for the use of AI tools.

Examples of the type of representations and warranties that buyers should include are:

Ownership: The complex nature of AI systems suggests that there may not be one form of IP protection that can be extended to the entire product. Rather, different forms of IP protections may apply to certain areas of the system. For example, copyright may be used to protect source code, and trade secrets could cover confidential information. The buyer should ensure that they will be acquiring all the necessary rights to own and use the system.
Potential for malicious use: Buyers need to be aware of vulnerabilities in the AI systems that could lead to the system being hacked or abused, particularly when the system is used in high-stakes settings, such as automated control of critical infrastructure or health programs.
Quality of dataset: Depending on the nature of the data being collected, the AI system may require additional scrutiny. For example, AI systems that collect and process high volumes of sensitive personal data in areas like health diagnostics or personal finance may raise procedural and ethical questions and require additional compliance with privacy regulations.
Allocation of liability amongst suppliers and customers: Buyers should consider the potential legal bases on which the target may be exposed to liability – such as product liability, negligence, breach of contract, etc. – and incorporate provisions to mitigate such risk.
Compliance with laws and regulations: Not only do buyers need to be aware of the domestic requirements under privacy and technology legislation, but they must also consider national security and foreign investment rules, which, considering the global nature of technology businesses, and the impact that underlying AI company datasets can have on outputs, may be particularly significant.
Industry standards and best practices: Buyers should have an understanding of the standard requirements for AI systems and ensure that the seller is adhering to best practices, even those that are not formally mandated.
Privacy: This includes assessing how the company collects personal information, where that information resides, who it is shared with, and whether those practices actually comply with the company’s privacy policies and applicable laws. A big-picture assessment of the robustness of the company’s data security and information technology practices is especially important where the AI company’s value relies heavily on its underlying datasets.


Indemnification

Indemnification is another important area in which buyers must be diligent about allocating liability. Agreements will often indemnify the buyer against third-party claims that the target’s product or service infringes a third party’s IP rights, claims of unauthorized use of datasets to train AI models, and similar claims.

While AI has been around for a while, recent developments have increased pressure on industry stakeholders to be mindful not only of the law as it stands, but also of the ethical considerations and political discourse that hint at where the law is going. It is therefore important to work with service providers who stay alert to the key regulatory updates and industry developments that will inevitably shape M&A transactions moving forward.

If you have any questions about M&A transactions involving AI companies, please contact the authors, Mark Mahoney, Arik Broadbent and Noah Walters.

Special thanks to Helen Wang for her contributions to this article.