Monday, February 9, 2026

Artificial Intelligence and Your Business: What Founders Should Know Before Making Representations and Warranties

Artificial intelligence (AI) is no longer just a buzzword; it’s a practical business tool that companies of all sizes are integrating into operations, products, and services. From automating routine tasks to generating insights from data, AI is transforming how entrepreneurs build and scale their businesses. But with great opportunity come important legal and strategic considerations, especially when it comes to what founders say about AI in contracts, pitches, and marketing. I have already seen representations and warranties regarding the use of AI in real-life contracts in my practice! I guess the world really does evolve FAST (I still remember when I used a flip phone and a dictionary)!

AI Is Powerful, but It Isn’t Perfect

AI systems can do impressive things, but they have limitations. Most AI solutions rely on training data that may contain biases or gaps, can generate plausible-sounding but inaccurate outputs, and are highly dependent on the specific models and configurations used.

In other words, AI systems are not infallible authorities; they are tools that help humans make sense of data and tasks, not replacements for human judgment and oversight.

Open-Source vs. Closed-Source AI: Why the Distinction Matters for Confidentiality

Another important consideration for founders is how their business deploys AI: whether it relies on externally hosted, closed-source models (such as OpenAI-based tools) or on AI systems embedded in enterprise platforms like Microsoft Copilot, which are typically deployed within a controlled organizational environment.

Broadly speaking, closed-source models are proprietary systems where the underlying model and training data are not publicly available, while open-source or semi-open systems allow varying degrees of inspection, modification, or self-hosting.

From a legal and risk perspective, this distinction can raise confidentiality and data-use concerns. For example, using general-purpose, externally hosted AI tools may require careful attention to terms of service governing data retention, training use, and access by third parties, particularly when confidential or client-sensitive information is involved. By contrast, enterprise AI tools that are contractually restricted from using customer data for model training may offer stronger confidentiality protections, but only to the extent those protections are clearly documented and understood.

As a result, companies should avoid blanket statements about AI “security” or “confidentiality” and instead align their representations with the specific deployment model, contractual safeguards, and internal usage policies actually in place.

Closing Thoughts

AI presents enormous opportunities for innovation and growth, but it also challenges traditional legal assumptions about performance, accuracy, and accountability. As an entrepreneur, being thoughtful about how you present your AI capabilities in legal documentation is not just good practice; it’s essential to protecting your business as you scale.

If you’re working with AI in your product or service, consider discussing your representations and warranties with experienced counsel who understands both the technology and transactional risk frameworks.
