1. If an agent commits fraud, who goes to prison?
No one, yet.
But prosecutors will target the designers, approvers, and supervisors who enabled it. Expect charges under existing law: wire fraud, securities violations, criminal negligence. The first corporate AI criminal case is likely within three years.
2. Can we be held liable for an agent’s crime we didn’t know about?
Yes.
If you deployed a high-autonomy system without adequate monitoring, courts may impute constructive knowledge. The standard: "What should a reasonable organization have known?" Lack of visibility is not a defence.
3. Will courts accept ‘the model decided’ as a defence?
No.
“Algorithmic determinism” fails in court. You are expected to design systems that avoid illegal outcomes. If your agent breaks the law, the question is: did you take reasonable steps to prevent it?
4. How do we prove we exercised ‘reasonable oversight’?
Document everything: approval chains, testing results, guardrail configurations, incident logs. Show that you applied the same rigour as for financial controls. Absence of process = absence of defence.
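What "document everything" can look like in practice: a minimal sketch (Python; the `OversightLog` class and its fields are hypothetical, not a named product or standard) of a hash-chained record of approvals, guardrail changes, and incidents. Chaining each entry to the previous one means the log itself can demonstrate it was not edited after the fact.

```python
import hashlib
import json
from datetime import datetime, timezone

class OversightLog:
    """Append-only log of agent approvals, guardrail changes, and incidents.

    Each entry is hash-chained to the previous one, so altering an earlier
    record invalidates every later hash.
    """

    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # genesis value for the chain

    def record(self, event_type, detail):
        """Append one event, e.g. record("approval", {"model": ..., "approver": ...})."""
        entry = {
            "ts": datetime.now(timezone.utc).isoformat(),
            "type": event_type,
            "detail": detail,
            "prev_hash": self._last_hash,
        }
        body = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(body).hexdigest()
        self._last_hash = entry["hash"]
        self.entries.append(entry)
        return entry

    def verify(self):
        """Recompute the chain; returns True only if no entry was altered."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if e["prev_hash"] != prev:
                return False
            digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if digest != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

The structure matters less than the properties: timestamped, append-only, tamper-evident, and covering the same ground as your financial controls.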
5. Can we patent AI-generated inventions?
Not currently. The United States Patent and Trademark Office (USPTO) and the European Patent Office (EPO) both require a human inventor.
However, you can patent the process or system that generated the invention. Ownership of the output depends on employment contracts and training-data rights.
6. How do we handle AI-generated copyrighted content?
Assume all output is contaminated until proven clean. Use generative provenance tools (e.g., C2PA), conduct IP sweeps, and maintain indemnification clauses with vendors. Litigation over synthetic content is rising.
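"Contaminated until proven clean" can be enforced mechanically. Below is a sketch of that gate in Python; the manifest format is hypothetical (inspired by, but not, the actual C2PA manifest schema), and `license_cleared` stands in for whatever your IP-sweep process records.

```python
import hashlib

def verify_asset_manifest(asset_bytes: bytes, manifest: dict) -> list:
    """Return a list of problems with a generated asset's provenance record.

    `manifest` is a hypothetical provenance record with keys:
    "sha256", "generator", "license_cleared". An empty list means the
    asset may proceed; any entry means it stays quarantined.
    """
    problems = []
    if manifest.get("sha256") != hashlib.sha256(asset_bytes).hexdigest():
        problems.append("hash mismatch: asset does not match its manifest")
    if not manifest.get("generator"):
        problems.append("no generator recorded: origin unknown")
    if not manifest.get("license_cleared"):
        problems.append("IP sweep not recorded: treat output as contaminated")
    return problems
```

The default is rejection: an asset with no manifest, or a manifest missing its IP-sweep flag, never ships.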
7. What disclosures are required regarding AI-related risks in regulatory and public filings?
Organizations must disclose material reliance on AI systems and associated risks in their mandatory risk and governance disclosures. This typically includes exposure to model failure, data quality issues, third-party dependencies, cybersecurity vulnerabilities, legal and compliance risk, and potential reputational harm. Failure to disclose material AI-related risks may result in regulatory scrutiny, enforcement action, or liability.
8. How do we manage cross-border data transfers with AI?
Assume all agent activity creates transfer risk. Implement data sovereignty by design: local instances, anonymization, and contractual safeguards.
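One building block of "sovereignty by design" is pseudonymizing records before they cross a border. A minimal sketch (Python; field names and the key-handling arrangement are illustrative assumptions): direct identifiers are replaced with keyed hashes, and the key never leaves the origin region, so the receiving side cannot re-identify anyone.

```python
import hashlib
import hmac

def pseudonymize(record: dict, secret_key: bytes,
                 pii_fields=("name", "email")) -> dict:
    """Replace direct identifiers with keyed hashes before cross-border transfer.

    Using HMAC rather than a plain hash means re-identification requires
    `secret_key`, which stays with the data controller in the origin region.
    Non-PII fields pass through unchanged.
    """
    out = dict(record)
    for field in pii_fields:
        if field in out:
            out[field] = hmac.new(
                secret_key, str(out[field]).encode(), hashlib.sha256
            ).hexdigest()
    return out
```

Because the mapping is deterministic for a given key, the transferred data still supports joins and analytics without exposing the underlying identities.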
9. Are our AI tools violating antitrust laws?
Yes, if they enable tacit collusion: pricing agents converging on supra-competitive rates, for example. Regulators are alert to emergent anti-competitive behaviour. Audit for coordination patterns.
10. How will AI affect M&A due diligence?
Acquiring AI systems means inheriting their history: biased training data, undocumented changes, latent risks. Demand full model lineage, training-data provenance, and incident logs. Treat AI like asbestos: inspect before acquisition.