Moving Average Inc.

AI-Generated Content and IP Ownership in US Law

What founders need to know about copyright, patents, and trade secrets

John M. P. Knox

Founder

Caution: I am not a lawyer, my AI is not a lawyer, and this is not legal advice. Below is an edited research report I created using ChatGPT 5.2 on 2026-01-29; it may contain errors. Caveat emptor.


Executive Summary

Generative AI tools like ChatGPT, Claude, Midjourney, and others are increasingly used in corporate settings to create content and assist in innovation. However, under current U.S. law, intellectual property (IP) rights in AI-generated output are limited and depend on human involvement. Copyright law requires human authorship for protection, meaning works produced entirely by AI with no creative human input are not eligible for copyright[1].

Patent law likewise mandates that an inventor be a natural person, so inventions conceived solely by AI cannot be patented[3]. In contrast, trade secret law does not require human creation -- any valuable information kept secret (including AI-generated data or processes) can potentially qualify as a trade secret[5].

For businesses, these legal principles mean that ownership of IP generated by AI is not automatically secured by existing IP regimes. Companies should ensure that humans play a meaningful role in the creative or inventive process if they wish to obtain copyrights or patents on AI-assisted outputs. In scenarios where IP protection is unavailable (e.g. a fully AI-generated work), firms should consider alternative strategies such as treating the output as a trade secret or otherwise controlling its dissemination. The following report provides a deep dive into U.S. copyright, patent, and trade secret doctrines as they relate to generative AI, citing recent case law, agency guidance, and expert commentary. It concludes with practical recommendations for businesses to navigate IP ownership when using AI tools.

Recommendations for Businesses Using Generative AI

  • Involve Human Creativity in AI Outputs: Ensure a human makes a significant creative contribution to AI-generated content intended for copyright protection or patenting[1][7]. For example, have employees edit, curate, or direct AI outputs such that the human's original expression or inventive concept is evident. This helps satisfy the "human authorship" and "human inventorship" requirements.
  • Document and Attribute Human Contributions: Keep records of how humans contributed to AI-assisted works or inventions. If seeking copyright, claim only the portions created by humans and disclose the AI-generated parts when registering[1]. For patents, be prepared to explain each named inventor's role in conceiving the invention, especially if AI tools were used.
  • Leverage Trade Secret Protection: For valuable AI-generated outputs that cannot be copyrighted or patented, maintain them as trade secrets. Keep the information confidential and limit access on a need-to-know basis[5]. This can protect the material as long as it remains secret and continues to have economic value from its secrecy.
  • Update Company Policies and Training: Revise employee agreements, handbooks, and training materials to include guidelines on the use of generative AI[9]. Clearly prohibit inputting sensitive or proprietary data into public AI tools and specify which tools (if any) are approved for work use. Educate staff on the IP and confidentiality risks of AI.
  • Negotiate Strong Confidentiality with AI Providers: If using third-party AI platforms (e.g. via API or enterprise services), negotiate terms of use and non-disclosure agreements to safeguard your data[9]. Ensure the provider contractually acknowledges that all inputs and outputs containing your confidential information remain your property, and that the AI service will neither disclose that data nor incorporate it into publicly available models[9]. Opt out of data sharing or model-training uses of your inputs whenever possible.
  • Monitor and Control AI Usage: Implement technical measures to track employee use of AI tools on company devices and networks[9]. This might include monitoring traffic to AI tool websites or using DLP (data loss prevention) systems to flag large text/code submissions to AI. Conduct periodic audits and remind employees that misuse of AI (such as pasting secret code into ChatGPT) can lead to discipline.
  • Plan for IP Strategy Adjustments: Given the evolving legal landscape, build flexibility into your IP strategy. If an invention was significantly AI-generated and no clear human inventor exists, recognize that patent protection is currently unavailable -- focus on protecting the innovation through secrecy or rapid product development instead[3]. If you rely on AI for creative content, consider trademark or brand protection for distinctive elements, since the underlying AI-produced expression might be unprotectable.
  • Stay Informed and Seek Legal Counsel: Laws and regulations around AI and IP are quickly developing. Monitor guidance from the U.S. Copyright Office and USPTO, as well as legislative updates. Consult legal experts when deploying AI in creative or R&D projects to ensure compliance and to adjust contracts (e.g. work-for-hire agreements or inventor assignment clauses) in light of the latest best practices.
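To make the "Monitor and Control AI Usage" recommendation above more concrete, the sketch below shows a minimal, DLP-style pre-submission check that flags outbound AI prompts for review. The size threshold and patterns are hypothetical placeholders, not a vetted ruleset; real DLP products are far more sophisticated.

```python
import re

# Hypothetical threshold and patterns -- tune to your environment.
MAX_PROMPT_CHARS = 2000  # flag unusually large submissions
SENSITIVE_PATTERNS = [
    re.compile(r"\bCONFIDENTIAL\b", re.IGNORECASE),     # document markings
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),  # credentials
    re.compile(r"\b(def|class|function|#include)\b"),   # source-code keywords
]

def flag_prompt(text: str) -> list[str]:
    """Return the reasons an outbound AI prompt should be blocked or reviewed."""
    reasons = []
    if len(text) > MAX_PROMPT_CHARS:
        reasons.append("oversized submission")
    for pattern in SENSITIVE_PATTERNS:
        if pattern.search(text):
            reasons.append(f"matched pattern: {pattern.pattern}")
    return reasons
```

A check like this could run in an egress proxy or browser extension so that prompts are screened before they ever leave the corporate network.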

Dr. Stephen Thaler's AI-generated artwork "A Recent Entrance to Paradise," created by his Creativity Machine algorithm. U.S. courts and the Copyright Office have held that works produced autonomously by AI, without human creativity, are not eligible for copyright protection[14][1].

The Human Authorship Requirement

U.S. copyright law has a longstanding principle that only human creations are entitled to copyright. Neither the Constitution's IP Clause nor the Copyright Act explicitly defines "author," but courts have consistently interpreted these laws to require a human author for a valid copyright[1]. In famous pre-AI cases, courts denied copyright to works without human creators -- for example, a photograph taken by a monkey was deemed to have no human author and thus no copyright[1]. Similarly, the Compendium of U.S. Copyright Office Practices has long instructed that the Office will "not register works produced by a machine or mere mechanical process ... without any creative input or intervention from a human author"[1]. This human authorship doctrine is now being applied to generative AI outputs.

In 2022, Dr. Stephen Thaler tested this requirement by attempting to register an artwork that his AI system created autonomously, with no human involvement in the creative process[14]. The Copyright Office rejected the application, and subsequent litigation has firmly upheld the Office's position. In Thaler v. Perlmutter, the D.C. District Court in 2023 ruled that "human authorship is an essential part of a valid copyright claim."[1] The court reasoned that copyright's purpose is to incentivize human creativity, and only humans (not machines) are the intended beneficiaries of that incentive[1]. On appeal, the D.C. Circuit affirmed in March 2025, holding that the Copyright Act "requires all eligible work to be authored in the first instance by a human being."[1] The appellate court pointed to numerous provisions of the Act that only make sense if an author is human -- for example, copyrights vest initially in the author, last for 70 years after the author's death, and can be inherited by an author's widow or children[1]. An AI cannot meet these criteria (it has no lifespan, cannot sign legal transfers, etc.), so an AI cannot be the legal author of a work[14]. Notably, the courts also rejected arguments to treat the AI as an employee or co-author. Thaler's attempt to use the work-for-hire doctrine failed because no non-human can be a party to the necessary employment or commission contract[14]. Claims of joint authorship with an AI were likewise dismissed, as an AI lacks the conscious intent to merge contributions required for joint work status[14].

Bottom line: under current law, if a generative AI by itself produces the expressive elements of a work, that output has no human author and therefore no copyright protection. It effectively falls into the public domain, meaning no one can claim exclusive rights to it[1]. This remains true even if a human initiated the process (e.g. by entering a prompt) but did not contribute creativity to the final expression. The owner or user of the AI system does not automatically become the "author" of such content in the eyes of copyright law[14]. This has serious implications for businesses: any purely AI-generated text, image, or audio they create cannot be relied on for exclusive IP rights against competitors.

Acknowledging the rise of AI-generated material, the U.S. Copyright Office released formal guidance in March 2023 on registering works that contain AI-generated content[1]. This policy clarifies how applicants should delineate the human-authored portions of a work from those produced by AI. The Copyright Office's key criterion is creative control: "what matters is the extent to which the human had creative control over the work's expression."[1] If the AI was responsible for determining the expressive elements (for instance, the imagery, wording, or composition) with minimal human shaping, then those AI-produced elements are not considered human-authored[1]. On the other hand, if a human selects or modifies the AI output in a sufficiently creative way, or combines it with original human-authored material, the human's contributions can be protected by copyright[1].

Under this guidance, applicants for copyright are instructed to identify and exclude any AI-generated portions when submitting a work for registration[1]. In practice, this means a company or creator using AI should describe, in the application, which parts of the work they created (text, editing, arrangement, etc.) and "disclaim" the AI-generated parts (which will not be covered by the registration)[1]. Several recent Copyright Office registration decisions illustrate the application of these rules:

  • "Zarya of the Dawn" (Feb 2023): A graphic novel by Kristina Kashtanova that included AI-generated illustrations. The Office concluded that while the overall book (text and the compilation of text/images) could be registered as a creative work, the individual Midjourney-generated images themselves were not copyrightable[1]. Kashtanova was recognized as the author of the story and the selection/arrangement of elements, but she could not claim authorship of the raw AI artwork. This partial registration affirmed that human creativity in text and curation was protectable, but purely AI visuals were not[1].
  • "Théâtre D'opéra Spatial" (Sep 2023): An artwork generated by AI (which famously won a digital art competition). The human submitter made some modifications to the AI output but failed to disclose which portions were AI-generated. The Copyright Office, applying its new guidance, refused registration because the applicant did not properly identify and limit the claim to the human-edited elements[41][42]. In essence, even if an AI image is later edited by a human, the human must clearly delineate their own creative input; otherwise the Office will assume the bulk of the work is non-human and reject the claim.
  • "Suryast" (Dec 2023): An artwork created by an AI system that combined a photo taken by the human applicant with a famous painting's style (via an AI style transfer). The Office denied this registration, reasoning that the AI was "responsible" for the merged output -- the human supplied a base photo and chose a style, but the AI's algorithm did the substantive creative work of applying that style. Again, without a human controlling the expressive result, the work failed the authorship test[1].

These examples demonstrate that the threshold for human contribution is meaningful creativity. Simply providing a prompt or source material to an AI, or making minimal tweaks to an AI-generated piece, is usually not enough to claim authorship. The human must inject some original expression or arrange the AI material in a creative manner to have a protectable interest[1]. If that threshold is met, the human-authored components can be owned and enforced like any traditional copyrighted work (and in a work-for-hire context, a company can own those human-authored parts created by its employees). But any elements that are purely AI-generated remain unprotected -- effectively unowned -- under copyright law.

Implications and Strategies for Corporate Use

For companies leveraging generative AI, the current state of copyright law carries important ramifications. First, who "owns" AI-generated content? Under the law today, if the content is entirely machine-produced with no creative human input, then no one owns it in a copyright sense[1]. The content defaults to the public domain, meaning competitors (or anyone) could copy or reuse it without infringing copyright. This is a stark departure from traditional company-created content, which is usually owned by the employer via work-for-hire or assignment. Simply put, if no employee or contractor can legally be deemed the author, the company has no copyright.

Companies should therefore aim to have human authors in the loop. In practical terms, this might involve using AI to generate a draft or image, then having an employee substantively revise or curate that output. The final work can then be claimed as authored (at least in part) by the human. For example, a marketing team might use ChatGPT to get ideas for ad copy, but a human copywriter should refine the language and add creative flourishes -- yielding a text that is a human-AI hybrid, with the human contributions eligible for copyright. When registering such works, the company (or its counsel) should carefully follow the Copyright Office guidance: e.g. listing the human author for "text and editing" but excluding "AI-generated draft" or similar in the application[1]. By doing so, the company can secure copyright (owned either by the individual with a work-made-for-hire agreement, or directly by the company if authored by an employee within scope of employment) on the portions that merit protection, while being transparent about any unprotectable AI portions.

It's also crucial for businesses to manage expectations: content that remains largely AI-created (like an image wholly made by DALL·E with just a text prompt from an employee) cannot be monopolized. Competitors could potentially use a very similar prompt to generate a nearly identical image, and the original company would have no copyright claim to stop them. As a risk mitigation, companies may choose to keep high-value AI-generated content confidential instead of publishing it -- effectively using trade secret law to protect it (discussed more below). They should also review the terms of service of AI tools they use: many AI providers (such as OpenAI) include terms asserting that the user is assigned any rights in the output to the extent possible under law[44]. While such terms can contractually transfer any IP that does exist to the user, they cannot override U.S. copyright law's requirements. So even if an AI service "gives" the company ownership of the output, that ownership may be meaningless if the output lacks human authorship and thus has no copyright at all[45][46]. Businesses must accordingly treat purely AI-produced materials as unprotected assets -- either freely usable by all, or protectable only through secrecy or contractual agreements, not through exclusive copyrights.

Finally, if disputes arise (e.g. another party copies an AI-generated report or image from your company), be aware that enforcing rights will be challenging unless you can demonstrate human creative input. The current legal precedent (Thaler's case and others) would likely lead a court to reject a copyright infringement claim on a solely AI-created work, because the plaintiff never had a valid copyright to begin with[1]. Until laws change, corporate counsel should focus on proactive measures -- ensuring human involvement in creation, documenting authorship, and using technological and contractual protections -- to maximize the ownership of IP when using generative AI.

Patents and AI-Generated Inventions

Inventor Must Be a Natural Person

U.S. patent law has confronted a parallel question: can an AI be the inventor of a new invention? Under current statutes and case law, the answer is clear -- no, an AI cannot be listed as an inventor on a patent. The Patent Act specifies that a patent application must name the "inventor" or inventors, defined as the individual(s) who conceived the invention[3]. Courts and the U.S. Patent and Trademark Office (USPTO) have interpreted "individual" to mean a natural person (a human being), not an AI or other non-human entity[3][48]. This came to a head in the case of Thaler v. Vidal, where Dr. Stephen Thaler (the same individual from the copyright case) attempted to patent two inventions (a beverage holder and a light beacon) that he claimed were conceived by his AI system "DABUS" without any traditional human inventor[3][48]. Thaler listed DABUS as the sole inventor on the patent applications. The USPTO and courts uniformly rejected these applications on the basis that only humans can be inventors under U.S. law[3].

In 2022, the U.S. Court of Appeals for the Federal Circuit affirmed that under the Patent Act, an inventor must be a human person, and thus an AI cannot hold that title[3]. The court observed that throughout the patent laws, references to inventors use human-centric terms (like pronouns "himself/herself" and concepts like an inventor's oath), strongly implying Congress meant a flesh-and-blood individual[48]. The Federal Circuit's decision definitively held that inventions lacking any human contributor to their conception are unpatentable -- there is simply no valid inventor to name[3]. The U.S. Supreme Court declined to review the case in April 2023, leaving this interpretation as settled law[48].

It's worth noting that this limitation aligns with international trends: most other patent systems (UK, EU, etc.) have also refused to recognize AI as an inventor, although one outlier was South Africa, which granted a patent listing DABUS (under a more lenient formality process)[3]. But as far as U.S. law is concerned, a patent will not issue with an AI alone as the inventor. Any patent application requires at least one human inventor who contributed to the conception of the invention in the claims.

AI-Assisted Inventions: USPTO Guidance

While an AI can't be the sole inventor, what about inventions developed with the assistance of AI? Many companies are using AI tools to generate ideas, optimize designs, or run experiments that lead to inventions. The USPTO has recognized this scenario and in February 2024 issued formal Inventorship Guidance for AI-Assisted Inventions[7]. The guidance makes clear that not all AI involvement bars patentability. If a natural person has made a significant inventive contribution -- even if an AI tool was used in the process -- that person can and should be named as the inventor[7]. Patent protection may be obtained for such inventions, because the law only requires that a human be responsible for the inventive concept. The mere use of AI as a tool or collaborator does not disqualify the patent, as long as a human intellect actually conceived the invention's novel aspects[7].

According to the USPTO's guidance, examiners will presume that the listed inventors in an application are correct if they are natural persons[7]. The Office does not intend to scrutinize or question inventorship in most cases unless evidence in the record suggests that the named people did not actually invent the subject matter (for instance, if an applicant openly states the invention was generated by AI)[7]. Only in rare instances would the PTO inquire whether the contribution of the human was sufficient. The takeaway is that as long as at least one human is identified as an inventor, and no AI is formally named as an inventor, the patent system will proceed normally[7]. If an applicant were to name an AI or otherwise indicate that no human inventor exists, the application would be rejected for lacking a proper inventor (35 U.S.C. §§ 101, 115)[7].

The USPTO's 2024 examples provide some practical clarification: for instance, if an AI algorithm outputs a potential drug molecule design, and a scientist recognizes one of those outputs as a promising invention (and verifies it), the scientist is the inventor because they exercised judgment and contributed to the conception by selecting that particular compound for development. Conversely, if an AI is left to autonomously generate and test thousands of prototypes and one is chosen with minimal human insight, it becomes harder to identify a true human inventor. In such gray areas, the current guidance essentially urges: err on the side of naming a human who had any meaningful role, because otherwise you have an unpatentable result. In effect, the USPTO has adopted a "don't ask, don't tell" posture: it will not require applicants to disclose the degree of AI involvement as long as a human inventor is named and signs the required oath[64]. This policy (as of late 2025) encourages practitioners to fit AI-enabled inventions into the traditional framework by ensuring a human is credited with the inventive act[64]. While this may create a legal fiction in some cases (attributing authorship to a human who may have only guided or selected an AI's output), it is currently the only way to obtain patent protection[64].

Patent Ownership and Corporate Strategy

For businesses, the constraints on AI inventorship mean that any patentable innovation coming out of AI use must be tied to a human inventor. In corporate R&D, this usually isn't a problem because engineers and researchers use AI as a tool rather than an autonomous inventor. As long as your team can identify who conceived the invention (even if AI was used to help), you can list that person (or persons) on the patent application. Standard practice would then have that inventor assign the patent rights to the company, as with any employee invention. Companies should continue to use invention disclosure forms that ask researchers to detail how an invention was arrived at -- including whether AI was used -- so that legal can evaluate inventorship and ensure proper assignments are in place. Employment agreements should also explicitly state that any AI-assisted inventions are within scope of the employee's obligation to assign IP to the employer, to avoid any doubt that the company owns the rights once a patent is granted (noting, of course, that the patent will only be granted if a human is inventor).

On the flip side, if your business is developing inventions entirely via AI with minimal to no human inventive step, you face a dilemma: you cannot obtain a valid patent unless a human can legitimately be named inventor. Listing someone who didn't actually contribute to the idea would risk invalidating the patent (and could be deemed inequitable conduct). Therefore, truly autonomous AI-generated inventions are effectively unpatentable under current law[3]. This doesn't mean the invention has no protection -- but it means you must rely on other forms of IP or competitive advantage. For example, you might treat the invention as a trade secret (if it's not easily reverse-engineered) rather than disclosing it in a patent application[3]. In fact, legal experts explicitly advise that given these uncertainties, "patents are not the best vehicle to protect AI-generated inventions under the current framework... other forms of IP, such as trade secrets, should be considered"[3]. We'll discuss trade secrets in the next section, but from a patent strategy perspective, companies using AI should evaluate each innovation and ask: Was there a meaningful human insight here? If yes, proceed with patenting (with that human as inventor). If not, recognize that a patent likely isn't obtainable until laws change, and adjust plans accordingly (e.g., keep the invention confidential or publish it defensively).

It's also important for businesses to watch for legal developments. The debate over AI inventorship is not settled in a policy sense. The USPTO in 2023 solicited public comments on AI and inventorship[48], and there are ongoing discussions about whether patent law should be amended to accommodate AI-generated inventions in the future. Some scholars argue that denying patents in these cases could stifle innovation (since companies might choose secrecy over disclosure), while others contend that allowing AI inventors would undermine the human-centric premise of IP law. For now, however, the law is unambiguous: a patent must have a human inventor, and any AI contributions can only be claimed via that human's involvement. Companies should ensure their patent filing practices reflect this -- for instance, by avoiding any language in applications that suggests an AI "invented" something on its own, and by carefully crafting patent inventorship narratives to center on human decision-making and creativity.

Trade Secrets and AI

AI Outputs as Protectable Trade Secrets

Trade secret law offers a comparatively flexible and inclusive framework for protecting valuable information, including information generated by or with AI. Under the U.S. Defend Trade Secrets Act (DTSA) and state Uniform Trade Secrets Act (UTSA) provisions, a "trade secret" can be any form of information -- technical, business, scientific, etc. -- that derives economic value from not being generally known to others and is subject to reasonable efforts to maintain its secrecy. Notably, there is no requirement of human authorship or originality in trade secret law[5]. Unlike copyrights (which protect "original works of authorship") or patents (which require an inventor), trade secrets can encompass data or material regardless of how it was created, so long as it is secret and valuable. This means that AI-generated outputs can qualify as trade secrets on the same footing as human-generated information, provided the company treats them as confidential[5]. For example, if a model like ChatGPT produces an insightful strategy document or a piece of code and the company keeps that output internal and confidential, it could be a protectable trade secret of the company. Similarly, an AI-driven process or model (like a proprietary machine learning algorithm or the specific configuration of a generative model) can itself be a trade secret if it's not disclosed and offers a business advantage.

For corporate IP strategy, trade secrets are thus an attractive option to own and control AI-related IP. There's no filing or registration required -- protection is automatic as long as secrecy is maintained. If another party misappropriates (steals or improperly discloses) the secret, the company can seek legal remedies under DTSA or state law. Importantly, because trade secrets don't demand human inventiveness, they fill the gap when other IP rights fall short. As discussed, purely AI-created works have no copyright and AI-conceived inventions have no patent, but both can still be valuable information. Trade secret law is the tool to protect that value: it "does not limit protection to human-created information," so any information meeting the criteria can be shielded[5]. In practice, many companies are already relying on trade secret protection for AI-related assets -- for instance, the weights and training data of AI models are often kept secret, and any unique insights or designs output by AI can be guarded internally rather than published.

However, the trade secret route comes with conditions: the secret must truly remain secret (once it becomes public, protection is lost), and it must not be "readily ascertainable" by others through proper means. A potential challenge in the AI era is that what used to be costly or difficult for a competitor to assemble might now be quickly replicated by an AI system. Information that required significant human effort to compile (and thus was protectable as a secret) might become easily derivable from public data using AI tools, raising the question of whether it's still a "secret" in the eyes of the law[5]. Courts may eventually grapple with scenarios like an AI re-generating a competitor's secret formula or list from scratch. For now, though, the core principle stands: if your company has an AI-generated result that is valuable and not known outside the business, you can claim and enforce it as a trade secret, regardless of the lack of human creativity in its creation. The key is demonstrating both its secrecy and its value.

Confidentiality Risks with Generative AI

Using third-party generative AI tools (like cloud-based AI APIs or public web interfaces) introduces a significant confidentiality risk that companies must manage to preserve trade secret protection. It is a fundamental tenet of trade secret law that if you disclose your secret to a third party without protective measures, you may destroy its trade secret status[9]. When an employee inputs confidential information (e.g. proprietary code or sensitive data) into an AI like ChatGPT, that information is being sent to the AI provider's servers -- a third party. Unless the provider has robust terms assuring confidentiality, this act could be seen legally as revealing the information, thus forfeiting trade secret protection. Even if the provider doesn't intentionally publish it, the fact that it's outside the company's control can be enough to call secrecy into question. For instance, if the AI service is logging prompts or using them to train models that might regurgitate similar information to other users, the "secret" could slip out. This concern is not theoretical: studies have found that a significant portion of what employees paste into tools like ChatGPT includes confidential or sensitive data[9]. There have been high-profile incidents of employees unwittingly leaking proprietary code by asking ChatGPT to debug it, after which the data becomes part of the AI's knowledge base (accessible in some form to others).

To mitigate this, companies should set clear rules: if an AI tool is not approved for use with sensitive info, employees must not use it for those purposes. Some organizations have gone as far as banning the use of public generative AI at work until secure, private instances are available. Others allow use but only with "sanitized" inputs (no client data, no source code, etc.). The appropriate approach may vary, but reasonable measures must evolve to include AI-related precautions[5]. Going forward, what counts as "reasonable efforts" to maintain secrecy will likely include steps like restricting employee use of public AI platforms for anything confidential[5], deploying technical solutions to monitor or block such usage, and training staff on the dangers of prompt-based leaks[5]. If companies fail to update their secrecy protocols for the AI age, they might be found not to have taken adequate measures, jeopardizing their trade secret claims.

Another risk is the "readily ascertainable" prong of trade secret law. AI's ability to generate outputs from large public datasets could make some information that used to be secret now essentially reproducible. For example, if a trade secret is a compilation of publicly available information (say, a special market research report), an AI could potentially recreate a similar compilation with minimal effort, which might lead a court to say the information was readily ascertainable and not protectable[5]. Companies should be prepared to argue why their specific secret is still unique or not easily duplicated by AI. This may involve emphasizing proprietary aspects or the fact that an AI would not actually arrive at the same result without the secret inputs. Nonetheless, it's a developing area -- courts have yet to fully test how AI affects the trade secret analysis, but experts predict a higher bar for what isn't readily ascertainable in light of AI capabilities[5].

Given these challenges, businesses should proactively update their trade secret management programs to account for generative AI. Some best practices include:

  • Policy Updates: As noted in the recommendations, integrate AI-specific guidelines into confidentiality agreements and employee manuals[9]. For example, explicitly prohibit entering company secret information into any non-approved AI tools, and require employees to report any accidental exposure immediately[9]. Regularly remind staff of these rules and obtain acknowledgments (e.g., during annual training or exit interviews)[9].
  • Approved Tools & Environments: If employees can benefit from AI, consider providing a secure, vetted AI environment. This could be an on-premises AI system or a cloud AI service under a strong enterprise agreement. By channeling use to approved systems, companies can better control how data is handled. Ensure that any AI vendor used is contractually bound to keep your data confidential and not to use it for any purpose other than providing the service[9]. Some AI providers offer "data privacy" modes or opt-outs from model training -- take advantage of these.
  • Data Tagging and Filters: Mark sensitive documents with metadata or warnings if they should not be shared with AI systems. Implement filters that detect and block uploads of certain data (like source code or customer identifiers) to external websites. These technical controls can complement policy by reducing the chance of accidental leaks.
  • Monitoring and Auditing: Use network monitoring to flag heavy usage of AI web services or unusual data transfers that might indicate someone dumping company info into a chatbot[9]. Tools can log queries made to AI services from company devices, allowing post-hoc review. If an incident is suspected, forensic analysis can show if an employee asked an AI about confidential subjects[9]. Quick detection enables the company to respond (ask the AI provider to delete the data, seek an injunction if needed, etc.).
  • Internal AI Solutions: For highly sensitive projects, an alternative is deploying internal generative AI models (or using open-source models) that run within the company's secure environment. This way, data never leaves your control, and the outputs remain in-house. The trade-off is the cost and maintenance of such systems, but it might be warranted for "crown jewel" secrets.
  • Plan for Incidents: Despite best efforts, mistakes will happen. Have an incident response plan specifically for AI-related disclosure incidents. This might involve immediately contacting the AI provider to ensure data isn't retained, assessing the legal implications (did we lose trade secret protection, or can we argue it's still secure?), and taking remedial action like reminding the workforce of policies or disciplining repeat offenders. Being able to show a court that you responded robustly can support an argument that you still took reasonable measures overall (one lapse shouldn't kill a trade secret if handled properly).
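To make the "Data Tagging and Filters" idea above concrete, here is a minimal sketch of a pre-submission filter that scans outbound prompt text for sensitive patterns before it is allowed to reach an external AI service. The pattern names and regular expressions are illustrative assumptions, not a vetted ruleset; a real deployment would rely on a dedicated data-loss-prevention tool tuned to the company's own identifiers (client IDs, internal project names, and so on).

```python
import re

# Hypothetical patterns for a pre-submission filter. These are examples
# only; tune them to your organization's actual sensitive data.
SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|pk|AKIA)[A-Za-z0-9_-]{16,}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "source_code": re.compile(r"\b(?:def |class |#include|import )\w*"),
}

def check_prompt(text: str) -> list[str]:
    """Return the names of sensitive patterns found in the text.

    An empty list means the prompt passed the filter and may be
    forwarded to the external AI service.
    """
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

if __name__ == "__main__":
    prompt = "Please debug this: def charge(card): ... contact ops@example.com"
    violations = check_prompt(prompt)
    if violations:
        print("Blocked: prompt contains " + ", ".join(violations))
```

A filter like this would typically run in a browser extension, proxy, or API gateway, so the check happens before any data leaves the corporate network; blocking at the network edge complements, rather than replaces, the policy and training measures described above.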

It's also wise for companies to classify what types of AI-generated output they consider proprietary. For example, if an AI system internally generates analytics or optimizations unique to the company's operations, mark those outputs as confidential and store them securely, just as you would a human analyst's secret report. Assume AI outputs can be trade secrets and treat them accordingly[9]. Conversely, if you use AI to create content that you plan to make public (like a marketing image or blog post), recognize that once public, trade secret law no longer applies -- you'd be relying on copyright (which, as discussed, may not exist if the content was mainly AI-made). Thus, for public-facing AI-generated content, consider whether you should infuse more human creativity (to claim copyright), or if not, accept that the content cannot be owned and adjust your competitive strategy (perhaps focusing on speed to market or branding rather than exclusivity of the content).

In a corporate context, any trade secret developed or obtained in the course of employment typically belongs to the employer (assuming proper agreements are in place). This is just as true for AI-generated information. If an employee uses an AI tool at work and the result is something valuable and kept secret by the company, the company can claim ownership of that as a trade secret (just as it would for a report or code written by the employee). It's prudent to have invention assignment or confidentiality agreements explicitly state that outputs from tools or software used in the scope of employment are the property of the employer. That way, there's no ambiguity that, for example, an engineer's use of an AI coding assistant to generate code yields company-owned code (kept confidential if not released). The employee should not have any personal claim just because an AI did some of the work -- it's part of their job output.

One area to watch is if companies collaborate with external AI vendors or consultants to generate solutions. In those cases, ensure the contract spells out that the business retains trade secret ownership of any deliverables or outputs, and that the vendor has a duty to keep them secret. Without clear language, there could be arguments about joint ownership or about the AI firm reusing similar outputs for other clients. Protect your secrets by contract upfront.

Lastly, consider the lifespan of trade secrets. Unlike patents (which expire) or copyrights (whose protected works eventually enter the public domain at the end of the copyright term), trade secrets can last indefinitely if they remain secret. Generative AI can potentially keep producing new valuable insights for a company, and each of those can extend the company's competitive advantage as long as they don't leak. This makes robust trade secret management all the more crucial in the AI era -- it's not a one-time effort but an ongoing process of identification, classification, and protection of a constantly evolving set of information. Companies that excel at this will be able to harness AI while still preserving exclusive advantages, whereas those that are careless might find their AI-driven innovations quickly copied or lost to the public domain.

Conclusion

The advent of generative AI presents both opportunities and challenges for intellectual property in the corporate world. U.S. law, as it currently stands, strongly anchors IP rights to human creators -- a principle that leaves purely AI-generated works outside the traditional protections of copyright and patent. Businesses must navigate this reality by ensuring human involvement in creative and inventive processes, and by leaning on trade secret law and contractual safeguards to fill the gaps. While there are ongoing debates and the possibility of legal reforms on the horizon, companies cannot assume the law will automatically catch up to technology. A prudent corporate strategy today will acknowledge the limitations (e.g., no copyright for fully AI works, no patents for AI-only inventions) and implement the recommendations outlined above to secure and protect IP to the fullest extent possible. By combining human creativity with AI efficiency, and by rigorously protecting confidential outputs, businesses can enjoy the benefits of generative AI without forfeiting ownership and control over their valuable intangible assets. The landscape may evolve, but a proactive and informed approach will ensure that companies remain on solid legal footing as they innovate with AI.

Sources: The information in this report is drawn from U.S. statutes, case law, agency guidance, and expert commentary, including recent court decisions (Thaler v. Perlmutter, Thaler v. Vidal), U.S. Copyright Office publications[1], USPTO memoranda[7], and analyses by legal practitioners[14][3]. These sources are cited throughout using bracketed numbers (e.g., [1]) for reference. The report focuses exclusively on U.S. law as of 2025 and will need to be updated if significant legal changes occur.


References

[1] Congressional Research Service - AI and Copyright Law
[3] Akin Gump - Federal Circuit: Inventor Must Be Human
[5] Beck Reed Riden - Trade Secrets in the AI Era
[7] USPTO - Inventorship Guidance for AI-Assisted Inventions
[9] Davis Polk - Safeguarding Trade Secrets with Generative AI
[14] Baker Donelson - AI Cannot Solely Author Copyrightable Works
[44] OpenAI Terms of Use
[45] Reddit Discussion on ChatGPT Output Ownership
[46] Legal Analysis of ChatGPT Output Ownership
[48] ArentFox Schiff - AI Cannot Be Inventor Under Patent Act
[64] Patently-O - USPTO Inventorship Policy

Want to Talk?

Schedule a fixed-fee micro-consultation with John.