Moving Average Inc.

AI Work Made for Hire: Who Owns Employee AI-Generated Content?

What founders need to know about protecting AI-generated intellectual property

John M. P. Knox

Founder

Caution: I am not a lawyer, my AI is not a lawyer, and this is not legal advice. Below is an edited research report I created using ChatGPT 5.2 on 2026-1-29 and Claude Code on 2026-3-8; it may contain errors. Caveat emptor.


Executive AI Roundtable

AI governance and IP strategy come up constantly in my weekly closed-door roundtable for founders and C-level leaders. No vendors, no pitches — just operators comparing notes.


Executive Summary

Generative AI tools like ChatGPT, Claude, Midjourney, and others are increasingly used in corporate settings to create content and assist in innovation. However, under current U.S. law, intellectual property (IP) rights in AI-generated output are limited and depend on human involvement. Copyright law requires human authorship for protection, meaning works produced entirely by AI with no creative human input are not eligible for copyright[1].

Patent law likewise mandates that an inventor be a natural person, so inventions conceived solely by AI cannot be patented[3]. In contrast, trade secret law does not require human creation -- any valuable information kept secret (including AI-generated data or processes) can potentially qualify as a trade secret[5].

For businesses, these legal principles mean that ownership of IP generated by AI is not automatically secured by existing IP regimes. Companies should ensure that humans play a meaningful role in the creative or inventive process if they wish to obtain copyrights or patents on AI-assisted outputs. In scenarios where IP protection is unavailable (e.g. a fully AI-generated work), firms should consider alternative strategies such as treating the output as a trade secret or otherwise controlling its dissemination. The following report provides a deep dive into U.S. copyright, patent, and trade secret doctrines as they relate to generative AI, citing recent case law, agency guidance, and expert commentary. It concludes with practical recommendations for businesses to navigate IP ownership when using AI tools.

Critically, the work-made-for-hire doctrine does not apply to AI-generated output — even when employees create it using company tools — because AI cannot be an employee or party to a contract under U.S. law. Companies need a clear AI IP policy to fill this gap.

Looking for a practical checklist? See How Should Companies Protect IP When Using Generative AI? for actionable steps your leadership team can implement today.

Building an AI IP Policy: Recommendations for Businesses

  • Involve Human Creativity in AI Outputs: Ensure a human makes a significant creative contribution to AI-generated content intended for copyright protection or patenting[1][7]. For example, have employees edit, curate, or direct AI outputs such that the human's original expression or inventive concept is evident. This helps satisfy the "human authorship" and "human inventorship" requirements.
  • Document and Attribute Human Contributions: Keep records of how humans contributed to AI-assisted works or inventions. If seeking copyright, claim only the portions created by humans and disclose the AI-generated parts when registering[1]. For patents, be prepared to explain each named inventor's role in conceiving the invention, especially if AI tools were used.
  • Leverage Trade Secret Protection: For valuable AI-generated outputs that cannot be copyrighted or patented, maintain them as trade secrets. Keep the information confidential and limit access on a need-to-know basis[5]. This can protect the material as long as it remains secret and continues to have economic value from its secrecy.
  • Update Company Policies and Training: Revise employee agreements, handbooks, and training materials to include guidelines on the use of generative AI[9]. Clearly prohibit inputting sensitive or proprietary data into public AI tools and specify which tools (if any) are approved for work use. Educate staff on the IP and confidentiality risks of AI.
  • Negotiate Strong Confidentiality with AI Providers: If using third-party AI platforms (e.g. via API or enterprise services), negotiate terms of use and non-disclosure agreements to safeguard your data[9]. Ensure the provider contractually acknowledges that all inputs and outputs containing your confidential information remain your property, and that the AI service will neither disclose that data nor incorporate it into publicly available models[9]. Opt out of data sharing or model-training uses of your inputs whenever possible.
  • Monitor and Control AI Usage: Implement technical measures to track employee use of AI tools on company devices and networks[9]. This might include monitoring traffic to AI tool websites or using DLP (data loss prevention) systems to flag large text/code submissions to AI. Conduct periodic audits and remind employees that misuse of AI (such as pasting secret code into ChatGPT) can lead to discipline.
  • Plan for IP Strategy Adjustments: Given the evolving legal landscape, build flexibility into your IP strategy. If an invention was significantly AI-generated and no clear human inventor exists, recognize that patent protection is currently unavailable -- focus on protecting the innovation through secrecy or rapid product development instead[3]. If you rely on AI for creative content, consider trademark or brand protection for distinctive elements, since the underlying AI-produced expression might be unprotectable.
  • Stay Informed and Seek Legal Counsel: Laws and regulations around AI and IP are quickly developing. Monitor guidance from the U.S. Copyright Office and USPTO, as well as legislative updates. Consult legal experts when deploying AI in creative or R&D projects to ensure compliance and to adjust contracts (e.g. work-for-hire agreements or inventor assignment clauses) in light of the latest best practices.
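The "Monitor and Control AI Usage" recommendation above can be sketched as a minimal, DLP-style pre-send check. This is an illustrative sketch, not a product: `MAX_PROMPT_CHARS` and `CONFIDENTIAL_MARKERS` are hypothetical policy values, and a real DLP system would inspect network traffic at a gateway rather than scan a single string.

```python
import re

# Hypothetical policy thresholds -- tune these to your organization.
MAX_PROMPT_CHARS = 4000
CONFIDENTIAL_MARKERS = [
    r"\bCONFIDENTIAL\b",
    r"\bINTERNAL USE ONLY\b",
    r"\bTRADE SECRET\b",
]

def flag_outbound_prompt(text: str) -> list[str]:
    """Return reasons this prompt should be blocked or escalated
    for review before it is sent to an external AI service."""
    reasons = []
    # Large pastes often indicate whole documents or source files.
    if len(text) > MAX_PROMPT_CHARS:
        reasons.append(f"prompt exceeds {MAX_PROMPT_CHARS} characters")
    # Confidentiality labels are a cheap, high-signal tripwire.
    for pattern in CONFIDENTIAL_MARKERS:
        if re.search(pattern, text, re.IGNORECASE):
            reasons.append(f"matched confidentiality marker: {pattern}")
    return reasons
```

A proxy or browser extension could call `flag_outbound_prompt` on any text bound for an AI service and block the request, or route it to a reviewer, whenever the returned list is non-empty.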

Dr. Stephen Thaler's AI-generated artwork "A Recent Entrance to Paradise," created by his Creativity Machine algorithm. U.S. courts and the Copyright Office have held that works produced autonomously by AI, without human creativity, are not eligible for copyright protection[14][1].

The Human Authorship Requirement

U.S. copyright law has a longstanding principle that only human creations are entitled to copyright. Neither the Constitution's IP Clause nor the Copyright Act explicitly defines "author," but courts have consistently interpreted these laws to require a human author for a valid copyright[1]. In famous pre-AI cases, courts denied copyright to works without human creators -- for example, a photograph taken by a monkey was deemed to have no human author and thus no copyright[1]. Similarly, the U.S. Copyright Office's Compendium of U.S. Copyright Office Practices has long instructed that it will "not register works produced by a machine or mere mechanical process...without any creative input or intervention from a human author"[1]. This human authorship doctrine is now being applied to generative AI outputs.

Dr. Stephen Thaler tested this requirement by attempting to register an artwork that his AI system had created autonomously, with no human involvement in the creative process[14]. The Copyright Office rejected the application (issuing its final refusal in 2022), and subsequent litigation has firmly upheld the Office's position. In Thaler v. Perlmutter, the D.C. District Court in 2023 ruled that "human authorship is an essential part of a valid copyright claim."[1] The court reasoned that copyright's purpose is to incentivize human creativity, and only humans (not machines) are the intended beneficiaries of that incentive[1]. On appeal, the D.C. Circuit affirmed in March 2025, holding that the Copyright Act "requires all eligible work to be authored in the first instance by a human being."[1] The appellate court pointed to numerous provisions of the Act that only make sense if an author is human -- for example, copyrights vest initially in the author, last for 70 years after the author's death, and can be inherited by an author's widow or children[1]. An AI cannot meet these criteria (it has no lifespan, cannot sign legal transfers, etc.), so an AI cannot be the legal author of a work[14]. Notably, the courts also rejected arguments to treat the AI as an employee or co-author. Thaler's attempt to use the work-for-hire doctrine failed because no non-human can be a party to the necessary employment or commission contract[14]. Claims of joint authorship with an AI were likewise dismissed, as an AI lacks the conscious intent to merge contributions required for joint work status[14].

Bottom line: under current law, if a generative AI by itself produces the expressive elements of a work, that output has no human author and therefore no copyright protection. It effectively falls into the public domain, meaning no one can claim exclusive rights to it[1]. This remains true even if a human initiated the process (e.g. by entering a prompt) but did not contribute creativity to the final expression. The owner or user of the AI system does not automatically become the "author" of such content in the eyes of copyright law[14]. This has serious implications for businesses: any purely AI-generated text, image, or audio they create cannot be relied on for exclusive IP rights against competitors.

Acknowledging the rise of AI-generated material, the U.S. Copyright Office released formal guidance in March 2023 on registering works that contain AI-generated content[1]. This policy clarifies how applicants should delineate the human-authored portions of a work from those produced by AI. The Copyright Office's key criterion is creative control: "what matters is the extent to which the human had creative control over the work's expression."[1] If the AI was responsible for determining the expressive elements (for instance, the imagery, wording, or composition) with minimal human shaping, then those AI-produced elements are not considered human-authored[1]. On the other hand, if a human selects or modifies the AI output in a sufficiently creative way, or combines it with original human-authored material, the human's contributions can be protected by copyright[1].

Under this guidance, applicants for copyright are instructed to identify and exclude any AI-generated portions when submitting a work for registration[1]. In practice, this means a company or creator using AI should describe, in the application, which parts of the work they created (text, editing, arrangement, etc.) and "disclaim" the AI-generated parts (which will not be covered by the registration)[1]. Several recent Copyright Office registration decisions illustrate the application of these rules:

  • "Zarya of the Dawn" (Feb 2023): A graphic novel by Kristina Kashtanova that included AI-generated illustrations. The Office concluded that while the overall book (text and the compilation of text/images) could be registered as a creative work, the individual Midjourney-generated images themselves were not copyrightable[1]. Kashtanova was recognized as the author of the story and the selection/arrangement of elements, but she could not claim authorship of the raw AI artwork. This partial registration affirmed that human creativity in text and curation was protectable, but purely AI visuals were not[1].
  • "Théâtre D'opéra Spatial" (Sep 2023): An artwork generated by AI (which famously won a digital art competition). The human submitter made some modifications to the AI output but failed to disclose which portions were AI-generated. The Copyright Office, applying its new guidance, refused registration because the applicant did not properly identify and limit the claim to the human-edited elements[41][42]. In essence, even if an AI image is later edited by a human, the human must clearly delineate their own creative input; otherwise the Office will assume the bulk of the work is non-human and reject the claim.
  • "Suryast" (Dec 2023): An artwork created by an AI system that combined a photo taken by the human applicant with a famous painting's style (via an AI style transfer). The Office denied this registration, reasoning that the AI was "responsible" for the merged output -- the human supplied a base photo and chose a style, but the AI's algorithm did the substantive creative work of applying that style. Again, without a human controlling the expressive result, the work failed the authorship test[1].

These examples demonstrate that the threshold for human contribution is meaningful creativity. Simply providing a prompt or source material to an AI, or making minimal tweaks to an AI-generated piece, is usually not enough to claim authorship. The human must inject some original expression or arrange the AI material in a creative manner to have a protectable interest[1]. If that threshold is met, the human-authored components can be owned and enforced like any traditional copyrighted work (and in a work-for-hire context, a company can own those human-authored parts created by its employees). But any elements that are purely AI-generated remain unprotected -- effectively unowned -- under copyright law.

Why Work Made for Hire Doesn't Apply to AI Content

For companies leveraging generative AI, the current state of copyright law carries important ramifications. First, who "owns" AI-generated content? Under the law today, if the content is entirely machine-produced with no creative human input, then no one owns it in a copyright sense[1]. The content defaults to the public domain, meaning competitors (or anyone) could copy or reuse it without infringing copyright. This is a stark departure from traditional company-created content, which is usually owned by the employer via work-for-hire or assignment. Simply put, if no employee or contractor can legally be deemed the author, the company has no copyright.

Corporate Strategies for AI-Generated Content

Companies should therefore aim to have human authors in the loop. In practical terms, this might involve using AI to generate a draft or image, then having an employee substantively revise or curate that output. The final work can then be claimed as authored (at least in part) by the human. For example, a marketing team might use ChatGPT to get ideas for ad copy, but a human copywriter should refine the language and add creative flourishes -- yielding a text that is a human-AI hybrid, with the human contributions eligible for copyright. When registering such works, the company (or its counsel) should carefully follow the Copyright Office guidance: e.g. listing the human author for "text and editing" but excluding "AI-generated draft" or similar in the application[1]. By doing so, the company can secure copyright (owned either by the individual with a work-made-for-hire agreement, or directly by the company if authored by an employee within scope of employment) on the portions that merit protection, while being transparent about any unprotectable AI portions.

It's also crucial for businesses to manage expectations: content that remains largely AI-created (like an image wholly made by DALL·E with just a text prompt from an employee) cannot be monopolized. Competitors could potentially use a very similar prompt to generate a nearly identical image, and the original company would have no copyright claim to stop them. As a risk mitigation, companies may choose to keep high-value AI-generated content confidential instead of publishing it -- effectively using trade secret law to protect it (discussed more below). They should also review the terms of service of AI tools they use: many AI providers (such as OpenAI) include terms assigning the user whatever rights may exist in the output, to the extent possible under law[44]. While such terms can contractually transfer any IP that does exist to the user, they cannot override U.S. copyright law's requirements. So even if an AI service "gives" the company ownership of the output, that ownership may be meaningless if the output lacks human authorship and thus has no copyright at all[45][46]. Businesses must accordingly treat purely AI-produced materials as unprotected assets -- either freely usable by all, or protectable only through secrecy or contractual agreements, not through exclusive copyrights.

Finally, if disputes arise (e.g. another party copies an AI-generated report or image from your company), be aware that enforcing rights will be challenging unless you can demonstrate human creative input. The current legal precedent (Thaler's case and others) would likely lead a court to reject a copyright infringement claim on a solely AI-created work, because the plaintiff never had a valid copyright to begin with[1]. Until laws change, corporate counsel should focus on proactive measures -- ensuring human involvement in creation, documenting authorship, and using technological and contractual protections -- to maximize the ownership of IP when using generative AI.

Patents and AI-Generated Inventions

Inventor Must Be a Natural Person

U.S. patent law has confronted a parallel question: can an AI be the inventor of a new invention? Under current statutes and case law, the answer is clear -- no, an AI cannot be listed as an inventor on a patent. The Patent Act specifies that a patent application must name the "inventor" or inventors, defined as the individual(s) who conceived the invention[3]. Courts and the U.S. Patent and Trademark Office (USPTO) have interpreted "individual" to mean a natural person (a human being), not an AI or other non-human entity[3][48]. This came to a head in the case of Thaler v. Vidal, where Dr. Stephen Thaler (the same individual from the copyright case) attempted to patent two inventions (a beverage holder and a light beacon) that he claimed were conceived by his AI system "DABUS" without any traditional human inventor[3][48]. Thaler listed DABUS as the sole inventor on the patent applications. The USPTO and courts uniformly rejected these applications on the basis that only humans can be inventors under U.S. law[3].

In 2022, the U.S. Court of Appeals for the Federal Circuit affirmed that under the Patent Act, an inventor must be a human person, and thus an AI cannot hold that title[3]. The court observed that throughout the patent laws, references to inventors use human-centric terms (like pronouns "himself/herself" and concepts like an inventor's oath), strongly implying Congress meant a flesh-and-blood individual[48]. The Federal Circuit's decision definitively held that inventions lacking any human contributor to their conception are unpatentable -- there is simply no valid inventor to name[3]. The U.S. Supreme Court declined to review the case in April 2023, leaving this interpretation as settled law[48].

It's worth noting that this limitation aligns with international trends: most other patent systems (UK, EU, etc.) have also refused to recognize AI as an inventor, although one outlier was South Africa, which granted a patent with DABUS listed (under a more lenient formalities process)[3]. But as far as U.S. law is concerned, a patent will not issue with an AI alone as the inventor. Any patent application requires at least one human inventor who contributed to the conception of the invention in the claims.

AI-Assisted Inventions: USPTO Guidance

While an AI can't be the sole inventor, what about inventions developed with the assistance of AI? Many companies are using AI tools to generate ideas, optimize designs, or run experiments that lead to inventions. The USPTO has recognized this scenario and in February 2024 issued formal Inventorship Guidance for AI-Assisted Inventions[7]. The guidance makes clear that not all AI involvement bars patentability. If a natural person has made a significant inventive contribution -- even if an AI tool was used in the process -- that person can and should be named as the inventor[7]. Patent protection may be obtained for such inventions, because the law only requires that a human be responsible for the inventive concept. The mere use of AI as a tool or collaborator does not disqualify the patent, as long as a human intellect actually conceived the invention's novel aspects[7].

According to the USPTO's guidance, examiners will presume that the listed inventors in an application are correct if they are natural persons[7]. The Office does not intend to scrutinize or question inventorship in most cases unless evidence in the record suggests that the named people did not actually invent the subject matter (for instance, if an applicant openly states the invention was generated by AI)[7]. Only in rare instances would the PTO inquire whether the contribution of the human was sufficient. The takeaway is that as long as at least one human is identified as an inventor, and no AI is formally named as an inventor, the patent system will proceed normally[7]. If an applicant were to try naming an AI or otherwise indicate that no human inventor exists, the application would be rejected for lacking a proper inventor (35 U.S.C. §§ 101, 115)[7].

The USPTO's 2024 examples provide some practical clarification: for instance, if an AI algorithm outputs a potential drug molecule design, and a scientist recognizes one of those outputs as a promising invention (and verifies it), the scientist is the inventor because they exercised judgment and contributed to the conception by selecting that particular compound for development. Conversely, if an AI is left to autonomously generate and test thousands of prototypes and one is chosen with minimal human insight, it becomes harder to identify a true human inventor. In such gray areas, the current guidance essentially urges: err on the side of naming a human who had any meaningful role, because otherwise you have an unpatentable result. In effect, the USPTO has adopted a "don't ask, don't tell" posture: it will not require applicants to disclose the degree of AI involvement as long as a human inventor is named and signs the required oath[64]. This policy (as of late 2025) encourages practitioners to fit AI-enabled inventions into the traditional framework by ensuring a human is credited with the inventive act[64]. While this may create a legal fiction in some cases (attributing authorship to a human who may have only guided or selected an AI's output), it is currently the only way to obtain patent protection[64].

Patent Ownership and Corporate Strategy

For businesses, the constraints on AI inventorship mean that any patentable innovation coming out of AI use must be tied to a human inventor. In corporate R&D, this usually isn't a problem because engineers and researchers use AI as a tool rather than an autonomous inventor. As long as your team can identify who conceived the invention (even if AI was used to help), you can list that person (or persons) on the patent application. Standard practice would then have that inventor assign the patent rights to the company, as with any employee invention. Companies should continue to use invention disclosure forms that ask researchers to detail how an invention was arrived at -- including whether AI was used -- so that legal can evaluate inventorship and ensure proper assignments are in place. Employment agreements should also explicitly state that any AI-assisted inventions are within scope of the employee's obligation to assign IP to the employer, to avoid any doubt that the company owns the rights once a patent is granted (noting, of course, that the patent will only be granted if a human is inventor).

On the flip side, if your business is developing inventions entirely via AI with minimal to no human inventive step, you face a dilemma: you cannot obtain a valid patent unless a human can legitimately be named inventor. Listing someone who didn't actually contribute to the idea would risk invalidating the patent (and could be deemed inequitable conduct). Therefore, truly autonomous AI-generated inventions are effectively unpatentable under current law[3]. This doesn't mean the invention has no protection -- but it means you must rely on other forms of IP or competitive advantage. For example, you might treat the invention as a trade secret (if it's not easily reverse-engineered) rather than disclosing it in a patent application[3]. In fact, legal experts explicitly advise that given these uncertainties, "patents are not the best vehicle to protect AI-generated inventions under the current framework... other forms of IP, such as trade secrets, should be considered"[3]. We'll discuss trade secrets in the next section, but from a patent strategy perspective, companies using AI should evaluate each innovation and ask: Was there a meaningful human insight here? If yes, proceed with patenting (with that human as inventor). If not, recognize that a patent likely isn't obtainable until laws change, and adjust plans accordingly (e.g., keep the invention confidential or publish it defensively).

It's also important for businesses to watch for legal developments. The debate over AI inventorship is not settled in a policy sense. The USPTO in 2023 solicited public comments on AI and inventorship[48], and there are ongoing discussions about whether patent law should be amended to accommodate AI-generated inventions in the future. Some scholars argue that denying patents in these cases could stifle innovation (since companies might choose secrecy over disclosure), while others contend that allowing AI inventors would undermine the human-centric premise of IP law. For now, however, the law is unambiguous: a patent must have a human inventor, and any AI contributions can only be claimed via that human's involvement. Companies should ensure their patent filing practices reflect this -- for instance, by avoiding any language in applications that suggests an AI "invented" something on its own, and by carefully crafting patent inventorship narratives to center on human decision-making and creativity.

Trade Secrets and AI

AI Outputs as Protectable Trade Secrets

Trade secret law offers a comparatively flexible and inclusive framework for protecting valuable information, including information generated by or with AI. Under the U.S. Defend Trade Secrets Act (DTSA) and state Uniform Trade Secrets Act (UTSA) provisions, a "trade secret" can be any form of information -- technical, business, scientific, etc. -- that derives economic value from not being generally known to others and is subject to reasonable efforts to maintain its secrecy. Notably, there is no requirement of human authorship or originality in trade secret law[5]. Unlike copyrights (which protect "original works of authorship") or patents (which require an inventor), trade secrets can encompass data or material regardless of how it was created, so long as it is secret and valuable. This means that AI-generated outputs can qualify as trade secrets on the same footing as human-generated information, provided the company treats them as confidential[5]. For example, if a model like ChatGPT produces an insightful strategy document or a piece of code and the company keeps that output internal and confidential, it could be a protectable trade secret of the company. Similarly, an AI-driven process or model (like a proprietary machine learning algorithm or the specific configuration of a generative model) can itself be a trade secret if it's not disclosed and offers a business advantage.

For corporate IP strategy, trade secrets are thus an attractive option to own and control AI-related IP. There's no filing or registration required -- protection is automatic as long as secrecy is maintained. If another party misappropriates (steals or improperly discloses) the secret, the company can seek legal remedies under DTSA or state law. Importantly, because trade secrets don't demand human inventiveness, they fill the gap when other IP rights fall short. As discussed, purely AI-created works have no copyright and AI-conceived inventions have no patent, but both can still be valuable information. Trade secret law is the tool to protect that value: it "does not limit protection to human-created information," so any information meeting the criteria can be shielded[5]. In practice, many companies are already relying on trade secret protection for AI-related assets -- for instance, the weights and training data of AI models are often kept secret, and any unique insights or designs output by AI can be guarded internally rather than published.

However, the trade secret route comes with conditions: the secret must truly remain secret (once it becomes public, protection is lost), and it must not be "readily ascertainable" by others through proper means. A potential challenge in the AI era is that what used to be costly or difficult for a competitor to assemble might now be quickly replicated by an AI system. Information that required significant human effort to compile (and thus was protectable as a secret) might become easily derivable from public data using AI tools, raising the question of whether it's still a "secret" in the eyes of the law[5]. Courts may eventually grapple with scenarios like an AI re-generating a competitor's secret formula or list from scratch. For now, though, the core principle stands: if your company has an AI-generated result that is valuable and not known outside the business, you can claim and enforce it as a trade secret, regardless of the lack of human creativity in its creation. The key is demonstrating both its secrecy and its value.

Confidentiality Risks with Generative AI

Using third-party generative AI tools (like cloud-based AI APIs or public web interfaces) introduces a significant confidentiality risk that companies must manage to preserve trade secret protection. It is a fundamental tenet of trade secret law that if you disclose your secret to a third party without protective measures, you may destroy its trade secret status[9]. When an employee inputs confidential information (e.g. proprietary code or sensitive data) into an AI like ChatGPT, that information is being sent to the AI provider's servers -- a third party. Unless the provider has robust terms assuring confidentiality, this act could be seen legally as revealing the information, thus forfeiting trade secret protection. Even if the provider doesn't intentionally publish it, the fact that it's outside the company's control can be enough to call secrecy into question. For instance, if the AI service is logging prompts or using them to train models that might regurgitate similar information to other users, the "secret" could slip out. This concern is not theoretical: studies have found that a significant portion of what employees paste into tools like ChatGPT includes confidential or sensitive data[9]. There have been high-profile incidents of employees unwittingly leaking proprietary code by asking ChatGPT to debug it, after which the data sits outside the company's control and may be retained by the provider or used for model training.

To mitigate this, companies should set clear rules: if an AI tool is not approved for use with sensitive info, employees must not use it for those purposes. Some organizations have gone as far as banning the use of public generative AI at work until secure, private instances are available. Others allow use but only with "sanitized" inputs (no client data, no source code, etc.). The appropriate approach may vary, but reasonable measures must evolve to include AI-related precautions[5]. Going forward, what counts as "reasonable efforts" to maintain secrecy will likely include steps like restricting employee use of public AI platforms for anything confidential[5], deploying technical solutions to monitor or block such usage, and training staff on the dangers of prompt-based leaks[5]. If companies fail to update their secrecy protocols for the AI age, they might be found not to have taken adequate measures, jeopardizing their trade secret claims.
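The "sanitized inputs" approach above can be sketched as a simple redaction pass run before any text reaches an approved AI endpoint. The patterns and placeholder names here are illustrative assumptions; a production redactor would need far broader coverage (customer names, internal hostnames, source code) and review by counsel.

```python
import re

# Illustrative redaction rules -- real deployments would cover far more.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "[REDACTED_EMAIL]"),
    (re.compile(r"\b(?:sk|api|key)[-_][A-Za-z0-9]{16,}\b"), "[REDACTED_KEY]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED_SSN]"),
]

def sanitize_prompt(text: str) -> str:
    """Replace obvious sensitive tokens with placeholders before the
    text leaves the company's control."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text
```

Even with sanitization in place, the safest default remains the one described above: confidential material should simply never be pasted into an unapproved tool.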

Another risk involves the "readily ascertainable" prong of trade secret law. AI's ability to generate outputs from large public datasets could make some formerly secret information essentially reproducible. For example, if a trade secret is a compilation of publicly available information (say, a special market research report), an AI could potentially recreate a similar compilation with minimal effort, which might lead a court to conclude the information was readily ascertainable and not protectable[5]. Companies should be prepared to argue why their specific secret remains unique or not easily duplicated by AI. This may involve emphasizing proprietary aspects or the fact that an AI would not actually arrive at the same result without the secret inputs. Nonetheless, this is a developing area -- courts have yet to fully test how AI affects the trade secret analysis, but experts predict a higher bar for showing that information is not readily ascertainable in light of AI capabilities[5].

Given these challenges, businesses should proactively update their trade secret management programs to account for generative AI. Some best practices include:

  • Policy Updates: As noted in the recommendations, integrate AI-specific guidelines into confidentiality agreements and employee manuals[9]. For example, explicitly prohibit entering company secret information into any non-approved AI tools, and require employees to report any accidental exposure immediately[9]. Regularly remind and get acknowledgments from staff regarding these rules (e.g., during annual training or exit interviews)[9].
  • Approved Tools & Environments: If employees can benefit from AI, consider providing a secure, vetted AI environment. This could be an on-premises AI system or a cloud AI service under a strong enterprise agreement. By channeling use to approved systems, companies can better control how data is handled. Ensure that any AI vendor used is contractually bound to keep your data confidential and not to use it for any purpose other than providing the service[9]. Some AI providers offer "data privacy" modes or opt-outs from model training -- take advantage of these.
  • Data Tagging and Filters: Mark sensitive documents with metadata or warnings if they should not be shared with AI systems. Implement filters that detect and block uploads of certain data (like source code or customer identifiers) to external websites. These technical controls can complement policy by reducing the chance of accidental leaks.
  • Monitoring and Auditing: Use network monitoring to flag heavy usage of AI web services or unusual data transfers that might indicate someone dumping company info into a chatbot[9]. Tools can log queries made to AI services from company devices, allowing post-hoc review. If an incident is suspected, forensic analysis can show if an employee asked an AI about confidential subjects[9]. Quick detection enables the company to respond (ask the AI provider to delete the data, seek an injunction if needed, etc.).
  • Internal AI Solutions: For highly sensitive projects, an alternative is deploying internal generative AI models (or using open-source models) that run within the company's secure environment. This way, data never leaves your control, and the outputs remain in-house. The trade-off is the cost and maintenance of such systems, but it might be warranted for "crown jewel" secrets.
  • Plan for Incidents: Despite best efforts, mistakes will happen. Have an incident response plan specifically for AI-related disclosure incidents. This might involve immediately contacting the AI provider to ensure the data isn't retained, assessing the legal implications (did we lose trade secret protection, or can we argue it's still secure?), and taking remedial action like reminding the workforce of policies or disciplining repeat offenders. Being able to show a court that you responded robustly can support an argument that you still took reasonable measures overall (one lapse shouldn't kill a trade secret if handled properly).
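The filtering and monitoring controls above can be sketched as a simple outbound-prompt check that runs before any text leaves for an external AI service. This is a minimal illustration, not a reference to any specific DLP product: the pattern names, regexes, and the `scan_prompt` helper are all assumptions a real deployment would replace with patterns tuned to its own data (project codenames, customer ID formats, document markings, and so on).

```python
import re
from dataclasses import dataclass, field

# Illustrative patterns only -- a real deployment would tune these to the
# organization's own sensitive data (codenames, ID formats, markings, etc.).
SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "internal_marker": re.compile(r"\bCONFIDENTIAL\b", re.IGNORECASE),
}

@dataclass
class ScanResult:
    allowed: bool
    hits: list = field(default_factory=list)  # names of patterns that matched

def scan_prompt(text: str) -> ScanResult:
    """Check an outbound AI prompt against the sensitive-data patterns.

    Callers should block (or require approval for) any prompt where
    `allowed` is False, and log the hits for later audit.
    """
    hits = [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(text)]
    return ScanResult(allowed=not hits, hits=hits)

# Example: a prompt containing a marked document is flagged before it leaves.
result = scan_prompt("Please summarize this CONFIDENTIAL roadmap: ...")
print(result.allowed, result.hits)  # prints: False ['internal_marker']
```

A gateway like this complements (but does not replace) policy and training: it catches the obvious accidental paste, while the audit log of hits gives the company the kind of detection-and-response record that supports a "reasonable measures" argument later.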

It's also wise for companies to classify what types of AI-generated output they consider proprietary. For example, if an AI system internally generates analytics or optimizations unique to the company's operations, mark those outputs as confidential and store them securely, just as you would a human analyst's secret report. Assume AI outputs can be trade secrets and treat them accordingly[9]. Conversely, if you use AI to create content that you plan to make public (like a marketing image or blog post), recognize that once public, trade secret law no longer applies -- you'd be relying on copyright (which, as discussed, may not exist if the content was mainly AI-made). Thus, for public-facing AI-generated content, consider whether you should infuse more human creativity (to claim copyright), or if not, accept that the content cannot be owned and adjust your competitive strategy (perhaps focusing on speed to market or branding rather than exclusivity of the content).

In a corporate context, any trade secret developed or obtained in the course of employment typically belongs to the employer (assuming proper agreements are in place). This is just as true for AI-generated information. If an employee uses an AI tool at work and the result is something valuable and kept secret by the company, the company can claim ownership of that as a trade secret (just as it would for a report or code written by the employee). It's prudent to have invention assignment or confidentiality agreements explicitly state that outputs from tools or software used in the scope of employment are the property of the employer. That way, there's no ambiguity that, for example, an engineer's use of an AI coding assistant to generate code yields company-owned code (kept confidential if not released). The employee should not have any personal claim just because an AI did some of the work -- it's part of their job output.

One area to watch is collaboration with external AI vendors or consultants to generate solutions. In those cases, ensure the contract spells out that the business retains trade secret ownership of any deliverables or outputs, and that the vendor has a duty to keep them secret. Without clear language, there could be arguments about joint ownership or about the AI firm reusing similar outputs for other clients. Protect your secrets by contract upfront.

Lastly, consider the lifespan of trade secrets. Unlike patents (which expire) or copyrights (which eventually enter public domain after a term), trade secrets can last indefinitely if they remain secret. Generative AI can potentially keep producing new valuable insights for a company, and each of those can extend the company's competitive advantage as long as they don't leak. This makes robust trade secret management all the more crucial in the AI era -- it's not a one-time effort but an ongoing process of identification, classification, and protection of a constantly evolving set of information. Companies that excel at this will be able to harness AI while still preserving exclusive advantages, whereas those that are careless might find their AI-driven innovations quickly copied or lost to the public domain.

Public Domain and AI-Generated Works

When a work lacks copyright protection -- as is the case for purely AI-generated content under current U.S. law -- it is generally considered to be in the public domain. Public domain means that no one holds exclusive copyright over the work, and in principle, anyone may copy, distribute, modify, or build upon it without permission. Classic examples include works whose copyright has expired (such as Shakespeare's plays), works by the U.S. federal government, and works that were never eligible for copyright in the first place. AI-generated content that fails the human authorship requirement falls squarely into this last category.

What Public Domain Does and Does Not Mean

Being in the public domain means the work is free from copyright restrictions. It does not mean the work is free from all legal restrictions. This is a critical distinction that many businesses overlook. A work can simultaneously be in the public domain for copyright purposes and yet be legally restricted from use by other mechanisms:

  • Trade secret protection. If a company generates valuable AI output and keeps it confidential -- never publishing or disclosing it -- that output can be protected as a trade secret even though it would have no copyright protection if released. The moment it is disclosed publicly, trade secret protection evaporates, and the work enters the public domain with no IP protection at all. This creates a binary outcome: keep AI output secret and it is legally protected; publish it and it belongs to everyone.

  • Contractual restrictions. AI vendor terms of service, non-disclosure agreements, and licensing contracts can restrict how AI-generated output is used, shared, or redistributed -- regardless of copyright status. For example, an AI platform's terms might prohibit using its output for certain purposes or require attribution. These contractual obligations bind the parties to the agreement even if the underlying work is in the public domain.

  • Other IP rights. A public domain work might still incorporate elements protected by trademark, patent, or rights of publicity. For example, an AI-generated image that depicts a trademarked logo is free to copy from a copyright standpoint, but using it commercially could still constitute trademark infringement.

Strategic Implications for Businesses

The intersection of public domain and trade secret law creates an important strategic choice for companies using generative AI. If your AI produces a valuable analysis, process, or dataset, you can protect it indefinitely as a trade secret -- but only if you never make it public. Once published, the work enters the public domain with no copyright fallback, and your competitors are free to use it.

This means companies should evaluate every AI-generated output along two dimensions: (1) Is it valuable enough to keep secret? If so, treat it as a trade secret with appropriate access controls, confidentiality agreements, and security measures. (2) If it must be published, can a human add enough creative contribution to secure copyright? If a human substantially edits, curates, or arranges the AI output, the resulting work may qualify for copyright protection on the human-authored elements, giving you at least partial exclusivity even after publication.

The worst outcome is publishing AI-generated work without human creative contribution and without contractual protections in place -- you lose both trade secret and copyright protection, leaving the work fully in the public domain with no recourse.

Can AI-Generated Content Be Trademarked?

Trademark law operates on fundamentally different principles than copyright or patent law, and this distinction matters for businesses using generative AI. While copyright protects creative expression and patent law protects inventions, trademark law protects words, symbols, designs, and other identifiers that distinguish one company's goods or services from another's. The key question for trademark is not "who created it?" but "does it function as a source identifier in commerce?"

USPTO Policy on AI and Trademarks

The USPTO has not imposed a blanket prohibition on trademarking AI-generated content. Unlike copyright (which requires human authorship) and patent law (which requires a human inventor), trademark law has no authorship or inventorship requirement. A trademark must meet different criteria: it must be distinctive (not generic or merely descriptive), it must be used in commerce to identify the source of goods or services, and the applicant must have a bona fide intent to use the mark. Whether the mark was designed by a human, an AI, or a combination of both is not a factor in the registration analysis.

This means that an AI-generated logo, slogan, brand name, or product name can be registered as a trademark with the USPTO -- provided it meets the standard requirements for distinctiveness and use in commerce. A company could, for example, use an AI tool to generate dozens of logo concepts, select one, and register it as a trademark without disclosing that AI was involved in the design process. The USPTO does not ask applicants whether AI was used to create the mark.

Important Limitations

While trademark registration itself is AI-friendly, several practical limitations apply:

  • No copyright in the underlying work. A company might successfully trademark an AI-generated logo for use as a source identifier, but if the logo was generated entirely by AI without meaningful human creative input, it may lack copyright protection. This means the company can prevent others from using the logo in a way that causes consumer confusion (trademark infringement), but it may not be able to prevent others from copying the design itself for non-trademark purposes -- such as using a similar design on merchandise in a different market where confusion is unlikely.

  • Trademark does not protect content broadly. Trademark law protects identifiers, not the substance of creative works. A company cannot trademark an AI-generated article, report, or dataset. Trademark is limited to names, logos, slogans, trade dress, and similar identifiers used in commerce.

  • Distinctiveness still required. AI tends to generate designs and names that reflect common patterns in its training data. The more generic or descriptive an AI-generated mark is, the harder it will be to register. A mark that AI generates by blending common design elements may be refused as merely descriptive or as likely to cause confusion with existing marks.

Strategic Takeaway

Trademark law is the one area of U.S. IP law where AI-generated content faces no fundamental barrier to protection. For businesses, this means that branding assets -- logos, slogans, product names -- can be developed with AI assistance and still receive full trademark protection. This stands in sharp contrast to copyright and patent, where human involvement is a prerequisite. Companies using AI to develop their brand identity should focus on selecting distinctive marks and establishing them in commerce, where trademark law can provide durable protection that copyright and patent cannot.

Conclusion

The advent of generative AI presents both opportunities and challenges for intellectual property in the corporate world. U.S. law, as it currently stands, strongly anchors IP rights to human creators -- a principle that leaves purely AI-generated works outside the traditional protections of copyright and patent. Businesses must navigate this reality by ensuring human involvement in creative and inventive processes, and by leaning on trade secret law and contractual safeguards to fill the gaps. While there are ongoing debates and the possibility of legal reforms on the horizon, companies cannot assume the law will automatically catch up to technology. A prudent corporate strategy today will acknowledge the limitations (e.g., no copyright for fully AI works, no patents for AI-only inventions) and implement the recommendations outlined above to secure and protect IP to the fullest extent possible. By combining human creativity with AI efficiency, and by rigorously protecting confidential outputs, businesses can enjoy the benefits of generative AI without forfeiting ownership and control over their valuable intangible assets. The landscape may evolve, but a proactive and informed approach will ensure that companies remain on solid legal footing as they innovate with AI.

Sources: The information in this report is drawn from U.S. statutes, case law, agency guidance, and expert commentary, including recent court decisions (Thaler v. Perlmutter, Thaler v. Vidal), U.S. Copyright Office publications[1], USPTO memoranda[7], and analyses by legal practitioners[14][3]. These sources are cited throughout in bracketed numeric format (e.g., [1]) for reference. The report focuses exclusively on U.S. law as of 2025 and will need to be updated if significant legal changes occur.


References

[1] Congressional Research Service - AI and Copyright Law
[3] Akin Gump - Federal Circuit: Inventor Must Be Human
[5] Beck Reed Riden - Trade Secrets in the AI Era
[7] USPTO - Inventorship Guidance for AI-Assisted Inventions
[9] Davis Polk - Safeguarding Trade Secrets with Generative AI
[14] Baker Donelson - AI Cannot Solely Author Copyrightable Works
[44] OpenAI Terms of Use
[45] Reddit Discussion on ChatGPT Output Ownership
[46] Legal Analysis of ChatGPT Output Ownership
[48] ArentFox Schiff - AI Cannot Be Inventor Under Patent Act
[64] Patently-O - USPTO Inventorship Policy

Frequently Asked Questions

Does work made for hire apply to AI-generated content?

No. Under U.S. copyright law, the work-made-for-hire doctrine requires that a work be created by a human employee within the scope of employment, or by a human independent contractor under a qualifying written agreement. Because AI is not a legal person, it cannot be an employee or a party to a contract. AI-generated content produced without meaningful human creative input falls outside the work-made-for-hire framework entirely, leaving the output without copyright protection regardless of who prompted the AI or paid for the tool.

Who owns intellectual property created by AI?

Under current U.S. law, nobody automatically owns IP rights in purely AI-generated output. Copyright requires human authorship, and patent law requires a human inventor. If an AI produces a work or invention entirely on its own, that output is not eligible for copyright or patent protection. However, when a human makes a significant creative or inventive contribution — by directing, editing, selecting, or arranging AI output — the human's contribution may qualify for protection. Companies should also consider trade secret law, which does not require human creation and can protect any valuable confidential information, including AI-generated data and processes.

What should a company AI IP policy include?

A practical AI IP policy should address several key areas: (1) require employees to make meaningful human contributions to any AI-assisted work intended for IP protection; (2) establish documentation practices so human contributions to AI-assisted outputs are recorded and attributable; (3) define disclosure obligations for when and how AI involvement must be reported, especially for copyright registrations and patent applications; (4) review and negotiate AI vendor agreements to ensure your company retains rights to outputs and that confidential inputs are protected; (5) implement trade secret protections — including access controls, confidentiality agreements, and data handling procedures — for valuable AI-generated outputs that may not qualify for copyright or patent protection.

Can AI be listed as an inventor on a patent?

No. The U.S. Patent and Trademark Office and federal courts have confirmed that only natural persons can be named as inventors on patent applications. In Thaler v. Vidal, the Federal Circuit held that the Patent Act's use of "individual" refers exclusively to human beings. However, the USPTO has clarified that AI-assisted inventions can be patented when a human made a "significant contribution" to the invention — the human is named as the inventor, not the AI.

Can AI-generated content be trademarked under USPTO policy?

Yes. Unlike copyright and patent law, trademark law has no authorship or inventorship requirement. The USPTO does not ask whether AI was involved in creating a mark. An AI-generated logo, slogan, or brand name can be registered as a trademark provided it meets the standard requirements: it must be distinctive (not generic or merely descriptive), used in commerce to identify the source of goods or services, and the applicant must have a bona fide intent to use it. However, trademark only protects source identifiers — not creative works broadly. A company can prevent others from using a confusingly similar mark, but if the underlying design lacks copyright protection (because it was AI-generated without human creative input), others may be free to copy the design for non-trademark purposes.

Do employees own the AI tools and work product they create at work?

It depends on what was created, how, and what your employment agreements say. If an employee uses AI tools to generate code, text, or designs during work hours using company resources, the company likely has the strongest claim — but not through copyright's work-made-for-hire doctrine, which doesn't apply to AI-generated output. Instead, the company's rights depend on employment agreements (IP assignment clauses), trade secret protections, and the degree of human creative contribution the employee made. Without clear contractual terms and an AI usage policy, ownership disputes are almost inevitable. The safest approach: update employment agreements to explicitly address AI-assisted work product, require documentation of human contributions, and treat undisclosed AI-generated output as a trade secret.

How should companies set AI policy for intellectual property ownership?

Start with three pillars: (1) Define what "meaningful human contribution" means at your company — this determines whether AI-assisted work product qualifies for copyright protection. (2) Require disclosure and documentation whenever AI is used to create work product, especially for anything that might be filed for copyright registration or patent protection. (3) Update employment and contractor agreements to assign rights in AI-assisted work to the company, since work-made-for-hire alone won't cover it. Beyond that, negotiate your AI vendor agreements carefully — many default terms give the vendor broad rights to use your inputs for model training. And implement trade secret protections (access controls, NDAs, data handling procedures) for any valuable AI-generated output, since trade secret law doesn't require human authorship and can protect what copyright cannot.

Is AI-generated content in the public domain? Can anyone use it?

AI-generated content that lacks human authorship is not eligible for copyright and is generally considered to be in the public domain — meaning no one holds exclusive copyright over it and, in principle, anyone can copy or use it. However, "public domain" only means the work is free from copyright restrictions, not from all legal restrictions. If the AI output was never published, it can still be protected as a trade secret. Contractual restrictions — such as AI vendor terms of service or non-disclosure agreements — can also limit use regardless of copyright status. And other IP rights like trademarks or patents may still apply to elements within the work. The practical takeaway: AI-generated output you publish without significant human creative contribution enters the public domain with no copyright protection, but AI-generated output you keep confidential can be protected indefinitely as a trade secret.
