On January 29, the U.S. Copyright Office published the second part of a planned three-part report on copyright and artificial intelligence (AI), this time focused on the question of copyrightability for AI-generated creative works. The first part, published in July 2024, explored the legality of so-called digital replicas of individuals’ likenesses, or “deepfakes.” The report is the product of a sweeping new initiative on AI launched by the Copyright Office in 2023 in response to the first crop of copyright registrations for works containing AI-generated expressive elements.
The Copyright Clause vests in Congress the authority to “secur[e] for limited times to authors . . . the exclusive right to their . . . writings.” In Community for Creative Non-Violence v. Reid, 490 U.S. 730 (1989), the U.S. Supreme Court explained that an author is “the person who translates an idea into a fixed, tangible expression entitled to copyright protection” (emphasis added).
In a preliminary statement of policy preceding the report’s publication, the Copyright Office confirmed the requirement of human authorship for obtaining copyright protection; applicants intending to register a work containing more than a de minimis amount of AI-generated material must disclose that fact and describe their own human contribution to the ultimate work. This comports with the requirement of human authorship for copyright protection (see the Ninth Circuit rejecting copyright claims asserted on behalf of a monkey in Naruto v. Slater, 888 F.3d 418 (9th Cir. 2018) and a non-human “spiritual being” in Urantia Foundation v. Maaherra, 114 F.3d 955 (9th Cir. 1997)).
In the new report, the Office clarifies the circumstances in which a human author is eligible for copyright in a work containing AI-generated expression, but ultimately refuses to endorse a bright-line test, relegating the issue to fact-specific, case-by-case determination by federal courts. The Office imagines a sliding scale of human control on which every work containing AI-generated elements may be placed. At one extreme are works that are wholly the expressive output of an AI, wherein the AI is responsible for the “spark of creativity” necessary for copyright to attach. At the other end are more “assistive uses” of AI, such as de-aging actors or digitally excising an object or person from a photograph, which merely enable a human author to create the final product they already have in mind.
The ultimate question is a familiar one: Is the work “basically one of human authorship, with the computer merely being an assisting instrument,” with the traditional elements of authorship — selection and arrangement of its component elements — conceived by a natural human, or is the computer responsible for the work’s conception?[1]
Today’s popular generative AI systems can create everything from text (ChatGPT, DeepSeek) to entire images (Midjourney, DALL-E) after receiving only a few words of natural language prompting. Users can describe their desired output with varying specificity, instructing the AI to create an image of a certain subject or topic, in a particular visual style, or by applying a distinct visual technique. Once an output has been received, the user can iterate on the work by revising the prompt, adding, removing, or clarifying instructions as needed until the user is satisfied with the end product.
One troubling aspect of prompting, according to the report, is its unpredictability, which leaves it deficient in the necessary element of human control: A user could input identical prompts on two separate occasions and receive completely different outputs. The AI may choose to disregard certain instructions, or it may inexplicably add undesired elements that were never triggered by human prompting. If the AI is responsible for the final creative interpretation of the user’s text input, and indeed exercises some “creative” judgment in terms of arrangement and selection of an image’s elements, the question of authorship becomes a murky one.
The report provides at least one concrete conclusion: Prompts alone do not provide sufficient human control to make generative AI users the authors of an output for copyright purposes, at least given today’s available technology. Whether an output is copyrightable depends on the nature and extent of a human’s contribution beyond mere prompting, and whether that contribution qualifies as authorship of the output’s expressive elements. The Office confirmed, however, that the actual text of the prompts remains copyrightable just like any other human-generated expression, provided it meets the requisite level of creativity.
The report also makes clear that human authors may claim copyright of a work that incorporates some wholly AI-generated expressive elements if the human author was personally responsible for selecting, coordinating, and (re)arranging the AI-generated material in a creative way, with the copyright extending to the creative selection and arrangement of those elements. Some AI programs, like Midjourney, actually allow users to select and regenerate regions of a generated image with a modified prompt; the Office believes that some works created in such a manner will meet the minimum standard of originality.
2025 is sure to be a whirlwind for the regulation and growing acceptance of AI. As the generative AI landscape continues to evolve, Troutman Pepper Locke is your resource for understanding the potential risks and opportunities associated with the new technology.
[1] U.S. Copyright Office, Sixty-Eighth Annual Report of the Register of Copyrights for the Fiscal Year Ending June 30, 1965 (1966), https://www.copyright.gov/reports/annual/archive/ar-1965.pdf.
2024 was a pivotal year in the regulation of data practices, with increased scrutiny of artificial intelligence (AI), data brokers, and the ecosystem of commercial data, and the continued proliferation of comprehensive U.S. state privacy laws with bespoke twists such as expanded protections for teen data. While new laws created headlines, existing laws and consumer protection frameworks proved equally important in shaping the regulatory landscape, especially in the U.S. This convergence, in conjunction with uncertainty around the priorities of key federal agencies such as the Federal Trade Commission (FTC), presents challenges and opportunities for organizations, particularly those that depend on the data broker ecosystem or data broker services.
To access the report directly, please click here.
The past year once again saw a breadth of court decisions addressing a wide variety of directors and officers and professional liability insurance coverage issues. At various levels, state and federal courts across the country issued notable decisions in this arena. In this report, we focused on topics we believe will continue to be important in the directors and officers and professional liability insurance field. We hope you find the following selection of cases to be informative and helpful.
TOPICS COVERED IN THIS REPORT INCLUDE:
- Notice
- Related Claims
- Prior Knowledge, Known Loss, and Rescission
- Prior Acts, Prior Notice, and Pending and Prior Litigation
- Dishonesty and Personal Profit
- Restitution, Disgorgement, and Damages
- Insured Capacity
- Insured v. Insured Exclusion
- Coverage for Contractual Liability
- Professional Services
- Independent Counsel
- Advancement of Defense Costs
- Allocation
- Recoupment of Defense Costs and Settlement Payments
- Consent
Access the full report here, and feel free to share it with your contacts who may have an interest in its content.
Attorneys at Troutman Pepper Locke LLP discuss the U.S. Justice Department’s efforts to combat cybersecurity fraud and some best practices for government contractors seeking to mitigate noncompliance risks.
Click here to read the full article on Thomson Reuters Westlaw Today.
President Trump hit the ground running, issuing more executive orders, memoranda, and other actions on Inauguration Day than any previous president. Agencies are already working to implement those actions. Many of the actions are interrelated, so Troutman Pepper Locke’s Environmental + Natural Resources team has put together the following resource to help assess the impact of these actions on environmental policy, and how the various actions fit together.
View our Drive-by Summary: Environmental and Energy Implications of Trump Executive Actions.
The Environmental + Natural Resources Group also contributed to this article.
Wrestling fans have their own take on the wonders of the world. André the Giant, dubbed by fans as the Eighth Wonder of the World, was a wrestling legend. The Ninth Wonder of the World was Chyna, a trailblazer who broke barriers in the industry. A key member of D-Generation X (DX), Chyna stood alongside one of the most infamous factions in the history of the World Wrestling Federation (WWF). Known for their rebellious antics, DX embodied the wild energy of the WWF’s “Attitude Era,” when professional wrestling wasn’t just entertainment, it was a cultural phenomenon.
While the WWF was captivating audiences with spectacles like WrestleMania, a different battle was brewing outside the ring – with a very unlikely rival. The World Wildlife Fund, a global conservation organization, was claiming ownership over the same name: WWF.
Click here to read the full article on IP Watchdog.
In January, the U.S. Food and Drug Administration (FDA) issued its first guidance on the use of artificial intelligence (AI)[1] models in drug development and in regulatory submissions titled, “Considerations for the Use of Artificial Intelligence to Support Regulatory Decision-Making for Drug and Biological Products” (Draft Guidance). As FDA noted in its news release announcing the Draft Guidance, the use of AI to produce data or information regarding the safety, effectiveness, or quality of a drug or biological product has increased “exponentially” since 2016, including in drug application submissions over the last several years based, in part, on AI components.[2]
The public comment period is open through April 7.
While, predictably, FDA makes clear that it “does not endorse the use of any specific AI approach or technique,” the Draft Guidance provides a “risk-based” credibility assessment framework intended to establish and evaluate the credibility — or “trust” — of an AI model for a particular “context of use” (COU). It applies to the nonclinical, clinical, postmarketing, and manufacturing phases of the drug development lifecycle. Consistent with FDA’s regulatory authority, it excludes drug discovery and operational efficiencies (think: workflows, resource allocation, the mechanics of drafting regulatory submissions). In other words, the Draft Guidance does not address AI models that do not impact patient safety, drug quality, or the reliability of results from nonclinical or clinical studies.
This article highlights the key takeaways for drug sponsors and manufacturers from this long-awaited regulatory guidance.
1. Adopt FDA’s risk-based framework for assessing AI model credibility.
FDA’s risk-based framework consists of the following seven-step process that it expects sponsors to use to establish and assess AI model credibility:
- Step 1 – Define the question of interest that will be addressed by the AI model. This should describe the specific question, decision, or concern addressed by the AI model. As discussed in the Draft Guidance, an example would be, in a commercial manufacturing context, whether an injectable drug’s vials meet the established fill volume specifications. In the clinical development context, a question of interest might be whether certain clinical trial participants can be considered low risk for a known associated adverse reaction and not need inpatient monitoring after dosing.
- Step 2 – Define the COU for the AI model. The COU provides the scope and role of the AI model used to answer a question of interest. The description of the COU should explain what will be modeled and how model outputs will be used, as well as whether any other information (e.g., animal or clinical studies) will be used in conjunction with the model output to answer the question of interest.
- Step 3 – Assess the AI model risk. Model risk is a combination of model influence (amount of AI model-generated evidence relative to other contributing evidence used to inform the question of interest) and decision consequence (the impact of an adverse outcome resulting from an AI-generated, incorrect output). A greater amount of model influence or decision consequence increases the risk of the AI model and requires more regulatory oversight.
- Step 4 – Develop a plan to establish the credibility of the AI model output within the COU. This “credibility assessment plan” should rely on interactive feedback from FDA about the AI model risk (Step 3) and the COU (Step 2). Early engagement with FDA is recommended to ensure the appropriate credibility assessment activities are adopted based on the particular model risk and COU. Credibility assessment plans should include descriptions of:
- (A) The Model – Provide AI model inputs, outputs, architecture, features, feature selection process and any loss functions, parameters, and rationale for choosing the specific modeling approach.
- (B) Model Development Data – Incorporate training data and tuning data. Training data builds AI models by defining model weights, connections, and components. Tuning data explores optimal values of hyperparameters and architectures. Describe the data management practices for the training and tuning datasets and characterize those datasets.
- (C) Model Training – Explain the AI model’s learning methodology (e.g., supervised, unsupervised), performance metrics and confidence intervals (ROC curve, recall or sensitivity, positive/negative predictive values, true/false positive and true/false negative counts, positive/negative diagnostic likelihood ratios, precision, and/or F1 scores), regularization techniques, and training parameters. Specify whether a pre-trained model was used, describe the use of ensemble methods, explain any calibration, and outline the quality assurance and control procedures.
- (D) Model Evaluation – Note the model’s data collection strategy, specifying how data independence was achieved and whether there was any overlapping data use between development and testing phases. Include information on the reference method used. Provide information on the applicability of the test data to the COU, the agreement between predicted and observed data (using test data independent of development data), and the rationale for the chosen model evaluation methods. Performance metrics (see model training section above) and limitations of the modeling approaches should also be included.
- Step 5 – Execute the plan. The importance of discussing the plan with FDA before execution to set expectations and to identify potential challenges and how those challenges can be addressed cannot be overstated.
- Step 6 – Document the results of the credibility assessment plan and discuss deviations from the plan. Create a credibility assessment report providing information on the AI model’s credibility for the COU and describing any deviations from the credibility assessment plan outlined in Step 4. The credibility assessment report may be a self-contained document included as part of a regulatory submission or in a meeting package. It may also be held and made available to FDA upon request (e.g., during an inspection). The sponsor should seek FDA’s input regarding whether the credibility assessment report should be proactively submitted to FDA.
- Step 7 – Determine the adequacy of the AI model for the COU. If FDA or the sponsor determines that an AI model is not appropriate for the COU, there are several options for the sponsor moving forward:
- (A) Reduce the AI model’s influence by adding other types of evidence in response to the question of interest.
- (B) Add development data to improve the model’s output, or dial up the rigor of the credibility assessment activities.
- (C) Create controls to mitigate risk.
- (D) Update the modeling approach.
- (E) Classify the AI model’s credibility as inadequate for the COU, which will require model rejection or revision.
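The risk combination in Step 3 (model influence × decision consequence) can be sketched in code. This is an illustrative toy only: the Draft Guidance describes model risk qualitatively, and the level names and conservative lookup below are hypothetical, not an agency-defined scoring system.

```python
# Hypothetical sketch of Step 3's risk combination; FDA does not
# prescribe numeric levels or this particular mapping.
LEVELS = ("low", "medium", "high")

def model_risk(influence: str, consequence: str) -> str:
    """Combine model influence and decision consequence into a risk tier.

    influence: how much the AI output drives the answer to the question
        of interest relative to other contributing evidence.
    consequence: impact of an adverse outcome from an incorrect output.
    """
    i = LEVELS.index(influence)
    c = LEVELS.index(consequence)
    # Conservative rule: the higher of the two factors governs.
    return LEVELS[max(i, c)]

# A model whose output is the sole evidence for a high-consequence
# decision (e.g., skipping inpatient monitoring) lands in the top tier.
print(model_risk("high", "high"))   # high
print(model_risk("low", "medium"))  # medium
```

Higher tiers would then call for more rigorous credibility assessment activities under Steps 4 through 6.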
2. Prioritize life cycle maintenance — and create a plan to manage it.
The Draft Guidance also highlights the importance of life cycle maintenance, or the management of changes to the AI model to ensure it remains fit for use for its COU throughout the drug product life cycle. Since AI models are data-driven, they can autonomously adapt without human intervention — and this requires ongoing monitoring. Still, the level of oversight required should correspond to the model risk outlined in Step 3 of the credibility assessment plan.
FDA recommends adopting a risk-based life cycle maintenance plan including model performance metrics, monitoring frequency, and retesting triggers. Quality systems should incorporate these life cycle maintenance plans, and marketing applications should include a summary of any product or process-specific AI models.
Any AI model changes affecting performance should be reported to FDA if required pursuant to applicable regulations.
3. Engage with FDA early and often.
Sponsors and other interested parties should proactively reach out to FDA to clarify regulatory expectations regarding the use of AI models in drug and biologic development. As noted above, early engagement with FDA allows sponsors to set expectations regarding the appropriate credibility assessment activities for the model and to identify and address potential challenges early in the process.
Sponsors may request a formal meeting with FDA to discuss the use of AI in connection with a specific development program. The agency also cites the following engagement options depending on the AI model’s intended use:
- Center for Clinical Trial Innovation (C3TI)
- Complex Innovative Trial Design Meeting Program (CID)
- Drug Development Tools (DDTs)
- Innovative Science and Technology Approaches for New Drugs (ISTAND)
- Digital Health Technologies (DHTs) Program
- Emerging Technology Program (ETP)
- CBER’s Advanced Technologies Team (CATT)
- Model-Informed Drug Development (MIDD) Program
- Real-World Evidence (RWE) Program
Conclusion
FDA’s Draft Guidance provides a helpful roadmap for sponsors and manufacturers navigating agency expectations around AI modeling and drug development.
In summary, FDA has recommended the following steps:
(1) Follow the risk-based framework for AI model credibility;
(2) Create (and follow) a life cycle maintenance plan; and
(3) Engage with FDA about the agency’s emerging regulatory expectations.
On January 23, President Donald Trump signed an executive order “Removing Barriers to American Leadership in Artificial Intelligence” and took steps to rescind the Biden administration’s executive order on AI. The Biden-era order had placed certain restrictions on businesses in an effort to create safeguards for AI development, protecting against issues that may emerge amid automated decision-making in employment contexts, as well as potential worker displacement. This shift, along with changes at FDA under the new administration, will require careful monitoring of AI policies as they continue to evolve.
If you have questions about the impact of FDA’s Draft Guidance on “Considerations for the Use of AI to Support Regulatory Decision-Making for Drug and Biological Products,” we recommend consulting with legal counsel, including Troutman Pepper Locke.
[1] AI refers to “a machine-based system that can, for a given set of human defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments.”
[2] FDA Proposes Framework to Advance Credibility of AI Models Used for Drug and Biological Product Submissions | FDA; see also Artificial Intelligence for Drug Development | FDA.
This article was originally published on February 10, 2025 on Law360 and is republished here with permission.
Businesses are constantly seeking innovative ways to improve their customers’ experiences.
In past centuries, it was the owner of the general store who knew a customer’s purchase preferences and needs. The owner would order goods they knew their customers would need and would market those items. In today’s market, delivering a personalized experience requires analyzing data, which in turn requires complying with, among other things, privacy laws and customer expectations regarding their privacy.
One innovation intended to address these compliance concerns is the “data clean room,” or DCR, a cloud data processing technology that allows companies to exchange and analyze data without sharing their entire customer information database.
For example, advertisers might analyze consumer purchase pattern data from different businesses in DCRs to offer targeted discounts to their customers only for services they would be interested in; credit card companies could leverage DCRs to share anonymized transaction history to identify fraud across different platforms; and retail shops can combine purchase histories with demographic information to curate products tailored for each consumer.
When used properly, DCRs can be immensely beneficial, but users must put proper security measures in place to balance those benefits against consumer privacy.
On Nov. 13, the Federal Trade Commission released a blog post[1] about DCRs to warn businesses not to think of DCRs as a one-stop solution to solve all compliance issues, because, despite their squeaky-clean name, the FTC believes DCRs can have complicated implications for user privacy.
How Data Clean Rooms Work
According to IAB Technology Laboratory’s DCR Guidance and Recommended Practices, a DCR is a “secure collaboration environment which allows two or more participants to leverage data assets for specific, mutually agreed-upon uses, while guaranteeing enforcement of strict data access limitations.”[2] This means that companies can share and match their deidentified transaction data to provide their consumers tailored experiences.
Prior to the use of DCRs, companies used anonymization techniques to protect consumer privacy while analyzing datasets subject to laws with use limitations — for example, replacing names with pseudonyms.
However, with advanced artificial intelligence algorithms able to sweep the internet and better analyze data patterns, there are growing concerns that anonymized data can be reverse-engineered if the unique characteristics of the data are combined with external information. This could lead to individuals being reidentified, despite a company’s best efforts to protect consumers’ personally identifiable information.
DCRs mitigate such concerns by providing a tool that further deidentifies data while still producing analysis that allows companies to provide consumers personalized experiences.
Specifically, DCRs use differential privacy. Differential privacy adds another layer of protection to anonymized data, making it harder to reverse-engineer personal information. Differential privacy is achieved by using mathematical frameworks to intentionally inject “noise,” or irrelevant data, into aggregated datasets. The added data preserves the pattern of data for users to analyze but prevents them from reversing the pattern to track the information of any particular individual.
DCRs can be analogized to seeing a gathering of people through frosted windows. You might be able to get a general idea of whether music is playing or how many people are present, but you won’t be able to discern the exact song or attendees’ faces.
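The noise-injection idea behind differential privacy can be sketched with the Laplace mechanism, the classic approach to privatizing an aggregate query. The dataset, epsilon value, and field names below are illustrative assumptions, not drawn from any particular DCR product.

```python
# Minimal sketch of the Laplace mechanism for a counting query.
# Illustrative only; production DCRs use hardened implementations.
import math
import random

def _laplace(scale: float) -> float:
    """Sample Laplace(0, scale) noise via the inverse CDF."""
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

def dp_count(records, predicate, epsilon: float = 1.0) -> float:
    """Return an epsilon-differentially-private count of matching records.

    A counting query changes by at most 1 when any one person's record
    is added or removed (sensitivity 1), so Laplace noise with scale
    1/epsilon is sufficient.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + _laplace(1.0 / epsilon)

# Hypothetical purchase data: 10 of 30 shoppers bought shoes.
purchases = [{"bought_shoes": i % 3 == 0} for i in range(30)]
noisy = dp_count(purchases, lambda r: r["bought_shoes"], epsilon=1.0)
# `noisy` hovers near 10 but masks whether any one shopper is included.
```

The aggregate pattern survives (the count is close to 10), while the noise makes it mathematically hard to infer any single individual's presence — the "frosted window" effect described above.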
Additional DCR security measures may be added through a combination of data isolation, privacy-enhancing technologies, privacy control mechanisms and access controls, all of which ensure strict data protections while enabling analysis. Data isolation allows companies to separate different datasets and limit access to only certain subsets of data.
Companies can then manage both access and the potential effects of a data breach, even if a DCR is compromised. Privacy-enhancing technologies such as encryption and injection of irrelevant data can minimize the risk of personal data being tracked back to the individual. Access control mechanisms such as limiting the number or type of queries or access time can give DCR users additional control over each party’s data use.
Regulatory Concerns and Lessons From Enforcement Examples
As reflected in the FTC’s post, regulators are placing increasing scrutiny on technologies like DCRs, cautioning that they are not a “magic bullet” that automatically guarantees privacy compliance.
While DCRs allow companies to utilize their own datasets for analysis, the FTC notes, their efficacy depends on the safeguards implemented by the companies operating them. Effective efforts to regulate new technologies must include industry input and objectively address any potential issues.
There is a risk that excessive or burdensome regulation could tie up new technology based on only a few instances of companies pushing the limits, thereby stifling innovation and ultimately harming consumers. DCRs, when used with the proper security and administrative controls, help further the cause of protecting consumers’ privacy through additional deidentification.
Federal and state regulators should focus on making DCR use safer, not making their use unfeasible. It is expected that the Republican-led FTC will agree. Andrew Ferguson, chairman of the FTC, has expressed that he will not be on the “pro-regulation side of the AI debate,” and raised concerns that if “regulators and lawmakers attempt to ban or seriously curtail targeted advertising, they will be undoing the balance of the online economy.”[3]
With a change in leadership, the FTC will likely be less aggressive toward regulating technology such as the DCR.
Even absent direct rulemaking, the risks of failing to ensure privacy safeguards when using DCRs or other technology remain. The FTC enforces prohibitions against unfair or deceptive acts or practices under Section 5 of the FTC Act, and the failure to implement good technical, administrative and physical controls may lead to FTC enforcement.
For instance, in January 2024, the FTC issued an order against X-Mode addressing the allegedly improper collection and use of precise geolocation data without consumers’ affirmative express consent.[4] The order prohibits X-Mode from using, selling or disclosing sensitive location data.[5] Additionally, the FTC order mandates the deletion of previously collected precise geolocation data and the products and services developed based on it unless the consumers give consent or the sensitive location data is deidentified.
The FTC found X-Mode’s original notices insufficient because, while they disclosed the collection, sharing, and use of location information for ad personalization and analytics, they did not call out the collection and use of sensitive location data for certain sensitive purposes.
Similarly, that same month, the FTC ordered InMarket Media, a data aggregator and digital marketing company, to delete all the location data it previously collected, and any products developed using this data, due to allegedly failing to fully inform consumers about how their data could be used for targeted advertisements.[6] The data and products derived from this data were ordered to be deleted unless the company obtains consumer consent or ensures the deidentification of the data.
The FTC has also brought cases against BetterHelp in March 2023[7] and GoodRx in February 2023[8] for allegedly disclosing consumers’ sensitive health data without proper authorization. These examples underscore the importance of maintaining transparency and obtaining consumer consent for companies to avoid legal exposure.
While it is uncertain whether the FTC’s enforcement priorities may change, state attorneys general have similar unfair or deceptive acts or practices authority and thus could similarly police consumer data privacy.
For example, the California Privacy Protection Agency implements and enforces the California Privacy Rights Act of 2020. The California attorney general’s office has also secured settlements against businesses in the retail, food service and mobile game industries for alleged violations of the California Consumer Privacy Act.
Additionally, Texas Attorney General Ken Paxton launched a dedicated team in his Consumer Protection Division to focus on enforcing Texas’s privacy laws, including the Deceptive Trade Practices Act. The current patchwork of state privacy laws provides different regulatory frameworks for consumer data privacy.
Companies should ensure they have robust privacy policies, procedures, and personnel or business practice trainings in place to strengthen administrative control over consumer information in compliance with states’ comprehensive privacy laws.
Best Practices for Mitigating Risks
All strong compliance programs adopt privacy by design and defense in depth. This starts with reasonable technology controls.
DCRs already establish access and rights controls, and either the DCR or its users deidentify consumer data. However, DCR users must conduct diligence to ensure that such controls are sufficient. For example, there has been much debate about what steps are required to truly deidentify information.
Regulators at the state and federal level have tried to provide guidance on this question. For instance, the Health Insurance Portability and Accountability Act (HIPAA) provides concrete guidelines on how protected health information can be considered deidentified. The HIPAA safe harbor standard is satisfied when specific patient identifiers, and identifiers of related persons, are removed and the covered entity has no actual knowledge that the remaining information could be used to reidentify the patient.
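The safe-harbor approach can be sketched as a simple field-stripping pass. This is illustrative only: HIPAA enumerates 18 identifier categories, only a few of which appear below, and the record fields are hypothetical. Real deidentification also requires the "no actual knowledge" determination, which no script can supply.

```python
# Hypothetical sketch of safe-harbor-style deidentification; the field
# names are invented and the identifier list is abbreviated.
SAFE_HARBOR_FIELDS = {"name", "ssn", "phone", "email", "address", "mrn"}

def deidentify(record: dict) -> dict:
    """Drop direct identifiers and coarsen quasi-identifiers."""
    clean = {k: v for k, v in record.items() if k not in SAFE_HARBOR_FIELDS}
    # Safe harbor permits only the first three ZIP digits (and only for
    # sufficiently populous areas) and generalizes ages over 89.
    if "zip" in clean:
        clean["zip"] = clean["zip"][:3]
    if clean.get("age", 0) > 89:
        clean["age"] = "90+"
    return clean

print(deidentify({"name": "A. Patient", "ssn": "000-00-0000",
                  "zip": "19103", "age": 93, "dx": "J45"}))
# → {'zip': '191', 'age': '90+', 'dx': 'J45'}
```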
Once the protected health information is deidentified, it is not considered protected health information, and the restrictions on its use or disclosure are much less stringent. These concepts should be addressed in any agreement with the DCR. In addition to diligence, companies engaging DCRs or similar devices should consider additional administrative controls.
Consumer Notice and Consent
Organizations should implement a clear notice and consent process for using a DCR to analyze consumer information. As with the application of any new technology, businesses should review and update their privacy policies to provide consumers notice and obtain consent covering the full range of potential uses and sharing, such as the use of DCRs.
For example, a business should limit its practices to the uses it disclosed to consumers and for which it obtained consent, including the use of DCRs, so that a reasonable consumer would expect such uses and/or sharing.
This requires tagging the data with consent rules to align with the consumer’s expressed desires. The business can then limit where that personal information is being disclosed, shared or sold to align with the consumer’s consent.
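Consent tagging of this kind can be sketched as follows. The field names and purpose labels are hypothetical illustrations, not drawn from any statute or DCR product.

```python
# Hypothetical sketch of consent tagging: each record carries the
# purposes the consumer agreed to, and downstream disclosure is
# filtered against the purpose of the sharing.
def shareable(records: list, purpose: str) -> list:
    """Return only records whose consent tags permit `purpose`."""
    return [r for r in records if purpose in r.get("consented_purposes", set())]

customers = [
    {"id": 1, "consented_purposes": {"analytics", "ads"}},
    {"id": 2, "consented_purposes": {"analytics"}},
]
# Only customer 1 consented to advertising uses, so only that record
# may flow into an advertising DCR.
print([r["id"] for r in shareable(customers, "ads")])  # [1]
```

In practice, the tag set would be populated from the consent record captured at collection and checked before each disclosure, sale, or DCR match.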
Vendor Management
A DCR provider who has access to the datasets could cause the personally identifiable information to leave the DCR. To prevent issues from arising, a business must have a solid vendor management program.
While there are no one-size-fits-all solutions, businesses should consider several factors as part of their vendor management programs. A business should review the applicable state privacy laws to check whether its vendors qualify as service providers under those laws.
If a vendor qualifies, the business should specify in the vendor contract the purpose of processing personal information; restrictions prohibiting the vendor from retaining or disclosing information; protocols in the event of a security incident; confirmation that the vendor will cooperate in the business’s compliance with privacy laws; and the vendor’s responsibility of maintaining reasonable security practices and properly segmenting data they process on behalf of the business.
It would also be helpful to include the business's right to audit the vendor's security practices, to authorize or object to the vendor's selection of subcontractors, and to require that those subcontractors assume the same obligations as the vendor.
Businesses engaging DCR providers should additionally consider the vendor's procedures and policies regarding data deidentification and data ownership. During due diligence, companies should test the administrative controls and security procedures the vendor has in place to confirm that the DCR, and the data processed in it, will remain deidentified and accessible only to necessary employees.
Furthermore, businesses should consider who holds rights in the processed data that comes out of a DCR. Data ownership, trade secret, and copyright issues can arise when the business and the vendor do not agree in advance on who has rights in the analyzed dataset. Setting up a contributory model, a set of defined guidelines governing how contributors may add to the system, can also help maintain proper administrative controls over the data.
DCRs offer a robust solution for organizations that seek to improve the customer experience while also protecting customer privacy. When equipped with appropriate security measures, DCRs can mitigate reidentification concerns, enabling businesses to analyze data and tailor products and services to customer needs.
Implementing comprehensive administrative controls, security processes, and vendor management systems is a vital step for businesses seeking to leverage innovations like DCRs within the boundaries of legal compliance.
[2] https://iabtechlab.com/blog/wp-content/uploads/2023/06/Data-Clean-Room-Guidance_Version_1.054.pdf.
[3] https://www.ftc.gov/system/files/ftc_gov/pdf/guardian-ferguson-dissenting-statement-final.pdf.
[7] https://www.ftc.gov/legal-library/browse/cases-proceedings/2023169-betterhelp-inc-matter.
[8] https://www.ftc.gov/legal-library/browse/cases-proceedings/2023090-goodrx-holdings-inc.
Jorden Rutledge, an associate in Troutman Pepper Locke's Business Litigation Practice Group, was interviewed in the February 7, 2025 InformationWeek article, "What Types of Legal Liabilities Are Emerging From AI?"
In 2024, the global solar energy generation industry experienced its largest-ever annual rise, fueled by China's 44% increase in solar output from January to November 2024.[1] Solar output also continues to expand domestically: the U.S. generated approximately 283 terawatt-hours in 2024, a 14.7% share of the global market. Solar now accounts for more than half of all new electricity on the U.S. grid and, with a continued focus on renewable energy, is projected to keep growing.[2]
The industry's growth has been fueled by decreasing solar panel costs, improvements in solar technology, and supportive government policies. Like other global industries, however, solar power relies on a globalized supply chain. Many of the components and materials needed to manufacture photovoltaic solar cells (PV cells), the key component of a solar panel that converts sunlight to electricity, originate overseas. China dominates the manufacturing and export of these components, although it has recently expanded its production chains into surrounding countries such as Malaysia, Vietnam, and Thailand. The U.S. relies heavily on imports of these components to produce solar panels domestically.
Given U.S. solar providers' substantial reliance on imported materials, tariffs have naturally played a consistent role in shaping the growth of the U.S. solar industry. Tariffs, however, are nothing new, and players in the solar industry have navigated tariffs and other restrictions on the import of necessary materials for years. In 2012, the Obama administration set duties of roughly 36% on imports of Chinese solar cells and panels. Over the following decade-plus, foreign suppliers of PV cells and other necessary components have engaged in a constant push-and-pull with U.S.-imposed tariffs. The first Trump administration imposed Section 201 tariffs on imported solar cells and modules in January 2018 for a period of four years, and the Biden administration then extended and modified those safeguard tariffs for an additional four years in February 2022.[3]
When U.S. administrations, working with the Office of the U.S. Trade Representative, have raised tariffs, domestic suppliers have often accused foreign firms of bypassing them through new supply methods or export locations to continue growing their global solar market share.[4] The most impactful of these cases was brought by Auxin Solar in February 2022, when the California-based solar panel manufacturer petitioned the U.S. Department of Commerce, alleging that Chinese solar manufacturers were circumventing antidumping and countervailing duty (AD/CVD) orders in place against Chinese-origin solar cells and modules by building portions of the components in Cambodia, Malaysia, Thailand, and Vietnam. The case significantly disrupted solar development cycles, as a large share of projects could not confidently project capital expenditures while the proceeding was pending.
More recently, the Biden administration took steps to improve the domestic solar supply chain. In September 2024, President Biden announced a $40 million investment across the domestic solar supply chain to improve the life cycle of photovoltaic solar systems and to boost the global competitiveness of U.S. manufacturing.[5] Further, effective January 1, 2025, President Biden announced a doubling of tariffs on polysilicon and solar wafers, key components of solar cells, imported from China.[6] Perhaps most impactfully, the Inflation Reduction Act includes both (i) a domestic content bonus credit, which enhances the investment tax credit and production tax credit for otherwise qualifying projects that meet certain domestic sourcing requirements,[7] and (ii) the Section 45X Advanced Manufacturing Production Tax Credit for the production in the U.S., and sale to unrelated persons, of eligible components.[8]
As the new administration takes office, it is uncertain what policies may be continued or newly implemented to boost the solar industry and renewable energy generally. However, President Trump has foreshadowed the enactment of more trade policies to protect U.S. interests and, potentially, to boost domestic manufacturing. These include imposing tariffs as high as 60% on imports from China and 10% on all other goods coming into the U.S.[9] On February 1, 2025, Trump provided a look into how these future tariffs may take shape, announcing 25% tariffs on Mexico and Canada, with a lesser 10% tariff for Canadian crude oil, and a 10% tariff on China.[10] Although details of which specific industries will face increased tariffs are likely to come into focus as the tariffs are implemented, foreign components critical to the solar industry are unlikely to be spared.
Beyond tariffs and duties, outright import restrictions and/or prohibitions on the incorporation of equipment in the bulk power electric grid are conceivable. In 2019 and 2020, the first Trump administration issued a pair of executive orders prohibiting the acquisition and installation of “bulk-power system electric equipment” supplied by foreign adversaries (including China) and persons subject to their control.[11] The orders were ultimately largely revoked[12] under the Biden administration, but until then created significant confusion among the solar development community about the extent to which Chinese equipment was permitted to be incorporated into utility-scale projects.
The Biden administration, however, continued the trend in some respects, first with the Hoshine Withhold Release Order,[13] and then with the passage and implementation of the Uyghur Forced Labor Prevention Act, each targeting goods (including solar panels) tainted by compulsory labor in western China for exclusion from entry into the U.S. The Biden administration named several Chinese solar vendors to the UFLPA blacklist,[14] and the Trump administration appears poised to continue to make use of the blacklist.[15] An escalation of the trend of excluding Chinese equipment from the U.S. energy supply chain could have a destabilizing effect on certain projects that cannot simply be absorbed into project economics the way at least some level of tariffs can be. Here again, the onshoring of supply chains, and nimble advocacy and procurement practices, may help mitigate the disruption.
It remains to be seen whether additional tariffs will be imposed and how such tariffs may promote or disrupt domestic production of solar cells and emerging supply chains.[16] In the short term, the prospect of additional tariffs presents the U.S. solar industry with a potentially challenging landscape, considering the current gap between U.S. solar module manufacturing capacity and the availability of domestic solar cells, wafers, ingots, and polysilicon. However, opportunities also come with a potential continued focus on promoting domestic U.S. manufacturing. The solar industry has demonstrated a decades-long track record of adaptability and continues to drive the growth of renewable energy in the U.S. Companies in the solar industry should closely track developing policies and regulations as the new administration takes office.
Troutman Pepper Locke continues to monitor these developments and is here to help you navigate this quickly evolving landscape. If you have further questions or seek advice based on your specific situation, please reach out to the authors or any member of our Construction practice.
[1] Gavin Maguire, Key solar themes to track after torrid 2024 for investors, Reuters (Jan. 8, 2025, 7:00 AM EST), https://www.reuters.com/business/energy/key-solar-themes-track-after-torrid-2024-investors-maguire-2025-01-07/.
[2] See Jeff Brady, People are rushing to install solar panels before Trump becomes president, NPR.org (Jan. 12, 2025, 5:00 AM EST), https://www.npr.org/2025/01/12/nx-s1-5228024/trump-solar-tax-credits (citing Solar Market Insight Report: Q4 2024, Solar Energy Industries Association (Dec. 4, 2024), https://seia.org/research-resources/us-solar-market-insight/).
[3] https://ustr.gov/issue-areas/enforcement/section-201-investigations/investigation-no-ta-201-75-cspv-cells.
[4] Lewis Jackson, Nichola Groom, Kripa Jayaram, Pasit Kongkunakornkul, & Sumanta Sen, US solar tariffs can’t keep up with Chinese firms, Reuters (Nov. 3, 2024, 10:00 PM EST), https://www.reuters.com/graphics/USA-CHINA/SOLAR-HISTORY/gdpzkdeqlvw/.
[5] Energy.gov, Biden-Harris Administration Announces $40 Million to Support a Domestic Solar Supply Chain, U.S. Dept. of Energy (Sept. 12, 2024), https://content.govdelivery.com/accounts/USEERE/bulletins/3b50fcb.
[6] Notice of Modification: China’s Acts, Policies and Practices Related to Technology Transfer, Intellectual Property and Innovation, 89 Fed. Reg. 76581 (Sept. 18, 2024).
[7] https://www.troutman.com/insights/treasury-and-irs-issue-updated-domestic-content-guidance-under-ira-and-first-updated-elective-safe-harbor.html; https://www.irs.gov/credits-deductions/domestic-content-bonus-credit.
[8] https://www.troutman.com/insights/clean-energy-tax-credits-explained-the-section-45x-advanced-manufacturing-production-tax-credit.html.
[9] David Lawder, Trump upended trade once, aims to do so again with new tariffs, Reuters (Jan. 16, 2025, 6:24 AM EST), https://www.reuters.com/markets/us/trump-upended-trade-once-aims-do-so-again-with-new-tariffs-2025-01-16/.
[10] https://www.whitehouse.gov/fact-sheets/2025/02/fact-sheet-president-donald-j-trump-imposes-tariffs-on-imports-from-canada-mexico-and-china/.
[11] https://www.federalregister.gov/documents/2020/05/04/2020-09695/securing-the-united-states-bulk-power-system.
[12] https://www.directives.doe.gov/directives-documents/400-series/0438.1-BOrder.
[13] https://www.cbp.gov/document/guidance/hoshine-wro-updated-guidance-document-industry.
[14] https://www.dhs.gov/archive/news/2025/01/14/dhs-announces-addition-37-prc-based-companies-uflpa-entity-list.
[15] https://www.reuters.com/business/retail-consumer/trump-administration-considers-adding-shein-temu-forced-labor-list-semafor-2025-02-05/.
[16] See Kirsten Errick, US solar manufacturers face challenging landscape of tariffs and supply gap, S & P Global (Jan. 14, 2025), https://www.spglobal.com/commodity-insights/en/news-research/latest-news/electric-power/011425-us-solar-manufacturers-face-challenging-landscape-of-tariffs-and-supply-gap.