Data Dive
Written and curated by a multidisciplinary group of attorneys, Akin Data Dive delivers key insights on cybersecurity, privacy and other data-related topics impacting organizations across the globe.
During the course of any lending transaction, lenders will conduct a due diligence review of the borrower, including a review of any relevant “know-your-customer” information. In the context of a fund finance transaction, this due diligence is likely to include a review of fund organizational documents, subscription agreements and side letters, if any, from the fund’s investors. Providing this information to lenders is an essential and practical aspect of incurring any fund-level financing, and is often expressly permitted by a fund’s governing documentation. Especially in the context of a subscription credit facility, where investor commitments and the related right to collect capital contributions are the primary source of repayment for the loan, a lender will need to review materials that may contain sensitive or confidential information about investors.
Following the publication of the European Union’s Artificial Intelligence Act (AI Act or Act) on 12 July 2024, there is now a series of steps that various EU bodies need to take towards implementation. One of the first key steps concerns the establishment of codes of practice to “contribute to the proper application” of the AI Act.
On July 30, 2024, the Senate passed the Kids Online Safety and Privacy Act (S. 2073) via an overwhelmingly bipartisan vote of 91-3 shortly before departing for the August recess.
On 12 July 2024, the European Union Artificial Intelligence Act (AI Act or Act) was published in the Official Journal of the European Union (EU), marking the final step in the AI Act’s legislative journey. Its publication starts the clock for the entry into force of the Act’s many obligations, according to the deadlines we set out below. The requirement to ensure a sufficient level of AI literacy among staff dealing with the operation and use of AI systems will, for example, apply to all providers and deployers from 2 February 2025.
On June 18, 2024, the United States Securities and Exchange Commission (SEC) announced a settlement with R.R. Donnelley & Sons Company (RRD) for alleged internal control and disclosure failures following a ransomware attack in 2021. Without admitting or denying the SEC’s findings, the business communications and marketing services provider agreed to pay a civil penalty of over $2.1 million to settle charges alleging violations of Section 13(b)(2)(B) of the Securities Exchange Act of 1934 (Exchange Act) and Exchange Act Rule 13a-15(a).1
In May, the National Institute of Standards and Technology (NIST) issued updated recommendations for security controls for controlled unclassified information (CUI) that is processed, stored or transmitted by nonfederal organizations using nonfederal systems (NIST Special Publication 800-171 (SP 800-171), Revision 3). These security requirements are “intended for use by federal agencies in contractual vehicles or other agreements that are established between those agencies and nonfederal organizations.”1 While these new controls apply only to nonfederal entities that agree to comply with the new issuance, Revision 3 signals the next phase of security expectations for government contractors.
On May 21, 2024, the European Union finalized the adoption of the groundbreaking EU Artificial Intelligence Act, comprehensive, sector-agnostic legislation with global reach. This 420-page Act aims to regulate the development and deployment of AI systems, categorizing them as high-risk or low-risk and banning certain types of AI outright. The Act emphasizes trust, transparency and accountability in AI usage, promoting the safe integration of AI technologies. This legislation sets a potential global benchmark for AI regulation, although its complexity may pose interpretation and implementation challenges for stakeholders. We set out the key provisions below.
On May 17, 2024, Colorado Governor Jared Polis signed into law S.B. 205, a pioneering piece of legislation aimed at regulating high-risk AI systems. This new law, set to take effect on February 1, 2026, introduces stringent requirements for AI developers and deployers, focusing on risk management and the prevention of algorithmic discrimination. This legislation marks a significant step in state-level AI regulation and could set a precedent for other states, much as the GDPR did for privacy laws.