Exploring the Intersection of Intellectual Property, Copyright and Artificial Intelligence | Akin Intelligence
In this episode of OnAir with Akin, lobbying and public policy partner Hans Rickhoff and senior counsel Reggie Babin lead a discussion with intellectual property partner David Vondle on intellectual property, copyright, and how they intersect with both the courts and federal policy.
Subscribe to OnAir with Akin via iTunes, SoundCloud, YouTube, Spotify and Google Podcasts.
Episode Transcript
Hans Rickhoff:
Welcome to our December edition and our second episode of the Akin Intelligence Podcast. These podcasts are designed to complement all of the other material that Akin puts out related to artificial intelligence, both here in the United States as well as internationally. We are very fortunate today to be joined by our colleague David Vondle, who is going to give us insight into intellectual property, copyright, and how that intersects both with the courts and with federal policy. Reggie and I are very fortunate to have David here, and he is going to dive into some of these issues. But before he does, Reggie, you have been paying very close attention to your former boss's Insight Forums, and Leader Schumer had one specifically on intellectual property and copyright. Maybe you could give us some perspective on that Insight Forum and how you see these Insight Forums moving forward.
Reggie Babin:
Yeah, and thank you Hans. As we're recording this, the Insight Forum series has wrapped for the year. As you mentioned, since we last spoke, there was a session on copyright and IP in which members from the entertainment and creative community were able to give their perspectives on how intellectual property law should be treated as it relates to artificial intelligence. And this is one area that frankly has risen to the fore as one of immediate concern.
There are a number of legislative proposals in the House and the Senate on how to deal with things like deepfakes and the use of the voices and images of public figures. And it's an area, frankly, that Congress is really trying to wrap its arms around, but it's also an area that the courts have weighed in on, which is why I'm really excited to have David here today to talk about some of the work he's been doing to track the developments at the intersection of intellectual property, artificial intelligence, and the law. We'll see into next year whether Congress is able to actually advance legislation on this issue, but in the meantime, we can expect that the judiciary will play a role in shaping how this technology, and these computer-generated images, text, and voices, will be treated under existing intellectual property law.
Hans Rickhoff:
And David, as Reggie pointed out, this has really been a focus of Congress, but it's also been a focus of the courts. As Congress determines what's appropriate as it relates to copyright law, how do you see the ongoing activity in the courts, and how do you see this issue progressing there, recognizing that Congress has a lot on its plate right now and that the courts may be making some of these interpretations and decisions faster than Congress is able to actually put out new law and regulate?
David Vondle:
Yeah. Thanks Hans, and thanks Reggie for having me on today. Just to take a step back for a big-picture perspective, I'm going to talk about what a copyright is and how we got to this point. The statutory definition of a copyright is an original work of authorship fixed in a tangible medium of expression. Now, that's the legal definition. Basically what that means is there must be a definite, perceptible form put down on something, instead of just in somebody's head. So once it's put down, a copyright attaches to that work, and those works can be anything. It could be books, still images, moving images, audiovisual works, sculptures, musical works, website material, software; anything like that is potentially copyrightable. So one key question about copyright is, who is the creator? Historically, a human has always been the creator. There's a saying in copyright law that human authorship is a bedrock requirement of copyright.
So now we're seeing works of authorship generated by AI tools, whether it's generative AI or something else. Are these works protected by copyright? In the U.S. there's been a number of decisions, from the Copyright Office and, I think, starting to trickle out elsewhere in the courts, holding that AI cannot be the creator of the works and therefore AI-generated works are not protectable by copyright. At this point, that's the U.S.'s strong position: AI cannot be the creator of a copyrighted work. And I think that's consistent across the board in most countries, although just recently there was a decision out of China suggesting they may be taking the opposite stance. It's very preliminary, though, so we don't know what their stance is going to be at this point. So we could see some divergence there, but historically, a human always had to be the creator of the work. And one of the big questions coming out of this is, what are the implications of not allowing AI-generated works to be protectable by copyright?
So what that would mean is that the creator of the work can't stop others from copying or distributing their work. If an AI generates something, it's very possible that somebody else could take it and do whatever they want with it. But although an AI-generated work is not protectable by copyright, one of the things we're seeing is that it can still infringe someone else's copyright, and the damages could be significant. So that's still an open question. It's very early stages in some of the pending litigation going on against these AI companies that provide these AI tools. But that is one issue we're seeing: although you cannot register your copyright in these works that are generated by AI tools, you may still be liable, for a lot of different reasons, for copyright infringement or for other claims as well. And those are continuing to work through the courts on a case-by-case basis.
Reggie Babin:
Now, is there a certain amount of human modification that could lead to an AI generated work being copyrightable, or is the mere fact that the original draft was generated by an artificial intelligence system de facto disqualifying, in terms of eventually pursuing a copyright?
David Vondle:
So yes, there is some human contribution that can be made to the output of the generative AI. We don't yet know exactly where the lines are being drawn. There's a very recent case out of the Copyright Office where somebody created a graphic novel with text, and the images were output by generative AI. And what the Copyright Office said, in essence, was that the output of the generative AI was not eligible for copyright. However, the text that the author put into the graphic novel was potentially eligible for copyright. So there has to be some human contribution. That's still working its way through, and there are some really interesting developments in a number of other cases similar to that. But that's a good example of where a human contribution can lead to some form of copyright protection. It won't cover the entire work, but it will cover whatever the human contributed to it. Again, it's just a question of where the line is going to be drawn, and at this point, we're not really sure.
Hans Rickhoff:
David, what we're seeing on Capitol Hill a lot, are a number of creators and innovators coming to members of Congress and complaining that their works are being used to train large language models, and the output looks very similar to their original pieces of work. What rules and regulations are available to them? And what do you see as some of the potential issues moving forward for Congress to contemplate as they try to figure out who actually helped train these models? And is there any type of ownership? And is there any type of remuneration that those original authors deserve, in light of that?
David Vondle:
There's a number of pending lawsuits involving that specific issue right there. It's still very early stages, and we're seeing it work its way through the courts. In essence, what we're talking about is the use of third-party content or data to train AI. And we're seeing that a lot of these AI companies are taking what I'll call risks, risks that could lead to potential liability for infringing someone else's copyrights. This has led to numerous plaintiff and class-action lawsuits that are still, as I mentioned, in the very early stages, because these AI large language models are trained or fine-tuned using this third-party content, largely through the scraping of websites. Usually the scraping of the websites is done without authorization or without a license from the websites, which could lead to potential copyright infringement issues, and it could be in violation of the website's terms of use or whatever terms the website has. And that could also lead to a breach of contract claim.
So you're seeing a lot of creators, authors of books, authors of videos, things like that, suing the AI companies, at this point, for improperly using their works. It's very early stages, but the AI companies will defend themselves, and they're going to get into some interesting legal issues in those cases about whether there actually is a copyright there, and if there is, whether there was a fair use, which is a statutory defense to a copyright infringement claim. And again, that's going to be a very fact-specific, case-by-case analysis. But those cases could lead to a lot of potential liability, not just for the AI companies, but potentially for their users as well, although some of the companies are offering indemnification for potential infringement claims.
Reggie Babin:
Can you explain just a bit what the fair use doctrine is? Because it's a thing that, particularly when you're dealing with IP policy on the hill, if you run out of options in terms of your argument, you'll just throw out the fair use doctrine and put it on the table as a trump card. Can you give just a brief explainer for the listeners as to what that is and why it matters for the AI conversation?
David Vondle:
Sure. So fair use is a statutory defense to copyright infringement. In short, a plaintiff has sued someone for copyright infringement and the defendant has said, "Well, even if I am infringing your copyright, what I'm doing is considered a fair use." The Copyright Act lists four factors to be considered for fair use. The first factor is the purpose and character of the use, including whether the use is of a commercial nature or is for nonprofit, educational purposes. So for example, if the user is doing something with the copyrighted work and then repackaging it and selling it, that's probably going to be considered a commercial use. But if they're using it for research purposes, that may not be considered a commercial use, and they'll weigh the facts on that particular factor.
The second factor is the nature of the copyrighted work. What kind of work is it? Is it an image? How is it being used, well, we'll get to that factor in a second, but what's the nature of the copyrighted work? The third factor is the amount and substantiality of the portion used in relation to the copyrighted work as a whole. So are you using just a small piece of the work? Is it one page of a thousand-page book, or are you using the entire book to put into your AI, into your large language model? And the fourth factor is the effect of the use on the potential market for, or the value of, the work. So are you undercutting what the copyright owner could make for that work? All these factors, again, are very fact specific, and this is all relatively new in the AI space.
So some of the facts that will be considered in the fair use context, I believe, are going to be: how is the copyrighted work being used? Is it used to train the AI, or is it being used in the application of the AI? Is the user putting in prompts, things like that? They're also probably going to look at the actual AI algorithm and see the content and the data used to train it. They're going to look at how many works are used to train the model. Is the copyrighted work one of one that's used to train it, or is it one of a million? And if it's one of a million, I imagine that's probably going to lean more towards the fair use side. Does the AI replicate a specific creator's style? Which is what we've seen a number of times, in other forms, not necessarily just for AI, but we've seen it recently in a Supreme Court decision involving a picture of Prince, the artist formerly known as Prince.
Reggie Babin:
Rest in peace.
David Vondle:
Yes, rest in power. And then there's the scope of the use. Is it training AI? Is it for a commercial purpose, or for research or scholarship? There are no easy answers here, basically. So I imagine this will take several years to work its way through the courts, but all these fair use facts that we're talking about now are going to be considered relatively novel. So I think we're going to see some really interesting, novel defenses on the fair use side that we haven't seen before.
Hans Rickhoff:
Liability issues are front and center with AI, and you touched on indemnification. Can you talk a little bit more about how users of this technology should feel comfortable using it in light of some of these indemnification clauses that companies are putting forward? And how do you think those will play out, either in the courts or as Congress looks at them?
David Vondle:
That's a tough one. The short answer to Hans's question is that a lot of the companies that supply AI products have recently come out with statements that say, "We will indemnify all of our commercial users to the extent there are any claims of copyright infringement." Some others have gone a little bit further, and I think they may have even gotten into privacy issues. So that's good. But there are some holes in the indemnification that I think the AI companies may use to try to navigate around their indemnification obligations. One thing they've said is, "We'll indemnify you unless you are using our AI to intentionally infringe." Now what does that mean? If the indemnification applies to an enterprise with thousands of people and there's one bad actor there, are they going to say that is sufficient to get out of the indemnification altogether, so the enterprise is not covered, or is it just that one user who is excluded?
So I do think this indemnification issue will certainly be litigated, because I'm sure there will be some users out there who will be excluded from indemnification, and there will be lawsuits between those users and the AI companies seeking to enforce the indemnification obligations so the users are not on the hook. But again, this is a very recent development; some of these indemnifications have come out within the last few months, and I think they're continuing to evolve, and some of the developments that are forthcoming could really shake up what those indemnification obligations mean for those companies.
Reggie Babin:
Now, can we pivot a bit to the executive branch? In our last episode, we discussed President Biden's recent executive order, and I'm fairly certain that no pages have been removed since we last spoke, so it is still the longest executive order in history. Hans described it as having something for everyone, but not too much for anyone. So as someone who spends a lot of time thinking about intellectual property and artificial intelligence, can you tell us a bit about what is in there for you, Mr. Vondle?
David Vondle:
Of course. I think there's a couple of things in there. Before I even address the executive order, I will say that the U.S. Copyright Office is performing a study right now. Earlier this year, the Copyright Office began an initiative to examine copyright law and policy issues raised by AI. They had several public listening sessions, and they hosted public webinars to gather and share information about current technologies and their impacts. Then the Copyright Office published a notice of inquiry in the Federal Register, I believe in August, saying they're going to issue a study looking at the issues surrounding generative AI, and they want to use the information to analyze the current state of the law, identify any unresolved issues and evaluate potential areas for congressional action.
So they requested comments about the use of copyrighted works to train AI models, the appropriate levels of transparency and disclosure with respect to the use of copyrighted works, the legal status of AI-generated outputs, and the appropriate treatment of AI-generated outputs that mimic the personal attributes of human artists, to go back to the Prince issue again. The deadline for initial comments on that notice of inquiry was October 30th of this year, and there were nearly 10,000 comments received. I've looked at a few of them, but I'm not going to look at all of them.
Reggie Babin:
How close to 10,000 do you think you'll get?
David Vondle:
Maybe a couple dozen.
Reggie Babin:
Okay.
David Vondle:
Those are all available for everybody to look at on regulations.gov. Replies to those initial comments were actually due yesterday, so I've not looked at any of those replies yet, but I expect there will be a similar number of replies or maybe even more. So you have the Copyright Office that's going to issue this study. The executive order issued in October, I think, appreciated that the Copyright Office was performing this study, because it directed the Copyright Office, within 270 days of the order or 180 days after publishing the AI study, whichever is later, to recommend additional executive actions to address issues related to copyright protections for AI-generated works and the use of copyrighted works to train AI.
In connection with that, they also directed the United States Patent and Trademark Office, which we haven't talked about today, to provide guidance to patent examiners and patent applicants on how to address the use of AI. Similar to copyright, under U.S. law an AI cannot be an inventor and therefore can't obtain patent protection, but a human using AI as a tool could potentially obtain protection, and I think we're seeing that work its way through the Patent and Trademark Office. So I expect this issue will be addressed in the USPTO's guidance in response to the executive order. The executive order also has additional directives for the Patent Office and the Copyright Office to consult and issue additional recommendations to the White House on potential executive actions. So I think once this Copyright Office study is issued, we're going to see a lot more movement in response to the executive order that issued in October.
Hans Rickhoff:
Talking a little bit about consent, you've touched on a number of these issues when it comes to creators and developers, especially in the context of training data sets, when you're thinking about content provenance and when you're thinking about things like watermarking or labeling. I'll note that outside of the executive order, we obviously haven't really seen any comprehensive AI legislation pass. But you said yesterday, and today, for our listeners, is December 8th, so that was December 7th.
David Vondle:
December 7th.
Hans Rickhoff:
Well, also on December 7th, the conference report from the National Defense Authorization Act came out, and it even includes some provisions related to artificial intelligence. If you haven't seen those updates, please check out the Akin website for those. But one of them in particular talks a little bit about generative AI in the context of watermarks and some of those provenance issues. Can you talk about how more and more of these creators are using that, and maybe the impact on training data sets when they include these types of products that have been watermarked or labeled?
David Vondle:
Yeah, absolutely. I think creators are aware of what AI can do. It can literally take millions of works, pump them into a large language model, and then use whatever's in there to create some new work based on those millions of things. And what we're seeing is that some of these creators, these authors, have found that without any kind of watermark or some other marker, they don't know what's actually in there.
Well, they don't know if their work is actually being used by the large language model or the AI program to generate the new work. So, depending on the technology they're using, they're putting things like a watermark, essentially, into their copyrighted work so that if it is ingested by the LLM and used in a work, they'll be able to track their work as it's used by the LLM, and then they'll be able to determine for sure whether it was actually used by the AI, assuming the watermark still works. And we are actually seeing some litigation where, I believe it was Getty Images, found that the Getty Images watermark or stamp on some of their pictures was actually showing up in the output of the AI. So in that sense, they know for a fact that their images were being used, most likely without authorization or without a license. So on their end, that does help the author identify whether the work is actually being used, as opposed to just speculating or assuming that it's in there.
Hans Rickhoff:
Well, David, this has been super helpful. Let's take everything that you've gone over, over the last 15 minutes, and try to put this in context. Let's say Reggie is a huge fan of the '90s sitcom, Friends, and he feels like there's a lot-
David Vondle:
Who isn't?
Hans Rickhoff:
Yeah, who isn't?
Reggie Babin:
I'm more of a Seinfeld guy, but for the purpose of the discussion, we'll stick with-
Hans Rickhoff:
Yeah, a little bit, and he feels there's a lot of unresolved plot lines and questions that are left. So you take all those previous episodes of Friends and you use artificial intelligence to devise a new series, or maybe even to go back and fill in some of those holes and questions. What are some of the issues? You can summarize what you've already said, using that as an example. Are there any avenues where that would be permissible, like if he was going to turn it into a graphic novel or a comic book? Or is it, at the end of the day, simply not permissible? Or if it is permissible and you own the copyright, where are the lanes where you could see some light at the end of the tunnel, in terms of it actually being copyrightable content, or not having any issues in terms of it being used in the public domain?
David Vondle:
Yeah, it's a great question, and it is one that's still very much an open issue. There are a lot of issues that could relate to this particular hypothetical. One issue is that, presumably, every episode of Friends is copyrighted, and there's always the copyright notice at the end of every episode, not just of Friends, but also of NFL games or anything like that. Say you're going to simulate all the NFL games that have ever been played and find out who would win, things like that. So there are copyright issues there, of course, potential infringement issues. And I think one of the things you might say is, "Well, I'm not using that particular Friends episode. I'm not creating an episode, I'm creating a graphic novel, or a comic book, or a book, or something like that." But in copyright law, you're not allowed to create a derivative work of somebody else's copyrighted work.
So if somebody has a copyright on something, you're not supposed to be able to take that expression and then go ahead and use it for something else. So even though you may not use the copyrighted work itself, or may not copy it per se, the fact that you've created this derivative work could still leave you on the hook, potentially, for copyright infringement liability. So that's one issue. And again, if the license is there, if this is an authorized use, that removes the potential liability there, or it should, in theory. Now, what can you do with that? As we talked about a little bit, there has to be some kind of human contribution to it. And where is the line drawn? We don't know yet, and we talked about the graphic novel case. There's another case that's working its way through, I think it's worked its way through the Copyright Office officially and now it might go somewhere else, where an artist created a work that actually won the 2022 Colorado State Fair award for best image.
Reggie Babin:
[inaudible 00:22:32].
David Vondle:
[inaudible 00:22:32]. Annual fine art competition.
Reggie Babin:
That's what it was.
David Vondle:
But the author refused to disclaim any part of the work that was AI generated. So, long story short, the author used over 600 prompts to get this image and said, "I should be entitled to this entire image because this is all mine," and the Copyright Office said, "No, we're not going to register that, because this is all still AI generated, even though over 600 prompts were used to generate the image." So I think it kind of goes back to that line-drawing issue we were talking about. Where is the line drawn between the human contribution and the AI contribution? And I think just using the AI as a tool and expecting the output to be copyrightable, or to have some kind of protection over it, that's not going to be the case, at least under current law and the current understanding of what the law is.
There could be some protection for whatever the human contribution is. If you take that Friends novel, or book, or whatever you're creating, maybe Ross and Rachel have a big blowout and they don't end up together, which would make everybody very sad.
Reggie Babin:
Spoiler alert.
David Vondle:
It's been a long time since I've watched a Friends episode, so I don't know how it ended up, but I do think you would have some kind of original contribution there. But again, it kind of goes back to, are you creating a derivative work of somebody else's copyrighted work? So you may have some potential liability there.
Reggie Babin:
Awesome. Well, this has been great, David. I hope you think it was a fair use of your time.
David Vondle:
It always is. This is great. It's a fair use. We created a lot of great derivative work there.
Reggie Babin:
There you go. Beat the drum.
David Vondle:
Yes, absolutely. Well, thank you for the opportunity. This was fun.
Hans Rickhoff:
This was great. And for those listeners who didn't hear our first episode, our colleague Alan Hayes went through the executive order, and we encourage you all to go back and listen to that episode as well.
David Vondle:
And that's on video.
Reggie Babin:
Yeah, the response was so positive that we decided to pivot to audio only.
David Vondle:
You got off the hook easy. I think there's a lot of happy people about that.
Hans Rickhoff:
Moving on to the legislative side, Reggie, when we think about copyright and image and likeness, one of those issues that really resonates with members of Congress is elections. With 2024 fast approaching, how do you see Congress looking at these issues in the context of deepfakes, in the context of their individual elections? And do you see Congress doing anything related to ensuring that the public understands what's real and what's not real when it comes to some of these issues?
Reggie Babin:
This is an issue that has been flagged as an area of concern. The majority leader has expressed his desire to move maybe more quickly on the issue of addressing potential misinformation in elections, among other issues. The chair of the Senate Intel Committee, Senator Warner, has also stated that he has a small working group that is trying to figure out a way to deal with this issue. The Senate Rules Committee hosted a hearing exploring the use of artificial intelligence and its impact on elections and democracy. So it's certainly top of mind. It's one of those issues, though, that's challenging, because it starts to creep into those areas where the potential, or traditional, fault lines that trip up policy tend to lie. Right, what's the difference between misinformation and a difference of opinion, and how are we able to properly rein in the technology without infringing on free speech rights?
And those are some of the big questions that members of Congress are going to have to wrestle with, and it's not clear that they're going to be able to come to some type of quick resolution on this issue. But it's certainly something that members on both sides are looking at, and I think it's an issue that, at least at the top of next year, will be top of mind as we move into primary season. We've already seen the ad from, I believe it was the NRCC, using generative AI to produce an ad about immigration policy. So we're already up and running into election season, and there's going to be a lot of concern about what the technology could mean, as this has been described, I think, as the first generative AI election, not only in the U.S. but globally, and we'll see how policymakers tackle this issue.
Before we move on, I'm curious for your thoughts. We're now a little bit deeper into the tenure of Speaker Mike Johnson, and obviously we've seen a lot of activity in the Senate, but it's been a little less clear what the House is going to do. I'm wondering if you have any thoughts, since the last time we spoke, about where we might see the House of Representatives move on this issue as we turn the page into 2024?
Hans Rickhoff:
That's a great question, and I think the House probably feels a lot like the United States felt at the beginning of this year: they want to do a lot of catch-up and they want to do it very quickly, but they want to do it in a way that's thorough, comprehensive and well thought out. As for the new speaker, obviously there are a number of issues on his current agenda, in light of everything going on here in Washington, D.C. and across the world, but I think artificial intelligence is something that he wants to take a look at. Granted, he doesn't have the natural constituency the former speaker had when it comes to companies out of his home state of Louisiana, but he does have an interest; he did sit on the Judiciary Committee. I think he also recognizes that he wants his committees, committee chairmen and ranking members, because he wants to do this in a bipartisan way, to think about all these issues in the context of their individual committee jurisdictions and responsibilities.
So I think it's a good question. I think what we'll see is an increase in activity by the House starting in January as it relates to artificial intelligence. I think you'll see a number of the members, like you've already seen in the past, like Mr. Obernolte and Mr. McCaul, continuing to lead some of those efforts. But I think what you'll also see, which we've seen for example on the Energy and Commerce Committee, is a series of hearings that are designed around the committees and their responsibilities. And we're seeing that already. We saw some oversight hearings just last week on the recent executive order. I can see more oversight happening on the executive order if things don't progress on the timelines that were laid out in it, or if the House majority perceives that there's too much overreach from the executive order, or if more money is needed to actually implement it, since there were no funds actually allocated for the executive order.
But in short, Reggie, I think the new speaker is really going to rely on other members of Congress to help with this issue. And I think the ambition, similar to what we've seen in the Senate, would be to make that a very bipartisan effort. I think you could see some fracturing along partisan lines when it comes to, again, oversight of the EO or other specific issues. But by and large, we've seen artificial intelligence legislation be a very bipartisan process, and I think we'll continue to see that moving into 2024.
Reggie Babin:
But I think one thing that'll be particularly worth watching is that potential fracturing, right? We've been talking to clients and talking to potential stakeholders all year, talking about how bipartisan, how collegial this conversation has been. Now that we're starting to get words on paper, now that we're moving from concepts into potential mandates, we'll start to see some pushback from industry, from one political party or another on different proposals, and we'll really get into the heart of the policymaking process.
So as we move from the AI Insight Forums in the Senate to a committee-based legislative process, and as we continue to see the EO implemented, particularly on the OMB side with agency procurement and deployment, I'm looking forward to seeing how impacted stakeholders continue to engage when they express disagreements, what the tone and tenor of those disagreements is, and where those fault lines lie as we try to find consensus on any number of issues. So 2024 has a lot in store for AI policy, and Hans and I will be tracking it along with our colleagues here at the firm. We will also be joining you live from lovely Las Vegas next month as we venture out west to attend CES and figure out what the folks on the ground are thinking about AI and all of its potential use cases.
Hans Rickhoff:
Reggie is a hundred percent correct. Since the last time we had our podcast, and thank you for correcting us, David, there have been a solid, I don't know, 20-something days, and there have already been six congressional hearings. The executive order has been rolled out more in earnest, and all of that is on our website, so please take a look. You can subscribe and get updates in real time on anything AI related, both here domestically from a federal policy perspective and also internationally. So I encourage everyone to do that and look forward to our next episode, which, as Reggie said, will be at CES in Las Vegas. Thank you for listening, and thank you, David, for joining us today.
Reggie Babin:
Thank you, David.
Jose Garriga:
OnAir with Akin is presented by Akin and cannot be copied or rebroadcast without consent. The information provided is intended for a general audience and is not legal advice or a substitute for the advice of competent counsel. Prior results do not guarantee a similar outcome. The content reflects the personal views and opinions of the participants. No attorney-client relationship is being created by this podcast and all rights are reserved.