

Reality Check: A.I. Rise Demands Swift Hollywood Legal Reform

Allegedly… with Bo and Ryan | Season 2, Episode 14 | Transcript

Bo: [00:00:00] Hello everyone, and welcome to another episode of Allegedly with Bo and Ryan. I’m Bo Bowen. 

Ryan: [00:00:06] And I’m Ryan Schmidt. We’ve got quite the episode for you today. We’re looking at generative AI, specifically the legal ramifications of its use in the entertainment industry. But remember the last episode we did, “Lights, Camera, Algorithm.”

 

Bo: [00:00:21] How could I forget with a pithy little name like that, right? I mean, less than two months ago, we discussed the potential risks and rewards of AI in Hollywood, and we even touched upon the future of AI in the entertainment industry.

 

Ryan: [00:00:36] Absolutely. And in that short time, less than two months ago, generative AI has improved at absolute breakneck speed.

 

Bo: [00:00:45] It has been amazing, right? I mean, pretty much right after we talked about all these crazy future hypotheticals, all the things that generative AI may be able to do in the future. Literally the night after we released that podcast, “Black Mirror” launched its new season with an episode called “Joan is Awful.” That episode showed exactly the fears and concerns that we as entertainment lawyers have for our creator clients about this emerging technology. I mean, it was so spot on to what we talked about. Part of me really kind of feels like we willed it into existence.

 

Ryan: [00:01:31] I’ll take that credit. I mean, if you haven’t seen it yet, it’s definitely worth checking out. In the episode, there’s this tech executive named Joan, who’s played by Annie Murphy from “Schitt’s Creek,” and she finds out that a streaming service called Streamberry, which is essentially Netflix, released a show called “Joan is Awful.” The creepy part, though, is that the show mirrors her own life and is created exclusively with generative AI, which had been spying on her through her phone. Think of a Siri that’s always listening. And one night she fires up her favorite streaming app after a long day of work, just to see a lifelike yet digitally generated Salma Hayek acting out a highly dramatized version of her day.

 

Bo: [00:02:17] I think my favorite part of the whole thing is Streamberry. In the episode, it is 100% the villain of the show, and they make it very clear they are talking about Netflix.

 

Ryan: [00:02:32] Oh, absolutely.

 

Bo: [00:02:33] I mean, when the Streamberry logo comes on, it even goes ‘tu-dum.’

 

Ryan: [00:02:37] Same font, same everything.

 

Bo: [00:02:39] So think about it. You have an episode in which Netflix is the villain being streamed on Netflix, while simultaneously people are out in the street picketing Netflix for the exact same behavior that Netflix is portrayed as engaging in on the show. I mean, it’s kind of hard to believe that episode was approved in advance. You know, I feel like somebody would have put the kibosh on it.

 

Ryan: [00:03:09] Oh, for sure. I mean, as you know, we’ve been closely following all the developments of this technology, and how it’s used in “Joan is Awful” is no longer science fiction. You know, it’s already readily available. Right after we released that episode, “Lights, Camera, Algorithm,” Runway released Gen-2, a text-to-video AI generator that’s already incredibly impressive and capable, and it gets exponentially better each week as more and more users create new output.

 

Bo: [00:03:46] We talked about before where this technology could go, but there is now so much to unpack. We really do need another show to dive into this even deeper.

 

Ryan: [00:03:58] That’s right. And that’s why we structured today’s episode more around the legal implications, namely the intersection of generative AI with governmental action, copyright, right of publicity, contracts, and ethics.

 

Bo: [00:04:14] Well, I’ll take the easy one: governmental action. So what’s been done so far? The answer is pretty simple: nothing. Look, there have been many, many calls to federal lawmakers for new legislation addressing AI, but not one single new law has actually been passed so far.

 

Ryan: [00:04:41] Oh, definitely. And it’s interesting that the most headway in this new frontier has actually been made by the AI companies themselves, making voluntary commitments toward self-regulation in hopes of somehow influencing that regulatory environment.

 

Bo: [00:04:58] One interesting proposal I read about that’s been floating around is licensing requirements for AI algorithms, where the government would have oversight very similar to what the FDA has over drugs, for example. Under that proposal, any algorithm could go public, but first it would have to pass a rigorous testing and certification process.

 

Ryan: [00:05:23] Yeah, that’s pretty wild. It’s an interesting thought, but seeing as the US government doesn’t already use those types of powers to regulate existing software and existing social media, it seems like a heavy lift and kind of unlikely that that will happen.

 

Bo: [00:05:40] Well, there’s another reason it seems very unlikely. I mean, while it’s getting better, let’s face it, Congress is still primarily made up of old white dudes who aren’t internationally renowned for their grasp of burgeoning technology.

 

Ryan: [00:05:58] Fair enough.

 

Bo: [00:05:59] I mean, it wasn’t that long ago, if you remember, that Senator Ted Stevens from Alaska very seriously described the Internet as, quote, a series of tubes. So I don’t have a ton of hope of being saved by some ingenious new legislation coming out of Congress. But that having been said, what about existing law? That’s kind of your bailiwick here, Ryan.

 

Ryan: [00:06:27] Okay.

 

Bo: [00:06:27] Existing laws such as copyright, I mean, can’t that be applied to this new medium? I mean, because it is interesting to think of AI, you know, as a creator, but I guess the big question would be if AI creates art, who owns the copyright at that point?

 

Ryan: [00:06:46] That’s a great question. It’s one that we’re getting all the time here at our firm. And as we’ve discussed on this show before, copyright only protects original works of authorship. So in a very public statement, the US Copyright Office has taken the position that most outputs from AI systems aren’t original works since they weren’t created by a human author.

 

Bo: [00:07:10] Well, right. And my understanding is that that statement was in connection with a copyright application denial earlier this year. I think it was a comic book writer.

 

Ryan: [00:07:20] Yeah.

 

Bo: [00:07:21] Yeah, it was. She submitted her work for copyright registration. The storyline was human authored, but the artwork was all AI generated using Midjourney.

 

Ryan: [00:07:32] Absolutely. And the Copyright Office said, Hey, the story is great, we’ll copyright that, no problem. But the images, I don’t think so.

 

Bo: [00:07:40] I mean, that decision is in line with past rulings. I mean, remember the monkey selfie?

 

Ryan: [00:07:47] Ah, yes. So for those who don’t know, there was an attempt to copyright an image that was actually captured by a monkey. A wildlife photographer’s camera was left in the forest, and a monkey walked up to it and clicked the button to take its own picture. And it was cute. It was adorable. But what the US Copyright Office and the federal courts actually said is that copyright protects human authorship, so a monkey can’t own a copyright. And since there was no human author, there was no copyright. The same exact thing actually happened in another case where a Christian organization, divinely inspired, of course, tried to register a song supposedly written by the Holy Spirit.

 

Bo: [00:08:27] Oh, my God. Okay, exactly. I mean, it’s definitely not black and white, because every AI tool differs in its uses and operations. Some might require very extensive human interaction, but others are extremely autonomous and require really none.

 

Ryan: [00:08:49] And that’s where it’s going to get really murky. I mean, the copyrightability is going to depend on the level of human interaction involved.

 

Bo: [00:08:58] Well, precisely. So I guess right now the crux of the matter kind of boils down to: were the traditional elements of authorship conceived by man or machine?

 

Ryan: [00:09:11] Yeah, absolutely. But if you look at this comic book example, the Copyright Office actually granted the application at first because they had no idea it was made by AI. And I remember, we were seeing all these articles come in and I was like, the Copyright Office has approved this AI comic book; this is a new statement of its position on copyright in AI works. But it wasn’t until the author started bragging about the fact that she owned this copyright to AI-generated images that the Copyright Office took notice, looked at that application, and said ‘revoked.’

 

Bo: [00:09:48] Yeah, I’m going to put on my Atticus Finch hat here for a minute. You know, my Clarence Darrow Perry Mason hat. I’m going to give you some amazing expert legal advice.

 

Ryan: [00:09:59] Okay.

 

Bo: [00:09:59] All right, everyone listening, remember: avoid admitting to breaking the law online. It’s a good rule of thumb.

 

Ryan: [00:10:09] Great advice.

 

Bo: [00:10:11] I mean, obviously we aren’t encouraging you to intentionally lie on your copyright application; doing so comes with some very serious civil and criminal penalties. But it brings up a fair point: it’s going to become very difficult for the Copyright Office to know if and when AI was actually used.

 

Ryan: [00:10:34] Yeah, I don’t know how you would, unless some new types of tools are created, or somebody goes online and says, “Hey, look, thank you, Copyright Office. I didn’t actually make this.”

 

Bo: [00:10:45] But even then, think about it: you get those tools, and then new technology just comes along to fool them.

 

Ryan: [00:10:53] Yeah.

 

Bo: [00:10:54] It’s just going to be a never-ending circle.

 

Ryan: [00:10:55] So you’ve got that authorship issue, and I think that is going to be a big battleground for new legislation and new policies by the US Copyright Office. I mean, it’s certainly going to impact the commercial value, especially for studios, if they can’t wield that protection. But beyond that, everybody is also very concerned about the infringement aspect of this. I mean, we’ve got AI tools like deepfakes and content generators, but there’s another issue at hand: the unintentional infringement of existing copyrights.

 

Bo: [00:11:32] Oh yeah, that’s a huge deal right now. So here’s the debate in a nutshell: is training AI models on copyrighted materials, in and of itself, a breach of copyright?

 

Ryan: [00:11:47] It’s a great question. It could honestly go either way. When you look at copyright, you’ve got this bundle of rights under Section 106 of the Copyright Act: the right to reproduce, the right to distribute, to perform publicly, to display publicly, and to create derivative works. That’s where you get all of your rights. I mean, in order for this to be an infringement, you have to tie it to one of those rights. Now, no one really knows what these algorithms are truly trained on, but there are certainly a bunch of copyrighted works that go into that pot. You know, take ChatGPT, for instance. It is trained on a petabyte of data.

 

Bo: [00:12:27] I don’t know what that is, but it sounds kind of dirty.

 

Ryan: [00:12:32] A petabyte. I had actually never heard of that before. Yeah, it does sound kind of weird, like something I would eat at a Greek restaurant. But I had to go look it up and convert it to something I was much more familiar with, of course: a gigabyte. You’ve heard of a gigabyte?

 

Bo: [00:12:47] Sure.

 

Ryan: [00:12:48] A petabyte is 1,000,000 gigabytes.
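For listeners who want to check Ryan's conversion, here is a minimal sketch in Python using decimal (SI) storage units; the "petabyte of training data" figure is the hosts' own, not something we can verify here.

```python
# Decimal (SI) storage units: 1 GB = 10^9 bytes, 1 PB = 10^15 bytes.
GIGABYTE = 10 ** 9
PETABYTE = 10 ** 15

# One petabyte expressed in gigabytes.
gb_per_pb = PETABYTE // GIGABYTE
print(f"1 petabyte = {gb_per_pb:,} gigabytes")  # prints "1 petabyte = 1,000,000 gigabytes"
```

(Binary units differ slightly: 1 pebibyte is 1,048,576 gibibytes, but the million-to-one decimal figure is the one Ryan is using.)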

 

Bo: [00:12:52] Oh, boy.

 

Ryan: [00:12:53] That is what ChatGPT is trained on to create its large language model. But is it legal? So one could argue that AI is just learning. It’s just absorbing information like a human does. And like if I just decided to spend the rest of my life scrolling the internet and reading, that’s not copyright infringement. I’m just kind of absorbing the information that’s out there.

 

Bo: [00:13:18] Exactly.

 

Ryan: [00:13:19] But the counterargument is that in feeding and storing that data, you’re actually making a reproduction of it that might violate copyright. So there are already a bunch of lawsuits popping up, with book publishers especially suing OpenAI, the creators of ChatGPT, alleging that the training itself is an infringement. We’ll have to wait and see, of course, how that all unfolds.

 

Bo: [00:13:43] Well, but correct me if I’m wrong. Right. But even if OpenAI wins those cases and wins that argument, there’s still the tricky matter of sampling, right?

 

Ryan: [00:13:55] Yeah, absolutely.

 

Bo: [00:13:56] I mean, generative AI, my understanding is, blends everything that it’s been trained on. So it could very possibly, and probably completely unknowingly, create something that infringes upon copyrighted material by having actual aspects of the elements it was trained on appear in the new material. Right?

 

Ryan: [00:14:23] And that could happen even if the end user wasn’t even trying to violate anyone’s copyrights. But that example becomes even clearer when you ask the AI to purposely infringe. You know, for example, I can ask ChatGPT to write me a story in the style of J.K. Rowling, using all the Harry Potter characters she wrote, that picks up exactly where that final book ended. Now, if the AI churns out this new Harry Potter novel, who’s to blame? Is it the person who gave the instruction or the AI platform itself? Again, arguments could go either way, but let’s look at it from a collectibility or judgment type of analysis. Unless that end user was making tons of money on this unauthorized sequel, I think J.K. Rowling would much prefer to go after OpenAI directly, because they’re going to have the deep pockets to pay.

 

Bo: [00:15:19] Well, you know what that kind of reminds me of? Cars that drive themselves. Who’s responsible if a self-driving car causes a wreck? Is it the person who said, all right, I want you to drive yourself to this place? Or is it the car company that manufactured it and claimed it had this capability? The fact of the matter is, Congress and the US Copyright Office have their work cut out for them. They are going to have to set very clear rules on these issues, especially since there are currently no bright-line AI rules for intentional copying, misappropriation, distribution, anything.

 

Ryan: [00:16:04] Oh, for sure. But the way we see it here, you know, copyright is only one part of that legal equation. When we talk about AI, the right of publicity is just as important.

 

Bo: [00:16:15] What Ryan’s referring to is that every individual has an inherent right to control their image, especially in the commercial landscape. So while this could, in theory, apply to everyone, it’s especially vital for celebrities and public figures whose image people may want to appropriate to endorse their products, or use in a movie or a commercial or anything like that. When you’re a celebrity or a public figure, your name, face, voice, all those distinct characteristics, they’re not just personal attributes. They are valuable assets, extremely valuable. So picture this: a celebrity’s endorsement, or mere association, can skyrocket the success of a product or project. Think Air Jordans, for example. But it’s not just about the money. It’s about safeguarding the brand, ensuring that no one misuses their image to mislead consumers or tarnish their reputation. I mean, I don’t think Michael Jordan wants his name on, you know, some sneakers that, you know …

 

Ryan: [00:17:30] Payless Brand.

 

Bo: [00:17:31] Or you put them on and it immediately breaks your ankle, you know, I mean, so essentially the right of publicity is that protective barrier and monetization tool all wrapped into one.

 

Ryan: [00:17:45] Really well said. And there are many creators who are genuinely concerned that their image will be used for profit without their knowledge, permission, or compensation. For example, deepfakes. These hyperrealistic AI-generated videos can manipulate or misrepresent a celebrity’s image, and it looks just as good as the original. Now, what is the legal redress there?

 

Bo: [00:18:11] Yeah, and that’s happening constantly already. And what about monetization and consent? If my likeness trains an AI model or is incorporated into its output, how am I compensated for that? What rights do I have to give or deny consent to the use of my likeness?

 

Ryan: [00:18:32] Absolutely. And these are all questions that are being asked during these writers’ and actors’ strikes. And I expect there are going to be many more NIL-type cases that crop up, especially where a video, image, or even song starts gaining traction and earning revenue. Suffice it to say, you should absolutely have full rights and control over when your likeness is used, and you should have a right to sue.

 

Bo: [00:18:56] Yeah, I mean, that’s certainly the hope, and that’s certainly one of our goals here: to make certain that our clients’ name, image, and likeness are protected at all times. There are going to be some big cases coming down the pike that are going to set the tone, set examples for everything to come. In the meantime, if you own an AI company, you have to proactively self-regulate. You have to know these safeguards and these rules are coming, much like Midjourney, which I believe updated its algorithms to prevent images from being generated in the style of living artists. It doesn’t guarantee the tools will be lawsuit-free, but at least it’s a good-faith step in the right direction.

 

Ryan: [00:19:42] Absolutely. Now let’s talk about contracts. I mean, the legal standards of AI will not be developed by legislatures.

 

Bo: [00:19:51] Yeah, it’s safe to say.

 

Ryan: [00:19:52] It will be through carefully drafted and fiercely negotiated contracts.

 

Bo: [00:19:57] And we’re seeing that already.

 

Ryan: [00:19:58] Yeah, absolutely. And that’s really one of the big goals of these strikes right now. And this is our most immediate solution, getting a good contract, especially given how slow that legislative process can be.

 

Bo: [00:20:11] 100%. I mean, in addition to fair and transparent contracts dealing with creatives’ name, image, and likeness, there are going to need to be provisions for every other conceivable use of AI. Let’s paint a picture here. Say a major studio hires an AI company for its scriptwriting software. This software drafts a script that’s hailed as the next big thing. Who’s going to own that copyright? Is it the studio or is it the AI company?

 

Ryan: [00:20:42] No kidding. There are so many questions to ask. Like, is this a joint work between the studio and the AI? Does the studio use work-for-hire agreements, waivers, or licenses, or require the AI company to disclaim all rights? I mean.

 

Bo: [00:21:02] There are going to be obvious cost savings for the studios. I mean, the less they have to pay writers, you know. It’s scary. Thus far the quality is nowhere close, but you can’t assume that’s going to be the case forever.

 

Ryan: [00:21:19] Especially with how good it’s getting, how quickly.

 

Bo: [00:21:21] Exactly. So, you know, it’s very easy to see what’s underpinning the current strike. There have to be controls put in place. And think about it. Let’s say a studio did rely upon an AI-written script. Even that is going to raise so many questions. How are the royalties going to be divided in that situation? What if human writers are then brought in to tweak or edit the content? What formulas are they going to use to calculate that breakdown? Is it going to be time-based, quality-based? It’s a nightmare that needs to be handled upfront, right now, no question.

 

Ryan: [00:22:05] And the obvious question, of course, is what happens when a human writer alleges that the AI borrowed heavily from their previously published script without permission or compensation?

 

Bo: [00:22:18] It’s a whole new world right now, and the possibility of AI being abused and taking away positions, I mean, it is truly terrifying. We certainly are 100% standing behind the writers who are striking.

 

Ryan: [00:22:36] There are so many other considerations surrounding AI and the use of a celebrity’s likeness especially. You know, say a studio decides to create an entire movie featuring an AI-generated actor alongside human actors. And think of the AI actor as a licensed recreation of a major star like Brad Pitt. The performance is done with both compensation and permission, but he never has to set foot on set, never has to read the script, prepare, or even speak a single line.

 

Bo: [00:23:09] I mean, that raises so many questions. I’m sitting here thinking, as an attorney who drafts these agreements on behalf of both the actors and the productions: oh my God, how do you deal with that? Think about it. How are the residuals and royalties going to be managed for an AI actor’s performance? How are the human actors’ contracts going to account for acting alongside an AI entity, especially when you think about billing, the credits, and their screen presence? If an AI actor’s performance is based on the likeness and mannerisms of a real actor, is there a right of publicity or likeness issue?

 

Ryan: [00:23:54] Well, there certainly wouldn’t be if the actor gave that permission. But again, that’s why it’s important to create these fair standards that ask for that consent.

 

Bo: [00:24:04] Well, that really makes me think about credit for behind-the-scenes contributions as well. You know, it’s one thing if you’re the lead actor in reality, but technically there’s a Brad Pitt who’s the star of the movie and now you’re getting second billing. That’d be a pretty tough pill to swallow. But behind the scenes, it’s causing a whole other set of issues.

 

Ryan: [00:24:30] Yeah. Give me an example.

 

Bo: [00:24:31] Well, say a studio uses AI as the director or film editor and relies on the AI’s algorithms to make the best editing choices for a film’s flow or story progression. What happens if the final product isn’t to the studio’s liking? You can’t hold an AI liable the same way you can human directors and editors, right? I mean, the end product is only as good as the prompts it is provided. So who is contractually responsible for the inability to achieve a desired result? If it’s a human editor, you just say to that human editor, go fix it or you’re not going to get paid. Right? That’s how it works. But this is completely different. What do you do, run it through the AI program again? I don’t know the answer to that.

 

Ryan: [00:25:28] Until you get what you like. I mean, it’s tough.

 

Bo: [00:25:32] And what happens if the film gains critical acclaim? You know, who’s going to get the awards? Who’s going to get the accolades?

 

Ryan: [00:25:40] “And the award for Best Film Editing goes to… AI.”

 

Bo: [00:25:46] So, look, you’ve got so many legal issues being raised here, but to me, it’s not the legal issues that cause so much concern. It’s really the ethical issues, like consent, representation, compensation, all of those things. Is it truly ethical for AI to create anything if the result is that a human creator gets replaced?

 

Ryan: [00:26:18] But then you’ve got the other ethical issues of bias and stereotyping. If AI is trained on already biased data sets, which they often are, it can perpetuate harmful biases.

 

Bo: [00:26:36] Well, I’ve read a lot about that, and you’re right. Congress has the Algorithmic Accountability Act, originally proposed to address social media, and these biases almost certainly exist within these AI systems as well. If it’s passed, it would empower the FTC to be a watchdog over these algorithms and regulate them for biases that promote and reinforce harmful stereotypes.

 

Ryan: [00:27:02] Then there’s the debate, I think, of transparency. Should audiences be told that what they’re watching or reading was generated by AI? Actor, writer, and director Justine Bateman has been pioneering this discussion. She’s even established Credo 23, a badge of honor added to a film’s end credits to signal to audiences that it was made completely without generative AI.

 

Bo: [00:27:30] I love that idea. And you know, I get it. You think, well, why are you complaining that it may replace people? That’s been happening in every industry ever since the Industrial Revolution; the assembly line replaced people. But you’re talking about something completely different now. You’re talking about true creativity and art, things that simply can’t be replicated by a computer. It really brings up the age-old debate of authenticity: can AI, regardless of its ultimate sophistication, replicate the soul of human creation?

 

Ryan: [00:28:16] The creative part of what AI is capable of is probably the area we thought generative AI would be least likely to serve. You know, when you thought about AI five, ten years ago, you thought it was going to replace repetitive tasks. I’m not quite sure why generative AI is being so relied upon to create our artistic expression, the things that actually bring us the most joy, you know?

 

Bo: [00:28:45] Yeah. As it becomes more and more ingrained in the entertainment industry, having these discussions and raising these issues is going to become even more crucial.

 

Ryan: [00:28:59] Yeah, I mean, what we’ve laid out today is just the tip of the iceberg.

 

Bo: [00:29:03] Absolutely. Generative AI isn’t just a tool. I mean, it’s poised to be a collaborator. And we need to decide how we’re going to work together in a way that is effective and takes advantage of, you know, the real true benefits that it does offer, without it actually threatening the livelihood of human creatives. So for creators, legal professionals, everyone in between, the future promises to be, if nothing else, very interesting.

 

Ryan: [00:29:40] Couldn’t have said it better myself. Well, that is our show for today. As always, thank you for tuning in. To continue to receive free edge-of-your-seat legal anecdotes, please subscribe to our show and share with at least one friend.

 

Bo: [00:29:52] Until next time. Thanks.

About the Hosts

Bo Bowen

Charles “Bo” Bowen is Savannah’s preeminent corporate and entertainment attorney. Bo’s clients range from dozens of well-known movies and television shows to small local businesses to large multinational corporations. When asked if it’s true he can draft corporate resolutions and partnership agreements in his sleep, Bo cracks a sly smile and responds, “In fairness, there’s really no other way to do it.”

It’s that quick wit that has helped catapult Bo to the top of his profession. Clients love him because he’s confident, fast, and entirely entertaining. Bob Cesca, a national political commentator, writer, and radio host, had hired lawyers all over the country but says he had never met one like Bo. “From the first moment I met him, it felt like we had been lifelong friends. When I reached out to Bo, I was very upset over a legal issue that had been plaguing me for months. He instantly made me laugh, but he also made me feel calm, safe, and protected,” said Bob. “And then he literally picked up his phone and resolved the entire case with one call.”

Bo takes great pride in righting wrongs, no matter the opponent. So lest you believe his ready smile and quick laugh are in any way representative of his skill, a few minutes in the courtroom will quickly disabuse you of that notion. He is a highly skilled and ruthless psychopathic assassin, metaphorically speaking. His fearlessness and success in the courtroom against all foes, no matter how powerful or seemingly invincible, have inspired fierce loyalty from his clients and earned him nicknames such as “giant killer” and “dragon slayer.”

Bo came to the conclusion early in his career that being a lawyer is not much fun, so he started The Bowen Law Group with the modestly-stated ambition of completely changing the way law is practiced. By all accounts, he has succeeded.

When asked how he would describe Bo, Bob Cesca thought for a moment. “Bo combines the swagger and charm of George Clooney with the quick wit of Mark Twain and the legal ability of Perry Mason,” Bob finally responded. “I’ll put it this way: Bo is the lawyer that God would have invented if He had thought that at all a good idea.”

Ryan Schmidt

Originally hailing from New Hampshire, Georgia transplant Ryan Schmidt is an Attorney at The Bowen Law Group. A lawyer passionate about protecting the rights of creatives and business owners, Ryan’s law practice focuses on entertainment and music law, business formation, contract disputes, non-compete litigation, and creditor’s rights. 

Ryan, who toured extensively as a singer/songwriter prior to law school, has been featured on NBC’s “The Voice” and Apple iTunes’ “New Music Page,” and was named “Critics’ Choice” at the Starbucks Music Makers Competition. As a professional musician, he experienced firsthand the cutthroat nature of the business and the restrictive contracts creatives are too often asked to sign. Answering the call to be a fighter for his fellow artists, content creators, and influencers, Ryan knew he needed to pursue a career in law. And so, Ryan attended Belmont University College of Law in Nashville, where he graduated at the top of his class, summa cum laude, after serving as Executive Officer for both Belmont’s Law Review and Federalist Society.

Before moving to Savannah, Ryan clerked for a Nashville-based law firm representing clients in the music industry, fine arts, and digital media. Since joining The Bowen Law Group in 2018, he has represented countless clients in various business and entertainment matters.

For Ryan, being an advocate is not only his duty but also his privilege. As a lawyer, he stands in between what is and what should be. Each day is another opportunity to narrow that gap.