Demystifying Artificial Intelligence

Episode 8 | July 12, 2023 | 00:44:30

The Loop Marketing Podcast

Hosted By

Elise Stieferman

Show Notes

In today's episode, we are joined by Coegi's SVP of Marketing and Innovation, Ryan Green, and our Director of Innovation, Savannah Westbrock, as they discuss and demystify generative artificial intelligence. In this episode, you'll learn: 

- - -

About Coegi: 

Coegi is a performance-driven marketing partner for brands and agencies enabled by a best-in-class technology stack to deliver specialized services across digital strategy, programmatic media buying and integrated social media and influencer campaigns.

Learn how Coegi can work with your brand or agency: https://coegipartners.com/services

Read more on our blog: https://coegipartners.com/resources

Follow @CoegiPartners:

LinkedIn: https://www.linkedin.com/company/coegi-llc/

Facebook: https://www.facebook.com/coegipartners/

Instagram: https://www.instagram.com/coegipartners/


Episode Transcript

Ryan Green: So, welcome to the Loop Marketing Podcast. My name is Ryan Green. I'm the Senior Vice President of Marketing and Innovation at Coegi, and today I have tapped one of my employees, Savannah Westbrock, who is our Innovation Director and the founder of our Innovation Department here at Coegi. Thanks for coming on the podcast. I kind of forced you to, so you didn't have much option <laugh>.

Savannah Westbrock: Yeah, no, I think this is going to be really great. I'm excited to have kind of a looser conversation with you about AI. We have seven years together of bouncing around on topics like this, making kind of loose predictions, critiquing tech that we see in this space. So I think, knowing everything that has so rapidly changed, not only with performance AI that we've been using constantly as programmatic buyers, but also now with Generative AI affecting marketing in so many different ways, it's gonna be a fun conversation.

Ryan: Yeah, greatly looking forward to it, and this has been a long time coming. Coegi has been pretty quiet about Generative AI, and that has been a very conscious decision, knowing how impactful Generative AI can be and how headline-driven the conversation has been. We wanted to be very deliberate and focused in the way that we synthesize all of the information and misinformation about Generative AI, and how people are talking about Artificial Intelligence generally, with a lot of the headline chasing and armchair opinions of the Twittersphere around the topic. It's not that we've ignored it, and it's not that we're not practicing working in those tools, but we wanted to be really cognizant of the advice that we're giving to clients. We weren't gonna be an agency that was gonna spin up an AI division six days after OpenAI came to market like some of our competitors have.
We knew that we had to take a very comprehensive look at the technology, maybe forgoing being the very first tip of the spear, the first mover, to be able to do true due diligence and understand the impact. It's still changing every day, so the analysis that comes out of this podcast today could be different than what we would say 3, 6, or 12 months from now. But there's enough concrete for us to start to put a point of view together, and Savannah, you've really led that research for us. So, maybe a good first topic would be the historical context of AI: how we have been using machine learning and artificial intelligence at Coegi since our inception, and what's different about the conversation today as it relates to Generative AI?

Savannah: Yeah, that's a great place to start, because I think part of why we were a little bit quiet in the beginning was we were looking at all of this sudden marketing coverage of AI and kind of scratching our heads wondering, okay, what is so new to this conversation? Why does it seem like everything is suddenly being called AI? Why are we slapping that label on every new piece of technology? Whether it's truly, genuinely an AI, or if it's some "secret sauce" algorithm, or if it's just a fancy Excel macro, we're calling it AI now. So part of our process has truly been evaluating, like you said, Ryan, trying to be really methodical and understanding: is this something that's truly net new and plays into the developing technologies that we're seeing come out of the past six months? Or is it just kind of marketers doing our marketing role and slapping a fresh coat of paint on some old tech? But even then, part of these tools' benefit is that they can teach themselves.
So, whether it is truly those Generative AI pieces that are capturing that public interest and public attention, like ChatGPT, like Midjourney, or if it's just the marketing platforms that we're using every day starting to get a little bit smarter and a little bit faster with their decision making, we've had to be really diligent, to your point earlier, about knowing what this tech actually is, so we feel confident when we're making those recommendations to clients or when we're using these tools internally.

Ryan: I think because of our place for the past 10 years as a conduit of marketing technology, we've gotten a thicker shield to sales pitches that use buzzwords in general, but certainly ones that purport to have artificial intelligence or machine learning attached to them. We know that a lot of those are black-box, smoke-and-mirrors magic platforms that just suddenly decrease the ROI for a client by 85% just by using one of those magic tools, right? And we know that there's a lot of gamesmanship in those conversations, to the point where I probably block more emails than I read at this point because of the volume of technology companies purporting to use AI in a non-differentiated way. However, the companies that have done it really well have made media buying, and programmatic media buying, feasible. Programmatic wouldn't exist without artificial intelligence. Programmatic wouldn't exist if you couldn't evaluate a million different inventory placements in a second. We've always had to lean into that, and I think that's why programmatic-based media practitioners are really well positioned to understand Generative AI, which, as I'm defining it, is the ability to converse with data sets in natural language.
That's obviously a lot different than building a DSP campaign, but a lot of the underpinning of how we learned to leverage those tools for programmatic buying, I think, does apply to how we look at what's possible and what's not as it pertains to Generative AI. So, that's why we've taken a little bit of a careful approach, because being such practitioners for a decade now in that field gives us, I think, a really great opportunity to think of innovative ways within the Generative AI sandbox to push some really innovative things forward, to create automated workflows, and to eventually add a lot more efficiency to the work that we do as an agency and therefore the work that we do on behalf of our clients.

Savannah: And that's where I would say one of the benefits that comes from having that sort of strong programmatic background, to your point, Ryan, is we can kind of mentally differentiate what we're looking at when we're evaluating an AI tool. AI is one of those really large terms that is applied in several different ways. So, we're looking at something and trying to evaluate how it would best fit into a strategy, or if it's something that just gives efficiency to our internal teams, or if it is something truly innovative and brand new that might warrant a really intentional testing strategy. What we've looked at is: is this something that is a little bit more logic based? Most machine learning algorithms that we talk about when we're looking at a DSP, or a buying platform, or some other sort of marketing technology, oftentimes those are really advanced "if this, then that" type statements. And those are really straightforward for us to say, okay, let's give it this much to test. Let's see if it can really handle us throwing the keys to it, or what amount of human oversight is needed here.
Whereas with these generative tools that are really capturing the headlines and the public attention, it really feels like the floodgates sort of opened about six months ago, and part of what our job has been is to remind clients that, for the most part, these tools are still kind of in a public beta test, right? They're really exciting, there's a ton of potential for the future, but they're not quite ready to fully take over the responsibilities of building a full campaign from start to finish. They're not quite legally safe enough yet to generate your entire creative strategy. There's just a lot of questions where it's encouraging to see the future, but we do have that responsibility to ourselves, and to our clients, and the partners that we work with to say, okay, but let's look at these nine different questions that we have about this tool before we truly do, genuinely, test it the same way we might with a logic-based AI that programmatic has a long history with.

Ryan: That's a really good distinction. The logical, or Boolean, logic that can be used a lot of the time in building audiences and federating those audiences to platforms does have a different look and feel to the guardrails that you can put on Generative AI. It's harder for us to know the source of content that comes from Generative AI. I know Congress has had a lot of questions, and I know that you spent six hours on C-SPAN one day <laugh> when I came into the office, watching congressional hearings. I don't know if that was the first time that you've been on C-SPAN.

Savannah: Thankfully it was only four hours, and the time I had done that previously was the TikTok ban question, so very related to my job.

Ryan: It absolutely was, and I'm glad we had <laugh> somebody watching.
As you were looking at that and developing your own point of view as to how Generative AI can, should, or shouldn't work on behalf of marketers, what are the couple of things that you look at as areas of opportunity and areas of caution when looking at Generative AI specifically?

Savannah: Yeah, the Congressional hearing was truly a wild thing to listen to, because at least on the topics that have been relevant to our industry, it was one of the first times I've ever heard those committee members ask questions that felt really informed and relevant. So, for a part of it, it was a breath of fresh air. We didn't have any of those "Senator, we run ads" moments.

Ryan: "The internet is a series of pipes," <laugh> I believe.

Savannah: Yeah, exactly. The questions being posed seemed very informed and relevant, as I said. And the responses were fairly encouraging. Sam Altman was there, the head of OpenAI. We also had some academics, some AI ethicists, and a lot of their conversation was focused on more far-off implications, but things that could affect the way that these tools develop. So, part of their concern obviously was with privacy. When you are using those tools, when you are prompting them, when you are uploading documents to them and asking them to summarize them: do they learn from that data that you have shared? Do they share it with other tools? If you upload something in ChatGPT, does it become a part of all GPT tools, anything using OpenAI's API? And some of the answers were truly in the realm of, well, if I tell it something, am I giving my competition information that I don't want them to have? We're in an interesting spot with this technology, because I feel like we're kind of in a gold rush and a space race at the same time.
So, part of the reason I say so many of these tools are still kind of half-baked, or in that beta test, is because OpenAI making its tools public last year really started that clock and started that race. Even though folks like Google, Meta, and Microsoft had all been working on artificial intelligence for decades and exploring those types of applications (really, AI has been a part of programming since the fifties and sixties, theoretically), now we're in this spot where ChatGPT got so much immediate worldwide attention and so much excitement. There's really a lot of money to be made in being the first company that gets it right. So, for a lot of these tools that are rolling out, if we're gonna use them in a marketing context, I think it's just important to have that critical eye, that little voice in the back of your head saying: okay, this is an experimental tool, this is half-baked. If it's free, the reason I have access to it is because it's learning from how I use it. I am kind of a volunteer tester of this tool.

Ryan: I know you've used these tools for a little while too. In your actual experience, have you seen things that are scarily accurate, and have you seen clear deficiencies? I'll give you an example. I've asked it to put a recipe together, kitchen-sink type: I have six things left in my refrigerator, can I get a recipe for them? And it wanted me to add eggs to this recipe when they were not part of the ingredients that were listed. Luckily I had an egg and was able to finish making dinner for my family, but I was like, yeah, that's a clear error, right? What things have you seen that have amazed you, and what have you seen that is lacking from an accuracy standpoint?

Savannah: Yeah, that's such a good question, because there's so much we could talk about here. Using your recipe example, I think in general that's when it's useful to keep in mind that there's a bit of a misconception about how these tools actually work.
We project a little bit of meaning onto what it is spitting out, but they're not thinking and answering your question; they're just word associations. I've seen a lot of people refer to those text-based AIs especially as word calculators. So, in the same way that a mathematical calculator can do a lot of stuff and is extraordinarily helpful but can't truly handle things like irrational numbers, with this technology we can expect it to do a lot of really incredible formulaic work. We can expect it to be able to synthesize recipes from across its training data and say, okay, egg is normally paired with bacon or pancake, or something along those lines. When I have asked it more challenging questions, I've still been really impressed with a lot of the information it spits back out, but it is pretty amateurish. So, one example: we prompted it to make a media plan. We got a question from one of our clients of, hey, how would you recommend we approach this goal? What would we want to do here? We plugged it into ChatGPT, using the exact sort of language, kind of spiffed up to where we weren't revealing anything proprietary to it. And what it answered, when we were looking at it, was solid. None of it was wrong. It was just a little sophomoric. It looked a lot like what you would see from a Marketing 1000 class project.

Ryan: And it's designed that way on purpose, to get to, not the very lowest common denominator, but to a ninth-grade reading level and not a collegiate reading level, right? Or basic-to-intermediate thinking and not necessarily advanced. I think that's supposed to be a feature at the moment of where they're trying to put guardrails on the technology. Would you agree with that?

Savannah: Well, it's not so much a guardrail, by my understanding, as much as it is just how the tech actually works. So, I really appreciated how the Washington Post's explainer laid this out.
In terms of how these tools "actually think," they know that the sentence "the cat goes to the litter box" makes more sense than "the dog goes to the litter box." And similarly, they know that "the dog goes to the penalty box" is probably not a sentence that makes sense to spit out. But being human beings, we know there may be some crazy, wild thing where a dog somehow gets on the ice and has to get put in a penalty box, right? So, it's gonna take a little bit of time for any sort of nuance or wild, wacky things to actually make sense, because they've been explicitly trained, to your point, to look for the most probabilistically likely group of words to return. And since it's trained on this pile of information, whether it's websites that are hosted on Google's, whatever its hosting source is called, forgive me, or whether it's what has actually been uploaded to it, it is looking for those commonalities and those denominators, because that's just what it's trained to do.

Ryan: So, I'd like to get into a couple more marketing-specific questions that seem to come up. I'm sure you'll know a couple of them. One is: is Generative AI going to take the job of copywriters? Do you think that if you were a copywriter, you should be scared for the future of your job?

Savannah: I think if a brand can use the current state of this technology to replace its copywriting, it's probably not producing copy that's really worth reading or that is going to resonate with its audiences. I've got a good example of this, actually. McDonald's in Brazil did a really interesting outdoor ad pretty quickly after ChatGPT was released, where they just asked, "What is the most iconic burger in the world?" And the answer was the Big Mac. And all they did was take the response from ChatGPT. They kind of color-coded the paragraph to where it sort of looked like a burger. It was very cute.
But the more impactful ad was the one Burger King then took out right next to it, where they just asked, "and which is the biggest?" <laugh> And then did the exact same art direction. And obviously, we know using these tools, if you plug the phrase "which is the biggest" into ChatGPT right now, it doesn't have a clue what you're talking about. There's not really any context for it. So, we know that that was a human idea; that was human copywriting used to complement the larger tool. And that is the type of creativity I think we're more likely to see moving forward, rather than this sort of expectation that these tools are gonna get so creative that they replace genuine, responsive, reactive, personable creativity.

Ryan: That was kind of a softball question to you, but I think it goes back to the responses that you see come out of Generative AI tools being pretty sophomoric in their complexity. So, therefore, it's probably not the most creative thing that is going to come out. However, the volume of vanilla content that you're able to get, I think, especially through a personal brand lens, could be helpful in having a baseline for being able to answer certain questions. I could see it from an SEO perspective. SEO has been gamed for a long time anyway. You could answer some key questions that the internet may have about your company, or FAQs, things of that nature, at least using some of those initial responses to give you fodder for already boring content that's in an FAQ, for instance. So, I could still see some uses of it. But certainly from the creative agency perspective, I have not seen particularly creative output, and there's also the inability to situate the research of where some facts come from, knowing that there are, not misinformation, but certainly non-accurate facts that come through. There's still a long way to go before this is a viable replacement for a researcher or a copywriter, in my opinion.
Savannah: Well, I think it's not wrong to expect these tools long term to revolutionize the workplace. There are AI evangelists, there are AI catastrophists, and like most things, I tend to land in that middle. The voices that I trust are the ones saying: I'm really excited for the potential that this can bring, but we're going to have to set those guidelines. We're gonna have to recognize that these are new tools, and like every new tech revolution, there's gonna be some time needed before we really see what truly changes the world and what falls by the wayside. So, when I get that AI anxiety that I think a lot of people feel when they read those pieces saying "Skynet's here," "the Terminator's coming," I look back at, okay, email revolutionized the workplace. Having the Microsoft Office suite, and just word processors in general, was a massive tech revolution that changed the workplace forever, and it changed what some jobs look like. But where I have landed after doing all of this research is thinking we need to be comparing this less to a massive new paradigm shift and more to just the way that tools have evolved over time naturally, and how some of them have become commonplace and others we've seen sort of fizzle out.

Ryan: So, speaking of guardrails, how have you advised organizations, including our own, on how to bring AI tools into the workplace for testing, for the greater company's understanding? And what advice would you have on the types of people within an organization that need to have buy-in, who needs to be consulted, when considering more formally bringing Generative AI tools into the workplace?

Savannah: It's been a tricky conversation to navigate, because so many of these tools are public. So, we have to assume at our organization that anyone can make a ChatGPT account and start playing with it.
So, I worked with our IT department to start a little bit of an exploratory task force: okay, across our company, across different roles, we need to find ways that we can lean into this technology a little bit and encourage some experimentation, but have those conversations about our responsibilities, not only to our own staff, but also to our clients. So, for example, some of the questions we're exploring are: do we have a specific AI use agreement in our handbook that says things like, I will not upload my client's private contract information into a public tool? Some of those are low-hanging fruit, and some of these things we truly don't have answers for yet. But the purpose of setting up a very intentional, methodical task force up top is so you're not just sprinting at the pace that these tools are being released, but rather you're looking at the technology as a whole and trying to anticipate how my role will change, how my coworkers' roles will change, how our engagement and interaction with our clients will change, and what we should be doing, at least internally, before broader society and the broader workplace catch up to the lightning-fast speed that the tech is moving at right now.

Ryan: That deliberateness is very much appreciated and, I think, is the way that we would advise organizations to look at bringing this in, because there's risk and reward, and there's certainly a decent amount of risk. In particular, Coegi has a lot of brand clients that are in sensitive categories. So already, we have tools in place to regulate the flow of data and information in programmatic technology to ensure privacy and to ensure that we're abreast of regulatory guidelines and policies globally. So in that same vein, we wanted to do a similar exercise with Generative AI, and we are just starting to bring that framework into the workplace so that we can have a really clear set of guidelines to operate from.
Because there is a lot of potential to improve efficiency and to really find some compelling uses, of content in particular, that Generative AI provides for us. Another question I'd have, just for you personally: where have you seen ChatGPT and Bard being helpful in your work life? What things have excited you? Where do you see some of the easiest opportunities to improve your workflow and efficiencies with Generative AI tools?

Savannah: Yeah, I have gotten so much immediate benefit from the way that large language models have improved captioning and transcribing. I think that's one of their strongest areas. I am someone who typically does use captions when I'm in meetings, especially when I've got multiple people talking at the same time. It just really helps me process what's going on. I've used them for years, and I always had a two-second lag when I was listening to other people speak. Some of that lag has started to go away. They're just getting smarter and faster. But additionally, we were chatting about this: the difference between really listening and note taking, and that pressure that people feel sometimes, especially when you're at a conference or a meeting that's really critical. You wanna make sure you capture every word, but at some point, you kind of tune out of the actual conversation and you're just acting like a scribe, or a college student at a lecture who isn't really participating. So, looking at some of the transcription tools, even the ones that can be added to webinars and shared afterwards with the full group, I think has really helped us stay present in those meetings. And it's helped us with that efficiency too, because you don't feel that pressure to say, whoa, whoa, whoa. Wait, wait, go back. I missed what you said.
We know that we'll have that record to look back on, and it keeps us accountable to follow up on the points from those brainstorming meetings that are so easy to forget when you drive home.

Ryan: It's very hard to be both an active listener and a note taker at the same time, and a big part of communication is active listening. And that's not just listening to the words, but to the body language and the facial expressions of the people in the room. So, having that automated, in particular in meetings where you know that there are a variety of communication styles: if there's a big, boisterous, big thinker in the room next to somebody who's purely analytical, I've found being able to go back to my notes has been very valuable. So, I think it's something that allows us to be much more present, especially with video conferencing as more of a norm than an exception. That's certainly part of it too.

Savannah: What about for you? You've done a really great job of keeping this conversation moving, but I feel like we kind of slipped into you interviewing me at this point. What are you excited about, Ryan?

Ryan: I actually am a little more bullish on the possibility of content being produced faster, and hopefully with more access to research in particular. It can take a long time wading through traditional Google search to find answers. There was a really interesting example at the Forrester conference that we were just at. Their CEO said that he was on an airplane, and the captain announced that there was going to be a delay because they needed to change the tires. He had never realized that you needed to change the tires on an airplane. So he quickly went into ChatGPT and asked, I'm on an Airbus A321, how long will it take to change the tires? And it said, on average it'll take two hours. That was long enough that he was going to miss his connecting flight.
So he got on another flight; he had his assistant book it. As he was getting off the plane, the captain came on and said it's gonna be about two hours until we're able to leave. But he already knew, because he was able to get to that answer more quickly. I have a personal challenge for myself: I'm not going to use Google search for the next three months. I'm only gonna use Bard and ChatGPT, just to really immerse myself in what this new paradigm is going to look like. I don't think that means Google's suddenly going away or their business model's upended, but I do think the way that we have interacted with the search engine for the past 20 years is going to be different, as will the way that we prompt it. And all of us, instead of becoming creators, I think, are ultimately going to become better editors. I think that the journalistic mentality of editing and fact checking is gonna be even more critical and important. Especially as, and this isn't something I'm excited about, this is a fear that I have, it's gonna be a lot easier to automatically mimic human behavior as far as made-for-advertising websites. The ANA had a big report on the percentage of spend going to made-for-advertising websites. Coegi has sniffed those out for a long time; we have 60,000 websites on our blacklist. It's gonna be a lot easier to mimic those sites using natural language processing. So that's a little scary, and I think it's gonna be harder to sniff out what's real and what's not on your social news feeds, et cetera. So, it's gonna be important for everybody to be their own journalist, their own fact checker, and to start to take that mentality to their interactions on the web in general.

Savannah: And that's where I'll return to a question you asked me earlier about advice I would give. My two biggest pieces, which I have been sharing with our teams and with our clients over and over, are related to misinformation and the need for fact checking.
The first is the simple fact that most of these public tools' knowledge, if you will, cuts off in 2021. That is when their training data stops. They're still learning based on what users are uploading to them, but think about how much the world has changed already since 2021. I know it wasn't that long ago, but we have so many clients who are in that healthcare and pharma space, and their technology advances quite rapidly as well. If we prompted GPT for a history timeline of a specific medication, for example, or a specific disease state, it's not going to have the most accurate information, and it's not going to have the most up-to-date information. Whatever it has is whatever it was trained on. So, you do have to do your due diligence and remember, these tools don't have access to the full, full internet and all of the gated knowledge that might exist in a medical journal, for example. Which leads me to my second piece of advice, which is to keep in mind that they make stuff up. And even that is personifying them a little too much; they're not doing it intentionally. But if anyone listening isn't familiar with this, look up the term "AI hallucinations" and you'll get a good explainer of how, just because the technology is really functioning as a word calculator, it's not going to return an answer that says, I don't know the answer to your question. They have a tendency to truly just craft things out of thin air. So there's a case of an attorney who wrote a briefing for a judge using ChatGPT, and he didn't triple-check all of the cases that it had cited, and he found out that most of them were fake, even though they sounded really legitimate. It was like, Gonzalez versus the State of Washington concluded that blah, blah, blah. And it was the judge who did that due diligence of fact checking and looking through, and who ended up throwing out the full case.
So, the stakes are often a little bit lower in marketing, but it's still really critical to keep in mind, when you are using these tools for that research, that you can't solely use these tools for research. They should really be an ideation tool or a starting place.

Ryan: Yeah, that's incredibly important. The research part and the fact checking still need to be done the old-school way. You need to go to the sources and look those things up. It's the same as not just trusting something because it's on the Wikipedia page, right? It's a long way from replacing a research analyst, that's for sure. But where it has more promise, in my opinion, would be maybe in organic social, for headline generation. Or looking at paid search: volume of headlines, descriptions, and keywords. And as I said earlier, also SEO, and being able to answer some of those base questions that the internet may be asking around topics that you wanna have authority in. Those are places where I do think you should be looking at Generative AI to help put at least the foundation and framework into an argument. It shouldn't be the whole thing, though. And if you're as silly as the attorney who's copying and pasting something as important as a court case briefing that you're putting in front of a judge, you shouldn't have that job. You're skipping many steps, and Generative AI may never get to a place where you're gonna be able to have full trust in what's behind it. But it is a good way of modeling the base of the internet, right? It's almost, and this is probably why Reddit has a fight with Generative AI right now over scraping it, a way of getting that baseline of how the internet feels about something.

Savannah: Well, we haven't talked very much about image generation yet, because that is an area that is extremely under legal scrutiny right now.
So our sort of blanket opinion has been not to use any sort of open AI product for image generation. But still, I've found, in terms of how we can use it internally, it's especially helpful for that ideation and just collaboration, especially for folks like us who don't have a big creative background. I know you never wanna be the person talking to a creative, asking him to "make it pop," right? Or giving that other sort of feedback. So instead of that, I have found it to be really helpful to say, okay, this is this sort of aesthetic, this is the vibe, this is where we expect the client will be happy if they see this type of imagery. Or, you know, I can't even draw a stick figure. So if I wanted to put a storyboard or something in a pitch, it would be hopeless for me to try to do it myself. But it would also be kind of a waste of time to tap internal people to do it for me if it's just for a really high-level exploratory conversation. You can use these tools for things like that without getting into hot water. I think it's mostly just our general point of caution to have that human oversight before you truly put a budget to something and try to set it live.

Ryan: Yeah, I agree. At least in my limited use of the Midjourneys of the world, the prompts to get something to look even okay are really complicated. And the prompts for Generative AI, at least on the OpenAI and Bard side, are much more straightforward. So Midjourney has a little bit of work to do on the prompt side to make it more accessible.

Savannah: Well, hey, maybe we'll be back here in six months talking about how Gen AI has developed since we last spoke.

Ryan: I bet we will. So, thanks again, Savannah, for your time, and for the seven years of partnership that we've had at Coegi too. It means the world to me. You've been a great friend, and it's been awesome to see your development as a professional. So here's to seven more years of working at Coegi.
Savannah: Seven more years, with the help of AI copilots. Let's go.

Ryan: Yes, let's do it. Thank you everybody for listening, and have a great day.
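The "word calculator" idea discussed in the episode can be illustrated with a toy sketch: a tiny bigram model that, given a word, returns whichever word most often followed it in its training text. This is a deliberately simplified illustration (real large language models use neural networks over far larger contexts and vocabularies), not how any specific product works; the corpus and function names here are invented for the example.

```python
from collections import Counter, defaultdict

# A tiny stand-in for training data, echoing the "litter box" example.
corpus = (
    "the cat goes to the litter box . "
    "the cat sleeps in the litter box . "
    "the dog goes to the park . "
    "the dog runs to the park ."
).split()

# Count how often each word follows each preceding word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_next(word: str) -> str:
    """Return the word that most frequently follows `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(most_likely_next("litter"))  # "box" -- the only word ever seen after it
print(most_likely_next("goes"))    # "to"
```

A model like this can only ever return what is probable given its training text, which is why, at vastly larger scale, such tools will confidently suggest an egg for a recipe that never mentioned one, or produce a plausible-sounding but fabricated citation: there is no "I don't know" in the vocabulary, only the most likely next words.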
