
Sidecar Sync
Welcome to Sidecar Sync: Your Weekly Dose of Innovation for Associations. Hosted by Amith Nagarajan and Mallory Mejias, this podcast is your definitive source for the latest news, insights, and trends in the association world with a special emphasis on Artificial Intelligence (AI) and its pivotal role in shaping the future. Each week, we delve into the most pressing topics, spotlighting the transformative role of emerging technologies and their profound impact on associations. With a commitment to cutting through the noise, Sidecar Sync offers listeners clear, informed discussions, expert perspectives, and a deep dive into the challenges and opportunities facing associations today. Whether you're an association professional, tech enthusiast, or just keen on staying updated, Sidecar Sync ensures you're always ahead of the curve. Join us for enlightening conversations and a fresh take on the ever-evolving world of associations.
Sidecar Sync
Generative Video Meets Deep Think at Google I/O & Claude 4 Finds Its Voice | 85
In this electric episode of Sidecar Sync, Amith Nagarajan and Mallory Mejias unpack the latest AI announcements from Google I/O and the highly anticipated Claude 4 release from Anthropic. They explore what Google's Deep Think means for AI reasoning, debate the creative and ethical implications of video generation tools like Veo 3 and Flow, and rave about Claude's new voice mode. Plus, they reflect on the seismic shift AI is bringing to content, coding, and SEO—alongside some AC/DC-fueled Chicago memories and a preview of the upcoming digitalNow conference.
"If you dislike change, you're going to dislike irrelevance even more." - Erik Shinseki
https://shorturl.at/39XvA
🤖 Join the AI Mastermind: https://sidecar.ai/association-ai-mastermind
💡 Find out more about Sidecar’s CESSE Partnership - https://shorturl.at/LpEYb
🔎 Check out Sidecar's AI Learning Hub and get your Association AI Professional (AAiP) certification:
https://learn.sidecar.ai
📕 Download ‘Ascend 2nd Edition: Unlocking the Power of AI for Associations’ for FREE
https://sidecar.ai/ai
🎉 Thank you to our sponsor https://memberjunction.com/
🛠 AI Tools and Resources Mentioned in This Episode:
Claude 4 Opus ➡ https://www.anthropic.com/news/claude-4-opus
Claude Voice Mode ➡ https://www.anthropic.com/news/claude-voice-mode
Gemini 2.5 Pro ➡ https://deepmind.google/technologies/gemini
Gemini Ultra ➡ https://deepmind.google/technologies/gemini
Gemini Flash ➡ https://deepmind.google/technologies/gemini
Google Veo 3 ➡ https://deepmind.google/technologies/veo
Google Imagen 4 ➡ https://deepmind.google/technologies/image
Flow by Google ➡ https://deepmind.google/technologies/flow
📅 Find out more about digitalNow 2025 and register now:
https://digitalnow.sidecar.ai/
🎉 More from Today’s Sponsors:
Member Junction https://memberjunction.com/
🚀 Sidecar on LinkedIn
https://www.linkedin.com/company/sidecar-global/
👍 Like & Subscribe!
https://x.com/sidecarglobal
https://www.youtube.com/@SidecarSync
https://sidecar.ai/
Amith Nagarajan is the Chairman of Blue Cypress https://BlueCypress.io, a family of purpose-driven companies and proud practitioners of Conscious Capitalism. The Blue Cypress companies focus on helping associations, non-profits, and other purpose-driven organizations achieve long-term success. Amith is also an active early-stage investor in B2B SaaS companies. He’s had the good fortune of nearly three decades of success as an entrepreneur and enjoys helping others in their journey.
📣 Follow Amith:
https://linkedin.com/amithnagarajan
Mallory Mejias is the Manager at Sidecar, and she's passionate about creating opportunities for association professionals to learn, grow, and better serve their members using artificial intelligence. She enjoys blending creativity and innovation to produce fresh, meaningful content for the association space.
📣 Follow Mallory:
https://linkedin.com/mallorymejias
The way to think of these AI tools is they're part of your team and they can help you brainstorm. They can help you come up with new ideas. They can help come up with creative alternatives to something you're working on. So give it a shot. The best way to learn AI is to play with AI. Welcome to Sidecar Sync, your weekly dose of innovation. If you're looking for the latest news, insights and developments in the association world, especially those driven by artificial intelligence, you're in the right place. We cut through the noise to bring you the most relevant updates, with a keen focus on how AI and other emerging technologies are shaping the future. No fluff, just facts and informed discussions. I'm Amith Nagarajan, chairman of Blue Cypress, and I'm your host. Greetings Sidecar Sync listeners and viewers. Welcome to this episode. We are here to talk about an interesting set of topics, as usual, at the intersection of all things associations and the world of AI. My name is Amith Nagarajan.
Speaker 2:And my name is Mallory Mejias.
Speaker 1:And we're your hosts and, as always, we've prepared a couple of topics for you that we think are really interesting at this intersection of where the world of associations is going and needs to go and, of course, what's happening in the world of artificial intelligence, which is always moving and going really, really fast. It's a crazy time. How are you doing today, Mallory?
Speaker 2:I'm doing pretty well, Amith. I think today was a fun example. You sent me a Teams message right before we started recording, with an announcement about Claude Voice that had just been released maybe what, 15 hours ago, 18 hours ago, that we had to hurry up and add into today's script. So I think you're right, we're seeing AI move faster and faster. It can be tough to keep up with, but at least we have this weekly meeting to kind of hold ourselves accountable.
Speaker 1:Well, it reminds me of that old quote. I don't remember who it's from. We'll have to find out if there's a definitive attribution of this particular quote and link it in the pod notes. But something along the lines of if you don't like change, you'll like obsolescence less, and so that's certainly an appropriate thing to remember when we're dealing with the pace of change. And for everyone, you know, I find it overwhelming as well. I mean, I kind of enjoy it on the one hand, because it's super fun. It's kind of like Christmas every day, where you get these new toys to play with.
Speaker 1:But the flip side of it is it's like how do you keep up with these things, you know, and especially if a big part of your business is staying on top of AI and you find out, well, we didn't know about Claude's audio mode. How can we be so far behind? We're 15 hours behind, Mallory. I mean, we're just doing a horrible, horrible job. So I won't steal our thunder from that, but I am excited about audio mode. Speaking of thunder, this weekend I was up in Chicago and got to check out AC/DC, and my teenage daughter was with me. I got to experience AC/DC, one of my favorite all-time bands, live for the first time, and she got to see the band play, which was pretty fun, and seeing her sing along to Thunderstruck was quite fun, you know.
Speaker 2:Wait, that's awesome, I had no idea. Were you just in Chicago for fun, for business?
Speaker 1:No, we were. So every year I take each of my kids on a one-on-one trip, so I'll take them just for like a long weekend. Usually it's like leaving a Friday morning and coming back on a Sunday night, and I let the kids pick where they're going. My son always picks going skiing, so we kind of do the same thing and just ski really hard for three days and come home, and that's always super fun. And then my daughter picks, you know, usually a city. She's a city person.
Speaker 1:So we've been to New York, we've been to Chicago, we've been to Orlando, we've been to a bunch of places, San Francisco one time. Actually, this is our second trip to Chicago that we've done together. The first one we did was eight or nine years ago and she was really small at the time. What was really cool back then was that Chicago, which does a lot of really cool things in terms of street art, had a bunch of dog statues which I think were there to honor the canine police units. Something along those lines is what I recall. They're not generally on display across the city as they were at that point in time, you know, several years ago, and both my kid and I are dog nuts, so seeing those dog statues all over the place was awesome.
Speaker 1:But it happened to be that we were checking into the Sheraton Hotel downtown and there was one of those dog statues right there, like, greeting us as we got out of the cab, you know, coming into the hotel. So that was a cool way to start the trip, and then we got to see AC/DC live on Saturday night. So it was awesome. We had a great time.
Speaker 2:That's really awesome. I love the idea of doing a special trip with each of your kids solo each year. That's really special. I remember one year too, I think y'all went to San Diego or maybe it was like San Francisco. That was a few years ago.
Speaker 1:Yeah, we did a California trip, and that's where I'm from originally, and we got to go to San Francisco and we got to do a bunch of cool stuff, and I showed her where I grew up and stuff, which was fun too. But yeah, I highly recommend it if you have kids and if you can get away, even if it's somewhere, you know, a short driving distance. Just spend a day, two days. It's interesting the kind of stuff, especially in the teen years. I've got a rising sophomore and a rising senior in high school and you know you don't get a whole ton of time with them one-on-one, so it's cool to just break away and spend a little bit of solo time with each kid. And you know, if you have a family with like six or seven children, I guess that's a lot harder. But I've only got two, so it's pretty straightforward.
Speaker 2:It works out well for you. Yeah, well, I love Chicago. It's a great city. If only we knew of another event in Chicago that was coming up this year that maybe our listeners could go to. Any ideas, Amith?
Speaker 1:Yeah, you know, and it's pretty awesome. I think it will compare very well to AC/DC in terms of how cool it is. We are going to be hosting digitalNow in Chicago November 2nd through 5th at the beautiful Loews Hotel. I was actually next door to the Loews Hotel when I was there this weekend, and the Loews Hotel is spectacular. We have some really fun events in that neighborhood as well that people can walk to, so it's going to be a great event.
Speaker 1:We'll be learning deeply about all the things that are happening in the world of AI and emerging technology in general and how they apply to the world of associations, as we always do each year in the fall when we host our digitalNow conference, which we've been doing for quite a number of years now. It's really a touch point in the year where we can look at what's happened since the last digitalNow, reflect, think deeply about the future and really build relationships with a wonderful community. We expect to have in the neighborhood of about 300 people there this year, which should be the biggest digitalNow ever. Most importantly, the quality of the community coming together, the care that they have for advancing their organizations and helping their colleagues advance across organizational boundaries, is really what gives digitalNow its energy. It happens to be that we're talking about a lot of AI topics, but it's really about the community of these forward-looking practitioners. We think it's a pretty unique community and a unique event, so I definitely encourage folks to consider putting that on the calendar. Chicago, November 2nd through 5th.
Speaker 2:Absolutely, and we're starting to announce some keynote speakers as well. Amith, you're on the docket. I'm assuming you'll be delivering the first keynote, kicking off the event.
Speaker 1:That's usually my habit, you know. I'll get up there and start talking about whatever's on my mind.
Speaker 1:And it's pretty much just like that. But yeah, I'll be opening it up and sharing some general thoughts on the broader arc of AI and, at that point in time, who knows what we'll be covering? Because, you know, in all honesty, I do prepare. I prepare keynotes well in advance on the one hand, but I really am tuning them up until literally the night before, usually. It's just so hard to prepare for delivering a message on anything related to AI if you're not that dynamic with it. It's not so much that the tools and technologies are changing so much, which they are, but it's more about how do you get people on board with the concept of exponential change, and the more up-to-date and relevant the examples are, the more effectively I find you can bring an audience along.
Speaker 2:100%. I feel like if you had prepared a keynote for today, and we're recording this in late May, right after Memorial Day, but had prepared it in January, it would be obsolete. There would be no point in presenting that keynote on AI. So I think you almost have to have a framework or structure ahead of time, but keep tweaking almost up until, what, 15 hours before.
Speaker 1:Yeah, totally.
Speaker 2:All right. Today we've got an exciting lineup of topics. We're going to be talking about Google's I/O conference, which they recently held, and some of the updates and releases that came out of that, and then we'll also be talking about the Claude 4 release, which I'm personally very excited about because Claude is my favorite model. I'll go ahead and put that out there. I've said it a few times, but that's the one I go to for the most part. But first we're starting off with the Google I/O conference, which is their annual developer conference, typically held in May in Mountain View, California. It's the company's flagship event for unveiling the latest advancements in technology. Google I/O 2025 was held in May and, of course, was dominated by artificial intelligence advancements, with a particular focus on infusing AI into nearly every product and service. So I'm going to cover some of the important releases and updates from this year's conference. First, focusing on the Gemini AI platform: Gemini 2.5 Pro, which we've covered on the pod, received a significant upgrade with the introduction of Deep Think, an experimental enhanced reasoning mode. Deep Think allows the model to consider multiple answers and reason in parallel. We've seen something similar come out of OpenAI and Anthropic as well. They also released Gemini Ultra, a new top-tier subscription at about $250 a month. It offers the highest level of access to Google's AI tools, including Veo 3, Flow, Deep Think and expanded limits on platforms like NotebookLM and Whisk. And then Gemini 2.5 Flash, the lighter-weight model, is now available to all users via the Gemini app. Moving on to search and Workspace: Google's AI Mode, a chat-like search experience powered by Gemini, is now available to all users in the US. It introduces features like in-depth search, chart creation for financial and sports queries, and the ability to shop directly through AI Mode. Gemini can now generate personalized smart replies in Gmail by drawing on a user's email history, with this feature rolling out to paying subscribers this summer. And then generative AI is being integrated into Google Workspace apps, Maps and Chrome, making these tools more intelligent and more personalized. Maybe the most exciting part for me was looking at the generative AI model releases and updates.
Speaker 2:The latest version of Google's text-to-image generator, Imagen 4, improves text rendering and supports exporting images in multiple formats. I tested this out right before the pod. The speed was incredible. I gave it a really long prompt; I've been using those types of prompts for ChatGPT's 4o image generator, and ChatGPT tends to take like 30 seconds to a minute to generate the image. This generated the images near instantaneously, which was incredible, but the text was not as good as ChatGPT's. It got some of the words right, some of the words not so right. So I'll say I'm sticking with ChatGPT for now, but I wanted to share the speed. Very impressive.
Speaker 2:They also released the next-generation video generator, Veo 3, which produces synchronized video and audio, including sound effects, background noise and dialogue. I did a quick experiment with Veo 3. I shared this with you yesterday, Amith. I want to insert a quick clip here. If you saw that on YouTube, you saw how impressive the details were in the video. It's only eight seconds for now because it's still in preview, but I will say it's pretty impressive for what I gave it in a single prompt. My prompt was essentially: act as a member, or pretend we're seeing a member, of the Association of Really Awesome Nurses at the annual meeting giving a testimonial directly to camera about why being in that association and attending that event is so important. So for one prompt, quite impressive. And I'm sure you all have perhaps seen the video making the rounds on social media. It's like a car expo hall event and you almost can't, you really can't, tell that it's AI generated. They also released Flow, which is a new application that uses Veo, Imagen and Gemini to generate short AI videos from text or image prompts. It includes scene-building tools for creating longer AI-generated films.
Speaker 2:Something interesting I had not heard about: Project Starline is Google's 3D video chat booth, and it's evolving into Google Beam. It's an HP-branded device that uses a light field display and six cameras to create a 3D video chat experience targeting enterprise customers. Pretty neat. Maybe we'll have a Google Beam at digitalNow one day. We're also seeing new APIs and sample apps like Androidify showcase how generative AI can transform user experience and app development workflows. We didn't see any new hardware come out of Google I/O 2025, like we had in the past. They did release an important update, though: they're beginning to allow large language models to access personal data for more tailored experiences, starting with Gmail, like I mentioned, and expanding to other products soon. So, Amith, a lot, a lot, a lot to unpack here. Of all of the updates that I just covered, what do you think is most notable or most exciting?
Speaker 1:I think this is an interesting point to look at Google's overall development arc in the last couple of years, because you know you covered a lot of ground there. So clearly Google has been, let's just say, very busy. We talked about Google and, at the time, their lack of a contemporary AI offering back in the early days of ChatGPT. We talked about how Google had, you know, issued this internal announcement that they had to focus their resources and it was, you know, essentially an existential threat to their business, which it was and is if they weren't in the race. But clearly, what they've done here over the last couple of years is not only caught up, but I would argue that they're leading the pack in many areas of artificial intelligence, which is kind of natural for these guys. You know, Google, you have to remember, got started with search, which actually was a form of AI in a very early style. These guys have been deep, deep, deep in computer science and AI for a very long time. The transformer architecture which all of this stuff is based on was invented in their lab years ago. But they, you know, didn't commercialize it quickly or as extensively as others did, and so the point I'm making, before we get into the specifics, is if you find that you are behind on AI or something else like that, it doesn't mean that you're lost. It means that you need to prioritize. It means that you need to cut out the things that you don't need to do and focus on the things that are really critical to your future. And associations are often led by groupthink and committees and kind of an everybody-gets-a-trophy mindset in a committee where nobody wants to kill projects that they personally felt were important. But these are the times when you have to lead with conviction and you have to be willing to narrow your lens and scope down to just a handful of activities that you're going to go crush it with, and I think Google has done a really good job of that. So I wanted to give them some props but also relate that back to the world of associations, who are often juggling far too many priorities, and that's where you have to make some tough decisions. The difficult conversation in the strategy room isn't how to add ideas to the mix. It's how to prune ideas and how to defer ideas and, in some cases, how to kill ideas off. So I think Google's done a good job in focusing their resources on AI generally.
Speaker 1:Coming back to your question, Mallory, I think what they talked about with Deep Think, which is, you know, one of like 50 announcements, is worth unpacking just a tiny bit. Deep Think goes beyond the idea of extended reasoning in Claude or the longer form of reasoning that's in OpenAI's o3 and o4 models, which are kind of single-threaded in a sense. With those models, when you talk about either test-time compute or inference-time compute, you're essentially giving the model more time to think and, like our brains, they'll start exploring, typically one solution at a time, then the next solution, and then they'll try to find out, well, what's the best solution, let me break the problem down into chunks and solve it. We've covered that a lot on this pod and in our content elsewhere.
Speaker 1:What Deep Think does that's interesting is consider multiple possible answers in parallel. Essentially, imagine that reasoning process we just discussed, but happening three, five, 10, 50 times in parallel, and then pulling back the best answer based upon that combined reasoning.
Speaker 1:It's like saying, hey, instead of having one really smart PhD-level person work for you on a problem, let's spawn a room of 30 such PhDs, or 50 or 100 or 1,000, have them all work on it in parallel and then bring back the best ideas and combine them. That has a lot of promise. Obviously, it's computationally resource-intensive, but I think it holds a lot of promise. We ourselves here at Blue Cypress, not nearly at the level of what Google does, obviously, but in our own small way, are experimenting with the same concept with some of our AI agents, where, rather than solving a particular user problem one piece at a time, we're actually generating multiple possible responses and then using a supervisor AI model to pick the best answer. A lot of people are experimenting with this, and what's happening is, if you look at the broader arc of the compute curve, we have more resources available and faster inference available than ever before. Let's talk next about the Flow AI filmmaking app, because I found that, just in terms of creativity, it's certainly a very interesting announcement.
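Below is a minimal sketch of the fan-out-and-judge pattern described above: several candidate answers generated in parallel, with a supervisor step picking one. The generate() and supervisor_pick() helpers are hypothetical stand-ins for whatever model or API you actually use.

```python
# Minimal sketch of "parallel reasoning + supervisor selection".
# generate() and supervisor_pick() are placeholders for real model calls.
from concurrent.futures import ThreadPoolExecutor


def generate(prompt: str, attempt: int) -> str:
    # Stand-in: replace with a call to your model of choice. Varying the
    # temperature or seed per attempt is what makes the candidates differ.
    return f"[candidate {attempt}] draft answer to: {prompt}"


def supervisor_pick(candidates: list[str]) -> str:
    # Stand-in: in practice this would be another model call that sees all
    # candidates and returns (or merges) the strongest one.
    return max(candidates, key=len)  # toy heuristic for the sketch


def parallel_answer(prompt: str, n: int = 5) -> str:
    # Fan out n independent attempts, then reduce them to a single answer.
    with ThreadPoolExecutor(max_workers=n) as pool:
        candidates = list(pool.map(lambda i: generate(prompt, i), range(n)))
    return supervisor_pick(candidates)


if __name__ == "__main__":
    print(parallel_answer("Summarize the top risks in our member renewal process."))
```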
Speaker 2:Yeah, so I didn't have a chance to play with Flow directly, but it's kind of a combination, like I mentioned, of all the things, Veo 3 and Gemini, which I have experimented with. I will say my first experience with this was actually on Reddit, which you and I have been talking about just recently, but I'm an avid Reddit user. I like to go to that platform if I need recommendations or advice and want to crowdsource information for travel, for acting classes, for recipes, all sorts of things. So I'm in an acting thread and someone dropped the release of Veo 3 and the video that was going viral, and they basically said this is going to completely ruin, like, performing arts and everything. It was kind of a doomsday post, which I immediately clicked on and was like, ooh, when my two worlds clash, right, my acting world and then my Sidecar Sync AI enthusiast world. So I will say I was a little bit taken aback initially. I knew that this was possible, but seeing it in that context, with all the comments of other actors saying this is going to ruin everything, we're never going to be able to act in the future, this is going to take over, I had a little bit of a negative angle on it perhaps. But it's just two sides of a coin, I think.
Speaker 2:On one side, you're allowing so many people to create things that never would have been possible, including myself. Having an idea, and not just in the US, someone anywhere in the world having an idea, a story that they want to tell, something that's emotional for them, impactful for them, that might be helpful for other people, and being able to act on that at your fingertips for essentially free is incredible. On the reverse side, being able to do that with no humans in the loop, or very minimal humans in the loop, when making a living as a creative is already really difficult, that sucks.
Speaker 2:So I do think, and I read this in the thread too, that art for art's sake will continue. I think there's going to be a group of people that want to consume art made by humans because that's what makes them feel the most, and then there will be a market for AI-generated art as well, and maybe there's a little bit of a crossover in the middle, when you can appreciate a filmmaker with limited resources creating something really beautiful out of this technology. But it's hard. It's hard to grapple with for sure.
Speaker 1:That makes sense. Thanks for sharing that perspective. It's hard for me to relate to in the sense that I don't have a creative side in that way, so I can't fully feel the impact it has on that world. But I totally get where you're coming from and I could see that being a risk. I mean, the way I'd relate to that is, you know, with computer science and software development there's definitely a creative element, but just generally, the displacement by AI coding agents is going to wipe out a large percentage of the coding work that humans are doing right now. So what does that mean for the world? That's deeply concerning, and when you see what these things can do at ridiculous speeds, it's truly phenomenal. It's just an amazing experience. So hopefully there's some kind of equilibrium that's met, where there's so much more opportunity that there is ample space for the humans to do what they do and to maintain the artistic side.
Speaker 1:I will say, personally, on the consumption side, particularly with entertainment, I can't see myself really consuming a whole lot of AI-generated TV or movies. I don't know, maybe, but I really would like to see, you know, people do their thing. I think there's tremendous value as a consumer in experiencing what people create. So I don't know, but maybe that perspective will change too, who knows. On the side of the business use of something like the Flow AI filmmaking app, one of the thoughts that I had was, you used the term storytelling, and I think that's the key thing to double-click on. As a species, we've been telling stories since we've been able to move around and talk, right, since we've been drawing pictures on caves and getting together around fires and conveying stories verbally. And if we can do that at scale better, if we can communicate our business ideas through storytelling, that's an interesting concept. I think about, like, our AI learning content that we deliver through Sidecar's AI certifications and AI courses, and I'd love to have additional modalities of content in there. Right now, for those of you who haven't experienced it yet, if you go to Sidecar's AI Learning Hub, you will see AI-generated content, and we use AI avatars and we use AI voices and the content itself.
Speaker 1:We heavily use AI to help us prepare it. Everything has been touched and reviewed and originated by a human, but we use a heavy, heavy dose of AI to prepare our AI learning content. It gives us tremendous flexibility and speed, the ability to modify it, working in partnership with some of our clients to deliver AI learning for whatever their industry is. It's wonderful, but it's also the modality is fairly simple. It's an avatar, like you know, which is basically what I was doing and you were doing when we were recording these courses, manually speaking and talking about each slide, and then there's some demos mixed in.
Speaker 1:But wouldn't it be great if you could have additional dimensionality to that, where there are, you know, some kinds of videos, maybe three-minute or five-minute videos, explaining a complex concept? An example is in our AI learning content: we have a section of our data course all about vectors, and in that lesson we talk about this concept of vector embeddings and what they do and how they work in the world of AI, and we try to explain it in a fairly non-technical way. I could see there being some really interesting animations and videos that this type of platform could create.
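For readers new to the vector embeddings mentioned above, here is a toy illustration of the idea: texts become lists of numbers, and similar meanings show up as vectors pointing in similar directions. The numbers below are made up for illustration, not output from a real embedding model.

```python
# Toy illustration of vector embeddings and similarity.
# The vectors are invented stand-ins for real embedding-model output.
import math


def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


embeddings = {
    "annual conference registration": [0.9, 0.1, 0.3],
    "sign up for the yearly meeting": [0.8, 0.2, 0.4],
    "membership dues invoice": [0.1, 0.9, 0.2],
}

# Phrases with related meaning score closer to 1.0 than unrelated ones.
query = embeddings["annual conference registration"]
for text, vector in embeddings.items():
    print(f"{text}: {cosine_similarity(query, vector):.2f}")
```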
Speaker 1:So I think that use case could be really interesting. And for our association community, that's pretty much what associations do: they tell stories in their space. So could these tools be used, even in some experimental ways initially, to do things you would never be able to do or could never afford, you know, hiring actors to go create an animation for something like that, right? What do you think about that use case?
Speaker 2:I think it's a great idea, and I think it goes back to the idea of breaking your brain. We often think, when there's a topic that's serious or dry perhaps, maybe like vector databases, that there's no place for animation in that, there's no place for fun, because we've never done it like that historically. But I've mentioned on the pod before that across Blue Cypress we use Ninjio, which is a platform that provides cybersecurity training through animations, and that's a very serious topic that you have to be very thorough with. But they're using animations and cartoons and storytelling, and I like them. Those videos are not something I just click through to have to finish. They're actually quite enjoyable, and I find myself retaining more of the content because it's in that story mode. So if you are discouraged by the idea of, I don't know, having serious content portrayed through a story, I encourage you to try it out. Maybe not every piece of content needs to be a story, but it's definitely a powerful medium.
Speaker 2:Amith, on the Veo 3 front, obviously my little eight-second example was a fake testimonial about an association. After I created that, I was impressed, but I also thought, huh, well, perhaps you shouldn't create a fake testimonial. That's probably not a great use case, because if an association posted that, it might erode trust. Veo 3 is incredible. I highly encourage you all to check it out. But I'm curious for you, Amith: what immediate or near-term use cases do you see for associations coming out of something like hyper-realistic videos from a single prompt?
Speaker 1:You know, I think the idea is illustrating things that aren't an individual, you know, attesting to their satisfaction with your membership or your events or whatever, but other things where you're trying to illustrate a concept that's better conveyed visually or through video. Think about an example: some of the most effective instructional videos that I find on YouTube, which I go to all the time for different things, are, you know, a person talking, but they might have a whiteboard they're drawing on, kind of creating an image, or maybe they're just showing some concept and illustrating it.
Speaker 1:So it's kind of multimodal in that sense, I guess. And so if this tool can produce videos like that, that can be interesting, or maybe even produce super-realistic-looking animations where you're showing concepts in a very visual, like 3D, 4D way, where these concepts are coming to life as things that you're illustrating. I think that has a lot of application in learning in general, where you're trying to convey concepts and you're trying to literally illustrate those concepts. A lot of times we say, hey, let's illustrate this idea with an example, and we'll write out the example or speak to an example. But could we potentially somehow visualize that, right? Another use case that comes to mind is when we're thinking about, like, interacting with a website or some other kind of application, being able to visualize how people might use such a tool in the real world or, you know, visualizing it in a prototype environment. There's all sorts of different ways I think you could throw, you know, concepts at this.
Speaker 1:I think the thing people are doing is asking, what kind of videos are we used to seeing? We're used to seeing videos of ourselves. We're used to seeing videos of, maybe, animals or, you know, just different kinds of images that are out there, of landscapes or whatever. So we just start doing that with these tools. But what kinds of videos do we not have that we'd love to have, that would help us illustrate our point, help us tell stories better? I think there's this whole latent space in the world of possible videos of which we've explored this tiny little percentage.
Speaker 1:It's kind of going back to the whole Jevons paradox conversation we had around AI inference, where we say, hey, look, as the cost of something approaches zero, that increases the demand, and it also, on the supply side, importantly, increases the number of people producing these things, because they see this 100x, 1,000x, million-x increase in the opportunity. That's why you see people running out there and building these massive clouds, these hyperscalers, building opportunities for doing more and more AI inference. So I think what it creates is use cases no one would have dreamed up, even the inventors of the technology.
Speaker 2:I want to zoom in a bit on AI search, or AI Overviews. Anecdotally, in my opinion, I feel like these have gotten a lot better. On Google Search, I find I don't really have to click into as many pages when I'm looking for an answer. You know, this morning I was looking up the fiber content in my oatmeal and it had the nice AI Overview, and I said, perfect. So I'm curious. We've talked on the podcast before about SEO being dead, which I know is a little bit alarmist, but I just don't know if it's going to live on in the same way that it has. What are your thoughts there for associations?
Speaker 1:Yeah, a lot of marketing folks, leading marketing influencers, have been posting stuff on LinkedIn and elsewhere saying, look at the traffic stats of the top hundred websites in terms of where they're getting their traffic from. First of all, traffic has gone down in general. Secondly, traffic from organic search has gone down dramatically. So it's definitely a real thing, it's definitely an impact. I wouldn't go so far as to say SEO is completely dead, but I do think you need to look at strategies to supplement it and say, well, how do we make sure the AI models pick up on our content? Hopefully these AI models, and certainly the ones from Google, are doing this in a way that provides proper attribution. Now, whether the attribution is good or not, the question would be: is there utility to the consumer in actually clicking through, right? So if there isn't, even if you have been discovered by the AI, and the AI is like, oh, I love Mallory's content, I'm totally going to use Mallory's content in preparing my answer to Amith's question, but then I'm like, that's cool, I totally know how to do that thing now, I don't need to go to Mallory's website, does that really help Mallory? I would argue it probably doesn't.
Speaker 1:So it raises all these questions about, of course, copyright, but really about the flow of value creation, and where the value lies is where the consumer goes. It's really that simple. Independent of the law, independent of whatever is right and wrong, people are going to go to the lowest-friction, highest-value-creation environment they can. That's just the way we all work. That's true for businesses, that's true for individuals. So if you can solve your problem directly in Google Search, you're going to go there, or it could be directly within Claude or directly within ChatGPT, because those tools are able to do searches, and they're quite good at it, and give you a comprehensive answer. I tend to do that in Claude all the time now; I've turned on web search by default in my conversation settings, and Claude is often doing web searches in the conversations I have and coming back with citations. But I, too, have experienced Google's improvement in AI search and
Speaker 1:AI Overviews. So, you know, I just think that we have to be aware of this. I don't really have a particular recommendation on what to do. I still think that having amazing content is really important, because expertise matters, particularly in narrow spaces like the worlds that we live in, in associations, where you have an association for a hyper-specialized, you know, subdomain of a profession. That content has to be built somewhere. As of this moment in time, the AI is not creating that new content out of whole cloth, really. I mean, in some cases it seems like it is, but, you know, the source of truth for your domain still very much can be you, and I think that, as an association, what I'd be thinking about more is how to make sure that my content, my traditional content repository, isn't locked away in such a fashion that it's difficult for people to access.
Speaker 1:That's been a common complaint. You know, if you're an association leader, you probably have gotten at some point a phone call from a board member saying, why does your website suck? How come it's so hard to find things? I know you have this content, but I can't get to it. I've tried search, I've tried this, I've tried that. People try all these tools, like, you know, universal or federated search tools, and none of them really provide answers.
Speaker 1:And with AI, there are better ways to approach this than ever before. So if you're not doing that, you're definitely, you know, on the losing end of the field. But even if you are doing those things, that doesn't necessarily solve the problem you asked about, which is, you know, how does this affect people coming from external sources? Will they find you or not? If you don't have an AI-enabled strategy, you're definitely going to be, you know, so hard to use compared to Google's AI search or similar tools that why would people bother? You're asking them to do like 20 backflips just to approach the front door.
Speaker 2:I know one of our keynote speakers at digitalNow this year is Brian Kelly, who I've had the chance to work with very briefly at Sidecar. He's a fantastic marketing leader. I'm curious if he will cover maybe some of this SEO stuff. I don't know if you know, but it might be interesting to find out.
Speaker 1:I suspect he will. Brian is a marketing practitioner and an entrepreneur and someone who has worked across a very wide array of different businesses, ranging from Fortune 500 companies to small business. He's had a number of his own companies. He's helped a number of our businesses over the years. We've known Brian for 15-ish years and he's an innovative thinker and he's done a lot of really cool work with AI. So I'm excited to hear his talk and I suspect this theme will definitely be top of mind for him and part of what he discusses.
Speaker 2:Last question for you, Amith, on this topic. I feel like we're seeing more and more of these top-tier subscriptions. We've got ChatGPT Pro at $200 a month, Claude Max at $100 a month, and Gemini Ultra at about $250 a month. What is your stance on these? Do you feel like any of our listeners should maybe be seeking out one of these top tiers, or do you feel like you're good with the lower tier?
Speaker 1:I think it just depends on the user. I mean, for software developers, Claude Max is a hell of a deal. At $100 a month, you get unlimited use of this tool called Claude Code, which is this agentic AI coding tool that basically all of our team members across, you know, all of our companies are using extensively, and we're spending way more than $100 a month just paying by the token, essentially, as you go. So that's a great deal, it's a dead obvious one. And then you get benefits in the Claude desktop tool and the web tool as well. And, you know, the Gemini Ultra offering and ChatGPT Pro, they all have really attractive features.
Speaker 1:I think what happens is, once you start getting into the tier of $100, $200, $300 a month, you're talking about probably a winner-take-all kind of environment where you're probably not going to have Claude Max and Gemini Ultra. You probably pick one of the two, which, of course, is the desire of the companies, to get you into one of their camps. The more tools you use from a given company, and especially the more they go up the application layer of the stack, and I'll come back and explain specifically what I mean by that, the more sticky their offering is. So I think that this makes a ton of sense for the businesses. They're probably not expecting more than a single-digit percentage of their total audience to go for these subscriptions, but those become the people that then bring the rest of their organization with them, even at a lower tier. Now, what I mean by going higher up the application layer of the stack is this: you have the basic model, and the model is capable of having conversations with you, and then you build a UI on top of that so you can actually have an end user, you know, send and receive messages. That's the basics of all of these tools. But what do you do on top of that to make it so that you have a higher allegiance to one of these tools over another? Of course, the model has to be fantastic, right?
Speaker 1:I know, Mallory, you mentioned earlier, and you've said this a number of times on the pod, that you're a big fan of Claude and have been for some time, and I think a lot of that initially was that the model was better from your perspective. But they also provided features like Artifacts early on, well before ChatGPT had Canvas, and a number of other tools that made the UI more pleasing and more useful to you, right? So that was a thing that started to get your allegiance. But now Claude is adding features like Projects, where you can create a project in Claude and provide certain information, like documents and other things that are part of that project. Then you can have more threads or more conversations in that project, and your colleagues can as well within the project. So that provides an interesting dynamic, because you're still using the same underlying model, but you have more invested in that environment. You've set up the project, you have more people in the project, you're collaborating, you're sharing. You can create some product-led growth through that tactic, where you share a thread and somebody else wants to see it. Of course they have to be part of that same Claude subscription. And ChatGPT is doing similar things.
Speaker 1:All of these companies are run by people who have smart product management folks, beyond, of course, their very smart AI developers, and they're thinking about these kinds of things. So I think, from the consumer perspective, the association leader's seat, there are two things to think about. Number one: do you want to have a standard where you offer one of these companies' products to all of your employees, or do you offer more than one, but maybe make your employees choose which product? You know, right now we provide most of our employees ChatGPT, and many of them Claude as well.
Speaker 1:Once you start getting into like $20 a month, maybe, you know, it still adds up once you have 100-plus people on these tools. But when you have a tool at maybe $100 or $200 for some of your people, now you're really going to start thinking a lot more critically: hey, do we really need both? Personally, I've gravitated much more towards Claude over the last couple months, particularly because of the Claude Code tool. I like having that environment along with the Claude desktop tool. But I think what we have to do is start thinking a little more critically about these things as systems, not just tools.
Speaker 1:That's really the point I'm trying to make: they're becoming much more woven into our business workflow than they are just a place to get, like, a task done.
Speaker 2:Yeah, I think you said that really well, on the whole winner-takes-all angle. I would say at this point I'm not ready to put all of my eggs in the Claude basket, but that's primarily because ChatGPT 4o's image generator is just so good. But as soon as Anthropic releases something similar or better, I don't know, maybe I'll be at that point. This is a really good segue for us into the next topic, which is Claude 4, the latest generation of AI models from Anthropic, released in May of 2025. It comprises two main variants, so we've got Claude Opus 4 and Claude Sonnet 4. Both models set new benchmarks in coding, advanced reasoning and AI agent capabilities, with Opus 4 positioned as the most powerful and intelligent model in the Claude family.
Speaker 2:Right now, Claude Opus 4 is recognized, at this moment in late May 2025, as the world's best coding model, excelling in complex, long-running tasks and agent workflows, and it achieves state-of-the-art results on coding benchmarks. It's capable of sustained, multi-hour autonomous workflows, handling tasks that require thousands of steps and continuous focus, such as independently running for nearly a full workday. It supports a 200,000-token context window and up to 32,000 output tokens, enabling work with big code bases or documents. It excels at agentic applications, advanced coding, including code generation, refactoring and debugging, agentic search and research, and high-quality content creation. We're seeing those hybrid reasoning modes as well, so instant responses for quick queries or extended step-by-step thinking for deep reasoning, with summaries for long thought processes, and it integrates extended thinking with tool use. This is in beta right now, allowing the model to alternate between internal reasoning and external tools like web search or APIs for improved responses.
Speaker 2:Claude Sonnet 4 is the smaller model. It's still a significant upgrade from Claude Sonnet 3.7, and it balances performance and cost for high-volume applications. It also excels at code reviews, bug fixes, customer support and more AI-assistant tasks, and you still get hybrid reasoning, tool use and improved memory features, though Opus 4 remains superior for the more demanding tasks. Overall, I would say the innovations and improvements can be summed up like this: both models can use multiple tools at once, enhancing workflow automation and research capabilities. When given file access, Opus 4 can create and maintain memory files, storing key facts to maintain context and continuity over long tasks, and both models are 65% less likely to use shortcuts or loopholes in agentic tasks compared to previous versions, improving reliability and alignment.
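For anyone who wants to try the hybrid reasoning described above from code rather than the Claude app, here is a minimal sketch using the Anthropic Python SDK with extended thinking turned on. The model ID string is an assumption; check Anthropic's documentation for the current identifiers and parameter limits.

```python
# Minimal sketch of calling a Claude 4 model with extended thinking via the
# Anthropic Python SDK. The model ID is an assumed placeholder; confirm the
# current identifier in Anthropic's docs before using.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-sonnet-4-20250514",   # assumed ID; swap in an Opus ID for harder tasks
    max_tokens=2000,                    # must exceed the thinking budget below
    thinking={"type": "enabled", "budget_tokens": 1024},
    messages=[{
        "role": "user",
        "content": "Outline a renewal-campaign plan for a 5,000-member association.",
    }],
)

# With thinking enabled, the response mixes thinking blocks and text blocks;
# print only the final text output here.
for block in response.content:
    if block.type == "text":
        print(block.text)
```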
Speaker 2:Something I want to cover briefly here are safety levels. Anthropic uses something called AI Safety Levels, or ASL, which I believe it invented. It gave Opus 4 an AI safety level of 3, and Sonnet is at level 2; 4 would be the highest. So Claude 4 Opus is the first Anthropic model deployed under ASL-3, reflecting its advanced capabilities and the associated increase in potential misuse risk.
Speaker 2:Internal testing indicated that Claude 4 Opus is more effective than prior models at providing potentially dangerous advice. And there was also a controlled simulation where they embedded Claude 4 Opus in a fake company, and within this fake company they sent emails between fake employees saying that they wanted to replace Claude 4 Opus as the primary AI model within the company. Something interesting that happened in this controlled simulation was that Claude 4 Opus used blackmail so as not to be replaced. So that is just an example that they provided, again in a controlled simulation, of how this model perhaps has a heightened risk because it is such a powerful and intelligent model. Last thing I want to cover here is that 15 hours ago, 18 hours ago, Claude, or rather Anthropic, rolled out Claude voice mode, which is incredibly exciting. Voice mode has been in ChatGPT for, I mean, you would know better than me, like a year or so? More than that? Yeah, something like that.
Speaker 1:Advanced voice mode, all the different terms they use. But yeah, the current capabilities we've had for maybe three or four months, but probably a year, a year and a half actually. I remember using it in the early part of 2024. So they've had some form of voice mode for, let's just say, about 18 months, it seems like.
Speaker 2:For a while. And that's another thing: I know I said the image generator is something that keeps me going back, but also voice mode. I really like that I can have the back and forth, and I've been waiting and waiting for that within Claude. So now they're rolling out voice mode in beta for the mobile apps over the next few weeks. This allows you to speak with Claude and hear responses back, so it's a great partner when you're on the go or you want to just brainstorm out loud. And then there's also a Google Workspace integration, so paid-plan users can ask Claude things about their calendar, about their email and their docs through voice conversation, which I think is quite neat.
Speaker 2:I, Amith, have been playing around with Claude 4. Primarily, my work at Sidecar focuses on the podcast and on blog writing and production, and so I use Claude quite heavily for all of these things. So I know 3.7 to 4, if you don't use Claude very much, you might think, well, it seems about the same. Claude 4 is a major improvement, in my opinion, in terms of writing. It also just seems smarter. I feel like I have to say a little bit less, or remind it a little bit less about certain things I've said earlier in the conversation. I'm really happy with Claude 4 Sonnet. I've actually not tried Opus. I'm curious if you have tested either out, if you have an opinion.
Speaker 1:Yeah, I was on it the day they released it, quite excited. It's my primary workhorse tool, as it is for you, Mallory. I use it for all sorts of things, for a lot of different business tasks, in the Claude desktop app that I use on my computer and also Claude Code, which we'll talk about. But Claude 4 is definitely much smarter in terms of day-to-day use. It gets things right way more often than Claude 3.7 did, so that's exciting. That's one way to kind of know if you're on the right track with AI: how often do you have to get it to fix something? And if it gets it right on the first shot for sometimes fairly complex requests, that's a real positive thing. I've used both Sonnet and Opus. I tend to use Sonnet as my default and go to Opus if it doesn't solve the problem, which is pretty rare. And the other thing you can do is combine Claude 4, either Sonnet or Opus, with the extended thinking mode and also the research tool that is an option to turn on in Claude, and it'll do some pretty impressive work for you, because it's using that higher level of intelligence but then it's kind of doing this agentic loop where it gathers information and distills it down. Sometimes these deep-research-type tasks in Claude can take 5, 10, 20 minutes, but it'll come back to you with some pretty incredible findings. So I like to combine Sonnet with extended thinking turned on, and the research tool not always turned on, because it does take quite a long time, but I use it fairly regularly and I find it is extraordinary at producing great results.
Speaker 1:There's an experiment I would like to offer and invite our listeners and viewers to try on their end. Take your website, whatever your website is, take the website URL, drop it into Claude and say, hey, Claude, this is my website, please take a look at it, and these are the kinds of points of feedback my members are often giving me: it's hard to find information, it's too complicated, it's not contemporary enough, blah, blah, blah, all the different common things. You probably know them by heart because quite likely you hear them regularly from your members. Then say, hey, Claude, I'd love for you to create an artifact that is an interactive prototype of what my website should look like, that would address these concerns. And a minute or two later (again, turn on extended thinking mode, and for this one maybe try Opus) you will see an artifact come to life with a new and improved version of your website that might impress you. I actually suspect it will. And of course, you can go through many iterations. You can give feedback. You can just say, well, give me a couple of other options, and it will give you a couple of other options.
Speaker 1:And I think this is not necessarily suggesting that this replaces the people who do website design for associations, but rather should supplement that process and getting experts involved to help you tune things and implement them. There's a lot of technology involved in making the website actually work, but a lot of times people get stuck with the frame of thinking they've had in the past, and so you know you can take that experiment and say, hey, wouldn't it be great if our site could look like this? And a lot of times us humans will often have all the reasons why it's hard to make it look like that. Right, you say, oh, I really like this website. I was inspired by website ABC and it's a really cool website. And then there's lots of reasons why you can't have that for your association, whatever that is. Maybe you don't have the budget, maybe it doesn't fit the design motif or whatever, but why not work with Cloud to come up with some prototypes for things like that?
Speaker 1:Or I gave you a very general idea.
Speaker 1:What if you have a specific problem that you're trying to solve? Let's say, for example, that you would like to have more new members come in, and your association has a new member application process that, let's just say, is unpleasant, is difficult and takes a lot of steps. Once again, you could take some screenshots of that, or you could take the actual link to it, if it's available publicly, give it to Claude and say, hey, I really want you to reimagine this as a much more user-friendly experience, something that's dynamic, that's interactive, that maybe is even enjoyable. I think you will find your experience working with Claude to be a really eye-opening one, and this example use case that I'm suggesting you experiment with will take advantage of all of these new features of Claude 4, which is why I'm suggesting it here. It's something I've been talking about for a while, but I think at this point just about anybody can do this in the Claude app and see some pretty exciting results.
Speaker 2:And, as you said, it's not about replacing expertise. It's also very practical. I don't know about you, Amith, but I'm a bit less visual when it comes to, you know, how I learn and process. So for me to practically explain a visual design of a website with words can just be really challenging. Having a conversation in voice mode with Claude and then having it spin up this interactive thing that I could share with web developers and say, here's what I'm thinking, what do you think about this? That's just practically so much easier.
Speaker 1:Totally. The way to think of these AI tools is that you have a new team member. They're part of your team, and they can help you brainstorm, help you come up with new ideas, and help you come up with creative alternatives to something you're working on. So give it a shot. The best way to learn AI is to play with AI.
Speaker 2:Well, we mentioned Claude Code earlier in the pod, which you're a fan of, and obviously we've seen some big advancements on the coding side with this Claude 4 release. Can you help contextualize how big this upgrade is in terms of what you've seen on the coding side?
Speaker 1:Yeah, I mean, it's just better. That's the simple way to put it, and it's this ongoing, relentless progression of AI intelligence. Claude 4 is clearly a leap above not only the prior Claude models, but it's now better than Gemini 2.5 Pro and better than GPT-4.1. And I would argue it's as good, or probably better actually, than the o3 or o4 models from OpenAI in many of the practical day-to-day use cases. From a benchmarking perspective, it's about the same, but in terms of day-to-day use it's really, really good. We're also using Claude 4 in an experimental edition of our Skip AI agent, which is our data analyst and report-writing AI agent, and the output we're getting is quite intriguing as well. I don't know that this is necessarily the moment in time where people who haven't been using AI all flock to it because it's so much better, but to those of us that are deep in the game, it's a very natural next step in capability. Maybe it does take things far enough along that you ask: how much better is Claude 4 than the average human? I would argue it's better than I am at almost everything, and I think I'm pretty good at a lot of things. I'm really terrible at a lot of things too, but Claude 4, even Sonnet, I don't even need Opus, is better than me at coding and better than me at a lot of things.
Speaker 1:Right, certainly research. I have no tolerance or patience for reading tons of articles, and there's so much more you can do with it. I still think I do certain things quite well that the models may not, in terms of thinking broadly across a wide variety of topics, distilling it, and coming up with new ideas. But my point in saying all that is that you really need to go beyond the trivialities of your experimentation. If you're one of these folks who's kind of dabbled with AI, saying, yeah, I went into ChatGPT and asked it to make a cocktail recipe for a party I was having, that's 2022, 2023-style experimentation, and that's cool. It's better than nothing, for sure.
Speaker 2:But you've got to get into this stuff and try doing your actual work with it. If you've been with us from the beginning of the Sidecar Sync podcast, we were in the trenches of generative AI. I feel like we were impressed by the little things, and it was impressive, don't get me wrong. But look at where we were versus where we are now. If you're just getting started, enjoy, but also note: this is the worst AI you'll ever see. In the past few years, Amith, I have been shocked by how good things have gotten. And then I'll adjust to Claude 4 and say, oh, I wish Claude 4 were better. And then we'll have whatever Claude 4.2 Mini Pro, and it'll be better.
Speaker 1:I will say that the folks at Anthropic have been a little bit simpler and clearer with their versioning; Claude 4 Opus and Sonnet are pretty straightforward names. We'll see what happens with the folks over at OpenAI with the various models they're releasing, because they do have someone focused on the consumer side of the house now who I think brings a much more consumer-centric mindset, so hopefully we'll have some cleaner model names from them. But it's a race, and it's a race in many different dimensions. One way to think about it is that the prize is so incredibly enormous, potentially the largest economic opportunity in the history of our species, the history of the world. I don't think that's too much to say at this point, because we're scaling intelligence, and scaling intelligence up until now has required scaling the number of people on Earth, which takes a long time and is very hard to do, whereas we essentially have abundance coming through the form of AI now, and that, of course, is an incredibly large opportunity. So there are a lot of dollars and a lot of smart people chasing it, and I think that's the reason we're seeing this compounding. But ultimately it's because of the exponential nature of this, where the AI is very, very close to improving itself. The last thing I'll leave you with on that thought: I think Anthropic themselves, the maker of Claude, were the ones who went on the record saying about 85% of the code they write, including on the model itself, is written by Claude. So that's still human-in-the-loop in terms of the self-improvement.
Speaker 1:But effectively, if you zoom out and ask, is AI improving itself? Of course it is, and it probably has been for a few years now, in several ways: by enabling us to do a better job, it's improving itself. Recursive self-improvement in the academic sense means the model is continually improving itself, within the model, and that's not happening yet, but effectively we're starting to see what that means.
Speaker 1:So it's a pretty exciting time. And the comments you made on safety and alignment, that's something we need to keep talking about and thinking about, because ultimately what we have here is the most powerful tool we've ever held in our hands, and the excitement needs to come with awareness of what this could potentially mean. The downside risks are real, and nobody really knows what they are, and nobody knows how to contain them. So we have to address that by continuing the dialogue. I know of no other way to advance safety than to invest time and energy in it, and I'm thankful that the folks at Anthropic are one of the leaders in this space, really deeply focused on that.
Speaker 2:I mean, I guess you do have some concern around safety, but is it the kind of concern where, and I don't want to say there's nothing we can do about it, but beyond having the conversation, it's almost like this is just a given: if the AI advances further, we're going to see the dark side of that as well?
Speaker 1:I still feel the same way I did even going back 10-plus years, which is that AI is going to be used for both good and bad, and the only way for us to counter the bad AI use cases is with more good AI and better good AI. That's a hyper-simplistic way of framing it, good and bad; there's really no such thing in such simplistic terms. But to the extent that you consider certain use cases really bad and certain use cases good, you have to focus on having a lot of good AI to protect you from the bad use cases. So, for example, we say, hey, we can create hyper-realistic video and audio and 3D experiences, and soon maybe holograms and all this other stuff.
Speaker 1:What's going to stop people from committing fraud at the most grandiose scale ever imagined, fraud that will be completely undetectable even by the most intelligent, most aware people? You'll be fooled by it. So how do you detect that? Well, you alone have no chance. You, along with really powerful good AI, at least you have a fighting chance, right?
Speaker 1:So my point of view is, yes, the dialogue needs to continue, but we have to keep driving forward with the development of AI with the intent of using it for good, positive, societally beneficial use cases, because there are plenty of people out there who, no matter what we do, no matter what regulation and law enforcement attempt to do to tamp it down, will pursue bad use cases for AI. That's my point of view. I don't know if that's right or wrong, but I think it's more true now than it was when I started thinking and saying it, and I don't know of any other framework that can protect us from the downside of AI other than lots of good AI.
Speaker 2:Yep, I think nothing has changed with that. In fact, like you said, I think it's more relevant now than it ever has been, and it probably will continue to be in the future. I would say we at the Sidecar Sync podcast are team good AI, and hopefully all of you are as well. Thank you for tuning in to today's episode, and we will see you all next week.
Speaker 1:Thanks for tuning in to Sidecar Sync this week. Looking to dive deeper? Download your free copy of our new book, Ascend: Unlocking the Power of AI for Associations, at ascendbook.org. It's packed with insights to power your association's journey with AI. And remember, Sidecar is here with more resources, from webinars to boot camps, to help you stay ahead in the association world. We'll catch you in the next episode. Until then, keep learning, keep growing and keep disrupting.