Sidecar Sync
Welcome to Sidecar Sync: Your Weekly Dose of Innovation for Associations. Hosted by Amith Nagarajan and Mallory Mejias, this podcast is your definitive source for the latest news, insights, and trends in the association world with a special emphasis on Artificial Intelligence (AI) and its pivotal role in shaping the future. Each week, we delve into the most pressing topics, spotlighting the transformative role of emerging technologies and their profound impact on associations. With a commitment to cutting through the noise, Sidecar Sync offers listeners clear, informed discussions, expert perspectives, and a deep dive into the challenges and opportunities facing associations today. Whether you're an association professional, tech enthusiast, or just keen on staying updated, Sidecar Sync ensures you're always ahead of the curve. Join us for enlightening conversations and a fresh take on the ever-evolving world of associations.
Sidecar Sync
Anthropic’s Claude 3.5 Sonnet, AI Scams, and Runway's Gen-3 Alpha Model | 37
In this episode, Amith and Mallory delve into the latest advancements in the Claude 3.5 Sonnet AI model by Anthropic, exploring its capabilities and potential applications. They also examine the growing threat of AI-driven scams and discuss how financial institutions are leveraging AI to combat fraud. Finally, they highlight the exciting developments in Runway's Gen-3 Alpha model for high-fidelity video generation, discussing its potential uses in creative and professional fields. Join us for an insightful conversation at the intersection of AI and innovation.
🛠 AI Tools and Resources Mentioned in This Episode:
Claude 3.5 Sonnet by Anthropic ➡ https://www.anthropic.com
Runway's Gen 3 Alpha ➡ https://www.runwayml.com
Chapters:
00:00 - Introduction
02:29 - Claude 3.5 Sonnet
04:38 - Mallory’s Experience with Claude 3.5 Sonnet
08:35 - Technical Insights
11:37 - Speculations on Claude 3.5 Opus
22:36 - AI Scams and Fraud Prevention
28:46 - Practical Defense Strategies Against AI Scams
32:45 - Runway's Gen-3 Alpha: High-Fidelity Video Generation
45:17 - Conclusion and Listener Engagement
🚀 Follow Sidecar on LinkedIn
https://linkedin.com/sidecar-global
👍 Please Like & Subscribe!
https://twitter.com/sidecarglobal
https://www.youtube.com/@SidecarSync
https://sidecarglobal.com
More about Your Hosts:
Amith Nagarajan is the Chairman of Blue Cypress 🔗 https://BlueCypress.io, a family of purpose-driven companies and proud practitioners of Conscious Capitalism. The Blue Cypress companies focus on helping associations, non-profits, and other purpose-driven organizations achieve long-term success. Amith is also an active early-stage investor in B2B SaaS companies. He’s had the good fortune of nearly three decades of success as an entrepreneur and enjoys helping others in their journey.
📣 Follow Amith on LinkedIn:
https://linkedin.com/amithnagarajan
Mallory Mejias is the Manager at Sidecar, and she's passionate about creating opportunities for association professionals to learn, grow, and better serve their members using artificial intelligence. She enjoys blending creativity and innovation to produce fresh, meaningful content for the association space.
📣 Follow Mallory on LinkedIn:
https://linkedin.com/mallorymejias
To me, educational awareness is step number one. The world is filled with people who have no awareness at all about what's going on, and I worry a lot for those folks, like what's going to happen to them in this environment. I think the best thing you can do for not only your company, your association, but also your family, your friends, is to talk about this stuff and try to spread awareness of it. That, to me, is defense wall number one. Welcome to Sidecar Sync, your weekly dose of innovation. If you're looking for the latest news, insights and developments in the association world, especially those driven by artificial intelligence, you're in the right place. We cut through the noise to bring you the most relevant updates, with a keen focus on how AI and other emerging technologies are shaping the future. No fluff, just facts and informed discussions. I'm Amith Nagarajan, chairman of Blue Cypress, and I'm your host. Greetings everyone. Welcome back to the Sidecar Sync. We're here for another action-packed and exciting episode at the intersection of artificial intelligence and all things associations.
Speaker 1:My name is Amith Nagarajan. I'm your host.
Speaker 2:And my name is Mallory Mejias. I'm your co-host and I run Sidecar. And before we get going with our three exciting topics of the day, let's take a moment to hear a quick word from our sponsor.
Speaker 2:Today's sponsor is Sidecar's AI Learning Hub. The Learning Hub is your go-to place to sharpen your AI skills, ensuring you're keeping up with the latest in the AI space. With the AI Learning Hub, you'll get access to a library of lessons designed to address the unique challenges and opportunities within associations, weekly live office hours with AI experts, and a community of fellow AI enthusiasts who are just as excited about learning AI as you are. Are you ready to future-proof your career? You can purchase 12-month access to the AI Learning Hub for $399. For more information, go to sidecarglobal.com/hub. Amith, you're joining us from Utah today. How are you?
Speaker 1:I am doing great. It's beautiful up here in Utah, so always happy to be out and about out here in the mountains. How about you?
Speaker 2:I'm doing pretty well myself. I'm particularly excited for today's episode because I took some time this morning before we recorded and actually made sure to test out some of the tools that we were talking about. I don't want to give too much away before we get started, but I'm excited to share what I've learned.
Speaker 1:Yeah, I saw some of your messages earlier about your excitement with Claude 3.5. I know we're going to jump into that in a minute, but it's exciting to see progress when you can kind of feel improvements to some of these tools.
Speaker 2:Exactly, I think, when you're so deep in it every day, all the time. So starting at the top, Claude 3.5 Sonnet is the latest and most advanced AI model released by Anthropic. I want to share some key points about this model. It operates twice as fast as Claude 3 Opus, while maintaining high levels of intelligence. It sets new industry benchmarks across various cognitive tasks, outperforming competitors on evaluations like graduate-level reasoning, undergraduate-level knowledge and coding proficiency. Anthropic also introduced Artifacts, a feature that allows users to see, edit and build upon Claude's creations in real time, creating a more dynamic workspace. Claude 3.5 Sonnet has undergone rigorous testing and has been trained to reduce misuse. Anthropic has engaged with external experts like the UK's Artificial Intelligence Safety Institute to test and refine safety mechanisms, and Anthropic, if you didn't know this, does not train its generative models on user-submitted data unless given explicit permission. Claude 3.5 Sonnet is the first release in the Claude 3.5 model family. It's available for free on claude.ai and the Claude iOS app, with higher rate limits for Claude Pro and Team plan subscribers. It's also accessible via the Anthropic API, Amazon Bedrock and Google Cloud's Vertex AI. Just as a note as well, Anthropic plans to release Claude 3.5 Haiku and Claude 3.5 Opus later this year.
Speaker 2:Amith, I know some of the messages I shared with you earlier this morning were about this, so I think I'm just going to put it out here on the podcast: I think Claude 3.5 Sonnet is my new favorite model. I've been working with it a lot as we are diving deep into the second edition of Ascend, and it's hard to put into words exactly what makes it better. It's more of a gut intuition thing, but it just seems smarter. I don't know exactly how to phrase that, how to attach numbers to it, but I assign it a task and it seems to run with it and do a really good job with less context, which is kind of interesting. In the greater scheme of prompt engineering, it seems like it needs a little less prompting to do an even better job than the output I get from ChatGPT. So I'm really pumped about this. I'm wondering, have you had a chance to try it just yet, and what do you think about the release?
Speaker 1:I have not tried it in any meaningful way. I have logged into Claude and turned it on and asked it a question, but nothing of any significance. So I'm excited to hear more about your experiences, Mallory. I think it's really worth noting a couple of things. First of all, just to continue to underscore the pace of progress: we covered the Claude 3 release, I think, the same day it happened. Our podcast happened to come out when Claude 3 came out, and we were talking about it, and it was a big deal. It was a significant improvement over Claude 2. And here we are, just a few months later, and Claude 3.5 is a significant leap ahead. So that speaks to the speed at which AI is advancing generally. It also speaks to the competitive pressure that exists in the environment between OpenAI and Anthropic specifically, but obviously much more broadly, with all of the other companies out there going after this, particularly Meta and Google. So I'm excited to see it.
Speaker 1:I like Anthropic. As a general statement, I think that they tend to be much more focused on safety than their counterparts among the frontier AI research labs. I don't have hard data to back up that statement; it's more of a feeling based on what seem to be the values of their leadership team. So I like what they're doing. They're certainly backed by a strong major tech player with respect to Amazon; Amazon is the company behind them, much like Microsoft is behind OpenAI. So there's a lot of good things here, I think, from an API access perspective, for developers looking to build solutions on top of a model.
Speaker 1:Claude 3.5 Sonnet is very much worth taking a look at.
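For developers who want to try that, here is a minimal sketch of what calling Claude 3.5 Sonnet through Anthropic's Python SDK might look like. The model ID and request shape follow Anthropic's published Messages API as of this episode; treat both as details to verify against the current docs rather than a definitive implementation:

```python
import os

# Model ID published for the Claude 3.5 Sonnet release; check Anthropic's
# docs for the current identifier before relying on it.
MODEL = "claude-3-5-sonnet-20240620"

def build_request(prompt: str, max_tokens: int = 512) -> dict:
    """Assemble the keyword arguments for client.messages.create(...)."""
    return {
        "model": MODEL,
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }

if __name__ == "__main__" and os.environ.get("ANTHROPIC_API_KEY"):
    # Requires `pip install anthropic` and an ANTHROPIC_API_KEY env var.
    import anthropic

    client = anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])
    message = client.messages.create(
        **build_request("Name three ways associations can use AI.")
    )
    print(message.content[0].text)
```

The same model is exposed through Amazon Bedrock and Google Cloud's Vertex AI, though each platform wraps the request in its own client library, so the details differ slightly there.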
Speaker 1:Now, the thing that we should probably quickly reintroduce to our listeners is that the three sizes of models in the world of Anthropic's Claude family are Haiku, Sonnet and Opus, from small to medium to large respectively.
Speaker 1:And so CLOD 3.5, only a few months later outperforms the Sonnet size model and 3.5 outperforms the 3.0 Opus model, which is their biggest model. And that also speaks to this overall trend that smaller models are becoming more and more and more powerful, which is good in a number of levels. So I find it super exciting For my particular areas of great focus with multi-agent systems, their improvement in coding ability in the human eval benchmark, which is this benchmark that AI models are using to determine how good they are at writing computer code. It was significantly above GPT-4 Omni, so from my perspective that's exciting as well. I know the team at Tassio Labs is working on testing it out with Skip and with Betty and with other products, so there's a lot to be excited about. I agree with you on that. I'm looking forward to actually digging into it myself.
Speaker 2:For sure. So I mentioned there's this sense of it feeling smarter, and it's the medium-sized model, which is quite interesting. Can you help contextualize what you think happened from the Claude 3 family of models to 3.5? Was it a change in training? Is it a change in the structure of the models? What exactly would have changed to make it smarter?
Speaker 1:The information I've seen is pretty limited, but I believe what's happening is that you continue to have improvements in the training data and the quality of the training data. Originally the idea was kind of brute force: people were throwing everything and anything at these models in their training, and the more you could throw at them and the longer you could train them for, the better the quality of the models. Now what we're finding is, number one, improved data quality going in dramatically affects the quality of models relative to their size. There was quite a bit of research over the last couple of years showing that a model trained on a smaller training set can be as performant or even better than a model trained on a much larger training set if the quality of the data is higher. So that's an important thing to note, and Anthropic has been talking about this a lot in terms of the quality of their training set. Also, the models are able to generate high-quality synthetic data which can then feed the next generation of models. That sounds a little bit odd, because models that aren't as smart producing content that will then train models that are smarter is a little bit of a leap of faith. But the models we have now are good enough to create high-quality synthetic material that's very accurate, which in turn helps improve the training of the future models. The other thing that we're finding is that the longer we train models, and the more we keep training them on the same information but in new ways and in different modalities, the more the quality of the models increases as well.
Speaker 1:So, um, you know, at some point you kind of have to make this cutoff decision of when we're done training a model. Um, that's somewhat of a qualitative decision. Still, in terms of when does the model trainer say, hey, it's done, it's fully baked, it's ready to be pulled out of the oven and served to customers? Um, that isn't like a beginning and an end. There's no like binary decision-making on that. It's somewhat qualitative and there's certainly benchmarks people are using to determine that and they're testing the models all along the way. But you know, much like the baker kind of determining if something is, you know, baked enough or overcooked or whatever. They kind of look at it and take it out at the right time.
Speaker 1:So that's, I think, part of what Anthropic has been doing in their most recent releases. I think they were pretty intentional about calling this a .5 release rather than 4.0, because it definitely represents any criminal improvement. But enough of any criminal improvement. To take the leaderboards essentially which is a big deal in this world of AI models that these leaderboards that are showing model performance relative to benchmarks. A lot of decisions are being made based on people's continued progress towards that. The main thing I would encourage people to think about when they look at these benchmarks, if you are evaluating them, is don't look at it at just a point in time. Look at a company's performance over a period of time, over a series of model releases, and you can kind of tell how companies are doing. And when you do that, you can clearly see that Anthropic and OpenAI are the leaders over the last year and a half or so pretty strongly.
Speaker 2:This is speculative, but I'm thinking, if Claude 3.5 Sonnet, the medium-sized model, is this good, what uses can you see for Claude 3.5 Opus when it's released?
Speaker 1:Well, you know, I think the idea is having different sizes of model for the same model release. So you take 3.5 and you say, hey, within that release we have three sizes of model, each of which has different characteristics in terms of price and performance. Haiku is very high performance, very fast, very low cost, and then Opus is the big, much more powerful model, but also slower and more expensive. And so having multiple models to choose from helps you in thinking through solution design: where do you use the small model, where is something like Haiku a great fit, right? Or even like a Llama 3 we've talked about before; Llama 3 has a very small and very performant model, very fast at inference and very low cost. And so the way I think about it is this.
Speaker 1:I think the reason they didn't release Opus and Haiku right away is they're doing more and more and more training to really set that. They don't want to be in the catch-up game with OpenAI, they want to leapfrog them, I think, and so they're probably doing some much more intensive training on Opus to develop more frontier capabilities. And then Haiku. I think they're probably working really hard to develop a really, really small model that's super high performance and very low cost. So that's my speculation. That's super high performance and very low cost. So that's my speculation.
Speaker 1:But I think the applications of that are, you know, when you think about like high throughput, real-time, low cost things like conversational interactions with people on a website or classifying massive amounts of text in a very accurate way, that's where a model size like a haiku could be really great. But if you need the best reasoning capability, if you need the best code generation, if you need the best in whatever class, then something like an Opus to answer your question will still probably be your best bet. A lot of times those applications also aren't. It's not as important that those kinds of applications are as instant, but expectations are continuing to increase that models respond essentially instantly.
Speaker 2:To give listeners some context, I mentioned that I was working on Ascend second edition with Claude. Unfortunately, there are usage caps, even on the paid version, even on the Teams version, that I keep running into. That's my only complaint for the time being, which means it's just really good and I'm using it a ton. But I dropped the whole previous first edition in there, asked it to read it and then started working with it to establish a new flow. There are some new chapters that we're adding in, so we wanted to make sure everything was seamless, essentially, and some of the suggestions it was giving me were really smart, and it felt like I was actually collaborating with a person. Whereas sometimes with ChatGPT, I'm like, oh, why would you suggest that? That's so repetitive. You forgot this thing I asked you earlier. This actually felt more like a collaboration, which leads me into this point of Artifacts.
Speaker 2:I'm not sure if you saw their demo video of this, but we're kind of seeing the shift from conversational AI to a collaborative work environment. I tested this out just a bit, not with the book, but asking it to create a web page using HTML, and you see a dual screen: the chat interface on one side, and then the code on the other, and you can preview what the webpage will look like as well, so you can iterate on your creation with the model. So that was just really interesting to see. Do you feel like that is the future of all of these chatbots?
Speaker 1:Definitely. I really am excited about the innovation they've had, and really it's a user experience innovation as much as it is an AI innovation, this idea of Artifacts, which is basically, as you described it, Mallory, this side-by-side window where on the one side you're having the chat, and on the other side you're building something. That might be a piece of code for a website, it might be a document, it could be literally anything, it could be an image. In the conversational interfaces we've been experiencing for the last couple of years, you have to constantly remind the AI: hey, go back to the last thing you did for me and make this one change. Say it's created a blog post for me and I just want to change this one sentence, this one paragraph. But you have no guarantee that it's actually going to produce the exact same thing and only do the one thing you wanted.
Speaker 1:So you get like a whole new document every single time, and the beauty of artifacts is that it's like collaborating with a person or a team of people and on this right-hand side of the screen you see the document in question, or whatever it is that you're working on, being updated. So it's kind of like this place that the AI knows it has to come back to and work on with you, and I think that's a really powerful construct. It's actually very similar to what we've been doing in a multi-agent system with Skip, where the idea is the user is collaborating with Skip to create a report that does whatever they want and can keep going back and forth with Skip and changing that one report once the report is created. So it's a similar concept, but I think you'll see that in the next edition of ChatGPT. I think you'll see that across the board pretty much with every AI system out there. I find it really exciting.
Speaker 2:And just as a note to listeners as well, since I've already tested this out for myself: if you do have the paid version of Claude, you have to enable the Artifacts preview. You go to your little icon, enable it, and then you should be able to use it from there.
Speaker 1:Now let me ask you something, since you've had more experience with this particular tool. Do you find that in Claude 3.5 you are getting variations in the tone and the way the thing responds to you? You know, with ChatGPT you can kind of, quote unquote, tell that ChatGPT has created content, because it uses a very formulaic approach unless you prompt it really well not to. For example, it'll say "in conclusion" in almost every blog it ever writes, or if you talk to it repetitively about a task it's doing over and over, it'll say the same thing, like it'll say "certainly," and then it'll go on to produce whatever. Do you find Claude 3, sorry, Claude 3.5, to be more of a natural kind of interaction with you? Is that part of what you think might be feeling more collaborative?
Speaker 2:That is a great question. So there are definitely still some AI-isms, I'll call them, in there. One of the ones I've mentioned that drives me nuts is the "in today's digital landscape" or "in today's evolving AI landscape." That's how it tends to start off a lot of paragraphs, which drives me nuts. There's a little bit of that in there still, but I think it's doing it less, and obviously that's anecdotal; it's more of an intuition feeling, like I mentioned. But it seems like I'm getting more of a variety of copywriting, which is exciting. I also feel like it's doing a great job of using the previous edition of the book's style and mimicking that, so I'm not having to give a ton of guidance on how we should write and how we want to sound in each chapter. So I think that's a great question. I'm keeping an eye out for the little AI-isms, but I think that it's an improvement over ChatGPT.
Speaker 1:I like that AI-isms. I don't think I've heard that before anywhere else, all right, maybe we should trademark it. Yeah, we should put it out there, maybe create like a top 50 AI-isms blog or something like that.
Speaker 2:I would love to do that because I think, oh, I pride myself at least on being able to detect pretty easily these days when something's written by ChatGPT.
Speaker 1:I wouldn days when something's written by chat gpt I wouldn't even have to use ai for that, you know, I wonder. I don't even know if ai could write the blog about ai isms. Do you think it's that aware of itself that'd? Be a fun thing to ask.
Speaker 2:Cloud 3.5 sonnet to do and see what it does, very interesting. I did ask cloud 3.5 sonnet how to enable artifacts and it could not answer that that was a little bit too, too self-aware, I suppose.
Speaker 1:Yeah, that is something that's interesting about part of and this is part of a user experience issue, I think, because there's nothing that's stopping the model developers from improving the knowledge of these models about their own capabilities. What happens is in this pre-training process, you know, it's basically kind of like this chicken and egg problem in a way, in that you know the model is being trained on this pre-training data and the pre-training data essentially is what makes the model's weights get formed. Essentially we've talked about that before on the pod where you know this pre-training process essentially is generating these weights, which is the way the neural network structure is defined, and that doesn't include insight into what that model can do, because the model is in training. So in that pre-training process you can't really say this is what Cloud 3.5 Sonnet does. However, what is possible is, after you've trained the model, you could give it additional knowledge through either a RAG style approach, where you give it some memory there's fine tuning approaches. There's fine tuning approaches. There's other things that you could do. You can even do another round of training on top of the training to provide the model with additional insight on its capability. So I'm kind of curious about that, like why the model developers haven't really done more to make the model smarter about its own capabilities and its user interface and so forth. I think you're gonna to see that happen. You know a lot of the people who speak about AI that I listen to are talking about some of the biggest differentiators are ultimately going to be the user experience. 
You know, so if you look at what we talk about and spend a lot of time thinking about, analyzing, testing and putting to work in various ways: ultimately, if you think about fast-forwarding in time even another year, two more doublings in AI, most of these models will be so good it won't really matter that much.
Speaker 1:For many of the common use cases, you'll have something that's so great that it doesn't really matter that much. Like, when you get on your next flight, do you know if it's an Airbus A320 or a Boeing 737, or a regional jet like an Embraer? You probably don't care that much. You just want to know that it's safe and it gets you there. One of them might have a cruise speed of 520 miles per hour and the other one's 510; you don't really know, because they're both at kind of the state of the art in terms of expected performance from a jetliner. But then, some number of years from now, someone comes out with a new airliner design that's supersonic, quieter, lower fuel consumption, et cetera. It's like a step change in capability.
Speaker 1:Now, I don't know if we'll see kind of a plateau of models, where there won't be major step changes happening every week or every month. But at some point, if that does occur, where models somewhat achieve parity across the major model families, most of the differentiation will be in user experience and in domain knowledge. And that's actually where I think a lot of opportunity still lies for innovators of all types, including private companies developing solutions in verticals, and certainly for associations that develop models or applications on top of models using their knowledge. But I digress somewhat. I think the main thing is, when you think about that type of user experience and the model's lack of awareness of its own capabilities, it's somewhat funny, but I think a lot of that has to do with everyone moving so crazy fast that almost all of their resources are on the models themselves, not on the user experience. I think that's going to start to change.
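The RAG-style approach Amith describes, handing a trained model extra knowledge about its own capabilities at query time rather than retraining it, can be sketched very minimally. The capability notes and the keyword scoring below are hypothetical stand-ins for illustration; a real system would use embedding search and a vendor API:

```python
import re

# Hypothetical documentation snippets about the model's own features, the
# kind of knowledge the pre-training process cannot contain.
CAPABILITY_DOCS = [
    "Artifacts: a side-by-side workspace; enable it under feature previews.",
    "The model is available via the API, Amazon Bedrock, and Vertex AI.",
    "Pro and Team plans have higher rate limits than the free tier.",
]

def retrieve(question: str, docs: list[str], top_k: int = 1) -> list[str]:
    """Naive keyword-overlap retrieval (real systems use embeddings)."""
    q_words = set(re.findall(r"\w+", question.lower()))
    scored = sorted(
        docs,
        key=lambda d: -len(q_words & set(re.findall(r"\w+", d.lower()))),
    )
    return scored[:top_k]

def build_prompt(question: str) -> str:
    """Prepend the most relevant capability doc to the user's question."""
    context = "\n".join(retrieve(question, CAPABILITY_DOCS))
    return f"Use this documentation:\n{context}\n\nQuestion: {question}"

prompt = build_prompt("How do I enable artifacts?")
```

The augmented prompt would then be sent to the model, which can now answer a question about its own interface without any additional training.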
Speaker 2:Yep, yeah, on the Boeing note. Now, I didn't pay attention to that before, but I guess with all the recent news about the Boeing planes, I'm a little more aware. But I see your point, and I want to quickly add, I know listeners have heard me talk about Midjourney. It's my preferred AI-powered image generator, but right now they're rolling out, it's Midjourney Alpha, I think, is what they're calling it, to select users. Maybe it's only paid. It's a completely different interface. It is so easy to use. That would always be my caveat to people: oh, you have to use Discord, and it's a little clunky if you've never used Discord before. It's nothing like that now, and I assume that's the route they're going as well. So I think you're spot on.
Speaker 1:Yeah, I think that's an opportunity for differentiation in a mature market. You know, in the context of Airbus and Boeing, you know like Airbus can certainly claim for a more secure environment and Boeing can say they give you more fresh air.
Speaker 2:Oh gosh, Amith, I don't want fresh air on my flights, not that way, anyway. Moving on to topic two: AI scams. This should be a fun one. We all know AI is increasingly being leveraged by scammers to create more sophisticated and convincing scams, posing a huge threat to individuals and financial institutions. We're going to run through a quick list, which you're all probably aware of, of some major scams that we're seeing and how AI is being used in them. First is voice cloning. This uses AI to replicate someone's voice with high accuracy, and scammers can use this tech to impersonate relatives or friends, often fabricating emergency situations to solicit money or sensitive information. As a side note, this happened to the grandparents of one of my friends from college, and I'm not sure if his voice was cloned or not, but the grandparents claim that they heard his voice on the phone. It was all an elaborate scam, and that was several years ago, so I'm sure the technology has improved a lot since then.
Speaker 2:Next, CEO scams, which we've also experienced across Blue Cypress. In these kinds of scams, scammers impersonate a company's CEO or other high-ranking official to trick employees into transferring money or sharing confidential info, and AI enhances these scams by generating more convincing emails or messages that mimic your CEO's or high-ranking official's writing style and tone. Phishing scams aim to steal sensitive info by pretending to be a trustworthy source; we're all familiar with these. AI can improve the effectiveness of these scams by personalizing messages and mimicking the language and style of legitimate companies, making phishing attempts harder to detect. Deepfakes have been a focus this year, for sure, and in previous years; these use AI to create realistic but fake videos or images of individuals, used to spread false information, blackmail or facilitate other scams. And then malware: AI can be used to develop more sophisticated malware that can evade detection and steal passwords or other sensitive information, and it can be more closely disguised as legitimate software, tricking users into downloading it.
Speaker 2:So the reason that we wanted to have this topic on today's podcast was actually because of an article you shared with me, Amith, from the Wall Street Journal that was highlighting some recent AI scams. One was about a man named Joey Rosati, a small cryptocurrency firm owner, who received a call about missed jury duty. He was advised to report to a local police station, and then, while actually on the way there, he was prompted to wire funds to cover the fine for missing jury duty, and that is what set off his alarm bells; he realized it was a scam at that point. But I think that one is interesting just because this is an educated person we're speaking about, who works with cryptocurrency and is obviously aware of the scams that are out there and the technology that can be used, so I think that is a scary one for sure. And then, within the past few months, we covered a really high-profile case in Hong Kong where an employee wired out $25 million after a deepfake call with their CFO.
Speaker 2:But this is not all bad news. Financial institutions are fighting back using AI. They're using AI to monitor how you enter your credentials, which I didn't know about: which hand you use to swipe, shifts in typing cadence, even whether your voice verification is too perfect. Which leads me back to a point that we've made many times on this pod, which is that the only thing that can fight bad AI is good AI. So, Amith, I'm sure it is, but is this an area of AI that you are keeping a close watch on?
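[Editor's note: for the technically curious, the typing-cadence signal mentioned above is a form of behavioral biometrics. Here is a deliberately minimal sketch of the idea; the numbers and the `cadence_outlier` helper are invented for illustration, and real fraud systems use far richer models than a single z-score.]

```python
from statistics import mean, stdev

def cadence_outlier(baseline_ms, session_ms, threshold=3.0):
    """Flag a session whose average inter-keystroke interval
    deviates sharply from the user's historical baseline."""
    mu, sigma = mean(baseline_ms), stdev(baseline_ms)
    z = abs(mean(session_ms) - mu) / sigma
    return z > threshold

# A user who normally types with ~120 ms between keystrokes:
baseline = [110, 125, 118, 130, 122, 115, 128, 119]
assert not cadence_outlier(baseline, [121, 117, 126, 124])  # consistent with baseline
assert cadence_outlier(baseline, [320, 340, 310, 335])      # suspiciously different
```

The point is simply that the rhythm of how you type is a signal the legitimate user produces for free, and an impostor (human or bot) has to guess.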
Speaker 1:Well, yes and no. I mean, on the one hand, every time I see a new technology come out, whether it's video generation or an improved language model, I get excited about all the great things that we can do with it within our organizations, how we can help our clients and the association and nonprofit sector take full advantage of this technology. I get pumped about it. And then I turn around and think, okay, well, how can scammers use this stuff? How can people with bad intentions use this stuff? The frontier companies like Anthropic and others are trying to do things to prevent their models from being used the wrong way, but there's so much technology in the wild, there's so much open source stuff, and the open source stuff is nearly as good as the closed source stuff. So people are going to be able to do a lot of terrible things. So, yeah, I keep my eyes on it, both in terms of reading what's going on and just thinking about how a bad actor could go about taking advantage of people with these technologies. And I think your point about good AI fighting bad AI is critical, because the bad guys aren't going to stop using AI or anything else to advance their agenda, and so we have to use AI to detect these things and to try to stop them. So to me, it's a critically important topic. It's something people have got to get aware of, especially people who have their heads in the sand a little bit about AI, thinking, well, we'll deal with it in a year or two, we'll wait to see if it gets mature. Even if you might not be thinking about deploying it in your own organization immediately, for various reasons, you have to realize that people around you are using it for all sorts of potential harms. So I think it's a super important area to stay very aware of, and thoughtful about too, every time you interact with people.
Speaker 1:Like, think about where your weak spots would be. If your business process for transferring money, let's say, includes phone calls, and you think that a phone call is a great way of verifying that the request you got over email or through a banking system was correct, maybe it is for now, but it's very likely that it's going to become about as reliable as email, which is easily hacked; voice is basically as hackable now. So what would be the solution there if you do heavily rely on phone calls? Well, as you pointed out, AI detecting real versus synthetic voices I think is a thing that's going to be helpful. I know people are working on that.
Speaker 1:I also think that there's another technology in our tool belt that could be quite powerful in the fight against bad AI, which is blockchain: using blockchain to authenticate and certify the provenance of content that you're receiving. So, when you're talking about, let's say, wire transfers, if you're doing your wire transfer transactions on chain, as they would say in the whole Web3 blockchain world, you have an immutable transactional history on the blockchain of all the prior transactions. So it's way, way less likely that you're going to have a fraudulent request appear in that kind of a context than the way people do a lot of transactions now. But as far as just day-to-day interaction with people, I remember one of the things you were alluding to was that we had an intern on our team who received a text message that was supposedly from the CEO of Blue Cypress, asking them, I think, to buy Amazon gift cards or Apple gift cards or something, if
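[Editor's note: to make the tamper-evidence property concrete, here is a toy hash chain in Python. It is not a real blockchain (no consensus, no distributed ledger), just an illustration of why altering a past transaction invalidates every record after it; the record fields are invented for the example.]

```python
import hashlib
import json

def block_hash(record, prev_hash):
    """Hash a record together with the previous block's hash."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def append(chain, record):
    """Append a record, linking it to the block before it."""
    prev = chain[-1]["hash"] if chain else "genesis"
    chain.append({"record": record, "prev": prev,
                  "hash": block_hash(record, prev)})

def verify(chain):
    """Recompute every link; any edited record breaks the chain."""
    prev = "genesis"
    for block in chain:
        if block["prev"] != prev or block["hash"] != block_hash(block["record"], prev):
            return False
        prev = block["hash"]
    return True

ledger = []
append(ledger, {"wire": 5000, "to": "vendor-a"})
append(ledger, {"wire": 1200, "to": "vendor-b"})
assert verify(ledger)

ledger[0]["record"]["wire"] = 50000  # a scammer alters history...
assert not verify(ledger)            # ...and verification immediately fails
```

Each block's hash depends on everything before it, which is the "immutable history" property being described: a fraudulent request can't be quietly inserted into or edited within the record.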
Speaker 1:I recall correctly, and that happened maybe a year ago, and the person almost did what was requested, and then kind of sensed that there was something off about the request. Stuff like that, I think, is ripe for people to exploit. So I think the first thing is awareness and education. Right, I keep coming back to that, whether it's offense in AI or, in this case, defense in the world of AI. Being aware of what's going on, being aware of what's possible, being aware that, oh well, synthetic voice is really, really good now. Whereas five years ago, if we said AI-generated voice was a thing, it was just terrible; it sounded like computer-generated voice. Maybe your brain is still telling you, hey, synthetic voice is super easy to detect, because your knowledge of the state of the art is several years old and you also have one of those biases of, oh yeah, I already checked that out, it's not good.
Speaker 1:You've got to refresh, right? You've got to refresh as frequently as the technology is refreshing, or you're out of date. And because the doubling is happening every six months, the quality is improving at this crazy exponential pace. So what you assumed to be safe areas, in terms of what couldn't be hacked or mimicked or scammed a year or two ago, are no longer safe assumptions. So, to me, education and awareness is step number one. The world is filled with people who have no awareness at all about what's going on, and I worry a lot for those folks, like what's going to happen to them in this environment. I think the best thing you can do for not only your company and your association, but also your family and your friends, is to talk about this stuff and try to spread awareness of it. That, to me, is defense wall number one.
Speaker 2:Hmm, and again anecdotally for me, on social media it seems like I've seen a lot more of these stories, especially from young people who seemingly, in the past, maybe wouldn't have fallen for them. Aside from AI, and looking a few years ahead even with the education piece, is there a way to protect your business and individuals from these AI scams without an AI detector?
Speaker 1:Well, here's something we're doing at Blue Cypress that I think could be a useful technique for a lot of other folks, and I think will be relatively safe for some time: whenever you get together with your team on a recurring basis, let's say you meet up once a quarter in person. In those quarterly in-person meetings, where you're not being intermediated by technology, not on Zoom, it's not really hackable, right? I mean, theoretically someone could plant a physical microphone in your room, and if people are going to go to those lengths, it's probably a state actor or someone who really is motivated to get to your stuff, and you probably don't have much of a chance against that. But in most normal circumstances, if you're in a conference room at a hotel, or if you're at your office, meeting in person, face to face, what you can do is just say, hey, let's come up with some passphrases that we're going to use for the next three months, and we're not going to type them into a computer; we're simply going to write them down on a notepad. All of us are going to write them down, we're going to put them in our wallets, and we're just going to know that for the next three months, until the end of the quarter, this particular passphrase or these passphrases are what we're going to use. Maybe come up with two or three of them and use them on different days of the week, or things like that. And you would not use that for every single phone call, like, hey, Mallory, what's the passphrase for this week? That'd be kind of silly. But if you got a call from me saying, hey, I need you to transfer money, or, hey, Mallory, can you give me your password for LinkedIn real quick? I forgot mine, I need to do something to the Sidecar company account. Things that are inherently important, which people have a pretty good sense of.
We would just say, hey, what's the passphrase for today? And we'd pull it out of our wallets or purses or whatever and say, oh, here it is, and it's written down by hand. So that's a thing you can consider: how do you go analog when all this stuff is going digital? How do people kind of go analog to potentially have an additional safeguard? That's a very simplistic thing, but it's pretty powerful. It's pretty effective.
Speaker 1:Another thing that you can do, if you're dealing with banking in particular, or any financial institution, is look to have hardware-based multi-factor authentication, instead of just being reliant upon text messages coming to you when you log in. A lot of people have turned on multi-factor authentication, and hopefully you've done that for all of your most important accounts, and there are usually a couple of options. One of them is text message; sometimes email is offered, which is of basically no value, because email is so easily intercepted. Text messages can easily be intercepted by a motivated hacker as well.
Speaker 1:Then there are software- and hardware-based authentication keys. On your phone, you can have an authenticator app like Google Authenticator, and those will generate codes that are much, much harder to hack. That's a really good solution. Even better than that is a physical key that you plug in as a USB device, and those are things that you can utilize with a lot of banking sites. These are things a lot of people view as kind of annoying, and they kind of are, right? They're extra work we have to do, more hoops we have to jump through, but I think they're worth putting in place for the most sensitive things. Most financial institutions offer these layers of protection, but most consumers don't bother with this stuff, whether for their personal bank accounts or, certainly, for company stuff. Then there are simple things too, like using a password manager rather than using the same password for every single website, which tons of people still do.
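[Editor's note: the codes an authenticator app displays are time-based one-time passwords (TOTP, standardized in RFC 6238). Here is a minimal sketch using only the Python standard library, checked against a published RFC test vector; production systems should use a vetted library rather than hand-rolled crypto code.]

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, step=30, digits=6):
    """Derive a time-based one-time password from a shared secret,
    as authenticator apps do: HMAC the current 30-second window
    counter, then dynamically truncate to a short decimal code."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if at is None else at) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: secret "12345678901234567890" (base32 below),
# Unix time 59, SHA-1, 8 digits -> "94287082".
SECRET = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
assert totp(SECRET, at=59, digits=8) == "94287082"
# Phone and server independently compute the same code within a window:
assert totp(SECRET, at=0) == totp(SECRET, at=29)
```

The security property is that the secret never travels over SMS or email after the initial enrollment; both sides derive matching codes from the clock, so intercepting one code buys an attacker at most a 30-second window.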
Speaker 2:Oh, they have the same password, probably the same password they've used since college, or maybe one character is different. Right, and it's kind of silly how bad it is. It makes me think of the example that you've talked about, where someone asked a room full of people how many of them still send physical magazines out, and essentially no one raised their hands, and they were like, okay, maybe that's an avenue you should be tapping into. Maybe we do need to kind of reverse the clock a little bit and think about ways to do things in person or on paper.
Speaker 2:I think very soon we will not be able to detect deepfakes with the naked eye. I mean, honestly, it's probably already there. I will say, I don't know if you follow the WNBA at all, Amith, but Caitlin Clark is, you know, a rising star, or has already risen, pretty much. And I watched a video on social media of her post-game press conference, and it was a deepfake. It was a joke, it was supposed to be funny, but I'm not going to lie, for about the first 20 seconds I was like, oh, this is really her post-game interview, and then, once she started saying ridiculous things, I realized, oh, this is a deepfake. So I myself am already falling victim to it. So I think having these strategies in your back pocket, like the passphrase, and of course encrypted password vaults, I feel like that one's just a no-brainer, but it's really important.
Speaker 1:Well, if you just look at the stats on how many people use password managers as an example, it's a tiny fraction of the total user population that's out there. As an organization, you can mandate it like we do here, and I think it's a cost, but it's a fairly trivial cost compared to the downside risk. But, yeah, your point is well taken. I think you know whatever experiences you're having out there like at least having some skepticism over whether they're real or not, is important.
Speaker 1:You know, we've been trained to think that things in the media, if it's on TV or if it's in print, are legitimate, right? Because we've been in a world where media resources have been somewhat scarce. You only got your content from a handful of outlets, whether live, broadcast or in print, and so there's this preconception people have that things that come from media outlets are legitimate. And yet these days anybody can put something out there; that's been the case for years, right? Anybody can write whatever they want, and now these tools are exploding. So I think people have to have some level of skepticism, honestly. It's healthy to have some degree of it.
Speaker 2:The last topic we want to cover today is Runway's Gen 3 Alpha. Runway introduced Gen 3 Alpha, its new model for high-fidelity, controllable video generation, and I want to share some key points about its capabilities and improvements. It offers improvements in fidelity, consistency and motion compared to its predecessor, Gen 2. It can generate highly detailed and realistic videos based on text prompts or images, and the model excels at creating expressive human characters with a wide range of actions, gestures and emotions. It's the first in a series of models trained on Runway's new infrastructure for large-scale multimodal training. It has been trained jointly on videos and images, allowing it to power various tools like text-to-video, image-to-video and text-to-image. It can be used for a wide range of creative applications, from generating realistic human characters to creating surreal and fantastical scenes, and it interprets a wide range of styles and cinematic terminology, making it versatile for various artistic needs. Runway is also collaborating with leading entertainment and media organizations to create custom versions of Gen 3, which I think is interesting, and the customized models will allow for more stylistically controlled and consistent characters, targeting specific artistic and narrative requirements.
Speaker 2:It was just released for use a few days ago, maybe even as recently as yesterday. So I hurried and created an account and barely played with it, maybe five to 10 minutes. I will say their demo videos were mind-blowing. I mean, again, I couldn't tell that this was AI-generated, which is, I think, a difference from models I've seen in the past, and they really created some beautiful landscapes, beautiful images of people riding on trains. I was very impressed. So, Amith, I know we talk a lot about video on this podcast. Why do you think this new model might be of interest to our association listeners?
Speaker 1:I mean, to me, ultimately, it's another way of communicating. It's another way of conveying ideas, to be expressive, to be creative, to go beyond where we've gone in the past in how we communicate and how we interrelate with one another. And for associations who are trying to advance a cause, trying to educate people, trying to connect people, it's a big opportunity over time. Video has historically been extremely expensive, extremely difficult; it takes a long, long time to produce a tiny amount of it. And with video's potential exploding with AI, everyone will be able to use video, as well as music, audio, imagery and rich text, as tools to communicate more effectively. So I think, on that side of the house, it's super exciting.
Speaker 1:I think associations could conceivably create video this way for all sorts of things that they do. So there's the marketing side of it, where you're telling stories all the time and you're trying to get people to take actions of one sort or another, whether it's signing up for an event or renewing their membership. There's also the opportunity to leverage this type of video generation in the educational world, where you can say, hey, let's generate lots of different videos for this particular educational lesson based on a variety of factors. Right, it's deeply personalized video generation, down to the individual student, and maybe even doing that in real time, not necessarily storing a bunch of versions of videos, but being able to do it in real time. It's kind of like the avatar seated with the tutor, essentially, like the Khanmigo-style tutor, and that's a little bit different than what Runway's doing here.
Speaker 1:Runway's doing super high-end video that's cinematic in quality and has applications outside of what I'm describing. But I just think it opens up another potential tool, and it democratizes access, because once you digitize something like this and you make it this good and this readily available, demand will explode for how we use it in a lot of different places that wouldn't have ever touched video in the past. It's interesting, on the subject of video, I'm listening to a podcast right now on the NFL; it's the Acquired podcast.
Speaker 2:I don't know if you've ever come across those guys, mallory, yeah, and I think you've mentioned them before, Yep.
Speaker 1:I'm a big fan of theirs. They have a lot of really great episodes. What I love about their podcast is that it's deeply researched content. It's basically business histories of different companies, and the two hosts are very entertaining. They're very knowledgeable guys, and they have a tech spin on things, but they cover a lot of companies outside of technology. And this episode on the NFL, I think they did it about a year ago, goes through the history of the NFL, and that by itself is interesting as a football fan. I'm not actually done with this particular episode, but they're at the point where they're talking about NFL Films and the history of NFL Films.
Speaker 1:And you know, when TV first came onto the scene, that completely changed the business of football, because the business of football for a long time was getting people to the stadium, getting people to take a seat, pay for tickets, buy food, enjoy the game in person. Radio was something that kind of extended the reach of sports, but TV brought an audience into the mix that was not previously part of the economics. Of course, it did have a potentially cannibalistic aspect to it, because if people could easily watch the game, would they still go to the game? And, of course, there were initially all these rules about, well, actually, you can't air the game in your own hometown, and that obviously has changed a lot over the years.
Speaker 1:The reason I bring all this up, though, is that they had the broadcasting of the games fairly early on. Then, over time, I think this was the early 60s, if I recall correctly, they were auctioning off the rights to create a film of the championship game. This is pre-Super Bowl. And one year, one individual guy decided to bid on the rights to produce the film of the NFL championship game, and he won the competition and produced this unbelievable, cinematic film of the championship game. It was a storytelling experience that had multiple angles, panning and zooming, and all these other things that no one was used to seeing, and it turned into this big opportunity. That small business this guy created then got acquired by the NFL and turned into NFL Films, which became a big, big part of how they tell the story of football. There's a drama to it, kind of a narrative arc to things. In football it's not just, oh, here's the game, here's the stats. That's one of the reasons the NFL has been so successful.
Speaker 1:In any event, I think about that because that's like such a massively scarce resource, so incredibly hard, so incredibly expensive, and now you fast forward and say, hey, anyone can create cinematic quality video really with just typing in a few sentences. It's pretty amazing what that could be. What are your thoughts on it?
Speaker 2:I mean, I think what's really interesting is this idea of storytelling, which is something, as a sneak peek, that we're talking about in the second edition of Ascend. I am an actor, which I've mentioned on this podcast, and I think stories are incredibly powerful for evoking feeling and emotion. I think it's really interesting to hear that story about the NFL, because I wasn't aware of it. But I wonder, for listeners who maybe work for medical or scientific associations, who are struggling to envision a path where storytelling has a place when they need really highly accurate content, whether they can see an intersection between storytelling, which at times maybe seems frivolous or emotional or soft, and something like science, which is more hard and fact-based.
Speaker 1:Yeah, I think it's an interesting point, and I think some organizations will more naturally gravitate towards this type of technology than others, and even organizations that operate in a realm, as you described, where they may not see this as a primary medium of communication for their core content.
Speaker 1:From an educational perspective they may not use it, but they might see applications for marketing where it can help.
Speaker 1:They can use it to engage people kind of around the edges, to bring people to events, to do a variety of things that aren't necessarily about the delivery of content itself, but perhaps are, you know, additional to it.
Speaker 1:But I think what people will find is, no matter how scientifically and quantitatively rigorous you are, biologically we're all wired in a similar way, which is tribal and based on the storytelling tradition, and that's been going on orally since the beginning of our species, basically, and in writing for a long time as well. We know that we are attuned to stories, right? We tune into them in a different way than just facts and data. And so if you can use storytelling along with different modalities that are really engaging and rich, like video, like music, like audio, it's a new opportunity, a new way to get people to engage with content. They might remember it better if it's delivered that way, even in deeply quantitative fields. I have no idea what the applications of this will be in those kinds of disciplines, but I find it really exciting just to think about.
Speaker 2:Another thing, too, with the NFL story that I really appreciate is that they were worried that broadcasting the games would hurt in-game attendance, and I don't have the stats on this, but I would say NFL games are really not hurting for attendance. So I think it's exciting to see that this new digital medium came about, and yet we're still seeing humans seeking out these in-person experiences.
Speaker 1:For sure. Well, and think about distribution. You think about any product or service, whether it's a football game or if it's delivery of content or education. Distribution how you get that product into the hands or into the minds of the end consumer. That's the distribution challenge we're talking about, and distribution has changed a lot over time.
Speaker 1:You know different mediums of communication, like television, radio or, even before that, the telegram changed the ability to speed, the cost, the dynamics of the distribution of a particular class of product or service. So with, like the telegram, it was a very limited amount of information. It didn't really convey anything beyond just, I think, a couple of sentences. Essentially it was very limited. Then you moved on to telephone, and then you had radio, and then you had TV, and then you had obviously the internet come online and the richness of what you can deliver essentially in an instant, at no cost or marginally no cost, continues to evolve. And so what you're seeing, trend wise, is more and more product, services, experiences get digitized in some way, or at least partially digitized, and every time that happens people freak out. You know when music became digital right, when videos started streaming, and the same thing is true in every field. What's happening now is both a distribution shift with AI, but also it's capability shift, because what we're saying essentially is AI models that are trained on content. Of course, they can distribute the content more effectively than static websites. It's a distribution improvement, but it's also improving quality, because the content becomes interactive. You can do more with it, right. So you're actually changing kind of the level of fidelity, if you will, or the level of resolution, in a very dramatic way.
Speaker 1:Every single time that happens, there are people who try to put up walls to stop the technology from advancing, saying, no, we're going to protect our walled garden, we're going to make sure our stuff doesn't get into that world of crazy distribution.
Speaker 1:And every single time that fails over time not necessarily immediately, but over time it fails and it's not because the protected asset wasn't really valuable, but it's because people want what they want and the demand grows. So much so that even if you protect that initial walled garden and you had 100% market share in that world to begin with, if the market grows by 100x, all of a sudden you have 1% market share and maybe you completely dominate that 1% market share, but there's 99x the number of people out there who might want access to your products and services that haven't had them through the traditional distribution means, and so these shifts in technology and distribution enable much greater demand, because, as cost goes down, demand has essentially been pretty much proven to be insatiable, and you know, we see that over and over. That's why the economy is growing at an exponential rate, and it's you know it's very correlated to the technology shifts we've been experiencing.
Speaker 2:So in many ways, history is repeating itself. Maybe in a unique way with AI, but we've seen this before.
Speaker 1:Yep, we've watched this movie before.
Speaker 2:Well, to all those listeners who are still with us, thank you for joining us on this episode. If you do get to try out Claude 3.5 Sonnet or even Runway, feel free to shoot us a message. You can click the link if you're joining from your mobile phone and send us a text message directly from there. We would love to hear what you think, and let us know if you have any questions or topic ideas for future episodes. Amith, I'll see you next week.
Speaker 1:Thanks a lot. Thanks for tuning into Sidecar Sync this week. Looking to dive deeper? Download your free copy of our new book, Ascend: Unlocking the Power of AI for Associations, at ascendbook.org. It's packed with insights to power your association's journey with AI. And remember, Sidecar is here with more resources, from webinars to boot camps, to help you stay ahead in the association world. We'll catch you in the next episode. Until then, keep learning, keep growing and keep disrupting.