Sidecar Sync
Welcome to Sidecar Sync: Your Weekly Dose of Innovation for Associations. Hosted by Amith Nagarajan and Mallory Mejias, this podcast is your definitive source for the latest news, insights, and trends in the association world with a special emphasis on Artificial Intelligence (AI) and its pivotal role in shaping the future. Each week, we delve into the most pressing topics, spotlighting the transformative role of emerging technologies and their profound impact on associations. With a commitment to cutting through the noise, Sidecar Sync offers listeners clear, informed discussions, expert perspectives, and a deep dive into the challenges and opportunities facing associations today. Whether you're an association professional, tech enthusiast, or just keen on staying updated, Sidecar Sync ensures you're always ahead of the curve. Join us for enlightening conversations and a fresh take on the ever-evolving world of associations.
AI Grannies Fight Back & Anthropic’s Model Context Protocol | 59
In this post-Thanksgiving episode, Amith and Mallory dive into groundbreaking AI innovations shaping the association landscape. Discover the story behind Daisy, a clever chatbot designed to thwart phone scammers, and its implications for cybersecurity. The hosts also unpack Anthropic’s Model Context Protocol (MCP), an open-source framework poised to redefine how AI interacts with diverse data sources. With a mix of practical advice and forward-thinking insights, this episode is a must-listen for anyone navigating the intersection of AI and associations.
🔎 Check out the NEW Sidecar Learning Hub:
https://learn.sidecarglobal.com/home
📕 Download ‘Ascend 2nd Edition: Unlocking the Power of AI for Associations’ for FREE
https://sidecarglobal.com/ai
🛠 AI Tools and Resources Mentioned in This Episode:
ChatGPT ➡ https://openai.com
ElevenLabs ➡ https://elevenlabs.io
HeyGen ➡ https://www.heygen.com
Anthropic’s Claude ➡ https://www.anthropic.com
Chapters:
00:00 - Welcome to the Post-Thanksgiving Episode
03:59 - Behind the Scenes: Creating an AI Co-Host
11:40 - Cybersecurity Meets AI: Fighting Scams with Daisy
15:42 - How AI Could Transform Telecom and Beyond
22:44 - Deep Dive into Anthropic’s Model Context Protocol
28:12 - Implications of MCP for Associations
38:40 - Questions for Tech Leaders About AI Standards
42:16 - Closing Thoughts
🚀 Follow Sidecar on LinkedIn
https://linkedin.com/sidecar-global
👍 Please Like & Subscribe!
https://twitter.com/sidecarglobal
https://www.youtube.com/@SidecarSync
https://sidecarglobal.com
More about Your Hosts:
Amith Nagarajan is the Chairman of Blue Cypress 🔗 https://BlueCypress.io, a family of purpose-driven companies and proud practitioners of Conscious Capitalism. The Blue Cypress companies focus on helping associations, non-profits, and other purpose-driven organizations achieve long-term success. Amith is also an active early-stage investor in B2B SaaS companies. He’s had the good fortune of nearly three decades of success as an entrepreneur and enjoys helping others in their journey.
📣 Follow Amith on LinkedIn:
https://linkedin.com/amithnagarajan
Mallory Mejias is the Manager at Sidecar, and she's passionate about creating opportunities for association professionals to learn, grow, and better serve their members using artificial intelligence. She enjoys blending creativity and innovation to produce fresh, meaningful content for the association space.
📣 Follow Mallory on Linkedin:
https://linkedin.com/mallorymejias
Even if AI doesn't get any better right after today, we'll still see, I don't know, 10 years of progress and innovation being built on top of that.
Amith:Welcome to Sidecar Sync, your weekly dose of innovation. If you're looking for the latest news, insights and developments in the association world, especially those driven by artificial intelligence, you're in the right place. We cut through the noise to bring you the most relevant updates, with a keen focus on how AI and other emerging technologies are shaping the future. No fluff, just facts and informed discussions. I'm Amith Nagarajan, Chairman of Blue Cypress, and I'm your host. Welcome to the Sidecar Sync. This is the post-Thanksgiving edition, where we'll be featuring a lot of great content at the intersection of AI and associations, and probably some sniffles and coughs along the way as well. My name is Amith Nagarajan.
Mallory:And my name is Mallory Mejias.
Amith:And we are your hosts. Before we get into exciting content at that intersection of all things associations plus AI, let's take a minute to hear from our sponsor.
Ad:Introducing the newly revamped AI Learning Hub, your comprehensive library of self-paced courses designed specifically for association professionals. We've just updated all our content with fresh material covering everything from AI prompting and marketing to events, education, data strategy, AI agents and more. Through the Learning Hub, you can earn your Association AI Professional Certification, recognizing your expertise in applying AI specifically to association challenges and operations. Connect with AI experts during weekly office hours and join a growing community of association professionals who are transforming their organizations through AI. Sign up as an individual or get unlimited access for your entire team at one flat rate. Start your AI journey today at learn.sidecarglobal.com.
Mallory:Amith, how was your Thanksgiving holiday?
Amith:Had a great time. A lot of people, and, you know, it's kind of the obligatory post-Thanksgiving cold season I think that we're getting into, at least in my household. My wife's family was in town. We had, I don't know, 30 people or something. I lost count, and somewhere along the way the cold virus seemed to find its way to at least me and my kids and my wife as well. But it was great. We had a really good time. It was fun. How about you?
Mallory:I had a good time. It was a little bit of a different Thanksgiving for me, as I'm no longer in Louisiana, where my family is, as many of you know. So we did kind of a small Friendsgiving with some people that we've met here in Atlanta, and it was a really good time. I made a squash recipe that I found on social media that was pretty tasty, but kind of a shot in the dark, as I had never made it before. So that was a hit. And I somehow emerged from the holiday with no cough, no sniffles, but I didn't see 30 people, so that might have been the key there.
Amith:I thought you were going to say a squash recipe that AI generated for you.
Mallory:Nope. Well, I suppose AI could have generated it and I didn't know, but it was from an account I saw on Instagram that did a really good job. It had dates on it, sunflower seeds; it looked really good and it was tasty in the end, I will say. Well, that's awesome. Glad you had a good Thanksgiving.
Amith:I did. It's great to be back. I think this last stretch of the year is always an interesting one. It tends to be pretty busy in the world of Sidecar, in the world of Blue Cypress. We have a lot of associations working with us in a lot of different ways, trying to wrap up projects before the end of the year or, in some cases, get projects started, get things signed and ready to rock and roll for January 1st. So there's a lot of activity happening all over the place, across our family of companies, and I'm particularly excited about all the great work we're doing with Sidecar to expand the content in the Learning Hub as well. Lots of cool stuff there.
Mallory:And I've got to ask you, Amith, about the lovely experimental episode that we ran last week. Hopefully all of you had a chance to check out Wilson, our honorary Sidecar Sync AI host. Amith, can you share a little bit, kind of an overview, of how you did that? Because you did it fairly quickly, I think, and I was impressed.
Amith:I spent less than an hour on it. So actually, I guess the way I feel right now is the residual of how I woke up and felt that morning, the Wednesday before Thanksgiving. We had a bunch of people coming into town early and I felt like complete garbage that morning. So I got up and I'm like, there's no way I'm doing an hour-long recording session with you on this. So I said, what if we do something different? And you know, I've heard a lot of other podcasts that have AI co-hosts and stuff like that. At least in kind of the nerdy little world that I live in, I listen to a lot of AI podcasts, and a lot of other podcast hosts in this world are using AI co-hosts or using AI in different ways. So I'm like, well, why don't we do something? We've only featured little bits and pieces here and there, things like NotebookLM segments and stuff like that, more to illustrate the tool. What if we did an AI podcast episode that was completely generated? So here's what we did. I basically started off in my good friend ChatGPT's environment and talked about what we wanted to build: well, we're going to have an AI-driven podcast episode.
Amith:So what if we talked about audio? And so that was, kind of thematically, the first choice. Then I had the ChatGPT tool build a script which I thought was pretty good even at first shot. I took that in and edited a little bit in a Word editor. Then I said, okay, now what we're going to do is create the audio from this.
Amith:So my favorite audio-only tool is ElevenLabs. I think they just have the most diversity of voices, the most natural-sounding AI generation of voices. There's a lot of tools out there; that's the one I prefer. It's called ElevenLabs, just spell out the word eleven: elevenlabs.com. We'll include a link in the show notes. They have free plans and inexpensive plans, so I use that tool all the time. And so I basically created a text-to-speech thing, so I wasn't speaking ever, because I could barely speak last Wednesday. I just took the text transcript from ChatGPT, put it into ElevenLabs, and it created a pretty great-sounding audio file for me.
Amith:Now I noticed a few things that weren't quite what I wanted. One of the things I had to learn to do was to modify the transcript a little bit to include more spacing in certain areas. It turns out that, at least with ElevenLabs, if you put an ellipsis or extra commas in your text, it'll respect that and space out the audio a little bit. So that was quite nice to learn about. So then we had basically the makings of the audio side of the pod. But those of you that are frequent listeners or viewers of the Sidecar Sync know that we always publish on YouTube simultaneously with the audio format on all the pod platforms. And so I said, well, we could just put up a screenshot of the two of us or something else. But what if we did something a little bit better?
Amith:And we've talked on this podcast a few times about video generation and this tool called HeyGen, H-E-Y-G-E-N. This is a company that's really done some great work with generation of videos using AI avatars as well as using essentially cloned avatars, where you take a video of yourself and you can create a new video by typing in text, and your avatar will say stuff. We've experimented with that, and I think we've even shown some of that on the pod in the past. Well, I said, what if we took one of the avatars they have? They have a lot of actors that they film, who get royalties, by the way, from HeyGen when their avatar is used. What if we took one of those and created the video version? So we did. We selected a fun voice that is locally appropriate for the southern part of the United States for the audio, and then we combined that with an avatar that we thought perhaps might link up to that voice, and that's how we had that particular pod put together. So for those of you who have not seen last week's pod, I would encourage you to go to YouTube and check it out.
Amith:It's a pretty interesting episode and it just showcases how real this stuff looks, because I showed it to quite a few people and a lot of people didn't realize it was AI. They're like who's this other guy on your team? I'm like no, that's not a person, that's an AI. Of course, wilson says that he's an AI, because we like to disclose that. That's part of ethical AI in our minds is that you always tell people when you're using it.
Amith:But in any event, yeah, that's how it came together. It took me about an hour in total to do the whole thing, which is the same amount of time we typically spend recording the pod, at least on my end. I know you do a lot of other work in preparing and doing things afterwards, but I was able to do that without actually using my voice, which was quite helpful that day. If I were to do all of that again, I'd probably be able to get it done in 15 minutes, though, because at least 45 minutes of that time was me stumbling around trying to figure out how to get ElevenLabs to space out the audio and trying to get HeyGen to generate the video the way I wanted. But that's the series of tools: a mixture of ChatGPT for the actual idea and transcript, ElevenLabs for the audio, and HeyGen for the video.
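The spacing trick Amith describes, adding ellipses or extra commas so ElevenLabs holds a longer pause, amounts to simple text preprocessing before you submit the script. A minimal sketch (the pause marker and where it is inserted are assumptions to tune by ear, not ElevenLabs-documented behavior):

```python
def add_pauses(script: str, marker: str = " ...") -> str:
    """Append a pause marker after sentence-ending punctuation so that
    TTS engines that respect ellipses leave a longer gap between sentences."""
    out = []
    for ch in script:
        out.append(ch)
        if ch in ".!?":  # end of a sentence: stretch the silence here
            out.append(marker)
    return "".join(out)

print(add_pauses("Welcome back. Today we talk about AI avatars!"))
```

You would run the full ChatGPT-generated transcript through something like this once, then paste the result into the TTS tool and listen for whether the pacing feels natural.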
Mallory:You heard it here first, and I, as I said, was impressed. And I'm fully aware that you can do all of these things with HeyGen and with ElevenLabs, but seeing it strung together in that way left me impressed for sure. I told you, Amith, to me the voice sounded like a mix of Santa Claus and Morgan Freeman. So for everyone who hasn't checked it out, go listen and see if you agree with me; that was my perspective. I also want to ask you one question. I think you mentioned this to me over Teams, Amith, that you cranked up the personality on ElevenLabs. Is that something that you're able to do? Because the speech did have kind of some spunk to it, which I enjoyed.
Amith:Well, some of that is based on the voice actor that you select, the one the voice is based on. But there is a setting where you can choose; I think it's called an exaggeration setting or something like that. So I cranked that up a little bit just to make it a little more fun-sounding. So I think the tools did a good job.
Amith:You know, mallory, I think the reflection I had after putting that together is so much of the great unknown with AI isn't really so much about the model capabilities. Of course that's interesting when we talk about new models or advances in the fundamentals of the technology. Those frontiers are exciting. But just taking the stuff we've already got and reassembling it in different ways, using your creativity and coming up with new applications these use cases are not obvious, even when we know these tools really well. And sometimes necessity drives invention right.
Amith:So you end up in a situation where you can't use your voice, and you're like, well, how can I communicate something of value to our audience for that particular week? And an idea is born from that. So I think that's where so much excitement, in my mind, is appropriate for this time of year in 2024, as we look ahead and think about what the association community and the nonprofit community will do in 2025 with the tool sets we have available and the greater level of understanding people have. Because who would have come up with an idea like that if they hadn't already been playing around with these tools for some time, right? So that's why we always go to people and say: look, your first step with AI isn't to try to, you know, split the atom and solve all the problems of the world. Your first job with AI is to learn a lot about it and to use the tools in the obvious ways, which will hopefully then lead to things that are initially non-obvious.
Mallory:But when there is need, there will be innovation, which is a good segue to our first topic of the day: fighting phone scammers with AI. And it's probably not what you were thinking of. Our next topic after that is going to be Anthropic's Model Context Protocol, or MCP. So, starting with topic one: O2 is a British telecommunications company that has introduced an AI-powered chatbot named Daisy to combat phone scammers. Daisy, designed to sound like an elderly woman, serves as a virtual granny, quote-unquote, whose sole purpose is to waste scammers' time and prevent them from targeting real victims. Daisy can engage in autonomous conversations with scammers without any human intervention. The AI listens to callers, transcribes voice to text, generates appropriate responses, and then converts them back to speech in near real time.
Mallory:Daisy has been given a distinct character as a tech-illiterate grandmother, complete with stories about her family and hobbies. O2, the company, has circulated Daisy's phone number on lists known to be used by scammers, and she, the AI chatbot Daisy, employs various strategies to keep scammers on the line, like telling meandering stories about her grandchildren, discussing her love for knitting, feigning confusion about technology and providing false personal information, including made-up bank details. All of this might sound fun or comical, but we have seen an impact from Daisy. She has successfully kept fraudsters on calls for up to 40 minutes at a time and, since her debut in November of this year, last month, 2024, she has had over a thousand conversations with scammers, ultimately preventing them from targeting real, potentially vulnerable individuals. So, Amith, this is a really fun story in the world of AI for good. I want to hear your initial thoughts on it.
Amith:I just love this story. I think it's so much fun and it's so powerful. We talk a lot about how cyber threats are always a cat and mouse game. It's move, countermove, move, countermove. The people who are trying to make attacks happen are always going to use the latest and greatest technologies, techniques, social engineering, and it's up to all of us to try to find ways to prevent that. I think that using AI in this way is innovative. It's creative. I think it's pretty fun.
Amith:I think the idea of the grandma basically baiting the scammer into staying on the line long enough. So, number one, their time is wasted; of course, that's great. But also, the longer they're on the call, the more information you get about them, the more of their voice you record, assuming it's a real voice, and potentially you can backtrace the calls, maybe even find the location of these people and send RoboCop out to go arrest them, right? So that would be the ultimate workflow: a multi-agentic system where you have Daisy on the one hand as one agent, and another agent that can control a police swarm of drones to go arrest someone. Maybe that's a little bit bad in some people's minds, but I love the idea of using AI to catch the bad guys, and to me, this is a fun example of it. If you think about what the future of the scammers making the calls will be like, well, a lot of them are probably already AIs, but there will definitely be AIs on the other side of it. So if one AI is wasting another AI's time, does that really matter? And then the question is: what's the better AI? Does the scammer AI realize it's talking to an AI on the other end somehow, because it's a more sophisticated AI, and then stop the call or do whatever? So there's just so much to this that we can unpack, but I love the concept.
Amith:I love the idea of using AI for good in this way to protect people, particularly people who are likely to be unable to protect themselves. We can talk all we want about how people need to be cyber aware. Across Blue Cypress, all of our team members subscribe to cybersecurity training; we push that really hard. But the reality is, not everyone's thinking about this all the time. And what if there was a virtual assistant that answered all your calls for you and looked at all your texts before you did? That's coming: you're going to have this kind of filtering built into your life day to day to protect you. So I'm excited about that, and I'm assuming that a lot of the major tech companies are going to be working on this, if they aren't already.
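The loop Mallory described, listen, transcribe, generate a persona-driven reply, speak it back, is a standard voice-agent pipeline. A schematic sketch with every component stubbed out (the stub functions, the persona prompt, and the message format are illustrative assumptions, not O2's actual implementation):

```python
# Schematic voice-agent loop in the style of O2's Daisy (all components stubbed).

PERSONA = ("You are Daisy, a chatty, tech-illiterate grandmother. "
           "Ramble about grandchildren and knitting; never give real details.")

def transcribe(audio: bytes) -> str:
    """Stub for speech-to-text; a real system would call an STT model."""
    return audio.decode("utf-8")  # pretend the 'audio' already carries text

def generate_reply(history: list) -> str:
    """Stub for the language model playing the Daisy persona."""
    return "Oh dear, hold on, let me find my glasses... now, which bank was that?"

def synthesize(text: str) -> bytes:
    """Stub for text-to-speech."""
    return text.encode("utf-8")

def handle_turn(history: list, caller_audio: bytes) -> bytes:
    """One listen -> transcribe -> respond -> speak turn of the call."""
    history.append({"role": "caller", "text": transcribe(caller_audio)})
    reply = generate_reply(history)
    history.append({"role": "daisy", "text": reply})
    return synthesize(reply)

call = [{"role": "system", "text": PERSONA}]
audio_out = handle_turn(call, b"Madam, your account has been compromised.")
print(audio_out.decode())
```

In a production system each stub would be a streaming model call, and the latency of the three hops is what makes the "near real time" part hard.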
Mallory:These scams are getting quite sophisticated, as you mentioned. I find myself more and more looking at my emails and text messages and thinking, is this real? And then, you know, quickly discerning that it is not and that I should not take said action. But I find myself questioning more and more often, and I think it's just a testament to how good these scammers are getting, with the assistance of artificial intelligence. So it's certainly lovely to see something like Daisy out there working for good using AI. I'm curious, Amith, from your entrepreneurial background, do you feel like a business like this has legs? Do you think it's just kind of a fun thing, or do you think we'll be seeing more and more companies doing something like this in the future?
Amith:Well, I think for major telecoms like O2, or Verizon and AT&T over here, or some of the technology providers, it does become a viable business. Not that this type of product generates revenue directly, but these capabilities, built into their platform or built into their telecom service, become seen as a necessary level of security and perhaps a differentiator between, let's say, Apple and Android, or a differentiator between AT&T and Verizon.
Amith:So if Verizon has something like Daisy built in that somehow is able to screen your calls and help you, number one, reduce spam calls, which would be great, and then also eliminate the scammer calls, that'd be a pretty valuable feature. And if Verizon was a year or two ahead of AT&T, let's say hypothetically, I think a lot of people would think that's a pretty significant reason to consider switching. Right now, if you look at telephone service, it's basically a commodity, right? Whether you go with AT&T, Verizon or T-Mobile here in the United States as the three major options, a lot of it's just driven by price and packaging, and there's some brand perception stuff, but it's bundling and packaging I'm referring to, combined with price. There's not really any differentiation in the quality of service. I mean, of course the providers will say, oh, we have better 5G at Verizon or whatever, and there's some truth to that. But the reality is it's a commodity, and so this potentially allows you to create a differentiated offering in a commodity landscape. So to me, the number one thing I see here is that I believe this kind of capability will become expected over the next 12 to 24 months, with not only the tech platforms but also, possibly, the telecom providers, maybe internet providers as well.
Amith:From a business perspective, when I think about it, and then specifically in the context of associations, I do think people have to be thoughtful about scammers targeting businesses. It's actually not really an association-specific thing, but think about the number of phishing attacks and the number of scam calls that come into a business. People are targeting grandmas and grandpas, but they're also targeting businesses to try to go after bigger and bigger prizes, right? And so I see this being potentially a very valuable tool in the repertoire of a company: an assistant like this that does filtering of calls and filtering of emails. The same basic use case, essentially. But the broader point is this: presumably, scammers are at least somewhat sophisticated, because their job is to social engineer people when they get them on the phone. That's not necessarily always true, but I say presumably because, for scammers to be economically viable, regardless of their ethics, they have to have some basic capability like that, right? So the point would be, if the AI is good enough to fool them, that's pretty impressive, because these aren't really ordinary, average people. These are people who have had special training on how to scam other people. And so if they are spending 40 minutes on the phone talking to Granny Daisy, who's an AI, that tells you that AI is pretty far along.
Amith:So the broader implication, therefore, is actually what we've spent a lot of time on this pod and at Digital Now talking about, which is the criticality of voice and audio, how the modality of audio and how voice interaction is going to become probably the primary way people interact with AIs going forward, and that's super exciting. That's super exciting to me for a lot of reasons. One is that you know voice works everywhere all the time. Voice is not dependent upon advanced technology in terms of high-end computers. You can do it through an inexpensive phone. You can do it through like a literal telephone if you have something like that.
Amith:I haven't seen one of those in a while, but you know you can do it a lot of different ways. So voice enables the technology to make its way through social and economic barriers, which is really exciting in terms of making this stuff available around the world to everyone, and it also obviously enables agility and speed, because people can do things with their voice that they can't do with typing or clicking a mouse. So to me that's the broader implication, but the cybersecurity specific aspect of it I find fascinating and exciting by itself.
Mallory:It seems like audio is, well, one, a trend that is here to stay, but certainly something that would be beneficial for our audience to continue exploring. Even me, as I said at the top of this podcast: I know what's available with ElevenLabs and still didn't quite realize exactly what you can do with it. Because, Amith, you always make the point that even if AI doesn't get any better right after today, we'll still see, I don't know, 10 years of progress and innovations being built on top of that. So I think, definitely experiment with audio if you have not, and even if you have, like myself. Again, when necessity hits, you might find yourself creating something like a grandma chatbot to fight scammers.
Amith:These models will keep getting better on a continuous basis for at least the next few years, probably indefinitely. Even if they don't get better, though, they will get cheaper and they will get faster because of scale, just by itself. The highest-end real-time audio right now, I still think it's reasonably accurate to say, is within ChatGPT and what they call their advanced voice mode. ElevenLabs is great for audio in the context of creating audio, translations, transcription, stuff like that, but it's not quite as good at the real-time interactive stuff in my experience. But all these models are going to get so much faster, so much cheaper and so much more accessible. So that's a really important part of it too: you'll be able to essentially count on audio AI being as available as air.
Mallory:And you're the one that told me for so long to talk to it, because you can go on a walk, right? I often walk to the co-working space near my house and think, man, I really wish I could be working while I'm walking. You can; you can have a conversation with your app right there. By the time I get to the co-working space, I've already knocked out a whole project. So just your friendly reminder there. Moving on to topic two of the day: Anthropic's Model Context Protocol, or MCP, is an open-source framework introduced by Anthropic in November of this year, last month. It aims to revolutionize how AI systems, including, but not limited to, Claude, interact with various data sources and applications.
Mallory:The model context protocol provides a standardized interface for AI models to connect with diverse data sources, including content repositories, business tools and development environments. This eliminates the need for fragmented custom integrations for each data source, which, of course, simplifies the process of giving AI systems access to the information they need. By enabling direct connections to data sources, mcp allows AI models to produce more relevant and accurate responses, and this improved context awareness leads to better reasoning and overall quality of AI interactions. The protocol is designed to work with all AI systems and data sources, which makes it a versatile tool for a wide range of AI applications, and MCP facilitates the development of more autonomous AI systems by maintaining context as they move between different tools and data sets. So, in theory, right, this makes sense to me. I understand what it's saying, but oftentimes, after hearing you explain something, I think, ah, that makes a lot more sense. So can you explain this model context protocol for our Sidecar Sync audience in your terms? Sure, Well.
Amith:So you mentioned a couple of things that I think are important to highlight. You said open source. Anthropic is the developer of it, but the MCP, or model context protocol, is open source, so it's available for everyone to use and any model developer and any user of a model can use it, which is an important part of it. Are designed to create standards, just like the internet has a lot of standards there's standards for web browsers, there's standards for lower level aspects of the way the internet works. So this is a proposed standard that would become a protocol that, if adopted by enough models, model developers and enough application developers, would make it easy for models to connect to data sources and to each other potentially. So the model context protocol essentially is a secure way of connecting this stuff up. That's generic in nature, so it's not specific to a particular model or a particular database or a particular content source. But in theory, the idea is that a model will be able to more easily connect to any number of different sources of data and then to use that data in whatever processing it's doing. So we often say that without your data, you can certainly benefit from AI. There are many use cases AI can help you with without a single drop of data from various data systems that you have, whether they're structured or unstructured. But to go beyond those basic use cases of what I call consumer-grade AI use cases, you need your data. You have to get your data house in order, and there's a lot of steps to that. There's a lot of thinking that has to go into that. This simply eases one of the burdens of that process, which is really the last step, which is how do you connect the data to the model once you have your data? But you still, of course, have to gather your data, have it available in a location where you can actually make it connect to a model with MCP. 
So if you have your data in a bunch of legacy systems that were built maybe before the turn of the century, in some cases they're not going to support MCP out of the box. So this is not some magic wand that says Claude can connect to any data source on the planet with the snap of your fingers. It's not that at all. Each data source has to implement support for MCP in order for MCP consumers, which would be the models, to actually use that data. So I'm actually quite excited about this for two reasons.
Amith:First is the generality of it. Having standards around data transmission between models, and between data sources and models, is really important, and Anthropic is clearly one of the leaders in the space. It's going to take the heft of one of the major players in the market to propose a standard like this. This might end up being a VHS-versus-Betamax kind of situation, where Anthropic has MCP and maybe OpenAI or Google or someone else proposes something else, and that'll get shaken out. Initially, our strategy with MemberJunction, which is our AI data platform, is to immediately introduce support for MCP, so that any data that goes into MemberJunction automatically can be made available to Claude or any other model that uses MCP. We'll have that support available in the first quarter of next year as part of that open-source platform. So any data that you bring into MJ will automatically be MCP-enabled. But we fully anticipate that in the course of time, probably in 2025, other such proposed standards will exist as well, and the nice thing is it's fairly easy to implement this stuff. So in the kind of abstract, generic environment like MemberJunction that I'm speaking of, we can easily add these layers and make them things that people can turn on or off if they want those features supported. Ultimately, this stuff's going to shake out, and there will probably be one standard to rule them all, or at least a couple, and that's going to be good. So the net-net of this is, I'm really excited about it. It's a sign of the maturity of the market, or the increasing level of maturity in the market, and it's going to make it easier to supercharge models. What I mean by supercharging models is, once models become system- or data-aware, they can do a lot more for you.
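Concretely, MCP is built on JSON-RPC 2.0 messages exchanged between a client (the application hosting the model) and a server that fronts a data source. A toy sketch of the shape of one exchange, a client asking a server to read a resource (the method name follows the spec as published in late 2024, but the server, the `memberdb://` URI scheme, and the exact response fields here are illustrative assumptions):

```python
import json

def make_request(req_id: int, method: str, params: dict) -> str:
    """Build a JSON-RPC 2.0 request, the wire format MCP messages use."""
    return json.dumps({"jsonrpc": "2.0", "id": req_id,
                       "method": method, "params": params})

def fake_mcp_server(raw: str) -> str:
    """Toy stand-in for an MCP server fronting a single document store."""
    req = json.loads(raw)
    if req["method"] == "resources/read":
        uri = req["params"]["uri"]
        result = {"contents": [{"uri": uri, "mimeType": "text/plain",
                                "text": f"(contents of {uri})"}]}
    else:
        result = {}
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})

request = make_request(1, "resources/read", {"uri": "memberdb://members/123"})
response = json.loads(fake_mcp_server(request))
print(response["result"]["contents"][0]["text"])
# -> (contents of memberdb://members/123)
```

The point of the standard is exactly this uniformity: once a system exposes its data behind this message shape, any MCP-aware model host can read it without a custom integration.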
Amith:Now I would like to throw a little bit of caution into this as well, which is, even though I'm very bullish on the protocol and the idea of having a standard for the interchange of data in a secure manner, I wouldn't necessarily recommend that you connect all of your data directly to the model. The reason is that you need to think about data governance. You need to think about whether you really want to give Anthropic or OpenAI or Google or Mistral, or whoever else the model provider is, a direct pipe to your data, or whether you want to provision that a lot more carefully. Those are questions of governance, and questions of trust in the vendor. There are a lot of questions. So just because the technology will soon allow you to share any and all of your data quite easily with the model provider, that doesn't mean that you should do it.
Amith:I think that a lot of people will choose not to directly connect enterprise data stores to models. Rather, they'll lean on agentic workflows, where they have systems in place that they have more control over: multi-agent environments, which we've talked a lot about on this pod, where multiple different AIs work together to collaborate on solving a bigger problem. The agent-level system will have access to the data, probably through MCP or something similar, and then the agent system will determine which little bits and pieces of the data it needs to actually share with the model. I generally think that's a safer bet because you have control over the agentic architecture. Whether you're using something like LangGraph, or Autogen, or AG2, or any of these other systems, you typically have more deployment control over that. So I'm getting a little bit into enterprise IT type considerations, but I think it is an important thing to at least flag for discussion if you think deploying a protocol like MCP makes sense for the association market.
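The agent-as-gatekeeper pattern described above can be sketched in a few lines. Everything here is hypothetical (the record, the field names, the policy), but it shows the key idea: the governance decision about which fields ever reach the model lives in a layer you control, not with the model provider.

```python
# Hypothetical member record; the two flagged fields are the kind of
# sensitive data a governance policy would keep away from model providers.
FULL_MEMBER_RECORD = {
    "name": "Jane Doe",
    "email": "jane@example.org",
    "membership_tier": "Professional",
    "payment_card_last4": "4242",   # sensitive: stays in the agent layer
    "home_address": "123 Main St",  # sensitive: stays in the agent layer
}

# The policy is defined and enforced inside your own agent system.
ALLOWED_FIELDS = {"name", "membership_tier"}

def prepare_model_context(record: dict) -> dict:
    """Return only the governed subset of a record for the model prompt."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

context = prepare_model_context(FULL_MEMBER_RECORD)
print(context)  # sensitive fields are filtered out before any model call
```

A real agentic framework would do this with tool definitions and structured retrieval rather than a dictionary filter, but the control point is the same: the agent decides what the model sees.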
Amith:What I would like to see, and I don't know if this will happen or not, is all the major providers supporting MCP. So your AMS, your LMS, your FMS, your CMS: if all of them support MCP, then theoretically you could choose to connect them either to an AI agent system or to a model directly. Whether that will happen quickly, or at all, is obviously up for debate, and your forecast is as good as mine. But that's actually where intermediate software layers, like the Member Junction AI data platform or others like it, can play a role. Even if legacy systems don't directly support MCP, if you bring all of your data into a unified data layer that you have control over, what we call an AI data platform or common data platform, and then you have MCP support on top of that data platform, you can do whatever you want. You don't have to worry about the legacy vendors providing you support for something like MCP, which may not be on their roadmap for years, if at all. So I would like to see as many vendors as possible support something like MCP.
Amith:I also think that established vendors in various verticals won't initially jump on the bandwagon, because they're going to want to see what happens, which is probably rational. Will there be a competing standard from OpenAI? Most likely, yes. If so, what will that look like? How different will it be? Which one will shake out to be the winner? Will there be multiple winners? Things like that.
Amith:That's the kind of stuff that goes through my head. All in all, I'm super excited about it.
Mallory:So MCP, or the Model Context Protocol, is a standard or framework that allows for the safe connection of data sources to AI models. Is that correct?
Amith:Yes.
Mallory:And its usefulness is totally dependent on however many companies decide to implement it.
Amith:Correct. A protocol is kind of like a language combined with a culture. Think about it in terms of a very commonly used protocol: the Hypertext Transfer Protocol, or HTTP. That is the protocol that all web browser traffic is based on. It goes back to the late 80s and became a standard on the internet early on. Essentially, what it does is define the way computers connect to each other to transfer text and images and stuff like that. The protocol allows for secure connections and for unsecured connections; it allows you to transfer text and images, it allows you to stream, it allows you to do a lot of different things. But it's essentially a standard way for two computers to connect.
Amith:Now, the reason I say it's a language and a culture is that the language is essentially a common set of words, and the culture is the beginning, the middle, and the end of the conversation. You and I might both speak English, but if in my culture I just dive right into the meat of the conversation, and in your culture you want to have some small talk first, then we're going to have a little bit of friction initially until we learn each other's culture. Not that one is good or bad; it's just different. So a protocol is a little bit more than just the raw language. It's also the way you use that language to have the computers communicate.
Amith:There are a lot of protocols that we rely on in our day-to-day lives. Using this Zoom technology right now, Zoom has protocols for the transmission of real-time video. There's a lot of stuff like that. Protocols really make up the backbone of everything we do with computing, particularly on networks, which is of course the type of protocol this is. But protocol more broadly just means a way of interacting between people and people, people and computers, or computers and computers. So there are going to be a lot of these things over time. I just find it fascinating that a major player like Anthropic has put one of these out fairly early in the game. But I think it's great that they did, because they recognize this is a major enterprise need.
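To ground the language-plus-culture analogy, here is what an HTTP/1.1 "conversation" actually looks like on the wire. The shared vocabulary (GET, Host, status codes) is the language; the fixed request-then-response choreography, including the blank line that hands the turn to the other side, is the culture.

```python
# The raw text of a minimal HTTP/1.1 exchange, exactly as the two
# computers would send it over the network.
request = (
    "GET /index.html HTTP/1.1\r\n"
    "Host: example.com\r\n"
    "\r\n"  # blank line: "I'm done talking; your turn"
)

response = (
    "HTTP/1.1 200 OK\r\n"
    "Content-Type: text/html\r\n"
    "Content-Length: 14\r\n"
    "\r\n"
    "<h1>Hello</h1>"
)

# Either side can parse the other's message because both follow the spec.
status_line = response.split("\r\n")[0]
print(status_line)  # HTTP/1.1 200 OK
```

MCP does the same job for model-to-data-source conversations that HTTP does for browser-to-server conversations: an agreed vocabulary and an agreed order of exchange.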
Mallory:Okay, that's incredibly helpful context. When thinking about connecting data sources to AI models, my mind certainly goes to a company like BettyBot, which is in our family of companies and seems to do exactly that. I'm curious what you would say about how they are different.
Amith:Well, a product like Betty is an application. Betty is an application where you say, hey, Betty, here's my website, here are all of my private stores of prior journal articles, here are all the proceedings from all of my conferences in the past. You basically give Betty a bunch of data. So that's where it's similar. You say, hey, Betty, here's all the stuff, and then Betty learns your content. Betty has this whole training process where she will essentially ingest that content, and thereafter the business value from Betty is a conversational knowledge expert who can interact with your staff, your members, the general public, whoever you want. It becomes essentially this component of your AI infrastructure that represents the full body of knowledge of your industry. So that's the functionality of something like Betty.
Amith:Where MCP may play a role is if you want Betty to be aware of data sources that go beyond that vast repository of unstructured data. Let's say that you have a member directory, and that member directory's source of truth is in, let's say, your AMS, and let's say the AMS supports MCP. And let's say Betty supports MCP, which as of today she does not, but I envision the Betty team probably will rapidly support it. The idea is that you could say, oh well, Betty, we want you to be aware of our member directory, and here it is, and it's an MCP endpoint that you can connect to. Then Betty can access it whenever a relevant request comes in where member information is involved. So that would be one way that something like Betty could use MCP. Betty is not a model; Betty is a multi-agent solution. Actually, Betty has multiple different agents within her, so to speak, and it's very easy to envision how Betty could take advantage of MCP.
Amith:Very similarly, Skip, which is another multi-agent software solution within our family, is our data analytics platform, essentially. Skip allows you to do the same kind of thing against both structured and unstructured data, where you can request reports, and Skip will go and program all the queries against the database, construct the report, and run it for you, and all that kind of stuff. Skip could directly use MCP to access a number of disparate data sources. Right now, in order for Skip to work, you have to have your data in Member Junction. But imagine a world where Skip could connect to any number of disparate data sources through MCP, or where Member Junction, using MCP, could ingest all that data automatically and put it in the platform, and then Skip could just sit on top of that.
Amith:So MCP could potentially play a role in those kinds of solutions, for sure. There are a ton of examples like that where different kinds of data systems in the association landscape would benefit from it. In theory, you could use it as an integration layer between various pieces of technology, like an AMS and an LMS, although I don't know if it would necessarily lend itself to that type of integration, but it's theoretically possible.
Mallory:So in the world prior to November 2024, we were relying heavily on custom integrations if we wanted to connect these data sources to models, and now we potentially have this framework that could be used widely and make that much easier.
Amith:Essentially, yes. Having a standard, if there's enough scale to its adoption, makes it easier and easier for everyone in the ecosystem to say, yes, we support MCP, therefore it's easy for our systems to talk to each other. Imagine a world where there were multiple competing standards for something like a web browsing protocol, and the web browser on your computer only supported one flavor of it, while my web server that had my content didn't support the version you're using. In the early, early days of the internet there were actually variations like that, not so much in the protocol but in the layer above it, in terms of HTML standards.
Amith:Protocol standards, languages, all this stuff: you can abstract it all out into the concept of a standard agreement. It's like a handshake between all the parties saying, this is how we're going to do stuff. If you think about it in that more generalized way, it's easier to consume in a non-technical way. It's just saying, this is how these systems are going to interoperate in a way that makes them system-independent, or at least less dependent on one another. And interoperability is always a good thing.
Mallory:Now, this is quite new, from just last month. Do you think it would be worthwhile for our technology leader listeners to be asking the vendors they work with if they're going to adopt MCP, or if they have plans for it? Or do you think we're not quite there yet?
Amith:I think it's a great question to ask vendors that you're talking to about AI projects, just to know whether they are up to speed on what's happening in the industry, because most people I talk to regularly about AI are very, very involved in reading about this stuff, understanding it, and playing with it. So it serves to benefit you to ask the question, just to know whether the vendor is even thinking about this or has awareness of the protocol. That by itself may be of value in an evaluation of a vendor, or a conversation with one. But as far as whether it actually matters for your immediate implementation future, it may not right now. I wouldn't be super stressed about it. If you're talking to someone that's a great fit for you but they don't have plans to support MCP, it doesn't necessarily mean that it's an issue. The main thing to look for is whether there's some other way to access the data. Let's say you're looking at a new marketing platform, and this platform, like pretty much all marketing platforms right now, doesn't yet support MCP. Even someone like HubSpot doesn't yet have official support for MCP. I suspect they will very, very soon because they're very forward on all things AI. But let's just say you're using something else. You're like, oh, this other platform has amazing functionality, it's perfect for us, it's priced right, we love it, but it doesn't support MCP, and all the leaders, let's say 12 months from now, do.
Amith:The question would be, well, how do you work around that? If the vendor has another way of getting at the data in an open and unfettered way, you're in decent shape. What I mean by open is, can you use APIs or some other means of connecting with their data? And unfettered, in my mind, means they're not actually interfering with those requests. You don't have to go request data from a person and then hope they'll give it to you; there are some people who actually consider that a form of an API, as silly as that sounds. Realistically, if you don't have unfettered access to your data, you have a big risk with any system, because you're dependent upon the goodness of the vendor's heart to give you your data when you want it. So you want to be able to access the data. As long as you can access it through some programmatic means, some kind of API or some kind of database connection, you can get around the MCP issue if everything else is fine, because building an MCP layer on top of any kind of data won't be terribly difficult. But it's extra work for you, so I would definitely think about it.
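Building your own MCP layer on top of a vendor's ordinary API, as suggested above, is essentially the adapter pattern. Here is a minimal sketch with an entirely hypothetical vendor function and route naming; a real adapter would speak actual MCP messages rather than plain function calls.

```python
# Hypothetical stand-in for an ordinary REST/API call the vendor already
# offers, even though they have no MCP support.
def legacy_vendor_api_get_members():
    return [{"id": 101, "name": "Acme Chapter"}]

# Map MCP-style method/resource pairs onto the legacy calls you do have.
# The route keys here are invented for illustration.
ADAPTER_ROUTES = {
    "resources/read:members": legacy_vendor_api_get_members,
}

def mcp_adapter(method: str, resource: str):
    """Translate an MCP-style request into the vendor's native API call."""
    handler = ADAPTER_ROUTES.get(f"{method}:{resource}")
    if handler is None:
        raise KeyError(f"No adapter route for {method}:{resource}")
    return handler()

members = mcp_adapter("resources/read", "members")
print(members)
```

This is the "extra work" mentioned above: not conceptually hard, but one more layer you have to build and maintain until vendors ship native support.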
Amith:Remember also, this could be Betamax, right? This might be the Betamax version of an open protocol for data standards, and someone else may come out with the VHS version. Or it's the Blu-ray versus HD DVD thing; same idea. Someone else might come out with a better standard, or not a better standard, but just a standard that gets more widely adopted. So I'd say be aware of this stuff and let it shake out before you get too worried about it. But not being aware of it is a problem, because you want to be thinking about this stuff with any kind of enterprise AI strategy.
Mallory:I'm not going to lie, Amith, I did have to look up Betamax, but I see it was the competitor to VHS. I guess that's why I've never heard of Betamax, because it kind of fell off. Is that the story?
Amith:That's pretty much it, yeah. Betamax was Sony's proprietary standard that they wanted to push, and VHS, I forget who was behind it, obviously won. And then, back in the days of high-definition DVDs, there was the Blu-ray standard versus the HD DVD standard. HD DVD died and Blu-ray won. That tends to happen with each generation of technology. There's always something, whether it's a true standard or not; the preponderance of users goes to a particular tech stack, and that's what ends up winning. It might be the best technology, it might not be. You just don't really know.
Mallory:Well, will MCP be Betamax or VHS? We will keep you all in the loop on the Sidecar Sync podcast. Everybody, thank you for tuning in today. We will see you next week.
Amith:Thanks for tuning in to Sidecar Sync this week. Looking to dive deeper? Download your free copy of our new book, Ascend: Unlocking the Power of AI for Associations, at ascendbook.org. It's packed with insights to power your association's journey with AI. And remember, Sidecar is here with more resources, from webinars to boot camps, to help you stay ahead in the association world. We'll catch you in the next episode. Until then, keep learning, keep growing, and keep disrupting.