Sidecar Sync

Getting Hands-On with MCP & Anthropic’s AI Economic Index | 80

Amith Nagarajan and Mallory Mejias Episode 80


In this episode of Sidecar Sync, hosts Mallory Mejias and Amith Nagarajan take a deep dive into Model Context Protocol (MCP), a revolutionary standard making it dramatically easier for AI to interact with data, tools, and systems—no code required. Mallory shares her personal setup experience with Claude for desktop, walking through a live example of using MCP to automate file organization. Amith expands on why MCP is a game-changer for association leaders, how it enables business users to go from idea to execution quickly, and why data liberation is the missing piece for many organizations. The duo also breaks down new insights from Anthropic’s Economic Index, which reveals that AI is currently used far more for augmenting human effort than full automation. It’s an episode rich with practical insights, technical clarity, and a peek into the fast-arriving future of AI in the workplace.

🔎 Check out Sidecar's AI Learning Hub and get your Association AI Professional (AAiP) certification:
https://learn.sidecar.ai

📕 Download ‘Ascend 2nd Edition: Unlocking the Power of AI for Associations’ for FREE
https://sidecar.ai/ai

📅 Find out more about digitalNow 2025 and register now:
https://digitalnow.sidecar.ai/ 

🎉 More from Today’s Sponsors:
CDS Global https://www.cds-global.com/
VideoRequest https://videorequest.io/

🛠 AI Tools and Resources Mentioned in This Episode:
MCP Quickstart Guide ➡ https://modelcontextprotocol.io/quickstart/user
Claude for Desktop ➡ https://www.anthropic.com
Zapier ➡ https://zapier.com
Cursor ➡ https://www.cursor.so

Chapters:
00:00 - Welcome to Sidecar Sync and What’s Ahead
01:02 - Catching Up: Spring Break and Autonomous Driving
05:02 - What Is Model Context Protocol (MCP)?
07:06 - How MCP Works: Mallory’s Simple Explanation
08:11 - Step-by-Step: Setting Up Claude Desktop With MCP
12:42 - Live Demo: Moving Files Using Claude and MCP
18:01 - Why Associations Should Care About MCP
26:44 - MCP and the Role of AI Data Platforms
33:07 - What Is the Anthropic Economic Index?
35:59 - Key Findings: Augmentation Over Automation
39:34 - Amith on Real-World Automation Examples
44:46 - Vibe Coding: Cool Concept or Caution Zone?

🚀 Follow Sidecar on LinkedIn
https://www.linkedin.com/company/sidecar-global/

👍 Please Like & Subscribe!
https://x.com/sidecarglobal
https://www.youtube.com/@SidecarSync
https://sidecar.ai/

More about Your Hosts:

Amith Nagarajan is the Chairman of Blue Cypress 🔗 https://BlueCypress.io, a family of purpose-driven companies and proud practitioners of Conscious Capitalism. The Blue Cypress companies focus on helping associations, non-profits, and other purpose-driven organizations achieve long-term success. Amith is also an active early-stage investor in B2B SaaS companies. He’s had the good fortune of nearly three decades of success as an entrepreneur and enjoys helping others in their journey.

📣 Follow Amith on LinkedIn:
https://linkedin.com/amithnagarajan

Mallory Mejias is the Manager at Sidecar, and she's passionate about creating opportunities for association professionals to learn, grow, and better serve their members using artificial intelligence. She enjoys blending creativity and innovation to produce fresh, meaningful content for the association space.

📣 Follow Mallory on LinkedIn:
...

Speaker 1:

At the moment, as good as these tools are at building working software, they can absolutely build software that creates problems for you that you don't realize, right? So you have to be very thoughtful. Welcome to Sidecar Sync, your weekly dose of innovation. If you're looking for the latest news, insights and developments in the association world, especially those driven by artificial intelligence, you're in the right place. We cut through the noise to bring you the most relevant updates, with a keen focus on how AI and other emerging technologies are shaping the future. No fluff, just facts and informed discussions. I'm Amith Nagarajan, chairman of Blue Cypress, and I'm your host. Greetings everybody and welcome to the Sidecar Sync, your home for content at the intersection of artificial intelligence and associations. My name is Amith Nagarajan.

Speaker 2:

And my name is Mallory Mejias.

Speaker 1:

And we are your hosts, and once again, we're here to deliver the good news of AI in the world of associations and some exciting topics here today that we're really, really excited to jump into momentarily. Before we do, let's just take a quick pause to hear from our sponsor.

Speaker 3:

VideoRequest: easily collect, edit and share videos. Collect videos from your customers, members or volunteers by simply sending a link. Contacts can enter their information and record a short video. Once your video is complete, our platform makes it simple to share your video on social media, on your website or in marketing campaigns. The possibilities are endless. Get started for free today.

Speaker 4:

What if you could reduce churn, lower costs and forecast membership trends all from one platform? CDS Global brings predictive AI to the business of membership. We help executive teams simplify operations, protect recurring revenue and uncover the secrets to success buried in your data. Every missed insight costs you members. Uncover what your data already knows at cds-global.com/AI.

Speaker 2:

Amith, we've been out of the podcast game for a minute there. How are you?

Speaker 1:

I'm doing great, and I think between your vacation and my vacation, we've been out of pocket for a couple of weeks, but fortunately we had some great content recorded that we dropped over the last week, week and a half. But I'm doing great. I had an awesome family spring break trip out to Hawaii, and you know you can't complain about going to Hawaii, not that I would find a reason to. It was just absolutely stunningly beautiful and had great family time. My teenagers were, you know, happy to be there, which was good, and my wife and I had a great time too, so it was awesome. How about you? How are you doing?

Speaker 2:

I'm doing well. Like you said, I also had some travel lined up. I got to go to Yosemite for the first time, which was beautiful, did some incredible hikes over there, a little bit more challenging than I thought going into it, but it was really fun. And then I spent a few days in the Bay Area and then in San Diego too for a wedding, so it was a pretty crazy week. I was just telling you, Amith, I also forgot to mention: have you ever been in a Waymo when you were in San Francisco?

Speaker 1:

No, it's been on my list of things to do. You tried one?

Speaker 2:

I did. Actually, once we got in we realized we loved it, this is my husband and I, and so we took three in one day. But we did play a funny joke on my mom, who is quite a worrier and does not know about Waymo, where we recorded a video of us and said, oh, we're in the Uber, you know, check this out, check the scenery out, oh, where's the driver? And then kind of had a video of the wheel just moving. She of course panicked and texted me, Mallory, get out of the car. But it was really safe, and I actually will say I might have preferred it to a human driver, because I felt more safe in that car than I would with a human driver, I think.

Speaker 1:

You know, I think that most of us, even those of us that consider ourselves very, you know, cautious, diligent, defensive drivers, we're humans and we get tired and we get distracted and we have things on our minds, you know, whether it's professional or personal or family or whatever. AI doesn't get distracted. AI is always looking in all directions, and the algorithms aren't yet perfect, but, you know, statistically these cars are actually already much safer than the average driver, and soon they'll be, you know, safer than all drivers combined. So I think that is definitely the wave of the future. My youngest is about to get her driver's license. I think that's a great rite of passage, but, you know, for those that have a little bit younger kids, maybe your kids will never drive, or never need to drive, and maybe it'll just become a sport and a hobby.

Speaker 1:

I would say that I am a giant fan of classic cars and have a couple myself, and I think it's one of those things where, you know, it's a different reason for having them. It's not about transportation, it's a hobby, it's a passion. But I would love to see people continue to drive in safe and responsible ways, obviously, but I do think that transportation, as you know, a way of moving people and things around is going to get dramatically safer and also more efficient. Because, you know, we human drivers are not particularly great at conserving energy. We tend to accelerate and brake in ways that are unpredictable, and AI is better at that too. So I think that's really exciting. I'm looking forward to trying Waymo. I'm actually going to be out West in a couple months, I think, so I'm going to try to find an opportunity to do that. But, even more importantly than AI, tell me what your favorite hike in Yosemite was.

Speaker 2:

So we did a few. I would say my favorite one was Nevada Falls. Have you done that one, Amith?

Speaker 1:

Yes, I grew up in California, and so I've been to Yosemite bunches of times. This is one of my favorite places in the world.

Speaker 2:

So we stayed in Curry Village, in these little tent yurts, and then we did Nevada Falls. But the direct trail to Nevada Falls was closed, and so we kind of had to take this indirect route where we went back down and back up, and I think that's what really got me by the end. I was like, I'm so ready to be there. But once you got there, the views, the waterfalls were insane. I think the time of year that we went was great for those, and so we had a blast. We also saw the lower Yosemite Falls. We did a little bit of biking around the valley. It was a great time.

Speaker 1:

Wonderful, that's great. I love it when people get out to Yosemite. I haven't personally been in quite a number of years, but it's one of my favorite places to go.

Speaker 2:

It was incredible. But now it's time to talk about some exciting AI topics. We'll cover Model Context Protocol, or MCP, which we've actually covered on the pod before, but we'll go more in depth today. And then we're also going to chat about Anthropic's Economic Index and how Claude 3.7 is being used. So, first and foremost, Model Context Protocol. Don't be scared, it sounds a bit technical, but it's not too difficult. MCP is an open, universal standard designed to connect AI models, especially large language models or LLMs, and AI-powered applications, to a wide variety of external data sources, tools and services in a secure, scalable and standardized way. Traditionally, integrating AI models with external systems required custom code for each data source or tool, leading to fragmented, hard-to-maintain solutions. MCP addresses this by acting as a quote-unquote USB-C port for AI, providing a single, standardized method for connecting models to the data and tools they need, regardless of the underlying provider or format. So think of it this way: with MCP, you could ask Anthropic's Claude, for example, to look up information in your CRM, send a message on Slack or access files on your computer, all without writing a single line of code. What makes this particularly exciting is how it massively expands what AI can do. Any MCP client can potentially connect to any MCP server, creating an explosion of possibilities. As an example, HeyGen recently released an MCP server that lets you tell Claude, generate a HeyGen video for every name in the attached CSV, and it works. To give you a little bit of the technical side, just so we can have a foundation for discussion, it's built on a client-server architecture. So you've got the host, which is the AI-powered application, like the chatbot or desktop assistant, that needs access to external data. You've got the client, which manages a dedicated connection to an MCP server, handling communication and capability negotiation. And then you've got the server, of course, which exposes specific capabilities, such as functions, data or prompts, over the MCP protocol, connecting to local or remote data sources.
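To make that host / client / server breakdown a bit more concrete for readers following along, here is a minimal sketch of what an MCP server can look like using the official Python SDK's FastMCP helper (assuming the `mcp` package is installed; the API shown is current as of this recording and may shift). The server name and both tools are invented purely for illustration.

```python
# Minimal illustrative MCP server using the Python SDK's FastMCP helper.
# Assumes the official "mcp" package is installed; tool names/logic are examples only.
from pathlib import Path

from mcp.server.fastmcp import FastMCP

# The server advertises its capabilities (here, two tools) to any connected MCP client.
mcp = FastMCP("demo-file-info")


@mcp.tool()
def count_files(folder: str) -> int:
    """Return the number of files sitting directly inside the given folder."""
    return sum(1 for p in Path(folder).iterdir() if p.is_file())


@mcp.tool()
def largest_file(folder: str) -> str:
    """Return the name of the largest file in the given folder (empty if none)."""
    files = [p for p in Path(folder).iterdir() if p.is_file()]
    return max(files, key=lambda p: p.stat().st_size).name if files else ""


if __name__ == "__main__":
    # stdio is how a local host app such as Claude for Desktop launches and talks to servers.
    mcp.run(transport="stdio")
```

A host app like Claude for Desktop would launch this script as a local server, list count_files and largest_file among its available tools, and call them on the model's behalf when a conversation needs them.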

Speaker 2:

So I set out to test this for myself with some encouragement from Amith, and I'm glad I had that encouragement, because once I glanced at this quickstart guide I thought, huh, maybe this is something I'll do later, but it's actually not that hard. So I want to share my screen and show you the process quickly for setting up MCP on your own device. If you're interested, I encourage you all to check it out. It might seem a bit involved in the process part, but it's fairly simple, and watching what you can do in the end is pretty fun. So if you drop Claude model context protocol into any search engine, it will pull up this quickstart guide.

Speaker 2:

I'm going to walk you through this piece by piece. First and foremost, you have to download Claude for desktop. That's how you can get set up for MCP to work for you. So that's the first step. I downloaded Claude for desktop, which I have here on the left side of the screen, and then what you want to do from there is, inside Claude for Desktop, go to settings. I'm on a PC right now, so I'm just going to File and Settings, and you will see it pulls up this screen. I'm going to navigate to the Developer tab on the left-hand side. Mine looks a bit different because I've already configured this, but it will look something like what I'm showing you on the right-hand side of my screen, and you will just click a button that says edit configuration.

Speaker 2:

From there, this is going to create a file on your computer. It should automatically pull up that file. What you're going to do is open the file, delete any of the text that's currently in there, and you're going to copy and paste in the code that you directly pull from the quickstart guide. I have Windows, so I copied and pasted the Windows code and I dropped it into that same file that was created on my computer. Something important to note here is that you have to replace username with the actual username of your device.

Speaker 2:

I also want to point out that if you see this part, with the standard code provided by Anthropic, I'm giving it access to desktop and downloads. I decided I didn't really want to give Claude for desktop access to my desktop folder, so I just deleted that line in my version and only left the downloads folder. I don't particularly have anything sensitive in my downloads folder, so that is the sandbox that I decided to use. From there, you also have to make sure that you have Node.js on your computer. It'll give you some instructions; you just go to this website link and download it to your computer as well. And then finally, here at the end, you just need to close out Claude and restart it, and then you should have access to the model context protocol.
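For reference, the edit Mallory walks through boils down to one small JSON file. The sketch below writes a claude_desktop_config.json that exposes only the Downloads folder via the reference filesystem server, mirroring her sandbox choice. The config location and the npx invocation follow the quickstart guide for Windows, but treat the paths as assumptions and double-check against the official instructions for your machine; computing the Downloads path from your home directory also sidesteps the manual username substitution she mentions.

```python
# Sketch of the quickstart config edit: expose ONLY the Downloads folder to
# Claude for Desktop through the reference filesystem MCP server.
# Paths follow typical Windows defaults and are assumptions; adjust as needed.
import json
import os
from pathlib import Path

# %APPDATA%\Claude\claude_desktop_config.json is the usual Windows location.
config_path = Path(os.environ["APPDATA"]) / "Claude" / "claude_desktop_config.json"

downloads = str(Path.home() / "Downloads")  # sandbox: Downloads only, no Desktop entry

config = {
    "mcpServers": {
        "filesystem": {
            # Requires Node.js; npx fetches and runs the reference filesystem server.
            "command": "npx",
            "args": ["-y", "@modelcontextprotocol/server-filesystem", downloads],
        }
    }
}

config_path.parent.mkdir(parents=True, exist_ok=True)
config_path.write_text(json.dumps(config, indent=2))
print(f"Wrote {config_path} -- restart Claude for Desktop to load it.")
```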

Speaker 2:

As a quick note, on my computer it was a little bit glitchy, I don't know if this was just a my-device thing, so I actually had to restart my computer. And the way that you know this is working is when you see this little hammer here and it says 11 MCP tools available. That's how I knew it worked. These are just some of the things that you can do with it. So essentially, I'm giving Claude for desktop access to the files on my computer, only in my downloads folder, so keep that in mind. I can edit files, I can get file info, I can create directories, I can move files, which is the example I want to show you, read files, read multiple files, search files, write files. So lots of options there. And now I want to show you how I decided to use it.

Speaker 2:

I will admit my downloads folder is not the most organized all the time. I'm mostly downloading lots of images, images for blogs, images for cover art for the Sidecar Sync podcast, so lots of images in there, and I decided to run a really quick experiment here. I said, basically with no further prompting, as you can see on my screen, can you take all the images in downloads and move them to a folder called images? I feel like that's a pretty straightforward prompt. I didn't provide a ton of detail, and then it started this lengthy process, which I've already run because it took quite a bit, since, as I admitted, I have a lot of images in my downloads folder. It says, I'll help you move all the images from the downloads folder to a new images folder. Let me first check if both folders exist and create the images folder if needed. That images folder did not exist, so it created it, and scrolling down my screen here, you can see it moving all the image files from my downloads folder, which is quite a few, into a new folder called images, and I confirmed this.

Speaker 2:

When the whole process was run, I went to my downloads folder and I saw a brand new images folder there that I had not created, and all the images were moved in there. So it did take a few minutes to set up and it took a few minutes to run, but I think this is a really neat use case to showcase. Also, as a note, I did ask it to read a PowerPoint deck in my downloads and it did say the file size was too big, so that's on my end. It can read PDFs. I also did some experimentation with Word documents, and it seemed to struggle to read Word docs, but when they were in PDF format it could actually read what the file said and then provide a summary to me. So that is just a quick example of how you can test out model context protocol on your own. So, Amith, why do you think our listeners should care about MCP?
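Before Amith's answer, one aside for readers following along: the work Claude did through its filesystem tools in that demo is roughly equivalent to the short script below. It is included only to show how simple the underlying action is once a model can call tools; the folder name and the list of image extensions are assumptions matching Mallory's example, not anything MCP prescribes.

```python
# Roughly what Claude's move-file tool calls accomplished in the demo:
# create an "images" folder inside Downloads and move image files into it.
import shutil
from pathlib import Path

downloads = Path.home() / "Downloads"
images_dir = downloads / "images"
images_dir.mkdir(exist_ok=True)  # create the target folder if it doesn't exist yet

image_extensions = {".png", ".jpg", ".jpeg", ".gif", ".webp"}  # illustrative subset

for item in downloads.iterdir():
    if item.is_file() and item.suffix.lower() in image_extensions:
        shutil.move(str(item), str(images_dir / item.name))
        print(f"Moved {item.name}")
```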

Speaker 1:

I want to go back to something you mentioned in your description that I think Anthropic popularized when they announced MCP, which is this idea of the USB port for AI. So what is the USB port? It allows you to connect different kinds of electronics together in a standardized way. For those of us that have been around a little while, connecting peripherals to computers used to be quite challenging. You'd have all sorts of different types of connectors, ways of connecting printers and external hard drives and different kinds of peripherals to your PC or your Mac, and then USB, the universal serial bus, came out maybe 20 years ago, and various versions of that have evolved over time. And as USB has evolved, it's gotten faster and more standardized, and most devices now work off of USB-C. In fact, even Apple's latest iPhones, begrudgingly on their end, are USB-C compatible, which makes them easier to charge and connect with a variety of different things.

Speaker 1:

So why is MCP the equivalent of a USB standard for AI? Well, it's a standard way of connecting tools and models and applications together in a really simple way. So it's a very, very lightweight and simple protocol. You described it really well in terms of what the client and what the server does. From a business user's perspective, what it means is that AI can take action. So for a long time on this pod we have separated out the idea of AI models from AI agents by saying models do thinking, they sometimes now are starting to do reasoning, but they can't take action on your behalf. And AI agents, on the other hand, combine models with behaviors or with actions. Well, in a way, MCP essentially agent-enables a desktop app like Claude to take action for you. So now the model essentially is able to gain context of its environment, understand data, understand documents, understand you more and more and more from all these other tools, and can talk to these other tools to say, hey, I want to create a record in the CRM, I want to renew this member in my AMS, I want to save a file to the file system.

Speaker 1:

Prior to MCP, you would have to manually do those things. Claude or ChatGPT or Gemini could give you advice. It could say, hey, Mallory, here's a really great email that you could send to this member because they complained about something. In comparison, with MCP, you could say, hey, Claude, take a look at my customer service inbox and, using Zapier or using whatever, read my most recent five complaints. Tell me the best way to respond to them. Okay, go ahead and respond to them with those responses.

Speaker 1:

And if you enable MCP servers to retrieve messages and send messages, which obviously has its risks too, you don't want to necessarily just let the AI run wild with it, right? We can talk more about that, but the idea is that the AI can actually take action on your behalf. Now, this is not a new capability in the general sense of it. Developers could previously stitch together these types of capabilities, building what we've been calling AI agents or AI systems, but now you, as a business user without any coding skill, can do this. Claude is Anthropic's AI product, and Anthropic is the company that proposed the MCP standard as a protocol.

Speaker 1:

And the good news is, every other major lab is behind MCP, including OpenAI. OpenAI didn't jump behind it right away. Google got behind it, a number of other labs got behind it, a lot of people started building for MCP, and then recently OpenAI announced that they would fully support MCP. So you will be able to do what Mallory just demonstrated in ChatGPT as well. At the moment, Claude Desktop is the tool that we use for this type of functionality, because it's the best user interface for interacting with MCP. Also, our friends over at Groq, Groq with a Q, just released the Groq Desktop, which is primarily a little bit developer-centric. It's a brand new product, but that allows you to do MCP stuff as well. So there are multiple different options. If you are a software developer, you can utilize MCP servers from Visual Studio Code, from Cursor, from Windsurf, from other environments. So MCP is a standard already that's being adopted by a lot of different apps.

Speaker 1:

To me, though, just zooming out to your question of why it matters: let's say you're the CEO of an association and you're decidedly non-technical. You want to leverage AI, but you're not a programmer. You don't ever want to be a programmer. How do you use this stuff? Well, just follow the instructions that Mallory showed you. It might be a little bit involved-looking, but just get over the hump and go do it, because if you try it out you're going to be blown away by the capabilities, and then you personally know what is possible with this MCP protocol. It's really, really powerful, so I'm excited about it. I think it's going to open up a whole category of use cases for business users. You know, as much as I'm a programming geek and love building systems and all this stuff, I get more excited about empowering just the typical user, right? How do we make the typical user more productive, where their creativity, their curiosity can directly lead to better results? And MCP is a major bridge for that. So I'm pumped about that.

Speaker 2:

So you mentioned all the major labs, or many of the major labs, are MCP compliant. Is that the right term?

Speaker 1:

Yeah, I don't know if compliance would imply some kind of standardized testing and verification, so I don't know that I'd go so far as to say that. I think they're just expressing their support for MCP as a protocol. You know, do you want to build on top of Betamax, right? Or do you want to build on top of HD DVD? These are old standards, for those of you that aren't familiar, that died over time. What you want to do is try to build on top of a standard that's going to be around for a while, and so, with all these labs behind MCP, it's quite likely that MCP will see several years of flourishing activity around it.

Speaker 2:

And you mentioned I can't do the same with OpenAI right now, but that I could. Would I have to download something else from OpenAI to my computer? Can you kind of explain how that would work, or do I already have it?

Speaker 1:

So ChatGPT, at the moment, their desktop client, as of the moment we're recording this in late April of 2025, is not what they would call an MCP client. It cannot talk to MCP servers, but OpenAI has expressed that they plan to add it, so by the time you listen to this, it may already be in there. And you do need to download the desktop app; this is not something you can do through the web browser. You have to download the desktop app for your Windows or Mac machine, and Claude supports it already. I suspect imminently that OpenAI is going to introduce this, because there's so much buzz around it and, honestly, it's not that hard to support. It's a fairly easy thing to do to support MCP from a software developer perspective, so I expect OpenAI will have it very, very soon. I'd be shocked if Google didn't introduce a Gemini desktop product that has this imminently as well. So you'll see it as a standard thing. Also, one of the things I love about Claude is their user experience is awesome. The way they do artifacts is great. Their experience around configuring MCP leaves a little bit to be desired. It's a little bit complex. That will change very rapidly.

Speaker 1:

Again, I would urge all of you that are listening to or watching the Sidecar Sync: you are on the forefront. You are doing your best to look ahead and to be part of this wave. Don't let that little bit of friction stop you. Go try it out now. I would imagine in the next few months these tools will all have way easier user interfaces for setting up MCP servers, much more point-and-click kind of stuff. But right now it's a little bit involved to get it set up. But that's a one-time thing. Once you set it up, then it's back to chat and you can talk to Claude or whatever the tool is about what you want. It can start doing amazing stuff for you.

Speaker 2:

And, to break your brain a little bit, I did have some error logs when I tried to do this the first time. Looking back, it really is a simple process, but I had just not done anything like that before. I took the error logs, pasted those into ChatGPT, because I was over my limit for Claude until 6 pm, had it identify what was going on, and then it provided me advice on what I could do to fix it. So, reminder, you can use AI on top of AI on top of AI to help you with AI.

Speaker 1:

Yes, and once you install the file system MCP server that Mallory demonstrated, now Claude has access to Claude's configuration file. So Claude can actually configure his own MCP configuration file for additional MCP servers. So if you, for example, wanted to configure Zapier as an MCP server, there's a portion that you do on Zapier's website, and then you can say, hey, Claude, here's the documentation from Zapier on their MCP capabilities, here's a URL that I got from them with my MCP server endpoints, now set up your own MCP file to do that, and Claude Desktop will be able to edit, quote-unquote, his own configuration file. So once you give Claude access to a portion of your file system, you are in good shape to be able to really simplify this. And by the way, when I emphasize the word portion of your file system, as Mallory demonstrated, it is a good idea to limit the areas of your computer that you give any tool access to. There's really no reason to give Claude or any other tool complete access to your whole system. It's best to give access just to limited portions of your system. This is also a good opportunity to just emphasize that we live in the wild west of AI. These are early days. You are going to see all sorts of tools come out saying, hey, I've got this great chat desktop app that supports MCP, install me, I'm free, and maybe it is really cool. But be thoughtful about where you download software from. Get software from trusted providers; the major labs are a good place to start. I'm not saying third-party apps are bad. I'm just saying that you should probably be thoughtful about software you install on your computer, particularly if you're going to enable tools to access your file system or to access websites that hold important data, like your CRM, for example. This is all very, very powerful, but be thoughtful about what you do. Set up experiments, sandbox them in thoughtful ways, encourage your team to play with them, but also teach your team about the downside risks of just installing whatever random software. So, for example, I'm a big fan of what DeepSeek is doing in terms of research. I love the fact that they're open-sourcing all their models. I think by the time you listen to this, we'll probably have a DeepSeek R2, like a next-gen reasoning model, out there, and that's awesome.
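To illustrate the kind of edit Amith is describing, the sketch below adds a second server entry to the same configuration file from earlier, which is exactly the change Claude itself could make once it has filesystem access. The zapier entry is a placeholder shape only; the real command and endpoint come from Zapier's MCP documentation and your own account, so treat everything inside it as hypothetical.

```python
# Hypothetical sketch: register an additional MCP server in the existing
# Claude Desktop config -- the same edit Claude could make for itself once it
# has filesystem access. The "zapier" entry below is a placeholder; consult
# Zapier's MCP docs for the real command and endpoint for your account.
import json
import os
from pathlib import Path

config_path = Path(os.environ["APPDATA"]) / "Claude" / "claude_desktop_config.json"

config = json.loads(config_path.read_text())

config.setdefault("mcpServers", {})["zapier"] = {
    "command": "npx",  # placeholder launcher
    "args": ["-y", "example-zapier-mcp-connector", "https://example.invalid/your-endpoint"],
}

config_path.write_text(json.dumps(config, indent=2))
print("Added a placeholder 'zapier' entry -- restart Claude for Desktop to load it.")
```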

Speaker 1:

Now, at the same time, I would not use DeepSeek's website ever, because its inferencing is happening in a country where we don't know what's going to happen to our data, and this is not anything anti-China or pro-China, it's just that I want my data to be here in the United States. Personally I feel more comfortable with that, and it would be true of really anywhere else in the world. And so, at the same time, you know, I would not want to download a DeepSeek desktop app, because I don't know where it's doing its inference. It's probably doing its inference in China. If I'm going to give it access to my file system, what's happening, right? So I think you have to be thoughtful. I'm not saying to be paranoid, but I'm saying be thoughtful about apps you download and how you give them access to increasingly powerful tools in your environment. Using a web browser like chatgpt.com, there's a limit to what it can access, but a desktop app can access a whole lot more.

Speaker 2:

That's an excellent point, and it brings me to the question, Amith: if we have an association leader listening to this podcast, should they be incorporating MCP into their AI guidelines or usage policies? And then, on the flip side, should association staffers be asking for permission to do this? I mean, what's your take on that?

Speaker 1:

Yes, I think so. First of all, if you're a staffer and you don't have access to things that are in your association's tech stack, like your AMS, it probably doesn't have an MCP server and may not for a while, if ever, and some of these tools are a little bit long in the tooth, and so they might not have support for these kinds of things. That's okay. You can still experiment with other things, like the file system, but do it in a sandbox. When I say sandboxing, I mean set it up in an environment where you're not sharing super sensitive data. Start off with little test cases, like the downloads example of images and stuff like that. Those are benign. You know, even if those files did get uploaded to a server that you don't want them uploaded to, it's not a big deal. So be thoughtful. Test things out. As an association, yes, you should absolutely update your AI policy to accurately depict how you want MCP to be handled. It's a really important thing to educate every employee on. With the Sidecar Learning Hub, we are working on some additional lessons right now to add to the Learning Hub's content exactly on this topic, to help provide guidance to all of our learners, and I think, you know, the thousands of folks that are going through that content are going to be able to quickly take advantage of that, which is great. We will also publish a lot of publicly available content on the Sidecar blog to help provide guidance on AI policy and MCP and give you some templates. So it's really, really important to be thoughtful about this, because you will have users in your organization who want to experiment with this. Don't stomp on them, don't prevent them from doing it. Just give them some thoughtful guidance on what they should be doing and what you'd like them to not do, just like with AI tools in general. It raises the stakes in terms of security, but it also raises the possibilities so much higher. You should be excited, really, really excited, but you should also be very thoughtful about it.

Speaker 1:

Now I do want to say a couple of things about the world of associations and the kind of unique data architectures that many associations have. They might have mainstream systems like Microsoft Office 365, Google, Slack, which you mentioned earlier, things like that. Great, those tools are mainstream. They're big companies. If they don't already support MCP, they will very, very soon, right? That's an awesome piece of news. But you probably have some software in your stack like an AMS, an LMS, maybe an association- or nonprofit-specific financial management system. Maybe you have a content management system that's been tweaked, maybe you have some custom applications that have been written, and these products likely will never support MCP, or not anytime soon. Even if you're on a very contemporary AMS, for example, the vendor behind it probably is a much smaller company than one of the labs we're talking about, right? And so they're going to take some time to develop this capability. So what do you do? How do you get your world of data unified into the world of AI?

Speaker 1:

From an MCP perspective, what I wanted to mention is that this is the same challenge you have with unifying your data or accessing your data for any other kind of AI application. MCP is just going to put more pressure on you to figure this out, because your users are going to say, hey, if I have MCP access to my AMS data, oh my gosh, Mallory, what I could do is automate this and this and this and this, and it would be amazing. Can I please have access to it? And Mallory, who's the CEO of the association, says, hey, I'm really sorry, my AMS doesn't support it, there's nothing I can do. Or is there? And what Mallory can do is implement an AI data platform, which is really just another fancy term in the world of technology for another database, but a database you control. It can be literally anything you want it to be. It doesn't have to be Member Junction, which is our free, open-source AI data platform that everyone in the world is able to download, install and run on their own for no cost. That's why we built it. But you can use anything for this.

Speaker 1:

So you have an AI data platform that's continually ingesting data from your legacy systems, your AMS, your LMS, your FMS, and legacy might actually be a harsh word, it sounds negative, but it just means a system that doesn't necessarily support all of the latest standards, right? So you have all the data flowing into this unified database, and then, if that system supports MCP, now you have, at a minimum, read access to all of your data across your whole enterprise. So what if your AMS and CRM and LMS data was all in one unified database and now you stand up MCP support on that database, which, by the way, the latest version of Member Junction gives you, again for free? It covers every single piece of data: in a Member Junction AI data platform instance, you have an MCP server just built in right out of the box. So Claude knows all about the MCP protocol and knows that Member Junction in your environment supports MCP for all of the data types in your AMS. Claude can now access whatever you enable: accessing members, figuring out who's renewed and who hasn't, getting a list of people that are about to renew, pulling that list down, generating personalized renewal letters for each of them, and then connecting to Zapier to connect to your HubSpot to send individualized messages to every one of those people.
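To ground what read access over a unified database could look like, here is a hypothetical sketch of an MCP server exposing one read-only query against a local SQLite file standing in for an AI data platform. This is not Member Junction's actual implementation or schema; the table, columns and tool name are invented for illustration only.

```python
# Hypothetical sketch: an MCP server granting read-only access to a unified
# member database (SQLite as a stand-in for an AI data platform).
# Table/column names are invented; this is NOT Member Junction's actual API.
import sqlite3

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("association-data")
DB_PATH = "unified_association_data.db"  # assumed local copy of ingested AMS/LMS/CRM data


@mcp.tool()
def members_about_to_renew(days: int = 30) -> list[dict]:
    """List members whose membership expires within the next `days` days."""
    conn = sqlite3.connect(DB_PATH)
    conn.row_factory = sqlite3.Row
    rows = conn.execute(
        "SELECT member_id, full_name, email, expires_on FROM members "
        "WHERE expires_on <= date('now', ?) ORDER BY expires_on",
        (f"+{days} days",),
    ).fetchall()
    conn.close()
    return [dict(r) for r in rows]


if __name__ == "__main__":
    mcp.run(transport="stdio")
```

A business user could then ask Claude for everyone renewing in the next two weeks and hand that list to Zapier or HubSpot, which is essentially the renewal workflow Amith sketches above.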

Speaker 1:

I mean, these are the kinds of things you as a business user can do if, and only if, you liberate your data from the confines of a proprietary data prison, which is what a lot of these legacy systems are, and you can do that. You have the power to get your data out and unify it in a data platform. We've been talking about this for years now, and we've been talking about it in the context of building agentic AI applications and doing personalization at scale. What I love about MCP is it opens up the door for business users to do what their mind comes up with, but you have to have your data available in order to make that possible. So having your data wired into an AI data platform, once again, this is probably the biggest use case of why AI data platforms can be such a massive immediate ROI. You get your data in there and then you can enable your users to have all sorts of amazing use cases like the one I just described, just as one fairly small example, actually.

Speaker 2:

Wow. So basically, we have agentic applications at our fingertips, but it's all reliant on the data.

Speaker 1:

But if you then combine that with having all of your data, I mean, the world opens up. Imagine telling ChatGPT, hey, this process I just did, where I grabbed the list of about-to-renew members, I pulled them in, I generated a custom renewal reminder, I sent it out through HubSpot, I want you to call that my renewal process and I want you to remember it, and then in the future I can say, hey, by the way, I just want you to run this for me once a month. That does not exist yet, just to be clear; I'm making this up as I describe it, but it is so easy to see that future unfold where these consumer-grade AIs will absolutely allow you to run recurring tasks.

Speaker 1:

ChatGPT already does have the concept of scheduled tasks in it, by the way, but it's really weak right now. All it does is remind you, saying, hey, Mallory, it's time to run your renewal process. But what if you could say, hey, this thing I just did here, I want you to just automatically do this for me once a month and send me a summary of what happened? That's going to be a thing. Now, will you want to do it that way at the enterprise scale? Probably not, but that's a fantastic way to prototype ideas, and then you could scale them up on an AI data platform or however you want to do it. So I think it just opens up the door for business users, and again, I keep hammering that term business users, to be able to come up with new processes so quickly and with such incredible power. You know, I just get super excited thinking about it.

Speaker 2:

On that note, my mind immediately went to HubSpot. When I was doing some research on MCP, it looked like they don't yet support MCP, but that they will soon.

Speaker 1:

So what you just described will surely be available within the next few months, and you can actually do it right now, because HubSpot has a lot of endpoints in Zapier, and Zapier enabled MCP for all 8,000 systems that connect to Zapier. But yeah, HubSpot, Salesforce, NetSuite, you know, Sage, all these major vendors, I will be willing to guarantee you, certainly by the end of the year, but probably much, much sooner for the big ones, they're going to have MCP server support directly in their software.

Speaker 2:

Moving on to topic two for today, we want to discuss Anthropic's Economic Index, which is a data-driven initiative launched by Anthropic to systematically track and understand the evolving impact of AI on labor markets and the broader economy. The index is based on large-scale analysis of anonymized interactions with Anthropic's Claude AI platform, offering a unique real-time perspective on how AI tools are being incorporated into actual work tasks across a wide range of industries. Their second Anthropic Economic Index report analyzes data from Claude 3.7 Sonnet usage specifically. This report provides fascinating insights into how AI is actually being used across different industries and professions. So since launching Claude 3.7 Sonnet, Anthropic has observed some key trends. First, they've seen increased usage in coding, education, science and healthcare applications. This suggests AI adoption is expanding into more specialized and technical fields. Second, the new extended thinking mode, which we've covered on the pod before, is predominantly used for technical tasks. Computer science researchers used it most, in nearly 10% of their interactions, followed by software developers, at about 8% of their interactions, and creative digital roles like multimedia artists, followed by game designers. This indicates that complex, deep-thinking tasks are where users see the most value in AI assistance.

Speaker 2:

The third trend: the report breaks down how people interact with AI across different occupations. They found that copywriters and editors show the highest amount of task iteration, where humans and AI collaborate to write and refine content together. Meanwhile, translators show among the highest amounts of directive behavior, where the model completes the task with minimal human involvement. Perhaps most interesting is the balance between augmentation and automation across industries. Tasks in community and social services, including education and counseling, approach 75% augmentation, while tasks in computer and mathematical occupations are closer to a 50-50 split. No field has tipped into automation-dominant usage just yet. The bigger-picture finding here is that AI is primarily being used to augment human work, about 57% of usage, rather than automate it. Even in fields where AI performs well, humans remain in the loop, especially for creative and strategic tasks. My question for you, Amith: do you find any of these insights surprising? Do these feel right in line with what you've been feeling anecdotally?

Speaker 1:

I am not surprised by this. I think that the augmentation story is the really exciting story to focus on. I do think certain things will 100% be automated as these models get even slightly more intelligent or slightly more reliable. But augmentation is the big story because it's about having a bigger pie rather than saying the percentage of the pie taken over by AI is increasingly large. So it's an abundance story rather than a story of missed opportunity or AI taking over the world. So what you see here is more is getting done. More exciting things are happening.

Speaker 1:

Let me give you a quick little example as an anecdote to illustrate this. So, with Claude 3.7 Sonnet, which is the model this research was based on, and you could say the same thing, by the way, for Gemini 2.5 Pro or GPT-4.1 or a bunch of other really cutting-edge models, we found them to be so good, and Claude 3.7 specifically to be so good, that, using a new tool, one of our development teams is working on this really sophisticated learning content automation project, and what it is essentially is the ability to completely automate the generation of asynchronous learning content for our LMS. So if you imagine, the Sidecar AI Learning Hub has seven different courses, each of which has, you know, somewhere between five and 15 lessons. Each lesson consists of a bunch of slides and audio and video and demos and stuff. How do you continually update this stuff and automate the regeneration of this content so that it doesn't require Mallory or me or one of our other colleagues at Blue Cypress literally recording it the way we've recorded the content historically? So we've automated that using a mixture of different tools, and recently we migrated from SharePoint to a different file system provider called Box.com for this particular project, because it just has far superior workflow and better version control, and SharePoint's fine for a lot of things we use across the enterprise.

Speaker 1:

But we needed a much more robust file management system for the Sidecar team in order to address this particular use case. So we did not have support for Box.com in our software stack, which is based on Member Junction. We had support for local file systems, SharePoint and the major cloud providers, Google, Azure, et cetera, but we didn't have Box.com built in. Literally within minutes, one of our team members was able to get in, get Claude Code to build a complete Box.com implementation, and while they were at it, they're like, oh, let's just knock out Dropbox as well. Oh, let's do Google Drive too, and maybe a couple of others.

Speaker 1:

So there were like four or five other file-provider-type, you know, cloud storage providers that got implemented in Member Junction as just standard drivers or providers in that software, each of which would have probably taken a developer anywhere from two to three days to build and test, and they were done literally in minutes.

Speaker 1:

And so that's still augmentation, though, because the developer who was doing it had to review it and make sure it worked and still test it and all of that. But it's a force multiplier, right, because that would have taken a team of four or five developers a month to release a feature of that significance, and now it's just done, right? So the speed at which you can go is really, really crazy, I think. Going back to the economic impact, to us what that means is there's more value created rather than, oh, we've reduced the labor requirement. As a percentage of the total pie, AI probably did 95% of that work, but that work wouldn't have gotten done, or certainly wouldn't have gotten done by now, if it wasn't for that caliber of AI. So, really, what I'm saying essentially is that what the report says has been my experience personally as well.

Speaker 2:

I'm wondering if you feel there's a missed opportunity on the automation front. I know you mentioned maybe we're just not quite there yet in terms of being able to automate all these processes, even in the example you gave with MCP on the previous topic. But do you feel there are opportunities to automate, and we're just kind of focusing on the augmenting piece?

Speaker 1:

Well, I think the question is, what's the bar that you have to pass in order to feel comfortable with complete autopilot for a given process, right? So this is an area where, understandably, most association leaders have a degree of concern handing over the reins to an AI to completely make decisions on a variety of things, whether it's customer support, outbound marketing, whatever the case may be, or, for what I described, coding. And I think it's appropriate to have what we often call a human in the loop in an agentic workflow, where there's some expert human reviewing the work that the AI produced. I still think we're at that phase. The question is really, are we being overly cautious? Are we being overly conservative? Is the AI already better than the average human in a particular domain, and do we hold the computer to an unreasonably high standard in terms of what level of perfection it has to achieve to earn quote-unquote full autopilot ability? And I actually think that's probably true to some degree. It's probably still fair, because the downside risk of screwing up external-facing communication is pretty high. At the same time, there are ways to mitigate that, right, where you can tell people, hey, listen, this is an AI response. Be transparent, say it's not always perfect. Please let us know if there's anything that's off. We have humans in the loop that are available at all times to help you out, but we're optimizing for being super responsive at a scale we can't do with human-only responses, right? So I think there are ways to balance that out. My general view is that I think people are going to get more and more comfortable with automation, complete automation, over a period of time.

Speaker 1:

The first use case in AI that I really was pushing on years and years ago was personalization, and with our AI newsletter product Rasa, the first thing we were doing was actually super simple. We were simply saying, hey, association folks, let the AI choose which articles Mallory should get in her newsletter versus which articles Amith should get in his newsletter. So it's the same newsletter. The overall structure of the newsletter, the copy, a lot of it's the same, but we're just inserting different articles. So if I have 30 or 40 articles that might be good to send out to people for a given issue, each person gets the articles that may be of greatest interest to them, right? It's a simple concept. Even that, though, people were totally freaking out about, like eight years ago when we started doing this, and even five years ago, and some people are super uncomfortable with that today. But most people, once they start letting the AI pick the articles on a per-individual basis, totally forget about it after about an issue or two, because they're like, wow, our members love this, it's totally performing, and the AI is at a scale I can't keep up with. And we're about to release a new capability with Rasa where it'll actually write a lot of the copy and summarize articles, and it'll also write opening paragraphs for marketing emails with the new campaigns product we're launching.

Speaker 1:

And right now people are going to be a little bit concerned. They're going to say, hey, I'm sending out 30,000 emails to 30,000 members. Do I really want to have the AI write a call-to-action paragraph that asks them to come to our annual conference, that takes into account their greatest areas of interest and all that stuff? Because I can't read 30,000 CTAs. And some people will say no, and some people will say yes. We're ones who are saying yes; we're about to roll this out for digitalNow. So those of you, 20,000-plus folks that are on our mailing list, are going to see digitalNow invites that are hyper-personalized for you in the next 30 days or so, and you can tell us what you think.

Speaker 1:

But, you know, it's our job to experiment, maybe a little bit past the line of what some folks are comfortable with. I think very soon, though, my point is, people will become very comfortable with that, and they're just going to assume, oh yeah, of course it can write that opening paragraph, that's no big deal. So our comfort level, or like your experience with Waymo, right? The first time, you experienced, oh my God, I am not sure this is going to be comfortable, and then literally within one or two rides it's, wait a second, this thing's way more reliable than the average Uber driver. So I think automation is going to happen. Full automation is going to happen very quickly, way faster than we expect. The tipping point happens, and all of a sudden there's not even a concept of looking back. I think within the next year, that's going to happen in many domains.

Speaker 2:

As an aside, I've heard this from the Rasa team, to your point about how eight years ago the association space was really scared of this concept, that back in the day they wouldn't even lead with artificial intelligence. The Rasa team didn't want to put AI front and center on the website because that was really scary to people, which I think is crazy, right? Because now it's like, we have AI this, AI-powered that feature. And also on the Waymo front, I just had a memory pop up where I was about to enter the Waymo and I had this moment where I literally said out loud, oh, I'm scared, and these people walking by us on the street laughed. They were like, oh, she's scared of the Waymo, you know. But you're right, once I got in there, once I got more comfortable with it, I actually preferred it to a human driver.

Speaker 2:

I want to talk about one more thing, Amith. That was a really interesting example you shared with Box.com and the learning content automation. Would you call that, and I know we'll probably cover this in a full topic one day on the pod, would you call that a version of the vibe coding that we've kind of heard thrown around in recent times?

Speaker 1:

Yeah, you know, I'm glad you brought that up. That term, I believe, was originally coined by Andrej Karpathy, who's one of the most amazing AI researchers on the planet. I have tons of respect for him. And I think, for people of that level of capability, the idea of vibe coding is basically this. For those of you that haven't heard it a lot, it's this idea that you have a person sitting with a tool like Cursor or Visual Studio Code and just having a chat with the AI, and the AI is doing all the coding, and you're kind of going at this speed that feels somewhat superhuman. You're not really looking at the code or doing the coding yourself; you're coding in the sense that you're talking to the AI and it's building a program for you. And I love the concept in the sense that you're taking advantage of AI, particularly helping people who aren't necessarily professional software developers to have an unlock and be able to build things. What I do not like about it is I think it makes it seem like you can kind of hand over the reins for building software to the AI by itself, almost, which we're not quite ready for, for a few reasons, one of which is that, as cool as this stuff is for rapidly building software, if you just kind of hand the reins over to AI and you have no idea how the software was constructed, the AI is going to essentially build a different version of it every time you ask it for a new program. So you can say, hey, I want this particular tool to be built. One day it'll build it one way, the next day it'll build it another way, and, yes, you can give it some rules and stuff.

Speaker 1:

But as a non-developer, if you don't have anybody helping you with this, you can sometimes get things that make a lot of sense and sometimes get absolute garbage, and it might actually be functioning garbage. When I say it's garbage, it sounds like, how can you have functioning garbage? Because the functioning garbage essentially is software that's either really inefficient, is brittle, uses techniques that are not secure, exposes you to risks that you don't want, and you don't necessarily have any idea about this. So I don't think there's any future where professional software architects and developers aren't involved in this at some level. Maybe they're AI at some point. But the term vibe coding, I think, makes it seem to people like, you know, they can just go build whatever they want at the enterprise level. For personal tool use and for using MCPs, it's a different story.

Speaker 1:

So I think that vibe coding as a category does fit into the context of augmentation if it's done right. In fact, my example earlier, you could say, is a form of vibe coding, right, where you had a professional software developer building, you know, some significant functionality, and Claude Code, the CLI tool, did that, but it was reviewed by a team of experts to make sure that it was built in a certain way, where it fits into a very structured framework, it's secure, it's reliable, it's built in a robust manner. Last thing I'll say about that is AI will often rebuild code that already exists, because it hasn't really thought of the fact that, oh well, we already have a piece of code that does this particular thing.

Speaker 1:

Because it's so fast at building something, it doesn't necessarily do a great job of reuse, which reduces the resilience of the software, makes it unnecessarily complex. So there's a lot of things you have to watch out for. But again, my goal in these comments about vibe coding isn't to cast a shadow saying hey, don't do it. Just don't think of it as the 100 percent solution. It's much more of an augmentation scenario. So it very much supports what you're just saying, but it's just something that you should also tie back into some kind of professional review process.

Speaker 2:

Okay. So I think what you're saying is keeping that human in the loop is important. I know we covered, probably a few months ago at this point on the pod, that the CEO of Anthropic, Dario Amodei, said that within a year from that time, every single line of code would be generated by AI. So if that's the case, we can vibe code, but it's important to have a human in the loop that can review it and kind of address all the issues that you just mentioned.

Speaker 1:

Yeah, or maybe another AI in the loop that reviews it, and this other AI is more deeply trained on your approach to software development and that can be set up once and then reviewed.

Speaker 1:

So there are kind of levels of abstraction that we can get to that will help us further automate. I agree with Dario Amodei on that statement. I think all code will be automated at some level. But where we are right now, I'm simply trying to highlight both an excitement that is there for us, and I hope everyone shares it, but also a bit of caution, because at the moment, as good as these tools are at building working software, they can absolutely build software that creates problems for you that you don't realize, right? So you have to be very thoughtful, and there's a lot more that goes into it than just, you know, downloading Cursor or Windsurf and just starting to talk to it and seeing what happens. It's an awesome thing to go try out, but don't press the button to take it live on your website without someone that has some expertise in this checking it out for you. Absolutely, absolutely. Last thing I want to quickly say on that is, I do think people should look at the trend line and understand that this means software development is going to become extremely accessible to all associations.

Speaker 1:

A lot of associations have said, hey, we don't want to build this custom member application on our website, because it's expensive to maintain, it's expensive to build, we don't want to deal with that. Let's just use the standard out-of-the-box tool that the vendor provides in the AMS, and it's a little bit less efficient for my members to use, but it's good enough. You can re-evaluate that position, right? You can think about it from the viewpoint of, well, maybe I can have a vendor help me out, but it's going to take one-tenth the time to build it, so my costs are going to be lower. But also, if I need to maintain it, the cost of that is way, way lower too.

Speaker 2:

I'm sure we'll be keeping a close eye on vibe coding in the coming months and years, even though you hate the term. Not the actual concept, but the whole phrase, vibe coding. It is a little bit, I don't know.

Speaker 1:

I don't know. It just seems like something out of an episode of Dumb and Dumber or something. It just sounds weird to me, but I'm a little bit of a curmudgeon, I guess.

Speaker 2:

Coding on vibes. I think it's because the word vibes is maybe popular with the youth nowadays, so that might be what you're associating it with.

Speaker 1:

Yeah, probably. Well, everybody, we hope you all have great vibes for the rest of your week, and we will see you all next week. Thanks for tuning into Sidecar Sync this week. Looking to dive deeper? Download your free copy of our new book, Ascend: Unlocking the Power of AI for Associations, at ascendbook.org. It's packed with insights to power your association's journey with AI. And remember, Sidecar is here with more resources, from webinars to boot camps, to help you stay ahead in the association world. We'll catch you in the next episode. Until then, keep learning, keep growing and keep disrupting.