Sidecar Sync

6 Obstacles Preventing Your Organization From Embracing AI | 43

β€’ Amith Nagarajan and Mallory Mejias β€’ Episode 43


In this episode of Sidecar Sync, Amith and Mallory delve into the hurdles associations face when trying to incorporate AI into their operations. From time constraints and skepticism to the need for strategic policies, they explore the common challenges and offer practical advice on how to overcome them. Whether you're just starting with AI or looking to deepen its integration into your organization, this discussion will provide valuable insights. Plus, they share examples from their recent experiences at industry events and offer tips on making AI work for you.

πŸ›  AI Tools and Resources Mentioned in This Episode:
ChatGPT ➑ https://openai.com/chatgpt
MeetGeek ➑ https://meetgeek.ai/
Suno AI ➑ https://suno.ai/

Chapters:
00:00 - Introduction
02:35 - ASAE Highlights
07:15 - Obstacle 1: "We're Too Busy for AI"
15:23 - Obstacle 2: "We Want to Be Thoughtful, Not Rushed"
18:42 - Obstacle 3: "We Can't Start Without AI Policies"
25:44 - Obstacle 4: "Mixed Feelings on AI Adoption"
32:09 - Obstacle 5: "Free vs. Paid AI Tools"
38:27 - Obstacle 6: "We Tried AI, Now What?"

πŸš€ Follow Sidecar on LinkedIn
https://linkedin.com/sidecar-global

πŸ‘ Please Like & Subscribe!
https://twitter.com/sidecarglobal
https://www.youtube.com/@SidecarSync
https://sidecarglobal.com

More about Your Hosts:

Amith Nagarajan is the Chairman of Blue Cypress πŸ”— https://BlueCypress.io, a family of purpose-driven companies and proud practitioners of Conscious Capitalism. The Blue Cypress companies focus on helping associations, non-profits, and other purpose-driven organizations achieve long-term success. Amith is also an active early-stage investor in B2B SaaS companies. He’s had the good fortune of nearly three decades of success as an entrepreneur and enjoys helping others in their journey.

πŸ“£ Follow Amith on LinkedIn:
https://linkedin.com/amithnagarajan

Mallory Mejias is the Manager at Sidecar, and she's passionate about creating opportunities for association professionals to learn, grow, and better serve their members using artificial intelligence. She enjoys blending creativity and innovation to produce fresh, meaningful content for the association space.

πŸ“£ Follow Mallory on Linkedin:
https://linkedin.com/mallorymejias

Speaker 1:

AI can probably save you 10 to 50% of your day. It can automate so much of what you're currently doing that it'll free you up to have, you know, higher-order opportunities in your life. Welcome to Sidecar Sync, your weekly dose of innovation. If you're looking for the latest news, insights and developments in the association world, especially those driven by artificial intelligence, you're in the right place. We cut through the noise to bring you the most relevant updates, with a keen focus on how AI and other emerging technologies are shaping the future. No fluff, just facts and informed discussions. I'm Amith Nagarajan, chairman of Blue Cypress, and I'm your host. Welcome back, everybody, to another episode of the Sidecar Sync. We're, as always, enthusiastic to be back with all of you with all sorts of interesting thoughts, ideas and challenges in the world of associations and artificial intelligence. My name is Amith Nagarajan.

Speaker 2:

And my name is Mallory Mejias.

Speaker 1:

And we are your hosts. Before we jump into our exciting episode, let's take a moment to hear a few words from our sponsor.

Speaker 2:

Today's sponsor is Sidecar's AI Learning Hub. The Learning Hub is your go-to place to sharpen your AI skills, ensuring you're keeping up with the latest in the AI space. With the AI Learning Hub, you'll get access to a library of lessons designed to address the unique challenges and opportunities within associations, weekly live office hours with AI experts, and a community of fellow AI enthusiasts who are just as excited about learning AI as you are. Are you ready to future-proof your career? You can purchase 12-month access to the AI Learning Hub for $399. For more information, go to sidecarglobal.com/hub. Amith, I just saw you in person in Cleveland at ASAE Annual and, for those of you that don't know, that's the American Society of Association Executives annual meeting. How was that for you?

Speaker 1:

I had a great time. I was only there for, I think, maybe 36 hours or 40 hours on the ground. I had some other things I had to go do, but it was amazing. I don't go every year, but I try to get there every other year at least, and it's just a fascinating time to get together with people because of all the things that are happening in the world and with technology and with associations. So I had a bunch of fun. It was great to reconnect with people and meet new people, hear a lot of thoughts that are on people's minds right now. So I had a lot of fun. Plus, the big thing for me more than any of that was it was two days of a reprieve from the weather in the South. It was so nice up there.

Speaker 2:

It was so nice. I actually didn't even look at the weather when I was packing, and not that it was too cold, but it was kind of chilly and I thought, oh, I just assumed it would be blistering hot, being that we both live in the South. But it was really beautiful weather up there.

Speaker 1:

Yeah, it was great being close to the lake. Cleveland really did a nice job with their downtown. It was pretty. There were parks, there were places to walk. It was just a great place to visit. If you haven't been to downtown Cleveland, I would definitely recommend putting that on your list. It might not be at the top of everyone's list of places to go visit, but it's cool, and NFL season's starting up, so that was fun to see people walking around with Packers and Browns jerseys the first day we were there.

Speaker 2:

There was a lot going on in downtown Cleveland. There was also an MGK concert. If we have any listeners or viewers on YouTube who like MGK I don't know a ton of his music, but I know that he did thwart a lot of people from getting to our happy hour, so shout out to MGK for that one.

Speaker 1:

Oh, is that what the disruption on the street was?

Speaker 2:

I think so. There was only one path to get to our happy hour, I believe. Well, I'm sure there were alternate paths, but the main path was blocked off for the concert, so people who came had to walk a good bit. So we truly appreciate that.

Speaker 1:

For sure. It was great to see people at the Sidecar Happy Hour that evening and now that I know that, actually I'm really impressed that we had a pretty good turnout.

Speaker 2:

We definitely did and I've got to say we probably have some new listeners to the podcast just because we had so many people stop by the Sidecar booth at this event. We had such great feedback from people who were familiar with Sidecar, people who weren't, and wanted to learn more, and some fans of the podcast, which was really interesting to see in person because, again, amit and I don't meet a ton of you, but it's always exciting to hear who's listening and what you've learned and what your favorite takeaways were. So, to the new people, welcome and to the people who have been joining us, thank you so much for tuning in.

Speaker 1:

Yeah, and on my end I spent a lot of the time at the show walking around just talking to people, and also, I think we had a few hundred copies of our new book, Ascend, second edition, and I believe they were all kind of flying off the shelves, or out of the booth, and I handed out quite a few to some friends and folks that I met at the event. So that was cool too, because it's a pretty heavy book to walk around with. It's almost 300 pages of content for association leaders to learn about AI, and people were walking out with these things. It was great.

Speaker 2:

It was fantastic. We had, I want to say, 200 copies, Amith, and we gave them all out. People were so excited to see us giving away hard-copy books and, dare I say, I didn't walk around the expo hall a ton because I was mostly at the Sidecar booth, but I don't really remember seeing many booths with books. So I think people were excited by that, and they were even more excited to hear that it was free.

Speaker 1:

Yeah, and I think it's a nice giveaway.

Speaker 1:

I mean, certainly, hopefully it's a topic that's high on people's lists to learn about and most people are still at the very early phases of their learning journey.

Speaker 1:

The day after, or actually the day of, the second day of the show, I flew to Chicago and I had a handful of meetings there with clients that we work with, and it just kind of reaffirmed some of the thematic things: depending on the organization, there are different phases of the journey, but they're all fairly early on in their learning journey with AI. Which is exciting, but it's also an opportunity to kind of refresh our thinking. You and I are kind of in our little echo chamber of Sidecar and Blue Cypress land, where we talk about AI constantly. We are massive practitioners of using every ounce of AI capability we can. We're developing software around AI. So to talk to people and hear their struggles and their challenges and their concerns about where they are day to day was really actually quite refreshing for me personally, because I just don't get that much of it; it's the reinforcing loop of the people who are close to us.

Speaker 2:

Absolutely. Being in person like that and getting to collect so much information firsthand from the people that we're talking about all the time, from the people that Sidecar is serving, was so empowering that it inspired the topic for today's episode, which is six obstacles stopping you from fully embracing the power of AI in your organization. So at the booth, as I mentioned, we had lots of people stop by, and I started to notice some patterns fairly quickly in what people were saying to us. Different challenges across the board. We had big associations stop by, small associations, in between, but generally we kept seeing the same challenges over and over. So we wanted to dedicate today's episode to addressing those challenges head-on.

Speaker 2:

Whether you were at the event or not, I think you will probably relate to most of these, and we're going to troubleshoot them live, together. The first obstacle, which I heard probably the most often and which I think might be the most difficult to address: "We're thinking about AI, but we are just so busy right now; we really don't have the time." Amith, I know you and I have covered that on the podcast before, but for the sake of this episode I want to dive in a little bit deeper. If an association CEO came to you with that challenge, what would be your advice?

Speaker 1:

Well, I mean, this may not be a popular answer, but I'd tell them that if they're too busy right now, they're going to have a lot of free time when they're unemployed. And so my bottom line is, you better learn this stuff, because your job, no matter if you're the CEO or an entry-level person, your job depends on it. People are not going to be replaced by AI by itself, I don't believe, at least not in many of the jobs in this sector, but absolutely people and associations will be replaced by people and other entities that embrace AI. So AI is both the most amazing opportunity in the world, but it's also an existential threat if you ignore it. So I can't think of a more important thing to go do than to learn the basics of AI. You don't need to be an AI expert, but if you're completely unaware of the capabilities of these tools, it's like walking around saying, hey, we're in the home construction business and all of our employees use hand tools, but every other company building homes learned about electricity and they have power tools. Who do you think is going to do better long term, right? Unless it's truly some craft where it's artistic and artisan-type building, which is wonderful, but that is not the scale solution or the sustainable solution in a sector like this. And of course most associations are not in that type of realm.

Speaker 1:

So you know, I'm sorry if the way I'm positioning it kind of rubs people the wrong way, but I'm trying to get people's attention to say, look, this is that urgent that you have to pay attention to this. There are plenty of things you can stop doing. Build a stop-doing list. Think about what you're currently doing that isn't going to change that much in the next 12 months. Look at your list of to-dos and say, hey, if I didn't do this, if I didn't take care of this topic, would that affect my association's viability or my personal job viability in the next 12 months, or even the next six months? So between now and the end of the year, five months-ish left, right, four and a half months left. If I just stopped doing this thing for four and a half months, will I still be employed, and will my association still exist at the end of that four and a half months? For most things, the answer is yes.

Speaker 1:

I would argue that if you don't pay attention to AI, maybe you won't be out of business or out of a job in four and a half months, but it's possible, and certainly in the next year or two it's very likely, because the world is moving so fast. So it's a little bit of tough love that I have to provide in terms of responding to that. You've got to prioritize learning this stuff. And to the people who are already moving along down this path: I commend you, because I know it's difficult. I realize it's really hard, especially in a volunteer governance structure where your top-level priorities are set by your board, and your board may not buy into it. I totally get it. I empathize deeply with that. At the same time, if you're a leader in the organization at any level, you have to push forward on the things that are going to drive change, and this is going to be the most amazing change in a lot of ways for every organization.

Speaker 2:

You mentioned before the idea of the stop list and I love that. How often would you recommend looking at that list, writing something on that list in practice, like once a month? Is that something you should be looking at weekly to see if there's anything that you could potentially stop?

Speaker 1:

I think so. I mean, at a minimum on a quarterly basis, you should be looking at what your stated priorities are for the next quarter and then at what you can stop doing. I do think that a more frequent refresh of that would be appropriate.

Speaker 1:

I look at it also in terms of meetings that I'm attending. You know, meetings kill a lot of time, and I look at it and say, am I really needed for that meeting? Can I just skip it? And it's not because I don't want to meet with the people; it's because I'm trying to protect my time, and I'm asking: do I need to be part of yet another conversation on a certain topic? "Does the meeting even need to exist?" is of course a great question. Can we kill it? But as an individual, can you opt out of certain meetings?

Speaker 1:

That can oftentimes reclaim a handful of hours a week pretty easily, because people kind of go like drones into these meetings and just sit there, and most of the time they have nothing to say; they're just listening. That's a really inefficient use of time. That type of meeting can mostly be replaced with asynchronous communication, where people write up a status report that takes them 10 minutes to write and five minutes to read. We do that at Sidecar and at Blue Cypress across the board. We do have regular meetings, of course, but we expect our team members, depending on the role and various things, to send either a daily or a weekly what we call 5-15 report, which literally takes no more than 15 minutes to write and no more than five minutes to read for the other people on the team or the supervisor, and that saves a crazy amount of time. So there are a lot of techniques like that.

Speaker 1:

I think there's tons of great advice out there on optimizing your schedule and this and that, and a lot of people feel like they've already read all those books and they're kind of tired of hearing the same thing. But the reality is, most people haven't optimized their schedule hardly at all, and they're attending a lot of meetings that aren't necessary, at least for them as individuals. Another thing I'll throw out there while I'm on the topic of meetings: Amazon and Jeff Bezos were famous for the two-pizza rule, where they wouldn't have a meeting where you couldn't feed everyone with two pizzas. Depending on the organization, if I was in the room that might just be me, but generally it's not that many folks that can be fed by two pizzas: four, six, maybe eight people. So it's not these big giant meetings with 14 or 18 people, for the most part.

Speaker 2:

So it sounds like the idea here is: we understand you're busy, and I would say most people probably say that they're busy, but you've got to do something to optimize your schedule, because educating you and your team on AI is urgent. Would you agree with that?

Speaker 1:

Yeah. I mean, what else is happening in the world right now that could completely change the nature of your sector, your own association, your business and your job? What we're saying is that intelligence is fundamentally being commoditized, which is a crazy thought, but that's exactly what's happening, right? We're not seeing entire white-collar jobs being fully automated yet at the job level, but you are seeing it at the task level. And what is a job but a bundle of tasks? So it's a percentage of the job: as more and more of the tasks get automated, or become automatable, that's going to displace a lot of those things.

Speaker 1:

That's one side of it: how do we do what we currently do faster, better, cheaper? That's the quest we've been on since the beginning of time. We're a species that's advanced because of our intelligence, but also because we're toolmakers, and we've been that way since the beginning of time. The point is that this is the most powerful tool we've ever held, and we don't know how to use it. So people who know how to use it are going to run circles around those who don't.

Speaker 1:

The other side of the ball, though, of course, is: with these tools, what can we do that we could never do before? So, other than saying, hey, how can we do what we currently do better, faster, cheaper, what are the things that we could not do but now we can? Like, for example, taking this podcast and simultaneously translating it into as many languages as we want, right? Which we haven't done yet with Sidecar, but that would be a wonderful thing to do, and the cost of that would have been unapproachable for us and most organizations up until now.

Speaker 1:

Now it's free, and that's a good example of a task that's been automated. There are a lot of things like that that you can do, and, of course, our book Ascend is chock-full of ideas like that. But even if you think about it purely on the how-do-we-get-more-efficient side, the last thing I'll say about "you don't have time" is: well, that's why you need AI, because AI can probably save you 10 to 50% of your day. It can automate so much of what you're currently doing that it will free you up for higher-order opportunities in your life.

Speaker 2:

What a great point. A secondary sub-obstacle, I'll say, to this one is to me a little bit more valid. Not that being busy is not valid, but I think this one is more justifiable, at least: "We want to be thoughtful about AI and not rush into anything without strategy." I heard this a few times at the booth: "We want to educate our team, but we want to really think it through slowly before we do." What do you say to that, Amith?

Speaker 1:

I think there are two sides to that. I do think it's prudent to have some thoughtfulness, some guidelines, some policies even, around AI. But how do people who have no knowledge or experience with a given technology write the policy and create the guidelines if they've never done it themselves? It's like saying, hey, I'm going to teach you, Mallory, how to drive a car; I've never driven one, but I've seen a car. Or, I'm going to teach you how to fly a rocket ship. I kind of understand what rocket ships do, but I've never been in one, I've never been near one, I have no idea how to fly one, but I'm going to teach you how to fly one, right? That's the idea of setting policies and guidelines for others, or for your whole team, without having any hands-on experience. So, on the one hand, I appreciate that there's a need to protect your confidential and sensitive data. That's a major hot button, and it should be, and that's something appropriate to address. But the flip side is, if you try to get too rigorous about your structure and your policies and guidelines too quickly, you're going to stifle innovation. And, quite frankly, the handful of people who are hell-bent on doing whatever they're going to do will do it anyway and work around the policies. So your policies will likely be largely misinformed if you write them too soon.

Speaker 1:

So my suggestion would be: do some basic training, get some understanding of what these tools can do at a very high level, and then formulate a preliminary policy. And when you roll it out, set expectations. Unlike most things that come from high up in association land, which are kind of etched in a tablet and which people think are literally impossible to change, like the bylaws of the association, policies are perhaps not as hard to change as that. But the point is that most people in the association culture are used to policies being quite persistent; they last a long, long time. You'd have to set the expectation that the policy is likely going to adapt iteratively on a high-frequency basis. So that's a really important part of it.

Speaker 1:

But I do 100% support the idea of policies and guidelines, and that's also why training is part of that. I would mandate training, and that's another topic we'll be discussing. The whole point to me is, there are times to lead where you're saying, yeah, we're going to let people ebb and flow at their own pace, and there are times to lead with kind of a mandatory mindset, where the leader stands up and says: this is a critical issue, all hands are on deck, this is what we're going to go do, and doing it is part of your job. I think this is the time for leaders to stand up and do that, when it comes to AI training but also in this area. So to me it's a two-step thing. Number one, get some basic familiarity with AI; if you know nothing about it, start by learning a little bit. Then form an initial policy and set the expectation that it's going to change. And then, of course, learn more, iterate, and make the policy adapt to your needs.

Speaker 2:

This flows in really well to our next obstacle, which was exactly that, Amith: "We don't want to do AI training until we have policies and guidelines in place." Now, you touched on that a bit just now, but I'm wondering, can you share an actual, maybe general, framework? You talked about not stifling your team, and you've mentioned this idea of a sandbox before. Could you give us kind of a high-level overview of what AI guidelines might look like?

Speaker 1:

Well, first of all, the most common thing people are concerned with is the potential loss of control over their intellectual property, whether it be structured data from an AMS or something like that, or their content. And that's an appropriate concern, because if you have your people just kind of throwing your sensitive data into, particularly, free tools from any random vendor that they happen to come across, you're basically giving away all your stuff. Who knows who has access to that? So some organizations will pay extreme attention to the terms of service with OpenAI and ChatGPT for the paid version, but then turn around and have absolutely no issue attending a meeting about something super confidential when a tool like MeetGeek or some other random note taker shows up in the meeting. It's so-and-so's AI note taker from some unknown company, and, no problem: hey, come on in, record everything I'm saying, listen to me talk, take my video, do whatever you want with it. So it's really weird; it's like this separation that people have in their minds. I think, because OpenAI has been this lightning rod for concern, which is appropriate because they're the leader in the space, people forget it's not really any different from working with any other type of software company.

Speaker 1:

You have to make sure you're dealing with a vendor you can trust. That's the first thing. You have to make sure the terms of service are reasonable and very clearly say they cannot use your data for purposes other than to serve you. This idea that the AI models are somehow always going to use your data to train the next version of the model, which is the concern people typically have, has to do with contract terms and trust. There's nothing inherent about AI models that makes it so future AI models will train on your data. That only happens if a company has the right to do so and tells you they're going to do it, like if you're a free user of a product. Just remember: you're not a customer, you are the product, just like with Facebook or something like that. If you're a free user of ChatGPT, the terms of service are different than if you pay 20 bucks a month. So pay attention to that. That is important.

Speaker 1:

But the flip side is, I wouldn't lock down all the tools and say you can't have access to anything else. I'd probably pick one tool in each category. So, one tool for conversational AI, like ChatGPT or Claude or Gemini. I'd pick a tool for image generation, either the built-in ones in those environments or something like Midjourney. And maybe a couple of other categories, maybe a note taker. I'd have a few approved tools that we know are mature enough, where the organization can say: hey, listen, we know these are companies we can trust, we have commercial agreements with them, or we've reviewed the terms of service and our stuff's not going to get ripped off. Use those. And then say: hey, there's a process; if you're interested in using something else, just let us know and we'll do a quick review.

Speaker 1:

Make sure it's not, you know, a North Korean state actor sponsoring a free meeting note taker in order to steal your data, something like that, right? So you've got to think a little bit like that. But open the door for people to come to you with new ideas, is my point, so that as new tools arrive you can evaluate them. For example, we've talked about Suno AI on this podcast, a text-to-music AI model, which is really cool. Do you want to allow people to use that? That may not be a privacy issue in terms of your content; you're probably not going to feed Suno your sensitive data. But do you want music being generated under your brand? That's a different issue, right? So I think there's a lot to unpack there. There's no quick answer I can really give, other than to have a flexible mindset around it.

Speaker 2:

And I think that goes back to your point, too, about the fluidity of the whole policy in the first place. Suno is something that popped onto the market, at this point, maybe six-plus months ago, but if that weren't a part of your AI policy, it would need to be, right? You want to make sure that you have something flexible that you can add to, and that your staff is aware of that as well.

Speaker 1:

Well, and there's also, even within the tools themselves, the fluidity, as you put it. Yesterday, actually, somebody on our team asked me, hey, what do you think about this feature? We're paying for ChatGPT Team; that's our primary AI tool that we pay for across Blue Cypress. A lot of people use Anthropic and other things, but that's the one where we have the terms of service reviewed, and we have a paid account where we've opted out of all the things that we don't want and so forth, and control it at the company level, which is good. But they added a feature recently to ChatGPT where you can have it authenticate automatically with your cloud storage provider. So if you're a Google Docs user, you can have it authenticate with Google Docs to automatically be able to look at your Google Docs. Same thing with Microsoft. And while, on the one hand, that's just an easier way to share a file, rather than downloading it from the cloud provider and uploading it to ChatGPT, it makes me a little nervous personally, because you're essentially authorizing this SaaS company to have access to your private file repository, giving them pretty much full access to it; that's the way that integration works. So I suggested to this person that it's just better to download the specific file and then upload that specific file.

Speaker 1:

In the case of OpenAI, you know, I think they're a very interesting company on a lot of levels. They've done some phenomenal AI work, but I also don't really trust them, partly because of the changes that are going on leadership-wise: there's been a mass exodus of the founders, there's the ousting and then return of Sam Altman, there's all this opacity with respect to their approach to alignment and safety, and on and on. But yet they are the leading model provider, and they have a very good product that's cost-effective and so forth.

Speaker 1:

Would I be more comfortable allowing Anthropic to have access to my Microsoft files? Maybe, but I don't really know that much about them either. What I think is going to happen is that a lot of that stuff is eventually going to end up being boiled into your main platform. You already have Microsoft Copilot, and Google's assistant as well. Those things frankly kind of suck compared to the full-blown ChatGPT or Anthropic tools; they're always a little bit behind because they have to serve such a wide audience. Like in Microsoft Copilot, if you go into Word, on the one hand it has access to everything in your Word document and, theoretically, your other documents, but it really is super limited right now. That's going to change over time, and you probably will just live in that environment, is my guess, and then you won't have to worry about it as much.

Speaker 2:

I think what could be a bit worrisome about the example you just provided is the fact that you and I, or our CEO Johanna, may not have known about that feature, being able to give OpenAI access to all of your files, until an employee brought it to you. So I can imagine some listeners who are leading an organization are probably concerned: how do they even predict what they need to protect themselves from without that kind of intel?

Speaker 1:

Yeah, totally.

Speaker 2:

Our next obstacle, Amith. You will think this one is interesting, and this is a direct quote: some of our employees are really excited about AI and leading the charge, and some, quote, "believe the earth is flat," end quote. How would you recommend leading teams with mixed feelings across the board about this new technology?

Speaker 1:

I'm all about providing people as much information and insight as possible and hearing feedback. At the same time, I'm a big fan of this idea of making a decision and then pushing the organization forward on it. So if you, as the leader, or your leadership team, have decided this is an important, big thing, of course you should listen to people's concerns and you should educate them to the best of your ability. But after a certain point you say, this is what we're doing and we're going forward with it, and that's it: get on the bus if this is where you want to work. There needs to be more of that mindset, in my opinion, in the broader association sector.

Speaker 1:

There's way too much of a consensus-oriented mindset, where everyone has to be on board, or, if not everyone's on board, those individuals can opt out of doing certain things.

Speaker 1:

A lot of people are not making these things mandatory, but to me that just points to an issue with the culture more than anything else.

Speaker 1:

And again, I understand there's lots of nuance to this, and I'm kind of approaching all problems with a sledgehammer when I talk about it, but that's intentional, because the world doesn't care about your nuances. The world doesn't care about your problems. It only cares about your abilities and what you produce of value. So if there are competing forces out there, other associations possibly, or perhaps commercial organizations that are in your space or can essentially displace the value of the services you provide (ChatGPT itself, right, can do a lot of the things people come to associations for), you've got a big problem. So I don't think it's time right now to get to consensus. I think now is the time to take bold action and to demand that your team goes with you. And maybe some people opt out. That's okay. That's not the right place for them to be. Maybe it has been the right place for them, but it's not anymore. So there's some of that thinking that I think needs to be more prevalent in this space.

Speaker 2:

And then I'm sure education is also a piece of that as well. Ideally, you educate your team, you show them what's possible. Maybe some of the flat earthers hop on board, I don't know, maybe they don't.

Speaker 1:

Some people are going to be pessimists but will ultimately come on board. That's probably okay, because there are people who are just slower adopters, or late adopters, as long as they know they have to do it. But there are some people who are just flat-out obstructionists, and those people have to go. It's not just that they're slow to adopt it; they are actively trying to undermine it because they disagree with it.

Speaker 1:

There's something fundamentally mismatched between their system of values and beliefs and what your organization's system of beliefs is on a go-forward basis.

Speaker 1:

And again, these might be wonderful people, wonderful human beings who have served your organization well. But where you, as the leader, are taking the organization may diverge from where these people want to go, and the bad news is that might mean some of those people are not part of your future. But the good news is there are other people out there who probably would love to be part of your future, especially if you have a bold vision for how you're going to transform your sector or your profession, leveraging AI to advance your mission. That's exciting, and so the people who will get on board with that may not all exist within your organization's walls right now, and that's also okay. Many associations are so focused on keeping everyone happy, which is basically impossible, and largely just something I wouldn't want to try to focus on in any organization. There's no way to do that, even in a 10-person, 10-staff association.

Speaker 2:

You certainly can't do that in a larger one. And to be clear, this isn't about people who are cautious and who want to move more thoughtfully through this; this is people who are obstinate and who don't want to embrace it. I mean, I'm curious, did you see the same thing with the internet? Were there people that were like, absolutely not, never, won't do it?

Speaker 1:

Yeah, for sure. I mean, I talked to tons of people in the nineties and the two thousands who were like, yeah, you know, we don't need a website, or, websites in general aren't going to change our association that much. Same thing with mobile, same thing with social media, same thing with whatever the technology shift is. The difference is, all of those have been passive technologies. Right, if you think about it, it's about distribution, it's about lowering the cost of compute, it's about changing things so that you can do computing anywhere with mobile devices, connecting with other people more easily. All of these things have been passive technologies. They've required a user to basically activate them and tell them what to do, whereas AI is an active technology. Right now, it's kind of somewhere in between, where you still have to tell it what to do.

Speaker 1:

But as these systems become not only more capable but semi-autonomous and, in some cases, possibly fully autonomous or mostly autonomous, this is a different thing. It's a completely different animal. It's an intelligence form, right? Whatever you want to call it, augmented intelligence, artificial intelligence, Apple Intelligence, you can call it whatever you want, it is the same thing. It's a shift in capability because it's an active form of technology, and we've never experienced that. So, of course, understandably, all of us, including everyone here at Blue Cypress and Sidecar who are playing with this stuff all the time, we're all confused. None of us know exactly what we should do next, because none of us have encountered this type of technology before. But that's precisely why you have to go figure it out.

Speaker 2:

And keep this in mind all those obstructionists, as you called them, who were against the Internet and against social and against mobile, I'm fairly sure at this moment are using the Internet and have mobile phones and probably have social media accounts. So think about that, I would say, in the greater landscape for the future.

Speaker 1:

Yeah, and in those examples, they had many years to adapt. Even early proponents of, let's say, the iPhone, people who personally enjoyed using the iPhone in their lives, or social media, or were just big users of the web, might have said, well, for my group that isn't necessary, it's not appropriate. There's always a reason why it's not applicable to the association, because the mindset has been that it's a protected space: oh, we are the association for X and therefore we are somehow insulated from the need to drive this kind of innovation. But that world is rapidly coming to an end.

Speaker 2:

Our next point you also touched on a bit earlier, but I think it warrants a deeper discussion as well. Our fifth obstacle is: we've been experimenting with the free version of ChatGPT or various other large language model tools, but we aren't sure we want to go the paid route. Well, you already said it: you think people should go the paid route. Why is that?

Speaker 1:

I think there's a couple of reasons why. So first, I mentioned earlier that if you're using something for free, how is the company making money? It's not making money from you. So maybe it's a freemium model, and the idea is that as you go further into the product, you'll want to upgrade, and there's a small slice of the users that are paying, and therefore that's how they make their money. For example, I was reading, or not reading, rather.

Speaker 1:

I was watching a video on the Wall Street Journal site this morning about Duolingo, the language learning app that's become the most popular language learning app in the world. They heavily use AI. Very interesting short video, seven, eight minutes. I'd encourage anyone to watch it. They do about $500 million a year in revenue, but only 8% of their users pay. So 92% of the users are free users, and that's called the freemium model, where you have a massive number of people on your platform and the paying users actually support it financially. They do have some ad revenue as well. But you have to remember, as a Duolingo user, part of what you're providing, both paid and free, is your data. You're providing your interactions, your data, and you're opting into the terms of service that allow the business to use your data to do whatever it is they're going to do to optimize their business. There's nothing wrong with that. It's just, you have to remember that with paid versus free. If there's a paid option, there are probably some benefits there that are worth thinking about, one of which, in the case of AI, is the terms of service.

Speaker 1:

The terms of service for paid accounts often do say that they will not use your data for future model training, whereas the free versions of most models do not say that, and that's a really important distinction with free ChatGPT as of right now, if you were to take, let's say, confidential documents and drop them into it. And you can opt out of this, by the way, even as a free user. OpenAI got a lot of heat about it, so there's a setting in the free version where you can opt out of model training. But it's like everything else.

Speaker 1:

The vast majority of people don't even know that and don't do that. So if I were to not do that and just use free ChatGPT and drop in, let's say, all of your sensitive data from a system, it now is something that OpenAI can use to train the next version of ChatGPT, and I think that's a pretty bad issue. So I would avoid that. I do think free tools are great things to experiment with, but just understand that it's almost like you're opening yourself up to working in the public eye when you do that.

Speaker 2:

I would say most of us across Blue Cypress, well, maybe not most, but the people who are experimenting with AI, were in the past using the individual paid account of ChatGPT, which I think was 20 bucks a month, and I think there are some protections there. And then Blue Cypress as a whole opted to switch to the Teams account. I don't know if you know the specifics on that, Amith, but why did we make the shift from those individual paid accounts to the team paid account?

Speaker 1:

This is a classical SaaS play. It's called product-led growth. You get adoption with people within an organization, then you go to the company and say, hey, listen, you have 50, 100, 200 people using our tools. If you go for the team version, which is more expensive, of course, we will provide you centralized billing, we will provide you the ability to automatically provision and deprovision users as they come and go from your organization, and we'll provide you organizational data control, so that you can organizationally opt out of certain things, right, or control your data.

Speaker 1:

You can have retention of information, so the chat history. You know, we're not subject to some of the regulatory compliance issues that a public company under Sarbanes-Oxley or other similar regulation may be subject to, but it's important for document retention to be able to potentially retain some of that information for a longer period of time, whether the employee stays or goes, right? So there are a lot of issues that are not features an individual user would really care about but that the company would deeply care about. Or SSO, right, where I don't have a separate login, I can log in using my Blue Cypress email, stuff like that. So that ability to scale up to a team or enterprise type product is a major value-add at the org level and does provide some security. You're obviously not going to do that for every tool that you have, but I think it's an important thing to consider for your workhorse AI tools. So that's where I think a little bit of diligence will pay off quite nicely.

Speaker 2:

So, going back to the AI guidelines piece, if an association decided ChatGPT, which, to be totally transparent, is the tool I heard brought up the most at our booth this past weekend, was going to be their preferred tool to create copy, analyze files and whatnot, would you recommend that they go ahead and do the team subscription?

Speaker 1:

I think so. I think that if ChatGPT is the right product for you, there are some advantages to it. Again, I think it's 50% more per user or something like that, but the marginal cost is not high enough to be a reason to avoid it, in my opinion, because you get those additional company-level controls. You don't have to rely on each user remembering to opt out of the data issues I was describing. There are some nice Teams features where you can privately share chats and collaborate in different ways. So I think there's enough value to justify the cost. There are pretty smart people working over there who know how to do SaaS pricing, so they figured that out.

Speaker 1:

It's a playbook a lot of people have used. Probably the initial, most famous example of that was Slack, where Slack had lots of adoption at the individual user level and then went into the companies and sold these types of enterprise accounts. We had a great episode recently with Bill Macaitis, the former CMO and early-stage employee at Slack, that I encourage our listeners to go back and listen to. It was a fun conversation, and Bill probably has the nicest podcasting studio I've seen. It was so nice. But in any event, yeah, I mean, that strategy is, I think, a very clear-cut way to approach enterprise value creation. It doesn't affect the end user, which is great, because the end user is like, yeah, the tool is the same to me, I love it, it's great.

Speaker 2:

Our last obstacle, Amith, was one that was a bit more difficult even for me to respond to, so I'm interested to get your take. And this is: our CEO told everyone to go out and try some AI tools, but we don't know where to go from here. What are our next steps? So I'm wondering how you recommend making the leap from just dabbling and testing different tools to actually integrating AI into your workflow.

Speaker 1:

Well, I mean, this is where learning plays such a key role. People who are listening to this podcast, I think, are investing in learning in one form. You know, we obviously have a ton of free content. We have the Ascend book, which is a free download; if you prefer print, you can buy it from Amazon for a few bucks. You can access our monthly free intro-to-AI webinar, which is a great place to get a broad overview of what's going on with AI and how it's applicable to associations specifically. We have our introductory-level paid offering, the $24 one-time fee for the prompt engineering mini course, which is a great and quick way to learn and get going. And then, of course, our Sidecar AI Learning Hub. And that's just in our Sidecar ecosystem; there are tons of other resources.

Speaker 1:

I think the key to it is to be educated at some level, because if your CEO says, hey, go try some AI tools, and you're like, what does that mean? And your CEO doesn't really know what that means either, it's the same thing. Who's going to write policy or lead if no one knows a whole lot? You know, I can't teach you how to drive a car if I don't know how to drive a car myself, right? Or if I've only seen a car, I'm like, it looks like you might drive it this way. So just start by learning, and the learning that is most effective, pretty much across all types of learning, is learning by doing, learning by experimentation. So that's what I'd encourage people to do as the next step, through whatever tools and whatever content they find most helpful.

Speaker 2:

You just mentioned product-led growth, so in my mind I'm thinking of a way to phrase this, but that is the key point. I guess I was looking for more of a framework or a step-by-step: first you try this tool, then you take these actions, and then you'll have AI integrated into your workflow. But I think "value-led usage" is the phrase I just came up with in my mind. As you learn about these tools, and as you start using them more and more frequently and getting more out of them, I think that is actually the path to integrating them into your workflow, as opposed to having to be so conscious about it. The more value you get out of it, the more you will use it every day.

Speaker 1:

I completely agree. It's a self-reinforcing cycle. First of all, if you've never used any AI tool at all, go to ChatGPT, try it out, pay the 20 bucks a month just as an individual so your data is protected. Opt out of the thing I just mentioned, where in the settings you can turn off sharing your data for future model training. Obviously opt out of that, and go play with it. The problem is, it's just a text box. I mean, there are some prompts in there, some things you can choose that are kind of like, you know, give me a recipe for a good cocktail or whatever, but none of that's really interesting. For the most part, you've got to have ideas and examples, and that's where some structured learning is somewhat helpful, because if you attend a one-hour webinar or you take a little course, you can get some ideas, and context is everything. So if I give you examples of what might help someone write a blog, that's cool for your marketing team or people who write blogs, but if I don't write blogs at all, it's hard for me to translate that into what I do. So if I'm an attorney, how does it help me with contracts or negotiation? If I'm a financial person, how does it help me with my work? And so that's the idea of contextualizing and giving people clear examples, because I think if they get going, they'll have a much easier time being creative, because creativity is the ingredient you need right now with these tools.

Speaker 1:

I was having dinner with an association CEO recently, a person who's, like, super advanced in AI use. It was really a great conversation, and one of the things that came up in the discussion is, I said, well, you know, have you experimented with using the AI as a counterpoint, meaning like someone to debate against? So, rather than the AI agreeing with you all the time, which is what these things do by default, completing your words for you and completing your sentences for you, essentially, you tell the AI at the beginning of conversations: your job is to be my thought partner, and, specifically, your job is to take the counterpoint on everything I say. So I'm going to tell you about something I'm planning on doing. I'm planning my annual conference. I'm going to do these things.

Speaker 1:

What do you think? Normally the AI is going to say, well, that's awesome, Mallory, that's the best idea I've ever heard, it's amazing, go do that. And maybe it'll have a couple of minor points, depending on the model you're using. But if you prompt the model to say, your job is to be my thought partner and to take the counterpoint on every one of these ideas, it gives you all these other ideas, right, and all this feedback that you may not have considered. I use AI that way all the time. I find that most people haven't tried that. Even the ones that have gone pretty deep are kind of continuing their workflow as opposed to using it that way. So remember, these things have read the entire internet, right? I haven't done that. So there are lots of interesting ideas out there that potentially could be opposing whatever it is that I'm thinking about.
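(Show note: the thought-partner pattern described above really is just a system prompt. Here's a minimal sketch for anyone who wants to try it through the OpenAI Python SDK; the exact prompt wording, the `gpt-4o` model choice, and the helper names are our own illustrative assumptions, not something specific Amith recommends.)

```python
# Sketch: configure a chat model to act as a devil's-advocate thought partner.
# The system prompt does all the work; the API call shape follows the OpenAI
# Python SDK (v1 style). Prompt wording and model name are illustrative.

COUNTERPOINT_SYSTEM_PROMPT = (
    "You are my thought partner. For every idea I describe, take the "
    "counterpoint: list the strongest objections, risks, and alternatives "
    "I may not have considered. Do not simply agree with me."
)

def build_counterpoint_messages(idea: str) -> list:
    """Return a chat-completion message list that forces the counterpoint role."""
    return [
        {"role": "system", "content": COUNTERPOINT_SYSTEM_PROMPT},
        {"role": "user", "content": idea},
    ]

def ask_for_counterpoints(idea: str) -> str:
    """Send the prompt to a chat model (requires OPENAI_API_KEY to be set)."""
    from openai import OpenAI  # pip install openai
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o",  # hypothetical choice; any capable chat model works
        messages=build_counterpoint_messages(idea),
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    msgs = build_counterpoint_messages(
        "I'm planning my annual conference and want to cut the exhibit hall."
    )
    # The system message carries the counterpoint instruction on every turn.
    print(msgs[0]["role"])
```

The same trick works in the ChatGPT web UI: paste the system-prompt text as the first message of the conversation (or into Custom Instructions) before describing your plan.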

Speaker 2:

I have never used it that way. I think in my mind I'm often trying to do things as quickly as possible, and I might think, oh, that'll take me a little extra time. But that makes me think of something I heard from someone in our AI Learning Hub for teams, or for their whole association, which was that they had different people on their board with different personas and different levels of expertise in certain areas, and they actually used the AI to be each persona and said, okay, you have a strong personality, this is your expertise, I'm going to present this idea to you, now criticize it. And I thought that was, well, it's exactly what you're saying. I mean, it's so profound and such an interesting use of a large language model.

Speaker 1:

Well, imagine a world, and this is not really requiring imagination these days, where there are AI avatars for each of us that represent kind of our collective thought process. You know, the information in your Microsoft or Google account pretty much tells you a lot about what I think and how I respond to things, right? There's a lot of Mallory, Amith, Johanna and these other people in there, and you can have these avatars talk about a topic, and there are tools that already do this. So people are going to start sending their avatars to meetings and actually having reasonably good conversations about a topic. And then it's like, what if it's all AIs talking about an issue? That's one of the things we've talked about a lot on this pod, and it's in the book: this idea of multi-agent systems. And a multi-agent system is nothing but that. It's basically multiple AIs, trained with either specific prompts or different AI models, that have conversations with different roles or different viewpoints. So that's what you hear.

Speaker 1:

When you hear about Microsoft's AutoGen project, which we've talked about, or you hear about LangChain, LangGraph, or you think about CrewAI, or the stuff we do in MemberJunction, it's all multi-agentic types of approaches. It's the same idea. It sounds super fancy and complicated, but it's basically a conversation between multiple AI systems, and sometimes with humans. And it's exactly that, where, you know, we know what the training data is, and the AIs are pretty good at predicting the next token, right? So they're pretty good at predicting what I'm going to say.
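(Show note: stripped of the branding, a multi-agent system is essentially a loop that passes messages between models given different role prompts. Here's a toy sketch of that idea; the `respond` method is a stub standing in for a real model call, and the agent names and prompts are invented for illustration, not drawn from AutoGen, CrewAI, or MemberJunction.)

```python
# Toy multi-agent loop: two "agents" with different role prompts take turns.
# respond() is a stub; a real implementation would send role_prompt + history
# to a chat model. Frameworks like AutoGen or CrewAI handle this plumbing.

from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    role_prompt: str          # the system prompt that gives this agent its viewpoint
    history: list = field(default_factory=list)

    def respond(self, message: str) -> str:
        # Stub: record what we heard and produce a role-flavored reply.
        self.history.append(message)
        return f"[{self.name}] considering '{message}' from my role: {self.role_prompt}"

def run_conversation(a: Agent, b: Agent, opening: str, turns: int = 4) -> list:
    """Alternate turns between two agents, collecting the transcript."""
    transcript = []
    speaker, listener, message = a, b, opening
    for _ in range(turns):
        reply = speaker.respond(message)
        transcript.append((speaker.name, reply))
        # The listener becomes the next speaker, replying to what was just said.
        speaker, listener, message = listener, speaker, reply
    return transcript

optimist = Agent("Optimist", "champion bold AI adoption")
skeptic = Agent("Skeptic", "surface risks and costs")
for name, line in run_conversation(optimist, skeptic, "Should we adopt AI org-wide?"):
    print(name, "->", line)
```

Swapping the stub for a real LLM call, and giving each agent a distinct persona prompt, is all it takes to get the board-member debate described earlier.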

Speaker 2:

Well, Amith, thank you so much for this great chat today. To everyone who has joined us before, thank you for joining us again, and to all of our new listeners, welcome. Whether you're joining us on YouTube or listening on your favorite podcasting app, we're so happy to have you here at the Sidecar Sync, and we will see you next week.

Speaker 1:

Thanks for tuning into Sidecar Sync this week. Looking to dive deeper? Download your free copy of our new book, Ascend: Unlocking the Power of AI for Associations, at ascendbook.org. It's packed with insights to power your association's journey with AI. And remember, Sidecar is here with more resources, from webinars to boot camps, to help you stay ahead in the association world. We'll catch you in the next episode. Until then, keep learning, keep growing and keep disrupting.