Sidecar Sync

Leveraging Asset Bases with AI, The Plummeting Cost of Tokens, & Generational Differences in AI Adoption | 46

Amith Nagarajan and Mallory Mejias Episode 46


In this episode of Sidecar Sync, Amith and Mallory dive into three major topics shaping the future of AI in associations: leveraging an asset base with AI, the rapid decline in AI token costs, and the generational differences in AI adoption. Amith breaks down how associations can unlock the potential of their data to create innovative, revenue-generating services. They also explore how the plummeting cost of AI tokens is making sophisticated AI tools accessible, and why younger workers are more likely to embrace AI in their jobs. Tune in for insights into how AI will transform the association world.

🛠 AI Tools and Resources Mentioned in This Episode:
Claude ➡ https://www.anthropic.com
ChatGPT ➡ https://openai.com/chatgpt
Groq ➡ https://groq.com
Llama 3 ➡ https://ai.facebook.com/tools/
Thomson Reuters Could Be the Best AI Stock You’ve Never Considered ➡ https://tinyurl.com/3w4cv74e
DeepLearning.ai The Batch, Issue 264 ➡ https://www.deeplearning.ai/the-batch/issue-264/
Generative AI Adoption Higher Among Younger Workers, Survey Finds ➡ https://tinyurl.com/33nay6ba



Chapters:

00:00 - Introduction
01:10 - Overview of Leveraging AI in Associations 
06:02 - How Associations Can Use AI to Unlock Their Data
12:22 - Counter-Positioning Your Business for Success
15:19 - Multi-Tiered AI Offerings for Associations
18:28 - The Plummeting Cost of AI Tokens
24:41 - Impacts of Token Costs on AI Accessibility
28:46 - Generational Differences in AI Adoption
34:51 - Why Leaders Need to Adopt AI Now

🚀 Follow Sidecar on LinkedIn
https://linkedin.com/sidecar-global

👍 Please Like & Subscribe!
https://twitter.com/sidecarglobal
https://www.youtube.com/@SidecarSync
https://sidecarglobal.com

More about Your Hosts:

Amith Nagarajan is the Chairman of Blue Cypress 🔗 https://BlueCypress.io, a family of purpose-driven companies and proud practitioners of Conscious Capitalism. The Blue Cypress companies focus on helping associations, non-profits, and other purpose-driven organizations achieve long-term success. Amith is also an active early-stage investor in B2B SaaS companies. He’s had the good fortune of nearly three decades of success as an entrepreneur and enjoys helping others in their journey.

📣 Follow Amith on LinkedIn:
https://linkedin.com/amithnagarajan

Mallory Mejias is the Manager at Sidecar, and she's passionate about creating opportunities for association professionals to learn, grow, and better serve their members using artificial intelligence. She enjoys blending creativity and innovation to produce fresh, meaningful content for the association space.

📣 Follow Mallory on Linkedin:
https://linkedin.com/mallorymejias

Speaker 1:

It is unlikely you by yourself, like, will be displaced by AI by itself, but it is extremely likely that you will be replaced by someone who's very good at AI if you're not. So it's that simple. Welcome to Sidecar Sync, your weekly dose of innovation. If you're looking for the latest news, insights and developments in the association world, especially those driven by artificial intelligence, you're in the right place. We cut through the noise to bring you the most relevant updates, with a keen focus on how AI and other emerging technologies are shaping the future. No fluff, just facts and informed discussions. I'm Amith Nagarajan, Chairman of Blue Cypress, and I'm your host. Welcome back, everybody, to another episode of the Sidecar Sync, where all things association and artificial intelligence come together in a fun, interesting podcast. My name is Amith Nagarajan.

Speaker 2:

And my name is Mallory Mejias.

Speaker 1:

And we are your hosts. And before we get going into our topics for today, let's take a moment to hear from our sponsor.

Speaker 2:

Today's sponsor is Sidecar's AI Learning Hub. The Learning Hub is your go-to place to sharpen your AI skills, ensuring you're keeping up with the latest in the AI space. With the AI Learning Hub, you'll get access to a library of lessons designed to address the unique challenges and opportunities within associations, weekly live office hours with AI experts and a community of fellow AI enthusiasts who are just as excited about learning AI as you are. Are you ready to future-proof your career? You can purchase 12-month access to the AI Learning Hub for $399. For more information, go to sidecarglobal.com/hub. Amith, how are you doing today?

Speaker 1:

I can't complain. It is September, and so here in New Orleans, we are starting to have the end in sight in terms of the crazy weather, and so we'll see what happens. You know, it's like 45 days from now. It'll not be horrible outside.

Speaker 2:

Absolutely. This morning I felt a light, dare I say, chill, a light change in the weather in Atlanta, so I was really excited for that. I really like the fall, I like doing all things fall. I know it's basic, but the pumpkin spice coffees and the candles, and I like Halloween, so I'm really excited to get into the season for sure.

Speaker 1:

Yeah, it's going to be fun, and we were just talking about being in Utah in a few weeks, or in a couple of weeks actually. We have an annual event for Blue Cypress, our family of companies. We have our senior leaders from all of the different companies come together. About 40, 45 people each year come together up in Park City, Utah, where we have a learning event, and that's always a lot of fun. And the weather in Park City truly is fall weather. There'll be changing leaves and cooler temperatures, waking up in the morning with 35, 40 degree temps and all that, and I can't wait for that. That's going to be awesome.

Speaker 2:

It's going to be a great time. Last week, as all of you know, I was on my own, but, Amith, it's nice to have you back for today's episode for sure, thank you. Today we will be talking about, one, leveraging an asset base using artificial intelligence. Then we'll be talking about the falling cost of AI tokens and finally, we'll wrap up with a talk around generational differences in AI adoption. So, starting with leveraging an asset base with artificial intelligence.

Speaker 2:

Reuters, founded in 1851, is one of the world's largest and most trusted news organizations. It's known for providing unbiased, real-time news to media outlets globally. In 2008, Reuters merged with the Canadian company Thomson Corporation to form Thomson Reuters. Today, Thomson Reuters has transformed far beyond its news agency roots. While still operating the Reuters news agency, 90% of Thomson Reuters' revenue now comes from data services, not news. The company provides specialized information and software to professionals in law, tax, accounting and other fields.

Speaker 2:

Thomson Reuters combines its extensive proprietary content with AI and machine learning to enhance its services and by focusing on data and technology, the company has effectively reinvented itself for the digital age. It uses its vast database of legal and financial information as a competitive advantage in the AI era. This strategy demonstrates how companies with unique content or valuable data can adapt to technological changes, and the company's stock has outperformed the market, reflecting the effectiveness of this approach. Now, this use case shows that organizations can use their existing strengths, combined with new technologies, to stay competitive in a rapidly changing business environment. Amith, you shared this with me and in this situation, Reuters is leveraging an asset base, or a cornered resource, and using it to drive durable differential returns. Why do you see this as incredibly relevant for associations?

Speaker 1:

Well, you know, the callout to durable differential returns, which is a key phrase coming from Hamilton Helmer's work in the Seven Powers strategy, which is one of my favorite books and we talk about it a lot on the pod and in our content, and in fact, I just finished recording over the last few days all the content for the AI Learning Hub. We are doing a new course on AI strategy and it's based on Helmer's seven powers and how they apply to the association market, and one of the power positions that an organization can be in is to have something called a cornered resource, and a cornered resource can be intellectual property, it could be a team of people, it could be a patent, it could be a number of different things. And so I think that's exactly what you just described: Reuters does have a cornered resource. They have a unique proprietary set of content that no one else in the world has. It would be very difficult for someone else to create that equivalent content, if not impossible, and what they're doing is they're finding ways to create more leverage on that same asset base.

Speaker 1:

So I saw that article and I said this is exactly what I want us to share with the association community, because the same asset type exists in many associations, where, you know, you talk to an association leader and you say, hey, what are the key assets that you guys have? And they'll say, oh, we have an awesome team. I say, cool, that's great. We have great members. Okay, great, what else? And they'll say, oh, we have all this amazing data. We have all this data on our sector. We have amazing content.

Speaker 1:

It's usually one of the top three or four things, if not even the first thing, that an association will cite as the value they provide or the assets they have, depending on who you're talking to and the language they prefer to use, but I think it's extremely common to have deep insight and deep content in the market the association is in. However, most of the time, that content is locked up not so much physically locked up by intention, but rather the lack of accessibility to that content is really what prevents people from gaining the full potential value. Here, reuters has not only made it easy to access, but they're creating derivative products using that intellectual property that you just spoke of that's driven up so much of their enterprise value in recent years.

Speaker 2:

And can you take that a step further and maybe provide a few examples of what kinds of products or services an association could create with their resource?

Speaker 1:

Well, think about it this way. So imagine I have a benchmarking report. Many associations do that. My benchmarking report, let's say, is about industry stats. Let's say that I'm in the air conditioning space and so my members are all the people who install and operate and maintain HVAC systems, both for commercial and residential properties across the United States. I'm sure there are many associations that are in that sector. And let's say that part of what I do is I collect data from my members and I say I want to know how much you're selling, how optimistic you are about the market, what's slowing down, what's speeding up, what are you happy about, what are you scared about? All that kind of market intel. And let's say I want to capture that data on a fairly regular basis, maybe annually, maybe even more frequently. And so then what do I do?

Speaker 1:

Well, the classical play is the association will create a report, and that's great. They'll hire a statistician to go through the survey responses, if they don't have one in-house, and they'll produce a beautiful, glossy printed report that they'll sell for $1,000 or $2,000, or they might have an online version. That's awesome. So that's a core function of a lot of associations. But that report is static. It doesn't really take into account the history of those reports, other than the analysis that the person is doing, saying, hey, I'm going to compare this benchmarking report against the 50 preceding it. So let's say it's annual and you've done it for 50 years. The trend lines have to be directly pulled out by the analyst that's creating that one-time report, right? But imagine, instead of doing that, or perhaps in addition, the association says, hey, we're going to do an AI benchmarking service which is dynamic, which is real-time and allows our participants, whoever's buying the service, to ask questions of the report, for the report to kind of like regenerate itself based on their particular areas of interest.

Speaker 1:

So let's say that I'm an HVAC contractor continuing that example but I only service mid-sized commercial buildings. So I'm particularly interested in how other people in other regions of the United States are feeling about mid-market commercial HVAC as opposed to residential HVAC, right? So there's a segmentation problem where the general report may be aggregated across all of the sub-segments. I might be interested in slicing it that way and the report may not have that particular slice available to me because it's a static report. So the value to me as the member is therefore not zero, but it's limited. But if I can get exactly what I want, that's interesting.

Speaker 1:

But to take that even a step further now, what if the member could actually interact with an AI agent, saying, hey, my business is seeing these kinds of challenges or these kinds of opportunities. What do you think, right? And let's say the HVAC association not only has that benchmarking data, but also has their full knowledge base, every article that's ever been written, every conference proceeding that's ever been recorded, and that knowledge assistant, combined with the benchmarking data, can offer bespoke advice back to that individual member. Right, if I'm the owner of a contracting business that services mid-sized commercial properties, I can get highly specialized insights and highly specialized advice, which becomes almost like a one-to-one consultative service, a very different level of the value chain, right? So I'm higher up the value chain and associations can monetize that.

Speaker 1:

And who better to do that than the association? Right, I could start a website like that today, but I don't have the data for it. The HVAC association not only has the cornered resource the data but they also have brand power. Generally speaking, not all associations do, but they're generally a highly trusted, highly regarded resource in their space. So that's why I get really fired up about this. I think this is just an obvious thing for associations to go build these kinds of services and start making a lot of money.

Speaker 2:

And do you see these kinds of services as nice to have, good to test out in the future, or as essential to survive?

Speaker 1:

I mean, I think this is Netflix versus Blockbuster. I think that this type of service is going to be what people do and you have to kind of undermine your core business in some ways. So let's say, you make a few million bucks a year on benchmarking reports and this new service basically gets rid of the old report. You're going to piss off a lot of people who spend their time creating those reports manually and, by the way, that physical report or even the electronic version may still have value. Right, it's not like it's gone, but this new service eclipses the value opportunity significantly. So you're essentially displacing a current line of business, and that's hard politically in some organizations, but the world doesn't care about that. The world just cares about value creation. If you're creating enormous value, people will come to you.

Speaker 2:

And what you're talking about in terms of Netflix and Blockbuster would be called counter-positioning, right, referencing that book that Amith just mentioned, by Hamilton Helmer, called the Seven Powers. So you see this as a path associations will need to take, probably soon, to counter-position themselves so that way they can ensure longevity.

Speaker 1:

Absolutely. I mean, you know, counter-positioning yourself is the idea of saying, hey, we're going to displace our own product, right? Some might call it cannibalization of your own product. When you counter-position yourself, there may be an obsolescence path for an existing product or service. In some cases it isn't necessarily obsolescing a current product or service, but in a lot of instances it is. So the question I would pose is, would you rather be counter-positioned by yourself, where you build that new business within your association, or let somebody else do it?

Speaker 1:

My belief is the association is very well positioned to go after these opportunities from a brand perspective and from a cornered resource viewpoint, in terms of having unique data and unique content. They don't tend to be well positioned to do it in terms of how nimble they are. They tend to have structures, governance-wise and culture-wise, that are kind of repellent of risk, and so when you have that kind of culture, it's very difficult to take on new opportunities like this. But that's the challenge ahead of us, because the value equation for the consumer of your service ultimately is what it is, and you can either choose to play in the new game or not. I mean, this is why Google has been freaking out about generative AI ever since the ChatGPT moment in the fall of '22, because Google realized immediately that there was a step change in value creation in the world.

Speaker 1:

ChatGPT, in its first incarnation, at the time there was nothing else like it, was fundamentally more useful than search alone. It doesn't replace search entirely. There's still places where you and I, Mallory, we use Google, we use Perplexity, which is more search oriented. But there's things that you can do that absolutely lower your need for search in the context of using a generative AI chat solution. There's things for which you would have previously gone to Google, run a search, grabbed a few resources off the first page of links, probably maybe, if you're a little bit more detail-oriented than me, the second or third page, and then you would have composed whatever work product you were going to build. Now you go to ChatGPT or Claude or whatever, and you can create everything in one place.

Speaker 1:

So essentially, a step function in value creation in the world was all of a sudden reached, and that's why 100 million users flocked to ChatGPT in the first, like, 45 days or so. Fastest growing app to 100 million users ever, and it was not the novelty, it was the value creation. So, fundamentally, when an organization sees that existential risk, they start to invest in it. But I would say that you don't want to be like Google, in the sense that, you know, Google actually invented the underlying technology that powers ChatGPT, the transformer AI model, but then they didn't do a whole lot with it until they realized somebody else had actually figured out how to make money from it and it's going to displace their core business. I would say the same thing for associations is, rather than waiting for someone who isn't as well positioned as you to be that challenger, why don't you be the challenger to yourself?

Speaker 2:

So I can sympathize with the association leader who's listening to this right now and thinking well, we sell these benchmarking reports and we bring in revenue and they do really well, and it would be very difficult for us to take a temporary hit in revenue to dedicate resources to coming up with some AI offering or service. Can you speak to that a little bit?

Speaker 1:

Sure, I would offer it as a multi-tiered offering. So let's say we have this HVAC benchmarking report, industry trends report, and we sell that and it's whatever the price is. I wouldn't stop doing that. What I would do is I'd say, hey, we have a new offering which is, if you pay an extra X dollars, you also get the interactive version of this, where the report is available to you in an AI format where you can ask questions, you can slice and dice it. So you have kind of that base level of value that the report creates. And creating a report manually, by the way, and having a static snapshot of, like, your association's view of the industry at that point in time, that is actually fundamentally useful. It's just that by itself it doesn't create the kind of value that this new generation of technology can help you create. So I would do it as a two-tiered, or more tiers, kind of offering. And we talk about that in our book Ascend, where there's this playbook where you say, look, that base level of value, maybe it's bundled in the membership, maybe it's a product you sell, but then there's this tier above it which no one is expecting from you today, and you start to offer that as a premium service for additional dollars. And, you know, in some cases there are services and products that you offer that don't make any sense to continue. In the benchmarking example, actually, it's my belief that the association's view of the world is, in fact, very relevant, and that's part of what you feed to the AI: you say, hey, this is our view of the world, of what's going on in the industry right now, and here's the last 50 years or 30 years worth of those views, and then you can build on that and then the AI can add to that, right, with the dynamic nature of what conversational AI can do. So I think there's a way to kind of, you know, have your cake and eat it too a little bit.

Speaker 1:

What you have to do is invest a little bit of time and a little bit of money in experimenting around this stuff, because you know like a simple thing you could do and this wouldn't take any deep integration is go to Claude and load up the most recent benchmarking report, or take the last three or four of them as PDFs, drop them into Claude and say here are the last four years of benchmarking reports.

Speaker 1:

Your job now is to be a conversational analyst that interacts with my members and provides additional insights based on your knowledge of the world and what specifically is in these reports. Whenever there's conflicts between your training data and these reports, your training data is superseded by our reports. Our reports are always the source of truth, which is part of this thing called grounding that you can do in your prompting strategy. And then you can play with that setup in Claude, and you can do the same thing in ChatGPT. You could create a custom GPT; Gemini has similar capabilities. Then go talk to the thing and see what it provides, see exactly what it is that you're able to do with that kind of a setup, and that's kind of a proxy to understanding the value creation that would exist if you took this to scale.
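For listeners who want to try that setup programmatically rather than in the Claude web interface, here is a minimal sketch in Python using the Anthropic SDK. The file names, the model alias and the prompt wording are illustrative assumptions, not an exact recipe from the episode:

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Assume the last few benchmarking reports have already been extracted to plain text.
reports = {
    "2022": open("benchmark_2022.txt").read(),
    "2023": open("benchmark_2023.txt").read(),
    "2024": open("benchmark_2024.txt").read(),
}

system_prompt = (
    "You are a conversational analyst for our association's members. "
    "Base your answers on the benchmarking reports below. Whenever your "
    "training data conflicts with these reports, the reports are the source of truth.\n\n"
    + "\n\n".join(f"--- {year} report ---\n{text}" for year, text in reports.items())
)

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # assumed model alias; any current Claude model works
    max_tokens=1024,
    system=system_prompt,
    messages=[{"role": "user", "content": "How has mid-market commercial HVAC sentiment trended over the last three years?"}],
)
print(response.content[0].text)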

Speaker 2:

Moving to topic two, the falling cost of AI tokens. One trend that we discussed recently on the pod is the rapidly falling cost of AI tokens. AI tokens are the basic units of computation for large language models, or LLMs, like GPT-4, used to process and generate text. The pricing of these tokens directly impacts the cost and feasibility of AI applications across industries. Dr Andrew Ng, a leading figure in AI, recently highlighted this trend, pointing out that the price of GPT-4 tokens has dropped dramatically, from $36 per million tokens at its release in March 2023 to just $4 per million tokens today. This represents a staggering 79% price drop per year, and the price drop is driven by several factors, one of those being competition from open source models. The release of open weight models, like Llama 3.1, allows API providers to offer services without the burden of recouping model development costs. Price reduction is also due to hardware innovations. Companies like Groq, SambaNova and Cerebras are pushing the boundaries of AI computation, enabling faster and more efficient token generation. And then, finally, semiconductor advancements. Giants like NVIDIA, Intel and Qualcomm continue to improve AI-specific hardware.

Speaker 2:

The implications of this price drop are far-reaching. It makes AI applications more economically viable across a broader range of use cases. Even complex, token-intensive applications that might have been prohibitively expensive before are becoming feasible. For instance, an application using 100 tokens per second continuously would now cost only $1.44 per hour to run. The trend is expected to continue, with Dr Andrew Ng predicting further rapid declines in token prices based on the technology roadmaps of various companies, which include improvements in semiconductors, development of smaller yet powerful models, and innovations in inference architectures. For AI companies and developers, this changing landscape presents both opportunities and challenges. It opens up new possibilities for AI applications, but also requires strategic thinking about model selection, cost optimization and future-proofing applications. Amith, in a short summary, why is this so important? We've covered it a few times on the pod, which always signals to me that this is something important and that we're going to keep talking about. But what do you think?
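For anyone who wants to see where that $1.44 figure comes from, here is the arithmetic spelled out in a few lines of Python, using only the numbers quoted above ($4 per million GPT-4 tokens and 100 tokens generated per second):

price_per_million_tokens = 4.00          # USD per million tokens, the figure cited from Andrew Ng
tokens_per_second = 100
tokens_per_hour = tokens_per_second * 3600       # 360,000 tokens in an hour
cost_per_hour = tokens_per_hour / 1_000_000 * price_per_million_tokens
print(f"${cost_per_hour:.2f} per hour")          # prints $1.44 per hour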

Speaker 1:

I like that, Mallory. It's kind of like giving me a system prompt saying don't go into verbose mode, so I appreciate that. The bottom line is it's accessibility, right? So the cheaper something is, the more it gets used. So it's kind of like the assumption of insatiable demand. So if you think about the growth of the economy broadly speaking, we say technology tends to drive the cost of things down, and we say, well, what does the curve look like and where are we seeing growth happen? If you assume that quantity demanded is fixed, then what's going to happen is, as price goes down and the overall demand doesn't increase, you're just going to end up with a surplus and you're going to have essentially people out of work in the case of labor. In reality, what's happening is that good things tend to grow demand.

Speaker 1:

If you make tokens a tenth of a cent per million tokens instead of $36, which they're not at yet, but they will be soon, then essentially it becomes completely free. It's kind of like the mindset around video on the internet and bandwidth. We're doing this in high res on Zoom and people are consuming this on YouTube or on just audio, and we don't care about the incremental cost of the bandwidth we're using, because internet bandwidth has basically become free. We're paying $50, $100 a month, whatever the amount is, for bandwidth at home that's, like, you know, gigabit fiber kind of connections, and that would have been thousands of dollars just a few years ago. So the same thing is happening here with tokens, and so what that basically boils down to is, if AI is almost free in terms of its use, then people will use it more, and the more it's used, the freer it'll get, if that's a word. The less expensive it'll become, and that will encourage more adoption.

Speaker 2:

What kinds of applications of AI are pending lower token cost? What do you expect to see come out of this?

Speaker 1:

Well, there's a lot of different things. So, from our perspective across the Blue Cypress family, when we think about the apps we're developing, when we first started building a lot of these generative AI solutions, Betty Bot being one of the first ones, we actually had a pretty significant concern about the cost of token consumption, because Betty was consuming a lot of tokens, right, in order to be a knowledge assistant that's trained on the world's information for that association. That's a lot of content, a lot of tokens, to process requests in an effective way, and so that was a major factor in our original economic modeling of the uptake of the solution. But now tokens are very close to free, so it's just almost completely irrelevant. People who are interested in a solution like that are not worried about their token consumption nearly as much. It's not that they're completely unworried about it, but it's changed the calculus.

Speaker 1:

The other thing that's happening is, we talk a lot on the pod about multi-agent solutions. We talk about that in the Learning Hub and in the book, and that's becoming a really important concept, and what you do in a multi-agent solution is you're talking to one or more AI models over and over and over again, repetitively and in kind of a loop, and sometimes you're firing off multiple requests in parallel. So, like Skip, which is our other AI agent, that's a conversational agent for data, so Skip is like ChatGPT for your database. Skip will execute a bunch of different inferences at the same time, so Skip will talk to a bunch of different AI models in parallel, and we don't have to care about the cost that much. It's really, really insignificant now, whereas again, a year and a half ago, when we started working on Skip, it was a significant concern, because Skip is what we'd call token heavy. Skip is an agent that uses tons of tokens in order to do what he does.

Speaker 2:

For the average listener, I think we hear a 79% price drop, which is what we said, and we think that sounds great. But I think in actuality we're not exactly sure if that's a total non-issue at this point. But you did say you feel like it's basically free. Are you waiting, for Skip and Betty, for the tokens to be even cheaper and cheaper, or at this point, does it feel free?

Speaker 1:

I mean, it feels pretty close to free relative to the value creation of something like a Skip or Betty. You know, we're able to do things that would previously have taken lots of people, and so the idea is, like, you know, the ROI is based on, hey, you couldn't hire six people to be data analysts at your association at the significant salary data scientists would command, right? Versus Skip can do it all and do it faster and do it, frankly, better than any normal data scientist could. So the value equation is so strong for those products that even if the token consumption cost is something, you know, non-zero, even if it was $5,000, $10,000 a year of token consumption or something like that, nobody's going to care about that. Now, I'd rather have that be close to zero, like a $50 bill. It's like turning on a light bulb in your house. You know, no, you shouldn't waste electricity, but if you left one LED light on that's consuming seven watts, you know, then you might not care as much about it. So that's kind of where we're headed with this stuff. It's obvious.

Speaker 1:

The competition, the scaling. You know, when you scale something as aggressively as this, prices drop from economies of scale. And then, in addition to that, if you have deep competitive dynamics in an environment, you're going to see prices drop further. And the AI, the technology itself, is already a commodity. Llama 3.1 is basically just as smart as GPT-4o. Llama 3.1, the 405 billion parameter model, their largest model, which was released a couple months ago, is highly competitive with GPT-4o. It's not perfect and it's not at the level of GPT-4 in a couple of areas, but for most applications you're building, it's as good. So there's no cost associated with that model itself. There's just the hardware cost and then the margin that the operator of the hardware needs to have, and that's arbitraged out, right? So it's essentially saying, hey, there's going to be no real margin in that business, which means for consumers, like all of us, it's great, because you get more choice, better product and lower cost.

Speaker 2:

Are you using a mixture of models in a multi-agentic solution like Skip?

Speaker 1:

Yes, yeah, it's a good idea to have multiple models. At a minimum, you want to have a small model and a large model. So you might say, hey, this is a really complex question, let's fire it off to the large model. But then for a lot of the simpler decision making you can go to the small model, which they actually call Llama 3.1 Instant on the Groq platform. You mentioned Groq. That's G-R-O-Q, with a Q. That platform has a really interesting hardware architecture that literally is nearly instant. It inferences at like 800 tokens per second, which for non-technical people basically means it's faster than you can possibly read it. So ultimately, we already are there, and the stuff just keeps getting better.
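To make the routing idea concrete, here is a minimal sketch of "send simple requests to the small model, complex ones to the large model" in Python. The model names, the complexity heuristic and the call_model placeholder are illustrative assumptions, not the Skip implementation:

SMALL_MODEL = "llama-3.1-8b-instant"   # assumed: a fast, cheap model such as Llama 3.1 Instant on Groq
LARGE_MODEL = "llama-3.1-405b"         # assumed: a frontier-class model for harder questions

def looks_complex(prompt: str) -> bool:
    # Toy heuristic standing in for a real classifier: long or analytical
    # requests get routed to the large model.
    return len(prompt) > 500 or any(w in prompt.lower() for w in ("compare", "analyze", "trend"))

def call_model(model: str, prompt: str) -> str:
    # Placeholder for whatever inference API you actually use (Groq, OpenAI, Anthropic, ...).
    raise NotImplementedError

def route(prompt: str) -> str:
    model = LARGE_MODEL if looks_complex(prompt) else SMALL_MODEL
    return call_model(model, prompt)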

Speaker 1:

So from my point of view, using a small model is a really good thing to do for a lot of the day-to-day basics. And then you build your architecture so that your system is smart enough to know when it needs to go to the big model versus the small model. And that's the simple version of the answer to your question. But you might also use a hybrid, where you say, hey, you know what, I want to get the best possible answer. So what I'm going to do is I'm going to fire off the same request to Llama 405B, to GPT-4o, and to Gemini in parallel, at the same time.

Speaker 1:

I'm going to fire off the same prompt to all three of these best-in-class models. Then I'm going to pull back the responses and I'm going to compare them, and I'm going to compare them using all three models. So I'm going to ask all three models to compare the answers and synthesize the best response. So you're kind of like using them all in parallel together, and if tokens are effectively free and these things are really fast, you can do that and have unbelievably higher quality, right, because of the multi-agentic nature of it. But you're using the best of each of these model architectures, different training data, more heterogeneous kinds of interactions. So that's one way you can approach it.
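Here is a minimal sketch of that fan-out-and-synthesize pattern in Python. The model names and the call_model placeholder are assumptions for illustration; swap in whichever inference APIs you actually use:

import asyncio

MODELS = ["llama-3.1-405b", "gpt-4o", "gemini-1.5-pro"]   # illustrative model names

async def call_model(model: str, prompt: str) -> str:
    # Placeholder for a real async API call to the named provider.
    raise NotImplementedError

async def ensemble_answer(prompt: str) -> str:
    # Fire the same prompt at every model in parallel.
    drafts = await asyncio.gather(*(call_model(m, prompt) for m in MODELS))
    # Ask one model (the "supervising AI" mentioned below) to compare the
    # drafts and synthesize the single best response.
    synthesis_prompt = (
        f"Question: {prompt}\n\n"
        + "\n\n".join(f"Answer {i + 1}:\n{d}" for i, d in enumerate(drafts))
        + "\n\nCompare these answers and write the single best synthesized response."
    )
    return await call_model(MODELS[0], synthesis_prompt)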

Speaker 2:

Interesting, and when you say you are looking at the results of these models, you mean a supervising AI of sorts, correct?

Speaker 1:

Yeah, no human that I know could possibly keep up with that. It's definitely something that another AI has to look at for sure.

Speaker 2:

Moving to topic three, generational differences in AI adoption. A recent Dice survey of US technology professionals revealed significant disparities in generative AI usage across age groups. 40% of tech workers aged 18 to 24 use generative AI at least weekly on the job. Nearly half of tech professionals aged 55 or older don't use the technology at all. This generational disconnect could potentially disrupt enterprise plans for AI adoption. Now, despite the hype surrounding generative AI, its impact on work has been limited, so far at least, as it pertains to this survey. Over half of the survey respondents reported that AI has only slightly impacted their work. Younger IT professionals, 34 and under, are more concerned about AI's impact on their careers compared to older workers, 45 and above.

Speaker 2:

The growing importance of AI is influencing talent strategies as well. More than 80% of HR professionals anticipate increased demand for AI professionals in the next six months. Many businesses are focused on upskilling existing staff to meet AI talent requirements, and we're starting to see companies implement AI adoption roadmaps that include employee training. Several obstacles remain in the path of widespread AI adoption. The generational gap in AI usage could impede adoption efforts, and most companies lack clarity on their staff's existing AI proficiency levels, which makes it difficult to develop effective upskilling strategies. Amith, as a longtime leader of companies, I will ask, is this to be expected?

Speaker 1:

Yes.

Speaker 2:

So you're not surprised.

Speaker 1:

How's that for a short answer?

Speaker 2:

That was too short. You can expand on this one. You can expand.

Speaker 1:

So every time there's a disruptive technology, it tends to be overhyped in the short run and underestimated in the long run in terms of its impact. Think about the internet, think about mobile, social even, though I would call that more of an app than a framework. But ultimately AI is kind of the biggest of them all, and it's being overhyped in some ways, and what that results in is some people who are kind of the incumbents in a given field saying, oh, I'm going to wait for this to pass, or I really don't want to deal with this and it probably won't be that big of a deal, or I'll adopt it once it's more mature. But what we're not really taking into account in that is how quickly AI is moving. It's faster than anything any of us have ever seen. So that's a problem. With internet adoption, as rapid as that has been over the last three decades, it's kind of slow compared to what's happening with AI adoption. So I'm pretty worried about this.

Speaker 1:

I think that there's a whole kind of I mean, these are generalizations, that's what surveys intentionally do.

Speaker 1:

But if you do have that kind of demographic disparity, you're going to have problems, because, you know, people are living longer and working longer, and, you know, of course, AI, along with other exponential technologies, is going to further extend those time spans, hopefully. And so the question is, what are these people going to do if they don't know how to use AI and AI is mandatory? You know, there's going to be a major skills gap. So I fear that people who don't get on this soon enough are going to have a hard time. They're going to have a hard time learning it, but they're also going to have a hard time keeping up with it, because, you know, I mean, you and I spend a lot of our time talking about this and thinking about this topic, and I think it's still overwhelming. I feel that way all the time. So, you know, I think people who've done nothing with it are really at a disadvantage already, and they'll be at a deeper disadvantage if they don't jump on the learning path pretty soon.

Speaker 2:

I think you touched on an interesting point, which is a lot of people see these new emerging technologies popping up and think, oh, this train will pass, right, it'll just be a wave. I think you and I are both in agreement. Right, that AI is not one of those emerging technologies we think will pass in a wave, but how would you recommend, as a leader, communicating this to your staff or getting that buy-in? I know education is a key piece, but are there ways that you would stress that this is here to stay?

Speaker 1:

Yes, I think. One quick side note before I answer that question: I think one thing that's different about this technology shift is it feels more personal to a lot of people, in part because of all the sci-fi lore that's preceded the actual arrival of true AI in our lives, and for a lot of different reasons. But I think people feel a lot more with this technology shift. You know, when the internet came out, right, or the internet kind of entered the popular consciousness, and same thing with mobile technology, that wasn't a shift so much where people had pros and cons to, should you have a smartphone? Maybe some people didn't want a smartphone, they're like, I'm just fine with my flip phone, I'm just fine without it, but there weren't these deep emotional issues of, will this destroy industries? Will this change our way of life? Right? So this is a very personal topic to a lot of people, and so, beyond perhaps waiting for those clouds to pass, kind of thing, I think a lot of people are just like, I hate this stuff, I don't want anything to do with it, it's just terrible. So that emotional reaction, I get that there's a whole group of people that feel that way, and I empathize with that deeply. The flip side of that is the technology doesn't have feelings, and so it's going to be here whether you like it or not. And so, as a leader, what I would tell people is very simple, and I think this is a mandatory thing for leaders to stand up and do: it is unlikely you by yourself, like, will be displaced by AI by itself, but it is extremely likely that you will be replaced by someone who's very good at AI if you're not. So it's that simple. Like, if you're not a good user of AI, it's like saying you don't know how to use a computer, you don't know how to use a telephone, you don't know how to use electricity. If you don't know how to use basic technology in your job, which AI is very quickly going to become, that's a problem. We're all talking about AI right now as if it's the most important thing, which it probably is, but very soon it's going to be forgotten about and it's just going to be assumed. And so if you don't know how to use this stuff, you're going to have a tough time.

Speaker 1:

I will also say that there is a flip side to that argument, which is that AI is becoming so much smarter that you won't have to be as skilled in using it to be good at using it. So, for example, we have a whole course on the Sidecar Learning Hub called Prompt Engineering, right, and it's a super popular course. By the way, it's only 24 bucks for those of you that are interested in diving deep and attaining that skill. It's an awesome course. People love it. But that course is going to be completely irrelevant in three or four years, except for the most technical people, because these systems are getting so much better at metacognition, which is this process of thinking about thinking, right. So if you go to the thing and you're like, oh, I just want to solve this problem right now, it'll, like, spit out some garbage at you if you don't know how to talk to the AI. So you have to train yourself on how to talk to the AI, which is what prompt engineering is: a fancy way of saying learn how to talk to the AI, right? That's all it is. But, like, the AI is getting smarter and smarter and smarter, so much so that for most people, just having a generic conversation like you would with some person will work with AI systems, perhaps even in a year or two.

Speaker 1:

Now, I'm not suggesting you shouldn't take the Sidecar Prompt Engineering course. You should, because it's awesome and it's, for now, very useful. But the point is that AI is actually making it easier to use AI. That has not been true with other technologies in the past. That being said, I still think there's a major disadvantage for anyone who's not taking the time to get going and understand this stuff. I think the biggest issue is actually going to be senior leadership. So I tend to see that senior leaders tend to be further along in the career path, and therefore older.

Speaker 1:

So when it comes to the study you referenced, they're in the second category for the most part. But I think the problem there is there's a tendency in a lot of organizations, particularly in nonprofit land, where the senior leaders say, oh cool, we have technology people for that.

Speaker 1:

This is not a technology conversation, this is a strategy conversation, this is a business conversation.

Speaker 1:

If you don't know that power tools exist, then you have a legion of carpenters who use hand tools to do your work and, all of a sudden, if somebody else next door knows that power tools exist and they train their workforce on how to use power tools, they are going to kick your ass.

Speaker 1:

So what you have to do is learn what the tools are and what their capabilities are. Just like in the first topic we said, hey, you can take that cornered resource of all your data and your content and turn it into something magically different, right, with unbelievable value creation. But if you don't know that that tool is capable of doing that, you cannot conceive of the business model or the strategy to execute that will result in that durable differential return we started talking about earlier in this episode. So to me, I see that as a major gap, which, of course, is where I'm focusing, trying to spend time with senior execs at these organizations and convince them that they personally have to learn what this stuff is about, even just at the capabilities level, not so much even using the tools. So there's a big gap, and I think it's my biggest concern for the association market and the nonprofit market in general.

Speaker 2:

I would say this is, yeah, by far the biggest technology change that I'll see in my professional career, or at least that I've seen thus far, I should say. Who knows what's going to happen in the future. So it is hard for me to think about, as you said, a few years from now, us no longer talking about prompt engineering and us no longer talking about AI, because it's so ingrained in what we're doing that it won't be worth a topic of conversation. That in itself is hard for me, I think, to think about. But going back to talking about the internet, which is an example we use a lot, how can I phrase this? Were there internet trainings for company staff? Was that something that was rolled out kind of in the same way that we're talking about?

Speaker 1:

Yeah, back in the day, you know, corporations would roll out how-to-use-a-web-browser training. That was a thing.

Speaker 2:

Wow.

Speaker 1:

Yeah, it was a thing. Of course, there were, like, you know, eight websites you could go visit on the internet at the time. So, you know, you'd learn how to use Yahoo and you'd learn how to use whatever. So, yeah, there was training on that. There were trainings that people would provide on a variety of things, and on the technical side, there were whole legions of people that would say, hey, I am an internet developer, I do web application development, versus something else, and most corporate applications were built as Windows apps back then, and doing web development was a different thing.

Speaker 1:

Now, when you say software development, people assume that you're referring to distributed internet-based development, unless you say otherwise. Right, so I think it's one of those things. It just becomes a tool in the toolkit. The pandemic accelerated this along with technology, but, like, video conferencing and other remote work style models, like what we're doing right now, it's just something that people have to adapt to. And, in any event, the study didn't surprise me. It does concern me, and I think that this is just like other technology shifts. The difference is it's moving so much faster, so we have less time to fix the problem.

Speaker 2:

Well, I hope that if Sidecar had existed back then, we would have been in the business of training associations how to use the internet.

Speaker 1:

Totally would have been yeah.

Speaker 2:

Everyone, thanks for tuning in to today's episode 46. We look forward to seeing you next week. Thanks for tuning into Sidecar Sync this week.

Speaker 1:

Looking to dive deeper? Download your free copy of our new book, Ascend: Unlocking the Power of AI for Associations, at ascendbook.org. It's packed with insights to power your association's journey with AI. And remember, Sidecar is here with more resources, from webinars to boot camps, to help you stay ahead in the association world. We'll catch you in the next episode. Until then, keep learning, keep growing and keep disrupting.