Sidecar Sync

AI and Exponential Growth, Protein Folding, and Predicting the Future of AGI | 39

• Amith Nagarajan and Mallory Mejias • Episode 39

Join us on this week's episode of Sidecar Sync as hosts Amith and Mallory dive into the fascinating world of exponential growth and artificial intelligence. From the historical context of computing power to the latest advancements in AI, we explore how these technologies are revolutionizing various industries, including healthcare and associations. Amith shares his insights on the future of AI, the concept of artificial general intelligence (AGI), and how associations can stay ahead in this rapidly evolving landscape. Whether you're curious about AI's impact on society or looking for strategies to future-proof your association, this episode is packed with valuable information and forward-thinking perspectives.

📅 digitalNow Conference: October 27th-30th in Washington, D.C. For more information, visit: https://www.digitalnowconference.com

🛠 AI Tools and Resources Mentioned in This Episode:
Claude 3.5 ➡ https://www.anthropic.com/
ChatGPT-4o ➡ https://www.openai.com/
AlphaFold2 ➡ https://www.deepmind.com/research/case-studies/alphafold
GroqCloud ➡ https://console.groq.com
Skip AI ➡ https://helloskip.com

Chapters:
00:00 - Introduction to Sidecar Sync
05:06 - Ray Kurzweil's AI History and Predictions
08:06 - Amith's Keynote on Exponential Growth
12:47 - Future AI Capabilities
17:19 - Multi-Agent Systems and Real-Time Applications
20:01 - AI's Impact on Healthcare and Other Industries
25:57 - Balancing Current Operations and Future Planning
28:55 - Removing Constraints with AI
34:17 - Future Predictions: AGI and Singularity
39:34 - Societal Changes and the Exponential Economy
42:04 - Opportunities for Associations

🚀 Follow Sidecar on LinkedIn
https://linkedin.com/sidecar-global

👍 Please Like & Subscribe!
https://twitter.com/sidecarglobal
https://www.youtube.com/@SidecarSync
https://sidecarglobal.com

More about Your Hosts:

Amith Nagarajan is the Chairman of Blue Cypress 🔗 https://BlueCypress.io, a family of purpose-driven companies and proud practitioners of Conscious Capitalism. The Blue Cypress companies focus on helping associations, non-profits, and other purpose-driven organizations achieve long-term success. Amith is also an active early-stage investor in B2B SaaS companies. He's had the good fortune of nearly three decades of success as an entrepreneur and enjoys helping others in their journey.

📣 Follow Amith on LinkedIn:
https://linkedin.com/amithnagarajan

Mallory Mejias is the Manager at Sidecar, and she's passionate about creating opportunities for association professionals to learn, grow, and better serve their members using artificial intelligence. She enjoys blending creativity and innovation to produce fresh, meaningful content for the association space.

📣 Follow Mallory on Linkedin:
https://linkedin.com/mallorymejias

Speaker 1:

Since 1939, computing power has increased by a factor of 75 quadrillion when adjusted for inflation. This means that for the same amount of money, we can now buy 75 quadrillion times more computing power than we could in 1939.

Speaker 2:

Welcome to Sidecar Sync, your weekly dose of innovation. If you're looking for the latest news, insights and developments in the association world, especially those driven by artificial intelligence, you're in the right place. We cut through the noise to bring you the most relevant updates, with a keen focus on how AI and other emerging technologies are shaping the future. No fluff, just facts and informed discussions. I'm Amith Nagarajan, Chairman of Blue Cypress, and I'm your host. Well, welcome back, listeners, to the Sidecar Sync. We're super excited to have you back. We have another really interesting discussion for today, all about exponentials and AI, a lot of fun things that we're going to dig into. My name is Amith Nagarajan and I'm one of your hosts.

Speaker 1:

And my name is Mallory Mejias. I'm one of your co-hosts and I run Sidecar.

Speaker 2:

And before we dive into the wild world of exponential technology and exponential growth, let's take a moment to hear a quick word from our sponsor.

Speaker 1:

Today's sponsor is Sidecar's AI Learning Hub. The Learning Hub is your go-to place to sharpen your AI skills, ensuring you're keeping up with the latest in the AI space. With the AI Learning Hub, you'll get access to a library of lessons designed around the unique challenges and opportunities within associations, weekly live office hours with AI experts, and a community of fellow AI enthusiasts who are just as excited about learning AI as you are. Are you ready to future-proof your career? You can purchase 12-month access to the AI Learning Hub for $399. For more information, go to sidecarglobal.com/hub. Amith, how are you doing on this lovely morning?

Speaker 2:

I am doing great. I was going to say it's pretty hot up here in Utah, but that's probably not true compared to what it's like in the South where you're at.

Speaker 1:

I will say it's kind of a nice morning in Atlanta. I don't even know the temperature. It's 83. I feel like I'll take that, an 83-degree morning. What about in Utah?

Speaker 2:

It was hot in Salt Lake yesterday. Up here in the mountains where I'm at, it was, I think, 94, 95, but it is dry, so that helps a little bit. And the nice thing about the low humidity is that in the evenings it cools down really quick. So by the time the sun was setting and I took my dogs out for a walk, it was in the 70s. This morning when I got up, it was in the 60s. So there are times of the day that are nice, even when it gets hot out here, for sure. Yeah, comparatively speaking, normally the high temperatures are maybe in the upper 80s in the mountains, which we really enjoy.

Speaker 1:

So a little bit on the hot side. Yeah, I feel like it's been, dare I say, maybe a slower few days, weeks, with AI. I don't know. I feel like there haven't been any major drops in maybe the last two weeks. Do you agree with that?

Speaker 2:

I mean, yeah, I think there's been a lot of technical stuff going on, as there always is. There are some interesting projects being discussed from a research paper perspective, but nothing that we'd really surface on the podcast.

Speaker 2:

I think that, you know, after the Claude 3.5 Sonnet drop a couple weeks back, a lot of people have been talking about that. I personally haven't been, like, super excited to come to the pod and say, hey, there's this new tool that everyone in the association community should check out. I will say, actually, just one last thing on the weather. I don't think this is because of AI, because I don't believe the AI weather models we've talked about, both the Microsoft Research one and the one from DeepMind, have gotten into actual commercial use. But I'll just say that, you know, the weather forecasts seem to be getting a lot better at predicting what things are going to look like. Like, I knew a week and a half ago that it was going to be hot for three days, and it's hot for these three days. So I think technology in general is getting better. I can't wait for those AI models to tell me exactly what the weather is going to be in my little dot on the map.

Speaker 1:

Exactly. I think that's a really exciting time. You know, my mom would probably hate me saying this I don't pay that much attention to the weather. She's normally the one back when I was living in Louisiana who would tell me you know, Mallory, be on the lookout, there's a big storm coming. So now that I don't have her here in Georgia, I'm realizing I need to pay a little bit closer attention to the weather.

Speaker 2:

Yeah, I think, certainly in New Orleans, you have to pay attention to it from just a safety perspective.

Speaker 1:

Right.

Speaker 2:

When I'm in New Orleans, honestly, I don't do a great job of paying attention because I just wait for, like, my phone to start freaking out because there's some kind of tornado alert or whatever as we do.

Speaker 2:

But up here in Utah, the reason I come out here is I just love the outdoors and the mountains and the hiking and the mountain biking and the skiing in the winter, and so I'm always checking the weather because I want to know when storms are coming in and plan, like, mountain bike rides and plan when I'm going to go out on the lake and all this kind of stuff. So I'm super into the weather when I'm up here. When I'm in New Orleans, I don't really pay as much attention.

Speaker 1:

You just wait for those storm alerts, flood alerts, to hit the phone.

Speaker 2:

I just try to keep the family safe, pretty much, because in New Orleans, I mean, there are a few months of the year where it's nice, but there's a good bit of the year where it's just, like, stay indoors.

Speaker 1:

Well, today we have an exciting episode lined up. Our episode is inspired by a fascinating TED Talk that Amith shared with me, called The Last Six Decades of AI and What Comes Next, by Ray Kurzweil. Kurzweil, a pioneering computer scientist and futurist, has been involved with AI for an impressive 61 years. He's written several books, including The Singularity Is Near in 2005 and The Singularity Is Nearer this year. I thought that was a joke from the TED Talk, but he actually did just release a book with basically the same name. In this talk, he takes us on a journey through the history of AI, from its inception to the present day, and offers some predictions for the future. Amith, I think you've read The Singularity Is Near, right?

Speaker 2:

That's right yeah.

Speaker 1:

Yeah, is it a good book?

Speaker 2:

Yeah, I'd recommend it. I think when you read something from somebody like Ray Kurzweil, but specifically him, you're going to learn something. So whether you buy into all of his predictions or not, it doesn't really matter. Actually, you can learn a lot just by kind of studying the thinking that goes into a book like that.

Speaker 1:

Yep. So topic one today, we're going to be talking about exponential growth and kind of how we got to this moment. Then topic two, we're going to be talking more about AI today and in recent times. And then for topic three, we're going to talk about some of those future AI predictions and societal changes. Starting with exponential growth: this is something, Amith, you've talked about at length and that we've talked about on the podcast.

Speaker 1:

AI has been a concept in the world since the 1950s, but recent advancements in compute power have made it feel almost like a new phenomenon. The ideas behind AI have existed for decades, but only now do we have the computational resources to make many of these ideas a reality. It seems that we're living in an AI boom. Since 1939, computing power has increased by a factor of 75 quadrillion when adjusted for inflation. I had to Google that as well because I thought perhaps it was a mistake, but that's real: a factor of 75 quadrillion. This means that for the same amount of money, we can now buy 75 quadrillion times more computing power than we could in 1939. This trend has remained pretty consistent over the past 85 years, doubling approximately every two years, a phenomenon known as Moore's Law, which we've also talked about on the podcast.
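A quick sanity check on those figures (treating 1939-to-present as roughly 85 years is our assumption): a 75-quadrillion-fold gain actually implies a doubling time somewhat under two years, while a strict two-year doubling over the same span would "only" produce a few-trillion-fold gain. A few lines of Python make the arithmetic concrete:

```python
import math

factor = 75e15   # the 75-quadrillion-fold price-performance gain since 1939
years = 85       # 1939 to roughly the present (an assumed round number)

doublings = math.log2(factor)             # ~56 doublings
implied_doubling_time = years / doublings

print(f"{doublings:.1f} doublings, i.e. one roughly every "
      f"{implied_doubling_time:.2f} years")

# For comparison, a strict two-year doubling over the same span:
print(f"2-year doubling over {years} years gives a factor of {2 ** (years / 2):.2e}")
```

Either way the qualitative point holds: any doubling cadence in this range compounds into numbers that linear intuition can't track.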

Speaker 1:

The surge in compute power is what's fueled AI's recent breakthroughs, and the key here is that the growth is exponential, meaning that the rate of advancement continually accelerates, bringing exponential leaps rather than incremental improvements. So, Amith, this got me thinking a lot about your keynote from last year at digitalNow. You have been giving lessons, I mean, sessions, on this idea of exponential growth and on Moore's Law for a while. I'm wondering what initially sparked your interest in that. Like, how many years are we talking that you've been really tied into Moore's Law?

Speaker 2:

I mean, it's hard for me to actually answer that question because it's been most of my life. I've been a computer geek since I was a little kid, and just kind of witnessing what happened through the 80s and 90s and the last couple of decades in this century, it's just kind of mind-boggling, you know, when you think about how far we've come in such a short period of time. In the grand scheme of things, if you think about the course of human history, and you think about the last, you know, 80-ish years that you described, and even the last, like, 40 years, it's truly remarkable. To me, a key insight that needs to be extracted from all this is how to think about the future. When you think about where we are today, we as a species have evolved to think in linear terms. So we're not thinking about exponential growth. We're thinking, well, that which will come tomorrow is likely to be similar to that which has come today. But in reality, with exponentials, it's so different.

Speaker 2:

So you have to think, you have to force yourself to use a different frame of reference when it comes to planning, whether it's personal and certainly for business, and we're going to dig into that in this episode, I know, but to me that's the big thing that I'm interested in is not even so much the technology, because I love technology. I find that interesting. But my interest was just like looking at it and saying what can we build with this stuff? You know, when I first started getting into computer software even before professionally I was just amazed by it. And then, once I started my first software company, just experiencing what we could do year over year, things that were not possible one year became possible a year or two later, and that's pretty exciting. So that's how I got into it.

Speaker 1:

For sure. I'm wondering, and like you said, we'll get into this in terms of business later in this episode, but this factor of 75 quadrillion, I mean, you're more of a math person, so that probably packs more of a punch for you. But for me, the difference between 75 quadrillion and one quadrillion, I can't even wrap my brain around that. So my question to you, and not that you may have the perfect answer, is: what is an average person supposed to do with this information? How could you possibly plan back in 1939 for a factor of 75 quadrillion?

Speaker 2:

I mean, you can't, because it's essentially, it's almost like a number like infinity versus zero. So if you say back in 1939 computing power was close to zero, and now computing power is dramatically greater, you know, that's why the factor of growth is so high, because when you divide something by a number that's close to zero, the result ends up being very, very, very large. So, kind of like with limits, you'd say, okay, well, that's basically approaching infinity. In reality, of course, if you shift the time horizon a little bit and say, well, let's look at what's happened in the last 10 years or the last 40 years, the number is not quite as big, but it's still super impressive.

Speaker 2:

I think one of the stats from the executive briefing I give on AI talks about how, since the late 90s, we've had about an 8-million-times increase in computing power for the same amount of money, and so that might be a little bit more conceptually, you know, useful, but it's still a massive number, right? I mean, 8 million times the computer I used to start my first software company: 8 million times more power is packed into the, actually now an older, iPhone that my kids walk around with, right? So that's a hard concept to get your head around. But what you have to do is say, okay, what are my assumptions with my business? What am I assuming I can do? What am I assuming I cannot do? And, based on what's happening with this arc of growth, with computing, with AI, with everything, what is likely to actually be true? And retest your failed experiments, right? So we might say, oh, okay, well, a year ago we tried to do something with ChatGPT and it didn't work. Therefore, ChatGPT doesn't do what I want it to do.

Speaker 2:

And maybe in the 1990s, if you tried to use Microsoft Word for something and you said, oh well, you know, I tried to get Microsoft Word to do X and Microsoft Word couldn't do it, it would be reasonable for you to say a year later, well, no, Microsoft Word doesn't do that, because I still have the same version of Microsoft Word. Right, I might have had Word version, you know, 10 or whatever it was back then, and Word version 11 didn't come out for years. But things are moving so much faster now, and software and computers are getting so much more capable. You have to reassess these results and your assumptions, and that, I think, is perhaps the most important lesson about exponentials: because you can't look around the corner that well, you have to retest your assumptions more frequently.

Speaker 1:

Given the trajectory of compute power growth, what AI capabilities do you anticipate becoming feasible soon that could impact associations? I know you've talked a lot about autonomous agents, so that's where my mind goes, but is there anything else that's on your horizon?

Speaker 2:

I think that if you were to look at the current AI capabilities that we have right now in the frontier models, like, I know we've talked about Claude 3.5 Sonnet from Anthropic, and obviously GPT-4o, or 4-omni; those are the latest models from a couple of leading vendors. They're both amazing, right? But they're both expensive and they're both slow. And I say they're expensive and slow in a relative sense. They're basically super cheap and really fast compared to what you would have done two years ago to get the same output. But our expectations continually go up.

Speaker 2:

Humans are insatiable in terms of their demand for things. If you use something and it takes 10 seconds, you want it in one second. If it takes one second, you want it instantly. It's like web page load times. If a web page takes longer than a second to load, I think something like 50% of the people on that page leave. Back in the 90s and 2000s, if a web page loaded in under five seconds, it was considered phenomenally fast.

Speaker 2:

So the reason I raise all that is, what I think about is, how do you go about doing more real-time applications with the capabilities we currently have? Capabilities are going to grow, so we can come back and talk more about advanced reasoning capabilities with multi-agent systems that we can't do right now. That's exciting. What I think is super exciting, though, is that every time a new generation of smaller models comes out, they tend to have performance similar to the last generation of big models. So, put another way, if you compare GPT-4 Turbo, which was released, I believe, in November of '23, not that long ago, to Llama 3, the 70-billion-parameter model, which is, I think, about an eighteenth of the size of the original GPT-4, sorry, not the original GPT-4, but the GPT-4 Turbo model, it's comparable in terms of its benchmarks, yet it inferences, or runs, about 10 times faster because it's so much smaller, and it's free. And also, by the way, it runs on GroqCloud. We're big fans of Groq, G-R-O-Q. If you check out groqcloud.com, these guys have a proprietary, different style of hardware we've talked about in a prior episode which is literally 10 times as fast for inferencing any LLM. And I share all this because, in answer to your question, I get excited about these kinds of instantaneous real-time applications, whether it's for audio and video or even for text. But part of the reason I get excited about it is that people are missing a key ingredient in understanding what AI systems can do, even if they don't get any smarter. Everyone's trying to solve for the same thing.

Speaker 2:

Does the model, by itself, zero-shot, meaning just send it a prompt and hope for a good result, produce the right outcome? Does ChatGPT-4o give you the perfect answer on the first attempt? And we want the models to get better at that, right? We want them to get a lot better. But with multiple agents, or multiple iterations even with the same model, you can get unbelievably great responses with current-state technology. You don't need better AI models to solve a lot of current issues.

Speaker 2:

People look, for example, at some of the things that one of our AI products, Skip, which is an AI business analyst and AI data scientist, can do. People look at what Skip can do and they're like, how in the world did you make GPT-4o do that? And we use GPT-4o, we also use Sonnet, we use some other things in there. Well, the reason is because Skip is a multi-agentic solution. The downside to Skip is, every time Skip creates a report for you, it takes between 20 and 40 seconds. Well, what if that was instant, right? And Skip could also be way smarter. Right now, I think Skip on average does two or three passes when formulating a response, using the intelligence from the models in combination, essentially. But if these things were instant, then Skip could do 10 or 15 or 20 passes and produce a way better answer, even if the models got 0% better. So I think that's a key point: speed is actually a key ingredient in improving capability as well.
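The multi-pass idea described here can be sketched as a simple loop. This is a hypothetical illustration of iterative refinement in general, not Skip's actual implementation; the three agent functions are stubs standing in for LLM calls.

```python
# Hypothetical sketch of multi-pass refinement: draft an answer, critique it,
# revise, repeat. In a real system each pass would be an LLM call taking
# seconds, so per-pass latency caps how many passes you can afford.

def draft(task: str) -> str:
    return f"Draft answer for: {task}"

def critique(answer: str) -> str:
    # A real critic agent would point out concrete flaws in the draft;
    # this stub always asks for the same improvement.
    return "add a chart and cite the underlying query"

def revise(answer: str, feedback: str) -> str:
    return f"{answer} [revised: {feedback}]"

def answer_with_passes(task: str, passes: int) -> str:
    answer = draft(task)
    for _ in range(passes):
        answer = revise(answer, critique(answer))
    return answer

# At roughly 10 seconds per pass, 3 passes cost about 30 seconds, in line
# with the 20-40 second figure quoted above; near-instant inference would
# make 10-20 passes practical for the same wait.
print(answer_with_passes("monthly member retention report", passes=3))
```

The point the loop makes is the one in the conversation: holding model quality fixed, faster inference buys more passes, and more passes buy better answers.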

Speaker 1:

Can you clarify, too, what you mean by a multi-agentic solution when you mention Skip?

Speaker 2:

Yeah, and the most basic way to understand it is: multi-agent systems are basically AIs talking to other AIs to produce an outcome. So I might say, I want a member retention forecast. In my AMS I have a bunch of data, and I have data in other systems. I want an AI to gather all that data. I want that AI to run machine learning models against that data and give me a really good forecast of which of my members are likely to renew and which ones are not. So it's a classical machine learning type problem. It's a data science issue. It requires data gathering, it requires some data analysis, it requires running models, and then it requires actually taking the output of running those models, analyzing it, and producing a report. So there are five or six or seven steps, right, depending on how you break it down.

Speaker 2:

A multi-agent system would say, hey, each of those steps, or each of those different pieces, is a quote-unquote agent. It's basically an AI that's been prompted or trained in a certain way, and that AI works with other AIs. And then there's a supervisor AI agent, much like you'd have a supervisor in a company, asking the different AI components or agents to do particular small tasks. So it's the idea of decomposing a larger problem into smaller problems, right? We've been doing that in the world forever. We take a big problem, like build a bridge. Okay, well, how do you build a bridge? Well, there are lots of steps to building a bridge.

Speaker 2:

You don't just build a bridge. How do you build a complex computer application? You break it down into small chunks. Same idea. That's all a multi-agent system is, and it's not really that novel of a concept, but people are realizing they can combine traditional software engineering techniques with AI and get incredible results. And in the context of this discussion, the reason I'm pointing at it is: multi-agent systems are slow and expensive right now. That's going to change. It's going to become super cheap and basically real-time.

Speaker 1:

So what you're on the lookout for in the next few months to few years is the speed of these systems going up, costs going down, and more real-time applications.

Speaker 2:

Totally. And then at the same time, obviously, the progression of both scaling laws plus algorithmic advancements is going to yield fundamentally smarter models too. I just like to make the point that even if you don't believe the AI models are going to get smarter, which I don't think there's good data to support that belief, there's probably 10 years' worth of engineering we can do with the current models, maybe longer, to extract every ounce of opportunity out of them. There's so much more. We've barely scratched the surface with the stuff that we have in our hands today.

Speaker 1:

That's a great point. Moving on to topic two, focusing more on AI today and recently: we know that AI is currently transforming various aspects of business. From chatbots to predictive analytics, personalized recommendations, and process automation, AI is reshaping how companies operate and interact with customers, and, in the case of many of our listeners, members. However, AI's impact extends far beyond business. It's driving innovation and breakthroughs across numerous industries, often in ways that aren't immediately visible to the public if they're not looking for them. One area where AI is making profound changes is healthcare and medicine, a striking example of that being vaccine development. Traditionally, researchers would select a few promising mRNA sequences out of billions of possibilities and then conduct time-consuming clinical trials to find the best one. The process, as you can imagine, would take months or even years. In contrast, Moderna used AI simulations to design their COVID-19 vaccine in just two days, by simulating the reaction of billions of mRNA sequences and identifying the most effective one. Another example is protein folding research, which we've covered in previous episodes of this pod. An AI system called AlphaFold 2 mapped 200 million proteins in just a few months. To put this into perspective, humans had mapped only about 190,000 proteins as of 2022. And that was AlphaFold 2. We should mention that we just talked about the release of AlphaFold 3 recently on the podcast as well.

Speaker 1:

It's important to break out of the business bubble sometimes and see how AI is accelerating drug discovery, enhancing personalized medicine, and potentially leading to cures for diseases that have long eluded us. Amith, I feel like with this topic, particularly when we talk about how AI is impacting specific professions or industries, it makes me think that we're looking at AI in two ways, and it's not so black and white, but this is what I think. One, it has the power to transform an association's business operations and offerings. And two, it has the power to completely alter their members' professions or industries. Do you see one of those as more urgent than the other?

Speaker 2:

Well, I don't know that one is necessarily more urgent than the other in terms of getting awareness. I think people need to look at those externalities, understand what's happening in the broader sense of AI, some of the topics you just touched on, and then ask how that will affect their world. How does it affect their profession, their industry? Associations have got to get on top of understanding that impact, because some industries are going to be radically affected, others more subtly, and I think the learning curve to figure that out is actually not that different from the learning curve to figure out how you can apply it to your own business. When we do roadmap work, helping associations and nonprofits plan out what the next couple of years look like from an AI adoption perspective, we spend a lot of time helping people think through the externalities, taking a fresh look at their industry, taking a fresh look at how AI might affect the key value components of what their industry does, right? So what is it that the industry is doing? Where does it provide value? How will it be disrupted, not if, but how will it be disrupted, by AI, and over what time scale? And the reason we do that is more than just the exercise itself, which is useful on its own: associations should, in turn, take those insights and use them for education, and work with their members to help their members realize these things, as well as to learn from their members, obviously. But the other side of it is that you have to build what will be relevant in that future.

Speaker 2:

Your current educational offerings, your current conferences, whatever it is that you're investing in, may be irrelevant, right? It may be completely irrelevant. It might need major updates. So working on optimizing something that isn't going to be useful in two years is something you should question. It doesn't mean you don't do it. In some cases, using AI to optimize existing processes, even when you know those processes will be obsolete, is still worth doing, because it both trains you a little bit on AI and means that for the next couple of years those processes will be radically more efficient. So that's great, but don't create a situation where you've hyper-optimized and made efficient a process that doesn't make sense anymore.

Speaker 2:

Right. It's like if I had the most efficient factory for producing, you know, let's say, saddles for horses before cars came out, and demand for horse saddles dramatically declined. I might be the best in the world at making them at scale, but my business is not going to do too well, because the demand for that particular category of product has been displaced by technology, essentially. So you have to look at it from both the externalities and the internal side. So I don't know that I can answer the urgency question, Mallory, that one is more urgent than the other, but I think people have to think about both in parallel, and they have to be pragmatic. People get too married to the idea of, like, oh, this is the process I'm going to put in place, and it's going to be beautiful, and it's going to last for 10 years. The reality is that, you know, most of your processes probably won't last 24 months.

Speaker 1:

And with innovation happening so quickly and at the scale that it's occurring, how do you recommend associations keep an eye on what's happening in their sectors and also discern what's here to stay and what might change? I guess my question is, and maybe you know this more from the roadmap work, a lot of this is a guess. I mean, we're seeing these advancements come out; that's not a guess. But in terms of where we'll be in two, three, five years, that feels like a guess. And so how can you prepare for that without, you know, completely throwing out something that's working for you right now?

Speaker 2:

I think it's important to have a really keen sense of finding the constraints in a system. A constraint would be like a choke point based on a limited resource. Often it's labor, right? So if I say, how can I deliver better healthcare if doctors can only spend an average of five or 10 minutes per patient visit in large healthcare systems? How do I improve healthcare, and improve personalization of healthcare and, ultimately, patient outcomes, if I have such a rare, limited resource in doctors? And so, you know, everyone wants better healthcare, everyone wants better patient outcomes, yet we have this constraint. So one way to look at it is to say, okay, well, AI is really good at answering a lot of basic questions, is able to even potentially do a preliminary diagnosis on whatever it is that's ailing an individual, or perhaps even look ahead and think about how to help a person improve what's already reasonable health. If an AI can complement a human doctor, that's interesting. And then you can say, okay, well, imagine that. What if we had unlimited doctors, right? How would we deliver healthcare? That's a way of removing a constraint from a system and saying the value delivery is better healthcare, better patient outcomes.

Speaker 2:

The constraint I'm talking about is the number of hours of available, qualified care: the number of doctor hours, basically, or nurse hours. What if we imagined we had an unlimited number of those hours? What would we do? That's what I'd ask people to do: brainstorm what happens if the constraint somehow magically got removed. AI doesn't magically remove the constraint, of course, at least not current-generation AI, and probably not for the next 10 years.

Speaker 2:

But that's not even the point. What if we could automate so many of the steps that human doctors could really spend more quality time with patients? Because if you break it down (and this is true for educators as well), the amount of time educators spend actually educating, working one-on-one or with a small group, or the amount of time a doctor spends with the patient, is not the substantial majority of their day. A lot of the time they're doing other things. So how can we get rid of those other things and make them more effective?

Speaker 2:

So to me, that's the opportunity for any business: look for those constraints and imagine a world without them, or with those constraints significantly lessened. That gives you the creative license, if you will, to envision how you're going to use these technologies differently. That's the new process, as opposed to, oh, I want to optimize the scheduling of getting people in for their 10-minute visits with their doctor. How can I use AI to reduce the number of no-shows? How can I use AI to get people in and out faster? Those are not bad things. But if we can solve the bigger issue, which is the actual scarce-resource constraint, and that's what's exciting about AI to me, that changes the world.

Speaker 1:

That's a great way of thinking about how AI is going to impact a lot of industries and professions. With the AI roadmaps, how far ahead do you look? What do you suggest is the cap on how far ahead you should be planning right now?

Speaker 2:

Generally speaking, we tell people not to formally plan anything beyond 24 months. That doesn't mean you don't think about what's going to happen beyond a 24-month time horizon, but two years is a really tough period to plan for in any significant way. Honestly, even beyond 12 months is tough, because, remember, AI is on a doubling curve that makes Moore's law seem quite modest, and Moore's law, a doubling of compute roughly every two years for a long time, is itself an amazing concept. But AI is on a roughly six-month doubling of training data, which roughly tracks how powerful these systems are. So over 24 months we're going to have four doublings, and that's an extraordinary increase in power. It's very difficult to forecast what we're going to do with our business and envision that. Even those of us who spend all of our time thinking about this stuff must be intellectually honest and say we don't know how to forecast at any level of granularity beyond two years.
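The arithmetic behind that comparison can be sketched as a quick back-of-the-envelope calculation. This is a simplified illustration only; it assumes a clean exponential with the doubling periods mentioned above, which are rough figures, not precise constants:

```python
# Back-of-the-envelope: how many doublings fit in a planning horizon,
# and what growth factor that implies. Assumes clean exponential doubling.

def doublings(horizon_months: int, doubling_period_months: int) -> int:
    """Number of complete doublings within the horizon."""
    return horizon_months // doubling_period_months

def growth_factor(horizon_months: int, doubling_period_months: int) -> int:
    """Total multiplier after all complete doublings."""
    return 2 ** doublings(horizon_months, doubling_period_months)

# Moore's law: roughly one doubling of compute every 24 months.
print(growth_factor(24, 24))  # 1 doubling over two years -> 2

# The AI curve described above: roughly one doubling every 6 months.
print(growth_factor(24, 6))   # 4 doublings over two years -> 16
```

So the same two-year window that yields a 2x gain under Moore's law yields a 16x gain on a six-month doubling curve, which is why a fixed long-range plan ages so quickly.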

Speaker 2:

So, two years is the answer to your question. That's what we like to focus on for roadmaps.

Speaker 2:

And by the way, it's a rolling two-year roadmap. You don't just say, hey, here's the two-year roadmap, cool, let's go do this. Instead, every quarter you incrementally update it, because every quarter you learn more. Every quarter the curtain opens a little bit wider in terms of seeing what's going to happen next.

Speaker 1:

How far are you realistically looking ahead? Are you actually looking two years ahead, or are you more looking at the next six to 12 months, thinking, this is what I expect we can do, and kind of leaving that next year open?

Speaker 2:

When it comes to building software, we are looking a couple of years ahead, thinking about what will likely be possible, and we're working out how to build the software now to anticipate those smarter, more capable, cheaper, and faster models, so that they just snap right in. Skip, going back to that example, is a perfect one. Skip is now on version two, which is going to be released at the end of July, based on some really cutting-edge multi-agent concepts, things that would not have been possible even six months ago, certainly not 12 months ago.

Speaker 2:

We started working on Skip originally about 18 months ago, and we knew very well we wouldn't be able to do a whole lot with the first beta version. Even version one, which we released last summer, was super limited, but we designed the architecture so that we could plug in future capabilities. There's a limit to what you can do with that, because when something totally, radically different comes out, you have to rethink the architecture. So in my role, I look kind of broadly across these things. I do look through the narrow lens that you need for specific products, but I'm trying to think about how we build a product portfolio that will solve the systemic problems and opportunities for associations even five years from now, while making investments now in products and services that will go to market, be profitable, be successful, create tremendous value, and pave the way to those future things that we can't really fully visualize.

Speaker 1:

And this makes me think of a point you've brought up several times: the importance of having a layer in between the thing that you're building and the model that you're using, so you can plug and play based on whatever releases we see in the future.

Speaker 2:

Exactly.
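That "layer in between" can be sketched as a small adapter interface: the product codes against an abstraction, and concrete model providers plug in behind it. This is a minimal illustration of the design idea, not how Skip or any particular product is actually built, and every class and method name here is hypothetical:

```python
# Sketch of a model-abstraction layer: product logic depends only on a
# small interface, so a newer model means one new adapter, not a rewrite.
from abc import ABC, abstractmethod

class ModelAdapter(ABC):
    """Everything the product needs; nothing provider-specific leaks through."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class FakeLocalModel(ModelAdapter):
    """Stand-in implementation; a real adapter would call a provider's API."""
    def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"

def summarize(adapter: ModelAdapter, text: str) -> str:
    # Product feature written against the interface, not a vendor SDK.
    return adapter.complete(f"Summarize: {text}")

print(summarize(FakeLocalModel(), "exponential growth"))
```

When a smarter or cheaper model ships, you write one new `ModelAdapter` subclass and the rest of the product is untouched, which is the "snap right in" property described above.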

Speaker 1:

That is a good segue to our third topic of the day: future AI predictions and societal changes. Based on what we've seen thus far, AI is expected to continue its rapid advancement, with some experts predicting we'll soon achieve artificial general intelligence, or AGI: AI that matches or exceeds human intelligence across a wide range of tasks. In 1999, Ray predicted we would reach AGI by 2029, and he is still standing by that, so I guess we've got a few years to see how that plays out. One intriguing future concept he brought up in the TED talk that inspired this episode is the idea of longevity escape velocity. This suggests that scientific progress, largely driven by AI, will advance fast enough to extend human lifespans by more than a year for each year that passes, potentially leading to increases in human longevity. There are also predictions about brain-computer interfaces becoming commonplace by the 2030s, potentially allowing direct connection between our brains and AI systems. And then some say that by 2045 we might experience an event called the singularity, where AI could lead to an explosion in human intelligence, dramatically enhancing our cognitive abilities, creativity, and problem-solving skills.

Speaker 1:

I mean, this is another one for me that's just hard to grasp. You think AGI, you think singularity. I feel like these are bold predictions, but none of them really surprised me. I want to hear your take on the AGI one. Well, first, can you explain to our listeners what AGI is? I feel like it's been a minute since we've done that, and then I want to hear your thoughts on 2029.

Speaker 2:

AGI generally refers to a system intelligent enough to be at or above human performance in pretty much all domains. So, rather than being narrowly focused and really, really good at a particular skill, AGI is capable across all these domains. And, critically, AGI also has the ability to reason, plan, and execute actions. An AGI system is basically like a multi-agentic system with significantly greater reasoning skill. The tools we have today are facsimiles of reasoning.

Speaker 2:

You think you're getting reasoning out of Claude or out of ChatGPT? We have to remember that these are still very simple systems that are next-token, or next-word, predictors. They essentially simulate reasoning because they're so good at predicting, but they're not checking their work, looping back, and saying, hey, did that make sense? Let me think about that. When you write an outline for a blog, you might say, well, let me think about what might make sense there. Then you read it again and go, well, maybe I'll change this. Then you start writing, and maybe you change the outline, because once you start working on it you realize the outline wasn't great, and you go back and forth. It's much more of an iterative process, and these systems don't do that right now. So an AGI system has to have an iterative, multi-step planning and reasoning capability. How long will it take to get there? First, I'll say this about Ray Kurzweil and why I think he's been accurate so often: he's looking at the exponential curve. In 1999, I don't think he just, you know, drank a bunch of beers and said, hey, it's going to be 30 years from now. The guy was looking at what's happening with the doublings of compute. And 2029 is, what, five years from now? So that's quite a bit of time to get there.

Speaker 2:

I think the systems we'll have in 2029, whether people say we've crossed the finish line to AGI or say it's not quite AGI, are going to be ridiculously smart compared to what we have now, whether through multi-agentic approaches, which will definitely be part of the solution, or just dramatically smarter, faster models. So that's the basic concept of AGI. And when we talk about something like escape velocity for longevity, I think it's an interesting concept: you can add more than a year of life for each year that you live, and therefore you can extend your life indefinitely. There are all sorts of interesting conversations that can come from that. The more important thing to me is that it's likely in our lifetimes we're going to see significantly longer lifespans, whether or not to the extent of this so-called escape velocity. That affects everything: resources, healthcare, education, our models of the economy, things like retirement and social security. It also raises the question of healthspan, not just lifespan. Who wants to live to 120 if you're miserable for the last 40 years of your life? You're going to want a high quality of life. And what are people going to do, especially as AGI systems come online, become smarter and smarter, and can replace more of the basics? What happens?

Speaker 2:

One of the things I spend a lot of time thinking and talking about in executive briefings and keynotes is the idea of the exponential economy, which actually very closely parallels technological progress. If you go back over the last thousand years and look at total economic output globally, you'll see that the inflection points are when major technological shifts occurred: the printing press, the steam engine, and obviously digital technology. And we're seeing a further acceleration of that. We've gone from a $1 trillion to a $10 trillion to a $100-plus trillion global economy at really crazy speed. So what we're going to see as we go forward is further growth driven by demand. We have to remember, especially in the developed, rich world, that most of the world doesn't live the way we do. Most of the world has incredible resource constraints, and even in the rich world we have a lot of inequality. We also have a lot of opportunity.

Speaker 2:

There's a lot of untapped intellectual potential in the eight-plus billion humans out there. Education, healthcare, it's going to change all of that, so there's a lot of opportunity, whether or not it has to do with an individual human lifespan being 150 years. But what about how this stuff will affect the billions of people who have horrible and short lifespans, and how that changes because of access to healthcare and access to this technology? That's a much more interesting conversation in my mind. And what's the net impact of all those people having their basic needs met, coming online, and contributing their intellectual capability and their creativity, which I think is going to be a domain where humans continue to excel, compared to machines, over that time span?

Speaker 2:

So that's what I get excited about. As for the sci-fi-ish realms like brain-computer interfaces and singularity possibilities, I think these are all things that are going to happen over time; I just don't know when. What matters a lot more right now is how we think about the world and society, and how we plan for even the next five to 10 years. I think there's a very strong likelihood we're going to have an AGI-like system in the next five years, or certainly within 10.

Speaker 1:

Will an AGI system be multiple agents in one, kind of like the concept you explained with Skip, or will it just be a single system? I don't know if those two things are even different from one another.

Speaker 2:

I mean, ultimately you're going to think of it, and interact with it, as a singular thing. When you think about your computer, you think, I have a laptop. You don't think about the fact that it has an operating system, a web browser, hardware, electrical components, and a screen. It's just a singular entity to you, and it does stuff. The utility it creates comes through that singular concept. I think AI systems are going to be like that to a large extent, and then people are going to stop talking about AI, because AI is going to blend into the background. It's a general-purpose technology that's just assumed, just like we assume the internet, mobile phones, digital technology, or even things like language. We don't talk about how cool it is that we've come up with language as a species. It's pretty badass, actually, but we don't talk about it anymore. We're not going to talk about AI forever the way we are now. It's going to be a big deal to talk about for quite some time, but it will blend into the background as an assumed general-purpose technology. It's going to change everything, but the biggest problems we have to solve are: how do you drive equitable access to this stuff throughout the world? How do you deal with the impact on society when jobs are displaced and you can't retrain everyone fast enough? What do you do with those people? And what happens to the associations that represent sectors that no longer have a workforce? Those are the big questions with AGI-type systems. We're in the early innings right now, so it's important to get out there and learn this stuff, because it's going to affect your space in some way.

Speaker 2:

I will say this just to conclude those remarks: I am super optimistic about the future of associations, not just in the defensive mindset of can they exist, but in the opportunistic mindset of what can they do. Associations are fundamentally about connecting us. It's about how you bring together people who are like-minded or have a common goal, how you help them do that better, and how you help them have a bigger impact. That's what associations are about, and more than ever, in a world of AGI, we will need to connect in deeper and more meaningful ways. That's where I get excited, because we'll have more time to do that, and better opportunities to do it in more meaningful ways, because AI will help us. So I think associations could actually be some of the greatest beneficiaries of all these changes.

Speaker 1:

That's a great point. Even reading through this and researching the topics for today, Amith, this one made me a little scared, just thinking of AGI and the singularity, and how we can prepare for that. How can we envision what things will look like in 10 years? We just can't. But I think you've hit the nail on the head that we will continue to seek out things that make us feel more human and more connected, especially in this AI landscape, and associations have an excellent platform to do that and to bolster it. I'm wondering what you think of this idea you've mentioned to me before, of reinventing what associations are, thinking exponentially. I think you might be speaking about that at digitalNow this year, which is October 27th through 30th in DC. You've mentioned things like taking down geographic barriers, or opening up membership to adjacent professions. Can you talk a little bit about what it might mean to reinvent an association moving forward?

Speaker 2:

Yeah, I think you have to zoom out past those barriers, whether they're the artificial constructs we have, where someone is in profession A or profession B even though the two are really similar or really close to each other, or the geographical ones, since historically it's been hard to deliver value at a distance. I think associations should think more broadly about what they're fundamentally good at and where they can deliver value, ultimately aligning with some deeper purpose. In the book I wrote back in 2018, The Open Garden Organization, I talk a lot about purpose and how it's the rooting of the culture. You need to get your purpose statement, your core purpose, right to really describe your impact on the world. This idea is so critical now because it's a navigational beacon, in a sense, where you're saying, hey, I'm always going to be heading toward that goal. How can I do more of that? How can I improve patient outcomes? How can I improve the reliability and health of our financial systems? How can I make construction safer? How can I improve drug discovery? These are the things that might sit at that fundamental, lower-level purpose statement and act as a way of foundationally aligning your thinking, even as everything changes around you. So, to your point, is the best conduit for delivering that value your traditional membership base, or should you include adjacencies? If your goal is to improve patient outcomes and all you've ever done is focus on a super narrow medical specialty, could you improve patient outcomes even more by making your content more inclusive, by using AI tools to reach other audiences that might be able to help, whether that's the patients, their families, adjacent medical professions, or even people in back-office roles? I think associations do need to think in much broader terms.
And revisiting that purpose matters: if you don't have a clearly defined core purpose statement, I really encourage you to consider exploring that.

Speaker 2:

A lot of associations say, oh, we have a mission statement. But the mission statement is usually this super long thing no one remembers. It generically says something like, our mission is to advance the profession by doing X, Y, and Z. Those mission statements tend to be forgotten because they're long, and when you read them, they're not inspirational; they tend to be not only boring but also fail to actually convey the point. If all you're really doing is making the profession better at what they do, whether that's doctors, lawyers, or accountants, then to someone who's not in your profession, who cares? Ultimately, what matters is why someone outside your profession would care about the impact you're having.

Speaker 2:

There are some great resources out there. As I mentioned, the book I wrote in 2018, The Open Garden Organization, contextualizes purpose for associations. And, of course, there's Jim Collins' body of work, some of my favorite business writing, starting with Built to Last and then Good to Great, which talks deeply about core purpose, core values, and the BHAG. I'd recommend people check those out. That stuff is timeless and will serve you well to revisit, even if you've studied it extensively in the past.

Speaker 1:

Do you think there would be power, in the future, in several associations joining forces if they serve similar sectors?

Speaker 2:

Yeah, I think consolidation is a natural function of a maturing market of any type, whether it's a nonprofit space or not.

Speaker 2:

I know that folks in our CEO mastermind on AI are talking about this from time to time, and how it affects their big-picture strategy. I've heard a lot of people talk about the possibility of merging with adjacent associations. I also think there are opportunities for associations to acquire for-profit businesses that are complementary to what they do. An association might say, oh, we're sitting on X millions of dollars of reserves, and typically you're doing basically nothing with that; it's your rainy-day fund, and you might have eight years of runway in your reserves. You might think about strategically investing some of that in startups in your space, building a portfolio to drive innovation, or maybe acquiring companies with lines of business or services complementary to your own. There are a lot of creative ways M&A might be helpful in the association space, because scale can drive more opportunity. Not always, but if you have more resources available and you're growing faster, you might be able to get ahead of that curve, or at least keep up with it.

Speaker 1:

That's really interesting. I'm hoping you dive more into all of this at digitalNow this year. I think it's a great conversation to be had.

Speaker 2:

I'm definitely planning on touching on a lot of this. Of course, you know, that's three months and 10 days away, I guess, or three months and 18 days, something like that. I need an AI to help me calculate that. But I'll probably be fine-tuning that slide presentation up until pretty much the last minute, because things are changing so fast.

Speaker 1:

Absolutely. Well, Amith, thank you so much for the conversation today. We mentioned digitalNow a few times; if you want more information on that conference, you can go to www.digitalnowconference.com. We'll also include that in the show notes, and we will see all of you next week.

Speaker 2:

Thanks for tuning into Sidecar Sync this week. Looking to dive deeper? Download your free copy of our new book, Ascend: Unlocking the Power of AI for Associations, at ascendbook.org. It's packed with insights to power your association's journey with AI. And remember, Sidecar is here with more resources, from webinars to boot camps, to help you stay ahead in the association world. We'll catch you in the next episode. Until then, keep learning, keep growing, and keep disrupting.