Sidecar Sync
Welcome to Sidecar Sync: Your Weekly Dose of Innovation for Associations. Hosted by Amith Nagarajan and Mallory Mejias, this podcast is your definitive source for the latest news, insights, and trends in the association world with a special emphasis on Artificial Intelligence (AI) and its pivotal role in shaping the future. Each week, we delve into the most pressing topics, spotlighting the transformative role of emerging technologies and their profound impact on associations. With a commitment to cutting through the noise, Sidecar Sync offers listeners clear, informed discussions, expert perspectives, and a deep dive into the challenges and opportunities facing associations today. Whether you're an association professional, tech enthusiast, or just keen on staying updated, Sidecar Sync ensures you're always ahead of the curve. Join us for enlightening conversations and a fresh take on the ever-evolving world of associations.
Breaking Math with Google’s AlphaEvolve | 84
This week, Amith Nagarajan and Mallory Mejias dive deep into Google's groundbreaking AI breakthrough—AlphaEvolve. In their first-ever single-topic episode, the duo unpacks why this agent’s ability to autonomously invent and refine algorithms marks a pivotal moment in AI history. From shattering a 56-year-old math record to its potential to reshape how associations tackle "unsolvable" problems, Amith and Mallory explore what AlphaEvolve means for science, business, and the association world. Plus, they discuss how associations can remain relevant in the face of rapid AI advancement—even when the tech seems impossibly complex.
📽 Supporting video by https://www.youtube.com/@Pourya_Kordi:
https://youtu.be/PcjhXA2bsh0?si=WuQeA3BuDAAROHIp
🤖 Join the AI Mastermind: https://sidecar.ai/association-ai-mastermind
💡 Find out more about Sidecar’s CESSE Partnership - https://shorturl.at/LpEYb
🔎 Check out Sidecar's AI Learning Hub and get your Association AI Professional (AAiP) certification:
https://learn.sidecar.ai
📕 Download ‘Ascend 2nd Edition: Unlocking the Power of AI for Associations’ for FREE
https://sidecar.ai/ai
📅 Find out more about digitalNow 2025 and register now:
https://digitalnow.sidecar.ai/
🛠 AI Tools and Resources Mentioned in This Episode:
AlphaEvolve ➡ https://shorturl.at/3vzXm
Gemini ➡ https://deepmind.google/technologies/gemini
Claude by Anthropic ➡ https://www.anthropic.com/index/claude
Perplexity AI ➡ https://www.perplexity.ai/
Betty ➡ https://www.meetbetty.ai/
Isomorphic Labs ➡ https://www.isomorphiclabs.com/
🎉 More from Today’s Sponsors:
CDS Global https://www.cds-global.com/
VideoRequest https://videorequest.io/
🚀 Sidecar on LinkedIn
https://www.linkedin.com/company/sidecar-global/
👍 Like & Subscribe!
https://x.com/sidecarglobal
https://www.youtube.com/@SidecarSync
https://sidecar.ai/
Amith Nagarajan is the Chairman of Blue Cypress https://BlueCypress.io, a family of purpose-driven companies and proud practitioners of Conscious Capitalism. The Blue Cypress companies focus on helping associations, non-profits, and other purpose-driven organizations achieve long-term success. Amith is also an active early-stage investor in B2B SaaS companies. He’s had the good fortune of nearly three decades of success as an entrepreneur and enjoys helping others in their journey.
📣 Follow Amith:
https://linkedin.com/amithnagarajan
Mallory Mejias is the Manager at Sidecar, and she's passionate about creating opportunities for association professionals to learn, grow, and better serve their members using artificial intelligence. She enjoys blending creativity and innovation to produce fresh, meaningful content for the association space.
📣 Follow Mallory:
https://linkedin.com/mallorymejias
This AI is better than any member I can talk to. It's more knowledgeable, smarter, better answers, obviously faster, and that is not what they thought it would do. Even the people who are signing up and paying for a product like this, they're blown away consistently. Welcome to Sidecar Sync, your weekly dose of innovation. If you're looking for the latest news, insights and developments in the association world, especially those driven by artificial intelligence, you're in the right place. We cut through the noise to bring you the most relevant updates, with a keen focus on how AI and other emerging technologies are shaping the future. No fluff, just facts and informed discussions. I'm Amith Nagarajan, Chairman of Blue Cypress, and I'm your host. Greetings, Sidecar Sync listeners. We're excited to be back with you for an amazing episode all about artificial intelligence, and we will be talking about how this particular topic actually is super relevant to the world of associations, and you'll find out very soon why I introduced today's episode that particular way. My name is Amith Nagarajan.
Speaker 2:And my name is Mallory Mejias.
Speaker 1:And we're your hosts, and it is an exciting time in the world, isn't it, mallory? And with AI in particular, it's just kind of nuts. So I've been excited about this topic. I remember reading the news and we're going to get into that in a minute but a lot of people are going to be wondering why we're covering something that seems like a deep science topic, which we tend to do from time to time. We'll digress from the world of like hey, here's how you use this particular tool to big picture stuff, but this one definitely is notable, I think, in fact, so much so that I think it's our only topic for today, right?
Speaker 2:It is our only topic, and I was just reflecting with Amith before we started recording. I don't think we've ever done an episode on a single topic. We've done some evergreen episodes, you know, about getting your board on board with AI or data or foundations of AI, but this is our first AI news item episode that's just one topic. That's how important we believe this is.
Speaker 1:Yep.
Speaker 2:Amith, how have you been doing this week?
Speaker 1:I have been really good. I've been looking forward to this. I've been diving into a whole bunch of project work with our teams in different areas, which actually some of our project work is kind of related to this. That'll be fun to talk about. But just all around good here in New Orleans. This time of the year, as you know from living here, and it's probably similar in Atlanta, it's just starting to get really, really nasty outside. It's like 95 degrees or pretty close, and 100% humidity, so I'm not enjoying that, but otherwise I'm doing great. How about yourself?
Speaker 2:Honestly, I hate to tell you, Amith, but the weather's really nice here in Atlanta. It has not been that hot. I mean, we've been getting like some peak, maybe 83 Fahrenheit days, but overall the weather's been really nice. I've been getting outdoors a ton, and my husband and I actually both just got new bikes. So we are really excited to get out and about and ride those.
Speaker 1:Did you get e-bikes or no motor?
Speaker 2:It's funny. It's funny that you ask that, because my husband really, really wanted to get e-bikes, which are just kind of, in my opinion, outrageously expensive, so we didn't. We couldn't have one of us have an e-bike and one of us not, which is what he proposed initially. I was like, I'll be behind you, like, wait up. So we ended up getting, I think they're hybrid bikes, so road bikes slash gravel bikes. We had mountain bikes before, but we never went mountain biking, so we decided we needed just a more standard bike, I guess.
Speaker 1:That'll be fun. I love cycling, not so much here in New Orleans, although you can do a little bit of cycling, primarily here up on the river levee and also the lake. You can do that because that's kind of like there's nobody driving there and it's actually pretty smooth. But Atlanta, I imagine you have some pretty good trails and pathways to go ride on.
Speaker 2:And some pretty good hills, and that's how we realized. We went to this one trail, probably last summer, with mountain bikes and we couldn't get up the hills. We'd have to get off the bikes, walk them up. Everybody on their road bikes is passing us, so we decided to make that change, but very excited to start using those and get outside.
Speaker 1:E-bikes are pretty impressive, I'd say. Like my wife has one up in Utah, and when we go mountain biking it's just kind of amazing. I actually don't particularly like riding it most of the time because it's heavy, and with mountain biking you want to be able to kind of move around a lot and, you know, have a lot of maneuverability, but it sure is fun going up hills.
Speaker 2:Yeah, oh yeah, it's much, much easier. Amith, I wanted to talk about, on the pod, this new CESSE-Sidecar partnership that's been announced, if you can share a little bit about that.
Speaker 1:For sure. So CESSE is all the STEM associations, the STEM societies. They have a wonderful community of people, particularly in that sub-niche, and it's funny because, for people that aren't familiar with the association market, just having a handful of associations for associations, like the national body with ASAE, and then regional bodies like Chicago Forum or any of the state SAEs, people are surprised that there's associations for associations even at that level. But there's, of course, even more associations for associations that are specialty based. So there's CESSE, and there's NABE for the bar executives, there's AAMSE for the medical society executives. So there's a lot of these wonderful organizations. What's cool about them is they hyper-focus on content and ideas in their particular communities.
Speaker 1:So in the STEM societies, in the world of CESSE, the needs are similar to other associations, of course. Most associations share certain commonalities, but STEM societies oftentimes have much deeper needs in the area of scientific and technical publishing and content, amongst other areas as well. They tend to have lots of meetings that are, you know, formal in nature, scientific proceedings. There's those kinds of things that are going on. So their requirements and their focus there, both in terms of business and technology, are concentrated, and that's why these types of groups form: because they want to talk about the issues most relevant to them. So we're super excited to partner with CESSE. They are the premier body that exists in this space for STEM societies. The partnership with them is awesome. It's a member benefit to get a discount on the Sidecar Learning Hub and we couldn't be more pleased to partner with those guys. So very excited to roll that out.
Speaker 2:Awesome, and so just to be clear here, the Sidecar AI Learning Hub comes to their association members at a discounted rate, the full AI Learning Hub, right?
Speaker 1:Yeah, the full AI Learning Hub is available to them, as well as the prompting course. Anything in the Sidecar AI Learning Hub, which includes those two options, or actually three options, because there's the Learning Hub without the certification and there's the Learning Hub with certification. So CESSE members, through their membership, have a new benefit, which is a discount on all the Sidecar products.
Speaker 2:Very cool, and I feel like that was a really good segue into our topic for today, which is very heavy on the science. Today we're talking about Google's AlphaEvolve. Amith, I'm really glad you flagged this for me, because you sent me the LinkedIn post and, honestly, you probably feel the same way often, but when I see such an inundation of information around AI all the time, pretty much every day, it can be hard to decipher what's really of an impressive magnitude, even though all of it is, and what I need to pay attention to and what I don't. So I feel like I might have just glazed over this post until you sent it to me and said, all right, I think this is full-episode quality.
Speaker 1:I think I first learned about this through YouTube. Maybe, I don't think it was LinkedIn, I think it was YouTube, and YouTube's algorithm for recommendations knows me quite well. It sends me a lot of really nerdy stuff like this, and you know, this particular topic, as we get into it, I think might initially sound both intimidating, perhaps, but also not particularly relevant to a lot of association leaders. So I can't wait to get into that, but it just caught my attention because I think it represents a significant new capability that has thus far, as far as I know, only been hypothesized but hasn't yet been proven by anyone else. So let's get into it.
Speaker 2:Yep. AlphaEvolve is a cutting-edge AI agent developed by Google DeepMind, designed to autonomously invent, test and refine computer algorithms for a broad range of complex real-world and scientific challenges. AlphaEvolve leverages the power of Google's Gemini large language models, combined with an evolutionary framework, to go beyond simply generating code. It actively discovers new algorithms, optimizes existing systems and solves mathematical problems that have long eluded human experts. So to get into a little bit of how it works, it uses two versions of Gemini: Gemini Flash, which rapidly explores a vast space of possible solutions, and then Gemini Pro, which focuses on deeper, more nuanced analysis. And here's an overview of how the process works. So first a user provides a task and an evaluation function, so basically the metric for success for that task. Then Gemini Flash rapidly generates multiple candidate solutions. The next step would be Gemini Pro analyzes and improves the most promising candidates, and then automated evaluators rigorously test each solution against the defined metric that you provided at the beginning. Finally, the best performing solutions are selected, mutated and recombined in successive generations, evolving toward optimal or novel algorithms. This method allows AlphaEvolve to autonomously refine not just short scripts but entire code bases and system-level solutions, producing readable and auditable code that engineers can easily review and implement.
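To make the loop described above a bit more concrete, here is a minimal, hypothetical sketch of a generate-evaluate-evolve cycle in Python. The `fast_model`, `strong_model`, and `evaluate` objects are stand-ins for Gemini Flash, Gemini Pro, and the user-supplied success metric; this is not Google's actual AlphaEvolve code, just the general shape of the technique.

```python
import random

def evolve(task, evaluate, fast_model, strong_model,
           generations=20, population_size=16, survivors=4):
    # 1. The fast model drafts an initial population of candidate programs.
    population = [fast_model.propose(task) for _ in range(population_size)]

    for _ in range(generations):
        # 2. Automated evaluators score every candidate against the metric.
        ranked = sorted(population, key=evaluate, reverse=True)

        # 3. Keep the most promising candidates as parents.
        parents = ranked[:survivors]

        # 4. The stronger model mutates and recombines parents into children.
        children = [
            strong_model.refine(task,
                                parent=random.choice(parents),
                                crossover=random.choice(parents))
            for _ in range(population_size - survivors)
        ]
        population = parents + children

    # Return the best-scoring program after the final generation.
    return max(population, key=evaluate)
```

The loop itself is ordinary engineering; the novelty is letting language models propose the mutations while an automated evaluator, not a human, decides what survives.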
Speaker 2:I want to talk a little bit about the discovery that has been getting a lot of press with AlphaEvolve. So it broke a 56-year-old mathematical record by discovering a more efficient algorithm for multiplying four-by-four matrices, reducing the required number of scalar multiplications from 49, which was the previous best, to 48. This surpassed the result set by German mathematician Strassen in 1969, a milestone in computational mathematics. So at a glance that might not sound the most impressive. We'll break that down just a little bit. But Amith shared with me this great YouTube clip, and I want to insert just a piece of that in the podcast because he gives a quick overview of kind of how AlphaEvolve did this and why it is so impressive. So I'm going to play that now.
Speaker 3:Most of the time entries of your matrix are going to be real numbers. But AlphaEvolve realized if we use complex numbers, which have real numbers as a subset, we can get some magical cancellations and reduce the number of multiplications to 48. A lot of researchers would probably assume using complex numbers would make the problem more difficult, but AlphaEvolve somehow realized that's a good approach. 4 by 4 is very small but, just as a reminder, we can do this recursively for larger matrices. In fact, the larger the matrix, the bigger the effect, because now, instead of 49 times 49, you have 48 times 48 for 8 by 8 matrices and the gap keeps growing the bigger the matrix.
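To see why that gap compounds, here is a rough back-of-the-envelope illustration (not AlphaEvolve's output): applying a 4x4 scheme recursively, a 4^k x 4^k matrix product costs 49^k scalar multiplications with the 1969-era method versus 48^k with the new one, so the percentage saved grows with every level of recursion.

```python
# How the 49-vs-48 difference compounds under k levels of 4x4 blocking,
# which covers a (4**k) x (4**k) matrix product.
for k in range(1, 6):
    n = 4 ** k
    old, new = 49 ** k, 48 ** k
    saving = 100 * (1 - new / old)
    print(f"{n:>4} x {n:<4}   49^{k} = {old:>12,}   48^{k} = {new:>12,}   saving = {saving:.1f}%")
```

For k = 1 the saving is about 2%, and by k = 5 (a 1024 x 1024 product) it is closer to 10%, which is the "gap keeps growing" effect described in the clip.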
Speaker 2:Beyond breaking this 56-year-old mathematical record, in a set of over 50 challenging mathematical problems, AlphaEvolve matched state-of-the-art solutions in about 75% of cases and improved upon state-of-the-art solutions in roughly 20% of cases. So, Amith, I often do this on the podcast. I like to quote you when you share something with me, because I feel like I can pull a lot of insight from that. You shared a link, I think it was to the LinkedIn post, and you said, "absolutely stunning and 100% predictable." I've got to ask. So explain to our listeners, what did you mean by that?
Speaker 1:Well, you know, I think people that are deep in this stuff have been expecting systems to have this capability, which I would broadly categorize as an early exploration of the concept of recursive self-improvement, which is where a system is able to improve upon itself, again and again, which is effectively what's happening here. And so it's predictable, because there's a lot of people working towards this. The folks at DeepMind tend to focus on these types of unsolvable problems. I find a lot of their work to be just incredibly inspirational, and so 100% predictable is because we know that we have a lot of the core elements to do this. Still, even though you expect it to happen, it's stunning to see. So that's where I was coming from. I was just really excited about this.
Speaker 2:So you talked about recursive self-improvement, that was the phrase you said, right? And the ability for AI to now discover novel algorithms, which is something that, as far as we know, has not been shown before. Can you kind of provide some more tangible examples of situations where algorithm discoveries have changed the world?
Speaker 1:Well, I mean, algorithmic improvements have helped us do everything over time, from, you know, the earliest stages to where we're figuring out how to do very complex things now. So an algorithm is essentially, it's a complex, fancy word for saying a step-by-step way of solving a type of problem. So, you know, if we know how to solve more problems, and more and more complex problems, and then if we come up with smarter, better, faster, more efficient ways of solving the same problem, there's value there too. So we might know how to do matrix multiplication. We've known how to do that for a long time now, but we know how to do it in a way that's pretty compute-intensive, right? So if we can improve that by some degree of efficiency, in this case, you know, one out of 49 doesn't sound like a massive increase, but as these matrices become larger and larger, the percentage of efficiency increased by this new algorithm actually goes up quite a bit. But the point is that even that level, whatever that percentage is, it's very small, is actually a stunning impact if you think about global energy consumption by AI systems, which heavily, heavily rely on matrix multiplication. That's the core of what inference is doing. That's the core of a lot of what happens in training. If you can make that slightly more efficient, that's a pretty big deal. It's both good for performance, but it's also good for energy and cost. But to me, the examples are actually literally anything you can imagine that's in this category of unsolvable.
Speaker 1:So, Mallory, I'd remind some of our listeners to go back in time in the Sidecar Sync to our episode on material science, and I'm trying to remember what it was called specifically. But there was a material science episode; AlphaFold was the bio-related one, but very similar. In fact, we talked about the materials genome in that episode. We'll have to go back and look it up because my memory is failing me on this, but in that conversation I actually think it was also Google DeepMind that had a materials AI. It was discovering novel materials and this was incredibly interesting because they were actually able to physically fabricate many of these materials and test their properties that were hypothesized by the AI and prove that the AI was correct about the vast majority of them. So essentially, we have there one example. You mentioned AlphaFold. AlphaFold has gone through several of its own evolutions, in AlphaFold's case by humans evolving it, but the most recent AlphaFold 3, for example, is being commercialized by a lot of different people. The people over at Isomorphic Labs, which is another branch of Google that Demis Hassabis also leads, they're doing novel drug discovery with AlphaFold 3. And they've built a layer of software on top of that. I'm sure the concepts in AlphaEvolve are bleeding over there and back and forth.
Speaker 1:So let me zoom out for a minute and try to explain why I think this is a big deal. If a computer system can be given an arbitrarily complex problem and told, improve upon this, make this better, solve this problem for me that I don't know how to solve, that's very different than what we've been doing, even with the state-of-the-art AI systems we have in our hands, which, in many respects, are still effectively required to have somewhere in their training set something that essentially contains the answer. So thus far, this is not 100% true what I'm saying, by the way, but it effectively is. Thus far, all of our AI systems are capable of doing anything that's in their training set, but not really generalizing in a broad sense and being able to create new solutions that didn't exist. There were 50 challenging problems that they ran through it, right, and in 75% of cases, it matched the current best algorithm known to humans, invented by humans. Right, but those known algorithms were not included in the training set, so it was able to essentially create new algorithms rather than use prior information. So that's by itself very impressive. And then, in the 20% of the scenarios where it created new algorithms that were better, that's quite fascinating. Now they did this through a system. This is not a new model. This is the smart use of engineering on top of the underlying models. They actually used older versions of Gemini, Gemini Flash and Pro 2.0, which are both excellent models, but they're not even the latest 2.5. So when they do this again with Gemini 2.5, which is a reasoning model that has its own level of increased intellect, it'll be quite fascinating to see what happens.
Speaker 1:So to me, it's all about the stuff that we don't know how to solve, right? Like for those of you on YouTube, and those who have heard me talk elsewhere, I have this flywheel in my background all the time and it's there to remind myself as much as share it with anyone else. But the very first thing is that we want to seek and destroy unsolvable problems in the association market. That, to us, is what drives us to move forward as a family of companies: to find these so-called unsolvable problems, and then we want to go figure out how to actually make them solvable, right? So that's the place that I love to spend time thinking about, because that's where you can really unlock new business models, new sustainable sources of revenue for the association community. So we'll get more into that shortly, I'm sure, but there will be ways to apply this idea for lots of organizations, not just people that are deep in science and math.
Speaker 2:I love the terminology "seek and destroy." I'm guessing, did you come up with that phrase?
Speaker 1:That's got my fingerprints on it. Yeah, we had some lively debates in our planning meeting when we were coming up with that, but eventually we got everybody on board with it. But yeah, I like visually captivating imagery where we're like, you know what? That's what we're going to go do. We're going to seek them out and we're going to just crush these unsolvable problems.
Speaker 2:Yeah, it definitely makes you feel something, so I like that as well. Amith, you talked about the idea of creating novel algorithms. I think for some of our listeners, me included, when we use a large language model, it can seem like AI is coming up with novel, quote unquote, solutions. For example, if I give it some ideas we have about digitalNow, it might come up with this theme that I had never thought of. But what you're saying is, somewhere in its training that information exists; it wasn't creating something truly novel.
Speaker 1:Yeah, I mean the models we have now, especially the reasoning models, are able to synthesize new ideas in a sense, in that they're able to combine ideas from other ideas.
Speaker 1:It's not that there has to be, like, a copy of a piece of text that literally answers your question. In fact, even going back to the earliest language models, that's not what they did. They were creating new answers, but if they hadn't been trained on something, it would be extraordinarily unlikely for them to come up with something truly unique. Now these models have gotten better, partly also because they have access to tools where they can write code and run it. They can search the web. So that was, of course, originally the domain of just Perplexity and a couple of other early innovators, but now every major AI tool has built-in web search. So, you know, they can go discover new information that's outside of their true training set. So that statement is evolving as we speak.
Speaker 1:But ultimately, the way I think about it is that so far these models, they don't work the way our brains work in terms of this creative space. They have very limited ability to, you know, really ponder the problem and kind of go through the creative process the way, you know, a lot of people do in order to solve novel problems. Right, you think about, like, what's the journey of a scientist that's trying to find a cure for something? It's not really a linear path, right, it's always all over the place.
Speaker 1:You hear all these stories about people waking up in the middle of the night and in their dream state they thought of this. Or, you know, they're walking their dog and they saw something in nature that inspired them to think of a new solution, and on and on and on. You know, the apple falling on Newton's head, right, that kind of stuff. So, you know, AI doesn't really have that experiential type of component and therefore it's not as creative yet. That doesn't mean it won't be, but at the moment, you know, what we use day to day in Claude and ChatGPT is not that level of creativity, and creativity is the ingredient that drives any kind of new discovery, whether it's in poetry or in science.
Speaker 2:If you all have been listening to the podcast for a while, Amith, you've definitely said multiple times that even if the AI we had today doesn't get better at all, we will continue to see discoveries or new applications over the next, probably, I don't know, five to 10 years. I feel like this is a prime example of that, right? Because it's not a new model, it's simply engineering that was put together with the models we already had, right?
Speaker 1:Totally, yeah. And you know, I'll give you a quick example of that in one of our AI agents. That's a tool called Skip, which does, you know, data science and analytics and stuff like that through a conversational interface. Up until the most recent version of Skip, which we're about to release, we would run a request through our agentic pipeline once. So essentially what would happen is you go to Skip and say, hey, I need to analyze my member data to figure out retention by geography and see if there's a correlation between member retention by geography and age, or, you know, I'm just making stuff up, right, just any arbitrary question. Skip gets to work, looks at your data, figures out what to go pull in, starts writing code, puts together a report, sends it back to you. So that happened once.
Speaker 1:Now, with AI costs dropping so much and compute being more abundant, we're actually running some of these components of that agentic pipeline dozens of times in parallel and then picking the best answer. So the final step that Skip goes through in creating a report is to actually generate fairly complex code. It can be, in many cases, thousands or even tens of thousands of lines of code. And rather than doing that once, we say, hey, do it three, five, 10, 50 times in parallel, and then we have another AI process that's capable of evaluating the output and picking the best answer, which in some ways is similar to what AlphaEvolve is doing.
Speaker 1:Now, our stuff is not nearly as advanced as what they're doing in terms of testing different algorithms, because what we're doing doesn't require novel algorithmic discovery, but in a way it's a similar process, because we're basically running a lot of these processes over and over and then iteratively improving within the agent. It's not in the model layer, but it's in the agentic layer. To me, that's why in past episodes I've said many times, the terminology is interesting, but not necessarily that important, because what happens in the model versus what happens in the agent layer or the application layer, to the user, does it work or not is the question. So those are some things I think I'd point people back to.
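For readers who like to see the pattern in code, the best-of-N approach described here, fan out many generations of the same request and let a separate evaluator pick the winner, looks roughly like the sketch below. The function bodies are hypothetical toy stand-ins, not Skip's real internals.

```python
import random
from concurrent.futures import ThreadPoolExecutor

def generate_report_code(question: str, seed: int) -> str:
    # Stand-in for one LLM call that writes candidate analysis code.
    return f"# candidate {seed} for: {question}"

def score_report(question: str, code: str) -> float:
    # Stand-in for a second AI pass that grades a candidate's output.
    return random.random()

def best_of_n(question: str, n: int = 10) -> str:
    # Fan out: n independent generations of the same request, in parallel.
    with ThreadPoolExecutor(max_workers=n) as pool:
        candidates = list(pool.map(lambda s: generate_report_code(question, s),
                                   range(n)))
    # Fan in: keep whichever candidate the evaluator scores highest.
    return max(candidates, key=lambda c: score_report(question, c))

print(best_of_n("member retention by geography and age", n=5))
```

The same fan-out/fan-in shape applies whether the candidates are reports, SQL queries, or, in AlphaEvolve's case, whole algorithms.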
Speaker 2:Yep. I want to start zooming out just a bit and really talk about what makes AlphaEvolve unique, at least what I found in my research. So the first, which we've started to touch on, is that general-purpose capability. Unlike previous domain-specific AI systems, we just talked about AlphaFold, that was specifically for protein folding, AlphaEvolve is a general-purpose algorithm discovery agent, so it's capable of tackling a wide variety of problems: computing, mathematics, engineering. I'm sure we can even extrapolate further than that. The second piece of what's unique about it is its evolutionary search. So the evolutionary approach, combined with automated evaluators, enables it to autonomously explore and optimize the solution space far beyond what's possible right now with traditional AI code generation tools. And then human-readable outputs. So the system produces clear, structured and auditable code, making it practical for real-world deployment and human collaboration. Amith, of these three things that I mentioned, so human-readable output, general-purpose capability and evolutionary search, can you kind of talk about each of these unique components and maybe we'll get into how these might apply to associations?
Speaker 1:Yeah, totally. I think each of those is really important. I'll actually start with the last one first, because it's probably the easiest one to describe. I think it's really important that AI systems communicate in natural language, aka language natural to us humans, as opposed to some kind of funky computer communication mechanism, because that makes it not only interpretable and discoverable, but also it's something we can, you know, keep up with, right? As opposed to, computers could probably find a much more efficient way to communicate, because our natural languages are designed for our brains and computers can do things differently. But that's important. Not all systems of this type necessarily will have to work this way, but I think it's really important to have human-readable output and human-readable steps along the way.
Speaker 1:One of the leaders in this, and what I'm referring to kind of broadly is this category called interpretability, which is a really big, important dimension of AI research, is actually Anthropic, who we've talked about a number of times in this pod. They're the makers of the Claude AI system. Those guys are really, really good in their research efforts, and their emphasis on interpretability and human-readable output is a big thing that those guys and many others emphasize, because that's like the side of it we can see. Those guys go into the model itself to understand what's happening in the actual neural network, and that's super interesting, but a little bit off topic. So, in terms of general-purpose capability, we've touched on that in the first part of our discussion, Mallory, where this is not just about mathematics, certainly not just about matrix multiplication, but it can be about anything.
Speaker 1:So imagine you had access to a system like this in your association. You said, hey, I need to improve member retention, go figure it out. And so what are some of the common playbooks that people would pull up and say? Well, we should run an engagement campaign. Let's try to figure out how to get people to more of our events, because we know there's a correlation between people who attend events and better retention. Now, is that a correlation or is it causation, meaning, do people who kind of come to the events renew because it's a byproduct of being at the event, or is it some other effect? Right, and we don't necessarily know that, but we might think, well, that's one playbook, or one playbook theme we know, is drive event attendance to try to drive retention.
Speaker 1:What about sending out better content? That's another common one. What about just communicating the value of membership that they've received? A lot of people forget about all the things they do with the association. These are, like, in our brains, common association ideas. But what about just coming up with other ideas, right? So what if we had a system that could explore the space around member retention automatically, hypothesize a bunch of possible solutions and then come up with, hey, here are 10 different things you could go test, and then help you actually go test these novel ideas? Some of them might not be very good, some of them might be at the level of your current tactics and some might be better. So that's kind of applying the concept from the world of, let's say, math or engineering to a domain in business.
Speaker 1:The evolutionary search part is a really important piece to come back to and make sure that's clear. So how does evolution work? Over a very long period of time for biological species, stuff happens in our environment. Largely it's various forms of radiation that cause us to effectively have mutations in our DNA and over time that causes slight variations to occur in one generation to the next, to the next to the next, and some of those adaptations are helpful and some are not. So when those adaptations are helpful, that particular branch of those generations tends to thrive and the other branches tend to not thrive right. And so over a period of time that happens over thousands and thousands or millions of generations, and you have a lot of evolution, a lot of change that occurs.
Speaker 1:So in this whole evolutionary computing category, which is very closely aligned with AI, but it's its own branch of computing and computer science research, people have been exploring this space for a long time and pre-AI, or pre-modern AI, it was a very slow process to do this. But now, with AI coming up with thousands of candidate algorithms, the evolutionary piece tied to it, to say, okay, what if we mutate these algorithms a little bit? What if we change the approach to this part of the algorithm or this part of the algorithm, making these small tweaks, rather than being caused by environmental factors, they're caused by intentional modification or intentional mutation. And then we test the offspring, the next generation of the algorithm. So you take an algorithm and you tweak it a little bit, and then you test it and tweak it a little bit and you test it.
Speaker 1:Now, in the context of a business domain problem, we're a little bit a ways from being able to actually do this, because to actually test these different ideas, you don't want to really test all this stuff on your members, right, and say, hey, what would happen if we send them all this?
Speaker 1:Now you could A, B test things and you can definitely get close to empirical results or actual empirical results with subsets of your audience, and that's very helpful for things you think are likely to work.
Speaker 1:But just to like, have the craziest possible ideas and test them on your live members, you're probably not going to do that anytime soon.
Speaker 1:So if you think about some of our conversations over time in this pod, the idea of digital twins or simulation systems: what if you had a digital twin for your membership, and your membership, through all the data and all the attributes you have across every system and every interaction you've ever had, was modeled into an effective digital twin of your association's membership, which essentially means it's a simulation of how your, you know, 10, 20, 30,000 members would behave to different stimuli, down to the individual node and individual members?
Speaker 1:So if this message goes across, this is how this member would react. And then what would happen to this other member, you know, because there's all sorts of chains of things that happen based on social and so on and so forth. Right, so if you had that kind of a digital twin of your membership and then you had an evolutionary discovery algorithm tied into a system like this, that could test out all sorts of different ideas, like how can we improve engagement, how can we get more people to events, how can we drive increased renewals, and I'm picking, frankly, what will eventually be seen probably as fairly pedestrian problems, but they're what occupy our minds today, if those are our issues. But if you could test all these different hypotheses from an evolutionary algorithm against the digital twin, it's not 100% real, but it could get pretty darn close to giving you a good prediction about which algorithms might be good for you and which ones might not be good for you.
Speaker 2:Wow, I laughed when you said digital twin, Amith, because I think you and I are evolving into the same person slowly, because as you were talking, I'm like, oh my gosh, what if someone had a digital twin of their association? And then you said it and I said, wow, okay, we're obviously very in sync. The example you gave was a bit more theoretical.
Speaker 2:In actuality, maybe not right in this moment, but with AlphaEvolve, let's say, six to 12 months from now, is that something an association could use it for, like if they gave it access to some data source and could run all these theoretical outcomes?
Speaker 1:I think this is probably more of like a by-the-end-of-the-decade kind of thing, as opposed to six to 12 months. I think the amount of compute you need and the level of sophistication you need will be different. You know, let me actually zoom in on a little bit different version of the problem I started to describe. What if you wanted to make a decision on where to have your next annual conference and you have five different choices, you know, in terms of contenders, and you're having a hard time making the decision? Well, you know, that's something you could say, well, let's test that through this process with the digital twin. Let's simulate how that's going to happen. And then, once you pick a site, you might say, well, what's the best marketing campaign for this conference? And there might be essentially an infinite number of variations of how the marketing campaign might work in terms of timing and messaging and sequencing, and, you know, channels and all this kind of stuff. You could have essentially an evolutionary algorithm with AI come up with a whole bunch of different hypotheses for what's the best campaign and then test them on your digital twin and see which one performs the best. And then the feedback loop, of course, would be, okay, you pick one or two of them to run against your actual, real population. You get the continuous loop of feedback from that and that feeds right back into the process. So the algorithm is hypothesizing and testing the outcome against a population of essentially fake or digital versions of each of your members that in aggregate are a digital twin of your association's membership, and then that feedback loop is all synthetic. But then when you get real data based on using the experimental idea, once you think it's good, that further reinforces the cycle. So I think these things are going to be real. But, you know, this is not stuff that's going to happen anytime in the near future, partly because the data prerequisite is where most associations will have a tough time. You know, coming back down to earth for a minute, most associations just have a hard time answering a question like which of my members receive these publications and have taken these courses and have been to our website more than once in the last three months, because that data is stored in their CMS and their LMS and their AMS and they have a hard time getting that result.
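Purely as a thought experiment, the loop described here, evolving candidate campaigns against a synthetic membership and only graduating the winners to the real one, might look something like the toy sketch below. The response model is invented for illustration; a real digital twin would have to be learned from the association's actual data.

```python
import random

def simulate_member(member: dict, campaign: dict) -> bool:
    # Toy response model: more touches help engaged members, up to a fatigue point.
    appeal = campaign["touches"] * member["engagement"] - 0.05 * campaign["touches"] ** 2
    return random.random() < min(max(appeal / 10, 0.01), 0.95)

def predicted_renewal_rate(members: list, campaign: dict) -> float:
    # Score a candidate campaign across the whole synthetic membership.
    return sum(simulate_member(m, campaign) for m in members) / len(members)

def mutate(campaign: dict) -> dict:
    # Small intentional tweak, the "mutation" step of the evolutionary loop.
    return {"touches": max(1, campaign["touches"] + random.choice([-1, 1]))}

members = [{"engagement": random.uniform(0.2, 1.0)} for _ in range(5000)]
campaigns = [{"touches": random.randint(1, 12)} for _ in range(8)]

for _ in range(10):
    ranked = sorted(campaigns, key=lambda c: predicted_renewal_rate(members, c),
                    reverse=True)
    # Keep the top performers, then mutate them to form the next generation.
    campaigns = ranked[:3] + [mutate(random.choice(ranked[:3])) for _ in range(5)]

best = max(campaigns, key=lambda c: predicted_renewal_rate(members, c))
print("best simulated campaign:", best)
```

The feedback loop Amith describes is then just the last mile: run the one or two winning campaigns on a real member segment, feed the observed results back into the twin, and repeat.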
Speaker 1:So people need to solve for these more basic challenges first, then feed into the kind of thing we're talking about. So what we're talking about today, everyone, is not science fiction. It's totally real and it's super exciting. But it's more about helping train your brain on where things are going so that you can anticipate this and say, you know, I remember on the Sidecar Sync back in 2025, I was hearing about this really cool AlphaEvolve thing, and now it's 2027. You just have that noted in your brain somewhere and you're like, we're designing our next system that does this. By then, AI has doubled in power, you know, three, four or five times. So what can you do then? Maybe this is, like, you know, just something you can click a button on a website to get at that point.
Speaker 2:Yeah, wow, absolutely mind blowing.
Speaker 2:You're right, it does sound like science fiction, but it's real life, folks.
Speaker 2:I want to talk about how Google is also using this internally, so not just for mathematical challenges, but they're using it to improve their own business, one example being the company's cluster management system.
Speaker 2:Its new scheduling heuristic improved the utilization of Google's global computing resources by 0.7%, translating into millions of dollars in savings and significant energy resource conservation. The AI has also contributed to hardware design, so optimizing circuits in Google's tensor processing units, or TPUs, and improving the speed of key AI training kernels by up to 23%, resulting in a 1% reduction overall in model training time. So that's one piece, how Google's using it internally, which I think goes back to what you were saying, Amith, about how potentially an association could use this internally as well. I also want to mention that, in terms of the future, DeepMind is developing a user interface for AlphaEvolve and plans to launch an early access program for select academic researchers, with the possibility of broader distribution in the future, which is something we'll keep an eye on. I also want to touch on the downstream effects for associations that perhaps have researchers as part of their membership, how this will obviously change a lot within the realm of research, and how an association can kind of grapple with this, provide education on it, things like that.
Speaker 1:Definitely. I'm glad you brought that up, Mallory, because I think that for an association's internal business operations, this is interesting, but not the most immediately pressing or really available thing, like we were just discussing. I mean, if an association really wanted to test ideas like this and was pretty far along with their data aggregation and AI data platform and all that kind of stuff, they could totally do experiments with technology like this, not AlphaEvolve specifically, but there are ways of emulating these concepts today. But most associations aren't going to be playing with that for internal operations. But, to your point, many associations have communities full of people that are doing scientific research or doing other things that are kind of in a similar vein, and this discovery, this capability, would be extremely relevant for those folks. So I think associations need to be the conduit through which their members, their community, learns and continues to stay informed on what's happening in the world of AI, contextualizing it for each of those communities, just like we attempt to do here for our association friends. So there's an amazing opportunity, I think, for all associations, quite frankly, to be the center of AI learning for their communities, whether it's communities of mathematicians or computer scientists, or if it's communities of teachers or doctors or lawyers or whatever it is.
Speaker 1:The association knows the context of that space probably better than just about anybody, and so to be able to bring AI content into that world and to contextualize it so that it's helpful and it's relevant is an amazing opportunity, both to advance your mission as an association but also to drive revenue, because if you're consistently providing great content on this topic we can tell you from our own experience it generates a lot of interest, which is exciting.
Speaker 1:And then from there, there's opportunities to develop, you know, member value-adds, where you can perhaps provide some content as part of membership, but certainly to develop courses and deliver an incredible amount of value to your members. And so, yeah, I mean, if you think about this topic and you're listening to it and you are anywhere even a degree or two separated from, let's say, a scientific realm, but certainly those that are directly in it, this is absolutely a topic you should make sure your members are aware of. To not do that would be, you know, really problematic for an association in your space, is the way I would put it. So I think it's an incredibly important thing and a big opportunity.
Speaker 2:I want to look at the human element of that, too, Amith, because this is obviously very new. You said it was predictable. I would say for many of our listeners, including me, right, I don't think I would have predicted this per se, but you spend a lot of time thinking in this realm. And so, given that it's new, given that it's a bit hard to understand, it's a bit ethereal, and if you imagine an association of computer scientists or people that we deem highly technical, I could imagine our listeners saying, well, we cannot produce content on something that Google DeepMind has been studying for years and years. What do you say to that if an association feels intimidated by the level of expertise that it seems this requires?
Speaker 1:Well, so a couple of things. First of all, at a minimum, you can make sure they're aware of it, so you can share a link in your newsletter. That doesn't take that much effort at all, right? So that's one thing you can do without trying to assert any level of expertise. You can also partner with people to deliver AI content, people who have deep expertise in AI that can help you develop content, develop learning modules, things like that. We actually do that, by the way; for those of you that aren't aware, we partner with a lot of associations to help them with training for their audiences, where we create content specific to their industry. But there's tons of people who can help you with that. That is an option as well, to use some of your resources to develop that content with outside expertise.
Speaker 1:But the one thing I want to point out about what you said about people who are in technical realms, and this might be doctors, it might be math or science, it could be engineering: a lot of times, the association staff are, in fact, intimidated to even raise a topic that's even somewhat technical, because they assume that their triple-PhD average member is already way up to speed on all that stuff. And my experience has been that the people who are, like, super deep in a particular realm, they might have a conceptual understanding of how AI works, this is even true for computer scientists and particularly even AI researchers in computer science, but they're so deep in their one area that they often don't see the pattern. They don't recognize the macro trends, and a lot of times they assume things that might be based upon something that they researched years ago. So a lot of times the folks that are deeply technical and in particular narrow fields don't see what's happening overall. And that's part of your job, as far as I see it, as an association: to take that somewhat uncomfortable stance of saying, listen, we think it's important that we provide an AI intro course for our engineers, even if they're in a field that's adjacent to computer science or even adjacent to AI directly, right?
Speaker 1:If you just keep doing what you've always done and stayed in a comfortable lane, well, that lane may eventually just end. Maybe that lane goes off the side of a cliff because it's not needed anymore. That lane may not be the place to stay. So sometimes you've got to switch lanes, and this hits home for me because I'm teaching my youngest right now how to drive and she's not a big fan of lane changes. But, you know, I try to make sure that she does plenty of those, because when I get out of the car, when she turns 16 shortly, I want to make sure she's safe.
Speaker 2:So just get her an e-bike, you know, to push it off a little longer.
Speaker 1:Actually I'm pretty excited about her driving.
Speaker 2:It's going to be great for her. To go back to your point, the association has that context, and is arguably perhaps the best entity in the world in terms of context for their profession and industry, at least within their geographic region. So don't doubt yourselves, you have that context. A singular computer scientist may be technical, may understand how AI and neural networks work in theory, but you've got the greater context on kind of all of that put together, if that makes sense.
Speaker 1:Totally. I think that's a really good way to phrase it, and, you know, I think one of the things that we need to do a good job of and I have a hard time with this a lot of times is to zoom back out from time to time and retest our assumptions, retest our beliefs, our views on what is and isn't, what can be and what can't be. We have these, you know, deeply calcified systems of assumptions as people, and they're used as essentially heuristics to give us shortcuts so that we don't have to reprocess everything we think we know. But sometimes those heuristics or shortcuts essentially lead us down a false path, and it may have been true even six months ago, but it's no longer true today, and it's really hard in an environment that's changing this fast. But I find that the people who hold on to those assumptions most dearly are the people who oftentimes are the most intelligent, best educated and deep in some space, because they've always been told they're the smartest person in the room, and so it's your job as the association to say maybe you are, but you haven't paid attention to this, and you can say it in a much nicer way than that if you want, but sometimes it's good to go knock people on the head and say, listen up, you got to look at this, and I do think that's the job of the association. Your job isn't to just kind of like be the bystander and to say, hey, I'm going to hang out here and just give you the same stuff I've always done. Your job is to optimize for the success of your members and to help them do their work, which ultimately influences the well-being of the world, and that's what I find motivating about helping this space. So I don't think it's a matter of well.
Speaker 1:No, our members would never want that, and, you know, our members can't tolerate that, our members would never use that. We've been hearing that a ton over the last couple of years with another one of our companies, Betty. Betty is our knowledge agent, and, you know, that company has worked with close to 100 associations at this point and is growing really fast, and many of them are very, very technical organizations. Tons of medical societies, engineering societies, nursing societies, accounting organizations, people that have extremely deep technical content and subject matter, and consistently, one after the next, after the next, after the next in deployment, they come back and they are told by their most experienced members:
Speaker 1:This is amazing. This AI is better than any member I can talk to. It's more knowledgeable, smarter, better answers, obviously faster. And that is not what they thought it would do. Even the people who are signing up and paying for a product like this, they're blown away consistently. And so, for that assumption set, if you can show that people would use a tool like that to inform the decisions they're making in their field, their expertise area, they most certainly will be open to the idea of the association giving them some insights on AI as well and contextualizing it for their field. So it's a massive opportunity.
Speaker 1:If you don't do it, somebody else will. As an entrepreneur, I look at this and say the brand asset and the cornered resource in terms of data and content that associations have is such a compelling opportunity to build businesses, to build franchises within your world, where you have distribution, where you have relationships, you have content and you have this incredible brand value. To not use that is nuts to me, and it's basically sitting there waiting for you to go capture it. You can generate a lot of revenue from this if you think about your business model in a creative way and you can do an incredible job serving your members.
Speaker 2:I would say you have a knack for predicting things pretty well, or at least on the podcast, I feel like a lot of the things you've predicted have come true in some regard. So I'm curious, looking ahead with AlphaEvolve, what do you expect to see near term, long term, by the end of the decade? What kinds of challenges will we find solutions to? How do you expect to see this play out?
Speaker 1:So what I think is going to happen with this particular technology is everyone's going to replicate the concept and it's going to find its way first into coding tools, so tools like Cursor and Windsurf and Claude Code and all these other tools. They're going to incorporate these concepts. It's going to make code even better, even smarter, better at solving problems that the developers that are training these tools, or guiding these tools, I should say, don't know how to solve. So that is going to be very powerful and I think it's going to have a compounding effect, because where the code goes, everything else follows. You know, that's why coding tools have been such a natural place for these companies to focus on. It's one way to make a lot of money in the world of AI today. It's also super competitive, but it's a direct, you know, use case of all these technologies that is insanely productive. I mean, the things you can do with one engineer now would have required a team of 20 last year at this time, and I'm not exaggerating that. And so if that's, like, one person can do what a team of 500 could have done in a year, you know, that's going to come from this type of improvement. It's not just faster, better models. It's this kind of additional capability, so you can see it there.
Speaker 1:I think people will rapidly adopt this in other specific domains, like branches of medicine or things like that. So I think it's going to be really interesting to think about, like, the number of cases where people say, I don't know how to solve this, I don't know what this issue is that this patient has, or maybe you know what it is but you don't know how to cure it. But maybe somebody else does, right, or maybe there's some novel cure that, you know, an AI can come up with. So I think there's just so much more out there than what we know. It's like, I think most people realize the percentage of discovery that's left to be had in space and even in our own oceans is vast. Right, like what we know about marine biology relative to what is knowable is a tiny fraction, and that's true with AI, certainly. So I think all this exploration is going to get accelerated, which is fundamentally exciting.
Speaker 2:Yep. Well, everybody, thank you for tuning in today. Hopefully you learned a little bit more about Alpha Evolve and how it might pertain to your association sooner than you think. We will see you all next week.
Speaker 1:Thanks for tuning in to Sidecar Sync this week. Looking to dive deeper? Download your free copy of our new book, Ascend: Unlocking the Power of AI for Associations, at ascendbook.org. It's packed with insights to power your association's journey with AI. And remember, Sidecar is here with more resources, from webinars to bootcamps, to help you stay ahead in the association world. We'll catch you in the next episode. Until then, keep learning, keep growing and keep disrupting.