Sidecar Sync

Storytelling with Lumi AI, Google DeepMind's Mathematical AI Models, and Deepfakes in Politics | 41

Amith Nagarajan and Mallory Mejias Episode 41

Send us a text

In this episode of Sidecar Sync, Amith and Mallory explore a range of fascinating AI news and developments. They discuss the launch of Lumi, an AI-driven storytelling platform by Colin Kaepernick, Google's DeepMind achieving high-level mathematical problem-solving, and the ethical challenges posed by deepfakes in politics, exemplified by Elon Musk's recent controversy. Amith and Mallory delve into the nuances of AI reasoning, the future of AI in science, and the importance of critical thinking in an age of digital misinformation.

📕 Download ‘Ascend’ 2nd Edition for FREE
https://sidecarglobal.com/ai

🛠 AI Tools and Resources Mentioned in This Episode:
ChatGPT ➡ https://openai.com/chatgpt
Claude 3.5 ➡ https://www.anthropic.com
Otter.ai ➡ https://otter.ai
Lumi ➡ https://www.lumi.ai
AlphaProof and AlphaGeometry ➡ https://deepmind.com
Eleven Labs ➡ https://elevenlabs.io

Chapters:

00:00 - Introduction
02:09 - ‘Ascend’ 2nd Edition Release
08:48 - AI's Role in Accelerating Book Projects
13:54 - Lumi: AI for Storytelling
17:24 - The Power of Storytelling in Associations
25:17 - Google's DeepMind Achievements
32:06 - Understanding AI Reasoning
37:44 - The Future of AI and Generalization
41:08 - Elon Musk and Deepfake Controversy
46:18 - Ethical Concerns with Deepfakes
51:49 - Business Use Cases for AI Avatars
54:15 - Closing Remarks and Future Resources

🚀 Follow Sidecar on LinkedIn
https://linkedin.com/sidecar-global 

👍 Please Like & Subscribe!
https://twitter.com/sidecarglobal 
https://www.youtube.com/@SidecarSync
https://sidecarglobal.com

More about Your Hosts:

Amith Nagarajan is the Chairman of Blue Cypress 🔗 https://BlueCypress.io, a family of purpose-driven companies and proud practitioners of Conscious Capitalism. The Blue Cypress companies focus on helping associations, non-profits, and other purpose-driven organizations achieve long-term success. Amith is also an active early-stage investor in B2B SaaS companies. He’s had the good fortune of nearly three decades of success as an entrepreneur and enjoys helping others in their journey.

📣 Follow Amith on LinkedIn:
https://linkedin.com/amithnagarajan

Mallory Mejias is the Manager at Sidecar, and she's passionate about creating opportunities for association professionals to learn, grow, and better serve their members using artificial intelligence. She enjoys blending creativity and innovation to produce fresh, meaningful content for the association space.

📣 Follow Mallory on Linkedin:
https://linkedin.com/mallorymejias

Speaker 1:

I think storytelling also could make content more accessible for people who may not have the patience or the attention for more traditional, drier content, so I think it potentially opens up some doors. Welcome to Sidecar Sync, your weekly dose of innovation. If you're looking for the latest news, insights and developments in the association world, especially those driven by artificial intelligence, you're in the right place. We cut through the noise to bring you the most relevant updates, with a keen focus on how AI and other emerging technologies are shaping the future. No fluff, just facts and informed discussions. I'm Amith Nagarajan, chairman of Blue Cypress, and I'm your host. Greetings everybody and welcome back to another episode of the Sidecar Sync. It is exciting to be back with you, as always, and we have a whole bunch of interesting topics at the intersection of artificial intelligence and associations for you today. My name is Amith Nagarajan.

Speaker 2:

My name is Mallory Mejias.

Speaker 1:

And we are your hosts. And before we jump into our three interesting topics for today, we're going to take a moment to hear a quick word from our sponsor.

Speaker 2:

Today's sponsor is Sidecar's AI Learning Hub. The Learning Hub is your go-to place to sharpen your AI skills, ensuring you're keeping up with the latest in the AI space. With the AI Learning Hub, you'll get access to a library of lessons designed to address the unique challenges and opportunities within associations, weekly live office hours with AI experts and a community of fellow AI enthusiasts who are just as excited about learning AI as you are. Are you ready to future-proof your career? You can purchase 12-month access to the AI Learning Hub for $399. For more information, go to sidecarglobal.com/hub. Amith, how are you today?

Speaker 1:

I'm doing really well. I'm enjoying my time up here in Utah. It's coming to a close fairly soon and I'll be heading back to New Orleans, where it's a thousand degrees and 100 percent humidity, but I'm enjoying the last few days up here while they last. How about you?

Speaker 2:

I'm doing pretty good myself, really excited that Ascend second edition is finally out on Amazon and available for free download on the Sidecar website. Amith, I think it was just a couple months ago on this podcast, you said we'll have that book done by end of July, and I remember thinking, ooh, I don't know. And I'm happy to report it is July 31st, the day we're recording this podcast, and the book is done.

Speaker 1:

It is awesome and congrats to you and the team and everyone who has contributed to the book across the Blue Cypress family, as well as our two awesome association leaders who are guest contributors, Liz from South Carolina CPAs and Alice from the Society of Actuaries. Both developed really great case studies that are in the book for the second edition of Ascend. I'm really pumped about it, Mallory. I mean, it's got a whole bunch of new content. The book went from like 185 pages to 290. So it's definitely been taking some HGH.

Speaker 1:

I guess the book is ready to go, and I think it's going to be a really good guide for people to execute on AI projects and implementations. I think the first edition was primarily about introducing a lot of these concepts to the market. And over the last 12 months, in all of our work and all of the content we've produced on AI and associations, we've learned a ton, and we've realized there are certain use cases and certain topics deep within marketing, deep within education, deep within fundamental technology areas like vectors, for example, which we've talked about on this podcast before. We've developed new chapters for each of those topics and others that have gone into the new edition of the book, so I couldn't be more excited about it. And I'm really pumped about what the Sidecar team is going to do next in refreshing our AI Learning Hub to take advantage of all this new content in learning format too.

Speaker 2:

Absolutely. I think, as you mentioned, those two case study chapters from Liz and Alice are maybe the two chapters I'm most excited about. But we've also kind of revamped the marketing chapter, as you mentioned. We added a vector chapter. There's a new chapter on agents. We kept a lot of the content, but we also obviously updated a ton of the content, now that the book is nearly 300 pages. So I'm thrilled about that. And yes, we are in the process of revamping our AI Learning Hub content to reflect a lot of the topics and themes that you'll see in Ascend 2nd Edition, so I'm really excited to roll that out.

Speaker 1:

Well, you mentioned the tight timeline for delivery, and I think that's the key to what we do across all of the organizations in our family: try to set very clear and very specific goals that are narrow. Make a lot of difficult decisions at the planning phase each quarter to eliminate projects that are very much worthy things, but to narrow our focus and to set firm deadlines. And, of course, we're pretty good at using AI. So we've used a hefty dose of AI to help us in creating Ascend.

Speaker 1:

Ascend is all original ideas from the team at Sidecar and other companies within Blue Cypress, as well as our guest contributors, but we've heavily used AI to curate ideas, to edit, to help us brainstorm and, in some cases, to write pieces of copy or to transfer knowledge from one modality to another.

Speaker 1:

For example, most of the work that I've done in the book has actually been in audio form, either talking to ChatGPT and developing outlines interactively through the voice agent, or using Otter.ai, which is another tool that I love using, where I just walk around New Orleans, or wherever I'm at, and talk to my phone and record things, and then take the transcript from that and summarize it and on and on. So it's a great synthesis of human creativity and ideas with AI brainstorming, AI counterpositions in some cases, because we've asked the AI to say what's wrong with this content, how can we improve it. And, more and more with the frontier models, we've talked about how, for example, Claude 3.5 Sonnet seems to be more intelligent, right? And so I know, Mallory, you were working with that particular tool on the book and getting some interesting feedback to help us more critically edit and improve our content.

Speaker 2:

Absolutely, and, as you mentioned, using it truly as a brainstorming assistant, as an editor for my own work. I'm curious, Amith, I know you wrote the Open Garden Organization book. Was it 2018 that you released that?

Speaker 1:

Yeah, it was published in 2018.

Speaker 2:

Yes. How long did that take you compared to the first edition of Ascend? And then we probably beat some records with the second edition as well.

Speaker 1:

Yeah, things keep going faster. So the first book that I wrote for this market, the Open Garden Organization I published I think it was early 2018. I spent a solid two years on that. I mean, I didn't work full time on it, but it took me two years to get it done. And I hired a third party company and paid them a good bit of money to actually do a bit of the copywriting for me. So I would speak to them and they would transcribe my ideas, send me drafts of chapters, I would edit them and that's how that first book got done.

Speaker 1:

And we talk about Open Garden in Ascend, by the way, because it's super relevant. We talk in Open Garden about this idea of opening up your resources and having a new perspective as an association: to be inclusive of not just your core audience but other audiences that are adjacent or perhaps even disconnected from your traditional core, because that opens up your total addressable market. And there's a lot of other ideas in that book that are super relevant in an AI-driven world. But, to your point, that took me two years, a lot of dollars, an external company, a lot of humans, and that was really pre-AI in the context of tools that were practical to use. Comparatively, we started the Ascend project, edition one, in January of 2023, I believe, and we published that in June. So it was about a six-month project. Again, not a full-time effort for any individual.

Speaker 1:

I put quite a bit of effort into that, I don't know, maybe 200 to 300 hours of my time personally over the course of six months. It was a pretty big effort. And then with Ascend second edition, in a way we've been working on it ever since the first edition shipped, with all the content we created at Sidecar, all the talks we give, all the webinars that we deliver, the custom delivery of education we do with a lot of associations as partners when we deliver education for their members, and so forth. But ultimately, I think the work itself we did in about three months start to finish, in that range, or two months as you said earlier, and probably fewer hours. Certainly on my end there were fewer hours. I probably put a hundred hours into this edition is my guess.

Speaker 2:

Yeah, and I guess you're right in a way. We have kind of been working on this one since the release of the first one, so it is hard to contextualize that amount, but we definitely churned it out, I would say, in a few months.

Speaker 1:

Right and we went through and redid basically the entire book. I mean, it's a second edition but it's completely refreshed. One of the things we changed is in the first edition we were playing around with an experimental idea of using a business fable format, where we said, hey, let's actually chart the course of a fictional character who's going through a transformative change in her association as the newly appointed CEO of the Society of Really Good Accountants, which was a fun name and we love the concept. And the idea was, I'd say, like you know, three quarters baked in the first edition of Ascend. It received some pretty positive feedback but also some criticism about being somewhat disjointed, and we decided actually to dispense with that concept for this particular edition. Maybe we'll bring that back in the future, but the idea was one that we didn't feel was fully formed for the second edition. So, even though we removed quite a bit of content from there, we reworked the order of the topics.

Speaker 1:

Mallory, that was a really heavy lift by you, where you looked at it and said, hey, this doesn't really make sense in the order that was originally presented. So you reworked pretty much the entire flow of the book, which was awesome. Sometimes when you're in something, looking at it over and over again, you're like, yeah, of course this makes sense. But then you were looking at it with a fresher set of eyes than mine, and you came back and said, yeah, actually this makes much more sense in this other order. And I think you maybe had a little bit of help with that revised outline. Was that ChatGPT, or was that also Sonnet 3.5?

Speaker 2:

That was Sonnet 3.5. I had an idea, I think we both had the idea, of wanting to break up the book into sections, kind of like a choose your own adventure. I think you were the one that coined that phrase in relation to this book, to me. But I did use Claude 3.5 Sonnet. I gave it the original book and said, what do you think about this? If we're trying to divide it into sections, what do you suggest? I didn't take that feedback flat out and use it. I worked with it a little bit and kind of put my own spin on it. But it was indeed a heavy lift, though honestly not for the reasons you might think. Just working in a Word document that was, I want to say, at its peak, like 400-something pages, it was honestly just more complicated to copy and paste things everywhere. But once we locked that in, it seemed like the rest of the book flowed seamlessly.

Speaker 1:

For sure. And you know, it's interesting, because I think this behind-the-scenes look at how Ascend second edition came together hopefully will be instructive and interesting for a lot of our listeners and viewers on YouTube. Which, by the way, if you listen to us on audio only, we do have a YouTube channel, which is growing rapidly in popularity, so please check that out. But I think the key to it is we are ourselves very much not only users of these tools, but trying to find new ways to really get more out of each squeeze. And the key to what we're saying here isn't so much that we're using it for copywriting, there's a little bit of that, but more than anything, we have an abundance of ideas across our ecosystem at Sidecar, across digitalNow, all these other things that we do, and we certainly are open-minded to ideas that come from AI as well as anything else. But a lot of the way we're using AI is editing and brainstorming and collaboration, and so there's really this human-AI partnership that's happening that has dramatically accelerated a significant work like the Ascend book. But it's different than what a lot of people think.

Speaker 1:

Some people might say, oh, you used AI for the book, so what did you do? You just typed into a ChatGPT prompt: I want a book on artificial intelligence for associations. And you know what, you can get something out of that type of a prompt. It may not be that great, it might be OK, it's increasingly better. But we feel the value we create is where we can connect our listeners and our readers to our experiences and our thoughts in this market and how to apply these technologies. But then AI certainly is a great assist in accelerating that and also, like we were just talking about, enhancing the quality of the work. So, in any event, I'm super excited about this release.

Speaker 1:

As Mallory said, the book is available for free download at sidecarglobal.com/ai. That is a totally free download. It's a PDF. We encourage you to download it and share it with anyone and everyone that you would like to. There are no IP rights issues. Our goal is to share this with as many people as we can to help them, and it is available in print and Kindle format on the Amazon store. We do plan, by the way, to create an audiobook. We're investigating the steps to do that right now. We may actually hire a human person to do that, but we're very likely to use AI. Eleven Labs actually has a service that will take a manuscript like Ascend and convert it into an audiobook. So we're looking into that as well. So more to come on that front, because a lot of listeners like to listen to their books instead of just reading them.

Speaker 2:

So that'll be something we'll be doing soon. Or watch on YouTube, whatever your preference. We'll figure out a way to get us in that medium. Today, we're excited to cover several exciting topics, as Amith said earlier. First, we're talking about Lumi, a brand new startup storytelling AI platform. Then we'll be talking about Google DeepMind's mathematical AI models. And finally, this will be interesting, we're talking about deepfakes in politics. Very timely conversation. So, first and foremost, Lumi.

Speaker 2:

Colin Kaepernick, the former NFL quarterback and civil rights activist, has launched a new AI-driven startup named Lumi. This platform is designed to empower creators, particularly those interested in producing comics and graphic novels, by leveraging AI to streamline the storytelling and publishing process. Lumi aims to democratize storytelling by providing tools that help creators develop, illustrate, publish and monetize their ideas. The platform is subscription-based and offers various AI-powered tools to assist in the creation of hybrid written-illustrated stories. These tools allow users to do things like create characters, generate illustrations, edit and publish, and, maybe most importantly, creators can publish their stories directly on Lumi and monetize them. They can order physical copies and create and sell merchandise based on their intellectual property. Another note here: creators retain full rights to their work on Lumi, which is a significant departure from traditional publishing models. Lumi also handles logistics like manufacturing, sales and shipping, allowing creators to focus on their craft.

Speaker 2:

Kaepernick's vision for Lumi is rooted in his own experiences with media and publishing. He faced challenges like long production timelines, high costs and issues with creators not having ownership over their work. Lumi seeks to address these problems by providing a more accessible and equitable platform for storytellers. Amith, when I heard about this I thought it was really exciting and, of course, I'm always kind of thinking on the two sides of the coin. So on one side, it's incredible that we're going to see this democratization of access to storytelling and story creation. But the other side of that coin is that this likely means we will eventually be seeing an incredibly oversaturated market full of stories everywhere. And that's the case for a lot of things, but I'm thinking specifically of storytelling. Do you see this as something that will kind of balance itself out in the market like it has in the past?

Speaker 1:

Well, you know, I think that obviously you know people's demand for different kinds of mediums will shift over time as the availability of content changes. I mean, certainly, if you think about how scarce high quality content has historically been versus what we're seeing now. Part of that is based on the market economics of the fact that there's so many more dollars to be chased as a producer of content at the scale of someone like a Netflix or an Amazon or an Apple or the traditional media producers. But for smaller producers, who traditionally wouldn't have had a platform at all, I think a tool like this could come in and make it possible to tell stories, just like the web originally made it possible for people to have blogs and for social media to make it possible for more people to connect. So I do think there's very much a democratization story there that will result in more diversity of content, which will satisfy niches that are so small that mainstream content producers couldn't ever, you know, seek to fulfill them. So I think that's an exciting thing, and you know, it's also when we think about storytelling as a medium. It's a way of communicating and it's not just about entertainment, which is where most people's minds go, I think, when we're talking about storytelling is entertainment and that's obviously awesome and amazing and it's a great part of our human experience, but at the same time, it is an incredibly powerful medium for business as well. So translating nonfiction ideas into stories, I think, is another way that this type of a platform could be really powerful.

Speaker 1:

So, coming back to your question, I don't know if there will be saturation. I think there's ebbs and flows in a lot of these things. More AI will beget more content. I think there will be more great content in there, and there will be a lot of really bad content. So I think people filtering, and then tools like AI tools filtering out bad content and helping you find what matters to you, is going to be more important than ever for sure.

Speaker 2:

Absolutely. I had a note in here for myself that we have a whole section in Ascend second edition, very relevant, on storytelling as a way of marketing, like you just mentioned. I was thinking Lumi would have been a great addition to that chapter if the book wasn't already done. Why has this been a focus of yours, Amith, in the last few years? And maybe it's been longer than that, but I do feel like it's been something more recent, at least in terms of how we've talked about storytelling.

Speaker 1:

I've aspired to get better at this in my business communication skills for decades. I've heard for a long time how powerful storytelling is in marketing and from a sales perspective. If you're able to weave a story, weave a narrative through a presentation or a pitch or a demo of a product, you're more likely to connect with whoever it is that you're sharing your content with. So if you're going to go present to your board, are you just presenting the latest results from your association's financial performance in very dry black and white terms, or do you tell it in a narrative story? And that doesn't necessarily mean to create fictional characters and have a story arc and all that, but to use elements of storytelling to make ideas come to life in a different way. So I've been interested in the idea.

Speaker 1:

I don't consider myself to be particularly good at this, really at all, but it's something that I've aimed to get better at and I've worked at over a period of time. And certainly when you're writing, I think there's an opportunity to do that, which is why we experimented with that idea in the first edition, and we'll probably do other experiments like that in the future. To me, ultimately, I think the question is, when you think about an association and what it's trying to do in connecting with people and, in many cases, conveying critical information to those people about things that are happening in their profession or their sector, storytelling perhaps could play a role. And I'm actually certain it could play a role. And I think storytelling also could make content more accessible for people who may not have the patience or the attention for more traditional, drier content. So I think it potentially opens up some doors. So that's what comes to mind and why I've been focused on it overall for a period of time.

Speaker 2:

Mm-hmm. I want to dig into that just a little bit more. In working through this topic, I realized pretty much everyone knows what storytelling is. We all have this knee-jerk intuition of what telling stories means. But I've struggled myself with bringing this down to the ground in terms of marketing, in terms of business. I was thinking, maybe are we talking storytelling about the history of an association, or storytelling from the perspective of a member of an association? And maybe it looks like all of those things. But I'm curious, Amith, could you provide an example or two of how you see that in practice, storytelling in the world of associations?

Speaker 1:

I mean, I think a lot of it comes down to, like, what do people actually do in their life? Like if you go to an association's conference and you listen to a keynote or you listen to sessions being presented, sometimes they're super interesting, sometimes there are story arcs in what people are presenting. A lot of times when people say, oh, my favorite speaker was so-and-so, a lot of times it's because that person actually used a story arc in the way they presented their content, or they told personal stories that really illustrated key points. But what do people do once those sessions are over and they're at the lunch or they're at the cocktail hour? A lot of times they're just, you know, connecting with other people and they're telling stories. They're telling stories from their lives, they're telling stories from their business, and that's what people tend to remember, because there's an emotional connection there, more so than just, like, the dry information being passed.

Speaker 1:

And so, like in my own career, a lot of times when I've tried to share ideas with team members at various companies I've been involved with, if I had an experience that was particularly relevant, I wouldn't just say something like, oh well, I think that entrepreneurs tend to, for example, have very outsized egos, because entrepreneurs have to have kind of big egos in order to even start a company, because it's outlandish to say I'm going to create a new company from scratch. You also have to be a little bit of a psycho in terms of your degree of optimism in order to be an entrepreneur, because it's extremely likely you will fail, and that's the highest probability outcome. So you have to have both a pretty decent-sized ego and be pretty optimistic, and that actually blinds you a lot of times to things that are right in front of your face. So I can tell you those theories, but I can also tell you a story about how, when I was a younger entrepreneur, I was back in California at the time, where I grew up, and I had this experience where, once upon a time, we got literally a knock on our physical door by a couple of young guys. Myself and my co-founder at the time in my old business, we were really young guys, and these guys came by and said, hey, you guys have a lot of bandwidth, can you spare us some bandwidth? And we're like, no, you guys are a bunch of jokers, we don't want anything to do with you. And they're like, no, no, no, we really need the bandwidth. We're like, well, just pay us for it. And we did. Sure enough, we took their cash, not their stock. That company turned out to be eBay. It would have been better to take their stock than their cash.

Speaker 1:

And it was our giant egos that resulted in us not seeing clearly that in fact they were actually pretty far along in their journey. We just didn't pay attention. We're like, an auction site? That's a joke, that's a toy. We're an enterprise software company. So we had such big egos that we couldn't see past ourselves.

Speaker 1:

And so I tell that story. Everyone always remembers that story. It's like, oh you jackass, why did you not, you know, get stock from those guys? I still feel that way, and I remind myself of it when I tell that story, much more so than the theoretical idea of, like, hey, keep your ego in check and think about things from a little bit more balanced perspective.

Speaker 1:

So much power in this. I love it. By the way, as an aside, I grew up in California.

Speaker 1:

Like I just mentioned, I'm a lifelong 49ers fan, so some of you will love that, some will not.

Speaker 1:

I'm also a fan of Colin Kaepernick. I think the idea of him in particular doing this based on his own experience, having a very difficult time telling his story after what happened to him in the NFL, and the cost, the timelines, just the challenges, all of that is a really illuminating thing. He's someone who has been able to tell his story over time; I'm sure there's been a tremendous amount of perseverance on his part and from other supporters of his to tell his story and to tell other stories like his. But I think that's a really inspiring origin story for why this brand has come to the table. I have no idea if this particular company will be the one that's successful, or there might be many, but I love the idea of a purpose-driven company like this that's trying to bring a technology to solve a particular pain point. Whether there's a great economic engine behind this, I have no idea, but I think it's just really interesting. It's an interesting use case for AI for sure.

Speaker 2:

I don't think this product is out just yet, but I did think it would be interesting to perhaps work with it on Ascend 2nd Edition and see if we could come up with maybe a comic book for it or something a little bit different from what we've tried in the past.

Speaker 1:

Totally, I'd love to see that.

Speaker 2:

And I've got to say I always love the eBay story, Amith. You told me that a while back. I'll never forget it. I think I told it to our CEO, Johanna, earlier last week. It's a great one, so I'm glad you shared it with our listeners.

Speaker 2:

Topic two: Google DeepMind's mathematical AI models. Google DeepMind developed models capable of solving complex mathematical problems at a level comparable to top human contestants in the International Mathematical Olympiad, or the IMO. The AI systems, named AlphaProof and AlphaGeometry 2, have demonstrated remarkable capabilities in mathematical reasoning and problem solving. Together, these two models solved four out of six problems at the IMO, earning a score of 28 out of 42 points, which may not sound so good at a glance, but this score is equivalent to a silver medal, just one point shy of the gold medal threshold. The AI systems solved problems in algebra, number theory and geometry, but did not solve the combinatorics problems. I had to look that up. That's the mathematics of counting and arranging. Think permutations and combinations.
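For readers who want the "counting and arranging" idea made concrete, here is a tiny sketch using Python's standard library. The specific numbers are just illustrative examples, not anything from the IMO problems themselves:

```python
from math import comb, perm

# Permutations: ordered arrangements. How many ways can gold, silver
# and bronze be awarded among 6 contestants? Order matters here.
print(perm(6, 3))  # 6 * 5 * 4 = 120

# Combinations: unordered selections. How many ways can a solver pick
# 4 problems to attempt out of 6 on an exam? Order doesn't matter.
print(comb(6, 4))  # 15
```

Combinatorics problems at the Olympiad level go far beyond these formulas, which is part of why they remain hard for AI systems.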

Speaker 2:

To give you an overview, AlphaProof, the first model, focuses on formal mathematical reasoning. It combines reinforcement learning with the Gemini language model and AlphaZero, a different model. It solved two algebra problems and one number theory problem, including the most difficult problem of the competition. And then AlphaGeometry 2 is designed, of course, to tackle geometric problems. It integrates LLMs with symbolic AI using a neurosymbolic approach, and it successfully solved the geometry problem. The success of these two models at the IMO demonstrates that AI can achieve high-level performance in complex mathematical reasoning, a domain traditionally dominated by human intelligence. So this one for me was pretty interesting, Amith, because we all know AI has not achieved reasoning just yet. That'll be a really big day on this podcast when that does finally occur. But it seems like AI is reasoning, right, when it's handling these really complex mathematical problems. So can you kind of explain that a little bit, how we're seeing the illusion of reasoning without having it?

Speaker 1:

Well, in the context of language models and also language vision models, all the things that consumers interact with (ChatGPT, Claude, Gemini, et cetera), that is where we're seeing a facsimile or an illusion of reasoning, because these models, again, are not actually reasoning; they're just predicting what they should say next, essentially. We've covered that in our Fundamentals of AI 1 and 2 pods and videos on YouTube, and the idea basically is simple: these models are complex statistical programs that just guess the next word, but with high levels of accuracy, and that's why they're so good. So it makes you feel like they're actually reasoning, but they're not actually using math; they're not using any particular approach that is grounded in structured thinking or process. Now, future models are actually doing that, overlaying these other concepts on top of language models. What you're seeing here, in the particular narrow domains of AlphaProof and AlphaGeometry, is kind of like what we've covered in the past as mixture-of-experts models in the context of language models. It's a little bit different here, though. These are basically hybrids, where you take the strengths of language models, which is not reasoning, and then use, for example, symbolic AI or deterministic decision-based systems, which are other branches of computer science, in some cases AI, in some cases not, that are really good at specific things. What you're able to do is use the strengths of these different types of technologies to solve problems that they individually would not solve. That's really what's happening here.
There are fundamental innovations at the individual model level that Google or DeepMind, which is a branch of Google, has come up with, but ultimately, what you're seeing here that I think is most exciting is essentially combining models together to solve problems that the models individually would have no chance of solving at this level.

Speaker 1:

So, coming back to the question of reasoning, these models actually are performing reasoning, because they're applying a step-by-step way of breaking down a complex problem and solving it, but it's a very, very narrow use case of reasoning. So they are reasoning, but they're not reasoning on anything you throw at them. These are not models that consumers can interact with and ask anything they want; they're specifically designed for this set of problems. If you think of reasoning in its most simplistic definition, breaking down a complex problem into a step-by-step solution and executing that solution, that is what these tools are doing, but again in a very narrow domain. That doesn't take anything away from the achievement, because the achievement is truly extraordinary. I mean, a silver medalist in the math Olympiad, four out of six. I'd get zero out of six personally, and I'm somewhat decent at math. So the point is, it's really, really smart in this domain. And that means that if you extrapolate what's happening in the world, you're going to see broader and broader examples of this that blend true reasoning with the broader capabilities of these language models. So I think that's what's exciting.

Speaker 1:

The other thing I would say here is, where math goes, so does science. Remember that math is essentially the foundation for just about everything in terms of scientific pursuit. If math can advance at a nonlinear rate, where you have original, novel math being done by AI, then there will be original, novel scientific breakthroughs also originated and executed by artificial intelligence. So that becomes super exciting. You could say, hey, let's do a deeper version of this in other domains. We've talked about in the past on this pod, at some length, things like AlphaFold. We've talked about weather models and models in other domains, and so in material science, for example, you're going to see this type of capability explode those other kinds of scientific models further, in terms of their ability to solve novel problems. That I find super, super exciting.

Speaker 2:

This makes me think of an example that I saw in our prompt engineering mini course led by Thomas Altman, or perhaps it was a session he led last year sometime, of giving ChatGPT a really simple math problem, something like you're given apples, you take away apples. If you don't tell it to think step by step, it gets the question wrong, and if you simply prompt it to think through the problem step by step, it will actually get the problem right. I've seen that play out in action. I'm, of course, not comparing ChatGPT to either of these two models, but I'm wondering, from your perspective, do you think this is better training with these two models? Do you think this is more fine-tuning, or do you think this is something else altogether?
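The "think step by step" trick Mallory describes is usually called chain-of-thought prompting. A minimal sketch of wrapping a question that way is below; the apple problem is a made-up example, and the API call is left as a comment because the client and model name are assumptions, not something from the episode.

```python
def cot_prompt(question: str) -> str:
    """Wrap a question with an explicit chain-of-thought instruction."""
    return f"{question}\n\nThink through this problem step by step."

question = (
    "I have 7 apples. I give away 3, buy 5 more, "
    "then give away 2. How many apples do I have?"
)

# Sending both variants to a chat model would look roughly like this
# (client setup and model name are assumptions):
#   client.chat.completions.create(
#       model="gpt-4o",
#       messages=[{"role": "user", "content": cot_prompt(question)}],
#   )
print(cot_prompt(question))
```

Comparing the model's answers with and without the appended instruction is the experiment Mallory describes; the wrapped prompt tends to elicit intermediate steps, which often improves arithmetic accuracy.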

Speaker 1:

No, it's different types of models being brought together. It's not so much different training or different fine-tuning; it's different algorithmic approaches that are combined, and then there's obviously some supervisor algorithm of sorts that brings it all together to actually solve the problem in the context that's being described. We talk a lot about the transformer-based architecture and large language models, as well as others that are sitting on top of that basic innovation that happened way back in 2017, and a lot of the advancements that have occurred since then are based on that architecture. That's the architecture that, essentially, is what a lot of people think of as, quote-unquote, AI, where you have this predict-next-token, predict-next-word concept, and as amazing and as powerful as it is, it's just one step in the evolution of this stuff. There are many, many other things going on in parallel that are completely unrelated to the transformer architecture. They may have shared scientific roots in some ways, but there are lots and lots of branches of this stuff happening at the same time. Prior to the ChatGPT moment in late 2022, most people had never heard of a transformer architecture or the idea of a GPT or any of this stuff, even though it had been around for several years before that. Similarly, you're seeing some things now start to bubble up that are really interesting from a research perspective but aren't yet commercially impacting the world. They might not impact an association day to day, but they are things you should pay attention to, because ask yourself this question: if you knew, even in 2021, that by the end of 2022 the world would be completely different with AI and ChatGPT, what could you have done differently to be better prepared?
So, similarly, here we know that some of these innovations, by next year or the year after, are going to have, once again, radical impacts on our world. So if you are an association in any type of scientific discipline, you're in engineering, you're in architecture, you're in a branch of science or you're in education, you have to look into this stuff, because it's going to affect your field. Will it affect the administration of your association? Maybe to some extent, because, just generally, AI will become smarter and more capable of reasoning, because these kinds of mathematical models will actually make their way into consumer models over time as, essentially, accessory models, if you will, to the main models.

Speaker 1:

Something like GPT-4o may ship in its next release with a whole bunch of coprocessors. In fact, that's how computer architectures have worked for a long time, where you have something like your central processing unit, your CPU, which is the main brain that runs your computer. But then, quite a few years ago, we started getting coprocessors, the most famous of which is the GPU, which is now powering AI but is really good at graphics. You're going to have that happen with AI as well. It's not going to be a one-size-fits-all thing, and, again, going back to the association context, it means the models are going to have more power for your business, but in your domain. Maybe you're not the world's greatest expert in the content of your domain within the staff of your association, but you have to be aware of this stuff. At a fundamental level, that's the most important thing: to be knowledgeable about it and to follow it. I think it's just fundamentally interesting in terms of where it will take these models within 12, 24, certainly 36 months.

Speaker 2:

So you think what's most impressive about this topic today is the fact that we have multiple AI models working together to give us a really narrow sense of reasoning, kind of in this one domain of math?

Speaker 1:

Think about it maybe a little bit more abstractly, because the math part is cool, the science part is cool, but think about it a little more abstractly. If all I can do is use everything I've ever read, listened to, watched or heard to predict what should come next, kind of definitionally, I can't really create something new. All I'm doing is predicting what should come next based on what I've been trained on, right? So, as a human being, everything I've read, everything I've seen, everything I've smelled, et cetera, all of that collectively gets somehow stored in my brain, whatever I've retained, and that's going to inform my thinking in terms of what's next. If that's all I could do, predict the next token, and sometimes that's exactly what I do, I couldn't create anything new. But the point is that we can create new things that aren't necessarily based on our training data, if you will. They're based on ideas, or based on logic, or based on knowing certain fundamentals of how you reason through solving a novel problem. How do you take flight? How do you put a man on the moon? How do you create a vaccine?
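The "predict what should come next based on what I've been trained on" idea can be caricatured with a tiny bigram model. Real LLMs are vastly more sophisticated, but the statistical spirit is the same: count what tends to follow what, then emit the most likely continuation.

```python
from collections import Counter, defaultdict

def train_bigram(text: str) -> dict:
    """Count, for each word, which words follow it in the training text."""
    words = text.lower().split()
    model = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        model[current][nxt] += 1
    return model

def predict_next(model: dict, word: str):
    """Return the statistically most likely next word, or None if unseen."""
    followers = model.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

model = train_bigram("the cat sat on the mat and the cat ran")
print(predict_next(model, "the"))  # → cat ("cat" follows "the" twice, "mat" once)
```

Note what the sketch cannot do: ask it about a word it has never seen and it has nothing to say, which is exactly the "can't create something new" limitation being described.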

Speaker 1:

These are all things that are novel breakthroughs. Or how do you solve problems in economics or whatever it is that you're doing. How do you come up with the idea for a story that you want to write? These are not necessarily based on your training data. Of course, your training data influences it, but it's not so much that you're predicting what's next based on what's come before.

Speaker 1:

So these other models that are out there that are capable of applying true reasoning, where they have the definition of a problem and are able to break it down into components and come up with a novel solution, that is what's truly remarkable about this. In the case of AlphaProof and AlphaGeometry, they're doing that within a very narrow domain. If we can broaden that somewhat, and we can create novel creations, whatever they are, then the AI goes way, way beyond what we have now. Because what we have right now is amazing, we didn't have it even two years ago in a consumer sense, and it's powerful, but being able to create new things from scratch that aren't necessarily the natural extensions of what came before is what gets me excited.

Speaker 2:

That makes a ton of sense. Do you think the path to AI reasoning is seeing more narrow use cases of it pop up across the sphere until they all kind of merge into one, or do you think we'll just suddenly have general reasoning?

Speaker 1:

Generalization is the hard part that no one has figured out, that I know of, at the moment. There is no leading theory that looks like it's going to be the ticket to generalization of knowledge. That's what we do, right? We have an experience and we generalize it to something else. Early this morning I was out here on the lake near my house in Utah, and I was doing this thing called e-foiling, which I think we've talked about before. It's one of the fun things I love to do when the weather's a little bit warmer, and a friend of mine was out there with me and he brought his 11-year-old, which was super fun. [inaudible] a few times when I was a kid, and then deciding I didn't want to do that anymore. But I tried to relate it to him: okay, well, just pretend you're on a skateboard and figure it out, is what I told him before he went out there. And of course he's 11, has no fear and is a very athletic kid, so he figured it out, and within five minutes he was cruising around. It was pretty cool. But that kind of generalization, a lot of 11-year-olds can do that, right? That doesn't make him a world-class athlete; it just makes him a typical 11-year-old kid who's a little bit on the athletic side. And we can do that all the time with all sorts of things. So we're able to generalize like that, plus, that example is very much in the world, right? That's the world model, and that's part of what we've talked about before: these language models have not, to date, had a world model where they understand 3D, they understand the world around them, they understand physics beyond just having read a physics textbook. So this 11-year-old, if he was the next Isaac Newton and he'd read all the physics textbooks in the world but had never ridden a skateboard, if he was the most brilliant genius but had no experience in the real world, would he have been able to get on the e-foil and ride?
Probably not, right? Because he lacks that real-world experience.

Speaker 1:

The same thing with models. A world model is one piece of it, and generalization is another piece, along with reasoning. Reasoning is one piece we've got to add, but beyond that, there's more to it, and I think that's where the next breakthroughs need to occur. That's probably, I'll say, five to 10 years out, but maybe it'll happen in the next 12 months. Who knows? Better AI begets better AI, right? We have tools now that are far more powerful than anything we've ever had in our history, and that means we're going to keep innovating faster, which is exciting and scary at the same time, right?

Speaker 2:

Absolutely. That's really helpful, though, that idea that reasoning is just one piece of it, whereas I think I was thinking it was the piece. But you're right, how could it e-foil?

Speaker 1:

It couldn't. Reasoning is a very natural next step. Generalization would be another next step. Maybe it happens after reasoning, maybe it happens concurrently. And the idea of a world model, people are already working on that. It's one of the reasons these multimodal models are so important, because they're trained not just on text but on images, on video and on other forms of data. That's actually one of the reasons a lot of people look at companies like Tesla and say, hey, they have a really interesting advantage with respect to AI, because they have trillions of hours of training video that no one else in the world has, from real-world physics happening on the road.

Speaker 2:

Well, speaking of Tesla, Amith, in our next topic we're actually talking about Elon Musk in regards to deepfakes and politics. You all may have seen this in the news recently, but Elon Musk faced some criticism for sharing a deepfake video of Vice President Kamala Harris on his social media platform X, formerly Twitter. The video, which was originally posted by a podcaster and labeled as a parody, was manipulated to make it appear as though Harris was making derogatory remarks about President Joe Biden and herself.

Speaker 3:

I, Kamala Harris, am your Democrat candidate for president because Joe Biden finally exposed his senility at the debate. Thanks, Joe. I was selected because I am the ultimate diversity hire. I'm both a woman and a person of color, so if you criticize anything I say, you're both sexist and racist.

Speaker 2:

The clip goes on: "I may not know the first thing about running the country." Musk shared the video without a disclaimer indicating that it was a parody or manipulated content. Instead, he captioned it with, quote, "This is amazing," and a laughing emoji, which led to widespread criticism for potentially misleading his vast audience of nearly 192 million followers. The video quickly garnered millions of views, of course, raising concerns about the spread of political disinformation, especially leading up to this presidential election. Critics pointed out that Musk's actions appear to violate X's own policies, which prohibit sharing synthetic, manipulated or out-of-context media that could deceive or confuse people and lead to harm. Musk defended his actions by asserting that parody is legal in America.

Speaker 2:

The incident highlights the growing concerns about the misuse of AI in creating deepfakes and the challenges social media platforms face in regulating this type of content. Now, Amith, we predicted something along these lines in our 2024 predictions episode of this podcast, which we released in late 2023. I believe the prediction was that a major news platform would share a deepfake video without initially realizing that it was a deepfake. So this isn't exactly that, but it's certainly in that same vein. What were your thoughts when you heard about this?

Speaker 1:

Well, first of all, Musk will be Musk, and love him or hate him, he is who he is. I think he's going to keep doing this kind of stuff, and if you're paying attention to his content, you should be aware that a lot of it is going to be like this. That doesn't necessarily excuse him for not having disclosed that it is a deepfake. I wouldn't want to do that without disclosing that this is not only a deepfake but something you should be really aware of. So I think he, and people like him who have a large follower base, have the opportunity to do tremendous good by helping educate people that these things are super easy to create. My thought on this particular piece is, yeah, it's exactly the kind of stuff that people will watch, and a lot of people will not realize it is fake. I don't know what that percentage would be, but I'd be surprised if it's a very small percentage. I think a good percentage of people would view that and say, oh yeah, this is a real thing. So that's scary, right? That's the potential to influence millions of people on lots of topics. We called that out in our predictions pod, as you mentioned, and because the year's not over yet, I still think there's a good chance some media source will unknowingly air a fake video. Something as obvious as this might be different, but think about the more subtle shifts you could make. You take a political actor like that and say, okay, we're going to change their position ever so slightly on particular topics to create confusion and to show that they're disingenuous about certain things. Say they support topic A, and then they start hedging on it, when in reality they never hedged. Then what do you know? What's real, what's not?
So I think the most dangerous types of deepfakes are the ones that are actually just slightly off, because then they start taking people down a path where they believe them, and you could see these accounts getting a lot of followers and people believing that they're real. So to me, that's the real concern, and I think we have to be talking about this stuff.

Speaker 1:

Again, with AI, there is no solution at this moment to detect deepfakes. There's no such thing. There are attempts at creating watermarks for authenticity so that you can positively assert, essentially, the provenance of a video or an image or any kind of asset, to say, hey, this is a real production of person X. I think that's going to become more and more important: to not trust anything by default and to verify everything by default, particularly from public figures like this. To me, that's probably the most realistic solution. Maybe that's a blockchain-based solution, maybe it's something else, but AI is not able to just automatically detect deepfakes for you. So don't assume what comes to you over the air, through cable, through streaming or on your device is real. That's a sad state of affairs, perhaps, in some ways, but I think the only way you can really approach it is with that very high degree of skepticism.

Speaker 2:

I suppose it's part of our due diligence as humans on this earth now to be really critical of the information and the news we're consuming. It definitely seems overwhelming to have to think that everything we see may not be legitimate, but I think you're right. Until we have a solution, which we don't right now, that's probably how we have to move through life.

Speaker 1:

Yeah, I think there will be solutions eventually. It's the cat-and-mouse game, the move, countermove thing, where there will always be tools that are somewhat better than what people are able to detect, and with enough commitment and resources you can stay just slightly ahead and try to scam people. That's been going on for a long, long time across a lot of domains, well preceding the internet and digital-based approaches like this, and this is just the latest variety of it, an AI-powered version of scamming, which is super, super scary.

Speaker 1:

There are also hyper-personalized deepfakes that you have to be aware of. It's not just political motives to influence elections, which are obviously problematic enough; it's getting a fake phone call from a loved one telling you they're in trouble and they need you to send money to help them out. How do you address that? Well, you have to be prepared for it. You probably need to talk to your family, and certainly talk to your colleagues at your business. Maybe set up some old-school rotating codes that you and only you would know, that aren't anywhere on a computer, that are handwritten by people, and you use those as monthly rotating codes to verify your authenticity. Of course, that can be hacked too; it can be guessed, predicted, because we're all very predictable machines as people.
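Amith's rotating-codes idea, a shared secret plus a schedule, is essentially what the TOTP standard (RFC 6238) formalizes in software. He suggests handwritten monthly codes kept off computers; purely as an illustration of the same principle, a minimal software sketch using only the Python standard library looks like this:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """Derive a short rotating code from a shared secret and the clock."""
    counter = unix_time // step                       # which time window we're in
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Both parties share the secret; a caller proves identity by reading
# back the current code, which changes every `step` seconds.
print(totp(b"12345678901234567890", unix_time=59, digits=8))  # → 94287082
```

The printed value matches the published RFC 6238 SHA-1 test vector, and for a family use case you'd just lengthen `step` (say, to a month) and agree on the secret in person.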

Speaker 1:

But it's better than nothing, right? There's a lot to unpack here in terms of what to do about it. So, in a way, Musk's action, and I don't think this was his intention, created a big controversy that hopefully makes enough noise that people can't ignore it: hey, there's this thing out here that you have to pay attention to. I don't think he thought eight steps ahead like that. I think he just shared it because he felt like it, and that's what he did. But when someone with that kind of a followership does that, and then people figure out it's fake, hopefully it increases awareness as a byproduct of that action.

Speaker 2:

We can all agree on this podcast that deepfakes in politics are pretty much bad across the board. Deepfakes with scams, of course, bad. Deepfakes of loved ones, bad. But in one of our AI webinars we had someone ask us about HeyGen, which is a tool where you can create an AI avatar of yourself, and they asked, are you deepfaking yourself? I had to think about it for a second, because again my mind went to the thought of, no, deepfakes are bad, we're not doing that. But Thomas confirmed, yes, we are deepfaking ourselves, and it got me to look at this from a different angle. I know you and I have talked about the use case of creating an AI avatar of yourself and being able to send one-to-one personalized videos from you to a member, for example. Do you think it's always best practice to disclose that when you're doing such a thing?

Speaker 1:

I think you're getting into a little bit of subjectivity there in terms of what's best practice. I personally believe it's a good idea to disclose something that's AI assisted or AI created versus human created. It's like in the context of Ascend, right? We talk all the time about it, and of course it's the perfect place to showcase how to use AI, but I think it's really important to disclose that. That's my personal opinion. I don't know that that's the right answer for everyone in all contexts. And deepfake, just to break down that term: the fake part is probably obvious, I mean, all this stuff is fake because it's not authentic, human-created content, and the deep part just comes from deep neural networks, which is the technology all this stuff is based on and has been for about 12 or 13 years now. So there's really nothing meaningful in that term other than that it's been the term of art in the popular consciousness. But yes, we're deepfaking ourselves when we use an avatar, and we use video editing; all of that is in that category. And so you can use it to improve your content, you can use it to hyper-personalize content, like sending that one-to-one video you just described. Actually, just going back to the realm of politics real quick: people think, oh, the deepfake is going to be used by an opponent, or a state actor that wants to influence the election, to essentially undermine a particular candidate. And that might be the most initial, blunt-force, simplistic way of trying to kill off the candidate, or whatever. It's the most obvious thing, and some people will be influenced by that, probably a lot of people.
It might work, it might work for a while. But what about what we said earlier, the more subtle attacks where you're shifting someone's perspective ever so slightly over time? And a different use of it: let's say you have a candidate who is not doing well because, let's say, they're aging and having a hard time presenting themselves well, but they don't want to be perceived that way, and so they use deepfake technology to take a highly coherent, wonderfully orated, beautiful message to the public and say, this is me, this is me debating someone, this is me speaking. It could be used to amplify, and to hide or cover up issues. So it can be used in a lot of different ways, on offense, on defense and in all sorts of other ways. I think we have to be really thoughtful about it.

Speaker 1:

Coming back to the business use case, I think it's fine as long as you're responsible with your use of it. In the book, we talk about this idea of translating content. There's a whole chapter of the book where we talk about translation. People's brains immediately go to, oh, English to Spanish, Spanish to French, French to Chinese, whatever, and of course that's a great use case. But what about translating in this context? Taking a message that was maybe tailored to an experienced professional group, and now we want to tailor that content to an emerging young leaders group, people who are right out of college, who use maybe not a different language, but who perhaps communicate differently in some ways. I can certainly relate to that with teenagers.

Speaker 1:

So I guess the point would be, there are lots of ways of leveraging this technology for good and creating content that serves people. People know when they're doing it for good, I think, and people know when they're doing it for something other than that. So the question is, like with all tools, what are people going to do with this stuff? There is the side of it of, can we detect it? But then there's also the side of, hey, we're disclosing it. It's a super interesting topic. I wish I had better answers, but I think we're going to have to just wade through this muck as a society.

Speaker 2:

Yeah, it goes back to the saying that the more we learn about this stuff, the less we know and the more questions we end up with, which is, I think, a good thing overall, and we're having this discussion live on the podcast. Something I always come back to is being able to leverage AI to create more stories, to create more personalized interactions, but then realizing that it's also kind of missing that human connection piece, if that makes sense. Like, sending a video of me to someone and disclosing that it's AI, does it still carry the same weight as if Mallory took the time out of her day to record a video for you? I don't know. I don't have the answer to that.

Speaker 1:

Yep, that's where I think the philosophers need to come in and help solve those problems.

Speaker 2:

Maybe that'll be a guest soon on the Sidecar Sync podcast. Well, Amith, this was a really great convo today. Thank you for sharing your insights. And everyone, please check out Ascend 2nd Edition. You can access it at sidecarglobal.com/ai for free right now, and we will see you all in next week's episode.

Speaker 1:

Thanks for tuning into Sidecar Sync this week. Looking to dive deeper? Download your free copy of our new book, Ascend: Unlocking the Power of AI for Associations, at ascendbook.org. It's packed with insights to power your association's journey with AI. And remember, Sidecar is here with more resources, from webinars to boot camps, to help you stay ahead in the association world. We'll catch you in the next episode. Until then, keep learning, keep growing and keep disrupting.