Sidecar Sync
Welcome to Sidecar Sync: Your Weekly Dose of Innovation for Associations. Hosted by Amith Nagarajan and Mallory Mejias, this podcast is your definitive source for the latest news, insights, and trends in the association world with a special emphasis on Artificial Intelligence (AI) and its pivotal role in shaping the future. Each week, we delve into the most pressing topics, spotlighting the transformative role of emerging technologies and their profound impact on associations. With a commitment to cutting through the noise, Sidecar Sync offers listeners clear, informed discussions, expert perspectives, and a deep dive into the challenges and opportunities facing associations today. Whether you're an association professional, tech enthusiast, or just keen on staying updated, Sidecar Sync ensures you're always ahead of the curve. Join us for enlightening conversations and a fresh take on the ever-evolving world of associations.
31: AlphaFold 3's Breakthrough, GPT-4o Innovations, and AI's Role in Organizations
In this episode, Amith and Mallory delve into the transformative impact of artificial intelligence (AI) on various sectors, with a keen focus on its role in the association world. They begin by discussing the advancements of AlphaFold 3, Google's latest AI model, which revolutionizes drug discovery and biological research. The conversation shifts to the newly released GPT-4o by OpenAI, highlighting its enhanced capabilities in text, audio, and image generation. They also explore the potential future of AI in personalized medicine and complex problem-solving. Finally, they review the 2024 Work Trend Index by Microsoft and LinkedIn, revealing significant insights into AI adoption in the workplace and the evolving job market.
Chapters:
0:00 Introduction & AI at Work
7:54 Advances in Protein Folding Prediction
11:44 Exciting Potential and Risks of AI
26:19 Advancements in GPT-4o
33:04 Product Roadmap and Future Innovations
41:10 The Future of AI in Work
46:36 The Role of AI in Organizations
54:58 The Value of AI Implementation
Check out these links to learn more about what Amith & Mallory discussed in this episode:
GPT-4o human interaction demo w/ Sal Khan: https://tinyurl.com/bdd9awvz
GPT-4o demos: https://tinyurl.com/y3twacb6
AlphaFold 3: https://tinyurl.com/4aay9pdn
Microsoft LinkedIn Work Trend Report: https://tinyurl.com/nhchks7e
This episode is brought to you by Sidecar's AI Learning Hub: https://sidecarglobal.com/ai-learning-hub. The AI Learning Hub blends self-paced learning with live expert interaction. It's designed for the busy association or nonprofit professional.
Follow Sidecar on LinkedIn:
https://linkedin.com/sidecar-global
Please Like & Subscribe!
https://twitter.com/sidecarglobal
https://www.youtube.com/@SidecarSync
https://sidecarglobal.com
More about Your Hosts:
Amith Nagarajan is the Chairman of Blue Cypress (https://BlueCypress.io), a family of purpose-driven companies and proud practitioners of Conscious Capitalism. The Blue Cypress companies focus on helping associations, non-profits, and other purpose-driven organizations achieve long-term success. Amith is also an active early-stage investor in B2B SaaS companies. He's had the good fortune of nearly three decades of success as an entrepreneur and enjoys helping others in their journey.
Follow Amith on LinkedIn:
https://linkedin.com/amithnagarajan
Mallory Mejias is the Manager at Sidecar, and she's passionate about creating opportunities for association professionals to learn, grow, and better serve their members using artificial intelligence. She enjoys blending creativity and innovation to produce fresh, meaningful content for the association space.
Follow Mallory on LinkedIn:
https://linkedin.com/mallorymejias
Three in four knowledge workers, or 75%, now use AI at work. AI is credited with saving time, boosting creativity, and allowing employees to focus on their most important tasks.
Speaker 2:Welcome to Sidecar Sync, your weekly dose of innovation. If you're looking for the latest news, insights and developments in the association world, especially those driven by artificial intelligence, you're in the right place. We cut through the noise to bring you the most relevant updates, with a keen focus on how AI and other emerging technologies are shaping the future. No fluff, just facts and informed discussions. I'm Amith Nagarajan, chairman of Blue Cypress, and I'm your host. Greetings everybody and welcome back for another episode of the Sidecar Sync. My name is Amith Nagarajan and I'm one of your hosts.
Speaker 1:And my name is Mallory Mejias. I'm one of your co-hosts and I run Sidecar.
Speaker 2:It is great to be back. We have another action-packed and exciting episode at the intersection of artificial intelligence and you. But before we get going, let's hear a quick word from our sponsor.
Speaker 1:Today's sponsor is Sidecar's AI Learning Hub. The AI Learning Hub is your go-to place to sharpen your AI skills and ensure you're keeping up with the latest in the AI space. When you purchase access to the AI Learning Hub, you get a library of on-demand AI lessons that are regularly updated to reflect what's new in the space. You also get access to live weekly office hours with AI experts, and finally, you get to join a community of fellow AI enthusiasts who are just as excited about learning about this emerging technology as you are. You can purchase 12-month access to the AI Learning Hub for $399, and if you want more information on that, you can go to sidecarglobal.com/hub. Amith, last week I think it was, we were at the Innovation Hub in DC. Can you talk a little bit about your time there?
Speaker 2:It was fantastic. Yeah, it was last week, wasn't it? In fact, I think it was almost exactly a week ago, or eight days ago, and it feels like three months. Things go so quickly. We had great turnout. There were dozens of association executives from I don't remember how many different associations, and a lot of really great conversations about what they're working on, a lot of collaborative sharing. There were some great speakers from across the Blue Cypress family. rasa.io announced an exciting new product, their personalization engine that goes beyond the newsletter. There's all sorts of cool stuff happening, and it was super fun to be in DC. I used to live in DC, I lived there for almost a decade, a long time ago, and it's always fun to go back and reconnect with old friends and get to meet new people. So I had a fantastic time. How about you?
Speaker 1:I had a great time myself. I got to lead a marketing AI panel for the second time; I did it the first time in Chicago, the second time in DC. We had some really great insights shared. And then, I think it was the day of the Innovation Hub, Amith, that OpenAI dropped GPT-4o, and then AlphaFold 3, which we're talking about today, was also dropped right around then. It was a crazy few days.
Speaker 2:Yeah, Monday of last week GPT-4o dropped, so it was the night that I was arriving in DC. I was able to catch up because they actually live-streamed it while I was flying, so I didn't catch it fully live. And then the next day, while we were together in DC with the association leaders coming to the Innovation Hub, Google had their I/O event, which is their developer conference, and they announced a whole bunch of really cool AI things there. So it was a busy week. I didn't blame too many people who were paying attention to Google and OpenAI instead of us, but it was stiff competition for attention those two days.
Speaker 1:Yep. This week for sure, we had a lot to pick from in terms of topics. It was kind of tough to narrow it down to three. I also want to shout out our fan mail feature. Maybe it was two episodes ago that we first mentioned it, but if you are listening on your mobile device right now, you can go to the show notes and click "Send fan mail to the show", I think that's what it says. We had some great fan mail from one of our listeners, Liz, two weeks ago, so shout out, Liz. If you have any thoughts, questions, concerns, anything that you want us to cover on the podcast, or just feedback in general, feel free to reach out to us through that fan mail link.
Speaker 1:Today we've got three topics lined up. The first of those will be AlphaFold 3. Really excited for that one. Next we'll be talking about GPT-4o, because how could we not? And then, finally, we're talking about Microsoft and LinkedIn's Work Trend Index report.
Speaker 1:So, first and foremost, AlphaFold 3 is a new AI model developed by Google DeepMind and Isomorphic Labs. AlphaFold 3 is designed to predict the structure and interactions of a wide range of biological molecules, including proteins, DNA, RNA, and small molecules called ligands. This model builds on the success of its predecessors, AlphaFold and AlphaFold 2, by expanding its capabilities to cover all of life's molecules and their interactions. So I'm going to try to contextualize this so we can understand why it's important. The model can predict the 3D structure of multiple biomolecules, like we mentioned, proteins, DNA, RNA, helping to understand the complex interactions within these biological systems. This capability provides detailed structural information that can be used to speed up the drug design process.
Speaker 1:AlphaFold 3 can predict how proteins interact with small molecules with high accuracy, which is crucial for designing new drugs. Because AlphaFold 3 can predict the structures of various biomolecules, it allows researchers to model complex biological systems more accurately. And by accurately predicting how drugs interact with their targets, AlphaFold 3 can help optimize drug efficacy and minimize side effects. Finally, AlphaFold 3 can predict the effects of genetic variations on protein structures, which can be used to develop personalized treatments tailored to individual patients' genetic profiles. Amith, that's kind of a lot, a little bit on the technical side in terms of biology. Can you help set the stage for why this is such a huge advancement?
Speaker 2:I have so many thoughts on this, but first of all, did you know what a ligand was? I'd never heard of that before this.
Speaker 1:You know I'm going to say that I'd heard of a ligand but I couldn't have probably answered in a multiple choice question what it was. But I'd heard of it, I had.
Speaker 2:I'm not a big life sciences guy. I'm intrigued by life sciences, but it was always my weakness in high school and in college. I was much more of a physical sciences guy, physics, computer science obviously, and also finance, so I had a hard time with it. But I find it extremely fascinating, and I want to talk about some of the reasons I'm so excited about this area of AI, perhaps more than anything else that's happening in AI.
Speaker 2:But ligands, yeah, it kind of occurred to me, I'm like, is that some kind of special New Orleans cuisine or something like that?

Speaker 1:I don't want a plate of ligands.
Speaker 2:Don't send me a plate, you never know. Anyway, first of all, AlphaFold is now in its third generation, and many things have happened with it over that period of time. When AlphaFold first came out, it was this novel way to predict protein structures in three dimensions. Just to quickly cover that: it's kind of like going from puzzle pieces to Lego blocks. We knew the chemical structure of proteins for a long time, not all proteins, but many of them. We had an understanding of proteins that naturally occurred as well as engineered proteins, but we couldn't predict how they would fold. Meaning, as a protein goes from its 2D chemical representation to the actual 3D structure, how the bonds are formed, and therefore what the angles are and what the structure of that protein looks like, you can kind of see it in 3D. That's the point of AlphaFold: to predict that protein folding, the 3D structure, essentially. So if you imagine going from flat puzzle pieces from a jigsaw puzzle to 3D Lego blocks of all sorts of shapes and sizes, you can then think about how these things fit together. And for so many diseases, when you're thinking about how to target the disease and how to develop a molecule that potentially can be effective in curing it, you're talking about trying to find the right molecule to fit. It's almost like a lock-and-key system. And so there's the concept of being able to predict at high scale. Now, hundreds of millions of protein structures have been predicted by AlphaFold, going back to AlphaFold 2 actually, and those predictions, which were very high accuracy, were all open-sourced by Google.
So a real shout-out to them for doing that, because in the podcast I listened to recently where they were talking about this innovation, they said that 1.4 million people had downloaded the AlphaFold 2 dataset to do experimentation work. It's amazing that that many people are trying to use this protein folding prediction data to advance life sciences.
Speaker 2:So, coming back to AlphaFold 3 and some of the enhancements: we're talking about going beyond basic proteins, dealing with more complex molecules and how they interact, and being able to model these complex systems. It's going to usher in, first of all, a better fundamental understanding of biology than what we have today, and so it's going to help us advance fundamental research in biology and many other related disciplines. And then, building on top of that, when we talk about solving problems, whether that's ecological problems like how do we get rid of plastics in the ocean, how do we help with deforestation, how do you deal with soil erosion, or how do you deal with coastal erosion in Louisiana, for example. I don't have a specific example within those subdomains, but the point is that a stronger fundamental understanding of these systems is going to be critical for innovation.
Speaker 2:Obviously, for a lot of the things we're dealing with, carbon capture, climate in general, and the challenges ahead in the coming decades, this knowledge is going to be tremendous. It's particularly relevant to drug discovery, when we think about how you solve for either broad diseases, where millions or hundreds of thousands of people suffer from something, or even very narrow cases. This is a category where I think our chances of success in the drug pipeline are going to increase.
Speaker 2:So think about how drug discovery has worked for a long time. For every drug that is approved by the FDA in the United States, and by comparable agencies elsewhere in the world, there are many times more drugs in each prior phase of the process. Meaning, as you go through even the earlier stages of drug discovery, you have candidate compounds, then you have to go through animal testing, and then from animal testing there are three phases of human clinical trials. The first is focused mostly on safety, the next on efficacy, and then you go to a larger-scale trial with larger samples. Each of these phases becomes radically more expensive, which, by virtue of those cost constraints, narrows the number of candidates you can push through the pipeline. If you get it wrong early and you spend hundreds of millions of dollars on a phase two or phase three clinical trial, as well as a lot of time, you end up at a dead end. And I think it's something like only 11% of drugs that enter phase one clinical trials ever make it out of phase three, so it's basically one in ten. If you can improve those odds, that is an incredible thing, and one of the ways you improve your odds is by having a better understanding of these complex biological systems: drugs that are better targeted, safer, more effective. I mean, it's just an amazing thing.
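To put rough numbers on the attrition Amith describes, here's a tiny arithmetic sketch. The 11% phase-one-to-approval figure is the one quoted in the episode; the doubled rate and the helper function are purely illustrative what-ifs, not pharma data or a claim about AlphaFold's actual impact.

```python
# Illustrative arithmetic only: 11% is the episode's quoted success rate for
# drugs entering phase 1 trials; the "doubled" scenario is hypothetical.

def candidates_needed(success_rate: float) -> float:
    """Phase 1 entrants required, on average, per drug that clears phase 3."""
    if not 0 < success_rate <= 1:
        raise ValueError("success_rate must be in (0, 1]")
    return 1.0 / success_rate

baseline = candidates_needed(0.11)  # roughly 9 candidates per approved drug
what_if = candidates_needed(0.22)   # roughly 4.5 if better targeting doubled the rate

print(f"baseline: {baseline:.1f}, doubled success rate: {what_if:.1f}")
```

Halving the number of candidates you have to push through hundred-million-dollar trials is exactly why better structural prediction matters economically.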
Speaker 2:So I get really pumped up about it. As limited as my own understanding of the underlying science and biology is, I get really pumped about this application of AI.
Speaker 1:Yeah, this is incredibly exciting. I mean, just on this podcast we've talked about AI advancements with material science, with weather prediction on one episode and now with drug discovery. Do you see a future, a near-term future, where we do away with traditional research methods and kind of only rely on AI models for this kind of thing?
Speaker 2:I think there will always be some work to be done on the bench, so to speak, meaning in the lab, with molecules, being able to actually test things out, then leading to various types of trials. But we might be able to eliminate certain steps in the process. For example, if we get good enough at predicting the way these compounds will work in humans, we might be able to bypass animal studies entirely.
Speaker 2:For example, we know an awful lot about how a wide array of compounds interact with rats and mice, and rats and mice obviously share some characteristics with us, but they're also extraordinarily different from humans. There are all sorts of ethics questions around animal testing as well, and it's slow and it's expensive. So I think you might be able to eliminate some of those steps, because you have higher probabilities going in with better and better AI. The other thing, though, that I think is important: probably for a while, and how long that "while" is will be a question people answer differently, you'll need some degree of validation in the lab before you go to any kind of trials.
Speaker 2:It's interesting because here in New Orleans there's a company that I happen to be an investor in.
Speaker 2:That's an early-stage, very innovative life sciences drug research company, and they work on simulating, with actual live human tissue, how different drug compounds and molecules will interact with human nerve tissue. Their focus is on trying to find cures for neurodegenerative diseases like Alzheimer's, Parkinson's, ALS as well. And their innovation was this idea called nerve-on-a-chip, which is the concept of taking live human nerve tissue, putting it on a semiconductor, and being able to do very controlled experiments at scale, with feedback loops that are much faster and much less expensive, for a much wider array of potential candidate molecules, to see how things interact, both in terms of how the tissue reacts and also with electrical stimuli and all these other cool things they can do that you can't really replicate in humans at all. It can give you essentially a prediction of how things are going to act in the actual, much more complex biological system of a living, breathing person.
Speaker 2:And so I think that kind of technology will still be really valuable, at least for the next decade is my guess, but the AI could actually be a precursor and a co-pilot along with new forms of in-lab testing like that. There's just so much happening that's exciting. We talk about exponentials a lot on this podcast, in our book, and in all of our other content, and there are exponentials happening within biology itself, with knowledge of things like gene editing and the technology I just mentioned, the cells-on-a-chip kind of concept, which is a new idea and an exponential growth technology in its own right. And they feed off of each other, because AI is an exponential, and these other exponentials are also growing at an exponential pace, and they're feeding off each other. So it's just an exciting time for scientific discovery in general.
Speaker 1:Absolutely. New drug discovery is certainly exciting. And then there's also that last part of what I mentioned, personalized medicine. I'm excited to see what advancements we see there, not only with new drugs, but new drugs perhaps tailored to our own genetic profiles. That just seems like next-level medicine.
Speaker 2:Yeah, and that's the concept of, hey, Mallory, you have all these various unique attributes and things you're trying to improve or treat or whatever, and maybe there's a 3D printer in your home that just pops out a pill for you that morning, not just based on your genetic profile, not just based on whatever your goals or issues are, but also based on how your body is doing at that moment in time. How is your blood sugar, how was your sleep last night? And it gives you that personalized, ultimate capsule for you to take that morning, and you just feel great all day. So I mean, we're not that far off from that type of sci-fi.
Speaker 1:Wow, that's pretty crazy to think about. I feel like, Amith, you're really good at setting the stage for what's possible next with these things, which is really helpful.
Speaker 2:I'm good at making stuff up and then trying to make it happen.
Speaker 1:That's pretty much what I'm saying. A lot of what we've talked about, I feel like, is coming true.
Speaker 2:We'll see about the 3D-printed drugs, maybe. You know, Mallory, before we move on to other things, I just want to say one thing about AlphaFold. The people behind AlphaFold, one person from Google's team, another from a major investor, I forget which one, were talking recently about open-sourcing the AlphaFold 2 database, and the potential downside of open-sourcing not even the model, but the database itself, and the possibility of open-sourcing technologies like AlphaFold 3. It kind of goes back to the same general conversation we've had about the potential downside risk of open source in general, or maybe even just the potential downside risk of AI in general. Like, what are the potential malicious use cases? What could a bad actor do with these technologies? That's where I tend to focus, as opposed to the idea of the AI itself going bad. Will AlphaFold become some kind of bad actor itself? That's a really unlikely scenario. Nothing's impossible, but it's unlikely. What is likely is, let's say, a terrorist organization takes AlphaFold 3 and says, let's design a novel pathogen that can kill people at scale better than ever before. Those kinds of negative use cases could exist, and so we have to think about that. But we also have to recognize that whatever's happening at the frontier, and AlphaFold 3 seems to be the best in this particular subdomain, the people right behind that are probably not far off. What was cutting edge in AlphaFold 2? Probably anyone and everyone around the world, even in very small labs, can do that now. And that's not too far behind AlphaFold 3.
Speaker 2:So we have to keep raising the bar, because the good AI has got to stay ahead of potential bad use cases of AI. So there is downside risk to all of this stuff. People could do bad things with any of these tools because they're powerful, and my central point of view is simply that we have to keep advancing it, because everyone has this stuff now. So it's just a theoretical argument to say, well, what if no one had it? Well, everyone has it. So we have to develop stuff that's capable of defending against the things that could go wrong. Hopefully the world doesn't come to that. But hey, that's part of the way I frame it, because I think we have to keep advancing with good AI to stay ahead of bad use cases.
Speaker 1:That's really helpful, because I wanted to ask you about potential downsides. I know we talk about new discoveries all the time, or new tools, new companies, Suno AI is one, the text-to-music model, and then Sora, and there's kind of always a great side and a terrible side. With this one particularly, I felt like, huh, could there be a downside to something so powerful? But I guess you're right. If it gets into the wrong hands, which it will, I mean, we're going to keep pushing that frontier line, so it will get into the hands of bad actors. I guess that's always a downside with this technology.
Speaker 2:For sure. I mean, any tool of sufficient power is fundamentally dual use, meaning exactly what we're talking about: it can be used for good, it can be used for bad. That's the fundamental idea of gunpowder, same thing, right? You can use it for construction, you can use it for killing people. There are a lot of different dual-use technologies out there, and AI is a perfect example of this. There's history preceding this conversation that I think we can look to, both for insight on how to do things well and for where to avoid drawbacks. But we're also entering new territory simply because of the speed at which this stuff is evolving. If you think about it, GPT-4o, which we'll talk about soon, basically crushes the stuff that was state-of-the-art six months ago. So how do you keep up with something that's evolving at that pace? That's the open question all of us are struggling with.
Speaker 1:Yep. On a funnier side note, I know when a lot of people talk about potential AI destruction in the future, they reference the movie 2001: A Space Odyssey, which I had never seen. So I actually started watching it this past week, because I said, you know, this is probably essential if I'm going to be talking about AI this much. And they got a lot of stuff right. That movie is from 1968. So I'm just throwing that out there. It's a good watch. All right.
Speaker 1:Topic two: GPT-4o, where the "o" stands for "omni". It's the latest and most advanced multimodal large language model developed by OpenAI, released on May 13th of this year. GPT-4o is an evolution from its predecessors, including GPT-4 and GPT-4 Turbo, and integrates and enhances capabilities across multiple modalities: text, audio, and image. So, diving in a little bit more to that multimodal integration, always a tongue twister: GPT-4o excels in text generation, comprehension, and manipulation. With audio, it can ingest and generate audio files, providing feedback on tone and speed, and even singing on demand. With images, it has advanced image generation and understanding capabilities, including one-shot, reference-based image generation and accurate text depictions.
Speaker 1:In terms of performance enhancements, GPT-4o generates text twice as fast and is 50% cheaper than GPT-4 Turbo. It supports a 128,000-token context window, allowing it to handle extensive and complex inputs. It also shows improved performance in non-English languages, making it more versatile globally. GPT-4o can process and generate outputs in real time, making interactions more natural and intuitive. It can handle interruptions and respond with human-like voice modulation. It's available via OpenAI's API, and it supports text and vision models, with plans to include audio and video capabilities for trusted partners. Something really interesting is that you can test out GPT-4o for free; you don't even have to sign up for a paid account, but you do have limited access. Amith, have you tested out GPT-4o? What do you think?
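For listeners curious about the API access Mallory mentions, here's a minimal sketch of what a GPT-4o call looks like through OpenAI's Python SDK. The model identifier `gpt-4o` is OpenAI's public one; the helper function and prompt are just for illustration, and the real network call assumes an `OPENAI_API_KEY` in your environment.

```python
# Minimal sketch of an OpenAI chat completion request targeting GPT-4o.
# The helper only builds the request payload, so it can be inspected
# without an API key; the actual network call is shown commented out.

def build_chat_request(prompt: str, model: str = "gpt-4o") -> dict:
    """Assemble keyword arguments for client.chat.completions.create()."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a concise assistant."},
            {"role": "user", "content": prompt},
        ],
    }

request = build_chat_request("Explain protein folding in two sentences.")

# With the `openai` package installed and OPENAI_API_KEY set:
# from openai import OpenAI
# client = OpenAI()
# response = client.chat.completions.create(**request)
# print(response.choices[0].message.content)
```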
Speaker 2:Yeah, I've been working with it a bunch since the release date and I have some very positive impressions. A couple quick things I want to point out in addition to your excellent summary. Number one: if you use the free GPT-4o, remember that with any product you use that is free, you are the product. It's not a free product; you're the product. There's always a downside risk to that. In the case of GPT-4o, yes, you can get access to it for free, but if you choose that, by default you're opting in to allowing OpenAI to use your conversations for training future models, which you generally do not want. Now, you can turn that off, but a very small number of people do. It's better to just pay the $20, where the default is the inverse and your data is protected and private. So that's just a little bit of a side note. I think free is great. I love the idea of open access. A lot of us thinking about $20 a month are like, that's trivial, we don't care, but that's not true around the world. So the fact that they are making it free for everyone on the planet is awesome, and I applaud that. Just be aware of it and be thoughtful about turning off that setting that allows training. The other quick thing is that it's half the cost for API access compared to GPT-4 Turbo. So it's twice as fast and half the cost, and, oh by the way, it's smarter and more capable. So it's a pretty big deal. We've talked a lot on this pod, Mallory, about how AI is on roughly a six-month price-performance doubling. GPT-4 Turbo was released in the fall of last year as the update to GPT-4, which was released in March, and GPT-4 Turbo was about half the cost and twice the power of GPT-4's original release, as is Omni six months thereafter. So just with OpenAI as one quick heuristic and test case against that idea of doubling every six months, it seems to be holding. That by itself is both stunning and crazy and exciting.
Speaker 2:Testing GPT-4o, a couple different things. First of all, many of you have heard us talk in the past about how one of the projects we have going on at Blue Cypress is this AI agent that we call Skip. Skip is an AI agent that lives within the Member Junction data platform, and what Skip is able to do is have conversations with you as a business consultant, and then put on a different hat and say, okay, well, I can also be your data scientist, I can be your coding partner, I can be your report writer and your analyst to look at the output of the report. So Skip can do all these great things. Skip has traditionally been powered by GPT-4. Actually, Skip is capable of using Claude 3 Opus as well as Gemini 1.5 Pro, but most people are using GPT-4. So we updated to GPT-4o, which just required a little bit of work on the engineering team's part, and we found immediately that performance roughly doubled and the output was better. So that was exciting. And that's a very complex use case, because Skip is a very complex, multi-agent, multi-shot prompting-style architecture with very deep prompting strategies. GPT-4o performed extremely well, so that was exciting. And then, as a consumer just using ChatGPT, I found GPT-4o to pretty much live up to how it's been advertised. It's faster, it seems to be somewhat smarter, and there's more nuance in its responses, which I enjoy. It's definitely a better writer than GPT-4 Turbo was. So far, so good. I'm really excited by it.
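The "little bit of work" Amith describes, swapping Skip from GPT-4 to GPT-4o, hints at a common design choice: keep model identifiers in one configuration spot so an upgrade is a one-line change. The sketch below is a generic stand-in, not Skip's actual architecture; the model strings are the providers' public API identifiers around the time of the episode.

```python
# Generic sketch of centralizing model identifiers so an agent can be
# upgraded (e.g., GPT-4 Turbo -> GPT-4o) by editing one line, without
# touching any of its prompting logic. Illustrative only, not Skip's code.

MODEL_REGISTRY = {
    "openai": "gpt-4o",                       # was "gpt-4-turbo" pre-upgrade
    "anthropic": "claude-3-opus-20240229",
    "google": "gemini-1.5-pro",
}

def resolve_model(provider: str) -> str:
    """Map a logical provider key to a concrete model identifier."""
    try:
        return MODEL_REGISTRY[provider]
    except KeyError:
        raise ValueError(f"unknown provider: {provider!r}")

# An agent then passes resolve_model("openai") as the `model` argument of
# its completion calls; the upgrade is confined to MODEL_REGISTRY.
```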
Speaker 2:I think the feature set they demonstrated, with a lot of audio and video interaction where GPT-4o is capable of watching you through the webcam or the phone camera if you choose to turn that on, adds more context. It's also capable of looking at your screen. If you use the Mac or PC desktop app that OpenAI is making available, you can choose to share your screen. So, for example, if I'm working on something on my screen, GPT-4o can look at what I'm doing and I can ask it questions in audio, and it can say, oh no, you clicked on the wrong button, you should click on this other button. Or if I'm writing code, it can give me feedback in real time on the code I'm writing. Or it can look at the app that I'm designing, or the email that I'm writing. So it's got context awareness across everything I'm doing on my desktop, and it can see me and my surroundings. I encourage everyone who's listening to this to check out the YouTube videos from OpenAI from last week.
Speaker 2:There's one in particular that I loved the most, which featured Sal Khan, founder of Khan Academy. He's also a New Orleans native, by the way, which was really cool, just a side note. He is the founder of what I think is the world's largest free online education resource, Khan Academy. They've been playing with GPT-4 as a launch partner since last spring, when they launched this thing called Khanmigo. Khanmigo is a tutor that does amazing things, even in its original version, to help anyone learn any number of topics like math and science and language arts and so forth. They demoed a new version of Khanmigo that has this context awareness of what's happening on your desktop or tablet or video, and it's actually Sal and his son, where his son is getting tutoring from Khanmigo. So it's quite a stunning two-, three-minute video.
Speaker 2:I really encourage people to watch that, particularly in this market with associations, many of which are delivering education. Think about the layers of multimodality involved in that demo, where the AI is watching the person, looking at the screen, hearing the voice, and seeing what Sal's son is doing on the screen with a stylus, all at the same time. It is getting closer and closer to a real, live, human expert tutor sitting right next to the kid. So it's amazing. And, by the way, the other thing that happened in the last couple days is Microsoft announced they're partnering with Khan Academy to give Khanmigo away for free to the entire world. Previously it was a premium product, very expensive to run, and Microsoft is just underwriting it, so Khanmigo is available for free for anyone on the planet. That is super, super exciting, and it's all powered by GPT-4o. Wow, yeah.
Speaker 1:I did just see that announcement this week. I think you shared it on LinkedIn, Amith. Okay, so I tried it out myself, and the thing I immediately noticed is that GPT-4o is much faster. I mean, you can tell just from the speed at which it's generating text. So I would highly recommend all of you listeners test that out. And then there's something I didn't realize on the app last night. I normally don't interact with ChatGPT using audio, but I tested that out last night thinking it was GPT-4o. After talking with you before we started recording this episode, though, I think that was older functionality. Is it correct to say that with GPT-4o, instead of it transcribing our audio into text to understand it and then doing the reverse, GPT-4o can actually just understand the audio itself?
Speaker 2:That's right. So GPT-4o is a native multimodal model, so let's unpack what that means. It means that the model, from its pre-training onwards, has been given text, of course, but also audio and video and perhaps a number of other types of content to ingest as part of its self-supervised pre-training process, which means the content is just being thrown into the model as it's consuming it to construct the model's functionality. Because it's been natively trained with multimodal content, it natively understands multimodal inputs and it natively generates multimodal outputs. And that's the future of all models. We won't be saying large language model, small language model, or large multimodal model. All these things will just be large and small models. You're going to see these acronyms get abbreviated, because all models that consumers use, and most developers use, will essentially be multimodal models from the start. So what you're referring to, Mallory, is the ChatGPT app on the iPhone and, I believe, on Android as well.
Speaker 2:That app has had an audio feature for quite a few months, and I love this thing. I use it all the time. When I'm driving, when I'm walking around New Orleans, I'm talking to ChatGPT all the time. The current version that we have available is a translation layer. What happens is, when I speak, there's a different AI model, a speech-to-text model, that translates what I'm saying to text. That text is then fed to the underlying language model, which is either GPT-4, or you can actually switch to GPT-4o now. So you can use the language model GPT-4o, but it's getting a text input, not my voice. Then it generates a text response, and a separate text-to-speech model, the reverse of what you just described, basically speaks it back to me. That's the way the current consumer app works while we're waiting for the native multimodal capability to be brought to the app, and my understanding is that's weeks away.
Speaker 2:So that will mean lower latency, because you'll be able to talk directly to GPT-4o without that speech-to-text and text-to-speech part in the middle, and you're likely to get a much higher quality response. Because if you think about it, if you take this podcast and run it through an AI transcription tool, the transcription is not the same as listening to us. You lose all of the additional information that comes with audio that you don't get with just plain text. The issue is that these translational layers are lossy, meaning the information density goes down when you go from audio to text and also from text to audio. So with the model natively able to generate and ingest these other modalities, you're going to get better quality automatically. That's exciting, and I think we're a handful of weeks away from getting native access to that.
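To make the cascade concrete, here is a schematic Python sketch of the two architectures being contrasted. Every function is a hypothetical stand-in (none of these are real OpenAI API calls); the point is purely structural: the cascaded path flattens everything to plain text, so tone, emphasis, and pacing are discarded at each hop, while a native multimodal model would consume and emit audio directly.

```python
def speech_to_text(audio: bytes) -> str:
    # Stand-in for a separate transcription model: tone and emphasis are lost here.
    return "plain transcribed text"

def language_model(prompt: str) -> str:
    # Stand-in for a text-only LLM call: it only ever sees the flattened text.
    return f"reply to: {prompt}"

def text_to_speech(text: str) -> bytes:
    # Stand-in for a separate TTS model on the way back out.
    return text.encode("utf-8")

def cascaded_pipeline(audio: bytes) -> bytes:
    # Today's consumer app: three models chained, two lossy translation layers.
    return text_to_speech(language_model(speech_to_text(audio)))

def native_multimodal_model(audio: bytes) -> bytes:
    # The native approach: one model ingests audio and emits audio directly,
    # so nothing is dropped at translation boundaries. (Stubbed here.)
    raise NotImplementedError("illustrative only")
```

Notice that whatever nuance was in the input `audio`, `cascaded_pipeline` can only ever act on the transcription string.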
Speaker 1:All right, I'm not saying you know the answer to this question, but I'd like to hear your take on it. Why do you think this is GPT-4 Omni instead of GPT-4.5 or 5? Do you just see this as a slight update? It seems like there are some big updates in there, but I just want to hear your take on it.
Speaker 2:I think this is really a branding, positioning, marketing type question more than anything else. I'm not sure what their product roadmap looks like. I think this is a 4.5-ish type of release. It's not intended to be the 5 release. They're saving that for something probably much bigger.
Speaker 2:I think they're also kind of testing the market a little bit to see how competitors react to GPT-4o. My suspicion is that OpenAI has something considerably better than GPT-4o already, but its capabilities are far more frontier, meaning significantly better than what we have access to. Hopefully they're doing a lot of red teaming, meaning safety testing and all that, and that will probably be available later this year, I'd guess roughly three, six, nine months from now, on that time scale. OpenAI has tended to be ahead of the pack quite a bit. They also tend to have a bit of a reaction where, if someone else gets attention for half a second, they drop something new, like they did with Sora when Gemini 1.5 came out. They were like, oh, check this out, just to get everyone's attention back to OpenAI.
Speaker 2:So I think there's some degree of mastery in marketing there, but I also think it's a smart product management move, because even if they had GPT-5 available and ready to go right now, they might feel like they don't need to release it yet, that they can put something like this out. GPT-4o is, you know, notably better now than Claude 3 Opus. It's notably better than Gemini 1.5 Pro, and I'm talking both in terms of performance and the generalized benchmarks we're looking at. Is it dramatically different? Should you use GPT-4o instead of Claude or Gemini? Not necessarily; there's obviously some nuance to that. But they've been at the top of the leaderboards and they are again now, so I think they're playing a little bit of a game there, and if there's truly a remarkable advance from someone else, they'll probably drop something bigger pretty quickly.
Speaker 2:I'm giving them a lot of credit there, though. This could be the best they've got, and they might be a year or two away from GPT-5 and just posturing that way. So I really have no idea, other than that I think they have something better than this, based on other rumblings we've heard, and it would actually make sense, based on the time they've had since GPT-4, to have something considerably better than it.
Speaker 2:Something with better reasoning capability, that is. We've talked a lot on this pod about multi-step complex planning and reasoning, going beyond the next-token prediction that these autoregressive LLMs are focused on, having true reasoning capability baked into the model. It's essentially like a lot of the agentic types of behavior we've talked about on this pod, where you have an agent that's capable of taking multiple complex steps, breaking them down, executing those steps on your behalf, taking action, making decisions. These models can't do that yet. You can build systems around these models that, in fact, are capable of pretty advanced planning and reasoning, but the models themselves do not do that. So I think that's where GPT-5 probably will end up, and I suspect they're well on their way towards that.
Speaker 1:Yep, especially given how we recently talked about Sam Altman saying that GPT-4 is mildly embarrassing, or something like that. It seems to hint that they've got some other stuff in the pipeline. That's helpful.
Speaker 2:Yeah, I think it's interesting, because sometimes it's hard to retain broader perspective when you're super deep into something. We talk about Skip all the time. We're like, oh yeah, the current version 1 of Skip does all these amazing things, but we think it's not that great, it's just okay, and it's going to get really powerful soon. But in reality, when someone who's never seen an agent like Skip has a conversation with it and it generates a very sophisticated report that gives them all these business insights, which might have taken them six to eight weeks and thousands of dollars before, if they ever got it at all, they're blown away. So I think the average business person, not just the average person on the planet but the average sophisticated business person, is still blown away by GPT-3.5. So there's some perspective I think leaders in Silicon Valley like Sam need to take into account when they think about how that messaging works. And that even affects people like us who are deep in this particular vertical. It's really hard to maintain that perspective.
Speaker 1:I saw it at the Innovation Hub. Obviously, you and I meet every week talking about this stuff, and for me, a tool like Perplexity, which I've spoken about on this podcast and use all the time, is like, oh, of course, Perplexity, people use it, nothing special. And then it came up during the panel and I realized a lot of people attending hadn't heard of it and didn't know what it could do. So it's definitely good to always remind yourself to step back and look at the greater context.
Speaker 2:Totally. And a quick tip related to that. Before this pod, we were talking about how I use ChatGPT Voice in the context of drafting content. Some of you are aware that we're in the process of updating Ascend, which is our book on AI for associations that so many people have read and provided great feedback on. We released that book in the late spring, early summer of 2023, which is eons ago in AI time scales. So we're doing a complete update of that book, lots of new content, refreshing all the existing content, and we plan to have that out later this summer. In doing that book, we have a number of new topics we want to cover that weren't in the original book, and a good example of that is AI agents, which I was just touching on a little bit.
Speaker 2:So last night I decided to use GPT-4o and the audio mode in my ChatGPT app on my iPhone, and I was walking around New Orleans just talking to myself, as I do. Really, I was talking to GPT-4o, and I had this great conversation. The way I approached it is, I talked to the AI and asked it for its input. I did give it some context on the people that I'm writing for, and I gave it context on my point of view on certain topics like AI agents, and I kind of went back and forth and said, what do you think? And I got feedback, and I said, well, I like this, but I don't like that. Then I had it start drafting some content with me, and I gave it feedback, and we kept going back and forth, and over the course of just over an hour I ended up with about, I think, 7,000 or 8,000 words. They weren't perfect, but then I got back to my house, put that content into a Word document, edited a bunch, and in less than a couple of hours of work I have a new chapter. Now, is it a final draft? Of course not, but it's a really good first draft, and it's very much a co-creation process.
Speaker 2:But if you go to the AI and just say, hey, give me a chapter on AI agents for an association audience, you're going to pretty much get garbage. So you have to work at it a little bit harder. But you also have to remember these things are not like traditional software, where you have to go through a set of menus or click a certain set of buttons. You can just talk out loud and think through it, and the AI will help you figure out where you want to go with it. That's the power of these things that a lot of people haven't yet unlocked, because they're thinking in kind of a linear way about how to use them, based on the prior biases we all have from working with deterministic software that only operates in one way. Here, you just kind of throw your creativity at it and see what happens.
Speaker 1:Further expanding on that in terms of potential use cases we'll see popping up, I want to talk a little bit about multi-party conversations. Amith, you also mentioned this before we started recording, but the idea is that we could have a conversation, you and me, and perhaps a podcasting expert and a marketing expert and an AI expert could all have this conversation together. Oh, and I should mention, all those experts would be AIs besides Amith and myself. Can you talk a little bit about that?
Speaker 2:Yeah, I mean, if you think about a multi-party conversation, whether it's in person, or on a video call like this, or perhaps online with Microsoft Teams or Slack or something like that that's more asynchronous, you have these things happening all the time with people, where we're having conversations back and forth on a variety of topics and different people chime in with different points of view, different opinions. There's no reason why AI can't fully participate in that, and there actually are use cases of it. For example, another one of our projects, Betty, which many of you have heard of, is capable of being directly integrated with your Microsoft Teams, your Slack, and online communities like Higher Logic and Circle and others, and when Betty is a party in a multi-party conversation, Betty will chime in whenever it's appropriate, just like Betty, as a real person, might do. So imagine a world where there are multiple different AIs that you invite to a conversation. Say I want to develop a new conference for my association, and I'm thinking, hey, this new conference is going to be for a particular subsegment of my market. Maybe it's young professionals in a certain region or people with a certain interest.
Speaker 2:I bring in an AI expert that's an expert on demographics and the generation I'm targeting. I bring in another AI expert that is a domain expert in the subject matter I'm focused on. Maybe I bring in an event planning AI expert that's really good at the region I'm targeting. What does that mean? It's basically the same type of AI model, but I've pre-prompted each one to tell it what its role is. I've said, hey, AI agent one, you're the expert in this, and I give it a detailed, almost like a resume of, here's who you are.
Speaker 2:And I tell the next AI a detailed resume of who they are and what their point of view should be, and then I have them talk to each other along with talking to us, and we have an interesting conversation. That sounds kind of sci-fi, but you can do it right now. In fact, there are some products coming out that do this. There's a product we're actually about to start a trial of here at Blue Cypress called Glue, which I'm really excited to get going with. I have no idea how it works specifically or what its functionality is, but we're excited to test it out. I also think all the mainstream communication tools are going to embrace this concept. You're going to see this inside Microsoft products, Copilots are going to pop in, and you'll see it in Slack and you'll see it everywhere else.
Speaker 2:But multi-party conversations are just part of how we collaborate as a species. That's what we've done since the beginning of time, in tribes and with tools, and now we're just doing it with AI. So it's something we'll have to get used to, but I think we're going to see more and more of it. I mean, it wouldn't surprise me at all if, within 12 months, one of the episodes of the Sidecar Sync pod has an extra participant, a live interaction with an AI or multiple AIs that join the podcast and have a chat with us. That's not far off. We could probably do it right now, actually, if we did a little bit of work from a software engineering perspective, but very soon you'll be able to do that with consumer-grade tools and you'll get some interesting results.
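As a sketch of the pattern described above (the same underlying model behind several "experts," each pre-prompted with a different role, or "resume"), here is some illustrative Python. `call_model` is a hypothetical stand-in for whatever chat-completion API you would actually use; the structure, a role-defining system prompt per expert plus a shared conversation log, is the real idea:

```python
from dataclasses import dataclass

def call_model(system_prompt: str, conversation: list[str]) -> str:
    # Hypothetical stand-in: a real implementation would send the system
    # prompt and conversation history to an LLM API and return its reply.
    role = system_prompt.split(",")[0]
    return f"({role}) weighing in on: {conversation[-1]}"

@dataclass
class ExpertAgent:
    name: str
    resume: str  # the detailed role description given as a system prompt

    def chime_in(self, conversation: list[str]) -> str:
        system_prompt = f"You are {self.name}, {self.resume}"
        return call_model(system_prompt, conversation)

# Same model type behind each expert; only the pre-prompt differs.
experts = [
    ExpertAgent("a demographics expert", "specializing in young professionals."),
    ExpertAgent("an event planner", "with deep knowledge of the target region."),
]

conversation = ["We want to launch a conference for young professionals."]
for expert in experts:
    conversation.append(expert.chime_in(conversation))

for turn in conversation:
    print(turn)
```

Swapping the stub for a real API call and letting humans append turns to the same list is essentially the multi-party setup described in the episode.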
Speaker 1:Yeah, stay tuned for that one. I guess you're right, all the pieces are there, more or less. We just need to see them come together. All right, that will definitely be an interesting future episode of the Sidecar Sync. Topic three today: Microsoft and LinkedIn's Work Trend Index.
Speaker 1:Microsoft and LinkedIn released the 2024 Work Trend Index on the state of AI at work, which provides an overview of how AI is transforming the workplace and the broader labor market. The report is based on a survey of 31,000 people across 31 countries, labor and hiring trends from LinkedIn, analysis of trillions of Microsoft 365 productivity signals, and research with Fortune 500 customers. Here are some key points from that report. Three in four knowledge workers, or 75%, now use AI at work. AI is credited with saving time, boosting creativity, and allowing employees to focus on their most important tasks.
Speaker 1:In terms of leadership perspectives, while 79% of leaders agree that AI adoption is critical to remain competitive, 59% are concerned about quantifying the productivity gains from AI, and 60% worry that their company lacks a clear vision and plan for AI implementation. There's been a significant increase in LinkedIn members adding AI skills to their profiles, with a 142x increase in skills like Copilot and ChatGPT. AI mentions in LinkedIn job posts lead to a 17% increase in application growth. Organizations that provide AI tools and training are more likely to attract top talent, and professionals who enhance their AI skills will have a competitive edge. Ryan Roslansky, CEO of LinkedIn, emphasizes the need for new strategies to adapt to AI's impact on work. He suggests that leaders who focus on agility and internal skill building will create more efficient, engaged, and equitable teams. Amith, I'm wondering, do you have any gut reactions to this report? Is any of this surprising to you, or does this feel pretty spot on?
Speaker 2:It kind of makes sense. I mean, I think the 142x increase in people posting ChatGPT on their resume or on LinkedIn totally makes sense. I think employers are looking for that. You know, you're looking at people that are AI natives.
Speaker 2:Right, we talked about digital natives, social natives, PC natives in the past, people who kind of grew up with the technology or just are accustomed to using it. It's in their workflow. It's a new skill set and a different way of thinking, and it's harder and different in a lot of ways than learning how to use a new application on a computer, because it requires more creativity. There is no manual for ChatGPT, like we talked about before; there is no manual for any of these tools. You just have to figure out how to use them. There are cookbooks and prompt guides and courses you can take, and those are all helpful starting points, but it's really a different way of thinking, a different way of thinking about your own processes. So I like the fact that people are coming forward and saying, hey, I've got these skills, and I'm hoping more and more employers are really looking for that as a key indicator of not just what someone's been doing but their willingness to adapt: how curious are they, how insightful are they, how willing are they to learn new things? I think that's the key thing.
Speaker 1:What's particularly interesting to me is, on the applicant side, seeing that boost of people putting AI skills on their profiles, but then also, on the business side, job postings that mention AI are seeing application growth as well. Now, you've mentioned on the podcast before that you wouldn't necessarily hire a chief AI officer or an AI marketer, for example, because that might insinuate that AI is only the responsibility of that person. Do you still feel that way?
Speaker 2:I think it depends on the organization. It could be a critical role for a Fortune 500 company, or maybe a very large association, to have someone whose responsibility is to think 100% of the time about just this topic and how it stitches across the whole enterprise. My point earlier, when I said that, is simply that I don't want to delegate AI to one person, which essentially takes the pressure off of everyone else. That's what I don't like. Think about it like technology generally: a chief technology or chief information officer at an association often has been the go-to for anything even moderately technical, and a lot of other executives have been like, no, hands off, we're not going to touch it or worry about it, that's the CTO's thing to worry about. That's a mistake, and it's one of the reasons associations are in many cases so far behind on tech: the broader leadership team is not very advanced in terms of their tech understanding.
Speaker 2:I'm not saying go get into the code. I'm just saying understand what these systems do and how they work. You make a lot of poor decisions when you're not well informed, and with AI I worry about the same thing. If you have a chief AI officer, and the events person, the meetings person, the membership person, and so on are just like, okay, yeah, we're good now, we don't have to worry about this, the organization is not going to do well. And, frankly, those people aren't going to do well in their careers, because if you're an events marketing manager, or whatever your title is, and you don't know how to use this stuff, you're going to have a problem in a handful of years.
Speaker 1:So three in four knowledge workers are using AI according to this report, which to me sounds like a lot, but I'm not super surprised. As a leader, as someone who has hired many, many people at this point, I'm sure, Amith, would you be concerned about hiring someone who isn't familiar with AI?
Speaker 2:Well, let me tell you what I look for when I hire people, and I don't think this changes. I look for people who are curious, people who like to learn stuff, and that's demonstrated by their behavior, right, not just by what they say. What do you like learning? Tell me about it. What's your favorite recent book? Or, oh, you're an audiobook person? Tell me about that. Why did you like it? What was interesting to you?
Speaker 2:You can tell pretty quickly when you're talking to a learner versus someone who's kind of stagnant, right, and it's very hard to change that. It's hard to turn around someone who turned off their learning capability at the end of college or whatever and really hasn't advanced much, mainly because they just don't want to. That's a different characteristic than someone who's really interested in learning new stuff. So I look for that, because if you have someone with a strong learner mindset, you can teach them almost anything. Now, I'm presupposing that the person has reasonable intelligence. I'm not looking for geniuses in every role; of course it's wonderful to have someone ridiculously smart, but I want someone who's got good smarts and is a learner. And of course I'm looking for a work ethic, someone who's really hungry, who's going to push themselves. We work in startups, so that doesn't necessarily mean a 100-hours-a-week person, but someone who's going to push themselves hard. The reason I point to that, to your question about AI, is I don't think that changes. I think we need people who are curious, who are learners, who are pushing themselves hard, and that's true in every context. With AI completely changing the game in terms of what we as humans do versus what the computers do, we've got to really be on that. We've got to be learning stuff constantly. We've got to be consuming podcasts and doing online courses and talking to people about the way they're using stuff and experimenting. So to me, those are the important qualities.
Speaker 2:I think the three-in-four knowledge workers using AI stat is both exciting and also a bit of a smokescreen, in that the reality is that, for most of those three people out of four, the depth they've gone to with these tools is really like an inch deep, which is fine. It's great that they started, but a lot of times people are like, oh yeah, I signed up for ChatGPT and I had a conversation with it, or I had it help me write an email. And that's great. I'm not negative on that at all. I think it's wonderful.
Speaker 2:But how many people actually spend, let's say, 10 hours a week or more working with AI tools? That number will probably drop very rapidly. So what we have to do is drive adoption in our organizations. We've got to start learning, and then we have to drive experimentation to find the productivity gains, to find the increases in value for our audience, and then we have to really hit hard on those things. When we find these veins of gold, so to speak, in our mining activities, we've got to go after them and really fully exploit them to benefit our organization.
Speaker 1:So, on that note, 59% of leaders are concerned about quantifying the productivity gains from AI. So how do you balance the need to experiment right now and adopt AI with also being able to quantify the impact of those experiments?
Speaker 2:You know, with every technology disruption cycle, we don't have the ability to project what it means in economic terms. We can look back in history and ask how long it takes to have, let's say, a 10x increase in total economic output in prior cycles. So you think about the First Industrial Revolution: we went from roughly a $1 trillion global economy in the 1700s to a $10 trillion global economy in the mid-1950s. That's a roughly 250-year cycle time for a 10x increase. But back in the 1950s, if you said, hey, it took 250 years for the last 10x increase in global GDP, how long do you think the next one will take?
Speaker 2:Many people, even super optimists on technology, might have said, well, it was 250 years last time, maybe half as long, maybe 125 years. I don't think a lot of people would have guessed 50 or 60 years. And the question is, what's the timeframe for AI's impact? Because AI is, by most people's reckoning, as big a deal as information technology was, or certainly the earlier technologies I referred to. So I think the question is then, okay, if we're on this curve and power is increasing so fast, the economic gains are going to be out there. The question is, who's going to get them? And so, coming back to your question, I can't quantify the productivity gains for every organization until I go into that organization and look at what they're currently doing, look at where their market is heading and what needs to be built to serve the future needs of that audience, and then start to build little experiments to test it out.
Speaker 2:But I know that there's opportunity out there. So I think experiments are actually what educate you to then have a thesis, to say, okay, if we build this out fully from this little experiment, that's what's going to drive a 2x, 5x, 10x, 50x increase in output. Experiments are the way to educate yourself more empirically, compared to the theoretical education you get from an online course. Obviously, we have an online course in our Learning Hub. It's awesome, you should sign up for it, but it's not going to actually teach you what's going to happen in your organization. It's going to give you certain fundamentals, and then you go run these experiments in your organization and you learn from that, and you say, okay, we can extrapolate from this little bitty experiment we did that, if we do this fully, it will result in this outcome. The experiments ultimately shed that light.
Speaker 1:Okay, I'm going to put myself on blast a little bit here. Obviously, you all know that at Sidecar we use AI every day, all day, in basically every part of our business. But, Amith, if you asked me right now to quantify the exact productivity gains Sidecar's had from AI, I wouldn't be able to do that for you. Maybe if I really sat down and did some digging, I could pull those numbers together. But I'm wondering, for our listeners, is there something we should be tracking in a spreadsheet somewhere, keeping track of how long things used to take us versus how long they take us now? Is there anything we can keep in mind as we're running experiments to contextualize those gains?
Speaker 2:Well, I think, on the one hand, the question is, should you even bother? And I think there are arguments to be made for both yes and no. On the yes side, it is helpful to quantify things because it'll help you predict future gains. It's also helpful when you're reporting to your board and saying, hey, this is why we put X dollars and time into this, and this is what we got out of it. The flip side is it's such a squishy thing to try to calculate that you have to ask whether there's significant value in doing so. That would be the potential no side of it.
Speaker 2:But what I would say is this: ask yourself after you've achieved some kind of gains. You know in your gut that you get a lot of value from AI in your job. Day to day, you're using AI. So ask yourself this: what if you stopped using AI? If I said at the Blue Cypress level, hey, we're banning AI, you can't use AI in your job anymore, what would that do? How much more time would you need in your day to do the same amount of work you do now? It's probably two, three, four, five times as much. It's nearly an insurmountable obstacle.
Speaker 2:So that's one way to think about how much lift you're getting from the technology. Or different modalities of transportation might be an interesting comparison. Say I tell you, hey, Mallory, go to DC for three days to participate in this. Oh, by the way, you can't fly, you have to drive there. Well, now you have two days of transportation each way to drive roughly 1,100 miles from New Orleans to DC. I've radically impacted your productivity by taking away an advanced technology that's an order of magnitude faster than what you have available without it. So I think it's the same kind of concept. You can imagine the world without it and then, kind of in rewind, see what the productivity gain is. But it's hard to know exactly what it is when you're looking forward, if that makes sense.
Speaker 1:That's actually very helpful, so I'll think of it that way. It sounds like a total dystopia, a world where Blue Cypress says no more AI usage. I don't think we'll ever get there, but yes, it would certainly impact Sidecar's business if we could not use AI, so I feel like that is a good way to think about it.
Speaker 2:As long as I'm around, that will not be Blue Cypress policy. But even beyond that, I'll have an AI avatar of me to say oh my goodness.
Speaker 1:We'll have a multi-party conversation about it, Amith. Last question here: 60% of the leaders surveyed feel that their company lacks a clear vision and plan for AI implementation. What would you say to association leaders who might be listening to this and feel the same way?
Speaker 2:Look, I empathize with that deeply, because I lack a completely clear vision of what I'm going to do with AI for Blue Cypress, or how I'm going to help associations, because the field is moving so fast that you cannot say you have absolute clarity. Even the people at the forefront of this, like Sam Altman or Demis Hassabis or Satya Nadella, don't have completely clear vision. So, first of all, let's align on that and feel a little better about ourselves as association leaders. You're not in the dark alone. We're all kind of in the dark. Some of us have a little bit bigger aperture to see what might be coming, but not a whole ton. So that's good news, because you can catch up. The key to this is that it's a multi-step process. First, start learning. You're doing that by listening to this pod. Read some of our content, check out the Learning Hub, check out other resources in AI. There's tons of great stuff out there. Get started with basic learning, and then experiment, because the experimentation will open up that aperture further. It'll teach you what will happen on the ground in your organization. And after you've done a little bit of that, or maybe before, you can engage in this process we like to call an AI roadmap. You can do this on your own, there are people like us who can help you with it, and there are plenty of other people who can do this as well.
Speaker 2:The whole idea behind a roadmap is to build a near-term plan that says, hey, this is what we're going to go do, these are our priorities, and those priorities are informed by how we think the external environment is going to change. The first thing you want to think about is what's going to happen to your members because of AI. So, if I have an audience of attorneys as my association's membership, what's going to happen to the legal profession in the next two, three, four years? What kinds of education, products, and services will lawyers need over the next several years from my association, and what do I need to build? Essentially, what do I need to have available to serve those future needs? Because I need to start building that now, or at least start planning to build it now, or I'm not going to be able to catch up to where those members will be in two, three years.
Speaker 2:That's one piece. The other part of it is how can I make what I currently do better? How do you go do what I just said? Well, you've got to automate a lot of what you currently do, because no one's saying stop doing what you're doing now. Don't stop your annual conference, don't stop delivering traditional CLE over the web or in person. You have to do those things. But can you make those things twice as efficient, and then repurpose staff time to think about, hey, what can we go after?
Speaker 2:So the idea of an AI roadmap very much centers around both of those macro external factors and the internal process factors. And the reason I usually suggest this as a second or maybe third step in the process is that if you're starting from ground zero and you try to attack a roadmap project, you're likely to find it overwhelming, and you're probably not going to get the best output, because you just know so little about what these tools can do and where they're going without having fought with them a little bit. So I usually recommend people start with learning, do a little bit of experimentation, and then do a roadmap. We've got templates on this. We've got a lot of great content you can download and use as a guidepost. But the basic idea is super simple: study the external environment, try to predict where it's going, figure out what you need to do to serve that market in two, three years, and then look at your internal environment for process opportunities as well.
Speaker 1:Well, that sounds like a great plan. And yes, all you listeners, if you're interested in that AI roadmap piece, feel free to use the fan mail link in your show notes to send us a message. We'll read those and respond to them. Thank you all for tuning in today, and we will see you next week.
Speaker 2:Thanks a lot. Thanks for tuning in to Sidecar Sync this week. Looking to dive deeper? Download your free copy of our new book Ascend: Unlocking the Power of AI for Associations at ascendbook.org. It's packed with insights to power your association's journey with AI. And remember, Sidecar is here with more resources, from webinars to boot camps, to help you stay ahead in the association world. We'll catch you in the next episode. Until then, keep learning, keep growing and keep disrupting.