Sidecar Sync

Apple's AI Breakthrough, Wayve's Autonomous Driving Innovation, and Predicting the Weather with Microsoft's Aurora | 34

β€’ Amith Nagarajan and Mallory Mejias β€’ Episode 34

Send us a text

In this episode of the Sidecar Sync, Amith and Mallory discuss the exciting milestone of approaching 50 episodes, the development of next-generation AI systems, Apple's groundbreaking AI software suite, and the intriguing partnership between Apple and OpenAI. The conversation also covers Wayve AI's revolutionary vision-language action model for autonomous driving and Microsoft's Aurora, a cutting-edge foundation model for atmospheric predictions. Tune in for a thought-provoking discussion on how emerging AI technologies are shaping our world and what lies ahead.

Chapters

00:00 Introduction
00:40 Milestone Celebration and Future Plans
03:54 Apple Intelligence and OpenAI Partnership
11:15 Developing Next-Generation Multi-Agent Systems
18:05 AI in Autonomous Vehicles: Wayve's Lingo 2
26:00 Microsoft's Aurora: Weather and Air Pollution Predictions
34:30 The Importance of Small and Specialized AI Models
43:00 Real-World AI Applications and Impact on Associations

This episode is brought to you by Sidecar's AI Learning Hub πŸ”— https://sidecarglobal.com/ai-learning-hub. The AI Learning Hub blends self-paced learning with live expert interaction. It's designed for the busy association or nonprofit professional.

πŸš€ Follow Sidecar on LinkedIn
https://linkedin.com/sidecar-global

Please like & subscribe!
πŸ‡½ https://twitter.com/sidecarglobal
🌐 https://sidecarglobal.com

More about Your Hosts:

Amith Nagarajan is the Chairman of Blue Cypress πŸ”— https://BlueCypress.io, a family of purpose-driven companies and proud practitioners of Conscious Capitalism. The Blue Cypress companies focus on helping associations, non-profits, and other purpose-driven organizations achieve long-term success. Amith is also an active early-stage investor in B2B SaaS companies. He’s had the good fortune of nearly three decades of success as an entrepreneur and enjoys helping others in their journey.

πŸ“£ Follow Amith on LinkedIn:
https://linkedin.com/amithnagarajan

Mallory Mejias is the Manager at Sidecar, and she's passionate about creating op

πŸš€ Follow Sidecar on LinkedIn
https://linkedin.com/sidecar-global

πŸ‘ Please Like & Subscribe!
https://twitter.com/sidecarglobal
https://www.youtube.com/@SidecarSync
https://sidecarglobal.com

More about Your Hosts:

Amith Nagarajan is the Chairman of Blue Cypress πŸ”— https://BlueCypress.io, a family of purpose-driven companies and proud practitioners of Conscious Capitalism. The Blue Cypress companies focus on helping associations, non-profits, and other purpose-driven organizations achieve long-term success. Amith is also an active early-stage investor in B2B SaaS companies. He’s had the good fortune of nearly three decades of success as an entrepreneur and enjoys helping others in their journey.

πŸ“£ Follow Amith on LinkedIn:
https://linkedin.com/amithnagarajan

Mallory Mejias is the Manager at Sidecar, and she's passionate about creating opportunities for association professionals to learn, grow, and better serve their members using artificial intelligence. She enjoys blending creativity and innovation to produce fresh, meaningful content for the association space.

πŸ“£ Follow Mallory on Linkedin:
https://linkedin.com/mallorymejias

Speaker 1:

The realm of science fiction is quickly becoming science fact, and AI is the conduit through which science fiction becomes science fact.

Speaker 2:

Welcome to Sidecar Sync, your weekly dose of innovation. If you're looking for the latest news, insights and developments in the association world, especially those driven by artificial intelligence, you're in the right place. We cut through the noise to bring you the most relevant updates, with a keen focus on how AI and other emerging technologies are shaping the future. No fluff, just facts and informed discussions. I'm Amith Nagarajan, Chairman of Blue Cypress, and I'm your host.

Speaker 1:

Greetings and welcome back to the Sidecar Sync everyone. We're excited to be back for another fresh episode with exciting and interesting news in the world of artificial intelligence and associations. My name is Amith Nagarajan. I'm one of your hosts.

Speaker 3:

And my name is Mallory Mejias. I'm one of your co-hosts and I run Sidecar.

Speaker 1:

It is great to be back. We've got a whole bunch of exciting stuff to cover. Things never slow down in the world of emerging technology, particularly AI and how it applies to your world as an association or nonprofit leader. But before we get going with this week's episode, let's take a moment to hear a quick word from our sponsors.

Speaker 3:

Today's sponsor is Sidecar's AI Learning Hub. The AI Learning Hub is your go-to place to sharpen your AI skills and ensure you're keeping up with the latest in the AI space. When you purchase access to the AI Learning Hub, you get a library of on-demand AI lessons that are regularly updated to reflect the latest in the AI space. You also get access to live weekly office hours with AI experts and, finally, you get to join a community of fellow AI enthusiasts who are just as excited about learning about this emerging technology as you are. You can purchase 12-month access to the AI Learning Hub for $399, and if you want more information on that, you can go to sidecarglobal.com/hub. Amith, episode 34, I think. How are you feeling about that?

Speaker 1:

It's pretty cool. 34 episodes is a good chunk. It's been a lot of fun and I think, from the feedback we're getting, it seems like people are finding the podcast pretty helpful with what they're trying to do, so I'm excited about it. How about you?

Speaker 3:

I'm excited, and I'm hopeful to get to the big milestone. I feel like at 50 we'll have to do something special. Maybe it'll be a giveaway or a prize of some sort, but it's been really exciting to see the podcast grow. And you know, sometimes we're talking to each other and I do forget that there are other people listening. But last week I had a few meetings and, um, several people were like, oh, I heard on the podcast you moved to Atlanta. And I had to remind myself, oh yeah, there are a lot of people listening to these conversations, which is really exciting.

Speaker 1:

Yeah, it is. And 50 episodes, that'll be right around the time we're up in DC for Digital Now this fall, October 27th through 30th, for those of you who don't have that memorized like we do. It's our biggest event of the year and we'll be in DC. Maybe we'll do a live pod while we're at Digital Now. It might be right around that time of 50 episodes, right? Because we started the Sidecar Sync right after Digital Now in Denver last fall. So it's about that time.

Speaker 3:

I like that idea. We've also been toying with the idea of doing a live podcast, maybe at ASAE Annual, or, I don't know, something where we interview attendees and kind of make a compilation episode. So everyone stay tuned for that.

Speaker 1:

Yeah, it's going to be fun. Well, this week I'm up in Utah and enjoying some cooler weather and working on the next version of the Ascend book and a whole bunch of software initiatives. We've got a group of people up here in Utah doing a hackathon at my place, so we're having a lot of fun and creating some very interesting, complex, like next generation multi-agent systems that are designed to work with GPT-5 and 6 class models. So having a lot of fun with that and, yeah, it's not as hot here as it is in the South.

Speaker 3:

Of course. Can you share any sneak peeks with our listeners about what you're working on in Ascend version 2?

Speaker 1:

Well, in the second edition, what we wanted to do is, first of all, update everything, because it's been less than a year since we published the first edition, yet so much has changed in the world of AI and how it applies to associations, so we wanted to make sure the book was fresh. We are including a couple chapters that are being guest written by association leaders about their actual applications of AI in their associations, so really excited about that. And we are also introducing several new chapters that are detailed, really deep dives into the use cases. So there's a deep dive on events and AI, and there's a deep dive on education. There's a lot of information about common data platform type strategies. So it's a refreshed book.

Speaker 1:

It's significantly bigger. It's going to be probably on the order of around 250, 270 pages instead of the last one being, I think, just in the upper 100s. Maybe it was 200 pages. I've got it right above me or in my background, but I don't remember exactly how big it is, but it's a lot longer book, a lot more details. There's also a lot of practical exercises at the end of each chapter that we've added, suggesting different kinds of brainstorming and other activities that you can do as a team in your association. So definitely keep on the lookout for that. We plan to drop the book on Amazon just before ASAE's annual conference coming up in early August.

Speaker 3:

Very exciting. Everyone be on the lookout for that. Well, in today's episode, we've got some interesting topics lined up. First and foremost, we're talking about Apple Intelligence, which was kind of the hottest news item, I would say, within the last week, and Apple's partnership with OpenAI. Then we'll be talking about AI and autonomous vehicles, and finally, we'll wrap up with a discussion around Aurora, which is Microsoft's foundation model of the atmosphere. So, first and foremost, Apple Intelligence is a new AI software suite introduced by Apple, of course, at the 2024 Worldwide Developers Conference.

Speaker 3:

Apple Intelligence is designed to enhance user experience by leveraging generative models and personal context, while maintaining a strong focus on privacy. Apple Intelligence includes a 3 billion parameter on-device language model and a larger server-based model running on Apple Silicon servers. The suite includes models for writing and refining text, summarizing notifications, creating images and taking in-app actions. There are also models for coding in Xcode and visual expression in the Messages app. So, to dive in a little bit more into the writing tools: enhanced language capabilities help users write more effectively, summarize content and manage notifications. The tools can change the tone of text, proofread, and create "too long; didn't read" summaries. In terms of visual expression, features like Genmoji and Image Wand will allow users to create personalized images and transform sketches into polished visuals. These tools leverage personal context to make the experience more engaging.

Speaker 3:

Now, Apple Intelligence processes data on-device whenever possible, using Private Cloud Compute for more complex tasks, without storing or accessing personal data. And the news item that's gotten a lot of press is Apple's partnership with OpenAI, which brings ChatGPT into Siri for tasks like generating custom bedtime stories, suggesting room decorations and composing emails. This integration aims to make Siri more versatile and intelligent by leveraging OpenAI's advanced language model GPT-4o. ChatGPT will also be embedded in Apple's system-wide writing tools, allowing users to generate content, rewrite text and adjust the tone of writing. This feature will be available across Apple applications, including Mail and Notes. Amith, this one was particularly interesting for me because in all of these podcast episodes we have not really talked a ton about Apple, and we've talked about a lot of the major tech giants. Do you feel like this is overdue, and what are your initial thoughts?

Speaker 1:

Well, I mean, this is pretty much par for the course for Apple. We have to remember that they tend not to be the first to the party with new technology. They tend to be a group that likes to really refine and perfect something to a large extent before they release it, and that's just built into the way they run the business. So it is overdue from a consumer's perspective. If you're AI forward, if you're listening to this podcast, if you're part of this group of people that are actively involved in artificial intelligence conversations, it seems overdue. For the rest of the world, which is most people, it probably seems like a novel thing. It's really exciting. It's brand new. It's capabilities they don't have to think about. It's just on device.

Speaker 1:

I think the idea of Siri being upgraded to be pretty useful as a conversational system, even if it requires ChatGPT for some things, is also interesting, because more people will use it. The broader the adoption, the more AI grows, and also the more expectations people will have, because on their phones and in interactions with companies like Apple, they have this capability. Even if it's not the most contemporary capability, it'll change their expectations of you and your brand and your business and how people expect you to interact with them. So I think it's definitely going to be a big thing. I don't think it's a technology announcement. I think it's more of a marketing and distribution announcement.

Speaker 3:

Can you clarify what you mean on that?

Speaker 1:

Well, there's nothing about the technology that they're releasing that is particularly exciting. From what I understand of the on-device model, its capabilities are no better than a lot of the open source models that we've talked about that can be quantized and run on-device, meaning they can be essentially compressed to be made even smaller and faster. Their capabilities are very limited, but that's pretty much all you need on the go in a lot of scenarios, which is why you need one of these small models. I've talked a lot on this pod, and in writing that I've done, about how important small language models are and how much improvement there will be in small language models, and this certainly is kind of the next step. So on-device is good in terms of both speed of execution, because it's right there, but also privacy.
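
To make the quantization idea concrete, here's a minimal sketch of running a quantized small model locally with the open source llama-cpp-python bindings. The model file path is a placeholder; you'd point it at whatever quantized GGUF model you've actually downloaded.

    # Minimal sketch: running a quantized small language model on-device.
    # Assumes llama-cpp-python is installed (pip install llama-cpp-python)
    # and a quantized GGUF file exists at the placeholder path below.
    from llama_cpp import Llama

    llm = Llama(
        model_path="./models/small-model-q4.gguf",  # 4-bit quantized weights
        n_ctx=2048,  # a modest context window keeps memory use low
    )

    response = llm(
        "Summarize in one sentence: The board meeting moved to 3pm Friday.",
        max_tokens=64,
    )
    print(response["choices"][0]["text"])

Quantizing to 4-bit trades a little accuracy for a model small and fast enough to run on a phone or laptop, which is the trade-off being described here.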

Speaker 1:

One of the things I like about Apple a lot is they do tend to be more focused on privacy than probably all the other major technology companies.

Speaker 1:

That seems to be a big thing to them, at the very core of the way they design products.

Speaker 1:

So I applaud that, and I think the idea that they have their own cloud ecosystem, securely designed to host a slightly more advanced model, is also exciting. That's probably something on the order of a Llama 3 70-billion-parameter model. I mean, it's probably not that model, but it's something on that capability level. And then, clearly, they don't have frontier capabilities, or they wouldn't have partnered with OpenAI to say, oh, and by the way, if you want something even better, ChatGPT is being woven right in. So, if you think about it, they're essentially saying: look, we don't have the best AI. If you want the best AI, we're going to make it possible for you to access it through our friends at OpenAI, but if you want really basic AI capabilities, we've got you covered. That's essentially what the announcement is. And the interesting thing is they are security and privacy conscious, yet they chose to partner with OpenAI, which is an interesting thing to maybe talk about a little bit, because it's not necessarily the guess that you'd make, thinking about their culture and their approach.

Speaker 3:

Well, if you kept up with the headlines, listeners, you probably saw Elon Musk had a lot to say about this partnership, and I quote: he said it's patently absurd that Apple isn't smart enough to make their own AI, yet is somehow capable of ensuring that OpenAI will protect your security and privacy. He said that on X. Amith, what do you think about that?

Speaker 1:

Well, I mean, first of all, it's Elon Musk, so of course it's interesting and entertaining to some extent, and concerning, usually all at the same time. He's good at packaging all the emotions into one sentence or two. I don't think he's wrong, though, in this particular case. I think it is true that, on the one hand, they're saying, hey, we've got your back: on device, you're 100% safe; with our Apple Intelligence in the cloud, on our cloud, you're super safe because we control that environment. But, oh, by the way, if you want to use the most cutting-edge capabilities, which of course people likely want, otherwise they wouldn't have introduced it, you use ChatGPT from OpenAI, and Apple has zero control over what OpenAI does. And, by the way, neither does Microsoft, right? When you think about control, there's absolutely a lot of influence coming from Microsoft, but OpenAI still, ultimately, is an independent business, and so what they do or don't do with the data, you kind of just have to take at face value and trust that their agreement is what it is. And so, do you trust OpenAI? Do you trust Sam Altman? Do you trust their business? How about Dario Amodei and the people at Anthropic? Do you trust them? They have a massive investment from Amazon, so the corporate backer is there.

Speaker 1:

I'm not suggesting you should or should not trust any of these companies. I would offer that people should evaluate these companies the way they would evaluate a cloud provider of other, non-AI services. Do you trust AWS to store your sensitive data? Do you trust Azure to host your sensitive data? Do you trust NetSuite to keep your accounting data? These are all companies with access to large chunks of your private data.

Speaker 1:

The question is, how will the AI companies be different? Will they try to use your data to advance their models? That's fundamentally the issue. People haven't been concerned about NetSuite using your accounting data to make NetSuite better somehow, although, by the way, companies like NetSuite 100% have been using that data to improve their product, through earlier types of AI and through other means. Most SaaS agreements allow the SaaS vendor to utilize de-identified and/or aggregated versions of customer data to do all sorts of things. So it's not a new issue. It's just become a bigger topic because there are questions about these companies and their lack of track records, because they're all new.

Speaker 1:

Anyway, coming back to the Musk quote, about how Apple can't control OpenAI's commitment to privacy and security: that's just a very simple fact. They can't. Are they an important customer to OpenAI? For sure. I'm sure they're paying a ton of money, and they're important financially to a degree, but that doesn't necessarily mean they have any control. So you have to be eyes open on this. Even with Apple's own AI servers, do you trust Apple with your private data or not? It's a fair question, right? They're also a very large technology company.

Speaker 3:

I'm thinking, if there is a basis to this claim, which it sounds like there is, that it's a fact Apple can't really control what OpenAI is doing with your data, do you think listeners should be concerned with using tools like ChatGPT? I mean, I know across the Blue Cypress family we all pretty much use ChatGPT, so we're, I'm assuming, willing to take on that risk. Is there a flip side to that?

Speaker 1:

Well, so my point of view in terms of Blue Cypress's use of the OpenAI tool set is that, first of all, we've trained our people to not share truly sensitive data with the tool. We don't upload customer data, we don't include what we consider super sensitive. We do pay for the premium version of it, so we have terms of use that are definitely, on paper, a lot more comforting than if you're a free user, or even a paid user as an individual. So there are some things like that that you can diligence. I'm not pro or against OpenAI, nor any of the other companies in the space. I think that OpenAI at the moment still has the best model, but not by a mile, probably by a couple of feet at this point, in terms of how much of a lead they have over Anthropic and over even some of the open source models. The question of who you trust is going to be highly dependent upon your organization and your culture, your established relationships. You want optionality. Ultimately, that's what I always come back to: you don't want to hitch your wagon to one vendor. You have to have optionality. And if you're building custom solutions, you have to have a layer in between what you're building and the underlying model, so that you have the ability to switch very easily and to use many models at the same time. You might say, hey, OpenAI is really good at really complex reasoning, so let's give them those tasks, but let's use Llama 3 and let's inference on Groq, for example. We've talked about Groq with a q, not Elon Musk's Grok with a k. That's the capability: you could mix them up.
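
As a rough illustration of that layer in between, here's a minimal Python sketch of routing tasks to different models behind one interface. The classes are stubs with invented names, not real SDK calls; in practice each complete() method would wrap a vendor's actual client library.

    # Minimal sketch of a model-abstraction layer for optionality.
    # Provider classes are hypothetical stand-ins; each complete()
    # would call a real SDK (OpenAI, Anthropic, Llama 3 on Groq, etc.).
    from abc import ABC, abstractmethod

    class ChatModel(ABC):
        @abstractmethod
        def complete(self, prompt: str) -> str: ...

    class FrontierModel(ChatModel):
        def complete(self, prompt: str) -> str:
            return f"[frontier model] {prompt[:40]}..."  # placeholder call

    class FastOpenModel(ChatModel):
        def complete(self, prompt: str) -> str:
            return f"[llama3-on-groq] {prompt[:40]}..."  # placeholder call

    def route(task_type: str) -> ChatModel:
        # Complex reasoning goes to the frontier model; everything else
        # goes to the cheaper, faster open model. Swapping vendors means
        # changing this one function, not the whole application.
        if task_type == "complex_reasoning":
            return FrontierModel()
        return FastOpenModel()

    print(route("complex_reasoning").complete("Plan our member survey analysis."))
    print(route("summarize").complete("Summarize this board memo."))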

Speaker 3:

Essentially, yeah. Do you see on-device AI as something that will catapult the world into full-blown AI adoption?

Speaker 1:

It's going to make a big difference. Think about this: when GPS became a reality on billions of devices, that enabled apps like Uber and many others that took advantage of that new capability. So when we bring AI to the device, there's all sorts of things that are going to flourish. The new round of AI-enabled computers from Microsoft, their new line, as well as pretty much all PCs soon, and also the new Macs, are going to have device-level AI there as well.

Speaker 1:

So, like in the browser, you have instantaneous assistance and support. And, just in general, what we've received over the last few years is a new general purpose technology. It sometimes takes decades for a general purpose technology to really weave its way into everything we do. Just think about the internet, and think about the web. Companies still haven't fully realized anywhere close to the potential of that, even independent of AI. So to me this is a consumer awareness thing more than anything else. From a business perspective, I look at it and say, okay, what will this do over the next 12 months, as millions and millions of people using Apple products get these capabilities on device? It's going to create value for those users, of course, but then ultimately it's going to raise the bar in terms of the expectations everyone has of all of us as providers of services and products, because people are going to expect AI-enabled interactions everywhere they go.

Speaker 3:

All right. Moving to topic two, AI and autonomous vehicles. Wayve, a pioneering UK-based company in the field of autonomous driving, has recently unveiled Lingo 2, a groundbreaking vision-language-action model that represents a big advancement in the development of trustworthy and explainable AI systems for self-driving vehicles. Lingo 2 is a closed-loop driving model that deeply integrates vision, natural language and driving actions, enabling it to generate both driving behavior and textual explanations from the same deep learning model. This approach provides unprecedented visibility into the AI model's understanding of driving scenes and its decision-making process. Let's quickly dive into some key features of Lingo 2. It is the first vision-language-action model tested on public roads, marking a milestone in the development of trustworthy autonomous driving technology. By generating natural language explanations alongside driving actions, Lingo 2 enhances the explainability and transparency of Wayve's AI driving models. The integration of natural language opens up new possibilities for human-machine interaction, enabling drivers to customize and control the autonomous driving experience through language-based interfaces. By aligning linguistic explanations with decision-making, Lingo 2 offers a new level of AI explainability and human-machine interaction.

Speaker 3:

Amith, why is the addition of natural language processing to autonomous driving technology so exciting?

Speaker 1:

Well, the first thing to understand is that the autonomy we have in vehicles on the road today is based on much older technology than what we're discussing. So Tesla, for example, as well as GM with, I think, Super Cruise is what they call it, have different levels of autonomy depending on the vehicle manufacturer. There are obviously companies out there who've produced attempts at fully autonomous vehicles, like Google's business in that area, and these technologies have been around for quite a while. They're based on training on massive amounts of video content, and then training based on driver feedback and all these other non-language-based signals, like video, and what happens when different kinds of decisions are made in the driving system. So the way to think about this is that this is the next generation of AI models, applied to what's become kind of a classical use case of AI that people talk about, which is the autonomy in vehicles, or the desired autonomy in vehicles. The key insight, I think, is that it's a multimodal model. We say that a lot, but multimodality, the ability for a model to naturally process language and video and audio and other kinds of inputs, and then provide feedback in all those modalities, is where all these models are going. In fact, the latest from OpenAI, the latest from Google, the latest from Anthropic, these are all multimodal models. So that's becoming the standard thing. In fact, it's interesting how acronyms go.

Speaker 1:

Everybody's saying LLM for large language model, and a lot of people are abbreviating that to just "language model," with SLM for small language model, but then one of the things people are starting to do is drop the "language" entirely, because that's just assumed: small models and large models. And these models are all becoming multimodal. So in this particular case, what you have is a company that's essentially taken some of the generative AI technologies, these transformer-based language and related-modality models, and created a version that has specific applicability in the world of driving. That's really what you're seeing in application. So I think it's an exciting advance because, first of all, there's better performance, because these models are more advanced; they take into account more and more information. You know, with earlier versions of Waymo and other of these types of vehicles, there was no way to provide a feedback loop in real time. It was just trained on massive amounts of data. No language, no human feedback.

Speaker 1:

Some of these systems, at some companies, would have rule-based overlays.

Speaker 1:

So, Tesla actually for a long time didn't use neural nets. They had rule-based systems where they were trying to essentially create a hierarchical categorization of every possible scenario, with videos that the system was trained on, but the decision-making rule set was essentially what would have been called an expert system, or a rules-based system. They've woven in neural nets over the last several years, which has led to obviously tons of improvements. But these current systems are all hybrids. Really, the key point for people listening to this podcast is that this is yet another example of an advancement where we've blended together different kinds of data to achieve new capabilities, right? Higher-level capabilities in terms of the model's performance and the ability to interact with us. I think that's the key to the natural language part: being able to just talk to the model and say, I didn't like that turn you made back there. Maybe it was legal, but it was uncomfortable, so please don't do that again. And the model being able to take that kind of input into account.

Speaker 3:

Without getting too technical here, and I'll try not to: without the natural language piece, how would you have instructed the AI that it made a good or bad decision?

Speaker 1:

Well, I mean, historically, this would be part of the model training, where, depending on the company, they use different approaches. If you have a synthesis of rules-based systems along with neural nets, you would actually start to program rules into that part of your architecture. If it's purely neural-net-based, you would try to feed more training data to the next training iteration of the model, to say, okay, these are scenarios that we've encountered, and try to put more weight on those new scenarios. And that's still really important. But what we're seeing here is that it can be real-time, where you or I say what I just said, like, oh, don't make that left turn that way. And really, a lot of it will also be people writing up rules and explaining those rules to the AI, or potentially just taking in, okay, we're driving in the United States, so let's take all of the literature that's out there on safe driving in the United States, and all the literature that's available on the actual rules of the road in the United States, which are different than, say, Australia's. That's another thing you can ingest; the previous models couldn't take advantage of those mountains of text-based content. And that's the idea: the more you can teach the model, from video, for sure, but also from human feedback loops and written content, the smarter the models get, the more comprehensive their perspective is. One way to think about it is, if you are the model and I said, hey, Mallory, all I'm going to do is let you watch video, maybe it has audio, maybe it doesn't, but I can't talk to you and I can't give you anything to read, your understanding is going to be limited compared to if I gave you that additional material to read. So that's one way to think of it: the more you feed in of different kinds of things, the more complete the world model, or the world view, is.
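
To make that concrete, here's a toy sketch of how written rules and real-time natural-language feedback might be layered into the context of a language-capable driving model. The message format mirrors common chat-model APIs, but the model call itself is omitted and all the content is invented for illustration.

    # Toy sketch: injecting text-based knowledge (rules of the road,
    # rider feedback) that older, video-only driving systems couldn't use.
    RULES_OF_THE_ROAD = (
        "In the United States, drive on the right. "
        "Yield to pedestrians in crosswalks. Stop fully at stop signs."
    )

    def build_context(feedback: list[str]) -> list[dict]:
        # Start with the written rules, then fold in each piece of
        # accumulated natural-language feedback as a standing instruction.
        messages = [{"role": "system", "content": RULES_OF_THE_ROAD}]
        for note in feedback:
            messages.append({"role": "system", "content": f"Rider preference: {note}"})
        return messages

    feedback_log = ["That left turn felt too abrupt; take turns more gently."]
    for message in build_context(feedback_log):
        print(message)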

Speaker 1:

The other aspect of it is kind of the other side, which is the model's ability to explain itself. We call that interpretability, and that's an important concept to dig into a little bit, because when the model makes a decision, hey, did I turn left? Did I turn right? Did I stop at the stop sign? Whatever it did, models being transparent about how that decision was made is really key. Actually, another company in the space, Anthropic, who we've talked about a lot, they're the makers of the models called Claude, recently had a news release talking about a major advancement in the field of AI interpretability, where they are able to show much more mathematically what's happening inside the model. We've talked about these things being kind of black-boxy and not really being able to describe why they made decisions, and they're doing a lot of work in that area to make the models more interpretable and to discover what's happening. So it's related to that as well.

Speaker 3:

Yep, and that was actually my next question, about interpretability, because I never really thought about the scenario of just asking the AI why it behaved in a certain way. Is it as simple as that with interpretability? Is that something we could do with the models that are currently available to us?

Speaker 1:

You could certainly do that, but remember that each additional interaction, while it feels as though you're asking it about what it did, is really a new generative response. So if I go to ChatGPT and I say, hey, ChatGPT, what should I wear today? and ChatGPT gives me a response, and then my next question is, tell me how you made that decision? It's a brand new response. It's generating more text based on everything that's been there. It's not actually telling me what it did. So it might seem like it's explaining itself, but really it's just generating the next token, right? It's generating the next set of words, the next set of sentences, which might actually give you a useful response, but might not.
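
One way to see that point in code: a "why did you do that?" follow-up is just another generation call over the conversation so far, and nothing reaches back into the model's earlier internal computation. A minimal sketch, where generate() is a stand-in for any real chat-model call:

    # Sketch: a "why?" follow-up is a brand-new generative pass over the
    # transcript so far, not a readout of the model's earlier internals.
    def generate(history: list[str]) -> str:
        # Stand-in for a real chat-model call returning the next reply.
        return f"<new reply conditioned on {len(history)} prior turns>"

    history = ["User: What should I wear today?"]
    history.append("Assistant: " + generate(history))  # first answer

    history.append("User: Tell me how you made that decision.")
    # The "explanation" is new tokens predicted from the text above; it
    # can sound like introspection, but it is generation, not a trace.
    history.append("Assistant: " + generate(history))
    print("\n".join(history))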

Speaker 3:

Moving to topic number three, Aurora. Aurora is a large-scale foundation model of the atmosphere developed by Microsoft. It leverages deep learning techniques and is trained on over a million hours of diverse weather and climate data. Aurora is designed to produce high-resolution weather forecasts and air pollution predictions, outperforming classical simulation tools and specialized deep learning models. Some key features: Aurora uses a foundation modeling approach, which allows it to learn general purpose representations from vast amounts of data. This enables it to tackle a wide variety of atmospheric prediction problems, including those with limited training data and extreme events. Aurora can produce five-day global air pollution predictions and 10-day high-resolution weather forecasts in under a minute. Aurora's architecture is built upon a novel multimodal (there we hear it again) fusion approach, integrating various data modalities like numerical weather data, satellite imagery and climate simulations. This enables the model to learn intricate relationships between atmospheric variables and their visual representations, leading to more accurate and comprehensive weather forecasts. Aurora has a wide range of applications in meteorology and climate science, including generating high-resolution localized weather predictions. Its ability to excel at downstream tasks with scarce data could democratize access to accurate weather and climate information in data-sparse regions such as the developing world and polar regions.

Speaker 3:

Amith, I was thinking as I was going through this topic, I don't know if you and I have ever discussed this, but typically on the podcast, topic three, and sometimes topics two and three, are from different industries, particularly science. We've talked about DeepMind's GraphCast in the past, and we talked about AlphaFold 3. Now we're talking about Aurora. Why do you think it's essential to keep an eye on AI advancements that may not feel like they directly impact associations?

Speaker 1:

Well, first of all, I think it's important to pop your head out and look around a little bit every once in a while, just to see what's happening in the rest of the world, as an informed citizen. That's one side of it. But part of it is that you can see patterns emerging, where one thing happening in one field leads to another, like what we talked about in the world of synthetic biology with AlphaFold 3. How does that apply to the materials science conversation we had in the fall about the new models in that field? How does that then apply to things like predicting the weather? It's super interesting, because there are all these domains of science where predictions are obviously super important, and they're things that can affect the fields that associations serve. So if you're an association that deals with anything in the realm of science or engineering, these kinds of things are super relevant. You may not be the expert in them, but being aware of them is important in order for you to be relevant to your members. It might help you pick great content to write and create for your association, to educate; you know, find the expert in this field, for example, and bring them into your annual conference, and things like that.

Speaker 1:

But there are also some second-order type things to be thinking about. The primary applications you hear about for improved atmospheric or weather forecasting might be in agriculture, or in insurance, where you're thinking about risk profiles, obviously, and all the things governments are concerned with in terms of safety services and emergency services. But what about just event planning? Say I had a really good weather forecast 30 days in advance that was so accurate that I could say, you know what, I've got my annual conference coming up, and one of the events we have is in the evening, and it's an outdoor event. All event planners know this: you plan outdoor stuff, you always have a fallback of some sort, which hopefully you do, like, hey, if it's raining, if it's snowing, if it's bad, we'll move them inside. That'd be kind of nice to know in advance, especially when you're up here in Utah, in the mountains. And obviously there's the recreational side of it. As an avid skier, I'd love to have better forecasts a few weeks earlier, so I could plan trips to come up to Utah and take advantage of the powder as much as possible.

Speaker 1:

But aside from that, I think you just have to look at it and say the realm of science fiction is quickly becoming science fact, and AI is the conduit through which science fiction becomes science fact. So if you are up to speed on what's going on here, even if you have no idea how it applies to you, as a person living on planet Earth it's important to know what's going on. And okay, real-time, accurate weather forecasts, not only further out, but down to not just my zip code, but where I'm standing: what does that mean for me? If we can get high-accuracy forecasting at the micro level, that's super valuable as well, for lots of reasons. So I keep bouncing around on this topic in my head because I find it personally interesting.

Speaker 1:

Not just because I like to geek out on it, but because I think the ramifications have so many layers, right, so many different orders of effect. There's the primary one, but then there are second- and third-order effects that you see in all sorts of businesses. So thinking about this stuff at the zoomed-out level, I think, is really important for everyone in the association sector and nonprofit sector, because it'll help teach your brain how to think about what's here today, for sure, because there's a lot here right now, but also what's coming. And as you see these new things evolve, hopefully, coming back to your business, it'll teach you what might be possible. It'll open your mind to new ideas and new ways of thinking about problems. So to me that's the big reason for it.

Speaker 3:

Absolutely. Right before you brought up skiing, I had a moment where I was like, oh, I wonder if we talk about this so often because Amith likes to ski and it would be nice to predict good weather for that. Not that that's the only reason, but it's funny that you brought it up.

Speaker 1:

It's certainly high on my list. It's funny: last night out here in Utah I was hanging out with a friend of mine who lives here locally, and we were talking about exactly this. Skiers are glued to the apps on their phones. We have all these different, highly specialized apps with detailed ski forecasts, and those things will, I think, always be useful at some level. But having really high-quality, AI-driven forecasts is going to change the game completely with that stuff.

Speaker 3:

Yep. And to further contextualize it, I'm just thinking: we have someone who recently came to the Blue Cypress Innovation Hub from the Ski Instructors Association, so I guess, really, this technology can impact more than just people directly related to weather. Now, you mentioned this in topic two, which is funny because I was thinking about it as well. We predicted in our end-of-2023 podcast episode that multimodal models would go mainstream this year, and I feel like that prediction has certainly held true. Aurora is just one of many examples, integrating data modalities like numerical weather data, satellite imagery and climate simulations. Is there a need or a place still for single-modality models? I'm not sure if that's what they're called, but is there a world where we don't go multimodal?

Speaker 1:

Yeah, I mean, I think you have to apply systems thinking to this to really get the right view of it. So models come in all shapes and sizes. There are tiny ones that do one thing really well. There are big models that do several things reasonably well. You have predictive models, for example. If we want to say, hey, I need a model for my association that gives me a 90-percent-plus probability forecast of which of my members are going to renew and which of my members are likely to not renew, that's a classical predictive AI problem, which, by the way, has been solved for a long time. It's just that most associations aren't using that type of tech. That type of highly specialized model will always play a role, along with larger generative models.

So think of it this way: you are assembling a team. Just like you wouldn't have one person try to do everything. I mean, you do that in some cases; if you have three employees, everyone has to do a bit of everything. But if you have specialization, you tend to get better results. Then you have to coordinate these AIs; you have to bring them together in a way where it makes sense.

One thing I can jump back to: a few months ago we were talking about Mistral, and they have a model, not so new anymore in the world of AI but still relatively new in the real world, called Mixtral. Mixtral is their mixture of experts model, or MoE, yet another acronym, since we definitely needed another acronym in this world. It essentially combines multiple smaller models together along with something called a routing capability, where the model first figures out: oh, you're asking a language question, let me route to this little model that's really good at language. Oh, you're asking a science question, I have another model that's really good at that. Oh, you're asking a question about recreation, I have a different model for that. And these mixture of experts models are everywhere now: GPT-4, all the major models in the new Gemini family, all of them are MoE models. They all have a bunch of small models working together.

When I say systems thinking, though, you zoom out and say, well, what does this all mean? When you interact with a Claude 3 Opus or a GPT-4o, you're actually interacting with the synthesis of many components of functionality. In the world you live in, at the enterprise level, you'll have many models that will work together. The key will be to figure out what business use cases you have and which are the best tools. There's another way to think about this: think about software tools that you have used for a long time that are not AI.
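
To convey that routing idea, here's a toy sketch. In a real mixture of experts model like Mixtral, a learned gating network picks experts per token inside the network; this keyword-based router is only an analogy for how queries get sent to specialists.

    # Toy illustration of mixture-of-experts routing. Real MoE gating is
    # learned and happens per token inside the transformer; this keyword
    # version just conveys the concept of routing to specialists.
    EXPERTS = {
        "language": lambda q: f"[language expert] {q}",
        "science": lambda q: f"[science expert] {q}",
        "recreation": lambda q: f"[recreation expert] {q}",
    }

    def route_query(query: str) -> str:
        lowered = query.lower()
        if any(word in lowered for word in ("physics", "chemistry", "experiment")):
            return EXPERTS["science"](query)
        if any(word in lowered for word in ("ski", "hike", "travel")):
            return EXPERTS["recreation"](query)
        return EXPERTS["language"](query)

    print(route_query("Where should I ski in Utah this winter?"))
    print(route_query("Rewrite this paragraph in a friendlier tone."))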

Speaker 1:

So on your desktop, you probably have a word processing program like Microsoft Word or Google Docs, and a spreadsheet program like Excel, of course, or Google Sheets or something else. You go to the spreadsheet when you want to build different kinds of calculations. You go to the word processor when you want to create a document that has more free-flowing text. But can you create tables in the word processor? You can. And can you even embed bits and pieces of Excel inside a Word document? You can. Can you do the opposite and embed pieces of documents in Excel, or do formatting and lots of other things in Excel or Google Sheets? You can. But Excel is better at calculations and more structured types of things, and Word is better at more freeform stuff.

Speaker 1:

And then, if you add PowerPoint to the mix, or if you add something like an email tool, what you see is a specialization of tools. Do you need a thousand tools? No, but you probably have on your computer five or ten tools that you use pretty regularly. And what's happening with AI is we have lots of new tools all of a sudden, right? We as a species have evolved to be tool makers and tool users, and we find all sorts of novel ways to use tools, different from how the tool was even designed. What makes this AI conversation much harder is that the tool's purpose is very hard to describe. You can't say that ChatGPT can do these ten functional things, because it can do those things, but what else can it do? There's no manual to accompany it. That's what makes it harder to put your finger on.

Speaker 1:

But my general view is that there will be many models. Some of them will be highly specialized; they'll do one thing extremely well, and they might even be private to your association. Let's say your association has a set of data that's unique to you. Maybe you're a medical association and you have quality data from hospitals, or you have registries of medical procedures, or maybe you're in a recreational area and you have a lot of data about people performing in a particular field, things like that. These examples are so abundant, where there's this pocket of data that you have, and probably only you have, and you might want to create a model based on that. That's becoming easier and less expensive, and that model isn't designed to replace GPT-whatever; it's designed to complement it. So that's a really long way of saying you're going to be dealing with a lot of different pieces, tools that come together to form, ultimately, a solution.

Speaker 3:

But with having all of these highly specialized models, small models, maybe even large models, do you think they will still be multimodal? Or, because they're focusing on one specific task, might that not be necessary?

Speaker 1:

I think some of these models will be focused on just one narrow training data type. So, for example, if I'm predicting member renewals, would member renewals potentially benefit from multiple modalities of training data? Perhaps, but what I really need is a lot of structured data. I need data about you, data about your transaction history. Maybe I want your email history in the model also. It's still all text, but I have a mixture of structured data, like your transaction history, and maybe I pull in the emails. Why would the emails be relevant? Because your emails may indicate how you feel about the association, and that might be useful for the model to know.

Speaker 1:

Okay, this is the set of emails that Mallory has had with the association over the last year or two years; does that have any correlative relation to the predictability of renewal or non-renewal? But it's still probably all text. Does it help to have audio or video in there? Maybe. You could argue that it potentially would, but I would guess that in that use case the structured data would be the most valuable, and maybe some of the less structured, but still text-based, data would be valuable too. So some models will be very, very specialized and focused on one modality, and I don't know if there's anything wrong with that. It's just, if you then take that model and say, now we have a multi-agent environment where your renewal model, which was trained just for you, interacts with all these other models that are in the ecosystem, that gets really interesting.
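
For the renewal example, a classical predictive model on structured member data might look like this minimal scikit-learn sketch. The feature columns and values are invented for illustration; a real model would be trained on an association's actual member records.

    # Minimal sketch: classical member-renewal prediction on structured
    # data with scikit-learn. Features and values are invented.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Columns: years_of_membership, events_attended, email_open_rate
    X = np.array([
        [1, 0, 0.10],
        [5, 3, 0.80],
        [2, 1, 0.40],
        [8, 6, 0.90],
        [1, 0, 0.05],
        [4, 2, 0.60],
    ])
    y = np.array([0, 1, 1, 1, 0, 1])  # 1 = renewed, 0 = lapsed

    model = LogisticRegression().fit(X, y)

    # Probability that a 3-year member with one event attended and a 50%
    # email open rate will renew.
    probability = model.predict_proba([[3, 1, 0.50]])[0, 1]
    print(f"Estimated renewal probability: {probability:.0%}")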

Speaker 3:

So it sounds to me like what we've learned within the past 34 episodes, and kind of within the past few years, is that there's really a place for everything with AI. You've got your small models, your large models, your general purpose models, your more specialized models. It seems like at this point we're not doing away with any recent AI advancements; we're just finding where they best fit. Does that make sense?

Speaker 1:

It does, although if you assume a specialized model will stay ahead of the general purpose curve, you are potentially paving yourself a path to a painful future. My favorite example to pick on in this area is the Bloomberg company's BloombergGPT, which was announced to great fanfare about a year and a half ago. It was a $50 million investment, where they said: hey, we have all this amazing content at Bloomberg, all of our content that only our subscribers have access to, and we're going to train a GPT model on just our content that no one else has. The concept sounds good. It's protected private data.

Speaker 1:

The category, though, financial insights on companies that Bloomberg has all this data on, is not uniquely differentiated enough. So the outcome is that GPT-4, which is a general purpose model trained on the internet, is actually outperforming BloombergGPT on most financial analysis tasks. Could Bloomberg have predicted this back then? Well, probably not, actually, because I was pretty excited that that model was being built, and I think a lot of people were too. What we just saw is the pace of change, right? Because the exponential growth keeps happening the way we've been talking about, general purpose models now are better than Bloomberg's specialized model. But that's different than saying, hey, I'm an insurance company and I have all this claims data that no one else has about my insured people.

Speaker 1:

So I'm going to use that to train a model. Or, I'm an association that has a very unique private data set that's unlikely to overlap with data sets that other people have. I do think that if your data set truly is that differentiated, then a custom model could make a lot of sense. But back to your point, I think your generalization of it is a really good way to put it, Mallory: there's probably a place for most things to fit in, and over time, because things are moving so fast, some models that did have a place might become somewhat irrelevant. You could flip that whole conversation around and say, well, is there still a place for BloombergGPT? And there might be, in the sense that it's much smaller, faster and cheaper for Bloomberg to run that than to use GPT-4. Even though GPT-4 has comparable output, it might be way slower and more expensive to inference.

Speaker 1:

So there are always ways to look at this, and again, it's kind of like the tooling conversation. Software has completely blown up over the last 30 years in terms of the number of options in software applications. I mean, just look at marketing technology. There are these famous infographics on MarTech that come out consistently. I forget the company that creates it each year, but there's an infographic that shows the MarTech landscape, and there were hundreds of companies on it, then thousands. Now there are probably tens of thousands of companies, because of this high level of differentiation. I think that's going to be true for AI as well.

Speaker 3:

Yeah, I'm thinking you even see it with AI companies, right? Like Munch and Opus Clips doing the exact same thing in the exact same space, and I'm sure we'll see more similar companies pop up.

Speaker 1:

To your comment, Munch and Opus Clips are good examples. They're basically directly competing products, and you could say the same thing for Jasper and Writer and Grammarly; they have very similar overlapping features. So in the horizontal world, meaning companies trying to go after broad markets, there's going to be a lot of bloodshed. There's going to be a lot of competition, a lot of companies that are going to go away, a lot of companies consolidating, all these typical things. We had hundreds of companies manufacturing cars back in the 1920s and 30s. Then, in the 30s and 40s, it consolidated rapidly.

Speaker 1:

There are a lot of reasons for that, but in many maturing industries that happens. You're seeing that now with AI. I think where you're going to have a robust, long-term, vibrant ecosystem is hyper-specialization. What we're doing across Blue Cypress, obviously, is building AI models and tools specific to this vertical, which we think will have durability, because they solve a specific problem in this space better than general purpose models, and they sit on top of general purpose models anyway. So I think there's a lot of room for innovation, a lot of room for differentiation. You just have to look at the market and say, okay, is what we're building something that will become irrelevant in 12 or 18 or 24 months because of the curve? That's what's happening with the exponential curve. And so for Munch and Opus Clips, you know, maybe one of them finds a space where they're like, yeah, actually, what we do really, really well is clips for a particular platform, or for a particular subgroup of users that we create the most value for.

Speaker 3:

That's a great point. And disclaimer: I've never tried Opus Clips, but from what I've seen it's very similar to Munch. They may be distinct. All right, Amith, well, thank you so much for your insights today. Thanks for tuning in, all you listeners. We will be back next week with another episode of the Sidecar Sync.

Speaker 2:

Thanks for tuning into Sidecar Sync this week. Looking to dive deeper? Download your free copy of our new book, Ascend: Unlocking the Power of AI for Associations, at ascendbook.org. It's packed with insights to power your association's journey with AI. And remember, Sidecar is here with more resources, from webinars to boot camps, to help you stay ahead in the association world. We'll catch you in the next episode. Until then, keep learning, keep growing and keep disrupting.