Sidecar Sync

Meta’s New Llama 3.2 Model & Decoding Digital Twins | 52

Amith Nagarajan and Mallory Mejias Episode 52

Send us a text

In this episode of Sidecar Sync, Amith and Mallory delve into the latest in AI innovation. They kick things off with an in-depth discussion on Meta's new Llama 3.2 model, highlighting its capabilities for on-device AI and vision tasks. Then, the conversation shifts to digital twins—virtual replicas used to simulate real-world entities. Amith explains how this technology can be a game-changer for associations, from optimizing conferences to creating personalized member experiences. Tune in to explore AI’s expanding role in reshaping association strategies.

digitalNow Conference 2024
🔗 https://www.digitalnowconference.com/

🛠 AI Tools and Resources Mentioned in This Episode:
MemberJunction Library ➡ https://docs.memberjunction.org/
Llama 3.2 ➡ https://huggingface.co/meta
ChatGPT-4 ➡ https://openai.com
Claude 3.5 ➡ https://www.anthropic.com
Gemini 2.0 ➡ https://www.google.com

🎬 Sidecar Sync Ep. 48: Unlocking AI-Powered Insights from Unstructured Data
https://youtu.be/QyPV7A6VRn4?si=4hYpUofgvZXriqR5

🎬 Sidecar Sync Ep. 35: Understanding Vectors & Embeddings
https://youtu.be/IUEpRW-UUSs?si=MRkUXp7ihrCbnhhv

Chapters:

00:00 - Introduction
02:31 - Digital Now Conference Update
05:44 - Meta’s Llama 3.2
09:50 - AI at the Edge: On-Device Benefits and Privacy
14:48 - Vision Capabilities in Llama 3.2 Models
20:21 - Which AI Model is Best for Complex Use Cases?
23:48 - What Are Digital Twins?
31:59 - Applying Digital Twins in Associations
36:42 - Optimizing Member Experiences Using AI

🚀 Follow Sidecar on LinkedIn
https://linkedin.com/sidecar-global

👍 Please Like & Subscribe!
https://twitter.com/sidecarglobal
https://www.youtube.com/@SidecarSync
https://sidecarglobal.com

More about Your Hosts:

Amith Nagarajan is the Chairman of Blue Cypress 🔗 https://BlueCypress.io, a family of purpose-driven companies and proud practitioners of Conscious Capitalism. The Blue Cypress companies focus on helping associations, non-profits, and other purpose-driven organizations achieve long-term success. Amith is also an active early-stage investor in B2B SaaS companies. He’s had the good fortune of nearly three decades of success as an entrepreneur and enjoys helping others in their journey.

📣 Follow Amith on LinkedIn:
https://linkedin.com/amithnagarajan

Mallory Mejias is the Manager at Sidecar, and she's passionate about creating opportunities for association professionals to learn, grow, and better serve their members using artificial intelligence. She enjoys blending creativity and innovation to produce fresh, meaningful content for the association space.

📣 Follow Mallory on LinkedIn:
https://linkedin.com/mallorymejias

Speaker 1:

You cannot even begin to contemplate doing a digital twin exercise because you just won't have the data. You know you can't launch a rocket, no matter how cool your rocket is, if you don't have any fuel. Welcome to Sidecar Sync, your weekly dose of innovation. If you're looking for the latest news, insights and developments in the association world, especially those driven by artificial intelligence, you're in the right place. We cut through the noise to bring you the most relevant updates, with a keen focus on how AI and other emerging technologies are shaping the future. No fluff, just facts and informed discussions. I'm Amith Nagarajan, chairman of Blue Cypress, and I'm your host. Sidecar Sync listeners and viewers, welcome back to another episode. We are so pleased to have you with us. Thank you for spending a short bit of your day with us. We have, as always, a bunch of exciting topics. My name is Amith Nagarajan.

Speaker 2:

And my name is Mallory Mejias.

Speaker 1:

And we are your hosts. And before we get into our two topics for today, at the intersection of all things artificial intelligence and associations, let's take a moment to hear from our sponsor.

Speaker 2:

Digital Now is your chance to reimagine the future of your association. Join us in the nation's capital, Washington DC, from October 27th through the 30th for Digital Now, hosted at the beautiful Omni Shoreham Hotel. Over two and a half days, we'll host sessions focused on driving digital transformation, strategically embracing AI and empowering top association leaders with Silicon Valley-level insights. Together, we'll rethink association business models through the lens of AI, ensuring your organization not only survives but thrives in the future. Enjoy keynotes from world-class speakers joining us from organizations like Google, the US Department of State and the US Chamber of Commerce. This is your chance to network with key association leaders, learn from the experts and future-proof your association. Don't just adapt to the future. Create it at Digital Now. Amith, how are you doing on this lovely Wednesday morning?

Speaker 1:

I'm doing really well. It's a cool morning in New Orleans. I got a run in this morning and it's just nice outside and that puts me in a good mood. How about you?

Speaker 2:

It's chilly over here in Atlanta. It was 41 degrees when I woke up, so I was not fully prepared to take the dog out in that weather, but it's been nice. I really enjoy fall. It has been a crazy few weeks, for sure, as we gear up for Digital Now 2024. Are you excited, Amith?

Speaker 1:

I am super excited about Digital Now 2024. We've got two weeks to go, as you said, with October 27th coming up. Actually, I guess it's less than two weeks; it's 11 days from today. We're recording this on the 16th. We've got, I think, record attendance already, or record registration already, and growing. So if you haven't yet registered and you intend to come, you should definitely get on that right away. A lot of people post-COVID have been registering for conferences at the last minute, or even showing up on site to register. Those of you association folks that run meetings know this very well, and we're challenged with that, obviously, in planning. But we're so excited. We've got some amazing content lined up, can't wait to see this large group get together in DC, and we've got an exciting surprise to announce when we get there.

Speaker 2:

Absolutely yes. What is this trend about, Amith? I swear, I don't know, maybe over 40% now, maybe over 30%, of our attendees have registered within the last month and a half or so, which is really amazing but also really hard to plan for. But I mean, as I said, the fall is a busy season, not just for us but for everyone. So I think a lot of people were waiting to see, kind of, how their schedules shook out.

Speaker 1:

Yeah, a couple of evenings ago I was chatting with the CEO of a pretty large association that we work with a lot, and he was telling me that he just got back from his conference. I don't remember how many thousands of people were there, but it's a big event, and he said that 25% of their registrants had registered in the last 10 days. Wow. I don't know if that's a general trend line or if that's more than normal, but that's definitely a shift compared to pre-COVID. It seems to be a durable shift in behavior, where people used to take advantage of early bird type pricing more, or just plan ahead more. But it's interesting, and I think maybe AI will help us plan based on changing events in the world. But we're excited. It's going to be awesome.

Speaker 2:

We are indeed excited. We actually had a listener of the Sidecar Sync podcast reach out to us in our inbox. They are based in the UK, but they're avid listeners of the Sidecar Sync, and they asked if we had a virtual option for Digital Now, which we don't, technically, but we will be recording those keynote sessions and adding them to our AI Learning Hub, which you've heard us mention on the podcast before. We also have all of our 2023 keynote sessions from Digital Now in the AI Learning Hub as well. So if that is of interest to you, if you're listening to the pod but you can't join us, keep a lookout for those keynote sessions on the AI Learning Hub. Today we have two exciting topics to talk about. First and foremost, we're speaking about Llama 3.2. And then we'll be talking about the concept of digital twins, which is a new one for us on the Sidecar Sync podcast.

Speaker 2:

Llama 3.2 is Meta's latest advancement in large language models, and here is an overview of some of its key features and advancements. Llama 3.2 offers a range of model sizes to cater to different use cases. We've got lightweight models, at 1 billion and 3 billion parameters, designed for edge and mobile devices, and also medium-sized models, at 11 billion and 90 billion parameters, with vision capabilities. The lightweight models are optimized for on-device applications, providing instant responses and enhanced privacy by processing data locally, and the larger models introduce multimodal capabilities allowing for more sophisticated reasoning tasks, including image processing. As a note, the 11 billion and 90 billion models are the first in the Llama series to support vision tasks. All Llama 3.2 models support a context window of up to 128,000 tokens, and they offer improved support for eight languages, including English, German, French, Italian, Portuguese, Hindi, Spanish and Thai. They're also optimized for deployment on various hardware platforms, including Qualcomm and MediaTek chips, as well as ARM processors.

Speaker 2:

Now these models are suitable for a wide range of applications, like personal information management and multilingual knowledge retrieval, on-device AI for edge and mobile applications, image reasoning tasks, like we mentioned, including object identification and visual grounding, as well as document-level understanding, including processing of complex graphs and charts. It is available now through various platforms. You can download it from Hugging Face and Meta's website. It's deployable across major cloud providers like AWS, Google Cloud and Microsoft Azure, and it's accessible through Meta's Llama Stack, which provides APIs and tools for easier customization and deployment. So, Amith, we're big Llama fans over here, I would say. What do you think is exciting about Llama 3.2?

Speaker 1:

Well, first of all, I think it's the season for new models, it seems like, but then again it's kind of like saying it's hot in New Orleans, it's always the season for new models.

Speaker 2:

It's always. Right, right.

Speaker 1:

You know, it's crazy. And actually, just as an aside, I read, and I don't know how truthful or accurate this is, but there's a particular Twitter user, I forget the name now, that has consistently given scoops on leaks from the major labs, and supposedly there's a GPT-4.5 coming. There is a Claude 3.5 Opus, which is the larger version of Claude's model, that's on the horizon, which is likely why GPT-4.5 would probably drop right after that, because OpenAI has a good habit of dropping things as soon as Claude does. And a plan for Google to release Gemini 2.0 at some point this fall. Now, that's all speculation, but the point would be at least one, probably two of those three statements are likely to be true in the next several weeks. And coming back to Llama 3.2, this is a very powerful model from Meta. It's open source, open weights, totally free to inference anywhere around the world, in different languages and on any platform you like. So it just gives us more flexibility, and all of that is good. This is all increasing choice, increasing flexibility and pushing things forward.

Speaker 1:

Now, coming back to the question about on-device, or on-edge, AI, and I think a lot of us are just generally on edge with AI. "Why would you want to do that?" is one of the questions people ask me a lot, and the real reason is, first of all, performance. Think about it this way: your phone is an unbelievable computing device. It has an enormous amount of power in it, and these phones keep getting better and better, as do laptops and tablet devices and so forth. Even your watch probably has a pretty powerful processor in it. So why not take advantage of all of that computing that's just literally sitting there?

Speaker 1:

You know, over the history of computing we've had kind of this ebb and flow back and forth, from central computing to on-edge or on-device computing and back again. It started off with mainframes and minicomputers, where you had these dumb terminals, as they were called, which were these green screens, and it was all character-based applications, and all the processing happened on the mainframe or the minicomputer. Then in the 90s we moved to client-server, where applications kind of had a mix: there were some things on the server and then some things, typically, on the Windows PC. And then with the web, it's moved largely back to the data center, although more and more can be done in a web application. And, of course, with mobile apps running on your phone, you can tell very quickly what's a native app versus an app that's just a thin frame around a web experience; the fidelity and quality of a native application is usually higher.

Speaker 1:

The reason I share all that quick history of computing in the last 60, 70 years is that, as these things have gone back and forth, we keep realizing, well, computing doesn't slow down, and there's all this power on edge or on device, in your phone, on your tablet, on your laptop. We should be taking advantage of it. So that's one reason it's interesting, and many problems don't require the most powerful model in the biggest computing data center. The other reason is privacy. When you think about Apple's entire strategy around AI, it really focuses on this on-device model concept. Apple has their own very small model, I think it's a 3 billion parameter model, that runs on the latest iPhones that were released, and Meta's 1 billion parameter model clearly is targeting the Android world and, just in general, anybody who wants to run small models. And of course, Google's Gemini Flash, which is their tiny model, is super fast and super small.

Speaker 1:

So, coming back to the whole idea: if we can distribute our workloads so that some of the AI happens locally and some of the AI that's really complex maybe happens in a data center, that's an interesting ecosystem; we can take advantage of this incredible infrastructure that's out there. For associations, when we think about this, we might say, well, what aspects of our member experience may we want to perhaps never get the data for? We don't necessarily want association members to ever share, let's say, patient information. We really wouldn't want that if we're a medical association.

Speaker 1:

So maybe there's some kind of an application that we release that's trained on our corpus of content, so it's an expert assistant in a particular medical domain, but we want those conversations to be entirely local to the environment the user is working in. Maybe they have it deployed in their environment. That's just one example where you have personally identifiable information, or just generally sensitive information. Or, within the association itself, if you have the opportunity to deploy these models in your own data center or in a cloud environment you control, which is what most people do these days, then you know that your data isn't kind of leaking away and going to major cloud providers, or, sorry, major AI providers, who you may or may not trust. I'm not suggesting you shouldn't, by the way. It's just an opportunity to create a more secure infrastructure for your most sensitive data.

Speaker 2:

In that case, if security is of the utmost importance for an association, would you recommend that all of the AI models they run be run locally, or is there kind of a use case where you could use these other models as well?

Speaker 1:

I don't think security has to be the highest priority for data in all associations all the time. Even the government, even the CIA, has different tiers of security around different kinds of content. They have a public-facing website with some information, and then they obviously have various levels of classification, and I think that should be true for most other organizations, including associations. Perhaps your most sensitive workloads that you do want to AI-enable, you run on-prem or in a virtual private cloud environment where you have a higher degree of control. With Llama 3.2 specifically, I think there are some really interesting things about this particular series of models that are worth noting too. One is the size of the smallest model being 1 billion parameters, versus most of them having been 2, 3, 4 billion parameters. That's a notable size reduction, which means less memory and less compute, and their benchmarks show performance that's roughly on par with prior model generations, aka six months ago, that were 7, 10, 12 billion parameters. So we've talked about this trend line over time here at the Sidecar Sync, where we've talked about the enthusiasm we have for small models as much as the latest frontier models, the biggest models, because they're democratizing access to really high-powered AI. And the 1 billion parameter Llama model is roughly as good as GPT-3.5 was, which is kind of crazy. Back in late 2022, when ChatGPT launched and people first got that taste, that was based on a model that's approximately equivalent in power to the 1 billion parameter Llama. So that's exciting.

Speaker 1:

And then the vision capabilities you mentioned earlier, Mallory, I wanted to quickly highlight. When you're able to have a multimodal experience, the model can understand more about what the user is trying to do. It can look at pictures of the world, and over time that will be video as well, and then respond to you both in the form of text and pictures. So that, I think, is a really powerful concept. Right now the vision model doesn't extend to image generation, but that likely will soon be kind of a standard thing all models do. But this model is able to look at images, and so there are many applications that can be enabled by that kind of multimodal, multimedia type of capability. And even the, what was it, the 11 billion parameter model, which is fairly small, you can also inference locally. 90 billion, once you get into that range, those models are big enough that you probably have to run them in a data center.
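
For the developers listening, here's a minimal sketch of what asking the 11B vision model about a chart could look like, using Hugging Face's transformers library and the meta-llama/Llama-3.2-11B-Vision-Instruct checkpoint. The checkpoint is gated, so approved access is assumed, and the image filename is hypothetical.

```python
# Minimal sketch: asking the Llama 3.2 11B vision model about a chart image.
# Assumes transformers >= 4.45, a capable GPU, and approved access to the
# gated meta-llama checkpoint on Hugging Face. The image file is hypothetical.
import torch
from PIL import Image
from transformers import AutoProcessor, MllamaForConditionalGeneration

model_id = "meta-llama/Llama-3.2-11B-Vision-Instruct"
model = MllamaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

image = Image.open("membership_trends_chart.png")  # hypothetical local file
messages = [{
    "role": "user",
    "content": [
        {"type": "image"},
        {"type": "text", "text": "What trend does this chart show? Answer in two sentences."},
    ],
}]

# Build the chat prompt, pair it with the image, and generate a reply.
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(image, prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=120)
print(processor.decode(output[0], skip_special_tokens=True))
```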

Speaker 2:

But the 11 billion parameter model having vision is pretty crazy. Amith, I would say you're probably the biggest AI power user that I know, and I would bet that many listeners and viewers of the Sidecar Sync would probably agree. I'm curious: are you considering, or do you already run, an AI model locally on your mobile device for personal information management? Is that something you're considering doing?

Speaker 1:

I definitely am considering it, and I have actually done it. I downloaded Llama 3.1, the small model, and I ran it. I'm trying to remember what the app was. There's a whole bunch of apps you can use. Be careful, because there's a lot of malware out there too. But there are apps you can download from the Apple App Store, and also in the Android world, and then you can download different models and basically have a chat-type experience with them. The problem with these things is they're not integrated well into the experience, so they're not kind of natural and engaging in a way where you can talk to an on-device assistant.

Speaker 1:

So if you think about where Apple's trying to go with Siri, where Siri has local memory and local inference, that's really interesting, because a model like that has essentially guaranteed privacy: the data is encrypted on device and is not available to anybody. I would definitely think people would be able to get comfortable using it for other things. An example might be personal data. If you use a fitness tracking tool that's linked to your iPhone, do you want that health data going to any of these cloud providers or AI providers? Maybe you're comfortable with that and maybe you're not, but more people would be comfortable, I think, with local. So I think there's a number of use cases where that makes sense. I also think what's going to happen is every app is going to have a built-in AI model, where it's going to just say, hey, this app you downloaded from the App Store has Llama 3.2 1B just baked into it. It's just part of the app, it's part of the download, and maybe it's shared or something, but the apps become dependent upon local inference capabilities, just like apps are dependent upon multi-touch capabilities or upon having a camera. There are whole generations of apps that are going to become possible locally because there's zero incremental cost for having those basic AI capabilities. I call them basic, but they're actually pretty advanced. So if you just assume that mobile app developers in 2025 are going to say, yeah, of course there's an AI model that runs locally, there's essentially zero network latency, it's super fast and there's no incremental cost, it changes the equation. Because even though the cost of running these models in the cloud has been dropping really rapidly, it's still something. So if a mobile app developer says, hey, I want to do a free app that helps you with nutritional coaching, and you can inference that locally, it's both better privacy-wise, plus the mobile app developer has zero incremental cost when people download it. So I think you're going to see an explosion of AI-enabled apps.
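
To make the local inference idea concrete, here's a minimal sketch of chatting with a quantized Llama 3.2 1B entirely on your own machine via the llama-cpp-python library. The model file path is hypothetical; any GGUF build of the 1B Instruct model would work the same way.

```python
# Minimal sketch: a fully local chat with a quantized Llama 3.2 1B model.
# Nothing leaves the machine: no API key, no network call at inference time.
# Assumes `pip install llama-cpp-python` and a GGUF build of the model
# downloaded locally (the path below is hypothetical).
from llama_cpp import Llama

llm = Llama(
    model_path="./models/Llama-3.2-1B-Instruct-Q4_K_M.gguf",  # hypothetical path
    n_ctx=8192,     # context window to allocate; Llama 3.2 supports up to 128k
    verbose=False,
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a concise nutrition coach."},
        {"role": "user", "content": "Suggest a quick high-protein breakfast."},
    ],
    max_tokens=150,
)
print(response["choices"][0]["message"]["content"])
```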

Speaker 1:

That's the short version of my answer. And coming back to the question about me: yeah, I'll totally use this stuff. I'm a little bit of a strange case in lots of ways, probably, but particularly in terms of AI use. I do use AI really heavily, but I'm also something of a creature of habit, so I'm not necessarily going out there and trying every single new tool as fast as I can. I do try to go deeper in tools and try to stretch how much I can get out of them.

Speaker 1:

Over the last 30 years of entrepreneurship, one of the things I've spent a lot of time thinking about is how to grow other leaders, and part of that is helping people learn how to think about their own prioritization and how to delegate. Part of that is being really good at protecting your time and focusing your time on high-value activities, finding low-value activities and then pushing them off, traditionally to other people; now you can push them off to AI. So I do come back to it and say, hey, what else can I get out of the tool set I have? In that way I'm probably a power user. I'm very deep in using these things from a programmatic perspective, because I work with our software development teams across the family of companies all the time. But anyway, that's kind of what I do with them. I have not yet played with Llama 3.2 specifically.

Speaker 2:

Well, that's kind of a good segue into my last question here, which is, we've talked on the podcast before about this plug-and-play approach. If you are developing an AI product at your association, you don't want to be locked in to one model. You want to have kind of this layer of protection in between you and the technology, so that you can plug and play new models as they come out. So I'm going to put you on the spot here, Amith, as someone who is developing and helping to develop AI products. Out of all the recent models we've seen thus far, which one do you find yourself going back to?

Speaker 1:

Well, I would tell you that, for the complex products we're building at the moment, OpenAI's GPT-4o is still the best at something that we refer to as deep instruction following. If I provide the model with a really complicated prompt, a prompt that has multiple layers of direction and is asking for a highly specialized type of output, that is something OpenAI has done a better job with. They've essentially fine-tuned their models to have very, very good structured outputs, and they actually have APIs for it. So OpenAI is more reliable, by enough of an incremental margin for really complex use cases, that our teams tend to default to OpenAI still, and we have no problem with that. I mean, OpenAI seems to be a reasonable company. I don't know that I trust them any more or any less than anyone else, but they have a really good product at the moment. I don't think that advantage is going to be durable. I think that the other products are right on their heels, and I've seen really good results from the latest Llama models, like the 405B model specifically. So for really complex software development, I think OpenAI has certain advantages, but that's not true universally. For example, one of our AI agents, called Skip, which is our AI data analyst product, uses GPT-4o in certain areas but also uses GPT-4o mini in some areas. And in the cases where GPT-4o mini is used, you could just as easily plug in Llama 3.2 90B as a very easy substitute. So we like to say we're agnostic. The point you made about plug and play, though, Mallory, I think is the most important thing I can try to hammer home for our audience. You should not closely couple your development to a particular company's APIs or a particular company's proprietary tools.

Speaker 1:

Of course, all the companies are trying to create additional features. A great example of that is OpenAI's Assistants API, which you can think of as kind of like a custom GPT under the hood. You can provide files, you can have it run Code Interpreter, you can have it do a whole bunch of things that the regular API can't do. But once you build on top of the Assistants API, first of all it's quite limited right now, but that will change; if you build on top of it, then you're way more closely coupled to OpenAI. You have less portability than if you build in a more general way. This has been going on since the beginning of time with every platform you build on. For example, you build a web application, and then, back in the days before browsers were really standardized, someone like Microsoft might come out and say, hey, we've got this browser called Internet Explorer and it supports this extra feature, and then all of a sudden websites are incompatible with other browsers, right? So that's exactly what's happening with these AI models.

Speaker 1:

I would just encourage people to come up with, to the greatest extent you can, a generic way of interacting with them. Actually, the MemberJunction team has a free library for doing this that you can get on GitHub; we'll include it in the show notes. It's just called the MemberJunction AI Library. Independent of using anything else in the MemberJunction world, this tool allows you to automatically switch which models you're running your code against, so you can very seamlessly switch from one to another. And there are other libraries that do that as well. But this is all free, open source software that's easy to take advantage of.
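
For a feel of the pattern Amith describes (this is not the MemberJunction AI Library's actual API; every class and function name below is hypothetical), here's a minimal Python sketch of a thin abstraction layer that keeps application code vendor-agnostic:

```python
# Hypothetical sketch of the "plug and play" pattern: application code talks to
# one small interface, and concrete providers (OpenAI, a local Llama, etc.)
# are swapped behind it via configuration. This illustrates the idea only;
# it is not the MemberJunction AI Library's actual API.
from abc import ABC, abstractmethod


class ChatModel(ABC):
    """The only surface application code is allowed to depend on."""

    @abstractmethod
    def complete(self, system: str, user: str) -> str: ...


class OpenAIChatModel(ChatModel):
    def __init__(self, model: str = "gpt-4o"):
        from openai import OpenAI  # real SDK; reads OPENAI_API_KEY from the env
        self.client, self.model = OpenAI(), model

    def complete(self, system: str, user: str) -> str:
        resp = self.client.chat.completions.create(
            model=self.model,
            messages=[{"role": "system", "content": system},
                      {"role": "user", "content": user}],
        )
        return resp.choices[0].message.content


class LocalLlamaChatModel(ChatModel):
    def __init__(self, model_path: str):
        from llama_cpp import Llama  # local inference, no network dependency
        self.llm = Llama(model_path=model_path, verbose=False)

    def complete(self, system: str, user: str) -> str:
        resp = self.llm.create_chat_completion(
            messages=[{"role": "system", "content": system},
                      {"role": "user", "content": user}])
        return resp["choices"][0]["message"]["content"]


def get_model(provider: str) -> ChatModel:
    """Swap providers with a config value instead of a code rewrite."""
    if provider == "openai":
        return OpenAIChatModel()
    return LocalLlamaChatModel("./models/Llama-3.2-1B-Instruct-Q4_K_M.gguf")
```

With this shape, moving a workload from a hosted model to a local Llama becomes a configuration change rather than a rewrite, which is the portability being argued for here.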

Speaker 2:

Awesome. We'll include that in the show notes, for sure. Topic two today is digital twins. Digital twins are virtual representations of real-world entities, processes or systems within a business, synchronized with their physical counterparts in real time. This concept has gained significant traction across various industries, offering a powerful tool for simulation, monitoring and optimization of business operations. A business digital twin typically consists of the physical business entity, which could be products, processes or systems; the digital representation of that entity; and the data connection between the physical and virtual representations. These digital replicas use real-time and historical data to represent the past and present states of their physical counterparts and simulate predicted futures.

Speaker 2:

Digital twins can be applied to various aspects of a business, like operations and processes. They can model and optimize supply chain management, production lines, customer service processes and resource allocation. They can be used for product design and development, performance monitoring, predictive maintenance, and digital twins can simulate user interactions, customer or member behavior patterns and service delivery. There are quite a few benefits of having a digital twin in business. I'll share a few of those with you all. They can provide real-time data and predictive analytics, which enables more informed and timely business decisions. They can expose previously undetectable issues and guide managers to make data-driven improvements. Insights from digital twins can be used to improve products in future iterations or uncover opportunities for new product lines, and they can be used to deliver novel experiences and features to customers.

Speaker 2:

A primary example of digital twins that you definitely recognize would be Uber. Uber's sophisticated digital twin system demonstrates the potential of this technology in business, and it allows Uber to do things like manage dynamic pricing, optimize routes and, ultimately, improve the customer experience. So, Amith, I had not heard of the term digital twins. I think it makes a ton of sense as it relates to business, and at a glance our listeners might think, well, this sounds great for Uber, but what exactly might a digital twin look like in the world of associations?

Speaker 1:

Well, I think another way to describe the concept of a digital twin is that it's a simulation. It's a way of simulating a complex environment or a complex system, and digital twin is just the term of art that's been around for the last few years, I think, and companies have embraced the concept. What's different about it now is the amount of data we have and, obviously, the AI that we have to be able to simulate increasingly complex environments, with more and more dynamic variables to them, more and more externalities being considered. So, on your question of how associations may apply it: imagine if you had a digital twin that represented something like your annual conference. In the annual conference digital twin, basically, you have all of the content, all of the speakers and all of the attendees kind of loaded up into this digital twin of the environment. And don't picture it as, like, a video of people attending the conference; it's more about the idea of what happens. How will different people behave? Say you have 10,000 people coming to an event and you have 300 sessions and two dozen keynotes and you have all these different things going on. Well, what would happen if you moved session A from one room to another? Or what would happen if you changed the musician that you have featured for an evening entertainment venue from one group to another? Or what would happen if you changed some of the topics, right?

Speaker 1:

So if we have all of the individuals modeled as people, right, essentially in this digital twin, in this complex ecosystem, we could say, well, this is likely what's going to happen. This is how that system will react. Your annual conference might have attrition. You might have fewer people come. You might have more people come. More people might go to this session than that session. This might help you with planning, and that's one example. You could model an entire association's membership as a digital twin and say, how will the membership react to the association taking this public policy position, for example? And that might be based on all of the data you have on all of the people, based on all of your behavioral interactions, like newsletter clicks and website visits and educational courses they've taken, social media listening. You bring in all that data and say, I'd like to be able to better forecast or simulate what really happens in this complex dynamic system. So I think there are a lot of applications for associations.

Speaker 1:

Now, what we're describing here is very sophisticated, so a company with the technology and capital resources of an Uber, or someone like a large-scale manufacturer, historically they've really been the only organizations that have had access to technology along these lines. Remember that the cost of these types of systems is going to keep coming down, because we're on the backs of not only Moore's Law but the AI acceleration that we're all experiencing together, and so it's going to become less and less expensive to do the kinds of things that we're talking about. And I think a really key thing to be thinking about, more generally than the term digital twin, is: how do you better simulate what would happen as you make decisions, and then as you stack up decisions? You say, okay, we've made the decision to host the conference in Illinois; which city in Illinois should we pick? Okay, well, what time of the year should we do it, and all these other things, and then you can have kind of downstream decision-making from there. So it is a really interesting technology if you generalize it.

Speaker 1:

I think the other thing to think about with digital twins is the applicability within the professions that you serve as associations. The example I just provided is really about how the association may use digital twins as a platform, essentially, to make decisions and model the future. Well, what if you are an association, let's say, in the healthcare world? Your doctors, your practitioners, your clinicians, your researchers, how are they using this technology, or how will they likely use it? Imagine if there was a digital twin of a patient, and that digital twin was essentially a representation of everything we know about this individual: all of their key data, their full electronic health record, everything we know about them from a genetics perspective, behavioral insights, every data point we have from, let's say, the ongoing data stream we get from wearables these days, anything else that you can think of. All that information is loaded up into this digital twin, and we might say, well, let's see what's going to happen.

Speaker 1:

What would happen to this guy, Amith, if we did this procedure on him? What would happen if he took this particular medication? What's the combination of his biology, specifically, with this particular chemical compound that we're thinking about giving him? Rather than basically trial and error, where you say, oh well, this person's sick in this way, let's see what this medicine does, which is kind of what happens, right? We have this broad-based research that asks, first of all, is this going to hurt someone? Obviously, that's what we're filtering out, but we really don't know what's going to happen with different individuals when we have different interventions, whether it's medication or something else.

Speaker 1:

So I think that's the most obvious one to me, where digital twins literally mean a twin of an individual, a human being. But I think you can model any complex system, and then you say, okay, well, what happens when you model a digital twin of each individual member? Perhaps it's a lot less sophisticated than the biological representation, which is probably a lot less interesting to an association anyway, but I have a digital twin of every single member which represents everything we know about them, and we can kind of simulate what's going to happen. So we have Mallory as a digital twin in the system, and we can determine, okay, what's Mallory likely to do? Where is she going to go? What is she going to do? And then we aggregate that up across 100,000 people, and we have our whole membership as a digital twin, as a system. Those are the types of things I think get kind of interesting, and obviously there's an AI engine behind all of this, because the scale of data is way beyond what any of us can individually comprehend.

Speaker 2:

That's super interesting. I want to talk about the annual conference example that you gave first, which of course you were just providing as an example, and it doesn't seem super feasible, but I was jotting down the potential data sources we would need to make that digital twin possible, including data on individual preferences, information about the venue, like layout and location, historical data for all of our events, info on all the conference sessions, even weather information to know if it's raining or whatever. Thinking of all the information that would be necessary to create that digital twin, in this moment it seems infeasible. But I'm wondering if there are ways to do mini simulations that you could talk about, where maybe we don't have everything, but we could kind of plug in some of this information and have a good guess.

Speaker 1:

Well, one of our core values is that we want to seek progress over perfection. That's a Blue Cypress core value, and it really means a lot to me. Put another way, it's saying, you know, great can be the enemy of good. Good enough can get you 80%, 90% of the way, versus 0% of the way towards perfect, right?

Speaker 1:

So I think, with digital twins or any other kind of modeling exercise, what we're talking about is gathering as much information as we actually have access to and then working with that as a starting point. More and more data sources are going to become available to you over time, partly because you're going to get better at capturing them and thinking to capture certain kinds of data. And then what you're also going to be able to do is to extract insights from unstructured data more and more, and we've talked about that in this pod. We had a really good episode on unstructured data, I think two weeks ago or so. It's one of our most listened to episodes and I think the ideas in there are really interesting here, because you have mountains of data in the form of emails and other forms of correspondence with your members and your audiences, but you often don't really think of that as your data. So the unstructured content itself can be fed into these kinds of ecosystems. But you can also run that unstructured content through AI to gather structured insights out of the unstructured content and then feed those more structured attributes into these kinds of simulations and models.
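
As one illustration of that unstructured-to-structured step, here's a minimal sketch that uses OpenAI's Python SDK in JSON mode to pull structured attributes out of a member email. The attribute schema and the email are invented for the example, and a locally hosted model could fill the same role.

```python
# Minimal sketch: turning an unstructured member email into structured
# attributes that a simulation or digital twin could consume. Uses the OpenAI
# Python SDK's JSON mode; the schema and email are invented for illustration.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

email_text = (
    "Hi team, I loved the data analytics workshop last month, but I probably "
    "won't renew unless next year's conference is closer to the West Coast. "
    "Travel budgets are tight this year."
)

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    response_format={"type": "json_object"},
    messages=[
        {
            "role": "system",
            "content": (
                "Extract member attributes from the email as a JSON object "
                "with keys: topics_of_interest (list of strings), "
                "renewal_risk (one of low/medium/high), constraints (list of strings)."
            ),
        },
        {"role": "user", "content": email_text},
    ],
)

attributes = json.loads(resp.choices[0].message.content)
print(attributes)  # structured fields, ready to feed into a model or simulation
```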

Speaker 1:

And so I think that if someone wanted to go experiment with this, I would start with something really small, something really simple. The annual conference might be too big of an exercise to go after right away. You might start off by modeling, let's say, an individual member and trying to see what's going to happen with that one person, and then group together multiple such models to say what's going to happen with this cohort, like a cohort of people that came in at the same time, or the same graduating class or something like that, and grow from there. But the sophistication of these models obviously drives the likelihood of them being useful. So if you only have a tiny, sparse amount of data, the probability of them being useful to you is probably pretty low. The only thing I'd add to that, which is a little bit of a contradiction to my last statement about less data meaning less useful, is that the foundation models we have available have a very good general understanding of people and the world. So even though you may not have perfect data on every individual, or even on all of your individuals collectively, the AI models that are pre-trained on all of the world's data could probably still be pretty helpful to you in other ways as part of this ecosystem.

Speaker 1:

I think the key thing to be thinking about when you hear about digital twins, or anything that may seem more esoteric to some people, or maybe seems out of reach in some ways, or where you just wonder, how do I use it, is to think about business problems that you'd like to focus on. Where are your pain points as an organization, and how could you improve the quality of your organization, your business, either by increasing efficiency or by improving value to members? Venue selection is one that I always hear people talking about in meetings. Some people plan these things out like six years in advance because their events are so large; it dictates that. But within those venue selections there are hundreds and hundreds of other decisions, as you know well, Mallory, from planning Digital Now, even an event on a smaller scale like ours.

Speaker 1:

So potentially these kinds of simulations could be really helpful for optimizing the quality of events and the business outcomes, like the number of attendees, and just really creating the best possible experience. I come back to that because that's the physical world, but there are similar concepts available, like in an LMS, you know, or in any other environment. Say, hey, we have this LMS full of courses. What would happen if we changed these courses, or changed this structure, or changed this learning path? Well, the digital twin concept applied there can help us simulate what's going to happen if we made those changes, before we actually make them.

Speaker 2:

It's kind of like doing QA on the future, in a way. We've talked about using vectors to personalize offerings for members in a previous podcast episode as well; we'll link that one and the unstructured data one in the show notes. So if we're vectorizing data points to provide personalization to members, are we kind of creating digital twins in that process, by saying, we think you would be interested in this? Is that kind of a digital twin?

Speaker 1:

A vector of, like, a profile of an individual would essentially create a semantic representation. Put another way, there's, like, a mathematical meaning to who is Mallory or who is Amith. So if I take, let's say, a document that outlined here's everything I know about Mallory, and then I ran that document through what's called an embeddings model, it would generate that embedding, or the vector that you're describing. It's a sequence of thousands of numbers that represent the semantic meaning of that document, which is obviously a proxy for saying, hey, this is the meaning of who is Mallory, right? And so, in a way, what we're doing is compressing the information in that document into a format that's super efficient from a math perspective to then compare against lots of other vectors like it. So if I create 100,000 embeddings for 100,000 members, and then if I take, let's say, every course that I offer, or components of courses that I offer, and I vectorize the content from those courses, now, using vector math, I can compare people to courses, and then I can say which courses are most likely to be relevant to Mallory or to Amith or to anyone else. And so vectors allow us to scale, in terms of both the amount of data and the speed at which we can do these comparisons.
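
Mechanically, that comparison is usually just cosine similarity between embedding vectors. Here's a minimal sketch with numpy, using made-up four-dimensional vectors in place of the thousands of dimensions a real embeddings model returns:

```python
# Minimal sketch: matching members to courses by cosine similarity of
# embeddings. Real embeddings have thousands of dimensions; these toy
# 4-dimensional vectors are made up purely to show the math.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# In practice these would come from an embeddings model run over a
# member-profile document and each course description.
member_mallory = np.array([0.8, 0.1, 0.6, 0.2])

courses = {
    "AI for Associations 101": np.array([0.9, 0.2, 0.5, 0.1]),
    "Event Planning Basics":   np.array([0.1, 0.9, 0.2, 0.7]),
    "Data Strategy Bootcamp":  np.array([0.7, 0.1, 0.8, 0.3]),
}

# Rank courses by how semantically close they are to the member profile.
ranked = sorted(courses.items(),
                key=lambda kv: cosine_similarity(member_mallory, kv[1]),
                reverse=True)
for name, vec in ranked:
    print(f"{name}: {cosine_similarity(member_mallory, vec):.3f}")
```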

Speaker 1:

But that doesn't mean they capture every nuance. They capture a lot, but the actual data, the actual bits and bytes of every single piece of information we have, is far more robust than what the vector is going to capture. So vectors are absolutely a part of a solution that is really important, broadly in AI and in the context of digital twins, for sure, because you're going to need to be able to do things very quickly at scale. You're going to need these kinds of shortcuts and representations. But I do think there's a broad, broad array of data that will go beyond that, because the digital twin needs to have an understanding of all of the different knobs and levers, essentially, that affect what it is that it's modeling. So the vector will be part of that solution, but not all of it. It'll probably actually be a small component of the overall idea of what a digital twin represents.

Speaker 2:

I think that makes sense. So comparing vector to vector is essentially comparing numbers using math. But with a simulation of me, for example, I would be able to ask it any potential question, like, what if we created this new thing, or what if we switched the venue location? And that doesn't seem like something you could do with vectors.

Speaker 1:

Yeah, exactly. Like, if I was trying to predict which piece of content on your website Mallory might find most relevant, if I take a vector representation of you, which is again an output from looking at a document, essentially, that describes you, it's going to be effectively a shortcut to saying, hey, these are the attributes of Mallory that we think are most semantically interesting from the embeddings model's point of view. And then I'm going to give you content that's also, in a similar way, kind of summarized, if you will, in a mathematical way that makes it possible to do that at scale. But is it capturing every possible nuance? No. Does it tell us what happens to...

Speaker 1:

Let's say, for example, I have 10 different values that I want to track about you: your purchasing level, your customer satisfaction score, the number of events that you have attended, and a number of other things like that, and each of those values is part of that profile. The vector essentially compresses all of that into one kind of summary mathematical meaning, but I still want to know what those individual values are, particularly in a digital twin environment. Part of what we're doing is looking ahead. Kind of think of it as like a video, where, frame by frame, we're looking ahead five frames, 10 frames or 50,000 frames. We want to know what those individual values might end up being down the road, so we might be able to say, oh okay, Mallory's order volume has gone up, but her customer satisfaction level has gone down. Digital twins go to that level of granularity, essentially simulating the future state of the entire complex system, if that makes more sense.
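
To make the frame-by-frame idea concrete, here's a toy sketch of a per-member digital twin whose explicit attributes are stepped forward through time. The attributes, update rules and numbers are all invented; a real twin would drive these updates with models learned from historical member data rather than hard-coded rules.

```python
# Toy sketch: stepping a per-member "digital twin" forward frame by frame.
# The attributes and update rules are invented; a real system would replace
# the hard-coded dynamics with models learned from historical member data.
from dataclasses import dataclass

@dataclass
class MemberTwin:
    name: str
    order_volume: float   # purchases per year
    satisfaction: float   # 0.0 - 1.0
    events_attended: int

    def step(self) -> None:
        """Advance the simulated state by one month under toy dynamics."""
        # Invented rule: heavy buying with low engagement erodes satisfaction.
        if self.order_volume > 10 and self.events_attended < 2:
            self.satisfaction = max(0.0, self.satisfaction - 0.02)
        self.order_volume *= 1.01  # invented 1% monthly growth

mallory = MemberTwin("Mallory", order_volume=12.0, satisfaction=0.9, events_attended=1)
for frame in range(12):  # look ahead 12 "frames" (months)
    mallory.step()
print(mallory)  # e.g. order volume drifting up while satisfaction drifts down
```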

Speaker 2:

No, it does. That actually makes a lot more sense. In my mind, I was thinking they were more similar than they were. It sounds like, then, well, I'll make the statement and you can tell me if it sounds wrong or right: in moving toward the process of digital twins one day in the future, or even a mini experiment, it would be pretty essential to have some sort of common data platform in place. Would you agree with that?

Speaker 1:

Yeah, now more than ever, you've got to get your data house in order, and a data platform, a common data platform, is critical to be able to ingest data from all of your structured systems, but also to help make sense of your unstructured data. There's as much, if not more, value in the unstructured data you have in Microsoft Office, in Google, on your website, in emails, in Box.com and all these other places where you have stuff that can all be made sense of. Now you can bring that all into one unified repository where you understand what this stuff is, and that can lead to the opportunity to create digital twins. It can lead to the opportunity to do way more advanced analytics. You can do more predictions. So, put another way, if you don't have your data house in order through a common data platform style of approach, you cannot even begin to contemplate doing a digital twin exercise, because you just won't have the data. You can't launch a rocket, no matter how cool your rocket is, if you don't have any fuel, and the data is essentially that fuel.

Speaker 2:

Seems like a good line to wrap up today's episode on. Everyone, thank you for tuning in to episode 52. We will see all of you next week.

Speaker 1:

Thanks for tuning in to Sidecar Sync this week. Looking to dive deeper? Download your free copy of our new book, Ascend: Unlocking the Power of AI for Associations, at ascendbook.org. It's packed with insights to power your association's journey with AI. And remember, Sidecar is here with more resources, from webinars to boot camps, to help you stay ahead in the association world. We'll catch you in the next episode. Until then, keep learning, keep growing and keep disrupting.