Sidecar Sync

Celebrating 50 Episodes, Exploring the AI Learning Hub, and Advances in Spatial Intelligence | 50

• Amith Nagarajan and Mallory Mejias • Episode 50

Send us a text

In this milestone 50th episode of Sidecar Sync, hosts Amith and Mallory celebrate the show's journey while diving into exciting new developments. They explore the enhanced AI Learning Hub 2.0, which now includes a professional certification, and discuss the importance of continuous AI education for associations. Plus, they introduce spatial intelligence, a game-changing area in AI, and share insights from Fei-Fei Li's groundbreaking work. Tune in for a fascinating look at the future of AI in the association world, and join the celebration of 50 episodes!

🎉 digitalNow 2024 Contest:

Post about this episode on LinkedIn! Share something you learned or a cool tool you're going to try, tag Sidecar (https://www.linkedin.com/company/sidecar-global/), and use the hashtag #digitalNow. Each post is an entry, and two winners will receive free passes to digitalNow 2024.

Note: Every post counts as one entry. The contest ends on October 4th.

🔗 https://www.digitalnowconference.com/

🛠 AI Tools and Resources Mentioned in This Episode:
NotebookLM ➡ https://notebooklm.google.com
AI Learning Hub ➡ https://sidecarlearninghub.com
The a16z Podcast ➡ https://podcasts.apple.com/us/podcast/a16z-podcast/id842818711

Chapters:
00:00 - Introduction
02:10 - Celebrating 50 Episodes
06:20 - Introducing the AI Learning Hub 2.0
12:36 - The Value of Continuous AI Education
19:18 - Fei-Fei Li and the Future of Spatial Intelligence
21:35 - How Spatial Intelligence is Changing AI
28:58 - The Future of Training Models
37:47 - Contest Announcement and Final Thoughts

🚀 Follow Sidecar on LinkedIn
https://www.linkedin.com/company/sidecar-global/

πŸ‘ Please Like & Subscribe!
https://twitter.com/sidecarglobal
https://www.youtube.com/@SidecarSync
https://sidecarglobal.com

More about Your Hosts:

Amith Nagarajan is the Chairman of Blue Cypress 🔗 https://BlueCypress.io, a family of purpose-driven companies and proud practitioners of Conscious Capitalism. The Blue Cypress companies focus on helping associations, non-profits, and other purpose-driven organizations achieve long-term success. Amith is also an active early-stage investor in B2B SaaS companies. He's had the good fortune of nearly three decades of success as an entrepreneur and enjoys helping others in their journey.

📣 Follow Amith on LinkedIn:
https://linkedin.com/amithnagarajan

Mallory Mejias is the Manager at Sidecar, and she's passionate about creating opportunities for association professionals to learn, grow, and better serve their members using artificial intelligence. She enjoys blending creativity and innovation to produce fresh, meaningful content for the association space.

📣 Follow Mallory on LinkedIn:
https://linkedin.com/mallorymejias

Speaker 1:

You know what I tell people when I speak on AI? I say, listen, I want you to promise me one thing walking out of this room: I want you to write down on your notepad that you're going to devote 15 minutes a day to learning AI, because if you do that consistently, you're going to become way more knowledgeable than probably everyone else. Welcome to Sidecar Sync, your weekly dose of innovation. If you're looking for the latest news, insights and developments in the association world, especially those driven by artificial intelligence, you're in the right place. We cut through the noise to bring you the most relevant updates, with a keen focus on how AI and other emerging technologies are shaping the future. No fluff, just facts and informed discussions. I'm Amith Nagarajan, Chairman of Blue Cypress, and I'm your host. Welcome to the Sidecar Sync. We're excited to have you join us, and we have a whole bunch of interesting content to cover at the intersection of associations and artificial intelligence. My name is Amith Nagarajan.

Speaker 2:

And my name is Mallory Mejias.

Speaker 1:

And we are your hosts. And before we get into all of the excitement of today's episode about AI and associations, let's take a moment to hear from our sponsor.

Speaker 2:

Digital Now is your chance to reimagine the future of your association. Join us in the nation's capital, Washington, DC, from October 27th through the 30th for Digital Now, hosted at the beautiful Omni Shoreham Hotel. Over two and a half days, we'll host sessions focused on driving digital transformation, strategically embracing AI and empowering top association leaders with Silicon Valley-level insights. Together, we'll rethink association business models through the lens of AI, ensuring your organization not only survives but thrives in the future. Enjoy keynotes from world-class speakers joining us from organizations like Google, the US Department of State and the US Chamber of Commerce. This is your chance to network with key association leaders, learn from the experts and future-proof your association. Don't just adapt to the future; create it at Digital Now. Amith, we've got big news today.

Speaker 1:

Yeah, what's that?

Speaker 2:

We are celebrating episode 50 of the Sidecar Sync podcast. Can you believe that?

Speaker 1:

It is pretty amazing.

Speaker 2:

Actually, I'm pumped about it. I mean, 50 episodes, and we have so much to celebrate. This week we're going to talk about our brand new and improved AI Learning Hub, but also, for our listeners, Monday, September 30th was International Podcast Day, and so we got to celebrate the 50th episode of the Sidecar Sync podcast in the same week as International Podcast Day, which is really exciting to me.

Speaker 1:

That is super cool, I have to say. You know, just getting this episode started, in the back of my head, I've been playing around with NotebookLM a bunch, and we covered that in our last episode, or maybe it was two episodes ago, I can't remember.

Speaker 2:

They blurred.

Speaker 1:

Recently. We covered it recently and at the time we recorded that episode I had not had hands-on time with it yet and Mallory had, and she played an example clip of the podcast that it generates and since then I've generated a few myself and it's pretty impressive. So definitely, if you haven't checked out Google's Notebook LM, go check it out. But I also have those two voices in my head every time I listen to a podcast, because I've listened now to like 10 or 15 examples of that. So when we're starting to talk, I'm like, do we kind of sound like those AIs from Google?

Speaker 2:

No, the AIs from Google sound like us, Amith. That's the way it is.

Speaker 1:

Yeah, that's probably it.

Speaker 2:

Well, we're super excited to be celebrating episode 50. Off the top of your head, and this might be too hard of a question, I'll have to look as well: Amith, do you have a favorite episode that you want to talk about, maybe a favorite topic or a favorite guest that we had?

Speaker 1:

You know, I would say when we recorded the Foundation episodes, there were two of them, Foundation episode one and two. They still are some of our most popular episodes, even now, because I think they give people a really good intro to AI, and it's a format I think that's pretty easily consumed. So probably those two, although recently we did another evergreen-style episode on unstructured data. That seemed to get a lot of really positive feedback. I enjoyed that topic a lot. I think it's an area with so much opportunity. So probably either of those two.

Speaker 2:

I would say on my end, I like the Evergreen episodes as well. I like the Foundation of AI episodes. For me, the Vector episode was also really fun because it was intimidating, to be totally honest, and kind of scary. And I did my research and after that episode I genuinely felt like, okay, I've got a good grasp on this, and hopefully our listeners felt the same way. And I always think back to Neil Hoyne's episode. I think that was a really great interview that we did.

Speaker 1:

I agree with that.

Speaker 2:

Well, everyone, I want to remind you all about a contest that we are currently running in honor of our 50th episode that entails complimentary attendance for you and a colleague to Digital Now 2024, which is this month, October 27th through the 30th, in Washington, DC. All you have to do is post on LinkedIn about the Sidecar Sync podcast, tag Sidecar and use the hashtag #digitalNow. Each post is one entry, and the contest ends tomorrow, Friday, October 4th, so get your posts out there.

Speaker 1:

That's going to be exciting. Well, I can't believe Digital Now is right around the corner. It's kind of crazy. I mean, every year we say the same thing, but it comes back around really quick. It has been slightly less than a year since Digital Now. This year it will be October 27th through 30th. Last year I think it was November 8th or 10th or something like that, but still, that's not the reason. Time is just moving quickly.

Speaker 2:

Yeah, and I associate the beginning of this podcast with Digital Now from last year, because they were very close, I don't remember if it was the same week, the week before, or the week after, but really close to one another. It's just insane to think that we've done this for pretty much a full year, but I'm excited to be here on episode 50. Well, in today's episode we're covering two topics. One of those is the AI Learning Hub, or our new and improved AI Learning Hub 2.0, as we like to call it, and then we'll also be talking about spatial intelligence, and that'll be a really interesting conversation as well. So if you have listened to the podcast before or watched us on YouTube, you have heard us mention the AI Learning Hub in, I think, every single episode. It is our library of asynchronous AI lessons and courses with association-specific use cases and applications. Within that Learning Hub, we have office hours with AI experts, and you also get access to a community of AI enthusiasts, as I like to call it. And we are thrilled to announce a brand new AI Learning Hub that we just released last week, essentially at the end of September.

Speaker 2:

The AI Learning Hub was always meant to be living and breathing, and we planned to update content frequently from the start, but we really zoomed out, took a look at the whole thing, and realized this stuff is changing so quickly that we decided to reinvent kind of the whole Learning Hub. We wanted to have a better flow, we wanted to have better recordings, we wanted to have better checkpoints, more engagement with our learners and better courses overall. So you might be wondering what's new exactly about this AI Learning Hub 2.0. Pretty much everything. Every course in there is new, minus the AI Prompting course, which we redid just several months ago, so that one's new as well. We have a Foundations of AI course, AI and Marketing and Member Communications, AI and Events and Education, Data and AI, Strategy in the Age of AI, and AI Agents, and we're currently working on an eighth and ninth course, which will be Responsible AI and a chatbots course. We also moved to a new LMS, or learning management system.

Speaker 2:

So formerly we hosted the AI Learning Hub on the Circle community, which we love, but we were limited in terms of learning features, especially as it pertained to things like assessments. Now we aren't. We've added in knowledge checks and activities throughout, and we're also thrilled to announce the launch of our new Association AI Professional Certification. So for those individuals who take all the courses in the Learning Hub, complete them, and then take and pass the seven assessments that go along with those courses, they then share a reflection on ways that they're incorporating AI into their regular work. Once they do all of that and pass everything, of course, they earn the AAP, or Association AI Professional, Certification, which recognizes that individual for outstanding theoretical and practical AI knowledge as it pertains to associations.

Speaker 2:

I hope you all know, having listened to this podcast before, that we at Sidecar are committed to bringing our audience the best, most up-to-date content out there and making it highly relevant for you, and that is why we're thrilled to be announcing the launch of this new AI Learning Hub.

Speaker 2:

If any of this sounds interesting to you,

Speaker 2:

we do have our AI Prompting course, which is available at the really low price of $24.

Speaker 2:

It's a great entry point if you kind of want to see what the AI Learning Hub is about, and it has some fantastic tips and tricks in terms of prompting ChatGPT, for example, or Claude, or Google Gemini. We've also got a middle tier that we're offering, which is our standard one, where you get access to all seven courses in the AI Learning Hub, and then we have our Pro tier, which gives you access to all the courses, access to the office hours and access to that AAP certification that I just mentioned, and that's $399 a year. I'll also mention, if you want to get your whole team involved in AI learning, which is really where we recommend starting if you want AI to have that fully transformative effect on your association, you can get your whole team in there as well for one flat rate based on your organization's revenue. So, Amith, what are you most excited about with this new AI Learning Hub?

Speaker 1:

It's kind of hard to pick one thing. There's a lot going on in that release, and AI is changing quickly. Our commitment since the beginning of all of our learning opportunities with Sidecar around AI has been to continuously update, iterate, improve, and we do that all the time, and we have tons of content that's available for everyone for free. We have this podcast, we have blog posts, we have a monthly AI webinar. We do a whole bunch to try to provide as wide a swath as possible of free content, and then our premium offerings through the Learning Hub that you mentioned have really taken a step up in terms of both quality and depth. So I'm excited about that.

Speaker 1:

Probably, if I had to put a pin on it, the one thing that I'm most excited about is the certification, and the reason is that I think certifications are very clearly understood as being valuable for career advancement, for designating your expertise, and there's a lot of people out there already, which is exciting, who have taken the time to learn a lot about AI. A lot of them are in this community, and we want a way of acknowledging and recognizing those strengths in the community. For employers who are seeking to hire AI-capable association professionals, this is going to be a great way to be able to showcase that on a resume or on LinkedIn, and so we're excited about that. To me, the biggest thing about it is having a professional designation that shows that you're an expert at that intersection of associations and artificial intelligence. So that's my number one thing, probably. How about you?

Speaker 2:

That's, in a way, the most exciting, because it is the newest thing that we're doing. A lot of the other parts, the office hours that we kind of revamped, all the course material, we already had that, so the certification is exciting for that reason. But being that you already said that one, I think the thing that's most exciting for me is really the overall flow. The first round of the AI Learning Hub was a bit more disjointed, I'll say. We kind of focused in on topics where we had that expertise. We shared use cases and examples, but I don't think there was a great flow, a great story from beginning to end. And since we decided to recreate everything all at once, which was a bit crazy, but I'm happy that we did, we had the ability to take a step back, look at the picture from afar and say, where are we starting, where are we ending, what do we want people to learn from this AI Learning Hub? And so for me, the most exciting part was feeling like it's a cohesive product.

Speaker 1:

Yeah, it makes a lot of sense. You know, just reflecting on the journey of everything we've been doing around AI, which goes back many, many years. I mean, Sidecar has been talking about AI for years and years now, and we really have ramped up our AI conversation in the last two and a half years. But the thing that we've been consistently beating the drum on is learn, learn, learn, and I think that's probably true for any topic: if you have something disruptive or something emergent that you've got to go figure out, you need to learn. You can't just make a bunch of uneducated choices and decisions. It's all guesswork at that point. And still today I see a lot of people out there saying, hey, what's our AI strategy going to be? Let's hire consultants or let's try to figure it out ourselves, and they don't really have a deeply rooted foundation in the basics of what AI can do, where AI is heading, how to think about it. So, fundamentally, I think it's a really worthwhile investment to slow down for a second and do an education program of some sort. Obviously, we have ours, and it's tailored for the association community, but there's tons of great educational resources on AI. Ultimately, what we care about is people going out there and doing something to advance their AI learning.

Speaker 1:

What I tell people when I speak on AI is, I say, listen, I want you to promise me one thing walking out of this room: I want you to write down on your notepad, or in your brain if you don't have a notepad with you, that you're going to devote 15 minutes a day to learning AI. I don't need an hour from you. I don't need three hours from you. I just want 15 minutes a day, five days a week, to learn AI, because if you do that consistently, you're going to become way more knowledgeable than probably everyone else, or the vast majority of people, and it's going to put you in a position where you're super effective using the tools. You're going to be able to provide valuable counsel to others in your association, to people outside of the association. It's going to help in your personal life. It's going to help you also be more competitive in a changing labor market as the world is changing. So I think it's a great and very easy investment to make.

Speaker 1:

If you think about it, we can all afford to spend 15 minutes doing something that's important every day, and if you think you can't do it once a day, do it once a week, you know, 15 minutes every Monday morning, 15 minutes every Friday afternoon. Pick a time that works for you, block it off and do it. Obviously, our Learning Hub has a bunch of small lessons that are broken up, typically into under-15-minute chunks, so it fits in really nicely with that idea. But again, you know, the goal here isn't to pitch the Learning Hub by itself. It's the idea of learning on a continuous loop basis.

Speaker 1:

And don't stop. If you think, like, you know, okay, I have a pretty good understanding of AI, I know how to do the prompting, I understand a little bit about vision models, I understand about agents, that's awesome. Don't stop there, keep going. There's no such thing as graduating from AI college. You know, like, all of us are basically just slightly less or more incompetent with AI than the next person. That's honestly where it is. Even people who say, hey, we're AI experts, there's really no such thing, because we have no idea what these models can actually really do, and that's true even for the people who create the models.

Speaker 2:

A hundred percent, and we're going to kind of touch on that with the second topic, which is spatial intelligence, which really blew my mind. Amith, I was just thinking of something while you were talking. I'm curious about the top AI influencers that you follow, because you'll send me things sometimes and I'll be like, I don't even know who that is, and then I'll click follow. But I do feel like you have a really good repository of resources that you keep up with.

Speaker 1:

Thanks. Yeah, I follow a lot of different people on LinkedIn. I subscribe to a whole bunch of different newsletters. Some of the stuff I subscribe to is kind of esoteric or technical. I do follow all the big names, like Fei-Fei Li, who you're going to talk about shortly with spatial intelligence, Andrej Karpathy, all these other people that are very big names in AI, and they often have some very interesting things to say. However, there's also a ton of other people out there who aren't that well-known, who are researchers or engineers or people that are starting companies doing interesting things in AI.

Speaker 1:

So I'm pretty liberal with how I sprinkle the follow button on platforms. LinkedIn in particular is the main platform where I hang out, and a lot of things end up in my feed that I think are interesting. The newsletter side of it is really valuable too. I don't even know how many newsletters I subscribe to, but I look at probably, I don't know, easily 20 or 30 newsletters a day. I don't read them from front to back, but I scan them and I look for interesting things, and that's a lot of what I spend my time doing. Plus, I listen to, I don't know, probably 10 or 15 different podcasts on a regular basis. So I'm just consuming information constantly.

Speaker 2:

Wow, that's intense. We should make a list and post it on the site.

Speaker 1:

Yeah, I'd be happy to. That'd be fun.

Speaker 2:

Last question on the AI Learning Hub what was your favorite course that you recorded? And just so you all know, Amith led the way on teaching a lot of the content in the Learning Hub. I have a course in there as well, and then we have some other teachers too. But I'm curious, what was your favorite?

Speaker 1:

Well, I mean, I think, probably for me personally, my favorite one is Strategy in the Age of AI, and the reason I like that course so much is that it gives you an intellectual framework for how to think about strategy generally, based off of Hamilton Helmer's Seven Powers framework, and we've adapted it for the association market to some degree but, more importantly, applied it to the world of AI.

Speaker 1:

You know, whenever there's a period of rapid change, there's opportunities for new businesses, there's opportunities to displace existing businesses, but you have to rethink the way your business is going to work, whatever that business is, and so having a really rigorous intellectual framework for what could create a strategic advantage ultimately, and what's a durable one, is something I think is really interesting.

Speaker 1:

So that one was my personal favorite, probably because that's the stuff I think about constantly in starting businesses within the Blue Cypress family or working with you on product strategies for Sidecar or whatever. Probably my other favorite, one that's a little bit more practical, is the data one, the course all about data, because we talk about some of the things you mentioned earlier, like vectors, and we talk about unstructured data, which I mentioned earlier. We have a lot of depth of content in there, and I think a lot of association folks I know, whether they're in membership or marketing or technology roles, struggle mightily with data, and AI can help a ton with this, and so the data course I think could be very, very practical for a lot of people.

Speaker 2:

Moving to topic two, which is spatial intelligence. Fei-Fei Li, who Amith just mentioned, is a renowned AI pioneer and Stanford professor, and is also known as the godmother of AI, which I didn't know, but several news sources call her that. She has been speaking about spatial intelligence for a while, essentially saying that it's the next frontier in artificial intelligence. Spatial intelligence aims to give AI systems a deeper understanding of the physical world and enable them to interact more effectively with their environment. Spatial intelligence encompasses several important capabilities that I want to cover for you all briefly, and one of those is visual processing, the ability to process and interpret visual data from the surrounding environment. But really, the key here is 3D understanding, so understanding the three-dimensional nature of objects and spaces, including their geometry and spatial relationships. Spatial intelligence also creates the capability of making predictions about how objects will interact or move in physical space, and it's all about linking perception with action, so allowing AI not just to see and understand, but to also interact with the world. Fei-Fei Li has founded a startup called World Labs, which aims to build large world models, or LWMs, that can generate interactive 3D worlds. World Labs has raised significant funding, about $230 million thus far, to pursue this vision. What's interesting here is that Fei-Fei Li compares the development of spatial intelligence in AI to the evolutionary leap that occurred when organisms first developed sight, which led to an explosion of life, learning and progress. She's essentially saying that a similar transformative moment is about to happen for computers and robots.

Speaker 2:

So, Amith, this was a lot for me to tackle. You sent me a podcast episode, which we'll link in the show notes. It was an a16z episode where they interviewed Fei-Fei Li, and it was pretty mind-blowing. It was a short episode. I don't know how they got so much into just a short period of time, but it was great. You all should listen. You and I on this podcast have talked about computer vision, which allows for object detection and recognition in images and videos. We talked about SAM 2, or the Segment Anything Model, recently. But what we're actually talking about there is 2D analysis, and so spatial intelligence, as I understand it, is different in that it refers to an awareness and understanding of the three-dimensional world. Can you talk about that distinction a little bit?

Speaker 1:

Sure, and before I forget to do this, Fei-Fei Li wrote a book called The Worlds I See, which is an excellent book on AI generally, and it does talk about her personal journey and how she went through this process of really getting into this particular subspecialty within the world of AI, which I think is just a really interesting thing. So I think of it this way. You know, I actually think her analogy is best: it is like an organism first evolving to develop sight, and as the sight gets better and better, the capabilities are radically different than, you know, organisms that didn't have sight. So that, I think, is one way to look at it. Another way to think about it is, through language models, if I am just essentially describing to you, this is the house I live in, these are the dimensions of the living room and the dining room and the bedroom, and this is how tall my roof is, I can give you a lot of textual descriptions of that. But if I take you to my house and I show you my house and I walk you around it, and you see, constantly, your brain is processing these multidimensional images of what's going on, and you've got movement in there, which is a fourth dimension, you have a much better understanding. Plus, you probably have an understanding of materials as well, and physics, and you have an intuitive understanding of those things where you're like, oh well, I know that that beam is made out of wood and this other thing is made out of steel and this is made out of glass. And what would happen if you threw a baseball at the glass? It probably will break, or it might break, depending on the kind of glass, whereas if you throw it at the wood, you know intuitively it's probably not going to do anything, right? So you have an intuitive understanding of physics and materials and a whole bunch of other things in your brain, and a lot of that has come not because someone has told you, hey, Mallory, baseballs break glass but don't break wood. It's because you've had experiences in your life, largely visual, where you've ingested all that training data over the course of a lifetime, essentially.

Speaker 1:

Right now, the concept that she's talking about is to couple the ability of models to take in video and image data with actual understanding, as opposed to just basically looking for patterns. And so the understanding means that there's a physics engine in there, where the movement of objects is not just based upon patterns that were inferred from millions of hours of video previously, but also based on an understanding of what the rules of physics are, or an understanding of the materials. So you may or may not have any way of determining what the materials are. You know, if you look at a car, is the panel on the side of the car made out of aluminum or steel or composite or something else? You can make some guesses, some educated guesses, based on what you think most car panels are made out of. But, you know, if you actually knew that, then you'd be able to better predict what would happen if that car crashed at different speeds, and all this kind of stuff. So a lot of what she's talking about is where this is going. In my mind, there's world models, there's vision models, there's language models, there's all sorts of different kinds of models. Ultimately, these things are all going to converge, and whether they're large or small, I think, will be a distinction that matters more to the engineers than most people. But this is a capability none of these models have today. They don't have an understanding of what's going on. They're just basically predicting the next frame of the video, essentially, as opposed to having a deeper understanding. So that's a big part of what she's describing.

Speaker 1:

The other thing that I want to point back to is, some time ago, there was a big, big amount of noise about OpenAI releasing videos from something called Sora, which we covered on this podcast, maybe back in the spring, right around there. And the speculation at the time was that the reason OpenAI had invested in this text-to-video generative model is that they were playing with a world model, a physics-based world model, which is similar to what's being described here. So I think a lot of labs are working heavily on this. I think she's amazing, Fei-Fei Li is amazing, and I think that, you know, she'll probably do something really interesting there. The question will be, will this company have a distinguishably different, you know, capability compared to what OpenAI, Anthropic and a lot of other big labs are doing, who are all thinking about this problem as well?

Speaker 2:

Mm-hmm. Oh man, this is so interesting to me. There was a thread, particularly when she was talking about language models, this underlying current, at least in my words, of "you ain't seen nothing yet," which I think is interesting, because we talk about generative AI so much on this podcast, and particularly models like GPT-4o and Claude, and my mind has been blown seeing these generative language models, but that really is just a fraction of the way we kind of perceive the world. I thought it was quite a provocative statement she made about how language doesn't really tell us all that much about the environment around us. But you nailed it on the head: it's that 3D interpretation of the environment around us.

Speaker 1:

Well, I mean, Yann LeCun, who's the head of Meta's FAIR lab, their AI research lab, and a brilliant, you know, computer scientist and AI researcher, he's the main head of that lab, and he has talked in the past about how, you know, in a period of time when a human being, from birth to, I think he talks about, like four years old roughly, has taken in, you know, so much information through our various senses, right, so sound, vision, touch, smell, these are senses that generate information for our brains and our training process, that in those four years that child has taken in more information than the entire world's comprehensive, collective knowledge base, right, from a digital perspective. Actually, I think I have that stat wrong. It might be like multiple orders of magnitude greater. So he talks about that and, you know, kind of points out some similar things. And he has mentioned publicly in the recent past that they too are working on models kind of in this genre. So I think it's an exciting category.

Speaker 1:

Fei-Fei Li, by the way, one of the interesting things about her is she was the main thrust behind ImageNet, which was the thing that put deep learning on the map. So back about two decades ago she was doing research on image recognition, and they built a really interesting labeled data set. In classical machine learning, what you do is you'd have data and you'd have to have labeling for it, and the idea would be that you're trying to get the machine learning algorithm to predict, in this case for an image, what the labels are for it, based upon its training set. So you would have millions and millions of images that have been labeled very painstakingly by hand by people. And she was able to do that, actually, using the power of scaling through Google and getting data, and then also using a distributed, kind of gig-economy-type workforce to build that. She talks about this in her book, which is really interesting.

Speaker 1:

But what ended up happening was there was an entry. She had created the contest called ImageNet, which was an annual computer science competition, where it was like, hey, you know, you can submit essentially any algorithm you want, and you can show us through your research how accurate your algorithm was for a set of benchmarks. And the team from the University of Toronto that came in with the AlexNet paper basically showed that they had this deep neural network, trained actually on two GPUs, you know, in a lab at the university, and were able to blow away everyone else, and from there it exploded into the whole deep learning thing. So she's been, you know, deep in all of that for a very long time. Brilliant lady and super interesting person. So I think she'll probably do well with whatever she does, is my prediction, but I think a lot of other people are putting energy into this as well.

Speaker 2:

I'd like to talk a little bit about the training of a world model, because for me it seems a bit like a paradox, in that we use text to train generative text models, images to train image models, video to train video models, and sometimes a diverse set of those to train models. But how can we train a spatially intelligent model when we're using these other modalities? Does that make sense?

Speaker 1:

Yeah, your question makes a ton of sense, and I wish I had a good answer for you. That's an area where, you know, I think a lot of the researchers that are working deep in this haven't yet published a ton, and I don't know how much will be published. I suspect, and this is purely speculation on my part, that there will be some kind of an MoE-style approach to this, or mixture-of-experts-style approach, where, you know, you imagine a model that's trained in a more traditional manner that has a rule set that's, like, physics related. Then you have other models that are more generative in nature, and maybe collectively they're able to have hyper-accurate predictions of the world around us. So I think the data set will probably be one of the most unique challenges of training these kinds of models. There may be a need for novel model architectures as well that are particularly efficient at dealing with this kind of data. A lot of the data that we've been using, whether it's image or text or video, as well as DNA and other things, is sequential in nature, and you're able to say, hey, let's put it into a sequence, which then bodes well for the transformer architecture in terms of predicting the next token in that sequence, right, whether it's a pixel or a word or whatever the case may be, or even, like, using it for diagnostics, where you can put a patient's symptoms into a sequence and then use that to predict what their condition might be. But I don't know whether or not this type of problem fits into that kind of concept of a sequential model, so that would be interesting. I think there's a decent chance there's ways to encode it that way. The question is whether or not that will be efficient and whether it'll scale.

Speaker 1:

One of the interesting conversations around AI, particularly around world models, is, who are the companies that are likely to have the advantage here? And this is where people who have a lot of real-world data, like video, could potentially be interesting, especially if they have a lot of mobile video, where you have both video capture and telemetry-type data that tells you things about location, speed, stuff like that. And guess who has a lot of that? It's Tesla, and other auto companies as well that have camera systems in their cars more recently. But Tesla has by far the world's largest repository of video, and they not only have the video, but they also know the location where the video was taken and, perhaps more importantly, they know things like acceleration and velocity and all these other things that could be really interesting in building a world model. And also remember that Tesla is working on their humanoid robots, which would require a world model to be really effective in kind of non-industrial settings. So I think it's going to be an interesting race to watch for sure.

Speaker 1:

Getting back to your question, the short version is I don't actually know what people are going to do here, but I think there's going to be an explosion of research published in the next couple of years that will really give us more insight. The other thing we have to keep in mind is that we have kind of a tailwind behind us: all the AI stuff that's happened, the continuous compounding of Moore's Law that keeps happening, and new computing architectures. There was just an announcement from IBM, something called NorthPole, I think it is. I haven't read the paper yet, but they just announced some research with a new processing architecture that's dramatically more efficient than anything out there, even more so than the Groq platform that we've talked about. So you have that kind of stuff happening, which is going to open up the capacity to process just a ridiculously larger amount of data ultimately than what we've been doing.

Speaker 2:

Yeah, and this is more of a note that might be interesting for listeners, but you mentioned robots and humanoid robots. They don't have to be humanoid robots, but they interact with the 3D world essentially as they move through space, while the compute, or the brain of the robot, lives, by definition, in the digital world, or the 2D version of things. And so I think Fei-Fei Li was talking about spatial intelligence kind of being that connector, and if we could build natively spatially intelligent robots, we'd be seeing vastly different things, I think.

Speaker 1:

I think you're right. Yeah, I think there's definitely going to be some interesting opportunities around that. I think that, ultimately, the way computers represent information and make decisions, whether it's AI or more classical deterministic software, is based on an extremely narrow, limited amount of data, you know, compared to what we use as people, and so that can be good and bad. I think part of what we have, like, you know, evolved to do, and that's really a good thing because we wouldn't be able to survive without it, is the ability to very rapidly filter out a lot of noise from the signal. And that's where I think our perspective on how to design these algorithms is a little bit different, because computing resources have been very limited until recently, and they're still quite limited, especially if you look ahead and say, where will computing be in five years? So the opportunities to do things at scale are going to be quite different. I mean, another little, often lesser-known fact is that neural networks are not a new innovation. They scaled into deep learning in the early 2010s; that's when we started to see that really explode.

Speaker 1:

But the idea of neural networks goes back decades and decades. I mean, the first deep research into this occurred in the 70s and 80s. The problem was that compute was so tiny back then, so limited, that neural networks were not useful and they were considered a toy. And, you know, back in the late 90s there was another wave of people saying, hey, neural networks seem great, but compute still was very limited, and the amount of data we had, because this was like the early days of the Internet, was very limited. And, you know, once again, it's like these waves. So, in any event, I think that there is a lot of opportunity here.

Speaker 1:

The thing that I want to tie together for our listeners is, how does this relate back to the world of associations? And there's two comments I want to make there. First of all, even though associations are traditionally dealing with information and providing services like membership and education and so forth through the traditional means, you ultimately operate in the world just like anyone else. So you don't really know exactly how something like this would affect the way you operate your business or the services and products that your association may be asked to produce. But it's likely going to change, because when you think about intelligence in the spatial world coming online, the needs and the expectations of your members are going to change. And that's really the second point: when you think about a new innovation like spatial intelligence coming online, let's say in the next five years, there's going to be major progress there, perhaps similar to the last five years of what's happened with language, right? So if you hypothesize that that's going to occur, let's say, for the rest of this decade, what does that mean for your field?

Speaker 1:

If you're in a branch of medicine, what does it mean for your doctors and nurses and medical assistants? If you're architects or engineers, what does it mean for your world? If you're a material science organization, what does it mean for your members? And the list goes on and on. In every field there's some impact from all of these technologies, and what associations, in my opinion, have to do is to anticipate those and start thinking about the products and services that the members in that future state will need, and not necessarily to go build them right now, but to start thinking about what those products might actually be.

Speaker 2:

Listening to Fei-Fei Li speak just emphasized to me that we are on the cusp of an absolute explosion. It kind of felt like we had already lived the explosion, but hearing her talk about spatial intelligence, I don't think we've seen anything yet.

Speaker 1:

I agree.

Speaker 2:

Well, Amith, thank you for 50 episodes of the Sidecar Sync podcast. How exciting! We've got to celebrate. Everyone, thanks for tuning into this episode, and a reminder: if you're interested in attending Digital Now, compete in our contest that ends tomorrow, and check out the AI Learning Hub as well.

Speaker 1:

Thanks for tuning into Sidecar Sync this week. Looking to dive deeper? Download your free copy of our new book, Ascend: Unlocking the Power of AI for Associations, at ascendbook.org. It's packed with insights to power your association's journey with AI. And remember, Sidecar is here with more resources, from webinars to boot camps, to help you stay ahead in the association world. We'll catch you in the next episode. Until then, keep learning, keep growing and keep disrupting.