Sidecar Sync

Cartoon Avatars, Talking Robots, and AI That Draws | 78

Amith Nagarajan and Mallory Mejias Episode 78

Send us a text

In this vibrant and forward-thinking episode of Sidecar Sync, Mallory and Amith dive into the fun side of AI—exploring OpenAI’s new GPT-4o image generation capabilities, Hedra’s cartoon animation tech, and the jaw-dropping robotics work between Nvidia, Google DeepMind, and Disney. You’ll hear about Mallory’s creation of cartoon avatars of herself and Amith, animated with lifelike voices using Hedra and ElevenLabs, plus a lively discussion on how associations can use these tools for scalable content and member engagement. Plus, a look at how Disney’s droids and advanced physics engines are shaping the future of robotics far beyond entertainment.

🔎 Check out Sidecar's AI Learning Hub and get your Association AI Professional (AAiP) certification:
https://learn.sidecar.ai

📕 Download ‘Ascend 2nd Edition: Unlocking the Power of AI for Associations’ for FREE
https://sidecar.ai/ai

📅  Find out more about digitalNow 2025 and register now:
https://digitalnow.sidecar.ai/ 

✨ Power Your Newsletter with AI Personalization at rasa.io:
https://rasa.io/products/campaigns/ 

🛠 AI Tools and Resources Mentioned in This Episode:

ChatGPT-4o ➡ https://chat.openai.com
Hedra ➡ https://www.hedra.com/
Nvidia Reveals Project GROOT and Disney Robots ➡ https://www.youtube.com/watch?v=51TYhPJ4zys
Amith’s AI Comic Strips ➡ https://shorturl.at/ItvQZ

Chapters:

00:00 - Introduction
05:30 - GPT-4o Image Generation: Features and First Impressions
07:28 - Creating Comics and Infographics with GPT-4o
11:40 - What Makes GPT-4o Truly “Omni-Modal”?
12:53 - Animating Avatars with Hedra: Cartoon Hosts Go Live
17:09 - Clip Reveal: Cartoon Mallory and Amith Talk AI
27:43 - Disney, DeepMind & Nvidia’s Plan for Entertainment Robots
36:30 - Should Associations Start Prepping for AI Robotics?
40:35 - Advice for the Next Generation in the Age of AI

🚀 Follow Sidecar on LinkedIn
https://linkedin.com/sidecar-global

👍 Please Like & Subscribe!
https://twitter.com/sidecarglobal
https://www.youtube.com/@SidecarSync
https://sidecarglobal.com

More about Your Hosts:

Amith Nagarajan is the Chairman of Blue Cypress 🔗 https://BlueCypress.io, a family of purpose-driven companies and proud practitioners of Conscious Capitalism. The Blue Cypress companies focus on helping associations, non-profits, and other purpose-driven organizations achieve long-term success. Amith is also an active early-stage investor in B2B SaaS companies. He’s had the good fortune of nearly three decades of success as an entrepreneur and enjoys helping others in their journey.

📣 Follow Amith on LinkedIn:
https://linkedin.com/amithnagarajan

Mallory Mejias is the Manager at Sidecar, and she's passionate about creating opportunities for association professionals to learn, grow, and better serve their members using artificial intelligence. She enjoys blending creativity and innovation to produce fresh, meaningful content for the association space.

📣 Follow Mallory on Linkedin:
https://linkedin.com/mallorymejias

Speaker 1:

Hey there, Sidecar Sync listeners. I'm Cartoon Mallory. While I might not have all the nuances of the real Mallory, this technology showcases how AI is transforming content creation.

Speaker 2:

Welcome to Sidecar Sync, your weekly dose of innovation. If you're looking for the latest news, insights and developments in the association world, especially those driven by artificial intelligence, you're in the right place. We cut through the noise to bring you the most relevant updates, with a keen focus on how AI and other emerging technologies are shaping the future. No fluff, just facts and informed discussions. I'm Amith Nagarajan, Chairman of Blue Cypress, and I'm your host. Greetings everyone and welcome to the Sidecar Sync, your home for content at the intersection of associations and AI. My name is Amith Nagarajan and my name is Mallory Mejias, and we're your hosts. And before we get into today's topics, which are super fun, as usual, but particularly fun today, we're going to take a moment to hear a word from our sponsor.

Speaker 3:

Let's face it: generic emails do not work. Emails with the same message to everyone result in low engagement and missed opportunities. Imagine each member receiving an email tailored just for them. Sounds impossible, right? Well, rasa.io's newest AI-powered platform, rasa.io Campaigns, makes the impossible possible. rasa.io Campaigns transforms outreach through AI personalization, delivering tailored emails that people actually want to read. This opens up the door to many powerful applications like event marketing, networking, recommendations and more. Sign up at rasa.io/campaigns. Once again, rasa.io/campaigns. Give it a try. Your members and your engagement rates will thank you.

Speaker 4:

Amith, how are you doing today? I'm doing great. How are you? I'm doing well myself. I'm also very excited for this episode. I don't want to spoil it, but we've got some fun conversations and demos lined up.

Speaker 2:

Pretty cool stuff. Yeah, you showed me some previews of what you want to show today and I'm excited about it. Some really interesting implications for this next evolution in this technology.

Speaker 4:

Yeah, I love that. I love that the impact of this technology will be transformative and just huge in general, but also that it can be so fun. I told Amith before we recorded, I had a moment thinking, is this really my job, to do this? Because I was having a blast. Amith, I'm curious if you can share with listeners: how has that content automation that we've been discussing for our AI Learning Hub for Members version been going?

Speaker 2:

It's really, really fun. It's a great project. We've been incubating this idea for quite some time now. It's been percolating in our minds for probably over a year in total, but really in development for a number of months.

Speaker 2:

And, for those who haven't heard us talk about it before, what we're doing essentially is trying to attack the problem of change across two dimensions for our learning content. I think all associations can relate to this with their own learning content: the minute you create it, learning content starts becoming stale because the information becomes out of date. And, of course, we're in the world of delivering AI education. So when you're teaching people about AI, this stuff is changing so crazy fast it starts to decay really, really quickly, and so we tend to rerecord our content in the traditional way, with people like Mallory and myself and other colleagues recording new versions of the same courses as well as new courses on a high-frequency basis. And that's a problem, because we want to be able to update our content even more frequently than we've been doing, which is roughly every six months. We do a complete refresh, and we think it needs to be updated probably every month or two, and so there's that issue. The other dimension of change is that we're starting to partner with more and more associations where we take that learning content about AI and we adapt it for their industries. So we'll make it AI for X, X being whatever your profession or industry is, and we're doing this as a really exciting revenue-share partnership with our friends in the association community.

Speaker 2:

We're super pumped about it, and so, of course, that's another dimension of change, because now you have all the changes happening in AI and then you have lots of variations of the content that essentially, are like specific versions for different use cases and different vocabulary that are, of course, very important to make it really hit home for a given audience, and so we wanted to solve that problem.

Speaker 2:

We wanted to make it possible to move really fast and do all this at the same time. Pre-AI, this would have been effectively an impossible task, but what we have with AI is the ability to generate high-quality audio and high-quality video and then to be able to stitch that together in a way that ultimately results in us being able to fully automate our production pipeline of content, from source material all the way to produced content on the LMS. I won't go into more detail on it because that's not the point of today's pod, but it is something I'm really fired up about. Over the next 30 days, we're going to start putting some of the content generated by this tool on our Learning Hub and start getting some feedback. We've already gotten some informal feedback from folks that's been super positive. So very, very excited about that, and hopefully it'll serve as inspiration for what associations can do as well.

Speaker 4:

Yeah, yeah, and it kind of fits with today's topic too, because we're talking about image and video, and I do have a question in there: maybe we'll want to start integrating some cartoons into the AI Learning Hub, potentially.

Speaker 2:

That sounds like fun.

Speaker 4:

All right. Today we are talking about some fun topics, as Amith said. We're talking about ChatGPT's 4o image generator and Hedra, which is a company I was not familiar with until I demoed it for this pod, and then we'll also be talking about Disney robots. I'll explain more on that in a bit. But first and foremost, ChatGPT's 4o image generation feature is OpenAI's latest advancement in generative AI, for now, offering users the ability to create and edit highly detailed and realistic images directly within the ChatGPT interface. So here's an overview of some features and functionality. It excels at creating photorealistic images, editing existing ones and rendering text within images, a major improvement over earlier models like DALL-E. It supports multi-turn conversations for refining images, allowing users to iterate and improve their creations. Users can specify details like aspect ratios, colors using hex codes, or even request transparent backgrounds. The model can transform uploaded reference images or use them as inspiration for new creations. It's capable of generating a wide variety of content, from cinematic landscapes and logos to creative art styles. It can also handle practical applications like designing infographics or mockups for businesses, or even comic strips. The tool is available now for all users, including those on the free tier, though free users are limited at this moment to generating three images per day. Paid subscribers on the Plus and Pro plans gain enhanced access with fewer restrictions.

Speaker 4:

I want to take a little pause here before we talk about Hedra. Amith, I know you have done some experimenting with the image generator that we talked about recently. You built a comic strip. What was your experience like?

Speaker 2:

Yes, I did this the night that this product was announced by OpenAI, and in fact I posted this comic strip on my LinkedIn. We can link to that in the show notes, but if you follow me on LinkedIn you can go to my profile and see it. By the time you listen to this it might be a few weeks old, but I thought it would be an interesting test because I wanted to take some concepts from what we talk about a lot in the association world and see if we could create something kind of sort of funny, but really also see if the image generator could stitch together several panels in a comic, actually keep the scene logical, where you're not having different characters appear and disappear, and put in text. As you mentioned, that's been the big hole in all of these image generators: none of them have been able to do really high-quality, reliable text.

Speaker 2:

In fact, with prior image generators, you couldn't even reliably tell them not to emit text at all, so a lot of times they would throw text out there even when you said please don't do that, because your intention was to take the image into a design tool and add text yourself. Now you can. I can say my immediate reaction was, oh my gosh, this is amazing, because I was able to create a comic strip where the copy in it wasn't necessarily funny, but the fact that it did all of that in one shot, in maybe 60 seconds or something, was really phenomenal. So it's definitely a next-level capability in terms of image generation. And the thing I got really excited about shortly after that was things like infographics or other kinds of typical business communication tools that normally you wouldn't build, but now you can start to build them for all sorts of different things to improve your communications.

Speaker 4:

I think I've told you all at this point, Midjourney is my preferred AI image generator, and I think there are still use cases for it. I'm interested to see how Midjourney updates in response to this ChatGPT 4o image generator. I've got to double down on what Amith said about the text. It is fantastic. I kind of thought when I gave it instructions to put Sidecar Sync, for example, in an image, maybe it would take me a few tries. No, it got it perfect the first time with very minimal direction.

Speaker 4:

I also think the ability to iterate with it is so helpful. In Midjourney you kind of have to do it in a one-shot prompt. You can do subtle variations or big variations within Midjourney, but you kind of have to get the prompt right the first time. With this one you can go back and forth and say, oh, could you brighten up the colors a little bit? Can you do XYZ? So it's much more user-friendly. I find that it still has an AI look, if you know what I mean; to me, you can now tell which images have been generated with GPT-4o. But overall I'm really, really impressed with this. I still think I will use Midjourney, maybe for more realistic images, but in terms of creating cartoons, I'm very impressed with 4o.

Speaker 2:

Well, I think that's a great use case, because cartoons are a fantastic way of making people remember stuff, especially if the copy in there is well written, and of course we know language models can help you quite a bit with that. I do want to point out one quick thing before we move on about the 4o image generation, a little bit of subtlety that might not be obvious to everyone. Up until now, tools like ChatGPT that have had image generation as one of their features, as part of a language model interaction, have actually been calling out to a separate image generator model whenever they need to generate an image. So in the case of OpenAI and ChatGPT, they would use DALL-E 3 under the hood, so you could basically use DALL-E from ChatGPT. And actually, if you remember a little bit earlier in time, you didn't have that ability to generate an image. You'd have to go to a separate page for DALL-E and you would prompt it directly. With DALL-E 3, they embedded it in ChatGPT, and in fact ChatGPT rewrote the prompt to DALL-E, but it was actually calling a different model. The point is, the DALL-E image model had no idea about anything in the conversation other than the enhanced prompt that ChatGPT sent to it. So that's how it worked up until the moment in time when we got to 4o.

Speaker 2:

Now, when OpenAI released GPT-4o and later the ChatGPT that used it, people were like, what is this 4.0? People call it 4.0, which, by the way, is not the name. It's 4o. The O stands for omni, and the intention behind it is that it's omni-modal, right, and that means multiple modalities in, multiple modalities out. And so the original 4o could take in images and it could describe them to you and use them as part of a prompt. It would understand the images, but it would not emit images directly.

Speaker 2:

Now the 4o omni model, as far as I know, is the first model that does this. It's a single model that's able to have both inputs and outputs in various modalities. Why does that matter? Well, by having an omnimodal model, that means the model understands the full conversation. So what you described, the ability to have iterative, continuous improvement for that image to represent kind of holistically what the conversation is about, that's a very, very powerful layer of dimensionality that image-only models are going to have a really, really tough time dealing with. So my prediction on this is that pure image models, like just Midjourney, unless they do some really amazing magic, are going to have a hard time dealing with this, because they just don't have the context that an omnimodal model has when interacting with the user, at least for most use cases.

Speaker 4:

Yeah, that's a really interesting distinction. I love Midjourney, so do better, guys. I want to be able to use both. All right, I want to introduce kind of a subtopic to this topic, which is Hedra. So Hedra is an AI company specializing in generative video creation. Founded in 2023 by two Stanford PhDs with experience at NVIDIA, Google and Meta, the company aims to democratize video production by making it safer, more expressive and accessible to creators of all skill levels.

Speaker 4:

Character-3 is Hedra's latest omnimodal model and processes text, audio and images simultaneously, enabling seamless integration of storytelling elements in a single workflow. There's also Hedra Studio, which is a unified platform that combines various AI tools for video production. It allows users to create customizable avatars with unique appearances, voices and personalities, while offering real-time previews and intuitive controls. Key capabilities within Hedra are the ability to transform static images into lifelike characters that can speak, sing or rap. There's support for multilingual text-to-speech inputs, and you also get fast video generation, like 60-second videos from 300 characters of text with lifelike expressions and synchronized movements.

Speaker 4:

So I'm wondering if you all can kind of deduce where we're going with this. We talked about creating images with 4o and ChatGPT, and then we've talked about using Hedra to animate some images. So I'm going to share my screen for those of you who are joining us on YouTube, and then I'll try to talk my way through this as best as I can. I'll show you the finished project at the end, but I just wanted to show you my process a little bit and how I created this. So right now I'm using ChatGPT 4o, and I actually took my headshot that I use on LinkedIn and also that we use in the Sidecar Sync cover. I dropped it in here and I gave it a really simple prompt just to see what it would do with it, and I had not run this experiment prior.

Speaker 4:

So, turn me into a cartoon wearing a yellow cap that says Sidecar Sync. I did get an error the first time, which might have been my internet connection, but I asked it to try again, and you all have to check this out. I'll probably post this on LinkedIn. I get an exact replica of my headshot, even down to the leaves in the background, because I took this in front of a bush where I live, down to my earrings, I'm pretty sure. Yes, it is very detailed, and I have this yellow cap on that says Sidecar Sync. Nothing misspelled, nothing looks crazy, and my exact outfit. So that was step one. I then had to do the same thing with you, Amith. I didn't get your permission to do this, but I feel like we're co-hosts, so I'm just allowed to.

Speaker 4:

I said, do the same thing for this image, so I didn't even necessarily repeat the prompt, but I said, you know, please include the Sidecar Sync cap, and then we get an Amith version. I don't feel like this one looks a ton like you, Amith, but it did get the bricks in the background of your headshot. It even got the stripes on your shirt. Pretty impressive all the way around. And then I went a step further and asked it to create a little infographic for the Sidecar Sync podcast with both of those. So, Amith, you were talking about context. This is pretty impressive, that it then used both of those images to create a new one, and it says Sidecar Sync Podcast, the intersection of AI and associations. So that was kind of step one to this.

Speaker 4:

Then I went to Hedra. This is inside the Hedra platform. I had to do this separately. So first I dropped in my avatar, and then I generated a script. I used Claude to help me generate a really short and kind of funny script, which you'll see really soon. I was also able to choose the voice. Lots of options here. I did some digging, because the voice is pretty good, and I had a hunch that they may have used ElevenLabs, and they do. So you're actually getting ElevenLabs audio within the Hedra platform, which is great. I typed in my script here, I chose the voice, and then I said she's an enthusiastic podcast host. I pressed generate, it probably took about three to five minutes, and I got my clip. I did the same thing with Amith's clip, and right now I am going to stop sharing and we are going to play the clip for you all.

Speaker 1:

Hey there, Sidecar Sync listeners. I'm Cartoon Mallory, created using ChatGPT's image generator and brought to life with Hedra's animation technology. Pretty wild, right? While I might not have all the nuances of the real Mallory, this technology showcases how AI is transforming content creation. Just think about the possibilities for your association: personalized welcome messages, multilingual communications, or scaling your educational content without having to be on camera all the time. But hey, don't just take my word for it. Let's check in with my cartoon colleague, Cartoon Amith. What do you think about this technology?

Speaker 5:

Thanks, Cartoon Mallory. I have to say, being a cartoon is quite liberating. No bad hair days, and I can present from anywhere without leaving my desk. On a more serious note, what excites me most is how this technology could help associations with limited resources create professional video content at scale. Think about educational modules or personalized outreach that would otherwise be impossible to produce. The technology will only get better from here, and forward-thinking organizations should start experimenting now. Back to you, real Mallory and Amith.

Speaker 4:

Amith, what did you think when you saw Cartoon Mallory and Cartoon Amith?

Speaker 2:

It put a big smile on my face, and it was so funny. I knew that you were preparing for this pod and working with these kinds of tools, and I'm like, that's so awesome, because it's fun, and fun things obviously are enjoyable, but they also really open up a creative avenue that you might not otherwise choose to exercise. In some ways you think, oh, the association is in a serious business, and yes, you are, but your members also like fun stuff. And you can also think of this as, okay, well, what else can you do with this? Maybe it's not always cartoons, but I think cartoons can be a very powerful way of communicating something important, something serious, in just a really expressive, interesting way.

Speaker 2:

It also shows off omni-modal models really well, in that you had this conversation, just natural conversation that you're having with the model. It knew what you had said earlier. It knew what it had generated in terms of the images it outputted. It knew about your input images. It all just blended together. We've said on this pod a number of times that at some point it won't be this kind of model, that kind of model. It'll just be AI, or an AI model, right, and all these models kind of blend together, and that's where things are going. So it's pretty impressive what you were able to demonstrate, and if you think about where we were even maybe six months ago, I don't think either of us would have predicted that we'd have it this quickly, that this level of capability would be in our hands, and it's essentially free.

Speaker 4:

Yeah, free. And I'm pretty sure it was last week or the week before, Amith, on the podcast, I'm just not sure if that episode is out yet, that we were talking about how the next step will be creating a video version of these cartoons, and literally it was already available. It goes back to the idea of breaking our brains. We've got to just constantly challenge ourselves.

Speaker 2:

Totally.

Speaker 4:

What does this make you think of next?

Speaker 2:

Next, we need like a hologram or something. Uh-oh.

Speaker 4:

I'm sure we could do it if we really wanted to. We could probably have a hologram at digitalNow. You mentioned that this is a fun technology. I had, frankly, a blast with it. I highly recommend, if you have some interest in this stuff, and I think you do since you're listening to this podcast, that you play around with it. I think it could also be a really fun tool to create a cartoon for your family, for your children, if you have them. But where does your mind go for associations here, Amith? Zooming out to the broader landscape and trend lines, what does this make you think of for associations?

Speaker 2:

Well, first of all, I'm just thankful that you didn't make my character sing or rap, so thank you.

Speaker 4:

Okay, if I had known, I probably would have made your character sing or rap. I didn't know I could do that, but next time.

Speaker 2:

That'll be in a future episode, I guess. So, in terms of what I think happens with associations, I think about this from a more general lens, which is, how do we communicate, and how do we communicate with each other effectively in an interesting way? How do we communicate something that might be kind of dry in a little bit more exciting way? So you take some key concepts, whether you're communicating with kids or adults, whether it's a professional context or personal. If we can communicate, if we can express ourselves in different ways, ways that go beyond our personal creative ability, right? Like for me, I can barely do stick figures, but now I can express ideas I have in all sorts of cool artistic ways. Is it real art? Is it not real art? I'll leave that to the philosophers and the artists to decide. I know there's a lot of controversy about this, but it allows people like me, who have zero artistic capability, to have an outlet where I can communicate ideas in a completely different way than I've ever been able to. So that is exciting, and that leads to an organizational capability to say, how can we best communicate, how can we best educate, how can we best deliver content? Those are the types of things that are so exciting for associations, because associations are in the business of educating.

Speaker 2:

Of course, communicating is fundamental to everything, but educating is really where the applications are. And then, of course, connecting people. So, you know, connecting people for professional networking, or connecting people to collaborate as volunteers on a committee, or connecting people to, let's say, collaborate on a standard, where maybe competing organizations in an industry are coming together to work on an open standard to advance the industry, and on and on and on. There are so many ways that connecting us is such a critical part of what associations do. So to me, it's all those things. I think some very immediate, obvious applications are: think about your learning content. Think about adding some more dynamic elements to pretty static learning content. I know we're going to be doing this with all of our AI content in the Sidecar AI Learning Hub. We're going to have a lot more fun stuff like this to illustrate concepts, because why not? It would have been prohibitive to create cartoons, or to create tons of infographics everywhere, or to create whatever, but now we have all these additional creative expression outlets. To me, that's the real idea here.

Speaker 4:

And, not to get into the philosophy of the art side, though that side is interesting to me for sure, it's not like Sidecar would have gone out there and contracted an individual to animate all of our content. We just wouldn't have done it. We would have recorded it ourselves, to be frank. So I do agree with you that this is opening doors to things that we never could have considered. You did mention the AI Learning Hub content automation, Amith, and I'm just curious, from your perspective, would you consider incorporating cartoons? I know we're using human AI avatars right now, but would you use both, either or?

Speaker 2:

Oh, a hundred percent. I would love to see that happen, because to me it's about how effective you are at getting the material across in a way that the person can relate to, can understand and will retain. That doesn't mean everything's a cartoon, because then you kind of kill the point, but if you introduce bits and pieces here and there, I think people look forward to that. You have a lesson like, say, data in the age of AI, which is super important but maybe not the most compelling, exciting topic in the world. Don't watch that video at night. But what if we had a little cartoony thing in there, right? Or maybe the cartoon comes to life, and each of the panels in the cartoon individually comes to life as an animation with cool audio, and there's some singing in there or whatever. There are all these ideas, and if you can do that effectively just by thinking about the idea, or even have the AI help you brainstorm the idea, that becomes really powerful. It makes us more effective at what we do. So we 100% will be experimenting with this across Sidecar's AI learning content. You'll probably start to see this stuff in our LinkedIn posts. You'll see it in blogs. You'll see more and more and more of this content. And think about it this way too: you are competing for attention. You have competitors, by the way, even if you don't have any in your market, in terms of the attention that people give you, because their time is split amongst however many different things they look at. So if you're doing this and others are not, you have a significant advantage, and that advantage may not last forever, but it lasts for a period of time, and I think that's exciting as well. There is one other thing I want to quickly mention about content modalities like this that I think is underappreciated, and it's an emerging area that I predict is going to become a much bigger deal in learning in the next, say, 12 to 24 months, and that has to do with interactive games or interactive role-play-type scenarios. So think about this: if you were developing a learning module for your members, and most of it is, let's say, one-way, single-directional content, like pre-recorded videos or downloadable assets they could read, worksheets, that's all great, and there's a bulk of content, but sometimes you say, hey, this particular concept is so important, we're going to have some kind of interactive learning exercise, right?

Speaker 2:

And a lot of times these things are, frankly, pretty lame; they're not very interesting. But what if you could have an AI just generate a game specific to that lesson? Right, where you say, oh, for this particular lesson on AI data platforms, we're trying to teach people how they can connect all their disparate data sources to the AI data platform. Let's give them a game of some sort, and we don't even specify what. We go to Claude 3.7 or we go to Gemini 2.5 Pro, both are outstanding at this task, and we say, we have a learning module that has this kind of content.

Speaker 2:

We want you to think up five ideas for games that will reinforce these concepts, and not only will it give you those ideas, but it'll actually build the game for you, and in an interactive artifact in Claude, you'll be able to play the game and see if you like it. And if you like it, you can literally copy and paste it into your LMS. So the opportunity is there. Of course, you could have done this in the past, but the production budget for what I just described probably would have been six figures for a single such game, right, maybe higher. And now, all of a sudden, it's essentially free. So again, the thematic concept here, the broader stroke we're talking about, is going from scarcity to abundance, and that's extremely powerful.

Speaker 4:

And if you're still a skeptic of delivering rather serious information through fun mediums, I've got to shout out Ninjio, which is the company that we use for cybersecurity training. All of their lessons are done with cartoons, and obviously cybersecurity is quite a serious topic, but they're really good. I enjoy watching those videos, I get a lot out of them, so be open-minded. All right. Topic two for today is another fun one. I'm calling this the fun episode because we're talking about Disney robots. So NVIDIA, Disney Research and Google DeepMind have announced a collaboration to develop Newton, an open-source physics engine designed to simulate robotic movements in real-world environments. It was announced by NVIDIA CEO Jensen Huang during the GTC 2025 conference, or GPU Technology Conference, in San Jose, California. This partnership aims to revolutionize Disney's next-generation entertainment robots, including the Star Wars-inspired BDX droids, which are set to debut at Disney theme parks worldwide starting in 2026. I'll add an aside here: if you haven't seen the video on YouTube of the keynote where he brings them on stage, you've got to check it out. They're adorable. They're not scary at all. They're really cute. Some key features of Newton: it allows developers to program robotic interactions with various objects like food items, cloth, sand and other deformable materials. The engine is designed to make robots more expressive and capable of handling complex tasks with greater accuracy. Newton integrates with Google DeepMind's robotics tools, which simulate multi-joint robot movements. Disney plans to use Newton to enhance its robotic character platform for lifelike and interactive robots. The BDX droids showcased during the keynote represent just the beginning. Disney Imagineering SVP Kyle Laughlin stated that this collaboration will enable the creation of robotic characters that are more engaging and capable of connecting with guests in uniquely Disney ways.

Speaker 4:

Beyond entertainment robots, though, Newton has potential applications in industrial robotics, AI-driven humanoid assistants and manufacturing systems. It addresses the sim-to-real gap, enabling robots to learn from simulations and adapt their movements for real-world conditions. In addition to Newton, NVIDIA unveiled GROOT-N1, an AI foundation model for humanoid robots aimed at improving perception and reasoning capabilities. The company also introduced next-generation AI chips, Blackwell Ultra and Rubin, and a new line of personal AI computers. So, Amith, this is another fun topic that I think still has quite serious implications. We've been keeping an eye on robots basically since the inception of the Sidecar Sync. Why do you think it's important for our listeners to keep an eye on this?

Speaker 2:

Well, everything we talk about with AI is in the digital world, and so when we have robotics, we're able to connect AI with the physical world. Or, put another way, it's going from bits to atoms, right? So it makes it real in our minds and it becomes physically real to us. I think the applications are enormous. There are a lot of things in the world that are not solvable purely with digital solutions. As much as we say that we want everything to jump on digitization, and therefore jump on the back of Moore's law and the Internet and now AI, there are certain things that don't work that way. Like, how do you take care of an aging population? How do you take care of more and more people that need medical care, who either can't afford it or there aren't enough doctors and nurses to go around, right? So that's one application that I think robotics is going to be very, very interesting for. And there are other things too. Think about all the different things you have happening in your house. Either you don't want to do your own laundry, or perhaps you can't. What about something to help you do that that's affordable and scalable and all that? So there are a lot of things in life where robotics will be extremely powerful.

Speaker 2:

I think the consumer-facing stuff is interesting, because if you think about advanced robotics, thus far it's either been demo videos that you've seen at shows or online, or you've seen it in industrial settings.

Speaker 2:

Robotics has been much more of a fixed-location thing, where you have a robotic arm doing auto manufacturing or something, but nonetheless robotics.

Speaker 2:

But those robots were kind of pre-AI, in the sense that they were much more specifically programmed to do very, very specific tasks. Whereas part of what has been announced here, that's kind of under the hood, if you will, of the cute Disney robots, is this physics engine and this new foundation model for robotics that integrates with all the other work that DeepMind's done. And it comes through this interesting collaboration between, obviously, a consumer brand that's pioneering in this kind of stuff with Disney, Google's DeepMind, who have some of the best AI researchers in the world, and, of course, NVIDIA, who's done tremendous things with compute and hardware.

Speaker 2:

So it's worth watching, and I think the ability for a platform like this to be available for other people to build on top of is also exciting, because there will be a proliferation of robotics if you can make it easier and way less expensive for anyone who has an idea for robotics to just build on top of a generalized robotics platform, which is what NVIDIA and Google are after. Disney's after, you know, cute robots in the theme park, to get people to come to

Speaker 2:

Disney World, and other applications as well, and movies and so forth. But ultimately, for Google and NVIDIA, they're after a generalized platform.

Speaker 4:

So would it be fair to say maybe robotics will have less of an impact on internal association operations, but potentially a huge impact on their members?

Speaker 2:

I don't know. I mean, I think it depends. Anything where the movement of something in the physical world is necessary could be something that robotics helps with.

Speaker 2:

So, you know, I don't know if there are certain associations that have inventory in their office or whatever they're doing, probably less and less of that these days. But I definitely think in the field where people are working, in fields that are fundamentally people-facing or in industrial settings, 100 percent. And I think this is going to affect all of us in our daily lives a lot faster than we probably anticipate. I wouldn't be surprised if by the end of this decade it's fairly commonplace, still probably a little bit on the expensive side, but fairly commonplace, to have some consumer-type robots in the household doing some basic tasks. Wouldn't surprise me at all, and if they're cute enough, like the Disney robots, people might be accepting of them.

Speaker 4:

I might get one if it does my laundry and it looks like that.

Speaker 2:

I might be open, we'll see. We're talking about something that we thought would be, number one, interesting to our audience, and probably interesting to most people, that this is going on. And this is in addition to and on top of all the other advancements in humanoid robotics that have been going on, and we've talked about that a little bit on this pod. There's so much going on there, there's so much investment happening, there's so much research happening there, along with the continual progress in industrial robotics. All of this is coming together because the common denominator now is that you can turn it into software that actually drives these things. You can turn it into general-purpose foundation models, and the physics engine kind of complements that, right?

Speaker 2:

So a physics engine is deterministic. It's something that says, hey, we know this is the way things will react, based on the laws of physics, essentially. It's basically an engine, in a way like a game engine, the way certain things work in video games, but this is a real-world, three-dimensional, real-time, open-source physics engine. That's a lot of words to basically say it's a piece of software that understands how the world works, whereas AI models don't really understand that. Some people are out there trying to create neural networks that have a better understanding of the real world, and I think that's great, but ultimately you want the neural network to be complemented by a deterministic piece of software that has a rules engine that says, look, this is actually the way that physics would work in that situation, which is going to be a lot more consistent and reliable, and then leverage the AI to make good choices about what's the right action or reaction to have, based on what's happening.

Speaker 2:

So I think it's a great combination. It's what a lot of people have been talking about, the natural progression. There's nothing really novel about what they're trying to do, but I think the combination of skill sets is really interesting, because you're doing it in a way where you have innovators in three different dimensions coming together to partner, so I thought that was particularly interesting too. Ultimately, for associations, I can't say that this has specific applicability to you tomorrow. It's more like, over the next five years, 10 years, your profession, your industry, will likely be affected by this, and maybe there'll be some applications for you within your life or within your own work.

Speaker 4:

So, in the situation where an association's industry or profession is surely going to be impacted by robotics, let's say manufacturing, for example, what responsibility do you think the association has at this point? I feel like there are a lot of question marks still. We're seeing a lot of advancements and investment in this space, but it's not like we're seeing major disruption occurring right now. Do you feel like associations should get ahead of the curve and start producing content about that? What do you think?

Speaker 2:

Yeah, I mean, associations should be the voice of their sector and should be looking at things that are going to affect the sector and help the sector prepare, help the sector advance itself by taking advantage of these technologies. So that's generally a true statement across all the different fields associations represent. What I would say with this is that it might be some of the less obvious industries. So, for example, industrial automation, robotics manufacturing, auto manufacturing in particular, but a lot of other manufacturing has had a heavy emphasis on automation and robotics for a long, long time. So it's definitely not a novel idea to talk to manufacturing leaders about how to add more robotics, or same thing with industrial warehousing, like Amazon and lots of other warehouses have that. Where I think it's probably less anticipated, but where this advancement will be more likely to have an impact, is some of the softer, fuzzier environments. If you're in a factory or if you're in a warehouse, there's a high degree of predictability, or a higher degree of predictability, in terms of your environment, your surroundings, the kinds of objects you're dealing with. It's a lot more mechanistic. Whereas what about housekeeping? What about painting? What about delivering food? What about all the things where you're in the real world, where there are all these chaotic scenarios happening, much of it caused by us as people, but also just the weather or, you know, New Orleans potholes or whatever, right? So you have all this chaos, and having robots that are smart enough to handle that and to still accomplish whatever their task is, is interesting. So a world may come up in the not-too-distant future where certain types of tasks, like folding laundry, are handled by robots, and if you take that one and extrapolate from it, well, it can probably also wash your floors, it can probably also wash your windows, it can probably also make your bed, maybe, and do some other things. So that's interesting.

Speaker 2:

But then what are the other applications of that? So, at kind of an industry level, what does that mean for the house cleaning industry? Does it mean everyone has a housekeeping robot? Does it mean that there are businesses that start up that lease these out or rent them out, or you have a service where the housekeeper that comes to your house is a robot? What about the industrial setting, right? So there are all these kinds of questions, and that's one example. What about in areas where there are tremendous labor shortages, like when Hurricane Ida came through New Orleans a handful of years ago? I think you were still here when that happened, right? Yeah.

Speaker 2:

It was terrible for a lot of reasons.

Speaker 2:

The power was out for three or four weeks, but it also caused a lot of roof damage, and it took forever to get a roofer to come to your house. It's a dangerous occupation. There are not that many people who do it. Normally they don't have demand at the level they did, right? So of course everyone's mad: I can't believe it's going to take me six months to get my roof repaired. Well, normally they don't repair half the roofs in the city all at the same time, right? So it kind of makes sense.

Speaker 2:

But what if there were some general-purpose robots that could learn to be roof installers very quickly? And there's no issue with risk or hazard or safety or insurance requirements for these robots to be on your roof, other than them falling on your head or something, but much reduced insurance concerns compared to humans on your roof. And all of a sudden, everybody's roofs are repaired way less expensively and way faster, right? So that's an interesting scenario where you have these choke points, or disaster relief, right, or search and rescue. There are just so many applications where robotics could not just save us money, which I think by itself is obviously going to be the drumbeat, economic decision-making will always be there, right, but also do things that we could not do but for the scale of this kind of emerging technology. That's where I think it gets more exciting.

Speaker 4:

That's a great point, Amith. So I feel like AI is going to be able to do the knowledge work and the physical work. Last question for today's episode, because I heard you mention this briefly. I don't think it was on the podcast, but you mentioned to me that you were doing some college tours with your son, so he is starting to think about what he's going to do in his future. Amith, what are you recommending to your children in terms of career paths that they go down, when we're having these kinds of conversations?

Speaker 2:

You know, it's a tough one. I think part of it is the follow-your-passion kind of thing matters to some extent. I mean, it matters as a person. But in terms of career opportunity, I guess, is how I'm trying to answer your question. My thought process for both of my kids, and they're both getting closer to that time, is: pick something where you get really good at communicating and you're also really good at connecting with people, where your interactions with people are really a lot of the value. And that can be said to be true for a lot of different professions.

Speaker 2:

But I think that AI is going to do most of the cognitive labor for most people very soon. I don't know if that's in three years or in seven years, but in some ways we're already there with a lot of the things we're doing. It's also going to expand the range of possibilities. This episode, in a lot of respects, is the fun episode. It's also the possibility-expanding episode, because with the things we can do with AI, it's not like we're displacing tons of graphic artists to create all these cartoons. We just never would have created the cartoons. Of course, the inverse of that is also true. People who were creating cartoons will use this tool instead of hiring graphic artists. So there is that issue, but the demand for something that is becoming abundant is enormous, and that's going to create more ideas and more opportunities.

Speaker 2:

We tend to be pretty good, we collectively, as a species, tend to be pretty good at coming up with new ideas. So being in a creative pursuit of some sort, or having a creative thread to a discipline, matters. So I'm just trying to get my kids to maybe try a couple of different things and to really focus on just getting good at building relationships and communicating. Certainly hard skills and specific disciplines are important too. That's a very generic answer in a way. But, you know, ultimately I think that's going to be what's super, super important for us.

Speaker 4:

So lean on being human, lean on humanity, it seems like, is kind of the advice. Well, everybody, thank you for tuning into this fun but serious episode. We will see you all next week.

Speaker 2:

Thanks for tuning into Sidecar Sync this week. Looking to dive deeper? Download your free copy of our new book, Ascend: Unlocking the Power of AI for Associations, at ascendbook.org. It's packed with insights to power your association's journey with AI. And remember, Sidecar is here with more resources, from webinars to boot camps, to help you stay ahead in the association world. We'll catch you in the next episode. Until then, keep learning, keep growing and keep disrupting.