Sidecar Sync
Welcome to Sidecar Sync: Your Weekly Dose of Innovation for Associations. Hosted by Amith Nagarajan and Mallory Mejias, this podcast is your definitive source for the latest news, insights, and trends in the association world with a special emphasis on Artificial Intelligence (AI) and its pivotal role in shaping the future. Each week, we delve into the most pressing topics, spotlighting the transformative role of emerging technologies and their profound impact on associations. With a commitment to cutting through the noise, Sidecar Sync offers listeners clear, informed discussions, expert perspectives, and a deep dive into the challenges and opportunities facing associations today. Whether you're an association professional, tech enthusiast, or just keen on staying updated, Sidecar Sync ensures you're always ahead of the curve. Join us for enlightening conversations and a fresh take on the ever-evolving world of associations.
Meta Launches Llama 3.3 & Text-to-Video Becomes Reality with Sora by OpenAI | 60
In this episode of Sidecar Sync, Amith and Mallory unpack the latest advancements shaping the AI and association landscape. From Meta's groundbreaking Llama 3.3 model, which challenges scaling assumptions by matching far larger models with a fraction of the parameters, to OpenAI's launch of Sora, its highly anticipated text-to-video AI model, we explore how these technologies redefine the boundaries of innovation. Discover why smaller AI models are taking the spotlight, how Sora's creative capabilities might revolutionize storytelling, and the implications for associations embracing video at scale. Tune in for an insightful discussion on these exciting AI breakthroughs!
🔎 Check out the NEW Sidecar Learning Hub:
https://learn.sidecarglobal.com/home
📕 Download ‘Ascend 2nd Edition: Unlocking the Power of AI for Associations’ for FREE
https://sidecarglobal.com/ai
🛠 AI Tools and Resources Mentioned in This Episode:
Llama 3.3 ➡ https://www.llama.com/
Sora ➡ https://openai.com/sora
HeyGen ➡ https://heygen.com
Hugging Face ➡ https://huggingface.co
Chapters:
00:00 - Introduction
03:16 - Overview of Meta’s Llama 3.3 Model
06:51 - Benchmark Comparisons and Cost Impacts
10:53 - Multi-Agentic Applications with Llama 3.3
17:12 - Transitioning Models in Real-Time
19:41 - Redefining Scaling Laws in AI
24:03 - Launch of OpenAI’s Sora
30:07 - The Creative Potential of Text-to-Video AI
36:29 - Consumer Expectations in AI-Driven Video
42:02 - Evaluating ChatGPT Pro Subscriptions
🚀 Follow Sidecar on LinkedIn
https://linkedin.com/sidecar-global
👍 Please Like & Subscribe!
https://twitter.com/sidecarglobal
https://www.youtube.com/@SidecarSync
https://sidecarglobal.com
More about Your Hosts:
Amith Nagarajan is the Chairman of Blue Cypress 🔗 https://BlueCypress.io, a family of purpose-driven companies and proud practitioners of Conscious Capitalism. The Blue Cypress companies focus on helping associations, non-profits, and other purpose-driven organizations achieve long-term success. Amith is also an active early-stage investor in B2B SaaS companies. He’s had the good fortune of nearly three decades of success as an entrepreneur and enjoys helping others in their journey.
📣 Follow Amith on LinkedIn:
https://linkedin.com/amithnagarajan
Mallory Mejias is the Manager at Sidecar, and she's passionate about creating opportunities for association professionals to learn, grow, and better serve their members using artificial intelligence. She enjoys blending creativity and innovation to produce fresh, meaningful content for the association space.
📣 Follow Mallory on Linkedin:
https://linkedin.com/mallorymejias
The pace of innovation is not slowing down. It's probably speeding up, but it's at minimum maintaining pace, which is a ridiculous pace to begin with. The costs are coming down, so much so that it'll not be a factor to even really discuss. So this incredible abundance of AI is really around the corner. Welcome to Sidecar Sync, your weekly dose of innovation. If you're looking for the latest news, insights and developments in the association world, especially those driven by artificial intelligence, you're in the right place. We cut through the noise to bring you the most relevant updates, with a keen focus on how AI and other emerging technologies are shaping the future. No fluff, just facts and informed discussions. I'm Amith Nagarajan, chairman of Blue Cypress, and I'm your host. Greetings to everybody and welcome to the Sidecar Sync, your home for content at the intersection of all things AI and associations. My name is Amith Nagarajan.
Speaker 2:And my name is Mallory Mejias.
Speaker 1:And we are your hosts. Before we get into all the things that are happening right now in the world of artificial intelligence that affect the association market (and we have a couple of really cool topics for you today), we're going to take just a moment to hear a quick word from our sponsor.
Speaker 3:Introducing the newly revamped AI Learning Hub, your comprehensive library of self-paced courses designed specifically for association professionals. We've just updated all our content with fresh material covering everything from AI prompting and marketing to events, education, data strategy, AI agents and more. Through the Learning Hub, you can earn your Association AI Professional certification, recognizing your expertise in applying AI specifically to association challenges and operations. Connect with AI experts during weekly office hours and join a growing community of association professionals who are transforming their organizations through AI. Sign up as an individual or get unlimited access for your entire team at one flat rate. Start your AI journey today at learn.sidecarglobal.com.
Speaker 2:Amith, I feel like we're truly getting into the holiday season, because we are seeing so many news items, so many releases. They're like gifts coming from these AI companies.
Speaker 1:It is. Yeah, at least that's how the AI companies position them. The first gift of OpenAI's 12 days was allowing us the privilege of paying them $200 a month, which was really fun. I love those kinds of gifts as a vendor.
Speaker 2:Yeah, wait, is that how gifts really work, us paying them $200 a month? I don't want to jump the gun. We will be talking about their new tiered ChatGPT Pro plan, but yeah, a little bit backwards for the holiday season, OpenAI.
Speaker 1:Yeah, I think there's a lot of great things, and OpenAI is certainly not the only company that's been dropping all sorts of cool stuff. There's lots of interesting things happening in the world, as is really always the case, but I think this particular last week or so has been more hectic than normal.
Speaker 2:Absolutely, yep, it's been exciting to see. I know today we're talking about two important topics, one of those being Llama 3.3 70B, and then next up we'll be talking about Sora from OpenAI, the much anticipated video model. But I know, Amith, it was last week, we were actually in annual planning for Sidecar, and you put down your phone and said, Llama 3.3 70B dropped. So was that kind of an exciting moment for you?
Speaker 1:For sure. I mean, you know I've been talking a lot about how I'm excited about smaller models. As excited as I am about the frontier of the biggest, most powerful models, like OpenAI's GPT-4o and now o1 and so on, I spend a lot of time here on the pod and elsewhere talking about how I'm excited about smaller models, because they're faster, they're cheaper, way cheaper, and they're catching up in capability, and this is a great proof point. So the reason I was so excited last week when Meta dropped Llama 3.3 is that their 70 billion parameter model, 70B for short, is roughly comparable in power to their 405 billion parameter model from earlier this year. So Llama 3.1 405B is actually being bested by the Llama 3.3 70 billion parameter model. It's a model roughly six times smaller that is actually more powerful and faster to inference, because the model size directly correlates to how fast it executes.
Speaker 1:Now, that's impressive. What's even more impressive, though, is when you compare Llama 3.3 to OpenAI's GPT-4o, which is purported to be over a trillion parameters, and in fact Llama 3.3 70B significantly exceeds the latest GPT-4o model in most of the AI benchmarks. That's really what got my attention, not so much the comparison to the 405B, because the 405B was a little bit below GPT-4 even in the springtime. So I find that really exciting. What added fuel to that fire was the immediate availability from the Groq team. This is Groq with a Q, groq.com. They offer really industry-leading AI inference tools and speed on their platform, and they immediately launched Llama 3.3 on their platform. So our team, myself included, has been experimenting with this stuff pretty much nonstop since it dropped a few days ago.
Speaker 2:I finally was able to figure out how to get access to it, Amith, and we'll make sure to include that link in the show notes for everyone, if you would like to test out Llama 3.3 70B. I, just right before this pod, dropped in a few prompts, and I will say, latency is one of those things you don't notice until it's greatly sped up, because it's mind-blowing how quickly I'm getting responses, almost so fast that I'm thinking, yeah, this can't be any good. But then you read the response and you realize, no, this is pretty, pretty good output. So we'll be sure to include that link in the show notes. But I do want to provide a little bit more context on Llama 3.3 70B for our Sidecar Sync audience. So, as Amith mentioned, Meta unveiled it just last week and, as a reminder, in the context of large language models or LLMs, parameter sizes refer to the number of internal variables or settings that the model uses to process and generate language. So parameters play a crucial role in determining the model's capabilities and performance. And when we say 3.3 70B, again we're talking about 70 billion parameters. So, as Amith mentioned, Llama 3.3 70B delivers performance similar to the larger Llama 3.1 405 billion parameter model, but at a significantly lower cost with reduced computational requirements.
Speaker 2:We're seeing Meta claim that Llama 3.3 70B outperforms competing models like Google's Gemini 1.5 Pro and OpenAI's GPT-4o on several industry benchmarks, including MMLU, which evaluates a model's ability to understand and generate text. It's based on an optimized version of the transformer architecture, featuring an improved attention mechanism that lowers inference costs. And, just as a side note, the training data set included here is about 15 trillion tokens from the public web and over 25 million synthetic examples. To give you a further idea of just how affordable this is: processing 1 million input tokens with Llama 3.3 70B costs around 10 cents, while generating 1 million output tokens requires around 40 cents' worth of compute capacity, compared to $1 and $1.80 respectively for Llama 3.1 405B. It's now available for download from platforms like Hugging Face and the official Llama website, and it's also been integrated into services like IBM's watsonx.ai, making it accessible to enterprise users. With over 650 million downloads of Llama models to date, the latest release further solidifies Meta's position in the open source AI landscape.
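To make those per-token prices concrete, here is a minimal Python sketch that turns the rates quoted above into a per-request cost. The model labels and example token counts are illustrative, not official identifiers, and the prices are a snapshot from the episode, not current pricing:

```python
# Rates quoted in the episode (USD per 1M tokens); a snapshot, not live pricing.
PRICES = {
    "llama-3.3-70b":  {"input": 0.10, "output": 0.40},
    "llama-3.1-405b": {"input": 1.00, "output": 1.80},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost of one request = tokens * rate, with rates expressed per million tokens."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Example: a 10,000-token prompt that produces a 2,000-token answer.
for model in PRICES:
    print(f"{model}: ${request_cost(model, 10_000, 2_000):.4f}")
# llama-3.3-70b:  $0.0018
# llama-3.1-405b: $0.0136  (roughly 7-8x more for the same request)
```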
Speaker 2:So now we have a little bit more context for our Sidecar Sync audience. Amith, you mentioned that you're excited about this, and you've been excited about smaller models. Can you talk about that a little bit more? Because at a glance we think bigger is better, right? More parameters sounds like a good thing, sounds like I'll have a more effective large language model. But can you explain your reasoning?
Speaker 1:Well, people are wondering, you know, I think with Groq specifically, but just generally, why is there a need to go so much faster and have so much more throughput in terms of tokens? And if you think about what a human being can perceive, you know we're limited to probably between 10 and 20 tokens per second. If you think of a token roughly as being a word, it's not exactly that, but you know, 10 to 20 words per second is probably as high as we can comprehend through any modality. So we're like, well, why do we need more than that? Well, people who are thinking that way are only seeing a very tiny sliver of the picture. The reality is, particularly in agentic applications, where there's a high degree of conversation from AI to AI, there are thousands of tokens being processed all the time, in some cases tens or even hundreds of thousands of tokens per request, and so the ability to do this at a much, much faster pace enables those applications to ultimately be far more responsive. When you make something abundant and near free, increased use occurs, right? This is a classical thing that happens with pretty much any good or service: if you're able to drive the price down, the scale of consumption goes up, assuming that the utility stays constant or increases, which it clearly does here. So what you're going to see is a massive increase in demand. That, in turn, is going to fuel more scale, probably more fundamental research, and that's going to drive a lot of the things you just described, further compression of the size of these models and their capabilities. So that's a virtuous cycle. Essentially, think of it this way: models are becoming close to free and incredibly smart. You mentioned some stats around the comparison between Llama 3.3 70B and Llama 3.1 405B in terms of cost. It's an even more stark comparison when you compare Llama 3.3 to OpenAI's model, which is about $15 per million tokens of output, compared to roughly 60 cents per million tokens of output from Llama 3.3 on Groq specifically. So you're talking about a 25x decrease in cost over roughly a six-month period. It's just mind-boggling. So that's the thing I get excited about. Open source means there's more availability, more competition. Other people will derive models from this that are special purpose. Associations, of course, can do this. There's so much that's going to happen when you put this stuff out in the open. So to me that's the fundamental reason I get excited.
Speaker 1:Just to kind of put an exclamation point on that, the reason I've been working with our team across Blue Cypress, particularly the Tasio team and the Member Junction team, over the last few days is that we very quickly adopted this new model inside Skip. Skip is our multi-agentic data analyst and MBA-level business coach, and Skip up until now has only really worked well either on the best model from Anthropic or the best model from OpenAI, and those are expensive and, frankly, kind of slow. But it's a very complex set of prompts that happen inside these multi-agentic solutions that interact with data at scale. So Skip can do things like create any report you can talk to it about. You can talk to Skip about your various data and you can just say, create a report, and it'll go out and write all the code and build the report and present it to you. An average Skip conversation to produce one report takes about 50,000 tokens, is what we're finding, which, from a speed perspective, would on average take a minute or two in the OpenAI world up until literally this weekend, and cost, you know, like a quarter or half a dollar or something like that, which is still very reasonable compared to how long it would take a human programmer to write a report, which might be a day or two and cost a lot more than 50 cents. In comparison, though, Llama 3.3 70B on Groq takes about five to 10 seconds to do the complete round trip, and that will soon be about one to two seconds, because we're using an unoptimized version of Llama 3.3. They have a much faster version on Groq that isn't yet publicly available, which is called the speculative decode version, which is roughly three times faster, I believe. But the point is that now we're down to fractions of a penny per interaction. So that's going to drive adoption, because people aren't going to think about cost anymore. It goes away. For complex applications and multi-agentic solutions like Skip, this means you have way more fuel to work with.
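As a rough sanity check on those numbers, here is a small Python sketch using the output-token rates quoted earlier in the episode. Real requests blend in cheaper input tokens, so treat these as ballpark upper bounds, and the model labels as illustrative:

```python
TOKENS_PER_REPORT = 50_000  # average Skip conversation per report, per the episode

# Output-token rates quoted in the episode (USD per 1M tokens).
RATES = {
    "frontier model via OpenAI": 15.00,
    "Llama 3.3 70B via Groq":     0.60,
}

for name, rate in RATES.items():
    cost = TOKENS_PER_REPORT * rate / 1_000_000
    print(f"{name}: ${cost:.2f} per report")
# frontier model via OpenAI: $0.75 per report
# Llama 3.3 70B via Groq:    $0.03 per report  (the ~25x gap discussed above)
```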
Speaker 1:Related to that, by the way, a lot of these systems are going to start doing multiple parallel tasks. In the case of Skip, part of that engine is that Skip writes code, and Skip will write code that is related to the conversation the user's having, just like a human programmer would. Right, you have a programmer on your team. You say, hey, I'd love a report that shows me member renewal trends year over year, and I'd like to put it into a chart format of some sort. Right, you talk to them about the requirements, and the programmer says, okay, cool, give me a couple days, I'll go write the code and I'll come back to you. Maybe it's a little bit faster than that with tools like Power BI or Tableau or whatever, but that's the basic process, and that's what Skip is doing. Now, a programmer might go and write it one way and then say, here it is. And that's what Skip does right now: Skip says, okay, I'm going to write the code, and then Skip tests the code, makes sure it works, and then presents the results, just like a human programmer.
Speaker 1:What we're going to do in the very near future is fire off many parallel requests to the AI to write the code multiple different ways, and then we're going to have a supervisory AI, kind of like a master craftsman AI, that looks at the output of all these other AIs and picks the best one, the one that most closely matches what the user's looking for. And in that process, by the way, there's a reinforcement loop where the AI just gets smarter and smarter within the agentic solution. Right now, that's not affordable, right? That would cost tens of dollars or more per request in the OpenAI world. Now, with Llama 3.3 on Groq, it would be possible to do that, but still it would add up. Instead of being two cents per request, it would probably be like 50 cents to a dollar. So we're not doing it at the moment.
Speaker 1:Imagine if this stuff was free and instant. We would 100% do that, and we'd have way better results. And it would, of course, be imperceptible to the user, because these things are happening in parallel. So the bottom line is real simple. That's a long description, but people aren't really understanding how much potential demand there is for tokens. People are going to be consuming billions of tokens per month, personally, in the very near future.
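For readers who want to picture the pattern Amith is describing, here is a minimal, hypothetical Python sketch of parallel generation with a supervisory judge. This is not Skip's actual code; both functions are stand-ins for real LLM calls:

```python
import concurrent.futures

def generate_candidate(task: str, variant: int) -> str:
    """Stand-in for one LLM call; each variant would prompt the model
    to solve the same task a slightly different way."""
    return f"# candidate {variant}: code for '{task}'"

def supervisory_pick(candidates: list[str], requirements: str) -> str:
    """Stand-in for the 'master craftsman' model that scores each
    candidate against the user's requirements and keeps the best."""
    return max(candidates, key=len)  # toy scoring rule for the demo

def best_of_n(task: str, n: int = 5) -> str:
    # Fire off n generations in parallel: the extra work multiplies token
    # cost by n but adds almost no extra latency for the user.
    with concurrent.futures.ThreadPoolExecutor(max_workers=n) as pool:
        candidates = list(pool.map(lambda i: generate_candidate(task, i), range(n)))
    return supervisory_pick(candidates, task)

print(best_of_n("member renewal trends, year over year, as a chart"))
```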
Speaker 2:Wow, there's a lot to unpack there, Amith. I want to go all the way back to the beginning. So the value you see coming out of this release of Llama 3.3 70B is not necessarily for the average user of Meta AI, for example, but in fact in those agentic solutions where latency is really important.
Speaker 1:I think the average user will see a difference. They'll see it's faster and probably more intelligent than the prior version, but it's marginal to the average user because these things are already really good. That is correct. I think the system side of it is where you're going to see really explosive growth happen in terms of capabilities.
Speaker 2:And am I understanding correctly that you have already swapped out the model that's behind Skip?
Speaker 1:Yes. Skip is still running on OpenAI for all of the live customers in production, but we have a test environment that's moved over to Llama 3.3. Assuming testing goes well for the next week or week and a half, then we're going to offer customers the ability to switch over.
Speaker 2:So, just to give everyone context, this model was released on Friday, right, Amith? Friday of this past week. We're recording this podcast on a Tuesday, so in just a few days you were able to kind of plug and play a different model, and you're in a testing environment. I'm sure there are many listeners that, one, think that's pretty commendable, because it is, but two, are wondering how you were able to move that quickly and kind of jump on this release. Can you talk about that?
Speaker 1:Sure. Well, I mean, we already had support for Groq built in, because all of our stuff is built on top of this open source platform we created called Member Junction, and Member Junction is available for free for anyone in the world to download and use. It's a software development framework, and it's also an AI data platform that Skip sits on top of. Member Junction has created an abstraction layer on top of AI models. So, rather than coding against OpenAI's or Anthropic's or Gemini's or Groq's APIs, we have an intermediate layer that essentially acts as a translator. It's a very thin layer of software we built a long time ago, like a year and a half ago or longer, and the point of it was to say, hey, everything else in our universe, whether it's a user of our framework or a client, if they want to interact with an LLM or other types of AI models, they go through this abstracted layer of software, which is independent of any model or model family. And so then it's just literally a metadata setting where we can say, okay, there's a new model and it's from Groq, we just plug it in.
Speaker 1:We support, I think, five or six different vendors right now, including Groq, OpenAI, Anthropic, Gemini, Mistral and a couple of others. But if some other vendor came up out of the blue that was not one of them, we can easily build what's called a provider, which basically knows how to talk to this other vendor through their API, and then everything else in Member Junction's ecosystem will just work, just by changing the settings. So, honestly, it actually wasn't that much of an effort. What's more of an effort is just testing things, what they call in the AI world evals, right? Having a bunch of structured tests you can run to determine whether or not your applications still work correctly with any model. That's really where most of the effort is when you switch models, and that is non-trivial. That is a significant effort, although there's a lot of work happening in the world of AI development to help automate evals, which is exciting. But in any event, yeah, I mean, for us it was really just sitting on top of the Member Junction architecture that made that possible.
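To illustrate the kind of abstraction layer Amith is describing, here is a minimal, hypothetical sketch in Python. It is not Member Junction's actual API; every name here, including the mock provider, is invented for the example:

```python
from abc import ABC, abstractmethod

class LLMProvider(ABC):
    """Vendor-neutral interface: application code talks only to this,
    never to a specific vendor's SDK."""
    @abstractmethod
    def complete(self, prompt: str, model: str) -> str: ...

class MockProvider(LLMProvider):
    """Stand-in for a real vendor adapter (OpenAI, Anthropic, Groq, ...)."""
    def complete(self, prompt: str, model: str) -> str:
        return f"[{model}] response to: {prompt}"

# Registry of adapters; supporting a new vendor means adding one entry here.
PROVIDERS: dict[str, LLMProvider] = {"mock": MockProvider()}

# Swapping models is a metadata/settings change, not a code change.
SETTINGS = {"provider": "mock", "model": "llama-3.3-70b"}

def ask(prompt: str) -> str:
    provider = PROVIDERS[SETTINGS["provider"]]
    return provider.complete(prompt, SETTINGS["model"])

print(ask("Summarize this quarter's member renewal trends."))
```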
Speaker 2:Yeah. I want to share a quote that I pulled from an article on Groq's website about this release, and here it is: "What's even more impressive is how Meta is challenging the death of scaling laws. Yes, it was a useful myth that helped us understand the early days of LLMs, but the reality is that Meta is defying what were thought to be traditional scaling limits. They're not just throwing more parameters at the problem; they're actually improving the underlying models without changing the fundamental architecture of the model." On this pod we have talked about scaling laws and myths, so I want to hear your take on how this release is potentially challenging that view.
Speaker 1:Well, I mean, look, scaling laws are just the latest kind of predictive law about computing of one flavor or another. We've had Moore's law, we've had Metcalfe's law, we've had all these different laws that have existed over the landscape of compute and digital innovation for decades. And people always say, oh, Moore's law is going to end, or this is going to end, and then some, you know, bright team of people at some research lab or some company comes up with a workaround. They're like, oh no, here's a limit that we're going to overcome, and then they overcome that limit.
Speaker 1:Sometimes it's fundamental architectural shifts. A lot of times it's just incremental innovation, where you're saying, hey, there's a smarter way to handle this part. You mentioned earlier there's an optimized attention mechanism. The attention mechanism in a transformer is the thing that actually takes up the largest part of the compute, because it's essentially comparing every token against every other token that preceded it. It's a computational challenge that compounds as the tokens increase in scope per request, and if you can make that even a tiny bit more efficient, you radically increase the performance of the whole system downstream.
Speaker 1:People are working like crazy on this. Of course, there are new architectures people are working on that don't have the quadratic problem, which is what I just described in the transformer architecture, but thus far none of those alternative architectures have proven to scale up in terms of intelligence as well as the transformer architecture has. So if you can make transformers significantly more efficient and stretch the boundaries of what they can do, that gets exciting. That's exactly what's happening right now. People at Meta and a number of other leading labs are working on new architectures that aren't yet available, but, at the same time, working to stretch the boundaries of what's out there today, and that's exactly what Anthropic and OpenAI are doing as well. There's nothing really particularly novel about that in the sense of what they're trying to do. How they've achieved it is tremendous, though. Hats off to those guys. I really think Meta AI, led by Yann LeCun, has been doing some amazing work this year.
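A quick way to see why that "quadratic problem" matters: in causal attention, each new token attends to every token before it, so the number of pairwise comparisons grows roughly with the square of the context length. A tiny Python illustration, with the counting formula as the only assumption:

```python
def attention_pairs(n_tokens: int) -> int:
    """Causal attention compares token i with the i-1 tokens before it:
    1 + 2 + ... + (n-1) = n * (n-1) / 2 comparisons in total."""
    return n_tokens * (n_tokens - 1) // 2

for n in (1_000, 10_000, 100_000):
    print(f"{n:>7} tokens -> {attention_pairs(n):,} pairwise comparisons")
# 10x more context means ~100x more attention work, which is why even
# small efficiency gains in the attention mechanism compound downstream.
```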
Speaker 2:All right, last question on this topic for our audience: is this something that listeners need to go out and test right now, or just be aware of?
Speaker 1:Well, I'm probably most in favor of trying to learn things and be aware of them by trying them out, I guess is what I'm trying to say.
Speaker 1:So if you want to understand this stuff, go to groq.com and check it out, or go to meta.ai and check it out there, and you'll understand it a little bit better. I think what we were just saying is that the average end user probably will look at it and say, yeah, it's great, it works well, but it doesn't necessarily appeal to me any differently than Claude or Gemini or OpenAI. So at the surface level it might not impress you that much. I think Groq's inference speed will impress everyone, but, like you experienced, you might think it's fake, or you might think it actually didn't work, because it's so instantaneous, which is kind of funny. Actually, the founder of Groq had a post on LinkedIn, I think this weekend, where he said that was the problem he ran into early on: people watched him demonstrate how Groq could instantly inference something that would take seconds or longer in other systems, and they're like, no, I don't believe this is going to work. And he says, well, you literally just saw me demonstrate it to you, and they said, yeah, it's not going to work. So it's like this suspension of disbelief when something is so much of a leap ahead. It's almost the same thing if you go back in time to a couple of years ago, when GPT-4 was not yet released but was being made available to a handful of people in the industry, including Bill Gates, who got a demo from Sam Altman and didn't buy into the capabilities until he saw the AP Biology exam being mastered by GPT-4. Then he kind of got over that leap. But even with the earlier indicators, he's like, yeah, it's not quite there, when it was, in a way, already there. So even people like that, deep in the industry, sometimes have a hard time seeing it.
Speaker 1:I find myself in that same boat a lot of times too, right? Because I'm used to what I'm used to, and then something new comes along and you really have to take a moment to say, wait a second, what does this mean? So, I think, for our average listener, the association team member: you're thinking about your strategy, you're thinking about your business.
Speaker 1:The main message is this: the pace of innovation is not slowing down.
Speaker 1:It's probably speeding up, but it's at minimum maintaining pace, which is a ridiculous pace to begin with. And what this means for you is that the applications you wish to empower your members and your staff with are possible, not just because the intelligence is available in these models today, but because the costs are coming down, so much so that cost will not even be a factor to discuss.
Speaker 1:You don't really talk about the cost of electricity or the cost of breathing oxygen; it's just there. So this incredible abundance of AI is really around the corner, and you shouldn't be worried about costs the way you perhaps have been in the last year or two for a lot of these systems. That's exciting. And again, for those of you that haven't taken a deeper dive, and you're listening to us maybe for the first time, or haven't heard a lot about AI in association land, I'd encourage you to just dive in and start learning. What we're talking about in this particular episode, in this topic I should say, is probably more likely to get people who are already into AI at some level excited than brand new listeners, but it's something that I think everyone will appreciate once they go a little bit deeper on their AI journey.
Speaker 2:Yep. Moving to topic two for today's episode: Sora. OpenAI officially launched Sora, its highly anticipated text-to-video AI model, on Monday, December 9th, 2024. And, as Amith mentioned at the top of the episode, this is part of OpenAI's 12-day "Shipmas" product release series. It's now accessible to ChatGPT Plus subscribers in the United States and many other countries, though at the time of the recording of this podcast, it seems that the website is down and they're not allowing any new account creation, but they said they should have this resolved soon. Just as an FYI, ChatGPT Plus subscribers can create up to 50 priority videos, using 1,000 credits, at 720p resolution with a maximum duration of five seconds. But a new ChatGPT Pro subscription has launched, priced at $200 per month, that offers unlimited video generations (up to 500 priority videos), 1080p resolution, a maximum duration of 20 seconds, and the ability to download videos without watermarks.
Speaker 2:We all know Sora is an AI video model, but here are some more specific features. We're going to have text-to-video generation within Sora, but we're also going to have image-to-video conversion, video extension, and also video remixing. Within the interface, we're going to see things like an explore section, which features AI-generated videos from the community; a storyboards function, which I've been hearing about on LinkedIn and which I think works particularly well, for creating videos from a series of prompts; a remix tool for modifying Sora's outputs using text prompts; and a feature to blend different scenes together if you want to create a longer-form video. Now, despite its advancements, Sora still faces some challenges, like struggles with complex physics simulations, difficulties in understanding causality, and problems differentiating left from right. There are also some concerns here around safety, so OpenAI has said that they implemented safety measures restricting prompts related to violent, hateful or celebrity imagery, as well as content featuring pre-existing intellectual property.
Speaker 1:We talked about this initially, I want to say it was back earlier this year, and there will be many more moments like this. But the idea of being able to create high-quality videos, simply going from thought to output essentially, is what we're talking about here, and it opens up creative potential for a lot of people who, you know, do not have access to this today.
Speaker 1:I mean, most people, even people who are creatives, don't have access to creating videos, because video has just been a very complex and very expensive endeavor. You know, a minute of video at a high quality of production is thousands of dollars, even today, even with AI-assisted tools for professional video folks, particularly if you're doing animations or anything like that. So being able to do that with AI drops the cost dramatically. That should create opportunity for people to use video far more fluidly, far more frequently, far more abundantly in the way they communicate, and, at their core, associations are all about communication. So if you can use video in a rich way and have a massive abundance of video, and perhaps even in real time over time (that's not what this is right now), it opens up a lot of potential doors for associations to communicate more effectively with broader and more diverse audiences. So that's what hits me really hard when I see this: what happens in two years?
Speaker 1:I think what's out there right now, a lot of people are going to immediately start using in cool ways, which I think is awesome, but I think of it more from the longer arc, in terms of how associations will leverage this type of technology over a period of time. The only other comment I have is I think OpenAI has done some really good work on the software engineering front, taking the core AI and putting some important features around it, like the multiple-prompt storyboarding that you described and the ability for the community to share. I'm really curious, though, Mallory, what are your thoughts? This is definitely very close to your world on a number of fronts. I'm curious what your reaction is to this.
Speaker 2:You flipped it around on me, Amith.
Speaker 2:I do think this is exciting. To let all of our audience know, Amith the other day shared with me an episode of what used to be the AI Breakdown, I think now it's called the AI Daily Brief, that featured a little segment about Ben Affleck, of all people, and his take on artificial intelligence. We're going to link that in the show notes, because it was such a thoughtful reflection on how AI is going to impact these creative industries. Writing certainly is an industry that we know will be affected.
Speaker 2:But I really like how he framed the challenge, and kind of our future, as going down two paths. One of those is having a whole market of AI-generated content that people want on demand; they want personalized episodes of, I think he gives the example of an HBO show whose name I can't think of: I want a personalized episode of this series just for me. And then there's that other pathway, which will be human-generated content, and maybe people will dabble in both, or maybe they'll kind of find themselves on one path or the other. And so that's the way I like to think about this. The fact that video is now so easy to create can certainly, on the one hand, be perceived as a negative thing. But, as you mentioned, so many people across the world will have the ability to tell stories that they've always wanted to tell but never could, because they didn't know how to animate or they didn't have access to the resources.
Speaker 2:Let me let you all in on a secret: it is so expensive to create even a short film. Whatever you're thinking, probably double it, triple it. Even if it's what you would deem an indie film or kind of a backyard film project, they're incredibly expensive to create. So knowing that many people, essentially everyone, in just a few years will have access to create these kinds of stories is exciting to me. And the way I justify it in my mind is thinking of two paths. Okay, I'll continue on the path of human-generated content; that's what resonates for me, that's the world I want to be a part of. And we'll have a whole other market, and that will thrive as well. But overall, this is exciting. I'm warming up to the idea more and more, through this pod, honestly, and through working at Sidecar, for sure.
Speaker 1:That's super interesting to hear. I mean, I think the idea of two markets, or markets that definitely have different sides to them, different ends of the spectrum, even if it's one unified market in some way, it's kind of like... You know, Ben Affleck himself. I don't know, what's your favorite Ben Affleck film?
Speaker 2:So, off the top of my head, the one I'm going to think of is Good Will Hunting, from way back when.
Speaker 1:Okay, my favorite is more recent. It's The Accountant. I don't know if you've seen that.
Speaker 2:I have not.
Speaker 1:You should put it on your list. It's very different from Good Will Hunting. Okay, Ben Affleck plays the character of an autistic individual who goes through life and is an accountant by trade, but he's much more than an accountant, as you'll learn when you watch the movie. He has a gun in the...
Speaker 3:Yeah, he's a special forces guy and does all this.
Speaker 1:And, yeah, he's just a very interesting character. I believe there's an Accountant 2 coming out that actually features the real Ben Affleck. But what I would be interested in is, you know, someone who likes the movie perhaps being able to go to Netflix, or wherever this stuff happens to live, and say, hey, create an episode for me with that character, you know, killing this other bad guy that I like from another movie or something like that, right? And if it's part of that kind of character repository of IP, that'd be kind of a cool thing, and then it just starts playing this new episode for you. So that's on the entertainment consumption side. But if you capture that idea and bring it over to another world, of corporate communications or professional development and learning, I think there are so many opportunities like that as well. The other thing, too, is, on the one hand, while I totally honor the profession of people who do live acting, or any kind of acting, I don't even know what the right term is to use, I also think that for the number of Ben Afflecks in the world, there's a ton of people who are obviously not on that stage. And so the question is: are there new opportunities for monetization that are not directly correlated to people's labor, where you can license, you know, aspects of your image and likeness and so forth, and your voice, in a way that you control, where you have access to a monetary, like a revenue stream, and where people can use it in ways that you consider to be okay? That has a lot of ifs in it right now, but there are a lot of technological components that make it possible.
Speaker 1:First of all, using blockchain, it may be possible to put the assets that represent the likeness of a character, including their name, their voice, their image, their personality, in a way that is only accessible through blockchain, which provides kind of an immutable ledger and a way of licensing that character at scale. And that could potentially create a way for AIs to interact with the authentic source of that character, and even involve what would be called smart contracts: the ability for that character to only engage in individual smart contracts with AIs that are going to generate videos aligned with a code of ethics, or whatever that character wants to be part of. So maybe I have a character, and I only want it to be used for business communications, and there are certain types of things I don't want that character to be involved in talking about, or certain industries I don't want it to be involved in. So, you know, these kinds of ideas, I think, are opportunities that might open up the door for a lot of people who don't really have an opportunity to monetize their skills in a traditional sense, in new ways. So I think that's a possibility. It also, obviously, creates the opportunity for 100% synthetic content, for, you know, images and videos of people who don't ever exist, right, to be purely AI, to possibly displace a lot of those people. So I think there's two sides to that.
Speaker 1:But I would say that Sora specifically represents an opportunity for the mainstream to start playing with it. Because, for those that are deeper in the AI world, we've had Runway ML, we've had HeyGen, we've had a number of tools that are not exactly like what Sora does, but have some similarities in different ways and kind of expose your mind to the idea of generative video in various flavors. Sora will be the first time the general public really understands it, because OpenAI has far and away the largest consumer base of users with ChatGPT. So them pushing it, even for people at the lower tiers to have some access to this tool, will likely result in the public consciousness increasing its understanding and awareness of generated video. Which, in turn, by the way, is really important for associations and all of us to understand, because consumer expectations drive the reality of business transactions, meaning that if I'm a consumer and I have a certain level of interaction with a brand in my personal life, I very quickly come to expect that caliber of low-friction, high-quality engagement in my B2B world.
Speaker 1:So, you know, there's the classical example: oh well, Amazon makes it easy to buy something. How come my association's e-commerce site is harder to use than Amazon's? And nobody cares about the fact that you're not Amazon and don't have the resources. They still expect your e-com to be just as good as Amazon's, and your fulfillment to be just as good.
Speaker 1:That same thing is coming to communication. So if ChatGPT can generate customized, personalized videos to teach someone a topic on demand, and Sora cannot do that yet, just to be clear, these short videos do not have narratives and they don't have people speaking and all that kind of stuff, but very soon, if you kind of put two and two together, that will be the case. Probably by this time next year you'll have the ability to say, make an educational video for my child on this topic, and it'll create a five-minute video teaching my kid about some topic in algebra or whatever, right? And it'll do it, and that will be a thing. So what impact does that have on associations? Well, you have to have this caliber and this modality of interaction and communication to be competitive over time.
Speaker 2:And, as you mentioned, video creation has been so expensive and tough in the past that I assume, within associations, video has perhaps been limited to departments where it was seen as essential, maybe in marketing, obviously in education. I'm wondering, now that video creation is much more feasible, and will continue to be more feasible in the future, do you see any other kind of out-there video use cases for associations in your mind that aren't marketing or education?
Speaker 1:Where my mind goes is primarily to interactive video. You know, I'm always thinking about how associations can improve the quality of their education delivery, how they can make it better for members to connect with one another, how they can facilitate, ultimately, the advancement of their mission. So think about education for a moment and say, well, what if there was a real-time video avatar resident inside your LMS that actually had really high-quality conversations with your learners throughout their journey, that really helped them like a personalized tutor would? And we're very close to that already, actually, with avatar-based videos. HeyGen is one of the companies in that space. That's a little bit different than what Sora does, but they're kind of in the same general realm. And I also think about the same thing in terms of member-to-member communication. If you could have an AI avatar involved in making conversations between, let's say, two members who haven't yet met, to introduce them the way a person might, and make it less awkward for two people to meet who might have a really good reason to connect professionally, that could be interesting as well.
Speaker 1:There are many use cases, and again, that's something that's a little bit different than Sora's use case of more creative videos, but I think the two worlds blend together. Ultimately, Mallory, my mind is still kind of focused on the same thing that I've been talking about for a while, which is the idea of one super AI that knows you really well and can invoke any number of other AIs on your behalf. So Sora and HeyGen and blah, blah, blah, those are all brands of specific tools, and right now you as a user have to go to them, know how to use them, and get the specific piece and part you want. Very, very soon it'll be like when DALL-E became part of ChatGPT: you just ask for an image and it creates it, or it may even suggest that it can create an image for you.
Speaker 1:You'll be having a conversation with your AI assistant, from whichever vendor you're using, and that thing will know you really well, and it'll have all these tools at its disposal, hundreds, thousands of tools, and it'll be able to synthesize videos. So if you have a conversation saying, yeah, I want a 15-minute training video that covers these topics, it'll use your corpus of content for the knowledge, it'll use avatars that are approved by your association to generate characters, maybe it'll use something like a Runway or a Sora to stitch the whole thing together and do the creative elements, and then, ultimately, seconds later, you'll have the video you asked for. And you'll watch it and you'll say, well, I don't really like this part, change it, and it just does it. You know, that's the synthesis of these tools that's going to be coming through the supervisory AI, and that's around the corner.
Speaker 2:The last thing I've got to address here is this new ChatGPT Pro subscription, which is $200 a month. I'm sure you have kind of your own take on that, and maybe the perceived value there. But for our association listeners, who in a world very soon might have a few staffers, let's say, come up to them and say, we really need this ChatGPT Pro subscription, 200 bucks a month, we think we can get a lot of value out of it: what's your knee-jerk reaction there? That's just a big jump from what I think was 20 bucks a month, 25 bucks a month, to 200. So what are you thinking?
Speaker 1:For the vast majority of end users, I don't think there's any need at all for the Pro subscription, and that includes almost every employee of every association on the planet, and that answer has to do with, like, text-based interactions. I do think that $200-a-month subscription is worth experimenting with for maybe one or two people to use Sora. I would try that out, because the $200-a-month version includes unlimited generations and longer videos, as you were saying earlier. So if you're going to experiment with Sora, rather than having a five-second video, getting a 20-second video from the Pro subscription could be worth it, along with more priority, faster generations, downloads without watermarks, those features that you mentioned. I almost think about it as: my motivation to try the Pro subscription personally would be if I want to play with Sora in more detail. I have no idea whether or not it's worth it on a long-term basis to pay that.
Speaker 1:The other thing, too, is remember all this stuff is very quickly becoming lower and lower cost.
Speaker 1:So I think they'll probably continue to find ways to put new features that are kind of on the edge into the $200 subscription, to build that business for themselves. But, you know, I really don't think most people have any need whatsoever for this. Maybe if you have one power user who's like the AI person, who's always testing everything new, get it for them for a month and see what happens. But I really don't think there's a need to consider that. Ultimately, with all of this AI stuff, because of the amount of competition, the amount of capital chasing the problem, we're going to arbitrage out all of the profitability from the generic models very, very quickly, and that probably includes inference over time too, like the compute clouds that run this stuff. You're going to see that everyone's competing at such an aggressive, massive scale that profitability is going to become extremely razor thin and prices are going to come down, which is great for everyone, except really for the model companies, other than the handful that are going to win.
Speaker 2:So Sora, at this moment, is something fun, or I won't say fun, something that might be useful to experiment with, but you probably don't need to roll out the full Pro subscription to your whole team. I also think ROI would be pretty tough to measure on something like this, but it's important to learn nonetheless.
Speaker 1:I agree. I think it's worth experimenting with. So if you said, hey, I'm going to spend 200 bucks this one month for this one user who's really into it, by all means, that's great. But yeah, I would definitely think very critically about it. I would not go and say, hey, every employee needs to have the best version. That doesn't make a whole lot of sense, to me at least.
Speaker 2:Well, everyone, thank you for tuning in to today's episode. I have plans to create a wonderful Sora video in the near future, once I'm able to, and we will be sure to share that with all of you. We will see you next week.
Speaker 1:Thanks for tuning in to Sidecar Sync this week. Looking to dive deeper? Download your free copy of our new book, Ascend: Unlocking the Power of AI for Associations, at ascendbook.org. It's packed with insights to power your association's journey with AI. And remember, Sidecar is here with more resources, from webinars to boot camps, to help you stay ahead in the association world. We'll catch you in the next episode. Until then, keep learning, keep growing and keep disrupting.