Sidecar Sync
Welcome to Sidecar Sync: Your Weekly Dose of Innovation for Associations. Hosted by Amith Nagarajan and Mallory Mejias, this podcast is your definitive source for the latest news, insights, and trends in the association world with a special emphasis on Artificial Intelligence (AI) and its pivotal role in shaping the future. Each week, we delve into the most pressing topics, spotlighting the transformative role of emerging technologies and their profound impact on associations. With a commitment to cutting through the noise, Sidecar Sync offers listeners clear, informed discussions, expert perspectives, and a deep dive into the challenges and opportunities facing associations today. Whether you're an association professional, tech enthusiast, or just keen on staying updated, Sidecar Sync ensures you're always ahead of the curve. Join us for enlightening conversations and a fresh take on the ever-evolving world of associations.
Scaling Laws, AI Scientist Framework, and AI-Generated TV with Showrunner | 44
In this episode of Sidecar Sync, Amith and Mallory take you on a journey through some of the most cutting-edge developments in AI today. The conversation kicks off with a detailed look at scaling laws and how they are driving exponential progress in AI models, from data demands to computing power. They then explore the AI Scientist framework, a revolutionary tool that promises to automate scientific discovery and reshape research as we know it. To top it off, they dive into Showrunner AI, an innovative platform democratizing content creation by allowing users to generate their own AI-powered animated series. Whether you're curious about the future of AI in research or content creation, this episode is packed with insights on the next frontier of artificial intelligence.
AI Tools and Resources Mentioned in This Episode:
AI Learning Hub → https://sidecarglobal.com/hub
Scaling Laws → https://shorturl.at/Zlqmt
AI Scientist → https://arxiv.org/abs/2408.06292
Showrunner → https://shorturl.at/CMo2f
ChatGPT → https://openai.com/gpt-4
Chapters:
00:00 - Introduction
03:00 - Discussion on Scaling Laws in AI
09:15 - Real-World Challenges of Scaling AI
15:01 - Legal and Data Constraints on AI Development
17:07 - Introducing AI Scientist: Revolutionizing Research
34:14 - Showrunner AI: The Future of Storytelling with AI
38:16 - How AI is Changing Business Strategies
46:35 - Final Thoughts and Future AI Innovations
Follow Sidecar on LinkedIn
https://linkedin.com/sidecar-global
Please Like & Subscribe!
https://twitter.com/sidecarglobal
https://www.youtube.com/@SidecarSync
https://sidecarglobal.com
More about Your Hosts:
Amith Nagarajan is the Chairman of Blue Cypress (https://BlueCypress.io), a family of purpose-driven companies and proud practitioners of Conscious Capitalism. The Blue Cypress companies focus on helping associations, non-profits, and other purpose-driven organizations achieve long-term success. Amith is also an active early-stage investor in B2B SaaS companies. He's had the good fortune of nearly three decades of success as an entrepreneur and enjoys helping others in their journey.
Follow Amith on LinkedIn:
https://linkedin.com/amithnagarajan
Mallory Mejias is the Manager at Sidecar, and she's passionate about creating opportunities for association professionals to learn, grow, and better serve their members using artificial intelligence. She enjoys blending creativity and innovation to produce fresh, meaningful content for the association space.
Follow Mallory on LinkedIn:
https://linkedin.com/mallorymejias
I think you will see interesting stuff coming out of those next models, and then when you plug those ideas into hypotheses, into something like the AI Scientist, watch out. You're going to start seeing some interesting stuff happen. Welcome to Sidecar Sync, your weekly dose of innovation. If you're looking for the latest news, insights and developments in the association world, especially those driven by artificial intelligence, you're in the right place. We cut through the noise to bring you the most relevant updates, with a keen focus on how AI and other emerging technologies are shaping the future. No fluff, just facts and informed discussions. I'm Amith Nagarajan, chairman of Blue Cypress, and I'm your host. Welcome to the Sidecar Sync podcast. Today's going to be interesting. If this podcast actually makes it to you on Thursday, August 22nd, I will consider that to be a minor miracle, because Mallory and I have been dealing with intermittent Wi-Fi, power outages, and some other issues between New Orleans and Atlanta. My name is Amith Nagarajan.
Mallory:And my name is Mallory Mejias.
Amith:And we are your hosts. Before we get into some interesting topics on AI and associations, first let's hear a quick word from our sponsor.
Mallory:Today's sponsor is Sidecar's AI Learning Hub. The Learning Hub is your go-to place to sharpen your AI skills, ensuring you're keeping up with the latest in the AI space. With the AI Learning Hub, you'll get access to a library of lessons designed around the unique challenges and opportunities within associations, weekly live office hours with AI experts, and a community of fellow AI enthusiasts who are just as excited about learning AI as you are. Are you ready to future-proof your career? You can purchase 12-month access to the AI Learning Hub for $399. For more information, go to sidecarglobal.com/hub. Amith, how are you doing on this lovely Wednesday evening?
Amith:I'm doing great. You know, it's been a long day. I woke up to a power outage here in New Orleans, and I don't need an AI model to predict how frequently the power will be out in New Orleans, particularly this summer. It's unfortunate, but it's pretty significant. We actually have a generator on our home, but our generator is also not working at the moment because my house is under construction, and so there's just a lot of interesting things happening in my world at the moment, down here in the bayou.
Mallory:I love it. Well, basically right before we started recording, Amith was taking me to Wi-Fi school and helping me put my phone in the window, so hopefully we could have a strong enough signal to get this episode done. So, as he said, it will be really fortunate if this episode makes it to you, but hey, we're committed to the Sidecar Sync and we're here to make it happen. Today, we've got a few exciting topics lined up for you all. The first of those is scaling laws, which we've talked about a little bit on the podcast before, but not in depth. For the next topic, we are talking about an AI scientist. And finally, for topic three, we are talking about Showrunner AI, which is a storytelling AI platform. Starting with scaling laws: a lot of AI's progress is due to something called scaling laws. Basically, researchers found that if you give AI models more data and more computing power, they get noticeably better at all sorts of tasks. This discovery kicked off a race among big tech companies. They're all trying to build bigger and more powerful AI systems, hoping to create the next breakthrough. Recent research by Epoch AI suggests this approach could potentially keep working until at least 2030, which is exciting, but it also brings some challenges, the first of which is power supply. These massive AI systems need an enormous amount of electricity. Take, for example, the reported multi-billion-dollar AI supercomputer from Microsoft and OpenAI that could need up to five gigawatts of power, which is about how much electricity it takes on average to power New York City. It raises questions about energy use and environmental impact, of course. Now another big challenge is chip manufacturing. AI needs special chips, and making these isn't easy or cheap. Building a new chip factory can cost over $10 billion and take years to complete. This creates a bottleneck in producing the hardware needed for AI advancement.
Mallory:Data, as we know, is another concern. AIs learn from huge amounts of information, but we might be running out of suitable training data. Some estimates suggest we could exhaust the supply of public text data in about five years. And, of course, there are growing legal concerns. Using books, articles and websites to train AI is raising copyright issues, leading to legal battles that could affect the availability of high-quality training data. Despite these challenges, researchers are working on clever solutions. For the power problem, they're looking at ways to spread AI training across multiple locations. To address the data shortage, they're exploring using more diverse types of data, including images, audio and video. Some are even experimenting with having AIs create training data for other AIs. So, Amith, I kind of gave a quick overview of scaling laws and, as I mentioned, we've talked about it on the podcast before, but I feel like it would be worth it for our listeners to have you set the stage for what scaling laws are.
Amith:Yeah, I'm happy to. Well, it's super interesting, because in the past, if you took a computer program and said, hey, I've got this program like Microsoft Word, and I throw more horsepower at it, more compute, more memory, more storage, it's still Microsoft Word. It still does the same stuff. The functionality doesn't change just because it has a bigger computer. Similarly, if you have a database like SQL Server or Oracle, and you throw more CPU, more memory, more storage, better networking at it, you don't get emergent capabilities. You don't have new functionality coming into that piece of software. It's just the same software; it just runs faster.
Amith:So the idea is that with AI models, of course, we're not in a deterministic world. Traditional computer programs are symbolic systems based on logic that is pre-programmed; these are neural network-based systems using deep learning techniques, which we've covered on this pod before. But the essence of that idea is that the models themselves are neural nets, and they learn from training data. And then when you run them, meaning what in the AI world is called inference, you get different results, sometimes with the same exact request. Because they're non-deterministic, and because they are neural networks, they can produce different results with the same inputs. Now, when you significantly increase the amount of training data, which, of course, also requires a significant increase in what you described earlier, Mallory (compute, storage, energy, all those ingredients), what ends up happening is you have a more powerful model that emerges from that. So something like GPT-4 was more powerful than GPT-3, which was more powerful than GPT-2. Each of these was an orders-of-magnitude increase in training data, compute and power consumption. The interesting thing, though, is that the capabilities that came out of these programs with these extra resources were significantly better, and somewhat predictable. That's why it's called a scaling law: what was predictable was not necessarily which properties or which capabilities would emerge, but that a certain number of emergent capabilities would keep popping up at given levels of compute and training data. Whereas the interesting thing is the actual programs, the actual software architecture, the way these neural networks work, the way they're trained.
Amith:There have been innovations there, but largely it's the same basic transformer architecture that was originally introduced in 2017. So there's been incremental progress from an algorithmic perspective. What we're essentially saying is this: you can take the same program, throw a lot more money at it, and get a more powerful AI. That's what the scaling laws are basically saying. The reason they're being coined, quote-unquote, laws is because it's a prediction. It's kind of like Moore's law.
Amith:Moore's law was not saying, hey, it has to be this way; it was more like, you know, I am basically saying that over a period of time, I believe this is going to happen. So that's essentially what the scaling laws are intended to codify: this idea that if you put more ingredients in, you're going to get more and more powerful AI out. That's the basic concept, and it's kind of weird, because thus far in the history of computing, we've had principally deterministic types of software. Again, software that doesn't change and evolve; it just does what it does. You throw more resources at it and it can support more users or work faster, but it doesn't all of a sudden do new things.
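The "more ingredients in, more capable AI out" idea Amith describes can be sketched as a toy power-law curve. To be clear, the constants below are invented for illustration only, not fitted values from any published paper; the functional form just loosely mirrors the kind of smooth loss-versus-scale relationship scaling-law research describes.

```python
# Hypothetical scaling-law sketch: predicted training loss falls smoothly
# as parameter count (N) and training tokens (D) grow.
# L(N, D) = E + A / N**alpha + B / D**beta  -- all constants made up here.

def predicted_loss(params, tokens, E=1.7, A=400.0, B=400.0, alpha=0.34, beta=0.28):
    """Toy predicted loss under an illustrative power-law scaling curve."""
    return E + A / params ** alpha + B / tokens ** beta

small = predicted_loss(1e9, 2e10)    # ~1B params trained on ~20B tokens
large = predicted_loss(1e11, 2e12)   # ~100B params trained on ~2T tokens

# More data plus more compute yields a predictably lower loss.
assert large < small
```

The point of the sketch is the shape, not the numbers: the curve is smooth and predictable, which is why labs can budget compute for the next model generation in advance, even though which specific capabilities will emerge at that loss level remains unknown.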
Mallory:Well, that was my follow-up question is have we seen scaling laws hold with any other emerging technologies? But it sounds like maybe we haven't in the same way.
Amith:Not in the same way. I mean, this is a new animal, because what we're talking about here isn't so much that performance is going to improve, or that the cost-to-performance ratio is going to keep improving, right? Moore's law, and the law of accelerating returns, which is kind of a broader concept around Moore's law, was essentially saying that computing would double in power for the same price point, or, put another way, computing would be half as expensive, over every roughly two-year time span. So that was the Moore's law concept.
Amith:And so if you double something, and double it, and double it again, over and over, then you get this exponential curve. And so that had more to do with price relative to performance: the number of computations per second that you could do for a certain amount of money. But it wasn't that the computations could do new things; they weren't growing new capabilities. It was just doing compute at a lower cost. And most exponential curves are like that. If you think about, for example, the world of photovoltaics or solar cells, it's the same idea. The scaling law concept, by contrast, is talking about not a performance increase so much, in the traditional sense of speed, but rather a capabilities increase, where the models actually can do more for you with the same basic algorithm. The algorithm hasn't really changed, or hasn't changed much, but the capabilities of the models that get built are significantly different.
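The doubling arithmetic behind the Moore's law discussion above is easy to make concrete. This is just compound halving, with the roughly two-year doubling period taken from the conversation rather than from any measured dataset.

```python
# Compound-halving arithmetic: if cost per unit of compute halves every
# ~2 years, then after t years it has fallen by a factor of 2**(t / 2).

def cost_factor(years, doubling_period=2.0):
    """Fraction of today's cost per computation remaining after `years` years."""
    return 0.5 ** (years / doubling_period)

# Over a decade that's five halvings: compute is about 32x cheaper.
print(round(1 / cost_factor(10)))  # 32
```

As Amith notes, this exponential describes price-performance only; it never implied the computations themselves would gain new capabilities, which is what makes the AI scaling laws a different kind of claim.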
Mallory:So this would mean that we are not, at this point, as reliant on AI innovations or discoveries, we might say, as we are on the resources and infrastructure that go into AI advancement.
Amith:Well, kind of. But that's kind of like saying, what if we made a really, really gigantic steam engine? Would that be the best way to get stuff around if we'd never had any other innovations in transportation? Sure, it'd be a really giant steam engine, but it probably wouldn't be the best for everything, right?
Amith:So, in the case of this particular model architecture, we have a number of known, significant problems with it. The most notable is something called the quadratic problem, which has to do with the way the transformer architecture works: the longer the context, so the more stuff you send to the AI, the more the cost grows, and it grows quadratically. If you go from 100 tokens to 1,000 tokens, that's not a 10x increase in work, it's a 100x increase, because what's going on with the math is that you're comparing every token against every other token. That's fundamental to this thing called the attention mechanism, which is the key breakthrough that made transformers as powerful as they are, but it's a significant performance limitation at scale. There have been all sorts of interesting incremental improvements in the current model architecture to address this, but ultimately what we need is an infinite context window with what's called linear progression, meaning if I give you 1,000 tokens or 100,000 tokens, I can still get a response back in roughly the same time. Maybe a little bit longer, right, but not dramatically more time. So that's the concept: we have to solve for the quadratic problem.
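The quadratic problem described above can be made concrete with a naive attention-score computation. This is an illustrative sketch, not a real transformer: it only counts the pairwise query-key comparisons that make the cost grow with the square of the context length.

```python
# Naive attention scores: one dot product per (query, key) pair, so a
# context of n tokens produces an n-by-n score matrix.

def attention_scores(queries, keys):
    """All pairwise query-key dot products (the quadratic part of attention)."""
    return [[sum(qi * ki for qi, ki in zip(q, k)) for k in keys] for q in queries]

# Toy 2-dimensional token vectors, just to count the work done.
tokens_100 = [[1.0, 0.0]] * 100
tokens_1000 = [[1.0, 0.0]] * 1000

n_small = sum(len(row) for row in attention_scores(tokens_100, tokens_100))
n_large = sum(len(row) for row in attention_scores(tokens_1000, tokens_1000))

# 10x the tokens means 100x the comparisons: quadratic, not linear, growth.
assert n_large // n_small == 100
```

Linear-attention research aims to replace this all-pairs comparison with something whose cost grows proportionally to the token count, which is the "linear progression" Amith mentions.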
Amith:The other problem with neural networks as they are today is they're basically fixed in time. If you've ever noticed with ChatGPT or Claude, you will find that they're fixed in time in terms of their training data sets. So they'll say this most recent release of GPT-4o, as of the time of this podcast, had a training cutoff, I believe, in October of 2023, or something along those lines; maybe it was December. And so the knowledge of the AI is limited to that point in time. When they have future model releases, what they do is take that model and do more training on it with more recent data, and then they'll release, you know, GPT-4o of some future sub-version, and there'll be a new flavor, and that's the new version of the model. But the model itself is very much static.
Amith:Once you've completed training it and you've shifted it into the mode where it's serving a user, inference mode, it doesn't continue to learn. You can talk to ChatGPT all day long; it's never going to learn anything from talking to you, because you're in inference mode, not training mode. And so the architecture of the current neural networks we're working with is largely fixed, and there's research going on to make the actual training process much more dynamic and continuous.
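The training-versus-inference distinction above can be sketched with a toy model. This is purely illustrative, a one-weight "model" rather than anything like a real LLM: its weight changes only during training steps and never during prediction, mirroring how a deployed model's weights stay frozen while you chat with it.

```python
# Toy illustration of "training changes weights, inference doesn't".

class TinyModel:
    def __init__(self):
        self.w = 0.0

    def predict(self, x):
        """Inference: read-only use of the weight."""
        return self.w * x

    def train_step(self, x, y, lr=0.1):
        """Training: the weight is actually updated from the error."""
        error = self.predict(x) - y
        self.w -= lr * error * x

m = TinyModel()
for _ in range(50):
    m.train_step(2.0, 6.0)   # learns w -> 3 from the example (x=2, y=6)

frozen = m.w
for _ in range(1000):
    m.predict(2.0)           # "talking to" the model: weights untouched

assert m.w == frozen         # inference never changed anything
```

Continual-learning research, as Amith notes, is about removing exactly this asymmetry so a model could keep updating after deployment.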
Amith:So there are a number of threads of research happening in that arena as well that are exciting. There are a lot of really interesting things happening in the world of AI research that will push forward even more aggressively here. So I'm hopeful that we will see solutions that will lower the energy requirement, lower the compute requirement, and allow us to have a lot smarter AI, with AI helping create better AI in some respects. So I don't think that it's just about scaling laws. I think scaling laws are a great thing to be able to lean on, because right now we haven't had that major step change in model architectures yet, but I think we're pretty close to getting something interesting.
Mallory:That makes sense. Out of the challenges that we talked about, so power supply, chip manufacturing, data, and then legal concerns, do you feel like any of these challenges are more difficult to address or more pressing than the others?
Amith:I mean, I think the supply chain problem with chip manufacturing is a major issue; it's going to be tough to figure out how to solve for that, for a lot of different reasons. You mentioned the long lead times for new fabs, and we're talking about obviously enormous capital costs. There's also the geopolitical tension around the fact that the most advanced chips are made by one company in Taiwan, TSMC. They're a great company that does amazing work, but it's a super, super risky thing to have the global supply chain for AI chips come from a single company that's in a potential conflict zone. So there are those issues. Then, on the energy side, part of what we talk about on this podcast is how one exponential might help another exponential. So if we talk about what's happening, let's say, in the world of material science, we have AI helping with material science discovery.
Amith:Well, new materials could help us accelerate AI, if we have more efficient materials to work with. For example, superconductivity is this holy grail that we've talked about in materials in the past, and the idea of power transmission using superconductors would result essentially in lossless transmission of power. That makes the grid more flexible; we lose a ton of the power that we generate just through transmission. The same thing happens on a micro scale on a chip. You're losing a ton of the power on the chip, and the byproduct of that is a lot of heat, because of that same lossiness of power transmission, even across very, very small distances. So if we can solve some of these materials issues, that will help a lot in terms of chip design, chip manufacturing, powering these AI plants, essentially the AI data centers, and a number of other things. And I think, of course, better AI chips will help us design better materials. So that's where I think there's a potential virtuous loop, where these exponentials can build on each other in an interesting way.
Mallory:Moving on to topic two, the AI Scientist. "The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery" is a pioneering framework aimed at achieving fully automatic scientific discovery using artificial intelligence. This framework leverages advanced large language models to independently perform research tasks traditionally carried out by human scientists. It's designed to generate novel research ideas, write code, execute experiments, visualize results and produce full scientific papers. The AI Scientist can also conduct a simulated peer review process to evaluate its own work, mimicking the iterative development process of the human scientific community. This framework has been applied to several subfields of machine learning, including diffusion modeling, transformer-based language modeling and learning dynamics. Remarkably, it can produce papers at a cost of less than $15 per paper, and these papers have been shown to exceed the acceptance thresholds of top machine learning conferences when evaluated by an automated reviewer with near-human accuracy.
Mallory:The concept of an AI scientist introduces a new era in scientific discovery, where AI agents can potentially handle the entire research process, including idea generation, experiment execution and peer review, thus enabling endless creativity and innovation on complex global challenges. While the system shows promise, there are still challenges and limitations, like occasional flaws in the generated papers and the question of whether AI can propose genuinely paradigm-shifting ideas. So, Amith, you sent me this and I thought it was really neat. I looked into it briefly and I realized that it's all open source, so any one of our listeners can go and get access to this AI Scientist. I want to know, from your perspective, why is this so exciting?
Amith:Well, you know, the idea of novel science being invented by an AI, or at least partially invented by AI, is a really interesting thing. I think that what you're seeing with this current generation of the AI Scientist, as this particular approach is titled, is really kind of a pretend version of that, in the sense that the core component that has to do the invention is really the language model that's coming up with the hypotheses that then drive the rest of the process. The rest of the scientific process is only as good as the hypotheses you're putting into the front end. It's kind of like in marketing: if your top of funnel isn't that great, your bottom of funnel is not going to be that great either. So that's definitely a weak part, because current language models, even the state-of-the-art GPT-4o or Claude 3.5 or any of these things, are still largely incapable of coming up with novel ideas. Nor do these models actually have long-term planning or reasoning, or the ability to do anything beyond a facsimile of reasoning, in their current incarnation. That being said, they have the knowledge of all humans in them. So it isn't so much that they're necessarily creating novel ideas, but they might appear novel in the sense that they might be recombining ideas at scale in a way that no human would likely ever do, partly because they're doing things that are not obvious to us. So even though they're not necessarily coming up with truly novel ideas, essentially by mixing the pot a little bit, they might come up with novel-enough ideas that represent breakthroughs, that could be significant or could be minor, but things that are worth going through the rest of the scientific process with.
Amith:So to me, that's what's interesting with the current technology. What's more interesting is if you imagine a world where you fast forward and you take the AI Scientist, which, by the way, is simply a multi-agentic system. We've talked about multi-agent systems a bunch of times. It's basically AIs that work through multiple different steps, with tools available to them: multiple different AI agents, essentially, which are basically just different prompt structures that are talking to each other to solve a problem in kind of a coached way, where there's a particular goal in mind. I'm not saying that this isn't a great advancement, but the point is that it's a multi-agentic system that's focused on science. What I'm excited about is that this system is actually already pretty good. As you pointed out, it's $15 per paper, and these aren't garbage papers; these are papers that are passing pretty high bars.
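The multi-agent loop described above, different prompt structures handing work to each other toward a goal, can be sketched in miniature. Every function here is a hypothetical stand-in of my own naming; the actual open-source AI Scientist framework drives each stage with LLM calls and real code execution rather than these placeholder strings.

```python
# Minimal sketch of an AI Scientist-style pipeline:
# propose -> experiment -> write up -> review, repeated until a paper passes.

def propose_hypothesis(ideas_so_far):
    # Stand-in for an LLM "ideation" agent.
    return f"hypothesis-{len(ideas_so_far) + 1}"

def run_experiment(hypothesis):
    # Stand-in for "write code, execute it, collect results".
    return {"hypothesis": hypothesis, "result": "metrics"}

def write_paper(experiment):
    # Stand-in for the paper-writing agent.
    return f"paper on {experiment['hypothesis']}"

def review_paper(paper, accept_at=3):
    # Stand-in for the automated reviewer; here it accepts the third attempt.
    return paper.endswith(f"hypothesis-{accept_at}")

ideas, accepted = [], None
while accepted is None:
    h = propose_hypothesis(ideas)
    ideas.append(h)
    paper = write_paper(run_experiment(h))
    if review_paper(paper):
        accepted = paper

assert accepted == "paper on hypothesis-3"
```

The structure is the point: each stage is a separate agent with its own narrow job, and the review stage closes the loop, which is what lets the whole system iterate without a human in the middle.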
Amith:But what if you plug in GPT-5? What happens when you plug in the next frontier model, which is a 10x improvement over GPT-4o? Maybe those models really are capable of more. Part of the emergent property of the scaling laws we were just talking about might be, and we don't know this yet, right, this is all speculation; even the people who wrote about scaling laws don't know what the emergent properties are going to be. But if you do an order-of-magnitude increase in power, even with the current model architectures, what will you get? Will you get new capabilities? Will you get truly novel ideas that come out? That will be interesting to see.
Amith:The other thing is, again, as we talked about in the last topic, the model architectures themselves are becoming smarter. They're becoming capable of system two thinking, right? We've talked about that in the past: Thinking, Fast and Slow, the idea of system one thinking, which is kind of like the reaction, the survival instinct, and system two, which is longer-range planning. We switch back and forth between different parts of our brain, from the limbic system to the frontal cortex, and AIs basically only have a limbic system equivalent. They just instantly respond to everything as quickly as possible; that's what inference does. There's no ability to think about it and reason on it and kind of stew on it and sleep on it and come back to it, right? There's no equivalent to that right now.
Amith:The next generation of models are going to approximate that, and that's probably where you're going to see some first versions of creativity. Some people might still put that creativity in air quotes, but I think you will see interesting stuff coming out of those next models, and then, when you plug those ideas into hypotheses, into something like the AI Scientist, watch out. You're going to start seeing some interesting stuff happen. I've also seen versions of this type of approach that are tied into labs, where you can actually execute lab experiments, because there's a lot of automation that happens in labs, like in biology or in other disciplines. And so if you can tie the actual execution of a physical experiment, in the world of atoms, not just in the world of bits, into this type of model, that also is interesting, right? Should there be human supervision somewhere in there? Maybe so.
Amith:But the point is that you're really able to consider a much broader range of experiments, right? Scientific discovery is limited by resources: human ingenuity and labor, dollars, time. If we can solve for some of those constraints, you can increase the amount of scientific discovery going on. So actually, even if the AI Scientist doesn't necessarily come up with all the novel ideas, but it's just doing the rest of this stuff, that's a superpower for human scientists.
Amith:So to me, that's what got me excited about it. At the end of the last topic, I was talking about how exponentials are converging and how they're feeding each other; my example was material science leading to more efficient chips and better power distribution and so on. And my point here would be that's exactly what we're saying here too: the AI scientist could be an AI scientist in material science, or some other related field that might power the next generation of AI. And of course, as in the example you provided, they're actually having the AI Scientist do work in the machine learning discipline itself. So I think it's quite interesting.
Mallory:So, conceptually, what I'm struggling with: you mentioned that with the emergence of GPT-5, one day we might have an AI scientist that could come up with truly novel ideas. But with generative AI as a concept, and especially large language models, which are next-word predictors, predicting the next word based on what they've been trained on, I'm struggling to understand how, conceptually, the next models could be capable of novel ideas when they're predicting based on what they've been trained on.
Amith:Right. I think part of it is even more power, even more data, but the other part of it is giving the model the ability to pause and think slower, to contemplate and recalibrate its thinking based on its initial results, kind of like looping on itself the way we might, or taking complex tasks that can be broken down into smaller components, with the model reasoning out solutions for each part. There's more sophistication, more horsepower, ultimately, in that kind of model, and that's where all the research labs are heading in terms of reasoning. That's all this hype you hear about Strawberry; we previously covered Q-Star last fall, which became Strawberry. That's where the OpenAI team is heading, and it's pretty clear that everyone else is doing something similar, because that's going to be a significant change, a step change, if you will, in the functionality of these models.
Amith:My point, though, would be that even if the current models are the limit of what we have, I think it's somewhat of an esoteric argument. The purists will say no, GPT-4o is not capable of novel ideas, because it is a statistical machine based upon all of the data you fed it; therefore, by definition, it is simply replicating that which it was trained on. True statement. However, what exactly is different between that and how we come up with new ideas? We don't necessarily know, right? Our training data is our life experience. How do we come up with our next idea, what we call intuition, that feeling in the middle of the night, or when you're driving down the road and you have that eureka moment? What exactly is that? I do think it has something to do with system two thinking, that longer-range, slower process of thinking. But what is the input in the human brain that drives that ingenuity? There is no really good answer to that, and ultimately we don't necessarily know that it's not something similar, just a much more powerful version of it. Now, I'm going way beyond what I'm an expert in, in terms of neuroscience and in terms of how these AI models work at a really deep level.
Amith:But the point is that there are a lot of unknowns, and the reason I said it's kind of an esoteric debate is that even if all the thing was doing was recombining words from the past, it's doing it at such an extraordinary level that it comes up with new ideas through the combinations of what it's learned, right? That's what a lot of art is. That's what a lot of science is, too. You build on ideas, you're influenced by others in your field, you cite other papers in scientific research, right? So I think it's an interesting conversation. To me, the most important thing is the results. A few months ago, we talked about an AI model that was coming up with ideas for a whole bunch of new materials.
Amith:I think it was the GNoME project, maybe, by Google's DeepMind, and I think it came up with 120,000 novel crystal structures that were never before known. Not all of them were replicable or things that could actually be synthesized, but the idea was fascinating to me, because what it was doing was essentially coming up with new hypotheses for potential materials, some of which were good, some of which were total garbage. In that episode we talked about how hundreds of those had actually been synthesized by humans in a lab after the AI suggested they would have unique properties and applications. So I think we're already seeing this happen.
Amith:To me, it's exciting because it's applicable to a lot of other fields. We talk about science as a domain that, I think, is just generally interesting because it's about human progress. But when we talk about associations and ask how many associations, within the back office and the staff, are doing scientific research, the answer is basically none that I know of. A lot of them work in fields where their members are doing that, but they themselves are not; they're operating marketing departments and membership departments and so forth. So why do we keep talking about this stuff, and how does it apply? That's perhaps the question we should spend a minute on.
Amith:My answer to that is if AI can come up with novel, or even pseudo-novel, science, what can it do for you? How can it help you solve a sticky membership problem? A governance problem? Issues related to the way you structure your event for next year, if you're having debates in your committee about the topics you should be programming? And on and on, right? There's so much an AI can do for you when you think in these higher-order terms, which people are not thinking about yet because they're still stuck on the initial use case: is the blog post that ChatGPT wrote for me good enough to post without human review?
Amith:So I get drawn into the scientific stuff, because if we can solve for that, that's a much higher-order function than most of the back-office things people are doing, and it's applicable at the same time. It's complex multi-step planning, it's long-term execution, it's actions, right, taking advantage of tools. It's all the stuff we talk about with multi-agent systems.
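The multi-step experimentation loop described here can be sketched roughly as propose, test, keep the winner, repeat. Everything below is hypothetical: the function names and the random "metric" are invented placeholders, not the actual AI Scientist framework from the paper.

```python
import random

# Sketch of an experiment loop in the spirit of the AI Scientist idea:
# propose variants, test each one, keep the winner, and iterate. The
# functions and metric here are invented stand-ins for real agent steps.

def propose_variants(baseline: str) -> list[str]:
    # A real agent would use an LLM to generate candidate ideas here.
    return [f"{baseline} + tweak {i}" for i in range(3)]

def run_experiment(variant: str, rng: random.Random) -> float:
    # Stand-in for a real A/B test, simulation, or survey result.
    return rng.random()

def optimize(baseline: str, rounds: int = 2, seed: int = 42):
    rng = random.Random(seed)
    best, best_score = baseline, 0.0
    for _ in range(rounds):
        for variant in propose_variants(best):
            score = run_experiment(variant, rng)
            if score > best_score:
                best, best_score = variant, score
    return best, best_score

best_idea, best_score = optimize("weekly newsletter")
```

Swap the stand-ins for a real idea generator and a real measurement, and this is the "business scientist" shape the conversation turns to next.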
Mallory:It must drive you kind of crazy, Amith. We've been talking about multi-agent systems for a while on the podcast, and to think that this AI scientist exists, and that we could take it and put it into business and have a business scientist in our organization who could just run experiments right now. We have that technology right now. It's kind of mind-blowing that it's all here at the tips of our fingers, and yet so many people aren't using it, or maybe don't have the knowledge to.
Amith:I think it's partly a lack of awareness, partly a lack of the creativity to think about how to use these tools, and at the moment there are still some barriers in terms of technical sophistication. To actually pull this code down, execute it, and work with it is going to require some degree of technical skill. But that's also where AI is going to make these things much more accessible to everyone in the very near future, without any technical skills.
Amith:You know, if you were to run a thought experiment and say, well, Mallory, you're spending a lot of your time thinking about digital marketing now. Wouldn't it be great to run dozens of concurrent experiments, marketing different kinds of messages to folks and seeing what works? You're doing some of that, but you're doing it at a human scale, right, and you're pushing the boundaries of what you and your team can do. But imagine a marketing agent that is essentially like the AI Scientist: you give it the general idea of trying different ideas every single day and seeing what happens, and it goes and does stuff. Of course, you've got to get comfortable with the idea of this thing communicating with people, but that's where the human-in-the-loop idea and multi-agent systems come back into the fold.
Mallory:So many of our conversations, Amith, yours and mine, about Sidecar and about marketing are very often: yeah, let's try it, let's experiment, let's run this, let's see if it works, and let's iterate and try something new later. To think about having that power within Sidecar, it's just crazy that the technology is here.
Amith:Yeah, and in the Sidecar community, I mean, we're in a really enviable position in a lot of ways compared to the challenges a lot of our listeners have.
Amith:Sidecar has an audience of about 12,000 people, who have all opted in to receive newsletters three times a week, to be notified about webinars and education, and to read the books and e-books and all the other stuff we produce.
Amith:And the people who have gravitated towards Sidecar are those with more of an innovation mindset, people who want to push forward and see where associations and nonprofits should go, and we tell people, look, we kind of experiment on ourselves; that's part of what we do. So there's a bit of an expectation that Sidecar is a little bit out there. We're kind of nutty and we do stuff that sometimes doesn't work, right? That's the culture we set from the very beginning, years ago. Most associations, by contrast, were built on standards of predictability, excellence, and tradition, in some cases over decades or even hundreds of years, and when you have that kind of cultural backdrop, and perhaps a governance structure that's extremely risk-averse, experimenting in the way we're talking about would probably be seen as totally reckless.
Amith:And maybe it is, to some extent, in their ecosystem. That's why we always talk about how you have to do things in a sandbox and start off with really, really small experiments, so that you're not betting the farm or betting your job, but you are doing something small enough that it could show promise, which is really exciting, and if it doesn't work, you learn from it and move on to the next thing. So part of what we're trying to do is inspire people with all sorts of crazy ideas and test a bunch of them on our own business, because Sidecar is essentially an association; it's very much that type of model.
Mallory:Moving on to topic three. Showrunner AI, referred to as the Netflix of AI, is an innovative platform developed by The Simulation (formerly Fable Studio) that allows users to create and watch AI-generated animated series. The platform is designed to democratize content creation by enabling non-professional users to generate their own TV shows using AI. Users create episodes by providing short prompts, which the AI then uses to script, produce, and cast shows. Episodes can range from 2 to 16 minutes and are currently limited to specific styles like anime, 3D animation, and cutout animation. The platform is in its alpha stage and has generated significant interest, with a waitlist of over 50,000 people. Showrunner aims to make TV production accessible to a broader audience, allowing users to experiment with storytelling in real time. It features AI-generated dialogue, voices, and editing, and allows users to edit scripts, shots, and voices to personalize their episodes.
Mallory:Showrunner's launch includes several shows, like Exit Valley, a satire of Silicon Valley, and Pixels, a family comedy with Pixar-style animation. The platform's episodic nature is currently more suited to sitcoms and self-contained stories than to long, epic narratives. It does face some challenges, like the quality of AI-generated content, which some critics find clunky and hard to watch, and the platform's reliance on AI raises questions about the originality and creativity of the content produced, as we've discussed on today's episode. So, Amith, this one was a fun one for me, because anytime I see news like this in the AI space, given my other work as an actor, this stuff is always very interesting to me. But I want to take a little bit of a different angle on this. You and I have spent a lot of time talking about the book 7 Powers by Hamilton Helmer, and you can explain it better than me, but it's basically a book about the foundations of strategy: having different powers within your business that give you durable, persistent returns. Is that right, Amith? More or less?
Amith:Persistent and differential, differential meaning better than everyone else.
Mallory:Persistent and differential returns. One of the powers in that book is counter-positioning, which is interesting because that's how Netflix, at least initially, disrupted the media space, and now we see platforms like Showrunner using AI to take a potential counter-position to Netflix. So, kind of a lot of thoughts in this question, but I'm curious if you can talk a little bit about the 7 Powers book and how AI is redefining what these powers mean and what's available now to businesses.
Amith:So 7 Powers is written by Hamilton Helmer, as you mentioned, and I'd recommend it to anyone listening or watching; it's a fantastic book on strategy. I've read dozens of books on strategy over the years, and I find this one particularly enticing because it has a real mathematical foundation underneath it. So first of all, what is a power? A power is something, to the earlier point you made, that produces durable, differential returns. Durable, of course, means it lasts over time: a durable good is a refrigerator, a consumable good is a t-shirt. And differential means more profit or more margin than anyone else. This is, by the way, very applicable to not-for-profits, because not-for-profits should be, and in many cases are, extremely profitable; you just reinvest in your mission rather than distributing that cash to shareholders, since you don't have shareholders. So this is super applicable to all organizations, especially the nonprofit community, and we're going to be talking a lot more about the seven powers over time. But just to give you an example of some of the powers.
Amith:So Mallory mentioned counter-positioning, which is a great example of something that's what it sounds like: you're going up against an incumbent and saying, hey, there's a different way of providing the value, or more value. Blockbuster versus Netflix is the example you're citing there, delivering media initially through DVD by mail and then through streaming. Interestingly, Netflix counter-positioned against themselves in their second phase. Their first phase was DVD by mail, which was a more convenient way to get a physical medium into the DVD player at your home, and then they counter-positioned against themselves with the delivery of those bits over the air. And then, of course, they shifted into a completely different business that's powered by a different power, because counter-positioning is inherently fleeting: once you become the dominant player, someone else is going to counter-position against you.
Amith:Some of the other powers are things like scale economies, network economies, cornered resource, switching costs, branding, and process power, and we're not going to talk about all of them. But the essence of what Netflix is all about is scale economies. The way Hamilton Helmer describes scale economies is that you have the lowest per-unit cost relative to the value you're creating, which of course yields the highest margin, right? That's the differential return. And the way you do that is by having more customers. In the case of Netflix, they spend more than anyone on content, yet their profit margins are better because they have a much, much larger paying base. That becomes a virtuous cycle: if you have more customers paying, you can invest more in content. Your content is a fixed cost, so the more customers you have, the more incremental margin you generate, and the overall margin relative to that fixed cost goes up. And it's very, very hard to compete with that, right? Because if you have a much smaller budget for content, your content would, in theory, not be as good.
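The scale-economies arithmetic described here can be made concrete with a back-of-envelope sketch. The subscriber counts, price, and content budget below are invented for illustration, not actual Netflix figures.

```python
# Back-of-envelope sketch of scale economies: content is a fixed cost,
# so margin improves as the paying base grows. All numbers are invented
# for illustration.

def margin(subscribers: int, monthly_price: float = 15.0,
           fixed_content_cost: float = 1_000_000.0) -> float:
    revenue = subscribers * monthly_price
    return (revenue - fixed_content_cost) / revenue  # margin as a fraction

small_player = margin(100_000)    # 100k subscribers carrying the content cost
big_player = margin(1_000_000)    # 1M subscribers on the same content spend
```

Same content budget, ten times the subscribers, and the margin gap is the differential return the book describes.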
Amith:Enter AI. In your conversation here, AI can potentially produce content that is subjectively better, but it's all relative to the viewer. If I create my own content using this tool, then consume it and share it with my friends, it has an interesting effect, because it's perhaps something I enjoy and gain more utility from than the static library of content that someone like Netflix has. So in that scenario, this startup is not only counter-positioning but could potentially gain some degree of network economies, because if there's a sharing ecosystem where you create something and then promote it to your friends, maybe there's an element of that, though there aren't really two sides to it in terms of both supply and consumption. So I think it's an interesting conversation.
Amith:But yeah, that's a super brief synopsis of 7 Powers, and we use it all the time at Blue Cypress to evaluate businesses we're thinking about entering, asking whether we can create one or more of these powers in a business. From a startup perspective, it's super easy to say, of course we're going to counter-position against whoever the incumbent is, if there is one, and if there isn't an incumbent, you can still counter-position in terms of the value creation process. But then, ultimately, what would be the durable power we would go after over time?
Amith:So I think this is a super interesting conversation, and we'll probably be talking about it a bunch more on the pod and in other formats. But coming back to this AI opportunity, I just think it's fascinating. This is not a tool I've had any exposure to personally; I probably won't even look at it, even though I find it conceptually interesting. But the idea of being able to create short-form episodes that are highly tailored to whatever I'm interested in, I could see that being appealing to a lot of people.
Mallory:And I imagine, using AI to do that, the cost per unit would be essentially zero, right? So I'm not sure about scale economies. Do you think they'll hold in the age of AI?
Amith:Yeah, I think the question is whether that resource, the massive fixed cost required to produce content in the current world, becomes a lesser issue over time, right? Will it continue to be necessary to produce these massive big-budget TV series and movies, or does this potentially supplant all of that? Maybe it just becomes a category, right, part of what people consume, but it's hard to say. Another thing, of course, is that AI is in this crazy fast doubling. Maybe we're at 2 to 16 minutes now, but very quickly these things could become feature-length films, TV series, self-generating episodes, or new seasons of TV shows. There's all sorts of crazy stuff that can come out of this.
Amith:You know, about two years ago, right around the time ChatGPT came out, somebody invented essentially a multi-agent system that used an LLM to generate ideas for Seinfeld episodes. It was trained on all the Seinfeld scripts of the past, and the idea was to generate new Seinfeld scripts: first the ideas, then the actual script. Then it would feed the script into a very rudimentary text-to-video system. It was basically a really lousy animation, but it had audio and a very rudimentary form of video, and it played as one continuous Seinfeld episode. The AI just kept generating more and more frames, staying a little bit ahead of what people were watching. That was, of course, just a novelty, but the idea was interesting, right? It's conceptually similar to what you're describing here.
Amith:And to your point, that resource is no longer necessarily the constraint, if people do want to consume this stuff.
Mallory:I want to know how our listeners and viewers feel. If you're listening audio-only, feel free to send us a text; you can do that via the link in the show notes. And if you're viewing on YouTube, please drop us a comment. Are you interested in consuming this kind of AI-generated content, personalized just for you with the kind of humor you like, or are you leaning more toward human-created content? Amith, I don't know what your take on that is.
Amith:Well, you know, I could actually see a hybrid being kind of interesting. I don't watch a ton of TV; live sports is pretty much the only TV consumption I'll get into. And it's starting to get to be that time of year where I'm a little less productive, because the NFL is about to kick off. But other than that, I'm not a big TV person.
Amith:Every once in a while, someone will say, oh, this series is really great, you should check out whatever, and I'll get into the series and really enjoy it. I may not binge-watch it per se, but I'll watch it pretty consistently until I finish it, and then that'll be that, and I won't watch TV again for six months or so until someone tells me about some other great thing. I don't really rely on the recommendation engine or anything like that, though occasionally I find something interesting. But if I find something I like, wouldn't it be cool if it were almost like a choose-your-own-adventure, where you could get more episodes of a series you do like? Someone like Netflix, who owns the IP for so much content, could be in a great position to offer subscribers, maybe at a premium tier, or maybe as part of the core service, the ability for Mallory to say, hey, I want more episodes of this type, continue this storyline. Of course, there are all sorts of risks and issues with doing that.
Amith:There's IP issues, there's how do you pay the actors, there's all the other intellectual property pieces that go into that. But conceptually it's kind of interesting. Think about what Disney could do with their library of IP. It'd be unbelievable. You can imagine kids going nuts with Disney Plus, where they could create their own episodes of Marvel shows or whatever.
Mallory:I could definitely see kids going nuts with that. I don't think I've ever thought about it from the kid angle. At my core, it just feels like the antithesis of everything I believe in, so for the time being I'm not super interested in the choose-your-own-adventure thing. But I definitely think there's a market for it, and I'm not saying there's anything wrong with it either; I just think, for the reasons I consume entertainment, I can't imagine it would be satisfying to me.
Amith:Yeah, you know, I don't know. I wouldn't know until I've consumed something like this. But, for example, probably my favorite TV series of all time is 24. I don't know if you've ever watched 24.
Mallory:I've never. I don't know if I've even heard of 24. What is that about?
Amith:So you should go check it out. The first couple of episodes of the first season might take a little bit of time to get into, but it's amazing. Kiefer Sutherland plays Jack Bauer; it's basically counterterrorism. Every season is 24 episodes, each an hour, and the episodes play out in real time.
Amith:That's the idea behind the show, and it's really awesome. It was my favorite TV show for a long time, and I would love it if I could say, hey, give me a new season of 24. I would totally watch that if it was good, but I have no idea if it would be. It's an interesting exercise to think through, though. I would be shocked out of my mind if, in the next few years, we didn't have these kinds of options on at least some of the streaming services, perhaps in an experimental mode. The technology is there, if you think about it: we have text-to-video that's getting pretty good, we have the audio-to-audio stuff, and we've got all sorts of amazing capabilities for generating ideas. So I think we're going to see stuff like this, and it raises all these interesting questions of what it means, but I think there could be demand for it.
Mallory:Oh, absolutely. And I will check it out, by reason of this podcast; I can assure all of you I will consume it and let you know how I feel. That's how I'm predicting I'll feel, but, as we all know, humans are not so good at predicting how they'll feel in the future.
Amith:So that's true.
Mallory:Well, everyone, we made it to the end. We didn't have any major power outages, I don't think, or blackouts; we'll find out. Thank you all for joining us, and we will see you next week.
Amith:Thanks for tuning into Sidecar Sync this week. Looking to dive deeper? Download your free copy of our new book, Ascend: Unlocking the Power of AI for Associations, at ascendbook.org. It's packed with insights to power your association's journey with AI. And remember, Sidecar is here with more resources, from webinars to bootcamps, to help you stay ahead in the association world. We'll catch you in the next episode. Until then, keep learning, keep growing, and keep disrupting.