Sidecar Sync

24: The Sounds of Suno AI, the Stargate AI Supercomputer, and Grok 1.5 vs. DBRX

Amith Nagarajan and Mallory Mejias

In this episode of Sidecar Sync, Amith and Mallory delve into the fascinating world of AI-generated music, discussing the capabilities of Suno AI, a startup that has developed an AI model capable of creating credible, emotional music and lyrics. They explore the implications of this technology on the music industry and the potential impact on human artists. Additionally, they discuss the Stargate AI supercomputer project, a massive $100 billion collaboration between Microsoft and OpenAI to build one of the world's most powerful AI systems. The episode also covers the latest developments in large language models, including Grok 1.5 from Elon Musk's xAI and the open-source DBRX model from Databricks.

Key Insights from this Episode:

✅ AI-generated music, exemplified by Suno AI, offers novel creative avenues for non-musicians, raising questions about the future of human artists and the music industry.
✅ The Stargate AI supercomputer project underscores the significant investment and resources dedicated to advancing AI by major tech firms like Microsoft and OpenAI.
✅ AI scaling laws indicate that increased computational power can yield emergent capabilities, yet breakthroughs necessitate algorithmic innovation and novel architectures.
✅ Competition and diversity within the AI landscape, seen with models like Grok 1.5 and DBRX, drive innovation and expand choices for consumers and developers.
✅ To keep pace with the rapidly evolving AI landscape, individuals and organizations should actively experiment with new tools, models, and approaches, dedicating time and resources to exploration and learning.

Chapters:

00:00 Welcome 
01:59 Innovation in AI Music Generation
10:18 Disruption of Artistic Expression by AI
16:59 The Future of AI and Creativity
21:31 AI Supercomputers and Audio Technology
28:55 The Impact of Scaling Laws
34:02 Debating the Future of AI Scaling
41:23 AI Models DBRX and Grok 1.5
48:21 Elon Musk and AI Open Source
53:24 Exploring AI Tools and Experimentation

This episode is brought to you by Sidecar's AI Learning Hub 🔗 https://sidecarglobal.com/ai-learning.... The AI Learning Hub blends self-paced learning with live expert interaction. It's designed for the busy association or nonprofit professional.

🚀 Sidecar on LinkedIn
https://www.linkedin.com/company/sidecar-global/

🚀 Sidecar on X
https://x.com/sidecarglobal

👍 Like & Subscribe!
https://www.youtube.com/@SidecarSync
https://sidecar.ai/

Amith Nagarajan is the Chairman of Blue Cypress https://BlueCypress.io, a family of purpose-driven companies and proud practitioners of Conscious Capitalism. The Blue Cypress companies focus on helping associations, non-profits, and other purpose-driven organizations achieve long-term success. Amith is also an active early-stage investor in B2B SaaS companies. He’s had the good fortune of nearly three decades of success as an entrepreneur and enjoys helping others in their journey.

📣 Follow Amith:
https://linkedin.com/amithnagarajan

Mallory Mejias is the Manager at Sidecar, and she's passionate about creating opportunities for association professionals to learn, grow, and better serve their members using artificial intelligence. She enjoys blending creativity and innovation to produce fresh, meaningful content for the association space.

📣 Follow Mallory:
https://linkedin.com/mallorymejias

Transcript:
Speaker 1:

To me, those are the things that are exciting, because the MOE model essentially is this idea of a bunch of specialists working together versus one generalist. So you can take the most brilliant, trained human that has all sorts of great capabilities and you ask them to solve a diverse set of problems. They're not going to be as good as a bunch of specialists. Welcome to Sidecar Sync, your weekly dose of innovation. If you're looking for the latest news, insights and developments in the association world, especially those driven by artificial intelligence, you're in the right place. We cut through the noise to bring you the most relevant updates, with a keen focus on how AI and other emerging technologies are shaping the future. No fluff, just facts and informed discussions.

Speaker 1:

I'm Amith Nagarajan, chairman of Blue Cypress, and I'm your host. Greetings and welcome back to the Sidecar Sync. We have another really exciting episode for you today, where we're going to be talking about some topics at the intersection of AI and associations, and we have some really fun stuff. I'm here with my co-host, Mallory, and before we get started, let's hear a quick word from our sponsor.

Speaker 2:

Today's sponsor is Sidecar's AI Learning Hub. The AI Learning Hub is your go-to place to sharpen your AI skills and ensure you're keeping up with the latest in the AI space. When you purchase access to the AI Learning Hub, you get a library of on-demand AI lessons that are regularly updated to reflect what's new in the AI space. You also get access to live weekly office hours with AI experts, and finally, you get to join a community of fellow AI enthusiasts who are just as excited about learning about this emerging technology as you are. You can purchase 12-month access to the AI Learning Hub for $399, and if you want more information, you can go to sidecarglobal.com/hub. Amith, how are you today?

Speaker 1:

I am doing great. How are you today, Mallory?

Speaker 2:

I'm pretty good. We're nearing, or I guess we're fully in, spring here in New Orleans. I had a call with someone the other day in Minnesota, I think, and they said they had snow last week. And here in New Orleans we're full-blown spring, enjoying the good weather while it lasts.

Speaker 1:

Yeah, I was getting some pictures from some of my friends up in Utah, and they had, like, 10 inches of powder on the slopes three days ago, I think. So I was a little bit jealous, but I'm enjoying the weather out here too. It was really nice yesterday, pretty warm.

Speaker 2:

Absolutely. Well, today, like Amith said, we have a really interesting and particularly fun episode lined up for you. First we'll be talking about Suno AI, then the Stargate AI supercomputer, and finally we'll wrap up with a discussion around Grok 1.5 and DBRX. Diving into topic one: Suno AI.

Speaker 3:

I want you all to take a trip to a place where knowledge will never fail. It's called Sidecar, the AI Learning Hub, where you can level up and rise above. In this digital age, AI's the way to go, learning the skills that'll make your mind glow. Sidecar's got the courses, the tools and the way to unlock your potential. Seize the day. Sidecar, Sidecar, taking us high. AI learning, reaching for the sky.

Speaker 2:

So, believe it or not, Amith and I did not create this song ourselves. We used AI, as you might have guessed, with a tool called Suno. Suno is a startup that has developed an AI model capable of generating credible, emotional music and lyrics, including a blues song called Soul of the Machine that went viral. I had a chance to listen to that one myself and recommend that you all check it out. The model collaborates with OpenAI's ChatGPT to generate the lyrics, while the Suno model creates the music itself, including realistic-sounding vocals and guitars. The technology has sparked debate and controversy around issues like cultural appropriation, the use of copyrighted training data and the potential impact on human artists. Music industry experts, like Living Colour guitarist Vernon Reid, have expressed a combination of wonder, shock and horror at the capabilities of Suno's AI-generated music. The Suno AI founders have ambitious goals to democratize music creation, allowing anyone to make professional-sounding songs without any musical training or instruments. So, Amith, you're the one who created this song. Can you talk a little bit about your experience with Suno?

Speaker 1:

Yeah. Anyone who knows me at all knows that if I could have negative musical talent, I'd be there; I'm definitely at zero. If I were to create a song, it would be the stick-figure equivalent of music, not very good at all, and my kids can attest to that. So it's definitely interesting to hear what Suno can create from a simple text prompt. I've been wanting to play with this for the last three weeks or so since I heard about the tool, and this weekend I actually started off with something different that we didn't play.

Speaker 1:

The thing I ended up with was the Sidecar song; I think that was the second or third prompt I put in. But my youngest child was heading off to the beach this weekend, since it's spring break here in New Orleans, to celebrate a friend's birthday, and that was an occasion for me to say, okay, let me see if I can create a cool song for that kid's beach party. So I went in, put in these different prompts, and came up with this really cool song that I was able to text to my kid, who played it for all their friends, and they were like, wow, this is amazing. It wasn't really a happy birthday song; it was more of a we're-celebrating-you-on-the-beach song, and it even had cover art that was beachy. It's pretty impressive. And for me, having no musical skill at all, it was kind of cool to be able to have an idea and then turn it into a song. It's kind of like having an idea and turning it into a video with something like Sora, which we've talked about, or with Stable Diffusion's video diffusion models and others that are coming. When we talk about multimodality in AI, we're talking about any kind of input and any kind of output, so text or audio or video, which we've talked about a lot. On the output side, it can of course be text, but it can also be images, video, and now music, which is essentially another modality. It's a super interesting creative outlet for people who don't have skill in these areas. I also think it could be a really good companion for artists who want to quickly experiment with different concepts, which can then lead to their own musical creations. So I think it's interesting, but we'll talk more about that in a bit.

Speaker 1:

I think the experience was very simple. It's a text prompt: you can specify the style of music and what the lyrics are going to be. I even messed around with multilingual capabilities and asked for a French song, since all of these kids are in a French immersion school here in New Orleans. But I didn't end up sending them the French song, and the reason is I didn't totally trust the AI. I didn't want to send a bunch of teens a song when I had no idea what it was saying, so I decided to go with the English version. I tried to get the English version with a French accent, but I couldn't figure out how to do that. It was fun, though.

Speaker 1:

You know, being a business guy for 30 years, I figured I would try to think about the business application. That's when the Sidecar song came to mind. I'm like, it'd be kind of cool to have a little Sidecar theme song about how important it is to learn AI, which we talk about on this pod so much. So I put in, I think on the second try, not even a paragraph, just a couple of sentences. I explained what Sidecar does, I explained our passion for providing AI learning to associations and nonprofits, and out popped the song we just played.

Speaker 2:

Wow. I'm assuming the French song you were trying to create was the one about baguettes. Is that right?

Speaker 1:

Yeah, that one. And I tried to create another one that was something about the beach and water and having fun, and that's where I got a little bit uncomfortable, because I'm like, I have no idea what this guy's saying in the song.

Speaker 2:

Fair point, fair point. I think you shared the baguette one with me and I was like, what is he thinking? Why is he creating a song about baguettes? Maybe you love baguettes; a lot of people do. Well, very neat. With the prompt you used to create the Sidecar song, how long did it take you, all in all?

Speaker 1:

I mean, in total, across all these different little tests I did, I spent maybe 15 minutes in the tool.

Speaker 1:

So the Sidecar one was the third or fourth one I did. It took me a minute to play with a couple of different prompts, so it was super quick, very fast. The technology behind Suno, as you described, is a mixture of a large language model, in this particular instance GPT-4, and a diffusion model, which is capable of generating modalities other than text, and the two interact with each other in an interesting way. From a technology perspective it's cool, because we have a combination of different tools coming together under the hood. In fact, when we talked about Sora a few weeks ago, which we don't have a lot of details on yet, we know that that is a transformer-diffusion combination as well. It's an interesting time, because people are starting to figure out how to take pieces of technology that exist and recombine them to create new kinds of capabilities and new applications.

Speaker 2:

This topic was a tough one for me, I think, in terms of outlining what we'd talk about in the episode, because professionally I think this sounds great. We at Sidecar have never experimented with music creation before, and this opens a whole new wave of opportunity for things we can do in our marketing efforts, like adding music to our website. It's really fun on that end. On the personal side, I haven't really talked about this much on the podcast, but I'm also an actor outside of my nine-to-five job.

Speaker 2:

I like to think I'm creative all the time, in and out of my nine-to-five, but I am a creative at heart and always have been. Seeing a tool like this, I really relate to the quote I shared in the summary: I have a mixture of awe and admiration and horror, because I realize now how easy music is to create, and it's already tough enough to make a living as an artist. Amith and I have talked about this before. We do believe art for art's sake will continue, in the same way that painting has persisted for a long time and theater has persisted for a long time. But making a living from art, with a tool like this, seems under attack. So I want to get your take on this, Amith. I know we like to be AI optimists, but I will say this is a hard one for me to swallow.

Speaker 1:

I agree 100%. I don't know if there is a solution to this, other than to acknowledge that this type of progress is incredibly disruptive to a lot of people's livelihoods. I think it opens the door to creative expression for people who don't have those skills, and even for people who do, it can potentially be a companion that helps them accelerate and improve what they do. But remember, this is basically AI 1.0 collectively. I know we have models like GPT-4 and Gemini 1.5, but what we're in right now is AI 1.0 in the eyes of the consumer. AI is, of course, 60 years in the making, as we talk about on this pod a lot, but what we have now from a broad-adoption perspective is 1.0, and we're going to be easing into versions 2 and 3. And I mean that kind of generationally, every couple of years. We have the worst AI today that we'll ever have. Think about what we'll see soon with video models, where you can say, hey, give me a five-minute intro-to-Sidecar video that combines content from our website, perhaps clips from some of our existing learning assets, but has some original creativity to it, where an AI avatar brings it all together; maybe that AI avatar breaks into song at some point. There are some really creative, interesting marketing things you could do. The people who would have created that video would no longer be hired by someone like Sidecar to create it. The flip side, though, is that Sidecar, as a smaller company, would never have created a video like that, because traditionally it would probably cost hundreds of thousands of dollars to produce anything of value like what I just described. So I do think demand will increase: there will be way more music, way more video, way more demand for these creative forms of expression, because that's what has happened every time there's been a step change in access to a technology. Go back to the printing press and the availability of information: people no longer had a hard time getting access to books the way they did when books were handwritten, because all of a sudden it was cheap to produce a book, and lots of people could access that information. In a similar way, I think this broadens and democratizes access.

Speaker 1:

That being said, I don't know what's going to happen in the near term, because there definitely are cases where people will stop using professionals in graphic arts and in music whom they might otherwise have used. Think about ad campaigns for TV. If I'm an ad exec at a big Madison Avenue agency and I have a client who wants to create TV spots or YouTube shorts or whatever, I now have these other tools available to create lots of experiments. Do I still need musicians and artists? Maybe, but maybe I need fewer of them. But I also think people who think forward about this will say, actually, I want to keep all those amazing professionals in my employ, and then I want to dramatically increase the volume and the quality of the work I create, whereas before you might have only used music a little bit, because it was a rare thing you couldn't easily create.

Speaker 1:

I have no idea what it would cost to create a corporate jingle, a 30-second song, for Sidecar. Probably many thousands of dollars. But now it's basically free. So it is both of these things; we're right in the middle of this, and we're early in the process, so we have a lack of visibility. I think that's a great quote, we should probably post it on our website: that mixture of wonder, shock and horror. You can have those emotions simultaneously. It's kind of like going to one of those really high-end fine-dining restaurants; I feel the same way, because it's really interesting, but it's also usually pretty terrible. At least, I'd rather have a burger and fries personally.

Speaker 2:

Exactly, a mix of shock, horror and awe. I think what it is for me, personally, is that I've taken this stance of being an AI optimist while choosing to consume art that is human-created. In my mind, that was kind of my path forward. I think people consume art for a lot of different reasons; sometimes it's entertainment, sometimes it's really lighthearted. But for me personally, I like the story, the empathy component, seeing stories that are relatable to me represented in music and film and on TV. And what's really hard about this in particular is that you cannot tell this song was created by AI. So my personal belief is kind of starting to crumble, because I won't be able to hear a song and say, that was created by a storyteller, a human, versus an AI model. I think that's just something I've unfortunately got to get over, or I've got to figure out a new way to go about this.

Speaker 1:

One of the interesting ways to think about labor and labor markets is through distributions, like the typical bell curve, which applies in a lot of fields. In labor economies you have a distribution of skill level. So if you think about the broader scope of a particular field, whether it's musicians or visual artists or whatever, there are people on the very right-hand side of that curve, the top few percent of their field, and people on the other end, but there's also the broad middle, probably 80 percent, which is most people. Those folks, I think, are going to have a choice to make: they either embrace AI as a companion tool to lift the quality of their work, or they're probably going to have a hard time. Now, I think over time everyone will embrace these tools, just because the economic forces are so hard to go against.

Speaker 1:

Regardless of your opinion, these things exist. Something I can relate to a lot more closely is software development. I've been doing software development for pretty much my entire life, and coders now who don't know how to use AI are basically irrelevant, while coders who are good at using AI have superpowers. They're able to go out there and do so much more.

Speaker 1:

A lot of people who aren't familiar with software development and programming might think that's a very different kind of analogy, because software is more of an engineering thing, and obviously music and graphics are more artistic.

Speaker 1:

But I will tell you that software development is actually an extremely creative process as well; that's what attracts a lot of people to it. It's a mixture of science and art in a lot of ways. Ultimately, the developers who know how to use AI are the ones who are going to be highly employable, and the people who are slow to adopt it, or who push against it, are going to have a hard time. I think we might see the same thing here. In the distribution I was referring to, the people in the middle are the most susceptible. I don't think Beyoncé has a lot to worry about, but people who aren't her, who aren't in that top 5 or 10 percent tier, are going to have a really hard time if they don't figure out a way to use this stuff.

Speaker 2:

I think the key is what you said: we've got to think about AI as a companion tool, and about opportunities for humans to guide AI so they can create more than they ever could before. That's why I feel super fortunate to be in the position I am, as a creative who also talks about AI every week on a podcast. I've told you, Amith, it's a taboo topic to bring up AI with other actors, especially the great opportunity it can and will create in the world. But I think we should stress that we need more creatives working with AI all the time. As a side note, Suno has only 12 employees, which I thought was pretty small, and many of them are musicians. That made me happy. Reading about Suno, I was like, okay, we've got musicians creating this tool; hopefully they're working with a creative's interest in mind.

Speaker 1:

Well, I think the number of employees is an interesting thing to think about.

Speaker 1:

The capability of what Suno does is very impressive, regardless of how you feel about it, and it's 12 people doing that with probably a really modest amount of capital relative to some of the bigger AI companies. Or take Midjourney, which you've used quite a bit.

Speaker 1:

I think they have something like 28 employees and over $100 million in revenue. There are a lot of companies out there with a very limited number of people, and you're shocked by what they can produce. Part of it is that if you're an AI company, you're really good at using these tools yourself. I guarantee you the Suno team is all over using AI in every phase of their work, independent of what their product does, and that's the key to it. If someone were to ask, what would it take to create an AI music generator, I might say a thousand people, a billion dollars and five years. And now, all of a sudden, there's Suno. I don't know how long they've been at it, but probably not super long, and there will be many Sunos, many other products like this.

Speaker 2:

It seems like AI-generated audio has been lagging behind text and image. Would you say that, in order from least to most complex to create, it would be text, image, video, then audio, with audio being the most complicated to produce?

Speaker 1:

Audio is interesting, because there's definitely more information in audio than in text, and a lot less information than in video. Video is probably the most complex, because you not only have the most information-dense modality, you also need an understanding of physics, what a lot of people call a world model. None of these models has a direct understanding of physics. The video models, and even the language models to some extent, learn physics through examples, so they approximate physics, but they don't have a rules engine for it, and that makes video harder. That's why there's a lot of speculation that Sora from OpenAI has some kind of world model inside. Yann LeCun, who's the head of AI at Meta, talks a lot about the need for a world model, a physics engine trained explicitly on that, to really power future video models. This is also super relevant in robotics: think about video generation, then think about the flip side, video consumption, and bots that need to exist in the physical world and have a complete understanding of it. We talked about that recently on this pod as well, with humanoid robots and the advancements coming there.

Speaker 1:

But coming back to the question about audio, I actually think audio is a relatively straightforward problem compared to images and video; it's just a less popular category. As an example, ElevenLabs has had extraordinary AI audio tools for some time. They're audio-only, or maybe they have some video now too, but I'm pretty sure they started audio-only, and that's primarily their focus. We've talked about HeyGen here a number of times, and they just released their version 5 studio, which includes a number of new tools; they're a video company, and there's overlap between the two. ElevenLabs is actually super popular, just not as well known. I think it's a modality people don't think about as much.

Speaker 1:

You can do things like AI voice dubbing. Some of our companies want to standardize the professional voice they use for all their video content, so they'll have anyone record a video. Take MemberJunction, our open-source project for a common data platform that we've talked about here: there are a lot of people involved in that project, and we want to record videos of little demos of the software, how-tos and tutorials, and it's nice to be able to have a consistent brand voice across all those videos. You can do that very easily with ElevenLabs' voice-to-voice dubbing, and there are lots of applications. I think the technology is actually excellent.

Speaker 1:

There was a company last week that announced a public beta of an AI audio-to-audio assistant that understood emotion. We'll have to look it up for the show notes; I forget the company name now. But a lot of people are talking about that, and I think it's kind of obvious that it exists, because all the capabilities are there. It's a matter of what we build with these fundamental building blocks.

Speaker 2:

On that physics note for video, I feel like it's worth diving in just a little bit. Thomas Altman, who leads the intro-to-AI webinar we do every month, was talking to me about Sora, and about the video they released of an elephant made out of leaves. Have you seen it? I feel like you see it and just say, oh wow, that's really neat. But he was talking about how there had to be an understanding of how an elephant walks, and of the leaves and gravity and how they flow in the wind, and when you really start to think about that, these text-to-video models are wild.

Speaker 1:

They're incredibly sophisticated, and it's remarkable how far you can get just with inferred understanding versus an explicit understanding of physics. Either way, we're having more and more of these solutions come to market that are clearly more and more realistic. I think audio is an area associations can really lean into, because so much of the content we have in text format can be converted to audio. Some people like to listen to their content, as evidenced by the audience for this podcast, versus reading a book or a blog. The modalities of your content can be very fluid now, and that's exciting. You can also think about translation, which we've talked about, where you go from English to some other language; you can even go from English to Australian English, which can be fun. So there's a lot of reach opportunity and better accessibility for your content. Translation isn't just about languages; it can be from one modality to another, and associations sit on a massive repository of content. Language models can help you understand that knowledge, but these other models can help you really activate that content and put it in front of your audience in different ways to engage them. That, to me, is super exciting. And music, going back to Suno, is just another modality, where you could potentially have songs for all sorts of things.

Speaker 1:

It's interesting, too, because we talk about how our species is hardwired to remember stories well, and we also remember songs in a different way. I don't know the neuroscience behind it, but my wife's a great example: she remembers every single jingle from every commercial she's ever heard and sometimes just breaks into song in the middle of the day, which is super fun. I don't know how she remembers all this stuff, but when she sings something I'm like, oh yeah, I remember that from the 1980s. There's definitely a different form of memory somewhere in the brain for song and for story, and I think that's a really powerful thing to pick up on. If we translate our message, either as marketers or as educators, into song and into story, we really make it come alive. Think about the story and the song coming together in a video, with animation or full-motion video, to explain something to different audiences. It opens up unbelievable opportunities to educate and inform folks in a variety of ways.

Speaker 1:

So I get really pumped about it, because none of that would ever happen with human labor alone. That stuff is way beyond the scope of what we have; even if all 8 billion of us trained on how to do it, we wouldn't have enough people. That's where AI scale is really exciting to me, while at the same time I think we have to be cautious about it, because what you said earlier really resonates with me: this is very shocking too.

Speaker 2:

Well, I would love to hear from all of our listeners: are you more excited? Are you more horrified? Are you feeling both? Let us know in the Sidecar community or on LinkedIn; we'll have both in the show notes.

Speaker 2:

Moving to topic two, the Stargate AI supercomputer. Microsoft and OpenAI are in discussions to build a massive new AI supercomputer data center project called Stargate that could cost over $100 billion. Stargate is envisioned as the largest and most advanced data center in a series of installations the companies plan to build over the next five to six years. The Stargate supercomputer would use millions of specialized server chips to power OpenAI's next-generation AI systems, like GPT-5. The project could launch as soon as 2028 and would be over 100 times more expensive than today's largest data centers. Microsoft would likely be responsible for financing the project, which could require exploring alternative power sources like nuclear energy due to its massive 5-gigawatt power needs. The high cost is driven by the need for vast computing power to train advanced AI models, as well as challenges in finding enough specialized AI chips. The project has not been officially greenlit, and its future may depend on OpenAI delivering on its promise to significantly boost its AI capabilities.

Speaker 2:

The Stargate project represents the enormous scale of investment that major tech companies are pouring into the race for advanced AI capabilities, with Microsoft and OpenAI aiming to build one of the world's most powerful AI supercomputers. So, Amith, I'm definitely interested to get your take on this. Talking about these supercomputers and $100 billion, it's kind of hard to put that into human terms. Can you bring this back down to earth for us and explain why it's so important?

Speaker 1:

Sure. Well, the first thing I'd mention is that $100 billion is obviously an enormous sum of money, but it also represents about eight months of profit for Microsoft. So it's well within the realm of what they and other large corporations can do in terms of resources, which at this point go beyond the scale of what most governments can invest. That's an interesting conversation by itself: the Magnificent Seven, as they call them, have driven growth in the tech economy and the stock market, and how much control and power do they have? But that's a separate conversation. The point is that $100 billion isn't a question of how much debt they'd have to raise; they have the cash coming in consistently for this, and companies like Apple and Google and now NVIDIA are in a similar position, although NVIDIA's cash flow is obviously way smaller than Microsoft's or Google's. But I digress. It's an achievable financial goal, and the theory behind that level of investment is that scale will solve all ills. There are these so-called scaling laws in AI that thus far have held true: when you add more compute, with compute being the broader term for processing, memory, storage, et cetera, when you make your computer bigger, you get emergent capabilities. You take the same model, run more data through its training, give it more power, and it gives you new capabilities you hadn't necessarily predicted. Just using the OpenAI timeline, think about GPT-1 to GPT-2 to GPT-3 to GPT-4: four major versions of that model over a number of years, with roughly a 10x, an order of magnitude, increase in compute each time they've trained a new model. And with that, they've seen these emergent properties, where GPT-4's capabilities were radically greater than GPT-3's, even in categories they didn't expect, for example being as capable as the typical medical school graduate and passing a medical licensing exam. Or even back in the GPT-3 days, when it came out after GPT-2 with the same basic architecture, obviously improved somewhat but the same basic concepts, it was able to start coding, and that's what led to GitHub Copilot taking the world by storm. So these emergent properties are believed to keep appearing as scaling continues: throw more hardware and more money at it, and you'll get better results.

Speaker 1:

Now, that isn't to say that people who believe in the scaling laws don't believe in algorithmic improvement, or in making better models or better training approaches. For example, with training, companies like Mistral don't use RLHF, reinforcement learning from human feedback. They use something called DPO, direct preference optimization, instead, which is about three to six times more efficient and similar in quality of output. It's one of the reasons Mistral has been so quick and so capital-efficient in getting really powerful models out there. They just released Mistral Large about a month ago, and it's very, very close to GPT-4 and Claude 3.
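For listeners who want to see what the DPO reference means in practice, here is a minimal sketch of the direct preference optimization loss from the Rafailov et al. (2023) paper. The variable names and the toy batch values are illustrative only; this is not Mistral's actual training code.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Direct Preference Optimization loss (Rafailov et al., 2023).

    Inputs are the summed log-probabilities that the trained policy and a
    frozen reference model assign to a preferred ("chosen") and a
    dispreferred ("rejected") completion. Unlike RLHF, there is no separate
    reward model and no reinforcement learning loop.
    """
    # Implicit reward: how far the policy has moved from the reference
    # model on each completion, scaled by beta.
    chosen_reward = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_reward = beta * (policy_rejected_logps - ref_rejected_logps)
    # Logistic loss that pushes the chosen reward above the rejected one.
    return -F.logsigmoid(chosen_reward - rejected_reward).mean()

# Toy usage with made-up log-probs for a batch of two preference pairs.
loss = dpo_loss(torch.tensor([-12.0, -9.5]), torch.tensor([-14.0, -11.0]),
                torch.tensor([-12.5, -9.8]), torch.tensor([-13.0, -10.5]))
print(loss)  # a scalar training loss
```

The efficiency gain mentioned in the episode comes from this simplicity: preference pairs feed a single supervised-style objective, rather than training a reward model and then running a reinforcement learning loop against it.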

Speaker 1:

What I'm getting at is that there's another school of thought that says, first of all, we're throwing way too much resourcing at this. Independent of the money, the environmental impact of what you just described is truly horrendous. If you say we're going to throw as much money as we want at it, how in the world do you power it? How do you power it responsibly? Those are big open questions. So can we instead get an order of magnitude, or multiple orders of magnitude, of improved efficiency, smarter models, et cetera?

Speaker 1:

You know, Yann LeCun, who I've mentioned already on this pod, is someone I like to follow a lot, because he's a contrarian in a lot of respects.

Speaker 1:

He's one of the original AI godfathers, so to speak, and he's an amazing guy.

Speaker 1:

But he's skeptical, not so much of scaling laws, but of the idea that current transformer architectures and language models can solve a lot of these problems on their own. He believes other architectures and other approaches are needed, and I think there's probably truth in both camps ultimately.

Speaker 1:

But the point here is that a project like this, $100 billion of investment over a handful of years (you said 2028, and that's around the corner; by then we'll be on GPT-6 or GPT-7 maybe, who knows what that looks like), tells you things are not slowing down, they're speeding up. That's my main takeaway from this topic. And this is one company, because you can kind of think of Microsoft and OpenAI as one thing in terms of the investment side. There are lots of companies with similar resourcing going after this, and tons of other organizations with not quite that level of resourcing but different ways of thinking about it. So the takeaway for associations, in my mind, is not so much the number of gigawatts or the number of dollars or the exact timeline, but that the pace of change in AI will accelerate from where we're at, not slow down.

Speaker 2:

That makes sense. I want to dig more into this idea of the scaling law, or scaling laws, as terminology. A law is something you know to be true, and you said some people believe in scaling laws. I don't know if that was just the word you chose to use, but is it that some people believe in this and some people don't? Or is it that some people say we need to do this at all costs, and some people say we need to move more cautiously?

Speaker 1:

Empirically, thus far the scaling laws have held true: when you throw more resourcing at the problem, you get these order-of-magnitude improvements in performance. The question is whether they will continue to hold. It's similar, in a way, to the so-called Moore's law, which Gordon Moore never called a law himself; he just observed that transistor density doubled every 24 months at the same price point.

Speaker 1:

And even back in the 80s and 90s there were always skeptics saying, oh, Moore's law is dead, it's not going to hold. So it's a similar debate: will the scaling laws continue to hold here? Will we be able to get that much more? It's a totally different context, but a similar debate. It's a quote-unquote law in the same sense that Moore's law is: nothing guarantees it will continue, but it has held so far. If you look at the data and ask what the curve shows in terms of capability relative to training data set size, and therefore compute, it very much shows these capabilities growing in alignment with that so-called law.
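As a rough illustration of what "capability growing with compute" looks like quantitatively, here is a tiny sketch using the power-law form fitted in DeepMind's Chinchilla paper (Hoffmann et al., 2022). The constants are that paper's published estimates and the "about 20 tokens per parameter" rule of thumb is also theirs, so treat the outputs as illustrative, not as predictions for any particular model.

```python
# Toy illustration of a neural scaling law: loss falls as a power law in
# parameters N and training tokens D. Functional form and constants follow
# the fits in Hoffmann et al. (2022, "Chinchilla").
def predicted_loss(n_params: float, n_tokens: float) -> float:
    E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28
    return E + A / n_params**alpha + B / n_tokens**beta

# Each row is roughly "10x the model," trained Chinchilla-style with about
# 20 tokens per parameter; loss keeps dropping, with diminishing returns.
for n in (1e9, 1e10, 1e11):
    print(f"{n:.0e} params -> predicted loss {predicted_loss(n, 20 * n):.3f}")
```

The debate in the episode maps directly onto this curve: the scaling-law camp bets the trend keeps holding as you move right, while skeptics note the diminishing returns and argue new architectures will matter more than the next 10x of compute.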

Speaker 2:

Okay. I read that Sam Altman has publicly said that the main bottleneck holding up better AI is the lack of sufficient servers to develop it. Based on the conversations you and I have had on this podcast, I was thinking we were more in a discovery bottleneck, not fully understanding the black box of AI and how to create AGI, multimodality, et cetera. Text-to-music, in my mind, is a good example, because it seems like we had all the pieces there, but then it just kind of appeared out of nowhere; maybe it's been around for a minute, but that's how it seemed to me at least. So my question is about the scaling laws: do you think that if we had all the servers and chips we needed right now, we would have no AI questions left?

Speaker 1:

I don't believe that's true. I think there's plenty we don't know, so just throwing more horsepower at it doesn't mean you're going to solve the problem. I'm not really qualified to answer that question, because I'm not an AI scientist, but I just don't believe the current algorithms are enough. They've evolved, but they're a little bit old now; what we're deploying are evolutions of the transformer architecture, which came out of a paper from 2017, and there's been a lot of work in the seven years since then. We need new models, new architectures, new ways of thinking about it. And the beauty of the world we live in is that a lot of people are competing for this, it's a global competition, and what we'll see is a lot of innovation.

Speaker 1:

People who don't have the resources that Microsoft and OpenAI have are forced to figure it out in the proverbial garage: how do you build the next great AI model? Going back to Yann LeCun again, he points out that a 17-year-old can learn how to drive an automobile with a very limited number of hours of practice. We've all gone through that, if we're drivers. It doesn't take a million hours of video training data to train our brains to safely operate a vehicle; it takes a handful of hours, with some feedback from an instructor, maybe a parent, and most people learn to drive reasonably safely. But the models need a gargantuan amount of video, so there's a radical inefficiency there if you think about it. So there will be breakthroughs over time. I don't know when they'll come or who will make them, but I'd be rooting for the underdog, and actually kind of betting on the underdog a little bit, in terms of the people going after these alternative architectures. We're in an era where we're just starting to discover things. It's almost like when the double helix was first discovered: we learned what DNA is, and all of a sudden that exploded scientific discovery in biology. Similar things are happening in AI right now, and we just don't know.

Speaker 1:

So for the people who want to throw $100 billion at it, have at it, go for it. I hope they power it responsibly in some way; I don't know how you do that. But there are also plenty of people going after this prize with a very different mindset. Again, follow Mistral: those guys are doing some really interesting work. Their models are really good, but that's not what's most interesting; their approach is. It's based on super-high-quality, much smaller data sets and a different training methodology, the DPO I mentioned instead of RLHF, which we won't get into here, but it's basically a more efficient approach. And there will always be this kind of advancement.

Speaker 1:

And one of the things we have to remember is that the established, entrenched players kind of want it to require $100 billion, because that protects them. So what if someone comes up with a way of doing it with, say, a measly $1 billion or something like that?

Speaker 1:

Well, what does that mean for OpenAI? They'd have far less of a moat, so they're kind of hoping the $100 billion really is required. And one of the things that happens to all of us is that if we keep saying the same thing to ourselves, we start believing our own stuff and thinking it's the only way. Even the most brilliant people in the world are susceptible to that kind of self-confirmation bias, and to other biases on the flip side of it. There's also the classic innovator's dilemma: if you're a large, established business, it's hard to disrupt yourself, and in many ways it's hard to disrupt the particular thread of scientific progress you're going down when you've spent all your time, energy and emotional investment saying that's what we're going to do. So I think it's going to be super interesting to see what happens, but I think the scaling laws by themselves probably don't get us to AGI. I think it's a mixture of things.

Speaker 2:

Interesting. Yeah, I was just thinking logically: if compute were the only thing holding us back, it seems like you would see every major giant doing something similar. Or, in my ideal world, is there a place where these giants, these big tech companies, work together and create a supercomputer together? But I'm guessing that's not really how the business landscape works.

Speaker 1:

There's a lot we don't know. It's kind of like if I said, hey, Mallory, let's go create a supersonic jet together. Let's take an Airbus A380, the largest passenger jet in service in the world, and strap on 10 extra engines; we'll figure out a way to throw extra engines on the wing, somehow maintain its airworthiness and not crash. Are we going to get to supersonic speeds? No, we're not, because there are some fundamental issues with the design of that aircraft that prevent it from achieving that.

Speaker 1:

And we have an understanding of that now, in terms of aeronautical engineering and all its sub-disciplines, which I know little about, though I'm an enthusiast. It's the same idea here: we have this really cool but not super-advanced model architecture; let's throw a whole bunch more engines on it, in terms of compute, and hope it goes supersonic or hypersonic. Who knows, maybe it'll work, but I have this nagging belief that we need a lot more innovation in the algorithms.

Speaker 2:

That makes sense. That's a good example to contextualize all this abstract supercomputer stuff. Moving on to topic three: Grok 1.5 and DBRX. xAI, the AI company founded by Elon Musk, has unveiled the latest iteration of its Grok large language model, called Grok 1.5. It's not out yet, but it was announced last week that it would be out this week, so it very well might be out by the time you all hear this. Grok 1.5 enhances several aspects of the earlier Grok 1 model, particularly in logic, math and coding abilities. Benchmark tests show Grok 1.5 more than doubled Grok 1's score on the MATH mathematics benchmark and jumped more than 10% on the HumanEval test for coding and problem solving. A major upgrade is the expanded context window of 128,000 tokens, far beyond Grok 1's 8,000 tokens. This allows Grok 1.5 to maintain longer conversations and refer back to earlier parts of the discussion.

Speaker 2:

It's unclear for now whether Grok 1.5 carries over the unrestricted traits of the previous Grok models, which Musk has described as having a rebellious streak and being willing to engage with controversial topics. On the other hand, we have DBRX. Databricks has introduced DBRX, a new open-source, general-purpose large language model that outperforms established open-source LLMs like Llama 2, Mixtral and Grok 1 across a range of benchmarks. DBRX also surpasses the performance of GPT-3.5 on most benchmarks and is competitive with the proprietary Gemini 1.0 Pro model. DBRX is especially strong as a code model, outperforming specialized models like CodeLlama 70B on programming tasks. It uses a fine-grained mixture-of-experts architecture, which provides 65 times more possible expert combinations compared to other open mixture-of-experts models; we'll explain that, don't worry. This improves overall model quality. So, Amith, what are your thoughts on these two models? Again, we haven't seen Grok 1.5, but it might be out by the time everyone listens to this podcast. I'm also not sure if you've tested DBRX yourself.
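On that "65 times more combinations" figure: assuming DBRX's published configuration of 4 active experts out of 16, versus a Mixtral-style 2 of 8, the ratio of possible expert subsets works out exactly, as this quick check shows.

```python
from math import comb

# DBRX's fine-grained MoE activates 4 of 16 experts per token;
# Mixtral-style MoE activates 2 of 8. Counting the possible
# expert subsets reproduces the "65x more combinations" claim.
dbrx_combos = comb(16, 4)            # 1820 possible expert subsets
mixtral_combos = comb(8, 2)          # 28 possible expert subsets
print(dbrx_combos / mixtral_combos)  # 65.0
```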

Speaker 1:

I have not. I haven't played with either of these two models yet. But the most compelling aspect of this builds on the conversation we just had about competition and alternatives: both are accessible models that you can run locally or in your own private cloud environment and do whatever you want with. The most notable thing is their speed and their capabilities at logic and math, which also extend to coding. That's a very important domain, because when you can get a model to generate reliable, high-quality, significant code, you can build on it; you can build a lot of interesting solutions. To give you one example, one of our teams here has built an AI called Skip, which is an AI data scientist. Skip has a ChatGPT-like conversation with a business user. The business user says, hey, I'd love to analyze my member renewal trends, or I want to understand my event registration data, or whatever. Skip has a conversation with that user, and then ultimately Skip writes code: it generates a program, tests that program against your data and presents the user with the result, just like a human analyst would. It's really cool. The reason I bring up Skip is that, under the hood, Skip primarily uses OpenAI's GPT-4 model right now, but we're evaluating a number of other models, including Claude 3, and we'll definitely be looking at these models as well, because code generation, as good as it is, is nowhere near perfect. Some of our listeners might be familiar with Devin, an AI software engineer announced about three or four weeks ago, and OpenDevin, its open-source equivalent. These are basically tools, not models; they build on top of various models to do software development for you. If the underlying brain is GPT-4-esque, which is good but not fantastic, it's a limiting factor, and what you can engineer solution-wise is limited. So with DBRX and Grok 1.5, we have yet more high-quality, low-cost options for code generation. I know they have other capabilities, but I think their advances in code generation and math are the most notable and the most applicable. To me, those are the things that are exciting.

Speaker 1:

As for the mixture-of-experts comment: in the future it's almost going to be taken for granted, because everyone will use that architectural approach. The MoE model is essentially the idea of a bunch of specialists working together versus one generalist. You can take the most brilliant, trained human with all sorts of great capabilities and ask them to solve a diverse set of problems, and they're not going to be as good as a bunch of specialists. And of course, those specialists can be fine-tuned and focused on particular categories. We talked about these MoE models a couple of months ago with Mixtral, M-I-X-T-R-A-L, from the company Mistral.

Speaker 1:

Mixtral is an 8x7B model: basically eight different submodels, each of which is 7 billion parameters, kind of like smaller models, essentially. It performs at a very high caliber, yet its inference cost, meaning its runtime cost, is very low, because it typically uses only two of the submodels at a time, and DBRX is very similar to that. It's rumored that GPT-4 also uses an MoE architecture, and they were probably the first commercial MoE example, but OpenAI doesn't tell you a whole lot about what goes on under the hood. To me, it's just more evidence that acceleration continues. More choice is good for consumers and will lead to more and more applications being built to serve the nonprofit sector, so I find it exciting.
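For anyone who wants to see the "only two submodels at a time" idea in code, here is a minimal, illustrative sketch of top-2 expert routing. The class name and dimensions are made up for this example; real systems like Mixtral add load-balancing losses and heavily optimized kernels on top of this basic pattern.

```python
import torch

class TopTwoMoE(torch.nn.Module):
    """Toy mixture-of-experts layer in the spirit of Mixtral's 8x7B:
    eight expert MLPs, but only the two with the highest router scores
    run for each token, which is why inference cost stays far below
    a dense model with the same total parameter count."""

    def __init__(self, dim=512, n_experts=8, top_k=2):
        super().__init__()
        self.router = torch.nn.Linear(dim, n_experts)  # scores each expert per token
        self.experts = torch.nn.ModuleList([
            torch.nn.Sequential(
                torch.nn.Linear(dim, 4 * dim),
                torch.nn.GELU(),
                torch.nn.Linear(4 * dim, dim),
            )
            for _ in range(n_experts)
        ])
        self.top_k = top_k

    def forward(self, x):  # x: (tokens, dim)
        scores = self.router(x)                            # (tokens, n_experts)
        weights, picked = scores.topk(self.top_k, dim=-1)  # keep the top-2 experts
        weights = weights.softmax(dim=-1)                  # normalize their weights
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = picked[:, slot] == e                # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out

layer = TopTwoMoE()
print(layer(torch.randn(4, 512)).shape)  # torch.Size([4, 512])
```

Note that every token still flows through the router, but only a quarter of the expert parameters do any work per token, which is the "specialists instead of one generalist" trade-off described above.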

Speaker 2:

Do you know if Gemini uses a mixture-of-experts architecture?

Speaker 1:

Yes. I know Gemini 1.5 Pro does, and I'm pretty sure Gemini 1.0 Ultra and Pro do as well, but I'm not 100% sure about that.

Speaker 2:

Well, looking into Grok 1.5, obviously I'm no AI expert, but I will say, even reading the article they put out on their website, it didn't seem all that impressive. It only outperformed GPT-4 and Gemini 1.5 on the HumanEval benchmark, which I looked up: it's a dataset designed to evaluate the code generation capabilities of large language models. It did show 100% recall in its 128,000-token context window, which I thought was pretty solid. But it didn't seem like it was going to be a life-changing model. Do you foresee Grok being a major competitor in the space, or would you say this is maybe a passion project of Elon's?

Speaker 1:

It is a passion project for Elon, but that's enough to make it material in the competitive landscape. We've got to remember this is an individual with an insane amount of resourcing, and not just financial. He controls Tesla, which has the world's largest and most capable three-dimensional model of the world: they have more moving video footage than anyone else on the planet, and there are tremendous AI capabilities at Tesla. Then he has Twitter, or X, which probably still has the best real-time information on what's happening around the planet at all times; they have an unparalleled amount of text coming in. Granted, a lot of it is garbage too, but there's a lot of content there. He controls both of those assets; he has the three-dimensional visual world and the text world, and the combination could be very, very interesting. He himself, obviously, is very capable and very determined.

Speaker 1:

This is the same guy who, after making millions at PayPal, literally threw every dollar he had into SpaceX, which we don't really talk about on this podcast. It's not super relevant to AI, but it's deeply an innovation story. His thesis was to radically reduce the cost of space travel, and the number one problem was that you threw away a rocket as soon as you used it. It's like saying, hey, Mallory, let's jump on a 747 and fly from here to Europe, and when we get there, let's throw it away and get another plane to fly back. That sounds absurd, but that's exactly what we'd been doing with rockets. He said, well, that doesn't make any sense, let's build reusable rockets. And lo and behold, he did. It took a lot of determination. Love him or hate him, the guy is determined, and he barrels through walls that would stop most people. He's even got a brain-computer interface company; there are a lot of moving parts. But remember, it's basically Elon Musk Inc., which means he's able to pull resources across the board, from Tesla to xAI to X, however he wants. There are some interesting corporate governance problems with that, since Tesla is a public company, but ultimately it is what it is. So there will be something competitive coming from Grok. I think Grok might have a shot at being an extraordinary multimodal model, possibly powering robotics, if he chooses to train it on some of the Tesla video data. Because, by the way, back at Tesla, he has a big robotics piece coming too.

Speaker 1:

They have their own humanoid robot project, I forget what they call it, which is quite compelling as well. So he'd have all these pieces going on in Musk's crazy universe. He's going to be a competitor; he has the resources to do it and he's going to compete. Then there's DBRX from Databricks. Databricks is not a company a lot of people know; they're an infrastructure company that powers a tremendous amount of machine learning. They quietly acquired a company called MosaicML a number of months back, which was doing some really interesting generative AI work. They're yet another example of someone with a lot of resources and skill coming after this problem.

Speaker 2:

When I saw DBRX was open source, I immediately looked up to see if Grok was open source, and I found an article from several weeks ago, I think just in March, that said they decided to go open source. Then there were some other articles in the mix saying it's not truly open source. I don't know if you have any insight on that, Amith, but what do you make of Elon's decision to go open source versus keeping it closed?

Speaker 1:

I don't know if you've read Walter Isaacson's book on Musk. If you haven't, I'd recommend it; it's fascinating, even for people who really dislike the guy. I like him and I also don't like him, for a lot of different reasons, but I admire what he's accomplished, and reading the book was fascinating because if you look at his history, the way that guy makes decisions is pretty haphazard-looking. I think there's perhaps a method to the madness, but he pursues what he believes are the right decisions and he doesn't really care about collateral damage at any point in time. That's really scary when someone has as much power as he does; things like Starlink and its geopolitical impact on Ukraine are pretty crazy, in my opinion. But the point I'd make is, when we look at someone like that with those kinds of resources, I think you're going to see a lot happen quickly, because he doesn't care what anyone thinks. The level of innovation that comes from that complete lack of care, you could call it a judgment flaw or whatever, but that's what he's doing, unlike some of these other companies that are much more measured.

Speaker 1:

There's this lawsuit right now between Elon and OpenAI, because he was the major financial backer of OpenAI when they got started, and he sued them, saying they've gone against what he invested in when they were a nonprofit. He's this big advocate for AI safety and talks about open source versus closed source, yet Grok initially was closed source. So, in very typical fashion, someone on Twitter tagged him and said, hey, what are you doing? You talk about open source, yet Grok isn't open source. And I think, like the next week, he said, oh yeah, let's make it open source.

Speaker 2:

That's what it seems like. I read that article about the lawsuit, and then I just started seeing these articles pop up saying, yeah, he just decided it is open source, or will be very soon. I guess you're right; that's how Elon kind of works.

Speaker 1:

That's how he plays the game. So we'll see what happens. When you have a player like that in the mix, it's harder to predict what's going to happen. You have a pretty good idea of what Microsoft's going to do and what Google's going to do at this point, but what Elon's going to do, who knows? I think it's just going to make things more competitive. As for his goals, we're not exactly sure what they are, but we know that AI safety is important to him, or so he says. He's putting out models that are not quite at the frontier of the field but may be soon, so we'll see how that affects how he approaches this now that he's in that seat.

Speaker 2:

And if you all want more information on that open source versus closed source debate, we covered it in depth in a previous episode; I'm not sure which number, but it was one of the earlier ones we did. Amith, my last question for the day. For AI beginners, we often say just get started: pick a tool, pick a model, try it out, see what works for you. But for the people who are more intermediate to advanced, maybe those listening to this podcast who are using AI every day, how do you recommend sorting through the many options out there? I myself fall into the trap of using ChatGPT pretty much all the time, and I think you've mentioned the same. I want to make sure that, as we get more and more options, we're being deliberate about our intention to try new things and experiment.

Speaker 1:

I think that's the most important point, what you just said: try new things and experiment. Have a budget, just like a financial budget but for time, where once a week you put an "experiment with AI" appointment on your calendar for 15 minutes, maybe every Friday afternoon. You do that as an exercise to wind down your day, and you just do something you have not done yet with AI. It can be something fun, like the music generation we talked about. Maybe you want to try Claude; if you haven't tried Claude 3, go play with that. I tend to do a lot of that in the natural flow of my work. I have a really cool schedule: I get to play with a lot of different technologies, working across a lot of teams and talking with a lot of organizations, so that experimentation is inherent to what I do. Most people don't have that level of flexibility, unfortunately. So I would say budget a meeting with yourself: put a recurring "new experiment with AI" appointment on your calendar every week for a certain amount of time, and go try new things. I tend to float between Claude and ChatGPT, and I have been working with Gemini 1.5 for a while; I think it's got interesting capabilities. It's just important to try them a lot. And that's one comment from what I'd call the end-user perspective.

Speaker 1:

Now, the other thing I would say, for those listening who are more on the technical side, if you're a developer or you're interested in going deeper: all of these companies have studios or playgrounds. OpenAI has a Playground, Gemini has Google AI Studio, and Claude has something similar. You can go in there and really go deeper and play with these models beyond the veneer of the ChatGPT-style interface. With those experiments you can control a lot of different things: you can choose different models, and you can test a whole bunch of different system prompts, which are kind of the prompt behind the prompt, where you can make the model take on different characteristics. So if you're a little bit more technical, I'd encourage you to check out those studios or playgrounds, because you can learn a lot. It's like popping the hood of the car, seeing what's going on in there, and learning a little bit about how the car drives. Mallory, didn't you play around with the OpenAI Playground at some point?

Speaker 2:

Yeah, I played around with that, I think way back in 2022. That was what Thomas Altman, who I've already mentioned on this podcast, showed me, and I think when we were working on creating blogs out of your book Ascend, we were working in the Playground. Anyone could really do it; it just looks a little more intimidating than ChatGPT.

Speaker 1:

Yeah, it's kind of like going into the cockpit of a jet, where there are all these controls and things. Maybe not quite as intimidating as that, but there are a lot more knobs and levers that you can pull and push, and that's how you learn more about how these models work. It's still very much an end-user experience; you're not coding, you're just using a different interface. It is called a playground because it's designed for developers to simulate what would happen if they wrote code that called the APIs. So you can work in the playground and say, oh, what would happen if I gave it this system prompt and this user prompt? How would it react? That's a very quick way of prototyping something and then saying, OK, I know it'll work this way; now I want a developer to create a program that calls the API over and over to do this.
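[Editor's note: for readers who want to see what the playground is simulating, here is a minimal sketch using the OpenAI Python SDK. The model name, prompts, and temperature value are illustrative placeholders, not anything discussed in the episode; the other providers' APIs follow a similar shape.]

```python
# A minimal sketch of what a playground session simulates: one API call
# with a system prompt (the "prompt behind the prompt") and a user prompt.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",  # swap models here, like the playground's model picker
    messages=[
        # The system prompt shapes the model's persona and constraints.
        {"role": "system", "content": "You are a concise analyst for an association."},
        # The user prompt is what a ChatGPT-style interface normally exposes.
        {"role": "user", "content": "Summarize the main trends in AI music generation."},
    ],
    temperature=0.7,  # one of the "knobs and levers" the playground exposes
)

print(response.choices[0].message.content)
```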

Speaker 1:

A good example would be something like taxonomies. Say I have a million documents and I want to create a taxonomy around them; we've talked about that here and in blogs a lot. How would I do that? Well, I'd pick a model, or I'd experiment with a few: I'd go to Claude, I'd go to OpenAI, I'd bring in some of my documents, test a system prompt, and try out different user prompts. I'd get to the point where I have an idea of which models produce which kinds of output, and I'd get that to work. And this is as a non-developer, someone moderately technical. Then, if I wanted to automate it across a million documents, I would hire a developer.

Speaker 1:

There are lots of people you can hire: you can go on Upwork to get freelancers, you can hire companies, or you might have in-house IT. You can say, hey, developer, this is the set of prompts we're going to use; I want you to write a program in JavaScript or Python or whatever that uses these prompts. So you can separate the programmer from the AI architect, if you will, and that becomes a really interesting skill set for people to pick up, because you don't need to be a programmer to learn how these models behave. And that's one of the best ways to think about it: maybe we can plug in DBRX, maybe we can plug in Llama 2, or maybe we can plug in Mixtral. You have all these tools on your workbench, and the question is which one you pull off the shelf, depending on the circumstance. It's a little bit complicated, but it's an amazing time to be thinking about building stuff.
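[Editor's note: as a hedged sketch of what that hand-off to a developer might look like, here is the taxonomy idea as a simple loop. The prompts, category names, model name, and documents are all hypothetical; many open models such as DBRX, Llama 2, or Mixtral are commonly served behind OpenAI-compatible endpoints, in which case swapping models is mostly a matter of changing the base URL and model name.]

```python
# A sketch of automating the playground taxonomy experiment: the same
# system/user prompts the "AI architect" settled on, called once per
# document via the API. All prompts, categories, and documents here
# are hypothetical examples.
from openai import OpenAI

# Point this at any OpenAI-compatible endpoint to swap in a different
# model, e.g. OpenAI(base_url="https://your-host/v1", api_key="...").
client = OpenAI()

SYSTEM_PROMPT = (
    "You assign documents to a taxonomy. "
    "Reply with exactly one category: Advocacy, Education, Events, or Research."
)

documents = [
    "Our annual conference registration opens next month...",
    "New continuing-education credits are available for members...",
]

for doc in documents:
    response = client.chat.completions.create(
        model="gpt-4",  # the AI architect's choice; the loop doesn't care
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"Categorize this document:\n\n{doc}"},
        ],
    )
    print(response.choices[0].message.content)
```

This is the separation the episode describes: the prompts are decided in the playground, and the program is just the repetitive plumbing around them.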

Speaker 2:

For this week's episode, normally I use ChatGPT to do the research and the summaries for the topics we discuss, but earlier this week my uncle was telling me about Perplexity AI. I've used Perplexity before, but not in probably six to eight months, so I decided to use it this week, and it was awesome. I don't know if they've improved it, but it was so efficient. I'd even say I increased my productivity over ChatGPT by using Perplexity this time, and I thought, OK, I've got to remind myself to stop getting so locked into what I do every day. I love the idea, too, Amith, of the recurring 15 minutes on your calendar. Anyone can find 15 minutes to do some experimenting.

Speaker 1:

Yeah. If Google could write a blank check to Perplexity and absorb them without the FTC and others getting on them, and if the company wanted to sell, I think they would do it in a heartbeat. I wouldn't be surprised if they got acquired in the next six months. They really have some innovative approaches to blending search with generative results, and a lot of people are big, big fans of Perplexity. I haven't used it a ton; I've played with it on and off here and there. But it's a great reminder to go check these tools out if you haven't been in there in a while. Some people tell me they've tried ChatGPT and weren't very impressed, yet they might have tried it a year ago, and they were probably on the free tier instead of the paid version. In the paid version you get GPT-4, and in the free version you get GPT-3.5. It's a totally different experience. It's like riding around on a lawnmower versus driving a Maserati: they both have engines and four wheels, but they do very different things.

Speaker 2:

I love your analogies, Amith. Well, thank you for the conversation today. Hopefully we'll get to create more songs in the future for Sidecar.

Speaker 1:

Sounds awesome. Thanks, Mallory. Thanks for tuning into Sidecar Sync this week. Looking to dive deeper? Download your free copy of our new book, Ascend: Unlocking the Power of AI for Associations, at ascendbook.org. It's packed with insights to power your association's journey with AI. And remember, Sidecar is here with more resources, from webinars to boot camps, to help you stay ahead in the association world. We'll catch you in the next episode. Until then, keep learning, keep growing and keep disrupting.