The Startup Ideas Podcast

DeepSeek R1 - Everything you need to know

Wed, 29 Jan 2025

Description

Greg's DeepSeek Cheatsheet: From Installation to Expert Prompting: https://www.gregisenberg.com/deepseek

Ray Fernando, a former Apple engineer, gives an in-depth tutorial on DeepSeek AI and local model implementation. The conversation includes a detailed guide to setting up local AI environments using Docker and OpenWebUI, and shows how to run DeepSeek on mobile using the Apollo app.

Key Points:
• Comprehensive overview of DeepSeek AI and its reasoning capabilities
• Discussion of data privacy concerns when using Chinese-hosted models
• Detailed tutorial on running AI models locally using Docker and OpenWebUI
• Comparison of different AI model providers (Fireworks, Groq, OpenRouter)
• Mobile implementation of AI models using the Apollo app

Timestamps:
00:00 - Introduction
02:34 - Overview of DeepSeek
05:29 - Data privacy concerns with Chinese-hosted models
08:02 - Running models locally with OpenWebUI
10:06 - Comparing hosting providers: Fireworks and Groq
16:54 - Cost comparison of using/running AI models
18:35 - Improving prompts for better outputs
22:41 - Running AI models locally: a step-by-step guide
37:09 - Mobile AI implementation discussion
45:27 - Future implications and closing thoughts

1) DeepSeek's R1 model - what's the big deal?
• On par with ChatGPT's reasoning capabilities
• Open source and free to use
• BUT hosted in China (data privacy concerns)
• Incredible for analysis and content generation

2) Alternative ways to use DeepSeek:
• Fireworks AI ($8/million tokens)
• OpenRouter
• Groq API
• Local hosting (safest for sensitive data)

3) Running AI models locally, step-by-step setup:
• Install Docker
• Use OpenWebUI
• Download models via Ollama
• Configure API connections
Pro tip: takes 5 minutes to set up, saves hours of worry about data privacy

4) Mobile AI is HERE!
• Apollo app lets you run models locally on your phone
• Download smaller, optimized models
• Works offline
• Perfect for quick analysis on the go

5) Temperature settings explained:
High temp (0.8-1.0) = creative mode
Low temp (0-0.3) = logical mode
Choose based on your needs!

6) Future predictions:
• Watch-based AI coming soon
• Emergency response applications
• Real-time negotiation assistance
• Advanced audio analysis

7) Getting started tips:
• Start with public data on DeepSeek
• Experiment with different models
• Use temperature settings wisely
• Focus on practical use cases

Notable Quotes:
"We're in this new DeepSeek world where if you figure out the model that works for you and the tasks that you want to accomplish, you might be able to out-compete whoever you're competing against." - Greg
"Please don't be fearful or don't feel like you're left behind. If you're just finding out about this, you're not that far behind. We're all actually still trying to understand what this intelligence can give us." - Ray Fernando

LCA helps Fortune 500s and fast-growing startups build their future - from Warner Music to Fortnite to Dropbox. We turn 'what if' into reality with AI, apps, and next-gen products: https://latecheckout.agency/
BoringAds - ads agency that will build you profitable ad campaigns: http://boringads.com/
BoringMarketing - SEO agency and tools to get your organic customers: http://boringmarketing.com/
Startup Empire - a membership for builders who want to build cash-flowing businesses: https://www.startupempire.co

FIND ME ON SOCIAL
X/Twitter: https://twitter.com/gregisenberg
Instagram: https://instagram.com/gregisenberg/
LinkedIn: https://www.linkedin.com/in/gisenberg/

FIND RAY ON SOCIAL
X/Twitter: https://x.com/RayFernando1337
YouTube: https://www.youtube.com/@RayFernando1337
Ray's Website: https://www.rayfernando.ai

Transcription

0.449 - 17.217 Greg Isenberg

Ray Fernando on the pod. He's a 12-year ex-Apple engineer. He streams AI coding. He's building an AI startup in real time. I needed to have you on because what are we going to talk about today, Ray?

18.482 - 35.673 Ray Fernando

Today, we're going to talk about prompting. And we're going to be specifically prompting with the new reasoning models, with DeepSeek R1. And there are a lot of caveats with these models, because they're now able to think and reason. And that can even lead to superhuman capabilities.

36.574 - 60.235 Ray Fernando

And so what does that mean? These models have now become so advanced, and this specific one from DeepSeek is out of China. What that allows you to do is, basically, they've made it open source so that it's available for us to study. But it's apparently also on par with ChatGPT's O1 reasoning models.

61.076 - 95.834 Ray Fernando

And why it's taken the world by storm is because it's also free to use on their website, deepseek.com. And when we say free, there's also a little bit of a caveat if you don't really know.

95.954 - 114.962 Ray Fernando

So I just want to also cover a little bit of the architecture today and explain what you're going to get into if you use something like DeepSeek, and then maybe how you can also run this in a container hosted in North America or some other region, because your data is really important, especially if you're doing anything for business.

115.782 - 132.63 Ray Fernando

And then also the third kind of secret bonus there would be how to actually run this locally on your machine so you can get the capability of these models and run that locally for your own private businesses, whether you're a lawyer, you're a doctor or whatever. There's a lot of different implications that you probably want to look into.

132.67 - 154.08 Ray Fernando

So I think that this episode is going to be super helpful even if you're just beginning and don't really know some of the advanced stuff or know code. That's OK. It just takes using English to describe these things to get the output and the intelligence of these models to do some really cool stuff. So I'm pretty excited. All right, let's get into it. Cool. Excellent.

154.441 - 177.459 Ray Fernando

To start out, to use these models, you have a couple of options. And one is going directly to deepseek.com. And this is actually currently hosted in China. So a little bit of background here is that your computer is here. Like, for example, I'm in North America. And if I go to deepseek.com or download the app from the App Store, the app will actually be talking to a region over in China.

178.158 - 198.192 Ray Fernando

And for what it's worth, whenever you send your data over to another country, they have their own rules and laws and regulations. So I would be very careful about anything you put into the system, especially if you have any sensitive data, because it would not belong to a region that you live in or have control in.

198.853 - 216.44 Ray Fernando

There are other alternatives which we're going to cover, which would be using a web UI and going to these different API providers like Fireworks or Groq. And then we're also going to cover running something locally on your machine so it doesn't go out to any of these providers. And you can even run this if you're flying on a plane, which is really exciting.

217.38 - 238.246 Ray Fernando

So as an example for DeepSeek, we're just going to do this because this is something that's currently public information and I don't really mind having this stuff sent out. So as far as prompting, one thing that I frequently do is I have a live stream and I basically transcribe my videos and stuff. So I basically, you know, I made a little app that will transcribe videos.

238.726 - 253.739 Ray Fernando

And I just take my live stream here and just run it through the transcriber there. And what it will do is just generate transcripts from the video. And it usually does it pretty fast. It processes on my device and then it sends it up to Groq for the endpoint. So when it's done, it looks basically something like this.

255.049 - 275.944 Ray Fernando

And you're able to copy this transcript and put it into something like DeepSeek if you wanted to. And so in order for you to use the model, what you can do on DeepSeek.com is just go ahead and click where it says DeepThink. When it turns blue, that means DeepThink is enabled. If you wanted to enable web search, you can do that too.

276.004 - 288.547 Ray Fernando

So we can probably do that for the next prompt here. So I paste in my transcript here, I hit shift and I hit enter a couple of times. And so what we can do is give it additional instructions for it to do what we want to do.

288.727 - 307.852 Ray Fernando

And one of the things that I like to do is I actually have built a little prompt that I'll actually share with y'all so that you can actually do some analysis and generate a blog post of a transcript. And so that is actually located here in my little Notion thing. And so one of the things I have is we're going to cover how to do some of these prompts and stuff.

307.912 - 327.22 Ray Fernando

And I'll actually show you how you can generate some of these advanced chaining prompts because this will really take advantage of these models to think through all of that text and do some work on our behalf. So this is really, really cool. It's basically like hiring an admin to go through all of your stuff and make things for you. So we're going to go ahead and hit submit.

328.194 - 356.814 Greg Isenberg

And when we do that... I will say, I wouldn't put a tax return on deepseek.com. It's not the type of thing I would put on. So you do want to be a bit wary of what you're putting on when you're on deepseek.com. Now, I was playing with Perplexity earlier, and Perplexity actually has some of these models built in, but it's hosted in the United States of America. So that's a bit different.

357.938 - 378.788 Ray Fernando

That's correct. And you may want to ask your app providers what they do. One of my favorite apps for coding is actually Cursor. And I asked them, hey, where do you have your DeepSeek model hosted? And they told me they use the Fireworks API. And that's actually not in China. So that's great. So it's like, OK, cool. That's awesome. And they're using the full model.

379.508 - 399.629 Greg Isenberg

Quick break in the pod to tell you a little bit about Startup Empire. So Startup Empire is my private membership where it's a bunch of people like me, like you, who want to build out their startup ideas. Now they're looking for content to help accelerate that. They're looking for potential co-founders.

399.669 - 417.857 Greg Isenberg

They're looking for tutorials from people like me to come in and tell them, how do you do email marketing? How do you build an audience? How do you go viral on Twitter? All these different things. That's exactly what Startup Empire is. And it's for people who want to start a startup but are looking for ideas.

418.517 - 429.019 Greg Isenberg

Or it's for people who have a startup, but just they're not seeing the traction that they need. So you can check out the link to StartupEmpire.co in the description.

429.719 - 446.622 Ray Fernando

So these models have these parameters that you may hear of. And like, you know, like the really large parameter model, like 600 billion plus parameters, just means that it has more intelligence to leverage. And it tends to take longer in its thinking. But the results are really, really, really great.

447.842 - 463.348 Ray Fernando

And some of the models that we'll run locally on the machine a little bit later are going to be distilled. So you basically take the essence of the big model, and those models run a lot faster and they're efficient, but they may not think as long or give the same quality of results.

463.548 - 486.206 Ray Fernando

And it's really up to you to try them out, which I highly encourage. So one of the problems is that sometimes the server is really busy. And that can happen because right now it's so popular. And after the publishing of this video, it'll probably be even more popular. So you can hit this little pencil and hit send again to try to resend it. And so that's kind of where I thought, well,

487.066 - 505.702 Ray Fernando

If I'm sending this over and there's a bunch of reliability issues, why don't I try to do something like, you know, that I can host my own or just hit the API themselves? If Cursor is doing that, why can't I do it as well? So I can actually show you a technique for how to do this so you can hit the API and so you don't send your data to China.

506.382 - 526.574 Ray Fernando

And that actually involves using this thing called Open WebUI. So I can show you that. So while this is thinking, if it even returns the results or does anything here, we're going to go ahead and pop on over to the other side. So on the other side, I have an instance of what's called Open WebUI, and it looks very similar to ChatGPT.

527.154 - 545.782 Ray Fernando

And to get this set up, I'll go through a little bit more detail later, but I'll just go ahead and show you what this looks like. So in here, I have the model selected. I can go to DeepSeek. And so what's great is that you can connect to an API provider, and I'm using Fireworks AI. So Fireworks AI here is currently hosting the DeepSeek model.

546.282 - 568.666 Ray Fernando

And they allow you to use the model just by getting an API key and then putting in the exact model string and so forth here. And so from here, if I go to the Open WebUI, I'm able to select it and say, OK, this is my DeepSeek model. I'm going to go ahead and just paste the exact prompt that we had, paste it in with my transcript and everything here.
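
[Editor's note: the Open WebUI connection described here talks to Fireworks' OpenAI-compatible API. A minimal sketch of the same call from the terminal; the endpoint URL and model string follow Fireworks' conventions at the time but are assumptions to verify against their docs, and FIREWORKS_API_KEY is your own key.]

```shell
# Sketch: DeepSeek R1 on Fireworks' OpenAI-compatible chat endpoint.
# The model string "accounts/fireworks/models/deepseek-r1" is an assumption;
# copy the exact id from your Fireworks dashboard.
curl https://api.fireworks.ai/inference/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $FIREWORKS_API_KEY" \
  -d '{
    "model": "accounts/fireworks/models/deepseek-r1",
    "messages": [{"role": "user", "content": "Summarize this transcript: ..."}]
  }'
```

In Open WebUI, the same three pieces - base URL (https://api.fireworks.ai/inference/v1), API key, and model string - go into the connections settings.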

569.395 - 589.244 Ray Fernando

And I should be able to get everything out. So let me just double check here that I got everything. So it's still timing out here. Yeah, server is busy. Try again later. So yeah, that's not fun. So what I'm going to go ahead and do is scroll to the top and hit this little copy button and then go over here. Just make sure I put everything in there. I have the whole transcript.

589.304 - 605.151 Ray Fernando

Yeah, so the whole transcript here. And so what's going to happen here is when I hit send, it's going to send this off to Fireworks AI. And what's great about this thing that's actually running in this open web UI is that it's using the API and it's not actually sending the data to China.

605.651 - 625.541 Ray Fernando

So just while it's doing its thinking here and showing us what's going on, I'm going to go back and overlay this diagram of our data and our container, and kind of show you what this looks like in the background. So here in TLDraw, we actually have our Mac and PC. And so I'm using Open WebUI here and I'm actually using the Fireworks API.

625.702 - 641.814 Ray Fernando

So I'm going to the cloud, and this cloud is located in North America. So the data basically resides here in North America, and it's going to be delivered back to my device. So that's what we're doing. When we were on the DeepSeek website, we were going out to the China region. So that's just a heads up of how that's working in the background.

642.335 - 656.145 Ray Fernando

And so next I'll show you the difference of what's going to happen, like the speed difference, with what the Groq hosting provider provides. And then a little bit later, we'll get into the details of how to get set up. So as you see, it's kind of outputting this stuff here.

657.119 - 676.207 Ray Fernando

Because these models are still so new, these web apps are still adjusting to take a look at the reasoning stuff. And so what I'm going to go ahead and do is hit this little pencil up here to the very top. I'm going to make this a little bit bigger so you can see. And then from here, when you hit this little pencil, it creates a new chat. At the very top, you can select the model dropdown.

676.267 - 700.349 Ray Fernando

I'm going to type in DeepSeek. And the one that I have set up from Groq is called the Distilled Llama 70B. And so this model is actually a smaller distilled version that they're hosting, but it's incredibly fast. And so if we hit this here, it seems nearly instant by the time all the stuff starts finishing. So we'll see this model actually going out.

700.449 - 717.02 Ray Fernando

It's doing its thinking and now it's actually providing the response, just like that. Super fast. So if we take a look, this has thought for a few seconds and actually shows us the reasoning that was going on. So this is actually going through my transcript, trying to really understand what was going on with it.
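
[Editor's note: Groq also exposes this distilled model through an OpenAI-compatible endpoint. A sketch of the equivalent call; the model id `deepseek-r1-distill-llama-70b` is the name Groq used around this episode's airing, so check Groq's current model list before relying on it.]

```shell
# Sketch: the distilled R1 (Llama 70B distill) on Groq's OpenAI-compatible API.
# GROQ_API_KEY is your own key; the model id is an assumption to verify.
curl https://api.groq.com/openai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $GROQ_API_KEY" \
  -d '{
    "model": "deepseek-r1-distill-llama-70b",
    "messages": [{"role": "user", "content": "Turn this transcript into a short blog post: ..."}]
  }'
```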

717.22 - 736.357 Ray Fernando

This was an interview with LDJ, who spoke a lot about DeepSeek and technical details of things that, you know, I really couldn't remember. And so then it basically makes this very simple blog post. So we'll see if my other one, the one that was running the larger model, finished. And you can see the difference between the two models.

736.377 - 749.563 Ray Fernando

The distilled model is just going to give us a small little blog post, versus the full model running on the Fireworks API, which is actually giving us quite a bit of detail. It's going to take more time, but take a look at what it's doing right now.

750.123 - 778.582 Ray Fernando

So above here was all this thinking stuff, but this is actually now doing an analysis on my transcript and generating a really nice blog post. And so it's telling us about the calculations that LDJ talked about in the stream, the geopolitical implications of what's going on with the new AI arms race, and also future predictions. We talked about these details in the live stream, and it literally picked them up and is now creating a graph from this. This is how crazy these models are, if you really think about it.

779.442 - 794.188 Ray Fernando

And here are some key takeaways as well. So that's an amazing thing. And I'll be able to share these prompts with you so that you can actually run these analyses on your own transcripts as well. Yeah. So here's the SEO enhancements and final thoughts. Yeah.

794.688 - 818.069 Greg Isenberg

So when you're in business or you're building a startup, having an unfair advantage is so important, right? Like being super efficient and keeping your costs low, creating your product to be the best possible thing. Now we're in this new DeepSeek world where the model, I call it a DeepSeek world, but it's really, it's a Llama world. It's a DeepSeek world.

818.73 - 847.652 Greg Isenberg

It's a world where if you figure out the model that works for you and the tasks that you want to accomplish, you might be able to out-compete whoever you're competing against. Now, I've done a similar prompt on ChatGPT with some of my YouTube transcripts. And it's not unusable, but it's more of a thought starter. It's like, oh, okay, I can take...

849.056 - 876.671 Greg Isenberg

most of this and I can rejig this and add this and add that and probably get to a blog post that is good. But it does require a lot of human energy to go and do this. When I see what's coming out of this, what's really, really mind-boggling is the fact that, just quickly scanning this, this looks pretty human-level. Incredible.

877.192 - 879.293 Greg Isenberg

Like a senior writer would do something like this.

880.414 - 899.484 Ray Fernando

Yeah. Or a research engineer that you hire to really thoughtfully take a lot of notes, spend a lot of time analyzing and like put together how you would want to report. And it's even more incredible because these instructions can be configured. So if you want like a graph or you want a type of thing,

900.231 - 914.32 Ray Fernando

we can take this prompt and put it into DeepSeek itself to say, can you give me this type of output instead? And it'll do that for us. It's like, how do we improve the prompt? Or what do you want to see from your outputs all the time from your live streams?

915.401 - 934.812 Ray Fernando

And I think the thing that I've seen, and this is kind of the biggest breakthrough that's happening, is that I'm seeing this also with O1 Pro, by the way. O1 Pro and the DeepSeek reasoning models, these reasoning models spend extra time and actually pay attention to your instructions. And so every little detail that they're seeing, they're like, oh yeah, I haven't done that yet.

935.152 - 956.54 Ray Fernando

Okay, let me go ahead and make sure I still do that. And that's something that I super deeply appreciate. And for me, it's worth the extra 200 bucks I pay a month to OpenAI. But this is really quickly turning my head. Like, oh my goodness, did you understand what just happened here? I'm still a little taken aback by this output.

956.56 - 981.065 Ray Fernando

Like you're saying, it's very detailed. And to me, I feel like this is totally a game changer. And I think one thing that people aren't really talking about right now is this additional rush to understand who can host this. In order to host these huge 600-plus-billion-parameter models, you need all those GPUs. You need services like Fireworks. Groq is trying to spin that up.

981.325 - 1000.989 Ray Fernando

Groq was able to get a distilled model. There's just so much demand. There's going to be even more demand for these chips. And yeah, this is just the beginning, and I'm trying to figure out which provider can host this for me reliably so that I can do this for myself, but also share back and put this into apps for other people as well, because this is going to be great.

1001.089 - 1013.855 Ray Fernando

And I don't want the data to go to China. I just want the data to stay in North America. Or if I get a European container, I can do the European container and meet all their legal requirements that need to happen for that as well. So that's super exciting. Yeah.

1014.275 - 1017.697 Greg Isenberg

What's the cost, the pricing for Fireworks?

1019.028 - 1038.777 Ray Fernando

Yeah, the pricing for Fireworks, we can look this up real quick. I think it's about eight dollars per million tokens, where normally I think ChatGPT was like 15 dollars for input and 60 dollars for output for O1. I can just double check that real quick. So pricing. Let's see.

1042.259 - 1051.423 Greg Isenberg

Yeah, from what I remember, it's cheap. It's like significantly cheaper. Mm-hmm. Exactly. Than O1 Pro.

1052.224 - 1060.13 Ray Fernando

Yeah. So O1 Pro, or the O1 API cost. Yeah. And the pricing.

1063.65 - 1083.892 Greg Isenberg

And this is going to add up, right? You might be like, oh yeah, who cares? A million tokens. But once you add this to your workflow and you're pumping out content or you're doing research on an ongoing basis or you've built a business around how to do this, these tokens will add up.
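
[Editor's note: to put rough numbers on "these tokens will add up", here is a back-of-the-envelope comparison using the rates quoted in the episode ($8/million tokens on Fireworks versus roughly $15/million input and $60/million output for O1). The 10M input / 2M output monthly volume is an illustrative assumption, not a figure from the episode.]

```shell
# Rough monthly cost at 10M input + 2M output tokens (illustrative volumes).
# Rates are the ones quoted in the episode; check current pricing pages.
awk 'BEGIN {
  in_m = 10; out_m = 2                      # tokens, in millions
  fireworks = (in_m + out_m) * 8            # flat $8 per million tokens
  o1 = in_m * 15 + out_m * 60               # $15/M input, $60/M output
  printf "Fireworks: $%d\n", fireworks      # $96
  printf "O1:        $%d\n", o1             # $270
}'
```

At these assumed volumes the flat Fireworks rate works out to roughly a third of the O1 cost, which is the gap being described here.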

1085.1 - 1103.759 Ray Fernando

That's exactly it. Yeah, they'll add up. Also, at the same time, OpenAI has currently promised that the O3 model will come out and the O3-mini model will come out, which would be on par with this model. So prices will also probably significantly drop as well, because they just get more efficient with time. And so that'll be really interesting to see.

1104.28 - 1126.14 Ray Fernando

And, you know, I'm rooting for it because for me as a consumer, I want the power of all the intelligence. And to do these types of things, I think it's going to be pretty important. And to that note, I think it'd be interesting to show, if anyone hasn't found out about this, this thing on OpenAI's site called platform.openai.com.

1126.701 - 1143.573 Ray Fernando

So you just sign up for a developer account. It's a little playground. And what you can do is actually hit this little generate star button. And so we can describe a prompt that we want for any type of model. And what it will do is reconfigure the prompt for the language model so that it can actually be more efficient at doing something.

1144.093 - 1172.043 Ray Fernando

So if there's a task that you do quite a bit, you're like, I just want, you know, please make keywords for my Amazon listings, and then hit generate. What it'll do, basically, whoops, sorry, if we hit generate here and hit this at the very top right and hit update, what it will do is actually reconfigure this prompt from a one-line type of thing to include more details. So

1172.936 - 1193.073 Ray Fernando

we can take existing prompts and try to improve them through this mechanism. So as you can see, this is how people get really nice long chains of thought or reasoning or outputs. So one way to think about these things is to first put down what instructions you want, what type of output you really expect, and maybe what you don't want, as a good starting point.

1193.313 - 1215.921 Ray Fernando

And then that'll help you generate prompts that can be a little bit useful. A lot of the prompts that you're seeing me spit out are basically things that have come out over time because of my use cases. It's like, okay, I want this instead of that. And so as an example, one of the things that I was thinking about was, how do you verify claims for a specific type of thing?

1216.081 - 1239.178 Ray Fernando

So if you have an article, how do you understand if something is actually true or not? So I have one here. This is information verification. And so one thing about these models that I was currently showing you is that Fireworks and Groq are specific API endpoints. And right now there isn't a specific web search thing that's currently tuned into them.

1239.258 - 1258.027 Ray Fernando

So if you want to do web search, you have to go through the deepseek.com route or the app. And keep in mind, you're also sending data into this container there. So sometimes you could just use it for public articles for things that you really don't care about. So if you're on DeepSeek, let's just see if they actually have stuff available here for us.

1258.527 - 1279.765 Ray Fernando

So you just go to the search thing, turn that on. I paste my prompt in and then what I'm going to go ahead and do is like maybe grab an article like the Techno Optimist Manifesto from Marc Andreessen. It's a very popular article and sometimes, you know, it's really long to read. There's like a lot of information here. And you're like, oh, my God, there's like so much in there.

1279.845 - 1298.737 Ray Fernando

It's like, how can I even get started with this thing? And how do I even verify the claims of this stuff? And I think this is probably sometimes like a good thing to start here. So what I do is I hit shift and enter and I put the article at the very top. And once I do that, I just go ahead and paste that in there and then just go ahead and hit send.

1299.304 - 1315.496 Ray Fernando

So what that's going to do for us is going to use the web search and try to look through, just like how we saw earlier that was happening with the API, every type of claim that's in there, try to see if they can search the internet for it and try to see if it can do anything. So this is going to try to do its thinking thing.

1316.517 - 1333.246 Ray Fernando

This is obviously very popular and DeepSeek, the website, is getting flooded with people because of basically it being free. And so, yeah, just keep in mind, like Greg says, like, yeah, I would not be putting taxes in there. I probably wouldn't be putting medical records in there.

1334.486 - 1354.612 Ray Fernando

Things that, you know, you don't want to see generated if somebody asks a question that's related to you, because that can be a little crazy. You'll be like, wow, all of a sudden my data is showing up somewhere I was not expecting it. So yeah, this is currently erroring out. It's no surprise, and this is kind of why we were thinking about doing some other alternatives here.

1355.132 - 1370.262 Ray Fernando

So yeah, I think that was set there. So I think another thing maybe that could be useful here is probably getting this stuff set up. So if you wanted to run this locally, maybe we can kind of go over that workflow. What do you think?

1371.783 - 1375.445 Greg Isenberg

I would love to. Yeah, I mean, selfishly, I would love to know that.

1376.266 - 1400.06 Ray Fernando

OK, OK, cool. Awesome. Let's do that. Yeah, I think that that'll be great. So for this section, in order for us to run this model locally, the best interface that I found, bar none, is something called Open WebUI. And it's really quick to get started. All you need is to download Docker. So just go to docker.com, and you can just download the desktop app.

1400.28 - 1417.768 Ray Fernando

So download the one that you need for your machine. If you're running an Apple Silicon machine, which I am, you just download the Apple Silicon version, and so on accordingly. So once you get that installed, it's going to present to you a user interface, like a dashboard here. And so that's kind of something that you'll have going.

1417.908 - 1436.245 Ray Fernando

Mine is already showing the app is running here, and that's how you know it's installed. It will require the terminal, but it really won't hurt you too bad. So the first command that you will basically run is this one that's listed on their quick start. So the quick start will be listed here, and we'll have this available as a guide for those who want to download it.

1437.125 - 1455.057 Ray Fernando

And all you have to do is follow these two steps, really. So the first step is to pull the container. You just copy this, put it into the terminal, and it's going to do this little pulling thing and probably download several gigabytes of files onto your machine. And then the next step is literally what they call running the container.

1455.097 - 1473.632 Ray Fernando

So with Docker, the whole app and everything is all contained in one. That way you don't have to spend a bunch of time doing extra terminal things. These are probably the only two terminal commands that you will run. If you're running a PC, especially with NVIDIA, you'll want to run the command with --gpus all. So all you have to do is just copy this one if you're running NVIDIA.

1474.233 - 1489.697 Ray Fernando

And that'll take advantage of your GPU. And it'll run more efficiently when you're running it locally. So the one I like to do is just a single user mode, which doesn't require sign in. That way, if you're the only one that's using it at your house or on your network, that's probably the best way to do it.

1490.398 - 1513.127 Ray Fernando

So you just copy this command here and then you put it into the terminal, and it'll say, hey, you know, great, it's up and running. And then all you have to do now is just go to localhost:3000 in your browser. And once you're running on localhost:3000, you're going to be presented with a user interface like this. And I have a model that's currently loaded here.
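
[Editor's note: the two terminal steps described here look roughly like the following. The image names, tags, and flags follow Open WebUI's quick start at the time of the episode; verify them against the current docs before running.]

```shell
# Step 1: pull the Open WebUI container image
docker pull ghcr.io/open-webui/open-webui:main

# Step 2: run it in single-user mode (WEBUI_AUTH=False skips sign-in),
# serving the UI on localhost:3000 and persisting data in a named volume
docker run -d -p 3000:8080 \
  -e WEBUI_AUTH=False \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main

# On a PC with an NVIDIA GPU, add --gpus all and use the :cuda image tag:
# docker run -d -p 3000:8080 --gpus all ... ghcr.io/open-webui/open-webui:cuda
```

Then open http://localhost:3000 in a browser.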

1513.167 - 1535.776 Ray Fernando

That's kind of why it's showing us here. But you're not going to have any models loaded. So for the next step here, we actually have a couple of options. One of the things that I do is you can just download a model locally. And I use a thing called Ollama. And so ollama.com is something you'll want to download there. And so that way you can run any local model.

1535.896 - 1550.822 Ray Fernando

And it's as simple as just, you know, finding the model and so forth. So once you hit download, it's going to download for your machine and you install it. Once it's downloaded, you'll actually see this little llama guy that's at the very top, a little figure there. And that's how you know it's currently running.
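On the command line, the Ollama side of this is just a couple of commands (model tags may differ over time; check ollama.com for the current names):

```shell
# Download a model (the default tag shows up as deepseek-r1:latest)
ollama pull deepseek-r1

# List what's installed locally
ollama list

# Chat with it straight from the terminal
ollama run deepseek-r1 "Explain options trading"
```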

1551.722 - 1572.576 Ray Fernando

And so once that's currently running there, you'll see the models that are currently listed here in the model section at the very top. And so DeepSeek R1 is going to be the one that we want to use here. So we'll go back to our web UI instance, and then we're going to go ahead and hit where it says user at the bottom. From there, we're going to go to the admin panel.

1573.436 - 1592.306 Ray Fernando

And from the admin panel, there's like a section, a settings area. And so this settings area has an area of our connections with a little cloud icon. And this is kind of where we're going to connect our other providers here. So let me make this a little bit bigger so that everyone can see. So as you can see, the Ollama API is already configured for us, which is nice.

1593.086 - 1617.022 Ray Fernando

And this is already going to have the Docker container there, which is great. And so when you hit the little pencil here and you hit plus, change the model, you can type in the model like DeepSeek. And if you don't see it available, which it may not be there, you'll see this option at the bottom that says pull deepseek-r1 from ollama.com. And so that'll actually search Ollama here to get it for you.

1617.542 - 1638.55 Ray Fernando

So like, for example, if we wanted to download the Phi-4 model, I'll just type that one in just as an example. So you can see phi4. So I don't have that model currently downloaded. I can just hit here and it's going to go ahead and find it and it downloads it. So in no time, basically this model would just be downloaded on my machine.

1638.931 - 1664.528 Ray Fernando

And then I could just type in phi4, and I'll be able to use that in the future going forward. So what we can do is go ahead and type in, you know, we're starting a new chat and we're going to basically select the model deepseek-r1. And you'll see it'll be :latest is kind of what it's listed at. And that's actually how you know that that's the one that's currently running locally.

1665.249 - 1682.798 Ray Fernando

And so once you select that one, you can just say something like explain options trading and then go ahead and hit enter. And so what this does is, you know, it's basically thinking, and you can see the thinking tokens of what's going on when it's thinking. And so all of this is actually running on my computer, which is amazing.

1683.887 - 1706.018 Ray Fernando

One of the ways that I can tell is there's this command line tool called asitop. And it actually shows us all of the resources that it's eating up. Thankfully, I have 128 gigabytes on my machine because I do live streams. I do all this stuff at the same time. And you can kind of see how much RAM it takes up right now with me hosting the stream plus running this model locally.

1706.878 - 1723.349 Ray Fernando

So yeah, this is actually what it does here. One of the things that we could even do is try to test the prompts that we were using earlier so that we can run this command locally. So earlier, what we did was we were running like a whole analysis on something and it would just fail out.

1723.449 - 1743.805 Ray Fernando

So this thoughtful analysis that I was showing you, we can try to see if we can run this on a local model and just see the difference as well. So this is basically the transcript that I had earlier, plus the analysis stuff. And if I go to Open WebUI and then just go back here and create a new chat, and hit paste, and then hit run.

1744.366 - 1769.2 Ray Fernando

So you're going to see it's thinking here, and it's using up all the resources on my local machine to run this model. And it's quite a lot of tokens. And it's still fairly impressive what a smaller model can do that's running on my machine. And you'll have different versions that you can use. And so this one is using the 7 billion parameter model. If you get something that's a little bit higher,

1769.801 - 1790.256 Ray Fernando

This is probably going to get you a little bit more detailed response. And I would definitely play around with these things. Another important setting I think that you can tweak, and we can probably run this as a next chat, is while this is going here, there's a control section. So this control section at the very top will show us, let's see, let me dismiss this.

1790.756 - 1812.428 Ray Fernando

So the controls, one of the controls that you'll probably want to change around to get different results is the temperature. So setting the temperature from, you know, the 0.8 default down to a lower temperature will actually make it hallucinate less, is kind of what people say. And so it'll tend to follow instructions better and then not kind of veer off into different tangents.

1813.308 - 1826.279 Ray Fernando

And then another one, if you go all the way to one, it'll just be extremely creative. So you can think about those as far as maybe if you're doing some creative writing, some non-logical reasoning, that can be really helpful if you want to kind of think out of the box and have it kind of go into La La Land.

1826.919 - 1843.573 Ray Fernando

It's really up to you and your content, but I would definitely do two different responses with different temperatures and test those things and see if you see any difference in your output. For me, sometimes I find the temperature zero to be very helpful for very logical reasoning purposes, especially around code.

1843.693 - 1848.196 Ray Fernando

But it's really up to, it kind of varies and I just kind of want to give you a heads up on that.
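As a rough rule of thumb (this mapping is my own, just to make the tradeoff concrete, not something from any model's docs), you could encode those starting points like this:

```python
def pick_temperature(task: str) -> float:
    """Rough temperature starting points; always A/B test on your own content."""
    presets = {
        "code": 0.0,       # strict, logical, instruction-following
        "analysis": 0.3,   # mostly factual, a little latitude
        "default": 0.8,    # the usual out-of-the-box value
        "creative": 1.0,   # brainstorming, "La La Land" mode
    }
    return presets.get(task, presets["default"])

print(pick_temperature("code"))      # 0.0
print(pick_temperature("creative"))  # 1.0
```

Whatever numbers you settle on, the point is the same as above: run the same prompt at two temperatures and compare.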

1849.276 - 1886.584 Greg Isenberg

I appreciate that. To me, I would rename that temperature as wine versus coffee mode. Love it. Wine might get you a little more creative. If you want more rational execution style, maybe you want coffee mode. We have LCA, it's our design firm for the AI age. I feel like that's what's missing from a lot of these AI products is just a little humanity and lightness.

1888.145 - 1890.727 Greg Isenberg

I expect over the next couple of years we'll start seeing...

1893.075 - 1900.605 Ray Fernando

You know what would be funny? To basically have like a spinner where you can actually flick it yourself and you kind of see it land on something and then just like hit go.

1901.426 - 1902.407 Greg Isenberg

Totally. That would be cool.

1902.447 - 1910.839 Ray Fernando

Because sometimes you really don't care, right? You're just like, I just want to spin the bottle and see what happens. Like... Totally. It's this kind of YOLO mode, kind of. Yeah. Yeah.

1910.919 - 1911.119 Greg Isenberg

Yeah.

1912.161 - 1934.272 Ray Fernando

Because I think, like you say, there's huge opportunities in the AI space to be playful. And I think that's what's interesting is you have the intelligence of the models. And then now you have to have people who build interfaces to interface with them. And there are a lot of companies who are trying to do that. And, you know, you can get very far with just some prompting as we're seeing here.

1935.174 - 1955.263 Ray Fernando

And then we're trying this exercise here is to try different models. So if you think about it, Ollama is sort of the gateway to all these different types of models that you can try out and see if it even works for your use case. And this web UI is actually a really nice user interface to keep track of that. It's saved locally on your machine. You can go back to them at any time.

1956.564 - 1976.149 Ray Fernando

There's additional options at the bottom here, which is really nice. So you can actually have this read out loud to you. So if you're a person suffering maybe from dyslexia or you actually prefer audio, you can have that for you. This will give you some information there. You can continue the response. Sometimes if you have too much information, it still needs to continue going.

1976.209 - 2002.763 Ray Fernando

So you hit the continue and it'll just continue on, or regenerate the responses. So that's kind of some of the basics there. So yeah. Um, so this is the output of this model, and I'm fairly impressed, for a 7 billion parameter model running locally on my machine, that it took that entire transcript and did this analysis type of thing that I'd say is pretty close to the bigger model.

2002.863 - 2026.359 Ray Fernando

And, um, in terms of details, it's not as detailed as the other one, if we kind of take a look. So like, the previous one, this is the one that came out before, you know, with this nice big blog post type of thing. So it's pretty good and it's running, you know, locally. I can run this on the plane, as far as that. So yeah, so to get started, basically, again, it's just Open WebUI.

2026.88 - 2048.234 Ray Fernando

There is a getting started. It's literally a couple steps to run. Make sure you have Docker installed there. And then Ollama is going to show you all the different models. So if you go to the models, you'll see kind of stuff that's popular and trending right now. And that'll kind of get you some of that as well as far as getting started. There is, you know, we also talked about Fireworks AI.

2048.374 - 2066.938 Ray Fernando

So that's Fireworks. It's a good resource for you to, you know, go take a look and put that model in. So like if you want to put that model into your Ollama, you would kind of do the same thing here. So go to user and then you go to admin panel. And then you would go to settings up here. And then from the settings, you're going to go ahead and hit connections.

2067.539 - 2089.706 Ray Fernando

And so what you'll do is go ahead and hit the little plus connection. And so you have to put in the base URL and you'll also have to put in the API key. So the base URL here for Fireworks is this specifically here: api.fireworks.ai/inference/v1. In the example documents, you'll see /chat/completions and things.

2090.307 - 2116.378 Ray Fernando

You don't need those, because that's part of the OpenAI framework: you just put everything up to v1, and then you'll generate an API key from that model over in Fireworks AI. So if you go to the model here in Fireworks and you go to your name, and then if you go to API keys, once you go to API keys here, you just hit create API key and that'll pop up.
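The "everything up to v1" rule can be sketched as a tiny helper (the function name is mine, purely for illustration):

```python
def normalize_base_url(url: str) -> str:
    """Trim an OpenAI-compatible endpoint down to its /v1 base.

    Docs often show the full path (e.g. .../v1/chat/completions),
    but connection settings only want everything up to /v1.
    """
    url = url.rstrip("/")
    idx = url.find("/v1")
    if idx == -1:
        raise ValueError("expected an OpenAI-style /v1 endpoint")
    return url[: idx + len("/v1")]

print(normalize_base_url("https://api.fireworks.ai/inference/v1/chat/completions"))
# https://api.fireworks.ai/inference/v1
```

The same trimming applies to any other OpenAI-compatible provider you add as a connection.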

2116.799 - 2137.111 Ray Fernando

And that's the key that you want to put in there. Similar to Groq Cloud, you just go ahead and hit create API key. So once you go to console.groq.com, there's an API key section here. And then you'll want to hit create API key. And that'll pop up a dialog with those API keys. And so that endpoint will look something like this over here.

2137.332 - 2161.784 Ray Fernando

So that'll be, if we hit configure, api.groq.com/openai/v1. And then you put your key in there. And you don't have to do anything with these IDs. These will be pulled directly from that endpoint. So whatever models you have available will be there. And so now when you hit the plus sign, you'll see like this nice list of models from Fireworks. So there'll be the Fireworks one.

2161.824 - 2177.209 Ray Fernando

So account slash fireworks. You can play with any one of those. And then the other ones that are just with the normal name are from Groq. So they have those available there. So you can play with a lot of these models, which is nice, and compare them. And then the ones at the bottom are the ones from Ollama.

2177.962 - 2200.465 Ray Fernando

And it'll show, like, you know, the :latest is kind of how you can tell. And if you hover over them, you'll see some additional information about the parameter count and what quantization level it is. So Q4 means it's quantized to four bits. And that also has a play in its intelligence. Obviously, a higher bit-width, you know, means more memory. So it's like 32-bit,

2201.736 - 2219.949 Ray Fernando

16, all the way down. So the lower the number, it's not exactly less intelligence, but you may not get the output that you want or expect. So that's kind of part of that process. There's a lot of different things here, but I think the most important thing is just, yeah: how do you host this locally, how do you start playing around with it?
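Back-of-the-envelope, the memory for the weights alone scales with parameter count times bits per weight (a sketch that ignores KV cache and runtime overhead):

```python
def weight_memory_gb(params_billion: float, bits: int) -> float:
    """Approximate memory (GB) for model weights only: params * bits / 8 bytes."""
    return params_billion * 1e9 * bits / 8 / 1e9

# A 7B model at different quantization levels
for bits in (32, 16, 8, 4):
    print(f"7B at {bits}-bit: ~{weight_memory_gb(7, bits):.1f} GB")
```

So a 7B model at Q4 is roughly 3.5 GB of weights versus about 28 GB at full 32-bit precision, which is why quantized models fit in consumer RAM.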

2220.629 - 2225.673 Ray Fernando

And that's kind of like a really good primer to get started for doing these models and stuff. Yeah.

2226.954 - 2236.695 Greg Isenberg

I love it. Um, I don't know if you've played around with it, but is there any way to do this on mobile? Like, could you play with local models on the mobile device?

2237.896 - 2242.597 Ray Fernando

Yeah. There is an app called Apollo. Have you heard of that? Apollo.

2242.957 - 2246.257 Greg Isenberg

Yeah. I just haven't used it. Let's see.

2246.297 - 2254.699 Ray Fernando

I don't think so. App Store... I'm going to see if I can go here. Apollo. Let's see. Okay. Private local AI.

2255.512 - 2286.293 Ray Fernando

Yes, so I have this app on my phone, and they allow you to download the models directly, just like you would with Ollama as well. But it has its own interface, which is really nice. And so, um, I wonder if I could share my screen. I think I can. So on your phone? Yeah, yeah, they have a phone mirroring. Exactly. Apollo. Okay. Oh, I have to lock my phone. Okay, cool.

2287.714 - 2323.341 Ray Fernando

So I lock it and then it should be able to connect. Okay, cool. Nice. Awesome. So yeah, let me kind of minimize this here and yeah. Okay. Let me just go to a different screen here. Probably one that's less cluttered and do phone. Whoops. Put that over here. Cool. Maybe this will work. I think. Yeah. Yeah. Sweet. Yeah, I actually have the yes.

2324.061 - 2342.239 Ray Fernando

Another place to get your models, apparently, is also through OpenRouter. And so, yeah, so this is kind of the Apollo app. You're like, OK, cool. Can I start chatting with this? You know, as soon as I play with the thing, a couple of configurations you have to do is you hit this little hamburger menu at the very top left corner and then you hit settings.

2342.519 - 2361.479 Ray Fernando

So on the phone app, you hit settings and it's going to say AI provider settings. And when you click there, you have three different options. OpenRouter, which is another API provider. And you can also get access to pretty much every model there, which is also very handy. I think they give you some free credits, but then you would put your credits there.

2362.28 - 2379.708 Ray Fernando

And then you have the local model and then you have custom backends. So with the local model, they actually can tell how much memory you have on your device and they'll actually have a little download button for those models. The ones that are not available with the download button basically means you can't run that on your device because you don't have enough memory to run them.

2379.848 - 2406.706 Ray Fernando

So these downloads are pretty big, like four gigabytes, and some of them are several gigabytes. So just depending on the space on your phone. So you can actually run the distilled Llama 8-bit MLX version. And I have the distilled Qwen version at 7B. So it just depends on your... Oh, that one's actually not compatible. Which one do I have downloaded? So I think on mine, let's see.

2407.386 - 2430.513 Ray Fernando

The one I have available is the DeepSeek R1 from Apollo. I think I have it from OpenRouter that's running. So let's take a look here. AI providers, OpenRouter. Yeah. So the one that I have set up right now is from OpenRouter. So OpenRouter will show you all the models. You can select DeepSeek R1 from there, which is awesome. So you can have a conversation.

2430.633 - 2452.651 Ray Fernando

So this just requires me being connected to the internet. We start a new chat. You're like, tell me more about options trading. And so here you're still talking to the model, but you're actually just using OpenRouter. And so that's a little bit different than, you know, sending your stuff directly to DeepSeek. And they should be able to do that.

2452.871 - 2470.517 Ray Fernando

It's possible that this model is busy or it's currently down. That can happen. So, yeah, that happens. Yeah. While that's going, I think we could even start another new chat. Let's see this model. You can select a different model. So let's see.

2471.442 - 2472.943 Greg Isenberg

It's crazy how many models there are now.

2473.744 - 2495.621 Ray Fernando

There's so many. Yeah. It's like, how do you know which one does it? I feel like you just go off vibes. Like what's, what's my friend telling me? It's yeah. Like what's the real vibes right now? So the vibes right now, obviously R1 is like the real hotness. People are like totally into that right now. Um, and it makes sense cause you know, reasoning, uh, at a much lower costs. So, um, let's see.

2495.641 - 2518.068 Ray Fernando

Um, there's probably something going wrong with my API key or something. So AI providers: I can select a local model to run. You know, I want to see if there's something small here that we can download. So we could do, yeah, this distilled Qwen. Just for speed purposes, we'll just download the gigabyte one. So this is going to download. Wow, that's really fast.

2519.671 - 2545.821 Ray Fernando

The Qwen model, 1.5B. And so that'll run DeepSeek locally. And so basically it's just downloading it directly from, I think, Hugging Face. And then the model is being loaded on my phone. And this is actually optimized to run on Apple hardware, or Apple Silicon. So that's, um, you know, one way that you can kind of take a look at it, to run this thing. And so what's nice, yeah, if this phone runs out of internet or I need to ask some questions or do some stuff,

2546.705 - 2566.835 Ray Fernando

I will have this R1 reasoning model that's a much smaller version to run on device. And I think that's another good point about AI that's running. And you don't always need the most powerful thing running for every single type of thing. I think it's really important to... understand different use cases, you know, because maybe you don't need that depth of reasoning.

2566.875 - 2585.674 Ray Fernando

You just need something that's really quick or you just need something that's really good at like gathering lots of information and just telling you some topics or something like that. And that could just be done really quickly. So it's kind of like picking the right tool for the job and experimenting. So we're at a good age today where you can actually get these models and experiment with them.

2586.644 - 2607.629 Ray Fernando

So now I should be able to select this guy and run it. So let's go ahead and hit done and start a new chat. And then over here, we're going to go ahead and select the model. So here we're going to type in, oh yeah, it already has it at the top. So you see this little icon that signifies that it's running locally. And then we're going to hit cancel. Hit done. Okay, great.

2607.909 - 2635.738 Ray Fernando

So now we're running with that local model. And I think we're just using a default system prompt about it being Apollo. And you're like, yo, tell me about options trading. And it should basically start to cook. So it's using my phone's power there. And it's now thinking. And so if we click this little dropdown, we'll actually see the reasoning tokens. Wow. Yeah. I have reasoning on my phone.

2636.298 - 2662.922 Ray Fernando

No internet. Completely running locally. 2025 is insane. Yeah. Yeah. And imagine being able to run this on your watch. Like that'll just be because this is already showing its capability. Like we're doing this input. If you're, if you can make an app that can run on a watch all locally, you know, just think about like the transcription stuff, right? Uh, you have a very, very lightweight model.

2663.363 - 2680.054 Ray Fernando

You send the audio, you know, from the watch, you know, especially of a loved one, maybe they've fallen or something. It can just turn on the speaker and try to understand the situation and, And then if it listens to paramedics or something about asking questions and they don't really know, maybe the watch can show, hey, there's this app here.

2680.074 - 2700.525 Ray Fernando

I'm going to show you the emergency card this person has for their medications. Or this is something that's happened in the last, you know, five minutes before this event or something. This is kind of the way that people think about designing apps with these models is trying to think about these use cases. Because now you have really powerful devices just all like on the size of your wrist that

2701.144 - 2716.728 Ray Fernando

that can run these models and the power of, uh, Apple's MLX infrastructure and also their, uh, AI technology is the fact that, you know, these, these are really optimized to run these models. Um, you know, very small as we're seeing right now. Um, so that thought for 49 seconds, uh, and it gets us this output.

2717.288 - 2726.59 Ray Fernando

Uh, yeah, please excuse my small screen, but, uh, we'll probably have a zoom in on this, uh, for the edit. So yeah, yeah, yeah. That's, that's pretty sweet.

2728.11 - 2760.059 Greg Isenberg

So many startup ideas, by the way, like from, um, From that alone, I love that you shared that example of someone maybe falling and hurting themselves. I think even coordinating with your AirPods, there's just a ton of opportunity there as well. Translation. not just pure translation, but it's like someone is saying X, Y, Z, but what are they really saying?

2762.561 - 2781.712 Greg Isenberg

Imagine negotiating in the future, except you have pretend you're a lead negotiator as almost a local AI LLM that's helping you figure this out. We could have a million ideas, but it's just really exciting to see where this could go.

2783.484 - 2805.308 Ray Fernando

Yeah, I think that also goes to the point a little bit about what some of these models do. So one thing I just learned very recently about GPT-4o and ChatGPT's Omni models is the fact that this model's breakthrough, a little bit different than R1, is the fact that it can actually understand audio and tone and all these extra implications that we don't know about, especially for negotiation.

2805.428 - 2819.564 Ray Fernando

Imagine if you can understand someone's breathing rate just from listening to the audio. That's the capability of something like the 4o models with audio. You just give it the audio and it's going to know tone. It's going to know cadence. It's going to pick up things that we just normally don't think about.

2820.325 - 2835.429 Ray Fernando

But people who are maybe skilled negotiators can understand what those implications mean and then can say, hey, give me some outlier things. Every time I give this person a response, it can answer in milliseconds what the differences are. And I've heard these terms of micro-expressions.

2835.609 - 2854.92 Ray Fernando

And if you have an Omni model, you can actually mimic these micro-expressions and say, okay, this person is off when we ask them these types of things or changes their position. And those are the things that you really can't get today with some of the, you know, current reasoning models, except for the Omni models, which is the, you know, 4o models.

2855.821 - 2867.593 Ray Fernando

And so it's going to be really exciting when they actually drop o3. I think a lot of people are going to be taken by storm of like, what's actually really going to come out from them. It's going to be a really, really big leap.

2868.654 - 2869.916 Greg Isenberg

Anything else you want to cover today?

2871.662 - 2884.325 Ray Fernando

I think this is a really good primer for folks to get started on the power of prompting and especially with these reasoning models just to get started. So we covered being able to get started with prompting, understanding where your data is going.

2884.445 - 2903.896 Ray Fernando

You know, if you're using deepseek.com or using the apps that will go directly to China and their restrictions and things that they have for your data privacy. So just beware. I wouldn't personally be putting any personal information in there that you don't want exposed. And then there are other providers that you can use right now and other people that are still spinning up at this very moment.

2904.397 - 2924.655 Ray Fernando

So for now, Fireworks, OpenRouter, Groq as far as inference. And then we also covered here running the models locally so that you can actually run them on your phone, using the Apollo app I was using. It's a paid app, but I find a lot of value from it. And I'm not sponsored or anything like that. I just love the work that these people are doing. And it's really good stuff.

2924.695 - 2941.181 Ray Fernando

You can connect to these endpoints with them. And then the other part is running this through Ollama locally on your Mac and using WebUI with Docker and stuff. So I think there's a lot to be taken care of here as far as trying to use these models and come up with different app ideas.

2941.281 - 2957.993 Ray Fernando

And if you have an idea, just start to use the playground to try to generate some prompts for it and see if you can get the output that you want. And that could be the beginnings of your next multi-million dollar idea that you don't even know is there, right? So it could be hidden in plain sight. And I think that's the power.

2958.153 - 2977.007 Ray Fernando

If you want to reach out to me, you can just go to rayfernando.ai and book some time. We can have a conversation. We can get you set up because some of this stuff is very cumbersome and it's just easier for me to walk you through this. And so I'm available there as well. You can find my YouTube channel, Ray Fernando 1337 on YouTube. Feel free to check that out.

2977.067 - 2991.578 Ray Fernando

I do a lot of live streams where I check out new technology and try to play around with these AI models and try to discover what's going on, and also try to bring on experts to explain things a little bit more for us. So Greg, such a pleasure to be on the show. This is really amazing.

2992.236 - 3008.129 Greg Isenberg

You're a legend, man. Thank you for coming on, sharing your insights here. This has been super helpful. I thought it was helpful. So if people agree, go comment on YouTube. I read and respond to almost every comment.

3008.849 - 3034.224 Greg Isenberg

Um, like and subscribe for more of this in your feed. And let us know if we should bring, and when I say we, it's me, you know, let me know if I, you know, should invite Ray back on again to show us more stuff. I would certainly love to do that more in 2025. Um, crazy times, Ray. This whole DeepSeek, you know, tidal wave is just... it's insane.

3035.27 - 3057.168 Ray Fernando

Yeah, I'm glad that something like this is here to make more people aware that there's a lot of intelligence out there and how fast it's moving. And I want to also add that, like, please don't be fearful or don't feel like you're left behind. If you're just finding out about this, you're not that far behind. We're all actually still trying to understand what this intelligence can give us.

3057.608 - 3069.791 Ray Fernando

And so the prompts and the things that you develop are a good place to start. And, you know, it doesn't have to feel complicated. And, you know, whatever you can get your hands on, make sure you do that. And, you know, be aware of where your data is going.

3069.811 - 3076.973 Ray Fernando

But at the same time, play, discover, share, share back with the community and definitely share any cool stuff that you've done in the comments for sure.

3077.253 - 3081.014 Greg Isenberg

All right, my man. I'll see you later. Thank you so much.

3081.634 - 3082.394 Ray Fernando

Take it easy. Thanks.
