
What is GenAI and can we do it with JavaScript?

with Adam Cowley

If you're like me and you don't know what "GenAI" means, Adam Cowley will teach us how to use LangChain.js and build our own custom apps.


Transcript

Captions provided by White Coat Captioning (https://whitecoatcaptioning.com/). Communication Access Realtime Translation (CART) is provided in order to facilitate communication accessibility and may not be a totally verbatim record of the proceedings.

JASON: Hello, everyone! And welcome to another episode of Learn with Jason. Today on the show we are going to be learning about a bunch of stuff that I keep hearing the names of and have no idea how to use and to help us, I'm bringing on previous Learn with Jason guest and friend Adam Cowley. Adam, how are you doing?

ADAM: Good. Glad to be here, how are you?

JASON: I'm doing so good. I'm so excited because I feel like -- I've spent a significant amount of effort trying to ignore AI. Just in general. I, you know, I was joking with you beforehand that like, there are certain fields where I -- I just start to get the like -- I don't know about this one -- it feels like people are making some big promises and that means that in another few years we'll forget about it entirely. But something feels different about AI because I'm actually seeing it get used in different mediums. However, before I dive into that, because I have a million questions for you, let's start by setting some context. So, for folks who aren't familiar with you and your work, a bit of background?

ADAM: I'm Adam, I have been at Neo4j for 9 years now. It's a graph database company. We represent data in nodes and relationships, and look at the connections between those things. My day job is to educate developers on how to use Neo4j. And I do that through GraphAcademy. I have been producing courses there for three or four years. And trying to provide a hands-on experience. But I want to say I'm a developer first. I don't want to just pitch Neo4j today. Keep me honest: we're here talking about GenAI. If anyone hears me mention Neo4j out of context, let me know, and I will send you a free T-shirt.

JASON: I'm ready with the sound board. We'll just blast it. All right. So, I wanted to -- Adam Argyle is here to challenge you for the right to be the only Adam.

ADAM: There's always two Adams, no matter where you go.

JASON: That's true. I -- yeah. That actually is true. Adams and Mikes. In any gathering of three or more people, there will always be at least two Adams and at least two Mikes. So, Adam, let's talk a little bit about AI. So, there are -- I think -- all right. So, let me start by attempting to set the context for what I understand and then you can set me straight on this. AI, as far as I can tell, is a buzzword for literally anything that's being done right now with like machine learning or computer vision or all of these things that have sort of been around for a long time but we just collectively discovered a lot of hype for them. And have bundled them up under the term "AI." And then there is a subset, I think, of AI that has been called GenAI which is short for generative AI, is that correct?

ADAM: Yes, that's correct.

JASON: And generative AI is something I don't understand. Whenever you start reading about it, it includes a lot of things that are net new for me. I don't know what a vector database is. Whenever I read about GenAI, it's "use a set of vectors" and things like this. I don't know what any of it means. And I back away slowly. And as a result, I haven't really learned anything about it. So, in your experience, like what... am I correct in saying that AI as a -- like AI without prefixes or qualifications is just a marketing word. Is that a safe assumption?

ADAM: So, not necessarily. I guess when people talk about -- or when people are talking about AI in general, what they're getting towards is AGI, artificial general intelligence. Which we're not quite there yet. You have machine learning, which is, you know, you take data. You train a model to predict something or to do something. And then you get an output from that. And then artificial intelligence is applying that. So, the decisions get made for you. And these signals get acted on. So, that's kind of the difference between machine learning and AI in general.

JASON: Okay.

ADAM: And with generative AI, you can genuinely call it AI, artificial intelligence, because you're having something act on something without any human intervention. But really what the -- what GenAI is, is generating content or generating something based on instructions, based on a prompt.

JASON: Okay. And so, this would be I guess ChatGPT being the most famous example here. Is -- that is GenAI.

ADAM: Yep. Yeah. And so, this is -- this is nothing new really. Because like these models -- like ChatGPT, GPT is the type of model that is used. So, I always forget this, but it's Generative Pre-trained Transformer. So, it's a category of model -- machine learning model -- that is basically trained on a vast amount of data and then it's basically turned into like a next-word generator. It's a way of predicting what the next word in the sequence would be, and then it kind of, you know, keeps doing that until it runs out of juice.

JASON: Got it. And so, this is, you know, there have been concepts like this for a long time that are obviously a little less sophisticated. But things like a Markov chain have been around for a very long time. And you -- you get, you know, I think that's why predictive text on phones is so funny. Because there's sort of -- I don't know if it's exactly a Markov chain, but it's like that. In relation to the word that is before it, it makes sense. But in relation to five words back, it definitely doesn't. And that's why those tend to be pretty funny sentences if you just start tapping the autocomplete. So, the idea of something like ChatGPT is that it's sort of taking that to a much more sophisticated extent, right? Where I can expect -- I should expect coherence. Like not just based on the previous word, but on like the context of what's being said. It should all feel like a real thought-out sentence.
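(A quick illustration of the idea Jason is describing: a first-order Markov chain fits in a few lines of TypeScript. This is a toy sketch, not code from the episode -- it only ever looks one word back, which is exactly why phone-style predictions drift into nonsense.)

```ts
// Toy next-word generator: map each word to the words observed after it.
function buildChain(corpus: string): Map<string, string[]> {
  const words = corpus.trim().split(/\s+/);
  const chain = new Map<string, string[]>();
  for (let i = 0; i < words.length - 1; i++) {
    const followers = chain.get(words[i]) ?? [];
    followers.push(words[i + 1]);
    chain.set(words[i], followers);
  }
  return chain;
}

// Walk the chain: pick a random observed follower until we run out of juice.
function generate(chain: Map<string, string[]>, start: string, max = 20): string {
  const out = [start];
  let current = start;
  for (let i = 0; i < max; i++) {
    const options = chain.get(current);
    if (!options?.length) break;
    current = options[Math.floor(Math.random() * options.length)];
    out.push(current);
  }
  return out.join(' ');
}
```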

ADAM: Yeah. Yeah. And that's where -- I mean, we'll get to it -- but that's where the vector embeddings, semantic search, and things come in. The reason why this is so big at the moment -- the big explosion came with GPT-3. That's when people started to kind of pay attention. But like these models have been around for ages. The reason that ChatGPT really took off was because it was easy to use. It was a chat interface and you could interact with the model through just a text box. And then, you know, it did amazing things. Things that you would previously have to do with -- I don't know, like some sort of command-line interface. Some sort of API. Something that's kind of hidden away from, like, the execution of it.

JASON: Okay.

ADAM: So, yeah, that's kind of where we are.

JASON: Got it. Okay. So, before we get too deep, let's do a little bit of like a glossary up front. Because there are gonna be a lot of new concepts and I just want to make sure that I understand what they are. So, we defined AI in general. We've defined GenAI. What was it you said -- "vector embeddings" -- a second ago. What are vector embeddings?

ADAM: So, in a nutshell, a vector embedding is an array of floats between minus 1 and 1 which represent the distance between a chunk of text or a concept and certain items that the LLM will deem to be important to kind of be able to understand what the thing is. So, you essentially take some text, pass it through to an embedding model. And then you get back a series of numbers essentially.

JASON: Okay. And so, that sounds like math. And so from a practical standpoint, what is that math doing?

ADAM: So, it's basically a distance between the concept or the chunk of text that you have and certain other concepts. So, you can kind of gauge it -- like based on these numbers -- and we're never gonna know what these numbers mean.

JASON: Sure.

ADAM: It's not one, two, three, four, five -- it's the distance between something and something else. Like the concept. And then you do it across the space. So, it's like you do a cosine similarity or some sort of similarity comparison to work out whether two things are similar to each other.
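(The similarity comparison Adam mentions is simple enough to sketch. Assuming two embeddings of the same length, cosine similarity is just the normalized dot product -- values near 1 mean "close in meaning.")

```ts
// Cosine similarity between two embedding vectors of equal length.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// cosineSimilarity(embed('strawberry'), embed('fruit'))    → closer to 1
// cosineSimilarity(embed('strawberry'), embed('building')) → closer to 0
```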

JASON: And so, this is sort of like the idea of if you were -- like in UX, for example -- when you're trying to map out the navigation for a really big app. One of the exercises that they'll have people do is they'll take all of the concepts in the app and then they'll do a card sort where they start grouping them together based on what somebody thinks they are. I think this is like this, and I think this is like this. And this helps create a sort of conceptual map or a conceptual overview of how the app is perceived by somebody else. So, in a very rough analogy, what we're doing is we're basically saying: hey, whatever model we're using, build your own conceptual map of the information I just put in, so that if somebody asks you a question, you're using this set of data to make decisions about how to respond as opposed to just like guessing at random.

ADAM: Yep. Yeah. And this is on a grand scale as well. You're talking a thousand plus dimensions. A thousand plus dimension space where each of these distances is a distance from something else.

JASON: Gotcha, yeah. And then Jem in the chat is saying strawberry is closer to fruit, further from building. That makes sense -- like obviously the examples that we're using here are very oversimplified. But this all feels very intimidating, and when you talk about it, it really is -- we're just trying to emulate the way that humans categorize things. Because when we look at anything, we're instantly categorizing it on a ton of different vectors, to use the word. Where, you know, when you look at a piece of fruit, you think about the color, you think about its location, you think about its size. You think about what is it similar to? What is it not similar to? What does it taste like? You know, all those sorts of things that are very -- we just do it, right? But if you were to try to build a software application by hand that could do all of the same categorization as like a human brain does, it immediately gets unwieldy to try to write down all those things and predict. This is why building software is such a challenge. You're always trying to predict what other humans will want. And it's just very, very hard to do that. So, maybe this is where the excitement comes from. These models theoretically close that gap where if we have the available information, the model then theoretically can present it in a way that no matter how somebody asks the question, it can sort of understand what they meant and give them what they intended as opposed to me as a developer having to predict exactly how somebody might try to surface that information.

ADAM: Yep. Yeah. Definitely. Yeah. Yeah, if you take -- strawberry is a good example. The example I use as well is apple. We have got apple, the fruit, and we've got Apple, the company. And based on the kind of words around it, the context of the sentence, those are two completely different things.

JASON: Based.

ADAM: But based on the kind of semantics of the sentence and the distances from these concepts, the embedding means that you can work out semantically whether the thing that you're talking about is the fruit or is the company. And that's something that has always been difficult to kind of understand in terms of NLP, or natural language processing, or entity recognition, things like that. That's a difficult thing. It's almost like a silver bullet. You can figure these things out with these vector embeddings.

JASON: Got it. Okay. That's what a vector embedding is. Any other big concepts that we're gonna need to be familiar with as we dive in here?

ADAM: None I can think of off the top of my head.

JASON: We'll tackle them along the way. Before we start, there's one other thing I want to address which is to me kind of the elephant in the room with AI. And, you know, I talked about it a little bit at the beginning of the show. But to me, AI, when it first came out, had a very similar sort of smell to a lot of things that have kind of over time proven to be a bit overblown, you know? We see the sort of get-rich-quick gold rush of people. Everybody's trying to say, oh, we're this kind of company now. We offer -- yeah, we've got that tech, we've got that tech. And AI very much has that happening right now where it seems like every single company is in a dead sprint to add something that counts as AI so that they can say, yeah, we do AI. Historically, you know, like with crypto, that ended up being a pretty massive distraction that's had a fairly low long-term impact on anything. Like we're not seeing a lot of, you know, the Web3 NFTs, those sorts of things. They're still around. But they didn't have the like -- the government's not using crypto. Our banks aren't using crypto. We didn't really see this massive change. It was technology. It's cool. But, you know, life goes on, right? AI is being touted in the same way. Like it's gonna change everything. It's gonna replace developers, it's gonna replace every job, actually. We have things like Roko's Basilisk. It's very funny. Google that.

ADAM: What was that?

JASON: It's too much of a rabbit hole to get in right now, but it's a fun side quest for anybody who wants to go down the rabbit hole.

ADAM: I'll close my browser and not read it while you're talking.

JASON: Much appreciated. But yeah. So, what is -- what about AI is hype? And what about AI is real? And I'm just gonna preface this for everybody with this is two dudes having opinions. This is not, you know, like be -- take it all with a grain of salt and, you know, come to your own conclusions here.

ADAM: Yeah. So, my experience is nothing is ever as easy as people make it out to be. And we'll probably cover that -- find a few examples of that. If anyone tells you that they've made thousands of dollars on doing one thing, it's not happened. Also, to a certain degree, if there's like a video on YouTube that's got somebody looking shocked, you're probably not gonna make any money off of that. But I think the difference I see between this and other things is that you've got real-life kind of uses and applications. I guess with Web3, like, it's a great idea in theory, but I can't see the application there. Whereas -- I mean, depending on what industry you work in and what you do for a living, there are applications across -- yeah. Across the industries, really. And, you know, I've used it for quite a few things. Failed with it quite a few times, succeeded a few times as well. I think there's always that degree of skepticism. I think that's right. But there's also that kind of degree of -- what's the word? People will be scared of it. But people were kind of scared that, you know, when cars first came out, if they went over 5 miles an hour, your head would spin. Women can't drive cars because it will rotate their ovaries, that sort of stuff. Then the people who have the vested interest. Like the horse breeder that doesn't want cars to kind of take off as well. So, yeah, there is that. There's also the optimist side of it where these new advances in technology, instead of, you know, being kind of scared of things going wrong -- and, you know, AI is gonna take all our jobs. Right now it definitely won't. And I can promise you that because I've tried to make it take my job. And I'm getting nowhere with it. But yeah. There are applications that you can kind of do with it. It's all about freeing up people to do more interesting things. So, I'm thinking if this can take my job, I can perfect my pizza dough recipe. And I guess you probably focus on burgers or something, right? Like there are other things we can do.

JASON: Yeah. And I think there's a very philosophical discussion to be had here. And Nicole Sullivan was talking about this where like the promise of AI is that it removes work and creates this like post-labor society. Right? And on its face, that is really interesting. But in reality what we've seen is that when we make it easier for a company to do work, what they do is they remove work for the people who are doing it and transfer all of that wealth and leisure to the people who own the companies. So, what makes AI any different? I don't think that's really the can of worms we want to open, but it is a thing to think about, right? How do we actually mitigate that as we are working with these new tools to, you know, automate things that used to have to be done by hand. Somebody is not gonna get that work anymore. What does that mean? Like does that mean that I automated that work and I put an extra 15, 20 bucks an hour in my pocket and that person can go kick rocks? Or are we actually building a post-work society that means I don't get to keep all the wealth when I replace a job, the wealth goes back in. The whole tax-the-rich thing. That's -- you know, deep, deep, deep rabbit holes that we can get down and probably not worth completely derailing this podcast. I want to make sure we have plenty of time to actually build. So, one quick note. You mentioned a minute ago about cars and people being afraid of them. I dropped a link to a podcast -- and it's a very incomprehensible link -- in the chat. This is a show called The Dollop run by comedians. But they did this deep dive on the history of women and transportation. And it is bonkers. Like if you want a really funny, but also very depressing hour of entertainment, listen to that. And all the different ways that people used to tell women they would die if they got on a bike, if they got on a train, if they got in a car. It's fascinating how weird we are about all of that. Okay. I have one question from the audience that I want to bring up in the context of this. So, you said that, you know, AI is gonna sort of start changing and shifting jobs. In the future of web dev specifically, what do you see the impact of AI being?

ADAM: Um... yeah, that's a good question. I see a lot, especially at the moment and kind of the space we're in now. Sorry, copywriters, but I think a lot of that is probably gonna be kind of taken over. There are, you know, models -- like specially trained models -- that will write code for you. I still don't think that they're quite there. But for me at the moment, it's more about kind of increasing the productivity. So, like Copilot is a good example where you still have to do the thinking up front. But it can kind of do the quick stuff for you. So, if I start typing, you know, I want to do a fetch request to get information from this URL, these are my headers, it's gonna be JSON -- you can put in a comment and do the work upfront and it generates it for you, or does it as autocomplete. But for me, the important thing there isn't the code that gets generated, it's the thought that goes behind it. So, I think instead of writing, say, 50 lines of code an hour, you can write the kind of specification for the code. Have it write 200, 300 lines of code for you in an hour. And then it's up to you to just kind of test it. Generating like unit tests for your code: you say, I want it to do this, this, and this. These are the conditions. And you can kind of get these generated for you. There are definitely limits, though. I've tried it a few times with different kind of frameworks and different projects. Like, how do I write the code to do this? And really, in the time that I've written the instructions, I might as well have just written the code.

JASON: Yeah.

ADAM: I think there's definitely the -- we're not at a point now where it's gonna take developers' jobs because I don't think the models are kind of ready for that. There's a lot of back and forth and thinking and reasoning that these models aren't capable of doing. But I think it's gonna speed people up, make things more productive. And as we're gonna find out later. So, I've written a course for GraphAcademy on Neo4j and LLMs. One on the fundamentals. And one is building a chatbot with TypeScript. So, I chose Next.js for that. And I chose React -- as my kind of two frameworks. And then coming on this podcast, I thought, I'll have a look at Jason's website. See what he uses. Okay. He uses Astro and he uses Solid. But what I was actually able to do with relative success was translate the React stuff to Solid and translate the Next.js stuff to Astro. And there's a few little things around it. So, I was using like React hooks and then there was a bit, you know, to kind of convert things over to signals. Which needed a little bit of further kind of coercing. But through like a conversation with ChatGPT, I was able to do that in probably about 20 minutes.

JASON: Nice.

ADAM: So, that's another benefit, really.

JASON: Cool. All right. So, I want to make sure we have time to actually build. Despite the fact that I have 9 million more questions, I'm going to force us over into screen share mode here. So, before we get started, let me do a quick shoutout. This episode, like every episode, is being live captioned. And there is a little ticker at the bottom that will show you where to go find that. And that is made possible through the support of our sponsors. We've got Nx who just joined this month. Thank you so much, Nx. Netlify, and Vets Who Code -- Netlify has been with me for years, Vets Who Code for a while now. Thank you very much to the sponsors for making this show possible. And as always, if you or your company want to continue allowing me to do this, please get in touch because this is the only thing I do for money now. Thank you. We are talking to Adam. So, here is a link to Adam's website. And you can -- you can hop on there to see what Adam's all about. And from there, I don't know what to do next. What should my first steps be if I want to -- actually, set the stage on specifically what we're gonna do today.

ADAM: Okay. So, as I mentioned before, like I've got a -- a course on Graph Academy which is all about you know implementing a chatbot and making a chatbot. This is the high-level stuff. What we're gonna do is jump into that chatbot TypeScript course.

JASON: Okay.

ADAM: The course is kind of written -- so, I've built an application in Next and React. And what it does at the start is it basically just echoes the response back to you. So, as some people say about GenAI, it's a parrot -- it just repeats back to you what you've told it -- and this app literally does the same thing. But what we'll do is we'll go through and do it in Learn with Jason style. We'll take that chatbot. We'll first connect it to an LLM. And then we can start to build out some functionality from there and kind of explore what you would do to get started.

JASON: Great. Okay. So, you sent me a repo that I cloned over here. And it looks -- I mean, it's -- like there's a handful of things I haven't seen in here before. Like, I've heard this name. I haven't really tried it or anything. But yeah. So, we've got an Astro site. We've got LangChain which I'm gonna ask you to explain in a minute. Cheerio is for tests, right?

ADAM: So, cheerio, it's like a jQuery thing, it's for scraping HTML into JS.

JASON: Oh, right. Yeah, yeah. Marked is for markdown, neo4j is for Neo4j. And we have Solid and TypeScript. And we're off to the races, right?

ADAM: Yep.

JASON: So, if I start this up... let me start this app here. We've got the standard Astro dev server on localhost:4321. Okay. So, I'm in here. We've got the chatbot. You say right now it's an echo chamber. If I say, "why are you copying me?" it will just repeat what I have to say. Let me zoom in a little bit so this is a bit easier to read. Okay. So, this is fun. Where do we go from here?

ADAM: Cool. So, if you head back to the repo. So, there's an API route handler -- I think I probably called it chat.ts. You can see what this does. It uses a setTimeout to wait for a second and then kind of respond back. So, what we'll do is we'll replace this with interactions with an LLM.
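(Roughly what that starting handler looks like -- a reconstruction for illustration, not the repo's exact code. The route shape and the message key are inferred from the conversation that follows.)

```ts
// src/pages/api/chat.ts -- the "echo chamber" version, sketched from the description
import type { APIRoute } from 'astro';

export const POST: APIRoute = async ({ request }) => {
  const input = await request.json();

  // Simulated delay, then echo the user's message straight back
  await new Promise((resolve) => setTimeout(resolve, 1000));

  return new Response(JSON.stringify({ message: input.message }), {
    headers: { 'Content-Type': 'application/json' },
  });
};
```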

JASON: Okay.

ADAM: And to do that, we'll use LangChain. So, LangChain is a framework for building AI applications. So, I have no affiliation with them other than their -- they're brilliant to work with. So, yeah. So, it's a -- and congratulations to LangChain. They just got their first round of funding, I think.

JASON: Cool.

ADAM: It shows that kind of things are taking off. Primarily they were working with a Python library for a while. And then they've kind of moved over to JavaScript as well. So, us frontend/fullstack developers can also have fun with LLMs. We don't have to leave it to the Python developers. But really, it's kind of an orchestration framework for communicating back and forth with LLMs. So, if you go to -- I think it's -- if you go into the docs, they just changed their website. Not quite sure where things are now. But if you go to --

JASON: The TypeScript docs, right? The JS/TS docs?

ADAM: Yeah, I'm looking for models. Model I/O. The nice thing about LangChain is they have out of the box support for every LLM provider you can think of. So, that's all the way from OpenAI, you know, open source ones. LLaMA, things like that.

JASON: Okay.

ADAM: And then they -- so, they started out by doing -- like building some sets of classes that you could instantiate with some config and then it basically did all the work for you. So, it was like a QA chain. So, it's like a -- you know, you put in some information and then it does all the magic for you. But what they've released recently is LCEL, or LangChain Expression Language. And basically all those classes were, under the hood, doing some pre-computation, putting it into a prompt, sending the prompt to the LLM, and parsing the response. The kind of newer approach is to do that composition yourself. I was kind of scared when I first saw this, because with the classes I thought, I can do anything with this. Like I can do a CypherQAChain for Neo4j and all this gets done for me -- it generates a Cypher query and gives you the result back. But really, all it does under the hood is: information goes into a prompt and that goes to an LLM.

JASON: Okay. All right. So, I understood what you said academically. Practically, like what do I do with it?

ADAM: So, the first thing to do is npm install langchain. And that will give you your package.

JASON: Okay. And I think I -- right. So, we've got LangChain. And I already ran the install.

ADAM: Yep. And then it's just a case of including things from the package and going from there. So, in LangChain, you have this concept of a chain. It's kind of a sequence of events that happen. To me this feels a little bit like RxJS, if you're familiar with that. But it's basically things piping through from one thing to another thing.

JASON: Got it.

ADAM: And then the terminology has been carried over from the Python library because you -- so, in the Python library, you just use like a pipe command --

JASON: Right.

ADAM: -- and write from one thing to another. Like in your shell. Whereas that doesn't really get supported in JavaScript or in TypeScript. So, there's a few different ways to go through it. But basically, you create what's called a RunnableSequence.

JASON: Okay.

ADAM: It's a sequence of events that execute one after another, and go from one thing to another, and apparently I've just done that -- but yeah. So, in order to -- have I forgotten --

JASON: Well, I can still hear you, but you have frozen.

ADAM: Okay. And what a bad way to freeze as well. All right. Let's see if I can sort this one out. While we're talking. But yeah. So, basically the way you get an LLM to do what you want it to do is you give it some instructions. You give it what's called a prompt. Yeah. Tell me about technology. So, basically, you're giving instructions to an LLM. So, that could be as simple as like ask it a question and then it would generate an answer for you. Or you can get it to --

JASON: There you are. Oh, did that one just freeze too?

ADAM: Yeah, I think so. Sorry about this, viewers. And yeah. It could be as simple as providing just a simple instruction. Can you answer this question for me? Or it could be something more complex, like: given some context, you need to do something. So, yeah. The way we do that is through a chain. So, the example that I've got in the course is like -- actually, let me take this back to where we're going. So, what we want to do is to implement that chat interface.

JASON: Yes.

ADAM: So, when a question comes in from the user, we take the message, we take the response. We pass that through to an LLM. And then we get that to provide an answer.

JASON: Okay.

ADAM: Cool. So, in the route handler, you get a JSON request sent through. In chat.ts.

JASON: Chat.ts.

ADAM: Yeah. You see that input is request.json.

JASON: Yes.

ADAM: In there there will be a message key. So, that will be a string. Actually, if you open up the inspector, you should be able to see what's coming through.

JASON: In here? Wait.

ADAM: Yeah. Yeah. So, if you go to like the network tab, it sends through -- it's basically a simple payload with one key, which is message. And the UI is expecting the same thing back. So, it's expecting back a message key and then it will add it to the response. So, what we need to do is we need to create a prompt. We need to pass that through to an LLM and then we need to parse the response somehow.

JASON: Okay. So, what that means, then, is we need to create a prompt. What was the next thing?

ADAM: Pass that through to an LLM.

JASON: Pass to an LLM. And then return the response?

ADAM: Yep. Yeah.

JASON: Okay.

ADAM: So, you parse the response that comes back.

JASON: Okay. Okay. So, first part. Creating a prompt.

ADAM: Yep. So, let's pretend that this chatbot is you. So, what we can do is we can create a prompt template. What a prompt template is -- it's basically a string of text with placeholders in there. So, you could say, for example, "tell me about" and then a country name as a placeholder. So, that's --

JASON: Okay.

ADAM: Let's try that first. So, create a new const. Then call it prompt.

JASON: Prompt.

ADAM: Equals. And then if you do prompt template.

JASON: Prompt template.

ADAM: Yeah, capital P, capital T. And then .fromTemplate.

JASON: Okay.

ADAM: And then puts -- yep. Put a string in there. This is where you start to provide instructions. So, you could say --

JASON: Okay.

ADAM: So you give the LLM a role. You can say: you are Jason, the host of the show Learn with Jason. Answer the user's question in a -- how would you describe your communication style?

JASON: Oh, chat. How would you describe my communication? If you were gonna say: He always responds in a, you know, very grumpy, serious way? What are the words you would use? Let's see... just respond with boops.

ADAM: Yeah.

JASON: Let's see... so, I would say helpful. Friendly. "Lovely way." Thank you, thank you. And -- "in a corgi way." Like it. Okay. So, "random and impulsive." Roger, how dare you? Okay. So, I've got the description here. Then what do we say?

ADAM: So, what you do in here is -- in essence, you're describing like a role or a persona and you're giving some instructions. So, now if you add a new line and just put question inside curly braces -- curly bois. This should be enough to give the LLM enough instructions of what to do. So, you're saying: you are Jason, the host of Learn with Jason. Answer in a helpful, friendly, lovely way. You can also do something like "include emojis." Things like that, to kind of control how the output comes out. And then what you're saying is basically: use these to answer the question. So, if you put "Question:" in front of it with a colon, actually -- saying, this is the question part of the prompt.
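(A minimal sketch of the persona prompt being described -- the exact wording here is invented, and import paths vary by LangChain version.)

```ts
import { PromptTemplate } from '@langchain/core/prompts';

// {question} is a placeholder that gets filled in by the chain at runtime
const prompt = PromptTemplate.fromTemplate(`
You are Jason, the host of the show Learn with Jason.
Answer the user's question in a helpful, friendly, lovely way.
Include emojis where appropriate.

Question: {question}
`);
```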

JASON: Okay.

ADAM: Cool. Yeah. What we need to do is take the information from that request and inject it into that prompt.

JASON: Okay.

ADAM: So, in order to run something in LangChain, you use a runnable sequence.

JASON: Okay.

ADAM: So, if you return -- if you type "return" to return something from the route handler. Oh, no, sorry. So, if you do const chain equals -- so, you create a new variable called chain. And then if you do RunnableSequence... dot from. Yep. And then this gives you like a -- this allows you to define the steps. So, the input that comes in -- you have like the message there which is that input.message thing. So, the first thing you want to do is to -- oh, no. Sorry. To do an array first.

JASON: Okay.

ADAM: Empty array, yep. And then you start to kind of define and build things up. So, let's say what you can do is pass through an object. Use the curly braces again to kind of start to define what this input would be.

JASON: Okay.

ADAM: And then inside there, if you -- so, do question, didn't we? In the --

JASON: Yeah.

ADAM: In the prompt. If you do question, colon. So, this would be -- in this case, you use a runnable passthrough. So, if you do new RunnablePassthrough. And I'll explain this one as it comes through.

JASON: Oh, pass through. That's the one I wanted. There it is.

ADAM: So, that's a runnable parallel -- you can do these things in parallel. Yeah. What that's saying is: take the information that's passed through to that chain and then assign it to the question variable.

JASON: Okay.

ADAM: So, what we can do actually at this point, are you a TypeScript fan?

JASON: Sure.

ADAM: Okay. That RunnableSequence.from has generics on there. You can determine what the input and the output of the chain will be.

JASON: Okay.

ADAM: And that will make it more -- make your code more robust. So, after "from," use the angle brackets. Yep. Inside there, the input will be a string, so, if you do "string" and then colon and string again. No, sorry, comma. String, string. So, you're defining what the input is and what the output is.

JASON: Okay.

ADAM: The input will be a string and the output will be a string. Now it will complain that the output isn't a string, which is fine because we're still going from there. So, after the braces and the comma, add a new line. Then do prompt. So, what this is doing is it will pass the question through into the prompt. It will use everything inside the object that you have defined above to replace the placeholders that you've got in the prompt. So, the next thing we want to do is we want to pass that through to an LLM. So, we need an instance of an LLM. Now, we could go fancy and we could use open source ones and we could download LLaMA. But we don't have time for that. I've given you the OpenAI API key so you can use that.

JASON: Oh, I have not actually added those to this project.

ADAM: Okay.

JASON: So, you -- you gave me like two sets. Am I -- I'm putting all of these together in one end file?

ADAM: Yeah, yeah. Stick them all in.

JASON: Okay. Let me pull this off-screen here for a second. And I'll get my .env set up. Okay. That is saved. And closed and... .env changed, restarting server. Okay. Let me get this back over here.

ADAM: Sounds promising.

JASON: We're in business. I just added everything into this .env. Now we're ready because I have all the API keys in place.

ADAM: We need an instance of an LLM. If you go above the prompt: const llm equals.

JASON: Oh, up here, that makes sense.

ADAM: Yeah. And then new OpenAI. So, if you start typing OpenAI, yep, that's the one. And then that takes an object as the first parameter.

JASON: Okay.

ADAM: And then the key is openAIApiKey.

JASON: Okay.

ADAM: And that needs to be assigned to import.meta.env -- I think it was secret_OpenAI_--

JASON: Copy it out of here.

ADAM: Basically what we're doing is we're defining the API key to use -- I hate saying "under the hood" because I'm not American. But under the hood.

JASON: Under the bonnet, is that it?

ADAM: Yeah, under the bonnet. Basically it's just an API call -- a REST API call. So, OpenAI has the OpenAI specification. Really all that's doing -- and there are different language models, and you can use those to send to wherever you need to go.

JASON: Okay.

ADAM: Like I said, it supports like 80 plus --

JASON: Different LLMs?

ADAM: Yeah. Different LLMs and service providers. So, you can swap one out for another just by having a different instance. So, if you wanted to use, you know --

JASON: LLaMA or -- yeah.

ADAM: Yeah. So, you would just do that. And it's just the abstraction on top of it.

JASON: And then do I just -- oh look. I toss it in. And it's not mad at me anymore.

ADAM: Perfect. What you need to do is parse the output that comes back. So, there are different output parsers, but for now you can use -- or create a new instance of -- a string output parser.

JASON: Okay.

ADAM: You can just do that after the LLM inside the sequence.

JASON: After the -- okay. So, I do new StringOutputParser, all right. Does it need any args?

ADAM: Nope. No. That's good.

JASON: Okay. And probably don't need a simulated delay anymore.

ADAM: Yep. Instead what you do now is you do chain.invoke.

JASON: chain.invoke. And do I have to await any of this stuff?

ADAM: Yep. That's an async function. And then if you assign that to -- let's call it message.

JASON: Okay. And I assume I have to pass in the input.message.

ADAM: Yep.

JASON: And so, I pass this --

ADAM: There's a message above. That's why it's giving you that.

JASON: Ah. Maybe we call this one response.

ADAM: Yep.

JASON: To recap what's happening here. So, we write a question. That question gets sent to this endpoint and we pull out the message here from the input itself. Then we create a chain. And we invoke the chain using the message. So, I guess we can just use that message directly. And that message goes into this runnable sequence, gets passed in -- so, the runnable passthrough, just whatever shows up here gets stored as question. That question gets put into the prompt where it gets used here and we set the tone for how I should respond. Or how the LLM should respond as me. We then define the question. And then we pass it through to our LLM which in this case is OpenAI. And finally, we tell it to take whatever comes back from the LLM and parse that out as a string. Which we then get in response. So, I should be able to return the response?

ADAM: Yep. So, if you do message, colon, response, that will be the output that the --

JASON: Okay.

ADAM: -- the AI -- sorry, the UI is expecting.

JASON: Cool.
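(Pulling the whole exchange together, the handler at this point looks roughly like this. Import paths and the env variable name are illustrative -- the transcript never spells the key name out in full -- and newer LangChain versions split these imports across @langchain/* packages.)

```ts
import { PromptTemplate } from '@langchain/core/prompts';
import { RunnablePassthrough, RunnableSequence } from '@langchain/core/runnables';
import { StringOutputParser } from '@langchain/core/output_parsers';
import { OpenAI } from '@langchain/openai';

const llm = new OpenAI({
  openAIApiKey: import.meta.env.SECRET_OPENAI_API_KEY as string, // env var name illustrative
});

const prompt = PromptTemplate.fromTemplate(
  `You are Jason, the host of Learn with Jason.
Answer the user's question in a helpful, friendly, lovely way.

Question: {question}`
);

// string in (the user's message), string out (the model's answer)
const chain = RunnableSequence.from<string, string>([
  { question: new RunnablePassthrough() }, // assign the raw input to {question}
  prompt,                                  // fill the placeholder
  llm,                                     // call the model
  new StringOutputParser(),                // parse the completion into a plain string
]);

// inside the POST handler from earlier:
const response = await chain.invoke(input.message);
return new Response(JSON.stringify({ message: response }));
```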

ADAM: That should be it. Now, fingers crossed, if you go back into the -- into the UI. And then ask it a question.

JASON: All right. What is the correct number of boops? Uh-oh.

ADAM: Oh.

JASON: Did I --

ADAM: You may have to stop and restart the server. I found this when I was experimenting. But yeah, double check that key as well.

JASON: I mean, the key was -- the key is right. Because I copy-pasted it out of what I set. So, we should theoretically be okay. "Correct number of boops?" And I will copy that in case it doesn't work this time either. Hey! "There is no correct number, it's totally up to you and what makes you happy. Some people might like one boop, others might like 100. What's most important is that you're having fun and enjoying the boops."

ADAM: There you go.

JASON: I had my doubts. But that does sound exactly like me.

ADAM: Yeah -- so, it's really interesting that you say that. Because in my kind of experiments I have been trying to see what we should use. Like should we use a local LLM and things like that? When I used LLaMA 2 -- which is Meta's LLM; I don't think it's quite open source, but it's freely available -- it started to say, you know, "y'all," and stuff like that. Which is almost the way that you speak. Which I thought was quite interesting. It's almost like a parody of what you would say.

JASON: Sure, sure.

ADAM: One of the things that I've done with my -- with -- when trying to generate like course content and stuff like that for Graph Academy is that I try to say, you know, communicate in the style of somebody. I always choose Malcolm Gladwell, I like the way he talks. But it doesn't give me what I want. It kind of gives you like a parody of what's going on. So, yeah.

JASON: Got it, got it, yeah.

ADAM: But yes. That's our first call into the...

JASON: Now, this is going into OpenAI without any vector embeddings. This is not like -- I don't know -- what is your opinion on Qwik? Right so, this is sort of a -- just like incorrect kind of nonsense answer.

ADAM: Yep.

JASON: So, if we wanted this to actually be based in like what I would say, how would we go about that?

ADAM: So, this is -- so, two things to look at. So, firstly, for the kind of tone, you would want to fine-tune a model -- like retrain it on your communication style. Which is something I have no idea how to do. But the other way to do it is using embeddings, through a technique called RAG, or retrieval-augmented generation. To be honest, I don't like the term. But it was coined by -- I think it was Google -- Google AI Labs.

JASON: Okay.

ADAM: Which is quite funny because the acronym for that is FAIL. Sorry, Facebook -- Facebook AI Labs. And yeah. Anyway, it's basically a way of improving the generation of text -- the generation of something -- based on the retrieval of information from a data source. This could be an API. It could be the world's leading graph database. [Horns!] Free T-shirt. It's basically taking the information from a database and putting it into the prompt. It's called prompt stuffing. The easiest way to do it is to stuff things into a prompt.

JASON: Okay.

ADAM: So, we basically changed the -- the prompts and the way that we want the user -- the LLM to act.

JASON: Okay.

ADAM: And then we -- and then we go from there. The way we do that is through -- or the easiest way, I should say, is through vector embeddings.

JASON: Okay. How would one go about doing that in this case?

ADAM: So, every service that offers an LLM will have some sort of vector embedding model.

JASON: Okay.

ADAM: And what you do is you take text and then you convert it into numbers. So, in true Blue Peter style -- it was a show from the UK; they make things through crafts -- here is one I made earlier. Inside the modules folder and then inside the ingest folder, I've got some scripts in there. So, if you go into -- yeah, into the utils.

JASON: Okay.

ADAM: So, basically, there's some scripts in there that scrape your website. Sorry, Jason. Scrape your website. Get the information from the HTML using cheerio and then convert into an object format.

JASON: Okay.

ADAM: What happens is when we have an individual episode, it will then go and get the text. So, I think -- is this... I think it's transcript -- const transcript equals.

JASON: Oh, yeah. You go and get the transcript.

ADAM: Yep.

JASON: With all my very helpful naming in my markup.

ADAM: Yeah. Definitely. It was a challenge. Especially with the -- I can't remember what it was. The title something. It was like a weird -- I guess it's a generated class.

JASON: Oh, yeah. Yeah. So, the way that Astro works is it scopes styles by appending a hash. And so, that is so that you can, you know -- you can basically target like an h1, but it's scoped to the component that uses it without bleeding out into the global document. But it does make it a little tough for things like web scraping.

ADAM: Yeah. Yeah. Yeah. Definitely. But I mean, we got there in the end. So, yeah, basically what you do is you take text and you then want to turn it into an embedding, which means that you can compare it to other things in the way that computers like to compare things, right? So, this is where we come on to the topic of chunking as well. Because this is an important thing. When we want the LLM to answer something based on context, or based on the information that we give it -- each of the LLMs has its own token limit. I think it's something like 7,000 for OpenAI. If you wanted to send all the information from a transcript through in a prompt, it wouldn't --

JASON: It won't fit.

ADAM: It won't work because there's too much information. Plus the more information that you put into a prompt, the more likely it is that the LLM will get confused. And I need to turn off this thumb thing. Because I think that's what made it crash before. That's what made it crash.

JASON: As a little aside, if you go to your menu bar, there's like a green camera. And if you pull that down, that's how you turn the reactions off when you're on screen. Oh, no. Did we -- we just lost Adam all the way. Hey! There you are!

ADAM: As much as I want.

JASON: Perfect. Excellent.

ADAM: Where was I? Yeah. So, you have to basically chunk the text into sizes that are big enough to allow the LLM to answer the question or perform the task you want it to while being small enough that it doesn't blow out the token limit and doesn't confuse the LLM. And also remember as well, the more tokens you spend --

JASON: The more expensive.

ADAM: The more money it costs.
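(One common way to do the chunking Adam describes is LangChain's text splitters. The sizes below are starting points to experiment with, not values from the course, and `transcript` stands in for the scraped episode text.)

```ts
import { RecursiveCharacterTextSplitter } from 'langchain/text_splitter';

// Split a transcript into overlapping chunks before embedding each one.
const splitter = new RecursiveCharacterTextSplitter({
  chunkSize: 1000,   // big enough to carry an idea...
  chunkOverlap: 200, // ...with overlap so thoughts aren't cut mid-sentence
});

const chunks = await splitter.splitText(transcript); // string[] of chunk texts
```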

JASON: So, question here. When I'm doing something like vector embeddings, I'm converting my text -- in this case, you're going through all the episodes on the Learn with Jason site and grabbing the transcripts. So, that's megabytes upon megabytes of information. You're not sending all of that with every prompt. No way that could fit. So practically, what's happening with the prompt? You're not sending the stuff with it. But the vector embeddings are there so that data can be included in every response without having to send all those tokens?

ADAM: Yeah. So, basically what will happen is -- in LangChain, the term for this is a vector store. And then a retriever. So, we're retrieving information. But what the LangChain library will do is -- take a step back, right? So, we first take the content and then we embed it. We create this embedding, and we can decide on chunk sizes and experiment. So we have these lists of floats in the database. When the user asks a question, there will be some -- I don't know -- there'll be some like pre-processing to take out the stop words and things like that, and basically give you the intent. Like if you're asking: what's your opinion on Qwik? -- whatever the question was before -- there will be some pre-processing to boil that down to "opinion of Qwik." The actual intent. So, I guess I don't know if it's deterministic or if it's just strictly removing stop words.

JASON: Yeah.

ADAM: But then it will take that phrase, convert that to an embedding and then use the embeddings in the database to compare it to that to find the closest things.

JASON: Okay.

ADAM: So, in terms of the actual chunk sizes and what you embed: different chunk sizes respond better or worse to different lengths of questions. So, there's no right or wrong answer. It's all trial and error, basically. But yeah. You basically have a set of numbers that you compare to another set of numbers. That gives you, probabilistically, something close to the question that you asked. And then that gets put into the prompt as something to help the LLM to answer the question.
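(Conceptually, each question triggers something like the following -- these are real method names from LangChain's vector store interface, though the `store` and `embeddings` objects get wired up later in the episode.)

```ts
// Turn the user's question into a vector...
const questionEmbedding = await embeddings.embedQuery('opinion on Qwik');
// ...an array of ~1,536 floats for OpenAI's embedding models

// ...then find the chunks whose embeddings are closest to it
const similar = await store.similaritySearch('opinion on Qwik', 4);
// → the k nearest chunks, ready to be stuffed into the prompt as context
```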

JASON: Okay. I -- I kind of think I understand what you're saying. But let's -- let's implement it and that might help clarify. So, we -- you have scraped the Learn with Jason site. It -- and I assume because you've got this written that you've already like created a vector embedding of the site somewhere?

ADAM: Yep. Yes.

JASON: That's part of the API keys that I have?

ADAM: Yeah. It's the same API key to create the embeddings and to use that service. The reason that I did that before coming on the show is that this does take a long time. And you hit rate limits as well. Especially with OpenAI. I'm not sure exactly what they are. But I think I probably hit the rate limit after about 50 episodes. And I had to keep -- because it's 300-something episodes you've done?

JASON: Closing on 400 now.

ADAM: Okay. Wow. Yeah. So, yeah. I had to kind of do it in stages.

JASON: Got it, got it.

ADAM: But yeah, it's a manual process. One thing I've still not got straight in my head is if you -- if you realized that the -- that there was something wrong in the captions or something like that, you update the text. You have to update the embeddings as well. Which means you have to kind of go --

JASON: Oh, you need like a -- you need some kind of like -- this is where I want to learn something like Inngest. Because there's a sort of like sequential thing, if this, then that, but it's a programming-driven thing. Okay. So, we have all the raw ingredients. So, prior to the show, you scraped the Learn with Jason website. You turned that into vector embeddings that are OpenAI-friendly and you stored those in Neo4j?

ADAM: Yep. Yep, that's right.

JASON: I see a lot of people talking about vector databases. And -- is a vector database actually different? Or is it just a database where you have put the numbers from the vector embeddings?

ADAM: So, it's a way of storing those vector embeddings that maybe wasn't possible at the time those companies came into existence. The parallel that I draw with it is full-text search. Which has been around for years. You've got Apache Solr and Lucene, which is used in Neo4j. Then you have this company called Elastic and Elasticsearch. And it's full-text search plus services and features and things that go around that. And that's sort of the way that I see vector databases and kind of vector database companies being. And if somebody wants to correct me on that, I'm happy to be corrected. But the one example that I have been kind of playing around with over the weekend was Weaviate. It's basically a vector database with an API layer on the front. Instead of doing the embeddings yourself, I think they do it for you. And then they've also got GraphQL resolvers for generating content. You can say: from this information that I've requested, run this prompt against an LLM which has been pre-defined. So, it's --

JASON: Oh, okay.

ADAM: -- adding the extra information on top that have.

JASON: So, the kind of thing we're doing here, you would configure right on the database itself.

ADAM: Yes. From a Neo4j point of view, we're looking at the benefits of the data store itself for answering certain questions and doing certain things. I'm sure we'll come on to it. But there are certain things that vectors don't perform well at -- where, you know, the model needs a little bit more help. And there are things that it's brilliant at. But --

JASON: Okay.

ADAM: Yeah. It's --

JASON: So, in the interest of making sure that we get to a working final product here, we've got about 15-ish minutes left. I want to actually use this now. So, you've got vector embeddings in Neo4j. We've got the LangChain chain set up. We've got our prompt going into OpenAI, we're returning that to the browser. How do we get the vector embeddings into this chain?

ADAM: All right. So, the first thing we do is we create a vector store. So, if you do that under the LLM. So, const store equals -- oh, yeah. vectorStore is fine. And then if you do -- let me double check -- you're looking for an instance of the Neo4jVectorStore.

JASON: Okay. Do I create that as like a new?

ADAM: So, it is... just checking my notes. So, it's await Neo4jVectorStore.

JASON: Await.

ADAM: Yeah. And then .fromExistingIndex.

JASON: Okay.

ADAM: So, I've created the index. So, there are ways in LangChain to create things on the fly and you can just pass in the documents -- even create, like, an in-memory vector database. But then you also have to do that every time you start it.

JASON: Sure.

ADAM: But this is the way of using a vector index that already exists in Neo4j. The first thing that we need is an embedding model. I've used the OpenAI embeddings for this. If, above that, you create a new variable called "embeddings," then that will be a new OpenAIEmbeddings. And it should be the same config as the LLM. So, the OpenAI API key.

JASON: Okay.

ADAM: Cool. So, that's the first argument of the store.

JASON: Okay.

ADAM: And then the second is the object with the configuration. So, you need a URL which is the import.meta.env.neo4j_URI.

ADAM: Yeah, I think prefix it with secret as well.

JASON: Great question. Let me double check here. Yes, yes.

ADAM: Yeah.

JASON: Okay.

ADAM: Cool. And then just cast it as a string because it's not typed -- like, "as string."

JASON: Okay.

ADAM: And if you copy that line down twice, use it for the username and password.

JASON: Okay. Username and password. And then this one was username and password. Okay.

ADAM: If you stop your stream -- oh, there it is. Okay. Cool. And then there's a few extra things that we put in this. We need to put in the index name.

JASON: Index name.

ADAM: Yep. That will be episode-parts, all lower case.

JASON: Oh, like a string.

ADAM: As a string.

JASON: Episode dash parts?

ADAM: Yeah.

JASON: Okay.

ADAM: And then the -- so, do we need it? No, we don't. No, that's fine. That should be fine. Sorry. Cool. So, that's our store.
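(The store setup as described, consolidated into one sketch. The @langchain/community import path is from later LangChain releases -- older ones used langchain/vectorstores/neo4j_vector -- and the SECRET_* env names follow the convention mentioned above rather than being verbatim from the repo.)

```ts
import { OpenAIEmbeddings } from '@langchain/openai';
import { Neo4jVectorStore } from '@langchain/community/vectorstores/neo4j_vector';

// Same OpenAI key as the LLM; this model turns text into vectors
const embeddings = new OpenAIEmbeddings({
  openAIApiKey: import.meta.env.SECRET_OPENAI_API_KEY as string,
});

// Connect to the vector index that was created ahead of the show
const store = await Neo4jVectorStore.fromExistingIndex(embeddings, {
  url: import.meta.env.SECRET_NEO4J_URI as string,
  username: import.meta.env.SECRET_NEO4J_USERNAME as string,
  password: import.meta.env.SECRET_NEO4J_PASSWORD as string,
  indexName: 'episode-parts',
});
```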

JASON: Okay.

ADAM: And then we need a retriever. So, a retriever is like a layer on top, which is a way of interacting with the store through LangChain components. If you do const retriever equals store.asRetriever.

JASON: Like that?

ADAM: Cool. Yeah. And this is where the magic starts to happen. So, inside the chain where you've got the question.

JASON: Okay.

ADAM: So, remember that passthrough -- that takes the value that's been passed through to the chain and then assigns it to question. If you add a new line there --

JASON: Inside the object.

ADAM: Yep, yep.

JASON: Okay.

ADAM: And call that context.

JASON: Context.

ADAM: And then if you do retriever. And then dot pipe.

JASON: Wait, something is wrong. Expected one arguments. Okay.

ADAM: So, in there, if you just do a function that takes one argument and then converts it to -- does like JSON.stringify.

JASON: Okay. And so, it would just be like value or whatever?

ADAM: Yep. That's fine.

JASON: Okay.

ADAM: And then what that will do is a little bit of magic. It will take the input and pipe that through to the retriever, which then creates the embedding. Goes away to OpenAI, creates the embedding. And looks in the vector store for similar things and then returns them back. I think it's like --

JASON: Okay --

ADAM: I think it's five or ten results by default.

JASON: Okay. Is context like a magic thing, or do I reference that in the prompt up here?

ADAM: Yes, you need to reference in the prompt.

JASON: Context.

ADAM: Yep. Yep. So, what you can do as well is you can, like, provide instructions to say: only answer the question using this context. Or using the provided context. And then it won't fall back to the pre-trained knowledge. You're basically using it as a natural language generator. I don't know whether you have to move it above, but I've found that it kind of helps.

JASON: Okay.

ADAM: Yep. And that's pretty much what we need to do. So, if you go back to the chain -- one thing I wanted to kind of call out was: see that .pipe? And then the stringify bit. When you pass something into a prompt, that will need to be a string. So, what this is doing is converting that to a string. And it turns out that -- yeah. Especially like GPT-3.5, GPT-4, they're good at understanding JSON. You can just put it in as it is.

JASON: Got it. This is effectively a JavaScript object that we are turning into a string -- we're just dumping in a stringified JSON object and it's like, this is fine.

ADAM: Yep.

JASON: Okay.

ADAM: Yep.

JASON: So, is that it? Like are we ready to give this a shot?

ADAM: Yeah, let's give it a go. I'll cross my fingers again and hope that when I do, my camera doesn't freeze. And I feel like I've answered this one.

JASON: Seeing -- okay. I see the appeal of using a framework that offers the benefits of JavaScript without the page weight and duplication. Okay, that makes sense. What sets Qwik apart is the focus on improving the user experience without requiring a lot of extra effort from the developer, and as Misko mentioned -- hey, name drops -- it is about lowering the barrier for developers. If you are looking for ease of use, I would definitely recommend giving Qwik a try. That's a pretty good answer. I want to try something and see what happens. I'm going to say... cite your sources. Because that should theoretically also link to the episodes?

ADAM: Yeah.

JASON: What do you think about Astro? Hi there, Fred! Thanks for asking about Astro. This is great because it immediately assumed that this is a plant -- like this is Fred asking the LLM about it. Let's see... from what I've seen so far, it seems like Astro is a really unique take on web development and frontend architecture. I like how it gives the best of both worlds, creating fast websites. It's focused on open source and sustainability. That's an interesting take. But can't wait to see what you have in store for Astro. Okay. This is just shy of -- it makes sense, but it's a little bit of word salad. It's kind of like a politician's answer.

ADAM: Yep. So, this is where -- so, if you ran that question like four or five times, you would get a different answer every time. And that's just the way it works. Like there's a degree of randomness. So, there's a temperature, which -- I can't remember what the default value is. But basically that's the amount of randomness that gets injected in. So, the first thing --
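
Temperature is set when the LLM is constructed -- roughly like this, where 0 is the most deterministic setting and the default varies by library version:

```ts
import { ChatOpenAI } from "@langchain/openai";

// Lower temperature = less randomness injected into sampling.
const llm = new ChatOpenAI({
  openAIApiKey: import.meta.env.SECRET_OPENAI_API_KEY as string,
  temperature: 0,
});
```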

JASON: It's funny, this is definitely Fred.

ADAM: Or maybe did you have a Fred in one of the --

JASON: I've had Fred on the show a couple times, yeah.

ADAM: Yeah.

JASON: It's funny that it's taking it that way.

ADAM: So, that's almost like a jailbreak from your database. So, the data in the database is -- and again, this is like the model getting confused because -- because of the information sent through. It thinks that it's -- it's Fred. I'm interested to see how this one...

JASON: Let's see, as the host of Learn with Jason -- appeals to authority -- I have seen firsthand the power and potential of graph databases. It knows it's a graph database, that's promising. Neo4j is great for solving problems. There are limitations and tradeoffs to consider. For example, the write performance may not be as high as other databases for high-throughput scenarios. But it can be used in combination with other databases for a powerful solution. Overall, I'm excited about what Neo4j has to offer. And happy coding. I have been saying happy building a lot.

ADAM: Okay. So, that's one thing you can add in as part of the -- part of the prompt, that you end with one of these phrases or something like that. So, one thing that I don't particularly like is that it's not really citing the sources.

JASON: Right.

ADAM: But so if you head into the -- back to the -- the repository. And if you go into modules and then retriever. Yeah. The index. All right. So, this is my experiment. So, this won't work if you're running LLaMA or a Mistral model locally. Scroll down a little bit further. You can see the retrievalQuery. What that's doing is controlling the response. That's a little bit of Cypher to say -- this is a parent-and-child kind of matching. You match on the smaller chunk and then you provide information from the bigger one. So, if you just take that whole retrievalQuery property and then paste it into the other one, you should get -- you should get references as well. Because it's got the title, the description, and the date. But before you leave this file, if you scroll down a little bit further, this is the prompt that I -- that I wrote for you. So, it's --
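
A sketch of how a parent-and-child retrievalQuery plugs into the store config. The Cypher here is illustrative, not the repo's exact query -- the Episode label, HAS_PART relationship, and property names are assumptions:

```ts
const store = await Neo4jVectorStore.fromExistingIndex(embeddings, {
  url: import.meta.env.SECRET_NEO4J_URI as string,
  username: import.meta.env.SECRET_NEO4J_USERNAME as string,
  password: import.meta.env.SECRET_NEO4J_PASSWORD as string,
  indexName: "episode-parts",
  // `node` is the chunk matched by the index, `score` its similarity.
  // Match on the small chunk, then return the parent episode's title,
  // URL, and date as metadata so the LLM can cite its sources.
  retrievalQuery: `
    MATCH (node)<-[:HAS_PART]-(episode:Episode)
    RETURN node.text AS text, score,
           { title: episode.title, url: episode.url, date: episode.date } AS metadata
  `,
});
```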

JASON: Okay.

ADAM: -- giving information about you, what you have done before. I asked ChatGPT to describe you and then copied and pasted the information in. And then it's basically a little bit of fine-tuning there on the prompt -- or what's called prompt engineering -- to actually work out the extra things to kind of add personality and also to make things more consistent.

JASON: Do not attempt to answer any questions that do not relate to web development. That seems like a good idea. Remember to use episode URLs, titles, and descriptions to provide -- okay. So, this is good. So, then if I -- if I head back to this page, and you said, here. And then I also want to include some of what you said here, because I think this is really good. Actually, why don't we just throw the whole thing in here and do it the way you did it and see what happens. So, here is -- so, my little bitty prompt up here got us decent results. Let's see what happens when we give it a well-thought-through prompt.

ADAM: Yeah. And this is also like a -- a bit of a trial-and-error thing. And like, some of this information might be extraneous. Some might be needed. It's about experimenting and changing the prompts until you get what you want. I'm confused as to why it's returning -- like it's doing it in like a -- like monospace. I don't know if the --

JASON: Yeah, that is interesting. I think it might be coming back as markdown and having too many spaces at the beginning for whatever reason. Because it only does it sometimes. Which is really interesting. But yeah. So, as the host of Learn with Jason and an experienced developer, I'm delighted by Astro's potential to -- from my conversations with Fred, I can tell he has a deep understanding of the needs and challenges of the developer community. I've seen firsthand how Astro brings people together in a joyful, collaborative environment, and I'm excited to see where it goes. Check out my interview with Fred on Learn with Jason here, with a link. That's pretty cool. Tell me how I can use Neo4j. Start out with the course, app development with Node.js, on GraphAcademy. Oh, interesting. This course covers how to get started with Neo4j in a Node environment. You'll receive a badge upon completion, provide some incentives. You can sign up for the Neo4j sandbox. And nice. This is like -- this gets way into Neo4j itself and doesn't link to, like, the episode you were on where you gave us this information. Let's see... tell me -- let's test it. What is your favorite cheese? My favorite cheese is the one that the chefs love more than anything in the world. As I mentioned in the episode -- oh, my god, I actually provided information about this. This is amazing. Oh, my god. This is incredible. Oh, this is really good. This is really good, actually. My favorite cheese is the one that is made with love and passion. That's probably something that came out of my mouth in the past. Let's see... how many boops is the proper number of boops? Subjective and can vary based on personal preference. In the context of the lesson, we're using a variable called boop count. This can be changed to any number desired. It is only for demonstration purposes and does not have a specific number of boops that is deemed "proper." I like that, through providing the embeddings, we've got a bot that absolutely will not give you a straight answer. It is "it depends" all the way down. Which is probably the most accurate version of me that you could get. So, we get into the actual episodes. All right. This is very cool. Like, I think this is -- it's good to see. And I think a way that I would probably use this in practice is: there are a lot of instances where I know I had a conversation with somebody, and I can sort of remember -- like, I don't remember what episode it was, or exactly, you know, what we were talking about. I just know that they said something super-smart. Like, I quote Sunil all the time on this quote: silver bullets only work on werewolf-shaped problems. I have had Sunil on. I would love to say, where was the episode where somebody said something like this, and get quotes back. And being able to pass in those -- like, have the vector embeddings so that we get a decent chance of actually finding the information that's not in the title or description, but something that somebody said in the middle of the show. Or a general -- like, what is Jason's general take on software architecture? That would be really interesting to see. And to me, that's like -- I feel like that's a great -- like, there's search. There's filtering. And then there's conceptual search, where like -- there was a vibe here. Can you find anything that gets into this general subject matter and give me a list? And I think that's really, really cool.

ADAM: Yep. Yeah. So, there's this -- there's a good kind of comparison of, like, Google versus complexity -- Perplexity, sorry. So, Perplexity is like an AI search engine. And when you search with Google, you search full text, which is, you know -- if you know that exact quote, or you know a certain part of that quote, or if you know part of the lyrics of a song, you can find the song based on the chorus. Because those words appear within a certain distance of each other. But if you can't remember the quote itself, and the quote is something like... you know, give a man a fish and he'll eat --

JASON: Right.

ADAM: Give a man a smashburger.

JASON: Exactly what you just did. Can you Google that? Who knows.

ADAM: Yeah. But it's the semantic search and the semantics behind the text which is important. And that's what the embedding model can -- can understand. So, even if you're talking about a certain thing, or you're talking around something and not using those exact terms, it should be able to find it for you.
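
That semantic lookup is also available directly on the store, without the full chain -- a quick sketch:

```ts
// Embeds the query and returns the five nearest chunks by vector
// similarity -- matching on meaning, not on exact words.
const results = await store.similaritySearch(
  "give a man a fish and he'll eat for a day",
  5
);
```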

JASON: Awesome. All right. Well, I have more questions, but unfortunately we're out of time. So, I'm going to instead just ask: for somebody who wants to go further on this, where -- like, where should they go next? I'm going to drop a link to this Neo4j chatbot course. Because this -- so, what we did today is sort of an expansion on what this course will teach, right?

ADAM: Yep.

JASON: So, anybody who wants to build effectively what we did today, if you go through the course, you will get a result similar to what we did. With obviously whatever embeddings you want.

ADAM: You'll get better ones. There's an extra module down at the bottom about Cypher generation. I guess one thing that's important to leave people with is: embeddings are really useful for certain things. And the kind of questions you're asking are perfect for that. And actually, an example that I've got is -- so, I was playing around with embedding the Syntax FM episodes -- sorry to mention a competitor.

JASON: Definitely not a competitor. Friends.

ADAM: Friends, okay. Cool. In that case, I guess this is all about friends, really. But there was a -- an episode with Taylor -- the friend.

JASON: Yeah.

ADAM: And she was talking about DevRel and made a joke about limousines -- "and I drive it to work." I asked the chatbot: was she joking when she said that? What did she say about limousines, and was it a joke? It worked out the quote and where it was. And based on the window of text on either side, it said it's probably sarcastic -- probably a joke.

JASON: So cool.

ADAM: It's brilliant for stuff like that -- the qualitative stuff, it's brilliant for. But if I asked -- or if you asked the chatbot now how many episodes mention Neo4j, it wouldn't be able to answer it, because it's doing a similarity search on similar things and it's basically finding the next-best set of results -- like the nearest neighbors. So, even if there's something that's not necessarily that similar to the query, it will return it anyway.

JASON: Gotcha.

ADAM: And it's up to the LLM to actually answer the question. Whereas if you can write, you know, text-to-Cypher for Neo4j, text-to-SQL -- all of these libraries. If you run something on a data source and then, say -- one of the lessons is like, you know, give me the average rating for a movie or something like that. If you want a quantitative result, like a definitive result, then you have to get that through other means. That's talking to an API, having some sort of logic. Developers aren't going to disappear. Someone has to write that logic for the LLM to use. To answer your actual question: if you take the course, and the Neo4j & LLM Fundamentals course, the one before it -- which is kind of like a precursor that teaches you everything that you need to know -- that will teach you all about semantic search, which is pretty much what we have done today. Cypher generation -- so, using an LLM to generate a Cypher statement that will retrieve the information and put that into the prompt to answer the question. And then the next step beyond that is agents. This is where LangChain gets really exciting. You can define a tool for a certain context. For qualitative stuff -- like comparing movies by plot, recommend me movies about ghosts -- you can use one tool, which is like the vector retriever. Or you can define another tool for a quantitative thing to say, go into the database and actually find this stuff. Or, like, go to an API to find out what the weather is in San Francisco and stuff like that. So, that's like the -- the next-level stuff. And then the level --

JASON: Got it.

ADAM: -- the level on top of that is, there's a tool in LangChain that's nothing to do with graph databases. But what that allows you to do is define the set of tools and the state, and the LLM reasons with itself. It builds up like a memory and a set of messages. You can say, like, solve this problem for me with these tools, and it will go away, do something, come back. And then it will look again and it will do something else, and it will keep going back and forth. So, like, you're in danger of spending a lot of money on OpenAI bills.
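
A rough sketch of what tools plus an agent loop can look like in LangChain.js. The tool names and the countMentions helper are hypothetical, and retriever and llm come from the earlier sketches:

```ts
import { DynamicTool } from "@langchain/core/tools";
import { ChatPromptTemplate, MessagesPlaceholder } from "@langchain/core/prompts";
import { AgentExecutor, createOpenAIFunctionsAgent } from "langchain/agents";

// Hypothetical helper a developer still has to write: a Cypher query
// or API call that returns a real count.
declare function countMentions(term: string): Promise<number>;

// Qualitative tool: semantic search over the episode chunks.
const semanticSearch = new DynamicTool({
  name: "episode_semantic_search",
  description: "Find what was said in episodes about a topic.",
  func: async (input) => JSON.stringify(await retriever.invoke(input)),
});

// Quantitative tool: delegates to the deterministic logic above.
const countEpisodes = new DynamicTool({
  name: "count_episodes_mentioning",
  description: "Count how many episodes mention a given term.",
  func: async (term) => String(await countMentions(term)),
});

const tools = [semanticSearch, countEpisodes];

const agentPrompt = ChatPromptTemplate.fromMessages([
  ["system", "You answer questions about Learn with Jason episodes."],
  ["human", "{input}"],
  new MessagesPlaceholder("agent_scratchpad"),
]);

const agent = await createOpenAIFunctionsAgent({ llm, tools, prompt: agentPrompt });

// The executor loops: the LLM picks a tool, reads the result, and keeps
// going until it can answer -- each round trip is another OpenAI call.
const executor = new AgentExecutor({ agent, tools });
const answer = await executor.invoke({ input: "How many episodes mention Neo4j?" });
```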

JASON: Sure.

ADAM: But that's reasoning that the LLM by itself can't do.

JASON: Got it.

ADAM: Which I think is amazing. It's really exciting.

JASON: That's very cool. And I would love to keep going all day. Unfortunately, we're out of time. So, I'm going to do one more shoutout: this episode, like every episode, has been live captioned. We've had Amanda here from White Coat Captioning all day. Thank you so much, Amanda. That is made possible through the support of our sponsors, Nx, Netlify, and Vets Who Code. I haven't put the new shows up yet, but they're going to be fun -- up this week. Get on the Discord, and subscribe here if you're on Twitch, YouTube, et cetera. Subscribe, share this with your friends, because that helps me continue making more of these shows and it really does mean a lot. Adam, thank you so much for spending time with us. And we will see you all next time.