Talk to Django
Learn how to use LLMs with your Django ORM data
- Neon's built-in support for pgvector, which adds an optimized vector datatype to your database so you can run vector-based queries. These vectors are numerical representations of your text data, because machines are good at numbers (there's a Django model sketch after this list)
- Embeddings via sentence-transformers and OpenAI (I show both). Embeddings turn your text into vectors so that 👆 works
- NumPy and pgvector-python so that you can do things like cosine similarity for searching your database (e.g. find similar vectors in your database); see the embeddings and cosine similarity sketch after this list
- Jupyter for rapidly prototyping all of this against Django ORM models (Django database models)
- OpenAI for LLMs and embeddings; the LLM interprets the database results to give a natural language response back (instead of just raw SQL or a Django queryset), sketched after this list
- LlamaIndex for performing RAG and natural language text-to-SQL with semantic search (a text-to-SQL sketch follows this list)
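
Here is a minimal sketch of what the pgvector side looks like in Django, assuming the pgvector-python package is installed and the `vector` extension is enabled on your Postgres database (e.g. Neon); the `BlogPost` model, field names, and 768 dimensions are just illustrative.

```python
# Illustrative model: a text field plus a vector column for its embedding.
from django.db import models
from pgvector.django import VectorField, CosineDistance


class BlogPost(models.Model):
    title = models.CharField(max_length=200)
    content = models.TextField()
    # 768 dimensions is a common sentence-transformers size; match your embedding model
    embedding = VectorField(dimensions=768, null=True, blank=True)


def search_posts(query_embedding, limit=5):
    # Order by cosine distance to the query vector (smaller distance = more similar)
    return (
        BlogPost.objects
        .annotate(distance=CosineDistance("embedding", query_embedding))
        .order_by("distance")[:limit]
    )
```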
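
And here is roughly how embeddings and cosine similarity fit together, as a sketch using sentence-transformers and NumPy; the model name and example texts are placeholders, not necessarily what the course uses.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

docs = ["Django is a Python web framework", "Postgres stores relational data"]
query = "What web framework should I use with Python?"

doc_vectors = model.encode(docs)    # shape: (num_docs, dim)
query_vector = model.encode(query)  # shape: (dim,)


def cosine_similarity(a, b):
    # Cosine similarity: dot product of the vectors divided by the product of their magnitudes
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))


scores = [cosine_similarity(query_vector, vec) for vec in doc_vectors]
print(docs[int(np.argmax(scores))])  # the document most similar to the query
```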
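
For the natural language response step, the idea is to hand the query results to an LLM and let it answer in plain English. A minimal sketch with the openai package (v1+), assuming `OPENAI_API_KEY` is set; the model name, prompt, and rows are illustrative.

```python
from openai import OpenAI

client = OpenAI()

# Pretend these rows came out of a Django queryset / SQL query
rows = [
    {"title": "Talk to Django", "views": 1200},
    {"title": "Semantic Search Basics", "views": 800},
]

completion = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "Answer the user's question using only the database results provided."},
        {"role": "user", "content": f"Which post is most popular? Results: {rows}"},
    ],
)
print(completion.choices[0].message.content)
```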
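
Finally, a rough sketch of LlamaIndex's text-to-SQL flow, assuming llama-index and SQLAlchemy are installed and an OpenAI key is configured; the connection string and table name are placeholders for whatever your Django models create.

```python
from sqlalchemy import create_engine
from llama_index.core import SQLDatabase
from llama_index.core.query_engine import NLSQLTableQueryEngine

# Point SQLAlchemy at the same Postgres database Django uses (placeholder URL)
engine = create_engine("postgresql://user:password@host/dbname")
sql_database = SQLDatabase(engine, include_tables=["blog_blogpost"])

query_engine = NLSQLTableQueryEngine(
    sql_database=sql_database,
    tables=["blog_blogpost"],
)

# The LLM writes the SQL, runs it, and summarizes the rows in natural language
response = query_engine.query("How many blog posts mention Django?")
print(response)
```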
Lessons
Welcome
3:29
Demo
5:33
Requirements
2:30
Virtual Environment and Jupyter
6:19
Getting Started with Embeddings and Semantic Search
6:37
Embeddings with Multiple Data Points
5:32
Embeddings with IDs
12:18
Setup Django Project
6:31
Integrate Pgvector & Django
8:00
Neon Postgres with Django and pgvector
7:35
Integrate Django and Jupyter
4:35
Semantic Search with Django and pgvector
16:19
Semantic Search with Generic Foreign Keys Across Multiple Models
9:17
Services to Generate Embeddings and Search Results
10:41
Cosine Similarity with NumPy and Vectors
6:43
Configure Postgres for LlamaIndex
13:54
Add Documents from Django Models to Llama Index
12:45
Custom Embeddings with LlamaIndex
13:37
Llama Index Semantic Search Modules
4:19
Text to SQL with Llama Index
9:27
New Model for Text to SQL
4:53
Multiple Models for Text to SQL
3:18
Customize the Llama Index SQL Query Prompts
8:15
Load a Dataset for Real Blog Articles
4:35
Talk to Django
13:17
Thank you
2:06