Sunday, June 23, 2024

htmx 2.0

htmx 2.0.0 has been released!

It ends support for Internet Explorer and tightens up some defaults, but does not change most of the core functionality or the core API of the library.

All extensions have been moved out of the core repository to their own repo and website: They are now all versioned individually and can be developed outside of the normal (slow) htmx release cadence.

bigskysoftware/htmx: high power tools for HTML @GitHub

htmx allows you to access AJAX, CSS Transitions, WebSockets and Server Sent Events directly in HTML, using attributes, so you can build modern user interfaces with the simplicity and power of hypertext
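Concretely, that attribute-driven style looks something like the snippet below. `hx-get`, `hx-target`, and `hx-swap` are core htmx attributes; the `/clicked` endpoint and the fragment it returns are hypothetical:

```html
<!-- Clicking the button issues an AJAX GET to /clicked and swaps
     the returned HTML fragment into the #result div - no
     JavaScript written by hand. /clicked is a made-up endpoint. -->
<script src="https://unpkg.com/htmx.org@2.0.0"></script>

<button hx-get="/clicked" hx-target="#result" hx-swap="innerHTML">
  Click me
</button>
<div id="result"></div>
```

The server just returns a fragment of HTML (not JSON), which htmx places into the target element.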

htmx is small (~14k min.gz'd), dependency-free & extendable

Raspberry Pi Zero 2 W (with Quad-core CPU, Bluetooth 4.2, BLE, onboard antenna, etc.) : Electronics

$21 (not the best price)

  • 802.11 b/g/n wireless LAN (2.4 GHz only)
  • Bluetooth 4.2 / Bluetooth Low Energy (BLE)
  • Small form factor, suitable for various DIY projects
  • Expansion – Unpopulated 40-pin HAT-compatible I/O header
  • Footprint-compatible with earlier members of the Raspberry Pi Zero family
  • Includes 512MB LPDDR2 SDRAM

Raspberry Pi Zero is half the size of a Model A+, with twice the utility.
A tiny Raspberry Pi that’s affordable enough for any project!

AI: pgvector + RAG

The missing pieces to your AI app (pgvector + RAG in prod) - YouTube

with Supabase

A step-by-step guide to going from pgvector to prod using Supabase. We'll discuss best practices across the board so that you can be confident deploying your application in the real world. Learn more about pgvector:

Workshop GitHub repo: It's easy to build an AI proof-of-concept (POC), but how do you turn that into a real production-ready application? What are the best practices when implementing:

  • Retrieval augmented generation (RAG)
  • Authorization (row level security)
  • Embedding generation (open source models)
  • pgvector indexes
  • Similarity calculations
  • REST APIs
  • File storage

Large language model (LLM) - Wikipedia

What Is Retrieval Augmented Generation (RAG)? | Google Cloud

RAG operates in a few main steps that help enhance generative AI outputs:

  • Retrieval and Pre-processing: RAGs leverage powerful search algorithms to query external data, such as web pages, knowledge bases, and databases. Once retrieved, the relevant information undergoes pre-processing, including tokenization, stemming, and removal of stop words.
  • Generation: The pre-processed retrieved information is then seamlessly incorporated into the pre-trained LLM. This integration enhances the LLM's context, providing it with a more comprehensive understanding of the topic. This augmented context enables the LLM to generate more precise, informative, and engaging responses. 
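The retrieval-and-pre-processing step above can be sketched in a few lines of Python. The stop-word list and the suffix-stripping "stemmer" here are toy stand-ins for what a real NLP library (e.g. NLTK or spaCy) would provide:

```python
# Toy pre-processing for retrieved text: tokenize, drop stop
# words, and apply a crude suffix-stripping "stemmer".
import re

STOP_WORDS = {"the", "a", "an", "of", "and", "is", "are", "to", "in"}

def tokenize(text: str) -> list[str]:
    # Lowercase, then split on runs of non-alphanumeric characters.
    return [t for t in re.split(r"[^a-z0-9]+", text.lower()) if t]

def stem(token: str) -> str:
    # Crude stemming: strip a few common English suffixes.
    for suffix in ("ing", "ed", "es", "s"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[: -len(suffix)]
    return token

def preprocess(text: str) -> list[str]:
    return [stem(t) for t in tokenize(text) if t not in STOP_WORDS]

print(preprocess("The databases are indexing the stored vectors"))
```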

RAG operates by first retrieving relevant information from a database using a query generated by the LLM. This retrieved information is then integrated into the LLM's query input, enabling it to generate more accurate and contextually relevant text. RAG leverages vector databases, which store data in a way that facilitates efficient search and retrieval.
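The two-step flow in that paragraph — retrieve by vector similarity, then augment the LLM's input — reduces to very little code. A minimal sketch, with hand-made three-dimensional embeddings standing in for a real embedding model and vector database:

```python
# Minimal RAG retrieval sketch: find the stored document whose
# embedding is closest (by cosine similarity) to the query
# embedding, then splice it into the prompt sent to the LLM.
# The embeddings are tiny hand-made vectors; a real system would
# use an embedding model and a vector store such as pgvector.
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

documents = {
    "htmx ships as a single dependency-free JavaScript file.": [0.9, 0.1, 0.0],
    "pgvector adds vector similarity search to Postgres.":     [0.1, 0.9, 0.2],
}

def retrieve(query_embedding: list[float]) -> str:
    # Exact nearest-neighbor search over the toy corpus.
    return max(documents, key=lambda d: cosine_similarity(documents[d], query_embedding))

query_embedding = [0.0, 1.0, 0.1]  # pretend this came from an embedding model
context = retrieve(query_embedding)
prompt = f"Context: {context}\n\nQuestion: How do I search vectors in Postgres?"
print(prompt)
```

The retrieved document lands in the prompt as extra context, which is all "augmented generation" means at this level.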

What is RAG? - Retrieval-Augmented Generation Explained - AWS

pgvector/pgvector: Open-source vector similarity search for Postgres @GitHub

Open-source vector similarity search for Postgres

Store your vectors with the rest of your data. Supports:

  • exact and approximate nearest neighbor search
  • single-precision, half-precision, binary, and sparse vectors
  • L2 distance, inner product, cosine distance, L1 distance, Hamming distance, and Jaccard distance
  • any language with a Postgres client

Plus ACID compliance, point-in-time recovery, JOINs, and all of the other great features of Postgres
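Each of the distance measures listed above is essentially a one-liner. A plain-Python reference sketch (pgvector implements these natively in Postgres and exposes them through operators, e.g. `<->` for L2 distance):

```python
# The six distance/similarity measures pgvector supports, written
# out in plain Python for reference.
import math

def l2(a, b):              # Euclidean distance (pgvector's <-> operator)
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def inner_product(a, b):   # pgvector orders by negative inner product (<#>)
    return sum(x * y for x, y in zip(a, b))

def cosine_distance(a, b): # 1 - cosine similarity (<=> operator)
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1 - dot / norms

def l1(a, b):              # Manhattan / taxicab distance (<+> operator)
    return sum(abs(x - y) for x, y in zip(a, b))

def hamming(a, b):         # binary vectors: count of differing bits (<~>)
    return sum(x != y for x, y in zip(a, b))

def jaccard(a, b):         # binary vectors: 1 - |intersection| / |union| (<%>)
    both = sum(x and y for x, y in zip(a, b))
    either = sum(x or y for x, y in zip(a, b))
    return 1 - both / either

print(l2([0, 0], [3, 4]))  # 5.0
```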