Tuesday, December 26, 2023

SpaceX Falcon 9 "sinks"

 Historic SpaceX Falcon 9 booster topples over and is lost at sea – Spaceflight Now

This particular booster, tail number B1058, was coming back from its record-breaking 19th mission when it took its fatal fall. The booster made a successful landing but later succumbed to “high winds and waves.”




EV: Kia Electric Truck

 The 2026 Kia Electric Truck: Everything You Need to Know

Kia's PR release noted, "in the United States, where mid-sized SUVs and pickups are popular, electric versions of these models will be produced locally from 2024."

Building the truck in America would allow Kia to avoid the chicken tax and would make it eligible for the federal EV tax credit.






"Chroma": Vector Database for ML/AI

podcast: The Cloudcast: Intro to Vector Databases


chroma-core/chroma: the AI-native open-source embedding database @GitHub
Apache license; Go + Python + Jupyter + TypeScript

pip install chromadb   # Python client
# For JavaScript: npm install chromadb
# For client-server mode: chroma run --path /chroma_db_path
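
A minimal sketch of talking to that client-server mode from Python, assuming a local server started with the chroma run command above and left on its default port (8000):

import chromadb

# Connect to a Chroma server started with: chroma run --path /chroma_db_path
# (use chromadb.Client() instead for a purely in-process instance)
client = chromadb.HttpClient(host="localhost", port=8000)
print(client.heartbeat())  # returns a timestamp if the server is reachable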

Embeddings?

What are embeddings?

  • Read the guide from OpenAI
  • Literal: Embedding something turns it from image/text/audio into a list of numbers. 🖼️ or 📄 => [1.2, 2.1, ....]. This process makes documents "understandable" to a machine learning model.
  • By analogy: An embedding represents the essence of a document. This enables documents and queries with the same essence to be "near" each other and therefore easy to find.
  • Technical: An embedding is the latent-space position of a document at a layer of a deep neural network. For models trained specifically to embed data, this is the last layer.
  • A small example: if you search your photos for "famous bridge in San Francisco", embedding that query and comparing it to the embeddings of your photos and their metadata should return photos of the Golden Gate Bridge.
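
To make the "list of numbers" literal, here is a small sketch using the sentence-transformers library with the compact all-MiniLM-L6-v2 model (the same kind of Sentence Transformers model Chroma uses by default); the model choice and printed values are illustrative:

from sentence_transformers import SentenceTransformer

# Embed a query: one string in, one fixed-length list of numbers out.
model = SentenceTransformer("all-MiniLM-L6-v2")
vector = model.encode("famous bridge in San Francisco")
print(vector.shape)  # (384,) for this model
print(vector[:5])    # the first few numbers of the embedding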

Embeddings databases (also known as vector databases) store embeddings and allow you to search by nearest neighbors rather than by substrings like a traditional database. By default, Chroma uses Sentence Transformers to embed for you but you can also use OpenAI embeddings, Cohere (multilingual) embeddings, or your own.
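
A minimal in-process sketch of that nearest-neighbor search with Chroma's Python client; the collection name and documents here are made up, and Chroma's default Sentence Transformers embedding does the embedding work:

import chromadb

client = chromadb.Client()  # in-process; PersistentClient/HttpClient for other modes
collection = client.create_collection(name="photos")  # hypothetical collection

# Chroma embeds these documents with its default embedding function on add.
collection.add(
    ids=["p1", "p2"],
    documents=["Golden Gate Bridge at sunset", "Falcon 9 booster on the drone ship"],
)

# The query text is embedded too, then matched to its nearest stored neighbors.
results = collection.query(query_texts=["famous bridge in San Francisco"], n_results=1)
print(results["documents"])  # expected: [['Golden Gate Bridge at sunset']]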


tryChroma.com (+$18M investment)

"the AI-native open-source embedding database"