Models - OpenAI API
GPT models are fast, versatile, cost-efficient, and customizable.
Structured Outputs - OpenAI API
The app is a generic React Native client, driven by server-side metadata files.
Why XML metadata now, 20 years later?
Maybe because it can enforce a schema, and its explicit start/end tags are more visible, like in JSX.
Or because it looks like HTML; in fact, proper HTML is XML (XHTML).
GitHub - Instawork/hyperview: Server-driven mobile apps with React Native @GitHub (MIT license)
Hyperview is a new hypermedia format and React Native client for developing server-driven mobile apps.
Serve your app as XML: On the web, pages are rendered in a browser by fetching HTML content from a server. With Hyperview, screens are rendered in your mobile app by fetching Hyperview XML (HXML) content from a server. HXML's design reflects the UI and interaction patterns of today's mobile interfaces.
demo of AI app @ 3:11:30
Time to Rise Summit Day 1: Break Through in 2025! - YouTube
alternative: custom prompts with ChatGPT
Make Tony Robbins Your Personal AI Business Coach With ChatGPT
The $495 package includes this app, which is listed at a $149 price.
Tony Robbins AI (free) : r/TonyRobbins
Streets GL (streets.gl)
GitHub - StrandedKitty/streets-gl: 🗺 OpenStreetMap 3D renderer powered by WebGL2
Streets GL is a real-time 3D map renderer built for visualizing OpenStreetMap data with a heavy focus on eye-candy features.
good explanation and code examples:
Python Concurrency: Threads, Processes, and asyncio Explained
David Beazley - Python Concurrency From the Ground Up: LIVE! - PyCon 2015 - YouTube
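A minimal sketch of the contrast those two resources cover: for I/O-bound work, both threads and asyncio let many waits overlap, so five 0.1-second "requests" finish in roughly 0.1 seconds instead of 0.5. The sleeping tasks here are stand-ins for real network calls.

```python
import asyncio
import threading
import time

def blocking_task(results, i):
    # Simulates I/O-bound work (e.g. a network call) with a blocking sleep.
    time.sleep(0.1)
    results.append(i)

def run_with_threads(n):
    # Each thread blocks independently, so the sleeps overlap in wall time.
    results = []
    threads = [threading.Thread(target=blocking_task, args=(results, i)) for i in range(n)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sorted(results)

async def async_task(i):
    await asyncio.sleep(0.1)  # non-blocking: yields control to the event loop
    return i

def run_with_asyncio(n):
    # A single thread interleaves all tasks via the event loop.
    async def main():
        return await asyncio.gather(*(async_task(i) for i in range(n)))
    return sorted(asyncio.run(main()))

if __name__ == "__main__":
    start = time.perf_counter()
    print(run_with_threads(5))
    print(run_with_asyncio(5))
    print(f"elapsed: {time.perf_counter() - start:.2f}s")  # well under 5 * 2 * 0.1s
```

Processes (multiprocessing) are the third option covered there, and the only one that sidesteps the GIL for CPU-bound work.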
Ray Dalio | The All-In Interview - YouTube
How Countries Go Broke: Questions & Answers | LinkedIn
Ray Dalio on X: "How Countries Go Broke: Introduction & Chapter One " / X
Caddy is a powerful, extensible platform to serve your sites, services, and apps, written in Go. If you're new to Caddy, the way you serve the Web is about to change.
Most people use Caddy as a web server or proxy, but at its core, Caddy is a server of servers. With the requisite modules, it can take on the role of any long-running process!
Caddy is an extensible server platform that uses TLS by default.
Automatic HTTPS by default
The Techno-Optimist Manifesto | Andreessen Horowitz
Lies, Truth, Technology, Markets, "Machine", Intelligence, Energy, Abundance, Values, Meaning, Enemies, Future, People
Superintelligence is Upon Us | Marc Andreessen | EP 515 - YouTube, on the Jordan B Peterson podcast
#458 – Marc Andreessen: Trump, Power, Tech, AI, Immigration & Future of America | Lex Fridman Podcast
Marc Andreessen: Trump, Power, Tech, AI, Immigration & Future of America | Lex Fridman Podcast #458 - YouTube
a good overview of key terms used in AI/LLMs:
Build Generative AI Applications with Foundation Models - Amazon Bedrock - AWS
Key terminology - Amazon Bedrock
Foundation model (FM) – An AI model with a large number of parameters and trained on a massive amount of diverse data. A foundation model can generate a variety of responses for a wide range of use cases. Foundation models can generate text or image, and can also convert input into embeddings. Before you can use an Amazon Bedrock foundation model, you must request access. For more information about foundation models, see Supported foundation models in Amazon Bedrock.
Base model – A foundation model that is packaged by a provider and ready to use. Amazon Bedrock offers a variety of industry-leading foundation models from leading providers. For more information, see Supported foundation models in Amazon Bedrock.
Model inference – The process of a foundation model generating an output (response) from a given input (prompt). For more information, see Submit prompts and generate responses with model inference.
Prompt – An input provided to a model to guide it to generate an appropriate response or output for the input. For example, a text prompt can consist of a single line for the model to respond to, or it can detail instructions or a task for the model to perform. The prompt can contain the context of the task, examples of outputs, or text for a model to use in its response. Prompts can be used to carry out tasks such as classification, question answering, code generation, creative writing, and more. For more information, see Prompt engineering concepts.
Token – A sequence of characters that a model can interpret or predict as a single unit of meaning. For example, with text models, a token could correspond not just to a word, but also to a part of a word with grammatical meaning (such as "-ed"), a punctuation mark (such as "?"), or a common phrase (such as "a lot").
Model parameters – Values that define a model and its behavior in interpreting input and generating responses. Model parameters are controlled and updated by providers. You can also update model parameters to create a new model through the process of model customization.
Inference parameters – Values that can be adjusted during model inference to influence a response. Inference parameters can affect how varied responses are and can also limit the length of a response or the occurrence of specified sequences. For more information and definitions of specific inference parameters, see Influence response generation with inference parameters.
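To make the "how varied responses are" part concrete: temperature, the most common inference parameter, rescales the model's scores before sampling. This is a generic sketch (the logit values are made up, and actual parameter names vary per Bedrock model provider), not a Bedrock API call.

```python
import math

def softmax_with_temperature(logits, temperature):
    # Temperature divides the logits before softmax: low values sharpen the
    # distribution (more deterministic output), high values flatten it (more varied).
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # hypothetical scores for three candidate tokens
cold = softmax_with_temperature(logits, 0.2)  # top token dominates
hot = softmax_with_temperature(logits, 2.0)   # probabilities nearly even
```

Parameters like top-p and top-k similarly reshape this distribution, while max-token and stop-sequence settings limit the response's length or content.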
Playground – A user-friendly graphical interface in the AWS Management Console in which you can experiment with running model inference to familiarize yourself with Amazon Bedrock. Use the playground to test out the effects of different models, configurations, and inference parameters on the responses generated for different prompts that you enter. For more information, see Generate responses in the console using playgrounds.
Embedding – The process of condensing information by transforming input into a vector of numerical values, known as the embeddings, in order to compare the similarity between different objects by using a shared numerical representation. For example, sentences can be compared to determine the similarity in meaning, images can be compared to determine visual similarity, or text and image can be compared to see if they're relevant to each other. You can also combine text and image inputs into an averaged embeddings vector if it's relevant to your use case. For more information, see Submit prompts and generate responses with model inference and Retrieve data and generate AI responses with Amazon Bedrock Knowledge Bases.
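The "compare the similarity" step usually means cosine similarity between embedding vectors. A toy sketch with made-up 3-dimensional vectors (real embedding models produce hundreds or thousands of dimensions):

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors: 1.0 = same direction, 0.0 = unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings: semantically close texts should land near each other.
cat = [0.9, 0.1, 0.0]
kitten = [0.85, 0.2, 0.05]
car = [0.0, 0.1, 0.95]
assert cosine_similarity(cat, kitten) > cosine_similarity(cat, car)
```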
Orchestration – The process of coordinating between foundation models and enterprise data and applications in order to carry out a task. For more information, see Automate tasks in your application using AI agents.
Agent – An application that carries out orchestrations by cyclically interpreting inputs and producing outputs using a foundation model. An agent can be used to carry out customer requests. For more information, see Automate tasks in your application using AI agents.
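The "cyclically interpreting inputs and producing outputs" loop can be sketched like this. Everything here is hypothetical scaffolding (the `model` callable and the action dictionary shape are invented for illustration, not a Bedrock interface): the model either requests a tool or returns a final answer, and tool results feed back into the next turn.

```python
def run_agent(model, tools, user_request, max_steps=5):
    # Cyclic agent loop: model interprets history, calls a tool or finishes.
    history = [("user", user_request)]
    for _ in range(max_steps):
        action = model(history)  # hypothetical model callable
        if action["type"] == "final":
            return action["text"]
        tool_result = tools[action["tool"]](action["input"])
        history.append(("tool", tool_result))  # feed the result back in
    return "step limit reached"

# Stub model: first asks for the weather tool, then composes an answer.
def stub_model(history):
    if history[-1][0] == "user":
        return {"type": "tool_call", "tool": "weather", "input": "Austin"}
    return {"type": "final", "text": f"It is {history[-1][1]} in Austin."}

tools = {"weather": lambda city: "sunny"}
print(run_agent(stub_model, tools, "What's the weather in Austin?"))
# → It is sunny in Austin.
```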
Retrieval augmented generation (RAG) – The process of querying and retrieving information from a data source in order to augment a generated response to a prompt. For more information, see Retrieve data and generate AI responses with Amazon Bedrock Knowledge Bases.
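A minimal RAG sketch of the retrieve-then-augment flow. The retrieval here is naive keyword overlap purely for illustration; a real knowledge base would rank documents by embedding similarity, and the prompt template is invented:

```python
def retrieve(query, documents, k=1):
    # Toy retrieval: rank documents by shared words with the query.
    q = set(query.lower().split())
    scored = sorted(documents, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return scored[:k]

def build_rag_prompt(query, documents):
    # Augment the prompt with retrieved context before model inference.
    context = "\n".join(retrieve(query, documents))
    return f"Use the context to answer.\nContext:\n{context}\n\nQuestion: {query}"

docs = [
    "Amazon Bedrock offers foundation models from several providers.",
    "The capital of France is Paris.",
]
prompt = build_rag_prompt("Which providers does Amazon Bedrock offer models from?", docs)
```

The augmented prompt is then sent to the foundation model as ordinary model inference, grounding the response in the retrieved text.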
Model customization – The process of using training data to adjust the model parameter values in a base model in order to create a custom model. Examples of model customization include Fine-tuning, which uses labeled data (inputs and corresponding outputs), and Continued Pre-training, which uses unlabeled data (inputs only) to adjust model parameters. For more information about model customization techniques available in Amazon Bedrock, see Customize your model to improve its performance for your use case.
Hyperparameters – Values that can be adjusted for model customization to control the training process and, consequently, the output custom model. For more information and definitions of specific hyperparameters, see Custom model hyperparameters.
Model evaluation – The process of evaluating and comparing model outputs in order to determine the model that is best suited for a use case. For more information, see Evaluate the performance of Amazon Bedrock resources.
Provisioned Throughput – A level of throughput that you purchase for a base or custom model in order to increase the amount and/or rate of tokens processed during model inference. When you purchase Provisioned Throughput for a model, a provisioned model is created that can be used to carry out model inference. For more information, see Increase model invocation capacity with Provisioned Throughput in Amazon Bedrock.
Honda Prologue Electric Car sales skyrocket in America - YouTube
3rd best-selling EV in the USA!
Based on a GM platform.
2024 Honda Prologue – All-Electric SUV | Honda
A new era for the Changelog Podcast Universe
However, this is where our vision for CPU.fm comes into play. Spin-offs are being planned and new podcasts will form from this change (and CPU.fm will be there to support them). Here’s what we know so far:
Four EVs that take on the Tesla Model Y - Autoblog
EXCLUSIVE: Former GM Exec Warns Tesla and China’s EV Domination Is Unstoppable - YouTube
Announcing The Stargate Project | OpenAI
"The Stargate Project is a new company which intends to invest $500 billion over the next four years building new AI infrastructure for OpenAI in the United States. We will begin deploying $100 billion immediately. This infrastructure will secure American leadership in AI, create hundreds of thousands of American jobs, and generate massive economic benefit for the entire world. This project will not only support the re-industrialization of the United States but also provide a strategic capability to protect the national security of America and its allies.
The initial equity funders in Stargate are SoftBank, OpenAI, Oracle, and MGX. SoftBank and OpenAI are the lead partners for Stargate, with SoftBank having financial responsibility and OpenAI having operational responsibility. Masayoshi Son will be the chairman.
Arm, Microsoft, NVIDIA, Oracle, and OpenAI are the key initial technology partners. The buildout is currently underway, starting in Texas, and we are evaluating potential sites across the country for more campuses as we finalize definitive agreements."
President Trump makes announcement on AI infrastructure - YouTube
OpenAI Unveils “Project Stargate” - $500 BILLION AI Mega Factories! - YouTube
OpenAI teams up with SoftBank and Oracle on $500B data center project | TechCrunch
"OpenAI says that it will team up with Japanese conglomerate SoftBank and with Oracle, among others, to build multiple data centers for AI in the U.S."
AI-only private school, fully personalized for each student
2hr Learning: How Our Schools Work - YouTube
Private School in Texas, Florida & Arizona | Alpha School
Alpha School uses AI to teach students academics for just two hours a day | FOX 7 Austin
DeepSeek claims its 'reasoning' model beats OpenAI's o1 on certain benchmarks | TechCrunch
Chinese AI lab DeepSeek has released an open version of DeepSeek-R1, its so-called reasoning model, that it claims performs as well as OpenAI's o1 on certain AI benchmarks.
TypeScript is complex. The JavaScript ecosystem is complex.
Most things can be done in many different ways, and things break and stop working all the time.
When ts-node has issues, tsx often works. That is helpful.
But is it enough to "switch"?
Frequently Asked Questions | tsx
GitHub - privatenumber/ts-runtime-comparison: Comparison of Node.js TypeScript runtimes
ts-node incorporates type checking; tsx does not.
tsx handles package types automatically; ts-node does not.
node --import tsx adds support for both Module and CommonJS contexts. To only import one, you can use node --import tsx/esm or node --require tsx/cjs.
node -r ts-node/register only supports a CommonJS context; node --loader ts-node/esm must be used for projects that are of type Module.
"On average tsx was faster (about twice as fast on medium-sized projects) than ts-node. tsx also includes a watch option which automatically reruns when the codebase is changed, which can be useful in certain circumstances.
Overall, it feels that losing type checking for a faster and more flexible runtime is a better choice ... for running tests and small dev scripts."