Saturday, September 14, 2024

Architecture: House in Greenhouse

 Family wraps homestead in greenhouse to warm up & grow food all year - YouTube




similar concept, more evolved: one floor underground for a stable temperature;
filtered rainwater used for all household needs;
solar panels for energy, integrated into a solar roof



CS and Programming open courseware (!)

MITx

Materials by Lecture | Introduction to CS and Programming using Python | Electrical Engineering and Computer Science | MIT OpenCourseWare 6.100L | Fall 2022 | Undergraduate


Introduction to Computer Science and Programming | Electrical Engineering and Computer Science | MIT OpenCourseWare (6.0001, Fall 2016)

Lecture Slides and Code | Introduction to Computer Science and Programming in Python | Electrical Engineering and Computer Science | MIT OpenCourseWare


Introduction to Computational Thinking and Data Science | Electrical Engineering and Computer Science | MIT OpenCourseWare (6.0002, Fall 2016)

Lecture Slides and Files | Introduction to Computational Thinking and Data Science | Electrical Engineering and Computer Science | MIT OpenCourseWare


Introductory Programming | MIT OpenCourseWare | Free Online Course Materials

6.00 Intro to CS and Programming has been retired from OCW. You can access the archived course on DSpace – MIT’s digital repository. Please see the list of introductory programming courses and other programming courses from recent years.


26 lectures, about 1h each
MIT 6.100L Introduction to CS and Programming using Python, Fall 2022

More information at https://ocw.mit.edu/terms
More courses at https://ocw.mit.edu




related code:





related tools:

Python Tutor helps you do programming homework assignments in Python, Java, C, C++, and JavaScript. It contains a unique step-by-step visual debugger and AI tutor to help you understand and debug code. Start coding online now in Python, Java, C, C++, and JavaScript
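For example, a tiny snippet of the kind you might paste into Python Tutor to watch the call stack and variable values step by step (a made-up example, not from the site):

# Stepping through this in Python Tutor shows one stack frame per recursive call
# and how n shrinks toward the base case.
def factorial(n):
    if n <= 1:
        return 1
    return n * factorial(n - 1)

print(factorial(5))  # 120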







@edX
https://www.eecs.mit.edu/people/ana-bell/ 
https://www.mit.edu/~anabell/




books
Manning Publications; 1st edition (April 24, 2018)  456 pages

HarvardX


"This is CS50, Harvard University’s introduction to the intellectual enterprises of computer science and the art of programming, for concentrators and non-concentrators alike, with or without prior programming experience. (Two thirds of CS50 students have never taken CS before.
... The course starts with a traditional but omnipresent language called C that underlies today’s newer languages, via which you’ll learn not only about functions, variables, conditionals, loops, and more, but also about how computers themselves work underneath the hood, memory and all. The course then transitions to Python, a higher-level language that you’ll understand all the more because of C. Toward term’s end, the course introduces SQL, via which you can store data in databases, along with HTML, CSS, and JavaScript, via which you can create web and mobile apps alike. "



CS50's adaptation of ChatGPT for students and teachers beta
with support from Microsoft and OpenAI

Visual Studio Code for CS50
CS50's adaptation of Codespaces

Manual pages for the C standard library, the C POSIX library,
and the CS50 Library for those less comfortable




CS50 AP is an amalgam of two courses:
CS50’s Introduction to Computer Science, otherwise known as CS50x, and
CS50’s Understanding Technology, otherwise known as CS50T.
CS50 AP is only for students in high school.
Students not in high school should take CS50T and CS50x instead.
CS50 AP is also available via edX.

CS50-related books: CS50 Books @ GoodReads

Stanford @edX

CS101 is a self-paced course that teaches the essential ideas of Computer Science for a zero-prior-experience audience. Computers can appear very complicated, but in reality, computers work within just a few, simple patterns. CS101 demystifies and brings those patterns to life, which is useful for anyone using computers today.


free until 10/21; 
verified certificate: $249


edX

At 2U, we deliver world-class learning outcomes at scale.
Through our global online learning platform edX, we connect millions of people to high-quality, career-relevant education in partnership with leading universities and industry experts.

Friday, September 13, 2024

science paper: The biology of digital organisms

 The biology of digital organisms

Digital organisms are self-replicating computer programs that mutate and evolve. They can be thought of as a domesticated form of computer virus that lives in, and adapts to, a controlled environment. Digital organisms provide a unique opportunity with which to study evolutionary biology in a form of life that shares no ancestry with carbon-based life forms, and hence to distinguish general principles of evolution from historical accidents that are particular to biochemical life. In terms of the complexity of their evolutionary dynamics, digital organisms can be compared with biochemical viruses and bacteria. Recent studies of digital organisms have addressed long-term evolutionary adaptation and the growth of complexity in evolving systems, patterns of epistatic interactions in various genetic backgrounds, and quasi-species dynamics.
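A rough sketch of the idea in Python (a toy illustration, not the Avida-style platform such papers actually study): programs are reduced to bit-string genomes that replicate with occasional mutation, and selection keeps the fitter copies.

# Toy "digital organism" loop: replication with mutation plus truncation selection.
import random

GENOME_LEN = 32
MUTATION_RATE = 0.02
POP_SIZE = 50

def random_genome():
    return [random.randint(0, 1) for _ in range(GENOME_LEN)]

def fitness(genome):
    # Stand-in "task": count of 1-bits; real systems reward performing computations.
    return sum(genome)

def replicate(genome):
    # Copy the genome, flipping each bit with a small probability (mutation).
    return [1 - bit if random.random() < MUTATION_RATE else bit for bit in genome]

population = [random_genome() for _ in range(POP_SIZE)]
for generation in range(100):
    offspring = [replicate(g) for g in population]
    # Selection: keep the fittest half of parents + offspring.
    population = sorted(population + offspring, key=fitness, reverse=True)[:POP_SIZE]

print("best fitness after 100 generations:", fitness(population[0]))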




Thursday, September 12, 2024

movie: Tower to the People: Tesla's Dream at Wardenclyffe

 Tower to the People: Tesla's Dream at Wardenclyffe - YouTube

Discover the truth about Nikola Tesla's most ambitious experiment for humanity at a lab called Wardenclyffe in this award-winning film. Crushed by the day's corporate titans, Tesla was exalted generations later as the world united to preserve his legacy.


Wardenclyffe Tower (1901–1917), also known as the Tesla Tower, was an early experimental wireless transmission station designed and built by Nikola Tesla on Long Island in 1901–1902, located in the village of Shoreham, New York. Tesla intended to transmit messages, telephony, and even facsimile images across the Atlantic Ocean to England and to ships at sea based on his theories of using the Earth to conduct the signals.











Modern-day world-changing tech, now from Tesla, the company.

AI: OpenAI "o1" and "01-mini" models: PhD level

 OpenAI launches new AI model o1 with PhD-level performance | VentureBeat

Following months of reports and rumors that intensified in recent days, OpenAI announced its “o1” AI model family, beginning with two models: o1-preview and o1-mini, which the company says are designed to “reason through complex tasks and solve harder problems” than the GPT series models.

The o1-preview model is designed to handle challenging tasks by dedicating more time to thinking and refining its responses, similar to how a person would approach a complex problem.

In tests, this approach has allowed the model to perform at a level close to that of PhD students in areas like physics, chemistry, and biology.

OpenAI envisions the models being used for a wide range of applications, from helping physicists generate mathematical formulas for quantum optics to assisting healthcare researchers in annotating cell sequencing data.

Developers will also find the o1-mini model effective for building and executing multi-step workflows, debugging code, and solving programming challenges efficiently.

In benchmark tasks such as the International Mathematics Olympiad (IMO) qualifying exam, o1-preview demonstrated its prowess by solving 83% of the problems, a sharp improvement over the 13% success rate of its predecessor, GPT-4o.
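What calling the new models might look like from the OpenAI Python SDK, assuming you have API access to the preview (the model names come from the announcement; since o1 models do their own internal reasoning, the prompt is kept plain):

# Minimal sketch, assuming the openai Python package and an OPENAI_API_KEY env var.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o1-preview",  # or "o1-mini" for cheaper, code-oriented tasks
    messages=[
        {"role": "user", "content": "Prove that the sum of two odd integers is even."}
    ],
)
print(response.choices[0].message.content)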

Coding with OpenAI o1 - YouTube

OpenAI - YouTube


"learn to code to learn how to think"



Wednesday, September 11, 2024

C => Rust ?

 The US has planned their move to Rust (it's wild) - YouTube theo-t3.gg

Translating All C to Rust @ DARPA
(TRACTOR)
Dr. Dan Wallach


immunant/c2rust: Migrate C code to Rust @GitHub

C2Rust helps you migrate C99-compliant code to Rust. The translator (or transpiler), c2rust transpile, produces unsafe Rust code that closely mirrors the input C code. The primary goal of the translator is to preserve functionality; test suites should continue to pass after translation.


Tuesday, September 10, 2024

AI: Graph RAG (Microsoft)

Intro to GraphRAG - YouTube by Microsoft "Reactor" (learning)

GraphRAG is a technique that enhances document analysis and question-and-answer performance by leveraging large language models (LLMs) to create knowledge graphs. We'll explore how GraphRAG builds upon Retrieval-Augmented Generation (RAG) by using knowledge graphs instead of vector similarity for retrieval. You'll learn how to set up the environment, prepare data, and implement GraphRAG using LangChain, with practical code examples. Additionally, we'll explore some advanced features and customization options available in LangChain to optimize and tailor GraphRAG to your specific needs.
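A minimal, hand-rolled sketch of the core idea (plain Python, not Microsoft's GraphRAG package or LangChain): retrieve context by walking a small knowledge graph of triples instead of by vector similarity, then hand those facts to an LLM as the prompt.

# Knowledge graph as (subject, relation, object) triples; in real GraphRAG these
# are extracted from documents by an LLM rather than written by hand.
triples = [
    ("GraphRAG", "builds_on", "RAG"),
    ("GraphRAG", "uses", "knowledge graphs"),
    ("RAG", "uses", "vector similarity"),
    ("knowledge graphs", "store", "entities and relations"),
]

def retrieve(question, graph):
    # Entity linking by naive substring match; real systems use an LLM for this step.
    q = question.lower()
    mentioned = {s for s, _, o in graph if s.lower() in q} | \
                {o for s, _, o in graph if o.lower() in q}
    return [t for t in graph if t[0] in mentioned or t[2] in mentioned]

question = "How is GraphRAG different from plain RAG?"
facts = retrieve(question, triples)
prompt = "Answer using only these facts:\n" + \
         "\n".join(f"{s} {r} {o}" for s, r, o in facts) + \
         "\n\nQuestion: " + question
print(prompt)  # this prompt would then be sent to an LLM of your choice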


RAGHack 2024

AI: LangChain vs LlamaIndex

 LangChain vs LlamaIndex: Choose the Best Framework for Your AI Applications


LlamaIndex is a powerful tool for data indexing and retrieval, designed to enhance information accessibility. It streamlines the process of efficiently indexing data, making it easier to locate and retrieve relevant information. By focusing on effective data retrieval, LlamaIndex ensures that users can access the information they need quickly and accurately. LlamaIndex is particularly adept at indexing and storing data into embeddings, which significantly improves the relevance and precision of data retrieval.

LangChain, on the other hand, is a versatile framework designed to empower developers to create a wide range of language model-powered applications. The modular architecture of LangChain enables developers to efficiently design customized solutions for various use cases. It provides interfaces for prompt management, interaction with language models, and chain management. It also includes memory management to remember previous interactions. LangChain excels at chatbot applications, generating text, answering queries, and language translations.
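A quick side-by-side sketch of what the two look like in practice; import paths shift between versions, so treat this as an approximation against the 2024-era llama-index and langchain packages, with an OpenAI key in the environment and a hypothetical docs/ folder.

# LlamaIndex: load documents, embed them into an index, then query it.
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

documents = SimpleDirectoryReader("docs/").load_data()   # hypothetical local folder
index = VectorStoreIndex.from_documents(documents)       # chunks, embeds, and stores them
print(index.as_query_engine().query("What does the design doc say about caching?"))

# LangChain: compose a prompt template and a chat model into a chain.
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_template("Translate to French: {text}")
chain = prompt | ChatOpenAI(model="gpt-4o-mini")
print(chain.invoke({"text": "Good morning"}).content)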

Monday, September 09, 2024

Postgres Full Text Search vs Elasticsearch

 Full Text Search over Postgres: Elasticsearch vs. Alternatives - ParadeDB

What is Full Text Search (FTS)?

Full text search is a technique that finds entries in a collection of text based on the presence of specific keywords and phrases. Most search engines like Elasticsearch use the BM25 algorithm to rank search results. BM25 considers how often a term appears and how unique that term is across all documents.

Full text search is different from similarity search, also known as vector search, which searches for and ranks results by semantic meaning. Many modern applications use a combination of full text and similarity search. This practice is called hybrid search and can yield more accurate results.
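A toy BM25 scorer makes the ranking idea concrete (standard k1/b defaults; real engines like Elasticsearch add tuning and optimizations on top):

import math

def bm25_score(query_terms, doc, corpus, k1=1.2, b=0.75):
    """Score one document (a token list) against query terms; corpus is a list of token lists."""
    N = len(corpus)
    avgdl = sum(len(d) for d in corpus) / N
    score = 0.0
    for term in query_terms:
        tf = doc.count(term)                               # how often the term appears in this doc
        df = sum(1 for d in corpus if term in d)            # how many docs contain the term
        idf = math.log((N - df + 0.5) / (df + 0.5) + 1)     # rarer terms weigh more
        score += idf * (tf * (k1 + 1)) / (tf + k1 * (1 - b + b * len(doc) / avgdl))
    return score

corpus = [["postgres", "full", "text", "search"],
          ["elasticsearch", "search", "engine"],
          ["vector", "similarity", "search"]]
print([round(bm25_score(["full", "text"], d, corpus), 3) for d in corpus])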


Postgres FTS is a native functionality available to all Postgres databases. It leverages the tsvector data type, which stores text as searchable tokens, and the GIN index, which improves search speeds.
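A sketch of native Postgres FTS driven from Python with psycopg2; the articles table, its body column, and the connection string are made up, and the generated column needs Postgres 12+. Note that ts_rank is Postgres' built-in ranking function, not BM25.

import psycopg2

conn = psycopg2.connect("dbname=demo")  # hypothetical connection string
cur = conn.cursor()

# tsvector column + GIN index: tokens are precomputed so searches stay fast.
cur.execute("""
    ALTER TABLE articles
        ADD COLUMN IF NOT EXISTS body_tsv tsvector
        GENERATED ALWAYS AS (to_tsvector('english', body)) STORED;
    CREATE INDEX IF NOT EXISTS articles_body_tsv_idx ON articles USING GIN (body_tsv);
""")

# Query and rank with Postgres' built-in ts_rank (BM25-style scoring is what
# Elasticsearch or ParadeDB add on top).
cur.execute("""
    SELECT id, title, ts_rank(body_tsv, query) AS rank
    FROM articles, plainto_tsquery('english', %s) AS query
    WHERE body_tsv @@ query
    ORDER BY rank DESC
    LIMIT 10;
""", ("full text search",))
print(cur.fetchall())
conn.commit()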


ParadeDB is a full text search engine built for Postgres. Powered by an extension called pg_search, ParadeDB embeds Tantivy, a Rust-based Lucene alternative, inside Postgres. Like native Postgres FTS, ParadeDB plugs into any existing, self-managed Postgres database with no additional infrastructure. Like Elasticsearch, ParadeDB provides the capabilities of an advanced full text search engine.


94X Faster than Postgres
ParadeDB brings column-oriented storage and vectorized query execution to Postgres tables. Users can choose between row and column-oriented storage at table creation time.


AI: RAG app with Postgres DB

 Building a RAG application with GitHub Models and Postgres FROM SCRATCH - YouTube
@Visual Studio Code

Retrieval Augmented Generation (RAG) is a powerful technique to make your AI models even smarter - but how can you leverage it in your own applications? In this video, Pamela Fox breaks down the basics of RAG and shows you how to build your very own RAG app from scratch.

saving embeddings to the DB with the pgvector extension, from Python,
with GitHub Codespaces (VS Code in the web browser)
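Roughly what that looks like: storing OpenAI embeddings in Postgres with pgvector and querying by cosine distance. The table name, connection string, and model choice are assumptions; the pgvector-playground repo linked below has fuller, tested examples.

# Assumes the pgvector extension plus the "pgvector", "psycopg2", "numpy", and "openai" packages.
import numpy as np
import psycopg2
from pgvector.psycopg2 import register_vector
from openai import OpenAI

ai = OpenAI()
conn = psycopg2.connect("dbname=demo")
cur = conn.cursor()
cur.execute("CREATE EXTENSION IF NOT EXISTS vector;")
cur.execute("CREATE TABLE IF NOT EXISTS docs (id serial PRIMARY KEY, content text, embedding vector(1536));")
register_vector(conn)  # lets psycopg2 send/receive numpy arrays as pgvector values

def embed(text):
    # text-embedding-3-small returns 1536-dimensional vectors
    resp = ai.embeddings.create(model="text-embedding-3-small", input=text)
    return np.array(resp.data[0].embedding)

sentence = "pgvector stores embeddings inside Postgres"
cur.execute("INSERT INTO docs (content, embedding) VALUES (%s, %s)", (sentence, embed(sentence)))

# "<=>" is pgvector's cosine-distance operator: smallest distance = most similar.
cur.execute("SELECT content FROM docs ORDER BY embedding <=> %s LIMIT 3",
            (embed("where are my vectors stored?"),))
print(cur.fetchall())
conn.commit()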

GitHub - pamelafox/pgvector-playground: A dev container for using PostgreSQL + pgvector in Python, with SQLAlchemy, SQLModel, psycopg2, and asyncpg examples. @GitHub

Comparing main...demochanges · pamelafox/pgvector-playground · GitHub

Pamela Fox | LinkedIn


Intro to GraphRAG - YouTube @ Microsoft Reactor


What Is Retrieval-Augmented Generation aka RAG | NVIDIA Blogs

What is RAG? - Retrieval-Augmented Generation AI Explained - AWS


PostgreSQL as a Vector Database: Create, Store, and Query OpenAI Embeddings With pgvector

Sunday, September 08, 2024

tall buildings on weak sand/mud foundations

"saved" $6M on installing proper foundation down to bedrock... 

Why Nobody Can Fix This New York Skyscraper - YouTube

161 Maiden Lane - Wikipedia

161 Maiden Lane (also known as One Seaport, 1 Seaport, or Seaport Residences) is an incomplete 670 ft (205 m) tall residential skyscraper on Maiden Lane in the Financial District of Manhattan, New York City.

The building leans 3 inches (76 mm) to the north as a result of the method used to construct its foundation: instead of using the piling method like other neighboring skyscrapers, soil improvement methods were used where chemicals or other material are added to the soil to strengthen it. As of 2024, only half of the finishes, including windows, have been installed.[2]



One Seaport’s site sits upon East River landfill that dates to the turn of the eighteenth century. With rock situated at 132 to 166 feet below grade, initial evaluations of deep foundation systems such as drilled piles and caissons, common to high-rise structures, were performed. The difficulties associated with drilling elements to such depths resulted in extremely high foundation bids from a limited number of contractors. An alternate system, not commonly utilized to support high-rise structures, was proposed. The solution used a jet-grout soil improvement system, to depths of 55 feet below grade, into the sand layer.



another building with same/similar issue:

The Nightmare of San Francisco’s Sinking Tower, Explained - YouTube

Millennium Tower (San Francisco) - Wikipedia


After developers disclosed to authorities in 2015 that the building was sinking and tilting,[26]

the foundation of the main tower consists of a concrete slab built on 60-to-90-foot deep (18 to 27 m) concrete friction piles through the fill and young bay mud

The building is leaning toward the northwest,[30][31][32] and this has caused cracks in the building's basement and the pavement surrounding the tower.[33] As of 2018, the sinking had increased to 18 inches (46 cm) with a lean of 14 inches (36 cm).[34] Measurements in 2022 show the tilt increased to 28 inches (71 cm), as measured from the roof.[35]



A building is only as sound as its foundation.




AI future, by Eric Schmidt (former Google CEO)

Google CEO ERIC SCHMIDT BANNED Interview LEAKED: "Future is SCARY" (AI Pep Talk) - YouTube

In this shocking Stanford interview video, we reveal the leaked interview of former Google CEO Eric Schmidt, where he candidly discusses the future of artificial intelligence and its implications for humanity. Titled "Future is SCARY," this exclusive Pep Talk provides a deep dive into the challenges and opportunities that AI presents.

Components of powerful LLMs: According to Schmidt, there are three key components that will make LLMs powerful. 
  • The first is context windows, which allow LLMs to access and remember short-term information. 
  • The second is text-to-action, which allows users to give instructions to LLMs in natural language and have them carry out those instructions. For example, a user could instruct an LLM to create a social media app similar to TikTok. 
  • The third component is the ability to learn and adapt, which allows LLMs to continuously improve their abilities.


"...given the capabilities that you envision these models having 
should we still spend time learning to code?"

...Why do you study English if you can speak English?
You get better at it, right?
You really do need to understand how these systems work,
and I feel very strongly: yes.


Azure OpenAI Assistants API

Is this new feature making it easier or more confusing to use the AI API?
Both!
A typical Microsoft tool :)
At least there is documentation available.

 Azure OpenAI Service Assistants API concepts - Azure OpenAI Service | Microsoft Learn

Assistants, a new feature of Azure OpenAI Service, is now available in public preview. Assistants API makes it easier for developers to create applications with sophisticated copilot-like experiences that can sift through data, suggest solutions, and automate tasks.

  • Assistants can call Azure OpenAI’s models with specific instructions to tune their personality and capabilities.
  • Assistants can access multiple tools in parallel. These can be both Azure OpenAI-hosted tools like code interpreter and file search, or tools you build, host, and access through function calling.
  • Assistants can access persistent Threads. Threads simplify AI application development by storing message history and truncating it when the conversation gets too long for the model's context length. You create a Thread once, and simply append Messages to it as your users reply.
  • Assistants can access files in several formats. Either as part of their creation or as part of Threads between Assistants and users. When using tools, Assistants can also create files (such as images or spreadsheets) and cite files they reference in the Messages they create.

The Assistants API, as the stateful evolution of the chat completion API, provides a solution for these challenges. Assistants API supports persistent automatically managed threads. This means that as a developer you no longer need to develop conversation state management systems and work around a model’s context window constraints. The Assistants API will automatically handle the optimizations to keep the thread below the max context window of your chosen model. Once you create a Thread, you can simply append new messages to it as users respond. Assistants can also access multiple tools in parallel, if needed. These tools include code interpreter, file search, and functions you define via function calling.
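A sketch of the Assistant → Thread → Message → Run flow with the openai Python SDK's Azure client (the endpoint, API version, and deployment name below are placeholders, and create_and_poll needs a recent SDK version):

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",  # placeholder
    api_key="...",
    api_version="2024-05-01-preview",
)

# 1. Create an assistant with instructions and a tool (code interpreter here).
assistant = client.beta.assistants.create(
    model="gpt-4o",  # the name of your Azure deployment
    instructions="You are a data assistant. Use code when math is involved.",
    tools=[{"type": "code_interpreter"}],
)

# 2. Create a persistent thread and append a user message to it.
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id, role="user",
    content="What is the standard deviation of 2, 4, 4, 4, 5, 5, 7, 9?",
)

# 3. Run the assistant on the thread; the service manages history and truncation.
run = client.beta.threads.runs.create_and_poll(thread_id=thread.id, assistant_id=assistant.id)
for message in client.beta.threads.messages.list(thread_id=thread.id):
    print(message.role, message.content[0].text.value)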











Saturday, September 07, 2024

Deno JSR (JS module registry)

 What we got wrong about HTTP imports

by Ryan Dahl, creator of Node.js and Deno

JSR is an open-source, cross-runtime code registry that allows users to easily share modern JavaScript and TypeScript. It’s built to be reliable and cheap to host, essentially acting as a heavily cached file server due to immutability guarantees.

Under the hood, JSR still uses HTTP imports.

AI: Google Gemini API

Gemini - Google DeepMind

  • "Ultra": not free, best model
  • "Pro": good, free
  • "Flash": small, fast
  • "Nano": for "on-device" AI tasks

Gemini (language model) - Wikipedia

Gemini is a family of multimodal large language models developed by Google DeepMind, serving as the successor to LaMDA and PaLM 2. Comprising Gemini Ultra, Gemini Pro, Gemini Flash, and Gemini Nano, it was announced on December 6, 2023, positioned as a competitor to OpenAI's GPT-4. It powers the chatbot of the same name.


Gemini (chatbot) - Wikipedia

Gemini Chatbot   //gemini.google.com/app/


Google AI Studio | Gemini API | Google for Developers  |  Google AI for Developers


Gemini models  |  Gemini API  |  Google AI for Developers
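Calling the Gemini API from Python is short; this sketch assumes the google-generativeai package, an API key from Google AI Studio, and the 1.5-generation "Flash" model name:

import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

# "Flash" is the small/fast tier mentioned above; Pro and Ultra are the larger models.
model = genai.GenerativeModel("gemini-1.5-flash")
response = model.generate_content("Summarize the difference between Gemini Pro and Gemini Nano.")
print(response.text)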



 What Is Google Gemini AI Model (Formerly Bard)? | Definition from TechTarget

Google Gemini is a family of multimodal AI large language models (LLMs) that have capabilities in language, audio, code and video understanding.


Gemini 1.0 was announced on Dec. 6, 2023, and built by Alphabet's Google DeepMind business unit, which is focused on advanced AI research and development. Google co-founder Sergey Brin is credited with helping to develop the Gemini LLMs, alongside other Google staff.










Microsoft (Azure) Growth charts

 Prediction: Microsoft Azure To Reach $200 Billion In Revenue By 2028

Microsoft vs Amazon vs Apple

2004-2014


2014-2024


Friday, September 06, 2024

EV: Tesla 'Giga Train' in Germany

Tesla gets Germany to kill off diesel trains after launching EV ‘Giga Train’ - YouTube
by Electric Viking

free, battery powered

Tesla's all-electric 'Giga Train' in Germany takes first trip @Teslarati

Tesla’s all-electric “Giga Train,” which will transport employees to work at Gigafactory Berlin in Germany,





MikroORM: TypeScript ORM

 mikro-orm/mikro-orm: TypeScript ORM for Node.js based on Data Mapper, Unit of Work and Identity Map patterns. Supports MongoDB, MySQL, MariaDB, MS SQL Server, PostgreSQL and SQLite/libSQL databases. @GitHub

JS, MIT

TypeScript ORM for Node.js based on Data Mapper, Unit of Work and Identity Map patterns. Supports MongoDB, MySQL, MariaDB, PostgreSQL and SQLite (including libSQL) databases.

AI: Sutskever's Secret SSI: $5B

 $125B for Superintelligence? 3 Models Coming, Sutskever's Secret SSI, & Data Centers (in space)... - YouTube

Links from the video description:
  • Safe Superintelligence (Sutskever SSI): https://www.reuters.com/technology/ar...
  • $125B Data Centers: https://www.theinformation.com/articl...
  • Altman ‘Too Aggressive’: https://www.theinformation.com/articl...
  • OpenAI Orion: https://www.theinformation.com/articl...
  • Semianalysis Report: https://www.semianalysis.com/p/multi-... and https://www.semianalysis.com/p/100000...
  • xAI Colossus: https://x.com/elonmusk/status/1830650... and https://x.com/elonmusk/status/1815341...
  • Altman Reacts: https://www.theinformation.com/articl...
  • GPT-6 Co-location: https://x.com/corbtt/status/177239252...
  • Data Centers in Space: https://x.com/8teAPi/status/183111330...
  • And Underwater: https://www.itpro.com/infrastructure/...
  • Zuckerberg on Power and Exponentials: “Energy, not compute, will be the #1 b...” (YouTube)
  • Epoch AI Report: https://epochai.org/blog/can-ai-scali...
  • Original SuperAlignment Deadline: https://openai.com/index/introducing-...
  • Character AI Bought: https://www.theinformation.com/articl...




Ilya Sutskever is a computer scientist who specializes in machine learning.[1]

Sutskever has made several major contributions to the field of deep learning.[6][7][8] He is notably the co-inventor, with Alex Krizhevsky and Geoffrey Hinton, of AlexNet, a convolutional neural network.[9]

Sutskever co-founded and is a former chief scientist at OpenAI.[10] In 2023, he was one of the members of OpenAI's board who fired CEO Sam Altman; Altman returned a week later, and Sutskever stepped down from the board. In June 2024, Sutskever co-founded the company Safe Superintelligence with Daniel Gross and Daniel Levy.


Thursday, September 05, 2024

book online: Exploring JavaScript (ES2024 Edition)

 Exploring JavaScript (ES2024 Edition)

by Dr. Axel Rauschmayer

“Exploring JavaScript” makes the language less challenging to learn for newcomers, by offering a modern view that is as consistent as possible.

Highlights:

  • Get started quickly, by initially focusing on modern features.
  • Test-driven exercises available for most chapters.
  • Covers all essential features of JavaScript, up to and including ES2024.
  • Optional advanced sections let you dig deeper.

No prior knowledge of JavaScript is required, but you should know how to program.

AI: Andrej Karpathy: AI Teaching Assistant @ Eureka Labs.AI

No Priors Ep. 80 | With Andrej Karpathy from OpenAI and Tesla - YouTube

"...a founding team member of OpenAI and the former Tesla Autopilot leader, needs no introduction. In this episode, Andrej discusses the evolution of self-driving cars, comparing Tesla's and Waymo’s approaches, and the technical challenges ahead. They also cover Tesla’s Optimus humanoid robot, the bottlenecks of AI development today, and how AI capabilities could be further integrated with human cognition. Andrej shares more about his new mission Eureka Labs and his insights into AI-driven education and what young people should study to prepare for the reality ahead."







Eureka Labs is building a new kind of school that is AI native.

...Teacher + AI symbiosis could run an entire curriculum of courses on a common platform. 
...it will be easy for anyone to learn anything, expanding education in both 
reach (a large number of people learning something) and 
extent (any one person learning a large amount of subjects, beyond what may be possible today unassisted).

...first product will be the world's obviously best AI course, LLM101n. This is an undergraduate-level class that guides the student through training their own AI, very similar to a smaller version of the AI Teaching Assistant itself.