Friday, April 25, 2025

EV Slate $20K pickup truck/SUV?

 Slate’s Game-Changing $20K EV: It's a Truck and an SUV all in one - YouTube by The Electric Viking

Bezos-backed Slate Auto reveals its new customizable $20,000 EV - Fast Company

Slate Auto’s new vehicle is designed to make EVs more accessible by making most features optional add-ons. The company says it will start delivering vehicles in 2026.

Slate Will Teach EV Owners How to Make Their Truck an SUV on YouTube - Newsweek


Here's Your First Look at Jeff Bezos' Cheap Electric Trucks





Turing Award: Jeffrey Ullman

Another great interview with a very prominent person in computing.
Not about compilers; mostly about AI.

 Turing Award Special: A Conversation with Jeffrey Ullman - Software Engineering Daily

Jeffrey Ullman is a renowned computer scientist and professor emeritus at Stanford University, celebrated for his groundbreaking contributions to database systems, compilers, and algorithms. He co-authored influential texts like Principles of Database Systems and Compilers: Principles, Techniques, and Tools (often called the “Dragon Book”), which have shaped generations of computer science students.

Jeffrey received the 2020 Turing Award together with Alfred Aho “for fundamental algorithms and theory underlying programming language implementation and for synthesizing these results and those of others in their highly influential books, which educated generations of computer scientists.”


In 2000 he was awarded the Knuth Prize. Ullman is also the co-recipient (with John Hopcroft) of the 2010 IEEE John von Neumann Medal.
...
He was the Ph.D. advisor of Sergey Brin, one of the co-founders of Google, and served on Google's technical advisory board


Principles of Compiler Design, by Alfred Aho and Jeffrey Ullman, is a classic textbook on compilers for computer programming languages. Both of the authors won the 2020 Turing Award for their work on compilers.



Compilers: Principles, Techniques, and Tools is a computer science textbook by Alfred V. Aho, Monica S. Lam, Ravi Sethi, and Jeffrey D. Ullman about compiler construction for programming languages. First published in 1986, it is widely regarded as the classic definitive compiler technology text.

The first edition (1986) is informally called the "red dragon book" to distinguish it from the second edition, and from Aho & Ullman's 1977 Principles of Compiler Design, sometimes known as the "green dragon book".


Thursday, April 24, 2025

Turing Award: John Hennessy

A podcast interview with a very prominent person in computing;
Uplifting view of the future... and AI...
All roads/conversations lead to AI...

 Turing Award Special: A Conversation with John Hennessy - Software Engineering Daily

John Hennessy is a computer scientist, entrepreneur, and academic known for his significant contributions to computer architecture. He co-developed the RISC architecture, which revolutionized modern computing by enabling faster and more efficient processors. He co-founded MIPS Computer Systems and Atheros Communications, and served as the president of Stanford University from 2000 to 2016. Currently, he serves on the board of the Gordon and Betty Moore Foundation, and is chair of the board of Alphabet.


AI: programming with data is the right way to think about it
... we're going to see more and more of that and particularly smaller LLMs adapted to particular domains


Hennessy is one of the founders of MIPS Technologies and Atheros,
and also the tenth President of Stanford University.

Along with David Patterson, Hennessy was a recipient of the 2017 Turing Award for their work in developing the reduced instruction set computer (RISC) architecture, which is now used in 99% of new computer chips.

Wednesday, April 23, 2025

AI Agents Protocols: A2A & MCP

Google’s A2A Protocol is a GAMECHANGER - YouTube by Chris Hay


MCP makes Telnet RELEVANT for AI Agents - YouTube  by Chris Hay


Making Sense of LLM Tool Use & MCP - YouTube by Maximilian Schwarzmüller


Explore Model Context Protocol (MCP) on AWS! | AWS Show and Tell - Generative AI | S1 E9 - YouTube



Overall, both are designed for specific tasks:
  • A2A facilitates agent-to-agent communication, while
  • MCP enables agent-to-tool interactions.
That's why we need both of them to build powerful agentic systems.

A2A:

- Supports flexible, conversational, and task-centric communication. Agents can exchange messages and negotiate formats based on their capabilities.
- It operates like a "conference room" for agents to collaborate.

MCP:

- Uses structured schemas for rigid, exact inputs and outputs.
- It’s designed for precise interactions between an agent and external systems, acting like a "tool workshop" that provides clear instructions for accessing resources.
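To make the "tool workshop" idea concrete, here is a minimal sketch of what an MCP tool call looks like on the wire. MCP messages are JSON-RPC 2.0; the method and parameter names below follow the MCP specification, but the tool name and its arguments are hypothetical.

import json

# Minimal sketch of an MCP "tools/call" request (JSON-RPC 2.0),
# as an MCP client (agent) would send it to an MCP server.
# The tool name "get_weather" and its arguments are hypothetical.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_weather",
        "arguments": {"city": "Paris"},
    },
}
print(json.dumps(request, indent=2))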

Tuesday, April 22, 2025

.NET Conf 2025

 .NET Conf Focus on Modernization

2025-04-22 11 AM EDT

Modernizing .NET: Future-Ready Applications in the era of AI



Monday, April 21, 2025

AI: AlexNet Source Code (from 2012)

This is the program that started the current "AI revolution".
Now released as open source.

There were at least three previous "waves" of technology called "AI",
but the current one is by far the most successful and influential,
in good part because the hardware (NVIDIA GPUs in particular) provided enough computing power for it.

computerhistory/AlexNet-Source-Code: This package contains the original 2012 AlexNet code. @GitHub

Python + CUDA + C++

The original AlexNet source code as it was in 2012, when it won the ImageNet competition. Geoffrey Hinton, Ilya Sutskever, and Alex Krizhevsky formed DNNResearch soon afterwards and sold the company, and the AlexNet source code along with it, to Google, which would continue work on it. This package also includes the parameter files trained on the ImageNet dataset.

While there are other existing repositories of code named "AlexNet" on the web, they are not the original code, but rather reimplementations based on the paper Krizhevsky, Sutskever, and Hinton published:

Krizhevsky, A., Sutskever, I. & Hinton, G. E. (2012). ImageNet Classification with Deep Convolutional Neural Networks. In F. Pereira, C. J. C. Burges, L. Bottou & K. Q. Weinberger (ed.), Advances in Neural Information Processing Systems 25 (pp. 1097--1105). Curran Associates, Inc.


AlexNet - Wikipedia

AlexNet is a convolutional neural network architecture developed for image classification tasks, notably achieving prominence through its performance in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC). It classifies images into 1,000 distinct object categories and is regarded as the first widely recognized application of deep convolutional networks in large-scale visual recognition.
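For a feel of the architecture, here is a minimal sketch that runs one fake image through a modern reimplementation of AlexNet (torchvision's, not the original 2012 cuda-convnet code); it assumes PyTorch and torchvision are installed.

import torch
from torchvision import models

# torchvision's reimplementation of AlexNet, untrained (weights=None),
# with the standard 1,000-class ImageNet head.
model = models.alexnet(weights=None)
model.eval()

x = torch.randn(1, 3, 224, 224)  # one fake 224x224 RGB image
with torch.no_grad():
    logits = model(x)
print(logits.shape)  # torch.Size([1, 1000]) -- one score per category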



Sunday, April 20, 2025

AI BASIC programming language?

In the 80s, computers became "personal",
enabled by a simple programming language: BASIC.

In the 2000s, communication became "personal",
enabled by the WWW (HTML, HTTP, JavaScript...).

In the 2020s, intelligence is becoming "personal".
Is there a place for a simple AI interface language, an "AI BASIC"?
(please don't say it is only Python :)

MCP (Model Context Protocol) is getting traction for agent-to-tool communication (and A2A for agent-to-agent).
But how about human-to-agent, something more specific than English?

"The BASIC programming language today is derided. It’s seen as slow and encourages bad programming practices. “Real” programmers use more advanced languages. That attitude is a real shame because BASIC was an attempt to allow anyone, even non-programmers, to use a computer, and as a learning tool it created a generation of coders. Rather than being dismissed, I think it should be seen as one of the most important programming languages ever created. Why was this simplistic language created, and where did it come from? And why was it on just about every home computer in the 1980s?"


BASIC - Wikipedia
BASIC (Beginners' All-purpose Symbolic Instruction Code) is a family of general-purpose, high-level programming languages designed for ease of use.


Python, Java, C++, R, Julia, Haskell, Prolog, Scala



The programming iceberg is a complete roadmap to the loved, hated, historical, and weird programming languages that you should know about. It starts with easy-to-learn coding tools, then descends into the most difficult low-level and esoteric languages.

Featuring C, C++, C#, F#, HolyC, C--, Java, JavaScript, Python, Rust, Fortran, Lisp, V, Nim, Zig, APL, Ada, COBOL, Haskell, Scala, Clojure, Kotlin, Swift, Lua, PHP, Elixir, Erlang, Chef, Malbolge, lolcode, emojicode, ASM and many more!


WASM (WebAssembly) was not mentioned, though it may be an essential foundation for future languages.

Programming Language Iceberg : r/ProgrammerHumor

Local AI: Ollama + OpenWebUI

"run free" is relative... need a powerful (enough) computer to run local, and that is not free

Learn Ollama in 15 Minutes - Run LLM Models Locally for FREE - YouTube


Run any LLM Model Locally for FREE (Ollama + OpenWebUI) - YouTube
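Once installed and running, Ollama exposes a simple REST API on localhost (default port 11434). A minimal sketch, assuming the Ollama daemon is up and a model such as llama3.2 has already been pulled (the model name is just an example):

import json
import urllib.request

# One-shot (non-streaming) call to the local Ollama REST API.
payload = json.dumps({
    "model": "llama3.2",
    "prompt": "Why is the sky blue? Answer in one sentence.",
    "stream": False,  # return a single JSON object instead of a stream
}).encode()

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])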




Saturday, April 19, 2025

learning Zig programming language

Zig is a relatively new programming language
intended to be a "modern version of C": a low-level "systems" language
for code where performance and safety are important.

While both Rust and Go have similar objectives, there are some important differences.
Zig, in fact, is the closest to the "spirit" of C:
relatively simple syntax, with full control of (and responsibility for) memory management.

Rust is much bigger and more complex, but also more widely used.
Rust may be to Zig almost what C++ is to C.

Go, on the other hand, is also simple and fast, but it has a GC (automated memory management).
That makes it much easier to use correctly, while limiting its use in real-time-critical applications.

At this time, there are not many prominent projects using Zig yet; Rust is more popular.

Zig (programming language) - Wikipedia

Home ⚡ Zig Programming Language

Learn ⚡ Zig Programming Language

Getting Started ⚡ Zig Programming Language

Samples ⚡ Zig Programming Language

const std = @import("std");

pub fn main() !void {
    const stdout = std.io.getStdOut().writer();
    try stdout.print("hello world!\n", .{});
}

$ zig run hello-world.zig

or

$ zig build-exe hello-world.zig
$ ./hello-world
hello world!


Here is a fun demo of a low-level Windows GUI program, fully functional!
Adding more GUI elements quickly gets much more verbose, but the resulting EXE is small and fast.

// https://ziglang.org/learn/samples/
const win = @import("std").os.windows;

// Declare the Win32 MessageBoxA function from user32.dll;
// Zig can call it directly, without bindings or wrapper libraries.
extern "user32" fn MessageBoxA(?win.HWND, [*:0]const u8, [*:0]const u8, u32) callconv(win.WINAPI) i32;

pub fn main() !void {
    // null parent window handle, message text, title, default (OK-only) box
    _ = MessageBoxA(null, "world!", "Hello", 0);
}
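To try it, build on (or cross-compile for) Windows, e.g. zig build-exe hello-gui.zig -target x86_64-windows (the file name is arbitrary), then run the resulting EXE.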


What is the EASIEST way to learn "zig"? : r/Zig


e-books:


courses:



Zig or Go, but not Rust (ThePrimeTime)

Go GUI


There are several Go packages that wrap the Windows GDI functions in a more idiomatic and less verbose interface. Here are some of the better options for creating a chess GUI:

  • Fyne: A cross-platform GUI toolkit that works on Windows and abstracts away the low-level details:
go get fyne.io/fyne/v2
fyne-io/fyne: Cross platform GUI toolkit in Go inspired by Material Design @GitHub

  • Walk: A Windows-specific GUI toolkit for Go that provides native Windows GUI capabilities:
go get github.com/lxn/walk
lxn/walk: A Windows GUI toolkit for the Go Programming Language @GitHub

  • Gio: A cross-platform GUI library with immediate mode rendering:
go get gioui.org
Gio UI

  • Ebiten: More game-focused but excellent for something like chess:
go get github.com/hajimehoshi/ebiten/v2





Even without packages, Go can access Windows (and other OS) GUI features.
Here is a super-simple but functional Go GUI "Hello World" program.

package main

import (
    "syscall"
    "unsafe"
)

var (
    user32      = syscall.NewLazyDLL("user32.dll") // Loads the user32.dll library dynamically
    messageBoxA = user32.NewProc("MessageBoxA")    // Gets a reference to the MessageBoxA function
)

func main() {
    messageBoxA.Call(
        0, // NULL for the parent window handle
        uintptr(unsafe.Pointer(syscall.StringBytePtr("world!"))), // message text
        uintptr(unsafe.Pointer(syscall.StringBytePtr("Hello"))),  // title
        0, // type of message box, default
    )
}
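On Windows, save this as main.go and run it with go run main.go; no cgo and no third-party packages are needed, since NewLazyDLL loads user32.dll at run time.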


Thursday, April 17, 2025

Git: from 10 days to 20 years

It is not only JavaScript that was created in 10 days; apparently Git was also made that quickly.

And both Git and JavaScript take much more than 10 days to learn properly :)

It is strange how the software market works: "free" and "available" often win.

Linus Torvalds built Git in 10 days - and never imagined it would last 20 years | ZDNET

Git is celebrating its 20th anniversary. Here's why Torvalds never intended for it to stick around.

...Torvalds's answer was to create a true open-source VCS alternative: Git. In a mere 10 days, he developed a working version of Git, which was first committed on April 7, 2005.

AI models: OpenAI o3, o4-mini + Codex CLI

Introducing OpenAI o3 and o4-mini | OpenAI

"smartest and most capable models to date with full tool access"

trained to think for longer before responding



Codex CLI: frontier reasoning in the terminal
... a lightweight coding agent you can run from your terminal. It works directly on your computer and is designed to maximize the reasoning capabilities of models like o3 and o4-mini, with upcoming support for additional API models like GPT‑4.1⁠.

You can get the benefits of multimodal reasoning from the command line by passing screenshots or low-fidelity sketches to the model, combined with access to your code locally. We think of it as a minimal interface to connect our models to users and their computers. Codex CLI is fully open-source at github.com/openai/codex.

(a direct alternative to Anthropic's "Claude Code" CLI tool)

Wednesday, April 16, 2025

AI: VS Code: Agent Mode Day: MCP protocol

The Future of AI in VS Code: MCP Servers Explained! - YouTube by Burke Holland

The Only 3 Videos You Need to Get Started with MCP! - YouTube by VS Code 

Visual Studio Code + Model Context Protocol (MCP) Servers Getting Started Guide | What, Why, How - YouTube by James Montemagno

 🔴 VS Code Live: Agent Mode Day - YouTube (5-hour recording of a live online event)

Agenda:
  • Keynote - MCP: The Protocol Behind the Curtain
  • Agent mode
  • Project Padawan
  • Using & Building MCP servers
  • BYOK
  • Notebooks
  • Copilot Next Edit Suggestions (NES)
  • Completions model
  • Customizing Copilot
  • Live coding with Wes Bos


Introduction - Model Context Protocol

MCP is an open protocol that standardizes how applications provide context to LLMs. Think of MCP like a USB-C port for AI applications. Just as USB-C provides a standardized way to connect your devices to various peripherals and accessories, MCP provides a standardized way to connect AI models to different data sources and tools.
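For a taste of the other side of that "USB-C port", here is a minimal MCP server sketch using the official MCP Python SDK (the mcp package from modelcontextprotocol/python-sdk); the server name and the add tool are made up for this example.

from mcp.server.fastmcp import FastMCP

# A tiny MCP server exposing one tool; clients discover it via tools/list.
mcp = FastMCP("demo")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default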





The GitHub MCP Server is a Model Context Protocol (MCP) server that provides seamless integration with GitHub APIs, enabling advanced automation and interaction capabilities for developers and tools.




AI music creator: Suno.AI

Suno | AI Music


Is AI Killing Music?? Suno and AI Audio - YouTube


My thoughts about Suno AI and AI Music : r/SunoAI

... BLOWN AWAY by the very first result...


How to generate your own music with the AI-powered Suno | ZDNET


"in style of Santana"

The River of Dreams by ds2345 | Suno

The River of Dreams by ds2345 | Suno (2)

Moonlight Serenade by ds2345 | Suno

Dancing Under the Stars by ds2345 | Suno

Moj Svet by ds2345 | Suno in Serbian

Moj Svet by ds2345 | Suno (2) female voice

Spring Fusion by ds2345 | Suno

Freedom's Rhythm by ds2345 | Suno

It's My Life by ds2345 | Suno

Vision Child by ds2345 | Suno


Tuesday, April 15, 2025

AI Agent protocols: MCP, A2A

The Model Context Protocol is an open standard that enables developers to build secure, two-way connections between their data sources and AI-powered tools. The architecture is straightforward: developers can either expose their data through MCP servers or build AI applications (MCP clients) that connect to these servers.


At its core, MCP follows a client-server architecture where a host application can connect to multiple servers.


Google: A2A protocol



  • MCP (Model Context Protocol) for tools and resources
    • Connects agents to tools, APIs, and resources with structured inputs/outputs.
    • Google ADK supports MCP tools, enabling a wide range of MCP servers to be used with agents.
  • A2A (Agent2Agent Protocol) for agent-agent collaboration
    • Dynamic, multimodal communication between different agents without sharing memory, resources, or tools.
    • An open standard driven by the community.
    • Samples are available using Google ADK, LangGraph, and CrewAI (see the request sketch below).
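To make those bullet points concrete, here is the approximate shape of an A2A task request. A2A also uses JSON-RPC 2.0; the method and field names below follow the initial draft spec, and the task id and message text are made up for this sketch.

import json

# Illustrative A2A "tasks/send" request: one agent asks another to work on a task.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tasks/send",
    "params": {
        "id": "task-123",
        "message": {
            "role": "user",
            "parts": [{"type": "text", "text": "Find a flight to Paris"}],
        },
    },
}
print(json.dumps(request, indent=2))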


Monday, April 14, 2025

AI: OpenAI GPT-4.1: API only, for coding

Introducing GPT-4.1 in the API | OpenAI

A new series of GPT models featuring major improvements on coding, instruction following, and long context—plus first-ever nano model.
  • Coding: GPT‑4.1 scores 54.6% on SWE-bench Verified, improving by 21.4 percentage points over GPT‑4o and 26.6 points over GPT‑4.5, making it a leading model for coding.
  • Instruction following: On Scale's MultiChallenge benchmark, a measure of instruction-following ability, GPT‑4.1 scores 38.3%, a 10.5-point increase over GPT‑4o.
by Matthew Berman

1000x faster db: TigerBeetle

TigerBeetle website




created in "ZIg" language, like bun.js

Sunday, April 13, 2025

AI power usage estimate error 120 000 times

Excellent podcast interview with a very prominent person from the computing world,
co-creator of the RISC processor design, including the modern and very popular "open hardware" RISC-V.

But the most interesting part of the conversation was about a mistake in a scientific paper
where the energy used for AI training was over-estimated by 120,000 times!

That estimate was based on public data from Google that was misinterpreted, without insight into how Google actually does AI training. David Patterson knows this well, since he now works at Google
and is deeply involved in energy optimization.

And now, with this wrong info published and cited many times, people are making all kinds of assumptions and even plans. Yes, data centers use a lot of energy, but nowhere near the levels some people claim.

Turing Award Special: A Conversation with David Patterson - Software Engineering Daily


Good News About the Carbon Footprint of Machine Learning Training

"Unfortunately, some ... papers misinterpreted the NAS estimate as the training cost for the model it discovered, yet emissions for this particular NAS are ~1300x larger than for training the model. These papers estimated that training the Evolved Transformer model takes two million GPU hours, costs millions of dollars, and that its carbon emissions are equivalent to five times the lifetime emissions of a car. In reality, training the Evolved Transformer model on the task examined by the UMass researchers and following the 4M best practices takes 120 TPUv2 hours, costs $40, and emits only 2.4 kg (0.00004 car lifetimes), 120,000x less. This gap is nearly as large as if one overestimated the CO2e to manufacture a car by 100x and then used that number as the CO2e for driving a car."

David Patterson (computer scientist) - Wikipedia

JavaScript V8 engine internals

very "technical" and interesting

Land ahoy: leaving the Sea of Nodes · V8  
"V8’s end-tier optimizing compiler, Turbofan, is famously one of the few large-scale production compilers to use Sea of Nodes (SoN). However, since almost 3 years ago, we’ve started to get rid of Sea of Nodes and fall back to a more traditional Control-Flow Graph (CFG) Intermediate Representation (IR), which we named Turboshaft. By now, the whole JavaScript backend of Turbofan uses Turboshaft instead, and WebAssembly uses Turboshaft throughout its whole pipeline."



Google created V8 for its Chrome browser, and both were first released in 2008.[4] The lead developer of V8 was Lars Bak, and it was named after the powerful car engine.[5] For several years, Chrome was faster than other browsers at executing JavaScript


In 1994, he joined LongView Technologies LLC, where he designed and implemented high performance virtual machines for both Smalltalk and Java. After Sun Microsystems acquired LongView in 1997, Bak became engineering manager and technical lead in the HotSpot team at Sun's Java Software Division where he developed a high-performance Java virtual machine

With a team of 12 engineers, Bak coordinated the development of the V8 JavaScript interpreter for Chrome

Bak co-developed the Dart programming language presented at the 2011 Goto conference in Aarhus, Denmark