Thursday, May 22, 2025

book: Apple in China; HarmonyOS vs Android?

"For readers of Walter Isaacson’s Steve Jobs and Chris Miller’s Chip War, a riveting look at how Apple helped build China’s dominance in electronics assembly and manufacturing only to find itself trapped in a relationship with an authoritarian state making ever-increasing demands.

...Apple was lured by China’s seemingly inexhaustible supply of cheap labor. Soon it was sending thousands of engineers across the Pacific, training millions of workers, and spending hundreds of billions of dollars to create the world’s most sophisticated supply chain....

Without explicitly intending to, Apple built an advanced electronics industry within China, only to discover that its massive investments in technology upgrades had inadvertently given Beijing a power..."

Patrick McGee - author of Apple in China


Apple iMac - Colors (1998) - YouTube

A tribute to the iMac "Colors" ad from 1999 - YouTube


HarmonyOS - Wikipedia

HarmonyOS (HMOS) is a distributed operating system developed by Huawei for smartphones, tablets, smart TVs, smart watches, personal computers and other smart devices. It has a microkernel design with a single framework: the operating system selects suitable kernels from the abstraction layer for devices that use diverse resources.

HarmonyOS:

HarmonyOS was initially based on Android, but HarmonyOS NEXT (the newest iteration) has moved away from Android's core and is built on a self-developed kernel. It uses a microkernel, which is claimed to be more efficient and secure than Android's monolithic kernel.





AI in Go: Model Context Protocol (MCP)

Making AI "cloud native"... by using GoLang

mark3labs/mcp-go: A Go implementation of the Model Context Protocol (MCP), enabling seamless integration between LLM applications and external data sources and tools. @GitHub


The Model Context Protocol (MCP) lets you build servers that expose data and functionality to LLM applications in a secure, standardized way. Think of it like a web API, but specifically designed for LLM interactions. MCP servers can:

  • Expose data through Resources (think of these sort of like GET endpoints; they are used to load information into the LLM's context)
  • Provide functionality through Tools (sort of like POST endpoints; they are used to execute code or otherwise produce a side effect)
  • Define interaction patterns through Prompts (reusable templates for LLM interactions)
  • And more!
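
Below is a minimal sketch of an MCP server in Go that exposes one Tool, loosely following the mark3labs/mcp-go README; exact function names and argument-access helpers differ between library versions, so treat this as an illustration rather than a copy-paste recipe.

    package main

    import (
        "context"
        "fmt"

        "github.com/mark3labs/mcp-go/mcp"
        "github.com/mark3labs/mcp-go/server"
    )

    // helloHandler implements the Tool: it receives the tool call arguments
    // and returns a text result for the LLM. Argument access differs across
    // mcp-go versions (newer releases add helpers like req.RequireString).
    func helloHandler(ctx context.Context, req mcp.CallToolRequest) (*mcp.CallToolResult, error) {
        name, ok := req.Params.Arguments["name"].(string)
        if !ok {
            return nil, fmt.Errorf("name must be a string")
        }
        return mcp.NewToolResultText(fmt.Sprintf("Hello, %s!", name)), nil
    }

    func main() {
        // Create the MCP server (name + version are reported to clients).
        s := server.NewMCPServer("demo-server", "0.1.0")

        // Tools are the "POST-like" side of MCP: callable functionality.
        tool := mcp.NewTool("hello_world",
            mcp.WithDescription("Say hello to someone"),
            mcp.WithString("name",
                mcp.Required(),
                mcp.Description("Name of the person to greet"),
            ),
        )
        s.AddTool(tool, helloHandler)

        // Serve over stdio, the transport most local MCP clients expect.
        if err := server.ServeStdio(s); err != nil {
            fmt.Printf("server error: %v\n", err)
        }
    }

Resources and Prompts can be registered on the same server object in a similar way.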

metoro-io/mcp-golang: Write Model Context Protocol servers in few lines of go code. Docs at https://mcpgolang.com @GitHub


and another one...

a robust Model Context Protocol (MCP) server implementation in Golang, integrating the client with Python and Ollama using LangChain  Post | LinkedIn

and another alternative...

Introduction - mcp-golang

Wednesday, May 21, 2025

Google I/O 2025; Gemma 3 open model AI

Google I/O 2025 Recaps - YouTube


Google I/O 2025: Everything announced at this year's developer conference | TechCrunch

Android Studio adds 'agentic AI' with Journeys feature, Agent Mode | TechCrunch



Google goes wild, again... 11 things you missed at I/O - YouTube


Sergey Brin, Google Co-Founder | All-In Live from Miami - YouTube


DeepMind CEO Demis Hassabis + Google Co-Founder Sergey Brin: AGI by 2030? - YouTube


Google DELIVERED - Everything you missed from I/O 2025 - YouTube
by Matthew Berman


Gemma 3: Google’s new open model based on Gemini 2.0

Gemma - Google DeepMind


OpenAI += "io" HW startup - $6.4B (Jony Ive, iPhone designer)

OpenAI buys iPhone architect’s startup for $6.4bn | Technology | The Guardian

 Sam and Jony introduce io | OpenAI

Jony Ive confirms he's working on an AI hardware product

Jony Ive AI hardware project


Sir Jonathan Paul Ive is a British-American designer. He is best known for his work at Apple Inc., where he was senior vice president of industrial design and chief design officer. Ive is the founder of LoveFrom, a creative collective that works with Ferrari, Airbnb, OpenAI and other global brands.




"....the device could be larger than Humane’s AI pin, but with a “form factor as compact and elegant as an iPod Shuffle.”
...“one of the intended use cases” is wearing the device around your neck. It also may not come with a display, Kuo says, featuring just built-in cameras and microphones for “environmental detection.”
... The device could also connect to smartphones and PCs to use their computing and display capabilities."



10x Faster TypeScript compiler, in Go

Maybe the same or a similar technique could be used to make TypeScript apps themselves faster, by compiling them to Go.
For now, it is the compiler that has been rewritten in Go,
and it compiles 10x faster.

 A 10x Faster TypeScript - TypeScript (video)
by Anders Hejlsberg, TypeScript architect

"plug-and-play" replacement to original TypeScript compiler

The native implementation will drastically improve editor startup, reduce most build times by 10x, and substantially reduce memory usage.

Codebase                 Size (LOC)   Current   Native   Speedup
VS Code                  1,505,000    77.8s     7.5s     10.4x
Playwright                 356,000    11.1s     1.1s     10.1x
TypeORM                    270,000    17.5s     1.3s     13.5x
date-fns                   104,000     6.5s     0.7s      9.5x
tRPC (server + client)      18,000     5.5s     0.6s      9.1x
rxjs (observable)            2,100     1.1s     0.1s     11.0x





TypeScript Migrates to Go: What's Really Behind That 10x Performance Claim?

What's actually getting faster is the TypeScript compiler, not the TypeScript language itself or JavaScript's runtime performance. Your TypeScript code will compile faster, but it won't suddenly execute 10x faster in the browser or Node.js.

Tuesday, May 20, 2025

AI HW NVIDIA: 1,000,000x in 10 years

 NVIDIA CEO Jensen Huang Leaves Everyone SPEECHLESS (Supercut) - YouTube

Highlights from the latest #nvidia keynote at COMPUTEX 2025. Topics include @NVIDIA's MASSIVE new Blackwell Ultra GPUs, new products like DGX Spark, DGX Station, and RTX PRO Server, and how they'll power generative AI models like #chatgpt by #openai and #deepseek R1, reshaping artificial intelligence and computing as we know it.


AI: Microsoft GitHub Copilot coding agent, open sourced

🧑‍🚀 Microsoft Build 2025, AI's Biggest Hurdles, & AI Kills Entry Jobs


Microsoft CEO Satya Nadella unveiled a major leap for GitHub Copilot, shifting its role from a helpful pair programmer to a more autonomous peer programmer. The upgraded AI is now equipped to take on complex coding tasks independently.

"pair programming" specifically refers to two developers working together,
"peer programming" can encompass larger groups (often called "mob programming").



We are excited to introduce a new coding agent for GitHub Copilot. Embedded directly into GitHub, the agent starts its work when you assign a GitHub issue to Copilot or prompt it in VS Code. The agent spins up a secure and fully customizable development environment powered by GitHub Actions.



with Erich Gamma (!)





React Router v7 = Remix

React Router v7 | Remix

React Router v7 brings everything you love about Remix back into React Router proper. We encourage all Remix v2 users to upgrade to React Router v7.

For the majority of the React ecosystem that has been around for the last 10 years, we believe React Router v7 will be the smoothest way to bridge the gap between React 18 and 19.


Ryan Florence is a co-creator of React Router and Remix, and in this episode he speaks with Josh Goldberg about the Remix project.

Apparently, the Remix company was acquired by Shopify and encouraged to merge Remix back into React Router, to simplify the developer experience.





Monday, May 19, 2025

Microsoft Build: xAI Grok on Azure




excellent AI news

Elon Musk Stuns Microsoft CEO - YouTube

Conversation with NVIDIA CEO Jensen Huang: Satya Nadella at Microsoft Build 2025 - YouTube




Create Virtual Agents with Copilot | Microsoft Copilot Studio

Multi-agent orchestration and more: Copilot Studio announcements | Microsoft Copilot Blog

Microsoft goes all in on AI agents at annual Build conference @reuters
Microsoft (MSFT.O) said on Monday it would offer new AI models made by Elon Musk's xAI, Meta Platforms (META.O) and European startups Mistral and Black Forest Labs hosted in its own data centers, and unveiled a new artificial-intelligence tool designed to complete software coding tasks on its own.








cloud: Azure Durable Functions vs AWS Step Functions

Durable Functions Overview - Azure | Microsoft Learn

Azure Durable Functions documentation | Microsoft Learn


Pick the Perfect Tool: Azure Durable Functions vs. Azure Functions for Your Workflow | LinkedIn


azure durable functions vs aws step functions - Google Search

AI Overview:

Azure Durable Functions and AWS Step Functions are both serverless workflow orchestration services, but they differ in their approach and scope. Durable Functions, an extension of Azure Functions, provides code-based orchestration of long-running, stateful workflows, while Step Functions is a managed service within AWS that orchestrates workflows using a visual, declarative approach with state machines.
Feature-by-feature (Azure Durable Functions vs. AWS Step Functions):
  • Orchestration approach — Durable Functions: code-based, using orchestrator and activity functions. Step Functions: visual and declarative, using state machines defined in JSON (Amazon States Language, ASL).
  • Focus — Durable Functions: integrating individual function executions into long-running workflows. Step Functions: orchestrating and coordinating multiple AWS services and tasks into workflows.
  • State management — both handle state management and checkpointing transparently, allowing developers to focus on the business/workflow logic.
  • Integration — Durable Functions: integrates with Azure Functions and other Azure services. Step Functions: integrates with AWS services like Lambda, S3, DynamoDB, and more.
  • Developer experience — Durable Functions: more familiar to developers already using Azure Functions. Step Functions: provides a visual, low-code approach for designing workflows.
  • Cost — Durable Functions: pay-per-use, based on function invocations and execution time. Step Functions: pay-per-use, based on state transitions executed in the workflow.
  • Language support — Durable Functions: supports the languages supported by Azure Functions (C#, F#, JavaScript, Python, etc.). Step Functions: state machines are primarily defined in JSON (ASL).
Key Differences:
  • Code vs. Visual:
    Durable Functions requires you to write code for the orchestration logic, while Step Functions provides a visual, drag-and-drop interface for designing workflows. 
  • Integration Scope:
    Durable Functions primarily integrates with Azure services, while Step Functions is designed for orchestrating AWS services. 
  • Learning Curve:
    Durable Functions might be more familiar for developers already using Azure Functions, while Step Functions might have a steeper learning curve for those unfamiliar with state machines. 
In summary:

If you prefer a code-based approach and are already using Azure Functions, Azure Durable Functions might be a good fit.

If you prefer a visual, declarative approach and need to orchestrate multiple AWS services, AWS Step Functions might be more suitable.
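
As a small Go-side illustration of the Step Functions model (the workflow itself is defined declaratively in ASL; application code mainly starts and monitors executions), here is a hedged sketch using aws-sdk-go-v2; the state machine ARN and the JSON input are placeholders, not real resources.

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/aws/aws-sdk-go-v2/aws"
        "github.com/aws/aws-sdk-go-v2/config"
        "github.com/aws/aws-sdk-go-v2/service/sfn"
    )

    func main() {
        ctx := context.Background()

        // Load region and credentials from the usual default sources.
        cfg, err := config.LoadDefaultConfig(ctx)
        if err != nil {
            log.Fatal(err)
        }
        client := sfn.NewFromConfig(cfg)

        // Start one execution of an existing state machine.
        // The ARN and input below are placeholders for illustration only.
        out, err := client.StartExecution(ctx, &sfn.StartExecutionInput{
            StateMachineArn: aws.String("arn:aws:states:us-east-1:123456789012:stateMachine:OrderWorkflow"),
            Input:           aws.String(`{"orderId": "42"}`),
        })
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("started execution:", aws.ToString(out.ExecutionArn))
    }

With Durable Functions, the equivalent workflow logic would instead live inside an orchestrator function written in one of the Azure Functions languages, rather than in an external JSON definition.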

Sunday, May 18, 2025

OpenAI Codex: Agentic Coding

OpenAI Unveils Codex - Fully Agentic Coding - YouTube


...you can access Codex through the sidebar in ChatGPT and assign it new coding tasks by typing a prompt and clicking “Code”. If you want to ask Codex a question about your codebase, click “Ask”. Each task is processed independently in a separate, isolated environment preloaded with your codebase. Codex can read and edit files, as well as run commands including test harnesses, linters, and type checkers. Task completion typically takes between 1 and 30 minutes, depending on complexity, and you can monitor Codex’s progress in real time.




list: Famous Computer Technologists & Scientists

random list, in no particular order...

Famous Computer Scientists | List of the Top Well-Known Computer Scientists

25 Famous Computer Scientists and Tech Duos Who Impacted the Industry | Rasmussen University

List of computer scientists - Wikipedia

15 Famous Computer Scientists Who Changed Our World



Saturday, May 17, 2025

AI: "Nikola Tesla" Model & GG by Tony Robbins

clever :)

"fine-tuned" Llama3 model to imitate Nikola Tesla personality.

Nikolas Tesla Model | Open WebUI Community

Nikola Tesla - Wikipedia


Modern: "GG" AI Model based on Tony Robbins 

Mastermind Business System Reviews & GG AI Tony Robbins

The "GG" AI model, referred to as GG 2.0, is a key component of Tony Robbins and Dean Graziosi's Mastermind Business System. It functions as a 24/7 AI business coach, providing personalized guidance and support based on Robbins and Graziosi's proven business strategies. GG 2.0 is designed to help users navigate the complexities of building and scaling a digital business, offering real-time insights, suggestions, and accountability

GG Walkthrough: Mastermind.com's AI Assistant - YouTube

MasterMind.com

[Thrive in 2025] by Tony Robbins: How To Build The Foundation To A Thriving Business You Love!

Tony Robbins 10-Minute Morning Routine to Prime for Success - YouTube

Friday, May 16, 2025

AI MCP, inspired by LSP

solving the many-to-many integration problem, first for IDEs (LSP) and now for AI tools (MCP)

Anthropic and the Model Context Protocol with David Soria Parra - Software Engineering Daily

The Model Context Protocol, or MCP, is a new open standard that connects AI assistants to arbitrary data sources and tools, such as codebases, APIs, and content repositories. Instead of building bespoke integrations for each system, developers can use MCP to establish secure, scalable connections between AI models and the data they need. By standardizing this connection layer, MCP enables models to access relevant information in real time, leading to more accurate and context-aware responses.

David Soria Parra is a Member of the Technical Staff at Anthropic, where he co-created the Model Context Protocol. He joins the podcast to talk about his career and the future of context-aware AI.








The Language Server Protocol (LSP) is an open, JSON-RPC-based protocol for use between source code editors or integrated development environments (IDEs) and servers that provide "language intelligence tools":[1] programming language-specific features like code completion, syntax highlighting and marking of warnings and errors, as well as refactoring routines. 

The goal of the protocol is to allow programming language support to be implemented and distributed independently of any given editor or IDE.[2] In the early 2020s, LSP quickly became a "norm" for language intelligence tools providers.

LSP was originally developed for Microsoft Visual Studio Code and is now an open standard
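
To make the "JSON-RPC-based" part concrete, here is a small Go sketch that builds and frames a single LSP textDocument/hover request. The struct types are hand-rolled minimal shapes for illustration (not taken from an LSP library), and the file URI is a placeholder.

    package main

    import (
        "encoding/json"
        "fmt"
        "log"
    )

    // Minimal JSON-RPC 2.0 request envelope used by LSP.
    type Request struct {
        JSONRPC string      `json:"jsonrpc"`
        ID      int         `json:"id"`
        Method  string      `json:"method"`
        Params  interface{} `json:"params"`
    }

    // Parameters for textDocument/hover, following the LSP specification.
    type HoverParams struct {
        TextDocument TextDocumentIdentifier `json:"textDocument"`
        Position     Position               `json:"position"`
    }

    type TextDocumentIdentifier struct {
        URI string `json:"uri"`
    }

    type Position struct {
        Line      int `json:"line"`      // zero-based
        Character int `json:"character"` // zero-based
    }

    func main() {
        req := Request{
            JSONRPC: "2.0",
            ID:      1,
            Method:  "textDocument/hover",
            Params: HoverParams{
                TextDocument: TextDocumentIdentifier{URI: "file:///home/user/project/main.go"},
                Position:     Position{Line: 9, Character: 4},
            },
        }

        body, err := json.Marshal(req)
        if err != nil {
            log.Fatal(err)
        }

        // On the wire, LSP frames each JSON body with a Content-Length header.
        fmt.Printf("Content-Length: %d\r\n\r\n%s", len(body), body)
    }

An editor sends messages like this to the language server, which answers with hover text, completions, diagnostics, and so on; MCP applies the same idea to AI assistants and the data sources and tools they connect to.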


Language Server Extension Guide | Visual Studio Code Extension API









MCP example:



"Source", sublanguages of JavaScript, for SICP

Structure and Interpretation of Computer Programs - Wikipedia

"a computer science textbook by Massachusetts Institute of Technology professors Harold Abelson and Gerald Jay Sussman with Julie Sussman. It is known as the "Wizard Book" in hacker culture.[1] It teaches fundamental principles of computer programming, including recursion, abstraction, modularity, and programming language design and implementation."

Source is a family of languages, designed for the textbook Structure and Interpretation of Computer Programs, JavaScript Adaptation (SICP JS) and supported by the Source Academy system. The languages are called Source §1, Source §2, Source §3 and Source §4, corresponding to the respective chapters 1, 2, 3 and 4 of the textbook. Each previous Source language is a sublanguage of the next, and all Source languages are sublanguages of JavaScript. (Chapter 5 does not require any features beyond Source §4.) This webpage contains the description of the Source languages and the libraries they come with.

Source Academy @GitHub: "Online experiential environment for computational thinking"

Source Academy.org

The Source Academy is a computer-mediated learning environment for studying the structure and interpretation of computer programs. Students write and run their programs in their web browser, using sublanguages of JavaScript called Source, designed for the textbook Structure and Interpretation of Computer Programs, JavaScript Edition.



Source Academy playground

sicpjs.pdf JS edition, 642 pages



Video Lectures | Structure and Interpretation of Computer Programs | Electrical Engineering and Computer Science | MIT OpenCourseWare





Thursday, May 15, 2025

AI: Google’s AlphaEvolve: "Agent OS": What => How

Not "open", but powerful AI models; "AI evolution"




Google’s AlphaEvolve is making new discoveries in math… - YouTube 4min, by Fireship


Google’s new AlphaEvolve shows what happens when an AI agent graduates from lab demo to production work, and you’ve got one of the most talented technology companies driving it.

Built by Google’s DeepMind, the system autonomously rewrites critical code and already pays for itself inside Google. It shattered a 56-year-old record in matrix multiplication (the core of many machine learning workloads) and clawed back 0.7% of compute capacity across the company’s global data centers.


"Beyond simple scripts: The rise of the “agent operating system”







AI API: Windows Copilot Runtime

like the Win32 API was for GUI programming on Windows, the WCR API is now for AI on Windows (on Copilot+ PCs)

Windows Copilot Runtime is a big deal ✅️ - YouTube

@ Post | LinkedIn



Integrate AI in enterprise apps using Windows Copilot Runtime APIs powered by | BRK303 - YouTube


 Windows Copilot Runtime overview | Microsoft Learn

Windows Copilot Runtime includes the following features and AI-backed APIs powered by models running locally on the Windows device. These APIs will ship in the Windows App SDK and are currently only available in the latest experimental channel release of the Windows App SDK.


Similar to OpenAI's GPT Large Language Model (LLM) that powers ChatGPT, Phi is a Small Language Model (SLM) developed by Microsoft Research to perform language-processing tasks on a local device. Phi Silica is specifically designed for Windows devices with a Neural Processing Unit (NPU), allowing text generation and conversation features to run in a high performance, hardware-accelerated way directly on the device. 


Get started with AI on Windows | Microsoft Learn

The ability to build intelligent AI experiences on, and with, Windows is developing rapidly. Windows Copilot Runtime offers AI-backed features and APIs on Copilot+ PCs. These features are in active development and run locally in the background at all times. Learn more about Windows Copilot Runtime.