Sunday, July 20, 2025

Azure OpenAI API: REST & SDK

The OpenAI API has become almost an "industry standard", since other AI API providers often offer a similar or identical interface.

But that API also changes over time: the "completions" interface from two years ago became "chat completions" last year, and now there is "responses".

What does this mean? To get new features, we often need to change the way we call AI APIs,
whether using REST directly or an SDK when one is available for the chosen programming language.
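
For example, the same one-shot prompt looks different in each generation of the interface. A minimal sketch using the openai Python package (v1.x is assumed; the legacy "completions" style only works with older instruct-type models):

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Legacy "completions" style (older instruct models only)
legacy = client.completions.create(
    model="gpt-3.5-turbo-instruct",
    prompt="how tall is k2?",
)
print(legacy.choices[0].text)

# "chat completions" style
chat = client.chat.completions.create(
    model="gpt-4.1-mini",
    messages=[{"role": "user", "content": "how tall is k2?"}],
)
print(chat.choices[0].message.content)

# Current "responses" style
resp = client.responses.create(
    model="gpt-4.1-mini",
    input="how tall is k2?",
)
print(resp.output_text)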

Azure OpenAI

OpenAI is hosted on Azure, and its API is available either directly from OpenAI or through Azure OpenAI.
And that is changing too.

Until recently, the offering was called Azure OpenAI and sat alongside the "Microsoft native" Azure AI services.
Now there is "Azure AI Foundry", which brings together a collection of AI services and models, including OpenAI,
but also some third-party models and Microsoft's own AI services.

Indirectly, this is a response to AWS competition in the form of Amazon Bedrock, which has a similar structure
and hosts Anthropic Claude and other third-party models. Competition is good.

Setting up the OpenAI API on Azure is not automatic. It requires deliberately deploying selected models and configuring access security, and there are a lot of options along the way. Not easy.
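
The examples below authenticate with an API key and read the endpoint and key from environment variables. A minimal sanity check before running them (the variable names match what the classes below expect):

import os

# Fail fast if the Azure OpenAI configuration is missing.
for name in ("AZURE_OPENAI_ENDPOINT", "AZURE_OPENAI_API_KEY"):
    if not os.environ.get(name):
        raise RuntimeError(f"missing environment variable: {name}")

# AZURE_OPENAI_ENDPOINT looks like: https://YOUR-RESOURCE.openai.azure.com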

And the Azure OpenAI API documentation, while it exists, is less than complete.
In particular, the examples do not cover the recent "responses" style of calling the OpenAI API.

The good thing is that the original OpenAI SDKs (at least the Python and JavaScript ones)
can be used directly with Azure deployments.

Here are a few working examples in Python and JavaScript, with both REST and SDK variants where available.

Python

azure_openai_sdk_class.py

import os
from openai import AzureOpenAI

class AzureOpenaiSdkClient:
    def __init__(self,
                 endpoint=None,
                 api_key=None,
                 deployment="gpt-4.1-mini",
                 api_version="2025-04-01-preview"):
        self.endpoint = endpoint or os.environ.get("AZURE_OPENAI_ENDPOINT")
        self.api_key = api_key or os.environ.get("AZURE_OPENAI_API_KEY")
        self.deployment = deployment
        self.api_version = api_version
        self.model = deployment
       
        self.client = AzureOpenAI(
            azure_endpoint=self.endpoint,
            api_version=self.api_version,
            api_key=self.api_key,
        )

    def responses_text(self, input: str, model2: str = None) -> str:
        model_to_use = model2 if model2 is not None else self.model
        response = self.client.responses.create(
            model=model_to_use,
            input=input,
        )
        # print(response.model_dump_json(indent=2))
        text = response.output[0].content[0].text
        # print(text)
        return text

    def chat_text(self, input: str, model2: str = None) -> str:
        model_to_use = model2 if model2 is not None else self.model
        chat_prompt = [
            {
                "role": "system",
                "content": [
                    {
                        "type": "text",
                        "text": "You are a helpful AI assistant."
                    }
                ]
            },
            {
                "role": "user",
                "content": [
                    {
                        "type": "text",
                        "text": input
                    }
                ]
            },
        ]

        messages = chat_prompt

        # Generate the completion
        result = self.client.chat.completions.create(
            model=model_to_use,
            messages=messages,
            max_tokens=800,
            temperature=1,
            top_p=1,
            frequency_penalty=0,
            presence_penalty=0,
            stop=None,
            stream=False
        )
        # print(result.to_json())
        text = result.choices[0].message.content
        # print(text)
        return text
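
The class above sets stream=False; chat completions can also be streamed for incremental output. A minimal sketch, assuming the same AzureOpenAI client as above (chat_stream is a hypothetical helper, not part of the class):

def chat_stream(client, model, input):
    # Request a stream of chunks instead of waiting for the full reply.
    stream = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": input}],
        stream=True,
    )
    for chunk in stream:
        # Some chunks (e.g. the final one) carry no content delta.
        if chunk.choices and chunk.choices[0].delta.content:
            print(chunk.choices[0].delta.content, end="", flush=True)
    print()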

test.py

from azure_openai_sdk_class import AzureOpenaiSdkClient

def main():
    # Create client with default parameters (from environment variables)
    client = AzureOpenaiSdkClient()
   
    prompt = 'how tall is k2?'
    print('azure openai, prompt:', prompt)

    chat_text = client.chat_text(prompt)
    print('chat.completions:', chat_text)
   
    resp_text = client.responses_text(prompt)
    print('responses:', resp_text)

if __name__ == "__main__":
    main()

azure_openai_rest_class.py

import os
import requests
# import dotenv
# dotenv.load_dotenv()

class AzureOpenaiRestClient:
    def __init__(self,
                 endpoint=None,
                 api_key=None,
                 deployment="gpt-4.1-mini",
                 api_version="2025-04-01-preview"):
        self.endpoint = endpoint or os.environ.get("AZURE_OPENAI_ENDPOINT")
        self.api_key = api_key or os.environ.get("AZURE_OPENAI_API_KEY")
        self.deployment = deployment
        self.api_version = api_version
        self.model = deployment

    def http_post(self, url, headers, body):
        # Timeout avoids hanging forever on a stalled connection.
        response = requests.post(url, headers=headers, json=body, timeout=60)
        response.raise_for_status()  # Raises an HTTPError for bad responses
        return response.json()

    def chat(self, input):
        url = f"{self.endpoint}/openai/deployments/{self.deployment}/chat/completions?api-version={self.api_version}"
        body = {
            "messages": [
                {"role": "system", "content": "You are a helpful assistant."},
                {"role": "user", "content": input},
            ],
            "max_tokens": 800,
            "temperature": 1,
            "top_p": 1,
            "frequency_penalty": 0,
            "presence_penalty": 0,
            "stop": None
        }
        headers = {
            "Content-Type": "application/json",
            "api-key": self.api_key
        }
        return self.http_post(url, headers, body)

    def extract_chat_content(self, result):
        return result["choices"][0]["message"]["content"]

    def chat_text(self, input: str) -> str:
        return self.extract_chat_content(self.chat(input))

    def responses(self, user_content):
        data = {
            "model": self.model,
            "input": user_content
        }
        url = f"{self.endpoint}/openai/responses?api-version={self.api_version}"
        headers = {
            "Content-Type": "application/json",
            "api-key": self.api_key
        }
        return self.http_post(url, headers, data)

    def extract_responses_text(self, result):
        return result["output"][0]["content"][0]["text"]

    def responses_text(self, input: str) -> str:
        return self.extract_responses_text(self.responses(input))
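
One caveat: extract_responses_text indexes output[0], which assumes the first output item is the assistant message. For some models (e.g. reasoning models) the "responses" payload can contain other item types first, so a more defensive extractor may be safer. A sketch, not part of the class above:

def extract_responses_text_safe(result):
    # Collect text from all message items, skipping other output item types.
    parts = []
    for item in result.get("output", []):
        if item.get("type") == "message":
            for content in item.get("content", []):
                if content.get("type") == "output_text":
                    parts.append(content.get("text", ""))
    return "".join(parts)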

test.py

from azure_openai_rest_class import AzureOpenaiRestClient

def main():
    try:
        # Create client with default parameters (from environment variables)
        client = AzureOpenaiRestClient()
       
        prompt = 'how tall is k2?'
        print('prompt: ', prompt)

        chat_text = client.chat_text(prompt)
        print('chat: ', chat_text)

        resp_text = client.responses_text(prompt)
        print('responses: ', resp_text)

    except Exception as err:
        print(f"error: {err}")

if __name__ == "__main__":
    main()

JavaScript

azure-openai-sdk-class.js

import { AzureOpenAI } from "openai";
// import dotenv from "dotenv";
// dotenv.config();

export class AzureOpenaiSdkClient {
    constructor(
        endpoint = process.env["AZURE_OPENAI_ENDPOINT"],
        apiKey = process.env["AZURE_OPENAI_API_KEY"],
        deployment = "gpt-4.1-mini",
        apiVersion = "2025-04-01-preview"
    ) {
        this.endpoint = endpoint;
        this.apiKey = apiKey;
        this.deployment = deployment;
        this.apiVersion = apiVersion;
        this.model = deployment;
       
        this.client = new AzureOpenAI({
            endpoint: this.endpoint,
            apiKey: this.apiKey,
            apiVersion: this.apiVersion,
            deployment: this.deployment
        });
    }

    async responsesText(input, model2 = null) {
        const modelToUse = model2 !== null ? model2 : this.model;
        const response = await this.client.responses.create({
            model: modelToUse,
            input: input,
        });
        // console.log(JSON.stringify(response, null, 2));
        const text = response.output[0].content[0].text;
        // console.log(text);
        return text;
    }

    async chatText(input, model2 = null) {
        const modelToUse = model2 !== null ? model2 : this.model;
        const chatPrompt = [
            {
                role: "system",
                content: [
                    {
                        type: "text",
                        text: "You are a helpful AI assistant."
                    }
                ]
            },
            {
                role: "user",
                content: [
                    {
                        type: "text",
                        text: input
                    }
                ]
            }
        ];
        const messages = chatPrompt;
        // Generate the completion
        const result = await this.client.chat.completions.create({
            model: modelToUse,
            messages: messages,
            max_tokens: 800,
            temperature: 1,
            top_p: 1,
            frequency_penalty: 0,
            presence_penalty: 0,
            stop: null,
            stream: false
        });
        // console.log(JSON.stringify(result, null, 2));
        const text = result.choices[0].message.content;
        // console.log(text);
        return text;
    }
}

test.js

import { AzureOpenaiSdkClient } from './azure-openai-sdk-class.js';

async function main() {
    // Create client with default parameters (from environment variables)
    const client = new AzureOpenaiSdkClient();
   
    const prompt = 'how tall is k2?';
    console.log('azure openai, prompt:', prompt);
   
    const chatText = await client.chatText(prompt);
    console.log('chat.completions:', chatText);
   
    const respText = await client.responsesText(prompt);
    console.log('responses:', respText);
}

main().catch((err) => {
    console.error("The sample encountered an error:", err);
});

azure-openai-rest-class.js

// import dotenv from "dotenv";
// dotenv.config();

export class AzureOpenaiRestClient {
    constructor(
        endpoint = process.env["AZURE_OPENAI_ENDPOINT"],
        apiKey = process.env["AZURE_OPENAI_API_KEY"],
        deployment = "gpt-4.1-mini",
        apiVersion = "2025-04-01-preview"
    ) {
        this.endpoint = endpoint;
        this.apiKey = apiKey;
        this.deployment = deployment;
        this.apiVersion = apiVersion;
        this.model = deployment;
    }

    async httpPostJson(url, headers, body) {
        const response = await fetch(url, {
            method: "POST",
            headers,
            body: JSON.stringify(body)
        });
        if (!response.ok) {
            throw new Error(`HTTP error! status: ${response.status}`);
        }
        return await response.json();
    }

    async chat(input) {
        const url = `${this.endpoint}/openai/deployments/${this.deployment}/chat/completions?api-version=${this.apiVersion}`;
        const body = {
            messages: [
                { role: "system", content: "You are a helpful assistant." },
                { role: "user", content: input },
            ],
            max_tokens: 800,
            temperature: 1,
            top_p: 1,
            frequency_penalty: 0,
            presence_penalty: 0,
            stop: null
        };
        const headers = {
            "Content-Type": "application/json",
            "api-key": this.apiKey
        };
        return await this.httpPostJson(url, headers, body);
    }

    extractChatContent(result) {
        return result.choices[0].message.content;
    }

    async chatText(input) {
        const result = await this.chat(input);
        return this.extractChatContent(result);
    }

    async responses(userContent) {
        const data = {
            model: this.model,
            input: userContent
        };
        const url = `${this.endpoint}/openai/responses?api-version=${this.apiVersion}`;
        const headers = {
            "Content-Type": "application/json",
            "api-key": this.apiKey
        };
        return await this.httpPostJson(url, headers, data);
    }

    extractResponsesText(result) {
        return result.output[0].content[0].text;
    }

    async responsesText(input) {
        const result = await this.responses(input);
        return this.extractResponsesText(result);
    }
}

test.js

import { AzureOpenaiRestClient } from './azure-openai-rest-class.js';

async function main() {
    // Create client with default parameters (from environment variables)
    const client = new AzureOpenaiRestClient();
   
    const prompt = 'how tall is k2?';
    console.log('azure openai, prompt:', prompt);
   
    const chatText = await client.chatText(prompt);
    console.log('chat.completions:', chatText);
   
    const respText = await client.responsesText(prompt);
    console.log('responses:', respText);
}

main().catch((err) => {
    console.error("The sample encountered an error:", err);
});




web: shadcn/ui (vs MUI)

Build your Component Library - shadcn/ui

A set of beautifully-designed, accessible components and a code distribution platform. Works with your favorite frameworks. Open Source. Open Code.

GitHub - shadcn-ui/ui

Shadcn UI and Material UI (MUI) are both popular React UI libraries, but they cater to different needs. MUI is a comprehensive, feature-rich library with a strong emphasis on Material Design and a large component library, while Shadcn UI prioritizes flexibility, customization, and a lightweight, modular approach, often paired with Tailwind CSS.

Shadcn/UI is designed specifically for use with React. It provides a collection of customizable components that are built on top of Radix UI and styled with Tailwind CSS, all intended to be consumed within a React application.

Material UI vs Shadcn: UI library war