Tuesday, May 14, 2024

AI: GPT-4o from OpenAI

Hello GPT-4o | OpenAI

OpenAI's new flagship model can reason across audio, vision, and text in real time.

GPT-4o (“o” for “omni”) is a step towards much more natural human-computer interaction: it accepts as input any combination of text, audio, image, and video and generates any combination of text, audio, and image outputs. It can respond to audio inputs in as little as 232 milliseconds, with an average of 320 milliseconds, which is similar to human response time in a conversation. It matches GPT-4 Turbo performance on text in English and code, with significant improvement on text in non-English languages, while also being much faster and 50% cheaper in the API. GPT-4o is markedly better at vision and audio understanding than existing models.
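Since GPT-4o shares the Chat Completions endpoint with GPT-4 Turbo, trying it from code is essentially a one-line model swap. Below is a minimal sketch, assuming the OpenAI Python SDK (v1.x), an OPENAI_API_KEY set in the environment, and a placeholder image URL; per OpenAI's announcement, the API exposed GPT-4o as a text-and-vision model at launch, with audio to follow.

from openai import OpenAI

# The client reads OPENAI_API_KEY from the environment by default.
client = OpenAI()

# One request mixing text and an image, addressed to the new model.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what is in this image."},
                # Placeholder URL, for illustration only.
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
)

print(response.choices[0].message.content)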

Introducing GPT-4o - YouTube

Interview Prep with GPT-4o - YouTube

Math problems with GPT-4o - YouTube

Sam Altman talks GPT-4o and Predicts the Future of AI - YouTube

OpenAI GPT-4o is now rolling out — here's how to get access | Tom's Guide

OpenAI and Google are launching supercharged AI assistants. Here's how you can try them out. | MIT Technology Review

OpenAI struck first on Monday, when it debuted its new flagship model GPT-4o. The live demonstration showed it reading bedtime stories and helping to solve math problems, all in a voice that sounded eerily like Joaquin Phoenix’s AI girlfriend in the movie Her (a trait not lost on CEO Sam Altman).

Google Reveals CRAZY New AI to CRUSH OpenAI GPT4-o (Supercut) - YouTube
