Tuesday, June 25, 2024

Perplexity.AI search engine alternative

What is Perplexity?

Perplexity is an alternative to traditional search engines: you pose your questions directly and receive concise, accurate answers backed by a curated set of sources. It offers a conversational interface, contextual awareness, and personalization that learns your interests and preferences over time.

Perplexity’s mission is to make searching for information online feel like having a knowledgeable assistant guiding you. It is a powerful productivity and knowledge tool that can save you time and energy on mundane tasks across a multitude of use cases.

How does Perplexity accomplish this?

With the help of its advanced answer engine, Perplexity processes your questions and tasks. It then uses predictive text capabilities to generate useful responses, choosing the best one from multiple sources, and summarizes the results concisely.
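
At a high level this resembles a retrieval-augmented generation (RAG) pipeline: retrieve candidate sources for the query, have a language model synthesize an answer from them, and attach citations. The sketch below is a hypothetical illustration of that flow, not Perplexity's actual implementation; search_web and llm_complete are assumed placeholders for whatever search backend and LLM are available.

# Hypothetical RAG-style answer flow, loosely mirroring the description above.
# search_web and llm_complete are assumed placeholders, not real Perplexity APIs.
from typing import Callable

def answer_query(
    query: str,
    search_web: Callable[[str, int], list[dict]],  # returns [{"url": ..., "snippet": ...}, ...]
    llm_complete: Callable[[str], str],            # wraps whatever LLM is available
    top_k: int = 5,
) -> str:
    # 1. Retrieve candidate sources for the question.
    sources = search_web(query, top_k)

    # 2. Build a prompt that asks the model to answer only from those sources
    #    and to cite them by number, mimicking inline citations like [1], [2].
    context = "\n".join(
        f"[{i + 1}] {s['url']}\n{s['snippet']}" for i, s in enumerate(sources)
    )
    prompt = (
        "Answer the question using only the numbered sources below. "
        "Cite sources inline as [n]. Be concise.\n\n"
        f"Sources:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

    # 3. Generate and return the summarized, cited answer.
    return llm_complete(prompt)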

Perplexity AI is an AI chatbot-powered research and conversational search engine that answers queries using natural language predictive text.[2][3] Launched in 2022, Perplexity generates answers using sources from the web and cites links within the text response.[4] Perplexity works on a freemium model: the free product uses a Perplexity model based on OpenAI's GPT-3.5 combined with the company's standalone large language model (LLM), which incorporates natural language processing (NLP) capabilities, while the paid version, Perplexity Pro, has access to GPT-4, Claude 3, Mistral Large, Llama 3, and an Experimental Perplexity Model.[3][4][1] As of early 2024, it had about 10 million monthly users.[5]
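
For completeness, Perplexity also offers a developer API that follows the OpenAI chat-completions format. The snippet below is a minimal sketch, assuming the openai Python client, the https://api.perplexity.ai base URL, and a placeholder model name; check the current API documentation for the exact model identifiers, which may differ from the models listed above.

# Minimal sketch of calling Perplexity's OpenAI-compatible API.
# The model name is an assumption; consult the current docs for valid identifiers.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_PERPLEXITY_API_KEY",     # placeholder key
    base_url="https://api.perplexity.ai",  # Perplexity's API endpoint
)

response = client.chat.completions.create(
    model="sonar",  # assumed model name; substitute one listed in the docs
    messages=[
        {"role": "system", "content": "Be precise and concise."},
        {"role": "user", "content": "What is Perplexity AI?"},
    ],
)

print(response.choices[0].message.content)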

Podcasts:

AI vs SW Security (cURL)

The I in LLM stands for intelligence | daniel.haxx.se

"Having a bug bounty means that we offer real money in rewards to hackers who report security problems. The chance of money attracts a certain amount of “luck seekers”. People who basically just grep for patterns in the source code or maybe at best run some basic security scanners, and then report their findings without any further analysis in the hope that they can get a few bucks in reward money.

...When reports are made to look better and to appear to have a point, it takes a longer time for us to research and eventually discard it. Every security report has to have a human spend time to look at it and assess what it means.

...Right now, users seem keen at using the current set of LLMs, throwing some curl code at them and then passing on the output as a security vulnerability report. What makes it a little harder to detect is of course that users copy and paste and include their own language as well. The entire thing is not exactly what the AI said, but the report is nonetheless crap."

Daniel Stenberg - daniel.haxx.se
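
The "grep for patterns" approach Stenberg mentions amounts to little more than the sketch below: scan C sources for calls to functions with a bad reputation and report every hit. This is an illustration of the shallow scan being criticized, not a useful vulnerability finder; a hit says nothing about exploitability without further analysis.

# Naive pattern scan of the kind the quote describes: flag every call to a
# "suspicious" C function. A hit proves nothing without real analysis.
import re
import sys
from pathlib import Path

SUSPICIOUS = re.compile(r"\b(strcpy|strcat|sprintf|gets)\s*\(")

def naive_scan(root: str) -> None:
    for path in Path(root).rglob("*.c"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if SUSPICIOUS.search(line):
                print(f"{path}:{lineno}: {line.strip()}")

if __name__ == "__main__":
    naive_scan(sys.argv[1] if len(sys.argv) > 1 else ".")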