The models, the tools, the implementations, everything is moving incredibly fast.
But a very interesting trend I've been seeing is that the open source models are improving on benchmarks slightly faster than their closed source alternatives. Slowly but surely, they are catching up.
The day may come when the open source models are the best models, the ones with the greatest number of tools and the largest variety of implementations, because naturally they're open and available for everyone to work on.
There have been numerous new developments in running models more efficiently: running larger models on even smaller devices, and doing so with less power and computational overhead. There has also been a new advancement in one of my favourite open source models, which may just be a big leap forward for how these models evolve. And that is what today's episode of AI Unchained is all about.
A big leap for local LLMs.
AI News Roundup
Running AI Locally on a Smartphone: A Performance Evaluation of a Quantized Large Language Model on Various Smartphones (Link: http://tinyurl.com/mv4k7eyp)
LLM in a flash: Efficient Large Language Model Inference with Limited Memory (Link: http://tinyurl.com/mrc5ur7m)
Explore the new Mixtral model below:
Poe.com (Link: https://poe.com/chat/2vdailufawo9jnkcb4l)
Fireworks AI Models (Link: https://app.fireworks.ai/models)
GGUF Model for Local Running (Link: http://tinyurl.com/34btrnre)
Run with LM Studio - Discover, download, and run local LLMs (Link: https://lmstudio.ai/)
Unleashed Chat - One button to deploy your own chat (Link: https://unleashed.chat/app/chat)
Host Links
Check out our awesome sponsors!