Meta Llama 3 — Minority Report

Sam Lin
3 min read · Apr 22, 2024
Google Building PR55

Long live the king Llama

On Apr. 18, 2024, Meta released Llama 3, which is almost as good as the State-of-the-Art (SOTA) proprietary LLMs: GPT-4, Gemini Ultra & Claude 3 Opus. Thanks to the Meta AI team, not only is the gap between the Closed & Open camps closing quickly, but there is also one more Meta AI chatbot for consumers. It will be increasingly difficult to justify why a Gen AI workflow has to live in a walled garden instead of the open field.

Open LLMs are closing the gap with Closed LLMs, by Maxime Labonne + Grok 1 & Llama 3

Only if history rhymes

In 2010, Opera had a team of ~10 to port the Opera Mobile browser to MediaTek feature phones. I was a proud PM & developer working with a great team shipping a full HTML browser to the most affordable computers: feature phones. In other words, MediaTek feature phones “got smarter” with Opera Mobile 10. Thanks to its flexible architecture, the Opera browser ran not only on Windows, Mac and Linux, but also on Internet TVs, Nintendo devices, smartphones, and then feature phones too. At the time, the founder’s vision of open standards but closed implementation had been serving the business very well. Until it wasn’t.

  1. In 2005, Apple open sourced its WebKit browser engine.
  2. In 2008, Google launched the Chrome browser with WebKit inside.
  3. In 2013, Opera dropped its own engine and switched to WebKit from the Chromium project.
https://github.com/meta-llama/llama3

Don’t eat the grass around the nest

“狡兔不吃窩邊草.
A clever rabbit won’t eat the grass around the nest.” — Chinese saying

Why would a company open source a $10 billion model? No accountant can do such math, nor will a market reward such an investment. But sometimes, a company can be “lucky” enough to figure out a way to capture value creatively. When it does, everyone wins.

ArtificialAnalysis.ai: Llama 3 Instruct (70B): API Provider Benchmarking & Analysis

“We have a long history of open sourcing software. We don’t tend to open source our product. We don’t take the code for Instagram and make it open source. We take a lot of the low-level infrastructure and we make that open source.” — Mark Zuckerberg at Dwarkesh Podcast

Zuckerberg also said Meta may save billions or tens of billions of dollars when others figure out how to run Llama models more cheaply. Groq “happens” to be serving Llama 3 with very high throughput at lower cost: 877 tokens/s on Llama 3 8B and 284 tokens/s on Llama 3 70B, only 12 hours after the release. Furthermore, Meta also provides Llama Recipes to fine-tune and run it locally, on-prem, and in every cloud: AWS, Google Cloud, Microsoft Azure, etc. It’s literally “wide open” for anyone to play with. It’s commoditizing the LLM.
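For a sense of what “running it locally” can look like, here is a minimal sketch using the Hugging Face transformers library. The model ID, precision, and generation settings below are assumptions to check against the official Llama Recipes, and you need to have accepted the Llama 3 license on Hugging Face first.

```python
# Minimal sketch: chatting with Llama 3 8B Instruct locally via Hugging Face transformers.
# Assumes the model ID "meta-llama/Meta-Llama-3-8B-Instruct" is accessible to your account
# and that you have a GPU with enough memory for the 8B weights in bfloat16.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Meta-Llama-3-8B-Instruct",
    torch_dtype=torch.bfloat16,  # half precision to fit on a single modern GPU
    device_map="auto",           # let accelerate decide where to place the weights
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Why would a company open source an LLM?"},
]

# Recent transformers versions apply the Llama 3 chat template to a message list automatically.
output = generator(messages, max_new_tokens=256)
print(output[0]["generated_text"][-1]["content"])
```

The same prompt could be sent to any hosted provider (Groq, AWS, Google Cloud, Azure, etc.) instead; that interchangeability is exactly the commoditization described above.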


Sam Lin

A Taiwanese living in Silicon Valley since 2014, with my own random opinions to share. They are my own, not those of the companies I work for.