Disclaimer: The views and opinions expressed in this blog are entirely my own and do not necessarily reflect the views of my current or any previous employer. This blog may also contain links to other websites or resources. I am not responsible for the content on those external sites or any changes that may occur after the publication of my posts.
End Disclaimer
All history was a palimpsest, scraped clean and reinscribed exactly as often as was necessary.
- George Orwell, 1984
News
AIML
We’re Focusing on the Wrong Kind of AI Apocalypse
The Fight for AI Talent: Pay Million-Dollar Packages and Buy Whole Teams
The AI Credit Rule: Give Credit to AI the Way You Would Give Credit to a Human
Chatbot letdown: Hype hits rocky reality (h/t to the HK)
“The king is dead”—Claude 3 surpasses GPT-4 on Chatbot Arena for the first time
The GPT-4 barrier has finally been broken
Want to Know if AI Will Take Your Job? I Tried Using It to Replace Myself
Inside the Creation of the World’s Most Powerful Open Source AI Model
Painting
No. 13 (White, Red on Yellow), 1958, Mark Rothko, Oil and acrylic with powdered pigments on canvas, 95-1/4 × 81-3/8 × 1-3/8 in. (241.9 × 206.7 × 3.5 cm), The Met
Cyber
The Audacious MGM Hack That Brought Chaos to Las Vegas
Markets
Negative Equity Risk Premium Estimates Persist For US Equities
The Short-Vol Trade Is Back: Why Some Investors Think It’s Driving Tranquility in Markets
Hey ChatGPT, Why Isn’t My AI Fund Up Like Nvidia?
Why Has the EV Market Stalled?
State Farm won't renew homeowners coverage for 72,000 California homes and apartments
Insurers Report Rising Hail Damage Claims
Rising insurance costs, ample inventory create a unique market in Southwest Florida
Misc
What to know about charging EVs at home
Rick Beato: Why Tool's Danny Carey Is Your Drummer's Favorite Drummer
Andrew Huberman’s Mechanisms of Control
‘Extraordinary’ archive of ancient brains could help shed light on mental illness
Trader Joe’s just increased the price of a banana for the first time in more than 20 years
These Century-Old Stone “Tsunami Stones” Dot Japan’s Coastline (2015)
The Wi-Fi only works when it's raining
Tuition now costs $90,000 a year or more at some US universities
Paper
The Unreasonable Ineffectiveness of the Deeper Layers
Explores the impact of layer pruning on Large Language Models (LLMs):
Strategy: A simple layer-pruning strategy is tested on open-weight LLMs, showing minimal degradation on question-answering benchmarks until a large fraction (up to roughly half) of the layers is removed.
Implications: Findings suggest current LLMs might not fully utilize deeper layers, and pruning combined with techniques like quantization can greatly improve efficiency.
Methodology: A contiguous block of layers is selected for removal based on the similarity (angular distance) between the block's input and output representations, followed by a small amount of parameter-efficient finetuning (QLoRA) to "heal" the pruned model; see the sketch after this list.
Results: Experiments demonstrate that removing deep layers has little effect on model performance, highlighting potential for more efficient LLM designs.
Conclusion: The study suggests that shallow layers may play a more critical role than deep ones in storing knowledge, opening paths for research on optimizing model architectures and training methods.
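To make the methodology concrete, here is a minimal Python sketch of similarity-based block pruning, assuming a Hugging Face-style decoder model whose layers live in model.model.layers. The model name, calibration text, and token-averaged angular distance are my own illustrative choices (the paper measures the distance per sequence position), not the authors' code.

```python
# Minimal sketch of similarity-based layer pruning, under the assumptions
# stated above. Not the authors' implementation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-hf"  # illustrative choice of model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
model.eval()

n = 8  # size of the contiguous block of layers to remove

def angular_distance(h_a: torch.Tensor, h_b: torch.Tensor) -> torch.Tensor:
    # arccos of cosine similarity, normalized to [0, 1]; averaged over all
    # tokens here for simplicity (the paper uses a per-position measure).
    cos = torch.nn.functional.cosine_similarity(h_a, h_b, dim=-1)
    return torch.arccos(cos.clamp(-1.0, 1.0)).mean() / torch.pi

# Hidden states on a small calibration batch; hidden[l] is the input to layer l.
calibration_texts = ["The quick brown fox jumps over the lazy dog."]  # placeholder
inputs = tokenizer(calibration_texts, return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs, output_hidden_states=True).hidden_states

# Choose the start index whose block input (layer l) and block output
# (layer l + n) are most similar, i.e., have minimal angular distance.
num_layers = len(model.model.layers)
scores = [angular_distance(hidden[l], hidden[l + n]) for l in range(num_layers - n + 1)]
start = int(torch.tensor(scores).argmin())

# Drop layers start .. start+n-1 and reconnect the rest of the stack.
model.model.layers = torch.nn.ModuleList(
    layer for i, layer in enumerate(model.model.layers)
    if not (start <= i < start + n)
)
model.config.num_hidden_layers = len(model.model.layers)
# The paper then "heals" the pruned model with a small amount of QLoRA
# finetuning; that step, and the per-layer index bookkeeping some
# transformers versions need for the KV cache, are omitted here.
```

The intuition behind the selection rule: if a block barely rotates its input representation, deleting it and handing the earlier representation straight to the later layers should change the model's output very little.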