Monday, October 14, 2024

Fusion news, investment


US firm shows breakthrough nuclear fusion device prototype with 100 kW of input power

https://interestingengineering.com/energy/us-firm-nuclear-fusion-device-prototype


Every fusion startup that has raised over $100M

https://techcrunch.com/2024/10/04/every-fusion-startup-that-has-raised-over-100m/


...and how far away are we from fusion?

(from another previous post)

https://www.wsj.com/world/china/china-us-fusion-race-4452d3be

https://www.nasdaq.com/articles/can-you-invest-in-nuclear-fusion-stocks

US data centers will soon hit the limits of the energy grid

All the more reason the U.S. should stop funding foreign wars, update our own infrastructure, and continue fracking until alternative energy sources are reliable and abundant.

https://www.semafor.com/article/10/11/2024/microsoft-azure-cto-us-data-centers-will-soon-hit-limits-of-energy-grid

Here's the previous post that includes the article about a new technique for cutting AI energy consumption by 95%...


Wednesday, October 9, 2024

Pigeon-toed posing

I've been noticing this for a while. I figured it was deliberate, and sure enough, it's no coincidence.

It helps women look more ridiculous and less original. Very effective.

https://www.dailymail.co.uk/femail/article-3872996/Why-posing-pigeon-toed-suddenly-vogue-listers-turn-knees-look-younger-thinner-child-like-stance-taking-red-carpet.html 

https://x.com/thismorning/status/791369567280861184

Researchers Claim New Technique Slashes AI Energy Use By 95%
(decrypt.co)

Researchers at BitEnergy AI, Inc. have developed Linear-Complexity Multiplication (L-Mul), a technique that reduces AI model power consumption by up to 95% by replacing energy-intensive floating-point multiplications with simpler integer additions. This method promises significant energy savings without compromising accuracy, but it requires specialized hardware to fully realize its benefits. Decrypt reports:

L-Mul tackles the AI energy problem head-on by reimagining how AI models handle calculations. Instead of complex floating-point multiplications, L-Mul approximates these operations using integer additions. So, for example, instead of multiplying 123.45 by 67.89, L-Mul breaks it down into smaller, easier steps using addition. This makes the calculations faster and uses less energy, while still maintaining accuracy.

The results seem promising. "Applying the L-Mul operation in tensor processing hardware can potentially reduce 95% energy cost by element wise floating point tensor multiplications and 80% energy cost of dot products," the researchers claim. Without getting overly complicated, what that means is simply this: if a model used this technique, it would require 95% less energy to think, and 80% less energy to come up with new ideas, according to this research.
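The "replace multiplies with adds" idea can be illustrated with a classic trick in the same family: for positive floats, adding the raw IEEE-754 bit patterns (and subtracting the duplicated exponent bias) approximates multiplication, because the bit pattern roughly encodes the number's logarithm. Note this is a rough sketch of the principle (Mitchell's approximation), not the paper's exact fp8 L-Mul algorithm:

```python
import struct

def approx_mul(x: float, y: float) -> float:
    """Approximate x * y for positive floats with one integer addition.

    A float32 bit pattern roughly encodes log2 of the value, so adding
    bit patterns ~ adding logarithms ~ multiplying. The exponent bias
    (127 << 23) appears twice in the sum, so subtract it once.
    """
    # Reinterpret the float32 bit patterns as unsigned integers.
    xi = struct.unpack("<I", struct.pack("<f", x))[0]
    yi = struct.unpack("<I", struct.pack("<f", y))[0]
    # One integer add (plus a constant subtraction) replaces the multiply.
    zi = xi + yi - (127 << 23)
    return struct.unpack("<f", struct.pack("<I", zi))[0]
```

For 123.45 × 67.89 this lands within a few percent of the exact 8381.02, and powers of two come out exact. The published L-Mul refines this kind of approximation with a mantissa correction term and targets low-precision tensors, so treat the above purely as intuition.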

The algorithm's impact extends beyond energy savings. L-Mul outperforms current 8-bit standards in some cases, achieving higher precision while using significantly less bit-level computation. Tests across natural language processing, vision tasks, and symbolic reasoning showed an average performance drop of just 0.07% -- a negligible tradeoff for the potential energy savings. Transformer-based models, the backbone of large language models like GPT, could benefit greatly from L-Mul. The algorithm seamlessly integrates into the attention mechanism, a computationally intensive part of these models. Tests on popular models such as Llama, Mistral, and Gemma even revealed some accuracy gain on certain vision tasks.
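As a toy illustration of how an additive approximation could slot into an attention-style computation, here is a NumPy sketch (my own construction, not the paper's kernel) that builds Q·Kᵀ scores from element-wise approximate multiplies plus sums. It only handles positive float32 values; the real L-Mul handles sign and mantissa explicitly:

```python
import numpy as np

BIAS = np.int64(127 << 23)  # duplicated IEEE-754 float32 exponent bias

def approx_mul(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Element-wise multiply approximated by adding float32 bit patterns."""
    # View bit patterns as integers; widen to int64 to avoid overflow.
    ai = a.astype(np.float32).view(np.int32).astype(np.int64)
    bi = b.astype(np.float32).view(np.int32).astype(np.int64)
    # One addition per element stands in for each multiplication.
    return (ai + bi - BIAS).astype(np.int32).view(np.float32)

def approx_attention_scores(q: np.ndarray, k: np.ndarray) -> np.ndarray:
    """(seq, d) queries and keys -> (seq, seq) raw attention scores,
    with every multiply in the dot products replaced by approx_mul."""
    return np.array([[approx_mul(qi, kj).sum() for kj in k] for qi in q])
```

The dot product becomes a sum of approximate products, which is essentially the "80% energy cost of dot products" case the researchers describe, though the actual savings depend on hardware support.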

At an operational level, L-Mul's advantages become even clearer. The research shows that multiplying two float8 numbers (the way AI models would operate today) requires 325 operations, while L-Mul uses only 157 -- less than half. "To summarize the error and complexity analysis, L-Mul is both more efficient and more accurate than fp8 multiplication," the study concludes. But nothing is perfect, and this technique has a major Achilles' heel: it requires a special type of hardware, so current hardware isn't optimized to take full advantage of it. Plans for specialized hardware that natively supports L-Mul calculations may already be in motion. "To unlock the full potential of our proposed method, we will implement the L-Mul and L-Matmul kernel algorithms on hardware level and develop programming APIs for high-level model design," the researchers say.

Tuesday, October 1, 2024

Ocean plastic propaganda

I first noticed a trend of young people obsessed with plastic pollution, and found myself having to tolerate paper straws, which suck. I thought there must be some massive propaganda campaign at work.

I ran across this video, so I thought I'd better archive it.

https://www.youtube.com/watch?v=IglBJ62Sv3Q

Saturday, September 28, 2024