Thursday, March 27, 2025

Quantum computer generates truly random numbers

https://financialpost.com/pmn/business-pmn/jpmorgan-says-quantum-experiment-generated-truly-random-numbers

https://www.nature.com/articles/s41586-025-08737-1

https://www.bloomberg.com/news/articles/2025-03-26/jpmorgan-says-quantum-experiment-generated-truly-random-numbers

JPMorgan Chase used a quantum computer from Honeywell's Quantinuum to generate and mathematically certify truly random numbers -- an advancement that could significantly enhance encryption, security, and financial applications. The breakthrough was validated with help from U.S. national laboratories and has been published in the journal Nature.

From a report:

Between May 2023 and May 2024, cryptographers at JPMorgan wrote an algorithm for a quantum computer to generate random numbers, which they ran on Quantinuum's machine. The US Department of Energy's supercomputers were then used to test whether the output was truly random. "It's a breakthrough result," Marco Pistoia, project lead and Head of Global Technology Applied Research at JPMorgan, told Bloomberg in an interview. "The next step will be to understand where we can apply it."

Applications could ultimately include more energy-efficient cryptocurrency, online gambling, and any other activity hinging on complete randomness, such as deciding which precincts to audit in elections.
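As a vastly simplified illustration of what "testing whether output is truly random" involves: the actual certification protocol required Department of Energy supercomputers, but a basic statistical check such as the NIST SP 800-22 frequency (monobit) test conveys the idea. The sketch below (function name and sample bit strings are illustrative) only checks whether 0s and 1s appear in roughly equal numbers:

```python
import math

def monobit_test(bits):
    """NIST SP 800-22 frequency (monobit) test: checks whether the counts
    of 0s and 1s in a bit string are close to what a truly random source
    would produce. Returns a p-value; p >= 0.01 means the test passes."""
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)      # +1 for each 1, -1 for each 0
    s_obs = abs(s) / math.sqrt(n)
    return math.erfc(s_obs / math.sqrt(2))     # two-sided tail probability

# A heavily biased string fails; a balanced one passes.
biased = [1] * 90 + [0] * 10
balanced = [0, 1] * 50
print(monobit_test(biased) < 0.01)     # True: rejected as non-random
print(monobit_test(balanced) >= 0.01)  # True: consistent with randomness
```

Real certification is far stronger: it must rule out not just statistical bias but any classical process that could have produced the same output.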

Saturday, March 15, 2025

Google AI Studio: live stream analysis and instruction (camera, computer desktop, software learning)

Updated review video:

https://www.youtube.com/watch?v=RxCZhltR9Cw

--

https://www.youtube.com/watch?v=e6c_uwQwV9A

Desktop assistant: https://aistudio.google.com/

Use this link to go live through the camera and look at your environment:

  1. Visit https://aistudio.google.com/live

  2. Click on the camera icon at the bottom of the interface

  3. Share your phone screen with the AI

This functionality allows for various use cases, such as getting help with applications, receiving instructions, or obtaining real-time commentary on what you're viewing on your phone. Note that this feature is part of Google AI Studio's capabilities and is separate from the standard Gemini website and mobile app experience.


Generate images with long text:

https://generativeai.pub/4-creative-use-cases-of-googles-gemini-2-0-flash-generate-images-with-long-text-in-10-seconds-c7699da41508

Meta AI Decodes Thoughts into Text

 https://www.perplexity.ai/page/meta-ai-decodes-thoughts-into-DnLY1gk2Rl.a.EtfMhlUZQ

Curated by dailyed, Mar 10, 2025

Meta AI researchers have achieved a significant breakthrough in decoding brain activity into text without invasive procedures, demonstrating the ability to reconstruct typed sentences from brain signals with up to 80% accuracy at the character level using advanced brain scanning techniques and artificial intelligence.

Meta's Brain-to-Text Technology

Collaborating with the Basque Center on Cognition, Brain and Language, Meta's research team has developed an AI model capable of transforming neural signals into text. This groundbreaking system records brain activity while participants type sentences, then trains an artificial intelligence to decode these signals into written words. The technology has demonstrated impressive results, achieving a character-error-rate of just 19% for the most successful participants when using magnetoencephalography (MEG) data. This performance significantly surpasses previous methods based on electroencephalography (EEG), marking a substantial leap forward in non-invasive brain-computer interfaces.


Non-Invasive Methods: MEG and EEG

Meta's brain-to-text system, dubbed Brain2Qwerty, primarily utilizes two non-invasive neuroimaging techniques: magnetoencephalography (MEG) and electroencephalography (EEG). MEG has shown superior performance, achieving a character-error-rate of just 19%, compared to EEG's higher error rates. While both methods offer real-time brain activity monitoring, MEG provides higher spatial resolution by detecting the magnetic fields produced by neural electrical currents, whereas EEG measures electrical activity directly from the scalp.

Key differences between MEG and EEG in this context include:

  • Accuracy: MEG outperforms EEG in decoding accuracy, with Meta's AI model correctly predicting up to 80% of written characters from MEG data.

  • Equipment: MEG requires a magnetically shielded room and more sophisticated machinery, while EEG is more portable and widely accessible.

  • Signal quality: MEG signals are less distorted by the skull and scalp, offering cleaner data for AI interpretation.

  • Cost and availability: EEG is generally more cost-effective and widely available, making it more suitable for potential widespread application despite its lower accuracy.
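The character-error-rate figures quoted above are conventionally computed as the Levenshtein edit distance between the decoded text and what the participant actually typed, divided by the length of the reference. A minimal Python sketch (function name and example strings are illustrative, not Meta's implementation):

```python
def character_error_rate(reference, hypothesis):
    """Character error rate (CER): Levenshtein edit distance between the
    decoded text and the reference, divided by the reference length.
    A CER of 19% means roughly 1 in 5 characters is wrong.
    Assumes a non-empty reference string."""
    m, n = len(reference), len(hypothesis)
    # Classic dynamic-programming edit distance, row by row.
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            curr[j] = min(prev[j] + 1,         # deletion
                          curr[j - 1] + 1,     # insertion
                          prev[j - 1] + cost)  # substitution
        prev = curr
    return prev[n] / m

print(character_error_rate("hello world", "hella world"))  # 1 edit / 11 chars ≈ 0.09
```

Under this metric, the reported 19% CER corresponds to correctly predicting roughly 80% of characters, matching the headline figure.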


Hierarchical Neural Dynamics

Meta's research has uncovered fascinating insights into the hierarchical nature of neural dynamics during language production. This breakthrough sheds light on how the brain processes and generates language, revealing a structured, layered sequence of neural activity. Key findings from the study include:

  • Identification of a "dynamic neural code" linking successive thoughts

  • Evidence that the brain processes language in a hierarchical manner

  • Continuous holding of multiple layers of information during language production

  • Seamless transition from abstract thoughts to structured sentences while maintaining coherence

These discoveries provide a precise computational breakdown of the neural dynamics coordinating language production in the human brain. The research suggests that the brain doesn't simply process one word at a time, but rather maintains a complex, multi-layered representation of information throughout the language production process. This understanding could inform the development of more sophisticated AI language models and enhance our ability to create brain-computer interfaces for communication assistance.


Limitations and Future Applications

Meta's brain-to-text technology, while groundbreaking, faces several limitations and challenges. However, these hurdles also point to exciting future applications and areas for further research:

  • Current limitations:

    • MEG requires a magnetically shielded environment, limiting portability

    • Subjects must remain still for accurate readings

    • Only tested on healthy individuals; efficacy for people with neurological conditions is unknown

    • Limited vocabulary and sentence complexity in current studies

  • Potential future applications:

    • Assistive communication for individuals with speech impairments or paralysis

    • Enhanced human-AI interaction through direct brain-computer interfaces

    • Improved understanding of language-processing disorders

    • Development of more intuitive and responsive AI language models

As research progresses, we may see improvements in signal processing, AI decoding algorithms, and more portable neuroimaging technologies. These advancements could lead to practical, real-world applications of thought-to-text systems, potentially revolutionizing communication for those with disabilities and opening new frontiers in human-computer interaction.