Singularity

45 readers

Everything pertaining to the technological singularity and related topics, e.g. AI, human enhancement, etc.

founded 2 years ago
1
This is an automated archive made by the Lemmit Bot.

The original was posted on /r/singularity by /u/ShittyInternetAdvice on 2025-11-13 07:09:50+00:00.

2

The original was posted on /r/singularity by /u/kaggleqrdl on 2025-11-13 02:09:30+00:00.


https://www.nature.com/articles/s41586-025-09833-y

Recent AI systems, often reliant on human data, typically lack the formal verification necessary to guarantee correctness. By contrast, formal languages such as Lean offer an interactive environment that grounds reasoning, and reinforcement learning (RL) provides a mechanism for learning in such environments. We present AlphaProof, an AlphaZero-inspired agent that learns to find formal proofs through RL by training on millions of auto-formalized problems.

Lean is cool because the AI can actually verify whether it got the answer correct. Unlike other forms of learning, this enables RLVR: reinforcement learning with verifiable rewards.
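To make the "verifiable" part concrete, here's a toy Lean 4 example (my own sketch, not from the paper). The kernel either accepts the proof or rejects it, which gives a binary, machine-checkable signal with no human grader in the loop:

```lean
-- Toy theorem: Lean's kernel either accepts this proof or rejects it.
-- That pass/fail outcome is exactly the kind of verifiable reward an
-- RL agent like AlphaProof can train against.
theorem add_comm_toy (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

Replace the proof term with anything invalid and the file fails to type-check, so "reward = 1 iff the proof compiles" is a well-defined training signal.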

https://en.wikipedia.org/wiki/Lean_(proof_assistant)

A lot of people are working heavily in this area. math.inc and Terence Tao are very interested in this. Great recent article in Quanta suggesting a complementary usage of SAT solvers: https://www.quantamagazine.org/to-have-machines-make-math-proofs-turn-them-into-a-puzzle-20251110/ (weird photo spread of Heule tho)

3

The original was posted on /r/singularity by /u/andy_free on 2025-11-12 19:56:27+00:00.

4

The original was posted on /r/singularity by /u/Pablogelo on 2025-11-12 22:27:24+00:00.

5

The original was posted on /r/singularity by /u/AngleAccomplished865 on 2025-11-12 14:52:18+00:00.


I didn't even know Fei-Fei Li had a Substack: https://drfeifei.substack.com/p/from-words-to-worlds-spatial-intelligence

"In this essay, I’ll explain what spatial intelligence is, why it matters, and how we’re building the world models that will unlock it—with impact that will reshape creativity, embodied intelligence, and human progress."

6

The original was posted on /r/singularity by /u/SharpCartographer831 on 2025-11-12 20:48:00+00:00.

7

The original was posted on /r/singularity by /u/SnoozeDoggyDog on 2025-11-12 19:13:37+00:00.

8

The original was posted on /r/singularity by /u/ShreckAndDonkey123 on 2025-11-12 19:07:57+00:00.

9

The original was posted on /r/singularity by /u/donutloop on 2025-11-12 17:55:07+00:00.

10

The original was posted on /r/singularity by /u/Worldly_Evidence9113 on 2025-11-12 17:38:17+00:00.

11

The original was posted on /r/singularity by /u/Bizzyguy on 2025-11-12 17:16:40+00:00.

12

The original was posted on /r/singularity by /u/Altruistic-Skill8667 on 2025-11-12 12:39:28+00:00.


"A new framework suggests we're already halfway to AGI. The rest of the way will mostly require business-as-usual research and engineering."

Biggest problem: continual learning. The article cites, for example, Dario Amodei on that topic: "There are lots of ideas that are very close to the ideas we have now that could perhaps do [continual learning]."

13

The original was posted on /r/singularity by /u/Distinct-Question-16 on 2025-11-12 02:43:33+00:00.

14

The original was posted on /r/singularity by /u/Distinct-Question-16 on 2025-11-12 12:02:43+00:00.

15

The original was posted on /r/singularity by /u/RDSF-SD on 2025-11-12 07:43:43+00:00.

16

The original was posted on /r/singularity by /u/donutloop on 2025-11-12 06:57:56+00:00.

17

The original was posted on /r/singularity by /u/ShreckAndDonkey123 on 2025-11-12 07:54:47+00:00.

18

The original was posted on /r/singularity by /u/Radiant-Act4707 on 2025-11-12 01:44:39+00:00.


While scrolling through social media recently, I stumbled upon an exciting piece of news: Black Forest Labs' Flux 2 seems to be on the verge of release! If you're like me, passionate about AI image generation tools, this is definitely a development worth watching. The Flux 1 series has already revolutionized the landscape of AI art creation, and Flux 2 is expected to further address some of the pain points from its predecessor. According to clues on social media, if you want to participate in testing, you can apply by leaving a comment directly under a post by Robin Rombach (one of the co-founders of Black Forest Labs). I noticed he's already replied to some users' applications, so there looks to be a good chance. It reminds me of the early community testing phase for Stable Diffusion, where developers gathered feedback through interactions to drive model iteration.

https://preview.redd.it/2em7m1evgq0g1.png?width=757&format=png&auto=webp&s=5162dd4281db20072583d8c22bf39e6c7a14fdbe

Robin Rombach, a key figure behind Flux (and the original developer of Stable Diffusion), often shares firsthand information on his X (formerly Twitter) account. When Flux 1 launched in 2024, it stunned the industry with its excellent text-to-image generation capabilities, including variants like Flux 1.1 Pro (released in October 2024) and Kontext (focused on image editing). Now, Flux 2 is seen as the next leap forward. If you're interested, why not try leaving a comment under Rombach's latest relevant post—you might just become an early tester.

https://preview.redd.it/h22g5fsrdq0g1.jpg?width=1978&format=pjpg&auto=webp&s=4fb01a14a4cfc06ebede1393e2587c18eb67d7af

Of course, any new model's release comes with heated discussions in the community. I've gathered some netizens' feedback, which includes both anticipation and skepticism, reflecting the pain points and visions in the AI image generation field. Let's break them down:

  • Unified Model and Workflow Optimization: One netizen pointed out that Flux 1's Kontext variant addressed only a few pain points in AI image workflows (the cumbersome separation of generation and editing, character drift, poor local editing, and slow speeds) and asked whether the new version should offer a more unified model, consistent character sets, precise editing, and faster, smarter text processing.
  • Fixing Classic Pain Points: Another netizen hopes Flux 2 will address Flux 1's issues with hand rendering, text generation, and multi-person consistency, optimistically saying, "if they crack even half of these we're so back." These are practically the "Achilles' heel" of all AI image models. Flux 1 made progress in these areas (like better anatomical accuracy and prompt following), but hand deformities or text blurriness still pop up occasionally. If Flux 2 optimizes these through larger training datasets or an improved flow-matching architecture (the core tech of the Flux series), it could stand out in the competition.
  • Breakthrough Innovation vs. Hype: Someone takes a cautious stance: "Still waiting for something truly groundbreaking — hype doesn’t equal innovation." This reminds us that hype often leads the way in the AI field, but true innovation must stand the test of time. Flux 1 indeed led in image detail and diversity, but if Flux 2 is just minor tweaks (like speed improvements without revolutionary features), it might disappoint.
  • Competitive Pressure: Finally, one netizen expresses pessimism: "Don't really have any hope for them. They launched their first one at a real opportune time, but now the big companies are back to putting large compute and time into their models (NB2, hunyuan, qwen, seedream). Still hoping that the rumored date of today's release is real for NB2." Flux 1 did seize the opportunity in 2024, but AI competition in 2025 is fiercer.

Overall, the potential release of Flux 2 has the AI community buzzing, promising a more intelligent and user-friendly future for image generation. But from the netizens' feedback, what everyone most anticipates is practical improvements rather than empty promises.

19

The original was posted on /r/singularity by /u/Terrible-Priority-21 on 2025-11-12 01:14:45+00:00.

Original Title: Despite of all the anti-AI marketing, Hollywood A-listers keep embracing AI. Michael Caine and Matthew McConaughey have teamed with AI audio company ElevenLabs to produce AI replications of their famous voices


"To everyone building with voice technology: keep going. You’re helping create a future where we can look up from our screens and connect through something as timeless as humanity itself — our voices," McConaughey says.

This comes in a year when we already saw James Cameron join Stability AI's board and Will Smith collaborate with an AI artist. I'm sure more will be coming very soon.

https://www.rollingstone.com/culture/culture-news/james-cameron-stability-ai-board-1235111105

https://x.com/jboogx_creative/status/1890507568662933979

20

The original was posted on /r/singularity by /u/complains_constantly on 2025-11-11 23:18:47+00:00.


Some of you may have seen Google Research’s Nested Learning paper. They introduced HOPE, a self-modifying TITAN variant with a Continuum Memory System (multi-frequency FFN chain) + deep optimizer stack. They published the research but no code (like always), so I rebuilt the architecture and infra in PyTorch over the weekend.

Repo: https://github.com/kmccleary3301/nested_learning

Highlights

  • Level clock + CMS implementation (update-period gating, associative-memory optimizers).
  • HOPE block w/ attention, TITAN memory, self-modifier pathway.
  • Hydra configs for pilot/mid/target scales, uv-managed env, Deepspeed/FSDP launchers.
  • Data pipeline: filtered RefinedWeb + supplements (C4, RedPajama, code) with tokenizer/sharding scripts.
  • Evaluation: zero-shot harness covering PIQA, HellaSwag, WinoGrande, ARC-E/C, BoolQ, SIQA, CommonsenseQA, OpenBookQA + NIAH long-context script.
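For those wondering what the CMS's "update-period gating" might look like mechanically, here's a minimal PyTorch sketch. Everything in it (the class name `CMSChain`, the specific periods, the hand-rolled gated SGD step) is my own illustrative reconstruction of the idea of a multi-frequency FFN chain, not code from the repo or the paper:

```python
import torch
import torch.nn as nn

class CMSChain(nn.Module):
    """Sketch of a Continuum Memory System as a chain of residual FFN
    levels, where level i's parameters only update every periods[i]
    optimizer steps (update-period gating). Illustrative reconstruction,
    not the repo's actual implementation."""

    def __init__(self, dim: int, periods=(1, 4, 16)):
        super().__init__()
        self.periods = periods
        self.levels = nn.ModuleList([
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                          nn.Linear(4 * dim, dim))
            for _ in periods
        ])

    def forward(self, x):
        # Each level adds a residual contribution; slower levels end up
        # integrating information over longer horizons because their
        # weights change less often.
        for ffn in self.levels:
            x = x + ffn(x)
        return x

    def gated_sgd_step(self, step: int, lr: float = 1e-2):
        # Update-period gating: skip any level whose period does not
        # divide the current optimizer step.
        with torch.no_grad():
            for ffn, period in zip(self.levels, self.periods):
                if step % period != 0:
                    continue
                for p in ffn.parameters():
                    if p.grad is not None:
                        p.add_(p.grad, alpha=-lr)
```

The design point is that the forward pass is an ordinary residual chain; the multi-frequency behavior lives entirely in the update schedule, so fast levels track the current data stream while slow levels act as longer-term memory.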

What I need help with:

  1. Running larger training configs (760M+, 4–8k context) and reporting W&B benchmarks.
  2. Stress-testing CMS/self-modifier stability + alternative attention backbones.
  3. Continual-learning evaluation (streaming domains) & regression tests.

If you try it, please file issues/PRs—especially around stability tricks, data pipelines, or eval scripts. Would love to see how it stacks up against the Qwen, DeepSeek, MiniMax, and Kimi architectures.

21

The original was posted on /r/singularity by /u/Distinct-Question-16 on 2025-11-11 21:24:54+00:00.

22

The original was posted on /r/singularity by /u/eposnix on 2025-11-11 18:37:36+00:00.

23

The original was posted on /r/singularity by /u/TFenrir on 2025-11-11 18:00:25+00:00.

Original Title: A historians account of testing Gemini 3's (via A/B) ability to parse old English hand written documents on their benchmark, where they note that this model seems to excel not just at visual understanding, but symbolic reasoning, a great read - here are some snippets

24

The original was posted on /r/singularity by /u/AngleAccomplished865 on 2025-11-11 16:03:32+00:00.


Review of current state: https://www.nature.com/articles/d41586-025-03633-0

"Biocomputing, on the other hand, goes back to the biological source material. Starting with induced pluripotent stem (iPS) cells, which can be reprogrammed to become almost any type of cell, researchers culture communities of brain cells and nurture them with nutrients and growth factors. To communicate with them, researchers sit the cells on electrode arrays, then pass signals and commands to them as sequences of electrical pulses. These signals change the way that ions flow into and out of neurons, and might prompt some cells to fire an electrical impulse known as an action potential. The biocomputer electrodes can detect these signals and employ algorithms to convert them to usable information...."

25

The original was posted on /r/singularity by /u/salihoff on 2025-11-11 12:27:46+00:00.
