curius
all pages
showing 24751-24800 of 160880 pages (sorted by popularity)
roon on X: "had a dream where I was at a party, and every time a song I like comes on the dj immediately hits skip. wat means" / X (1 user)
Mistral (wind) - Wikipedia (1 user)
Granola on X: "@zapier Luke and Malaika made this quick demo earlier today 👇 https://t.co/36TABFX29S" / X (1 user)
John Sherman on X: "If it's Sunday, it's Warning Shots! -Every Sunday at 10AM EST on The AI Risk Network on YouTube, only 15 minutes long. -I sit with @liron and @lethal_ai to discuss ONE missed AI risk warning shot each week on @AIRiskNetwork. https://t.co/z3ZiaCPVrY" / X (1 user)
The existential horror of the paperclip factory. - YouTube (1 user)
Why I don’t think AGI is right around the corner - YouTube (1 user)
Why it's hard to make settings for high-stakes control research (1 user)
What's worse, spies or schemers? (1 user)
Jankily controlling superintelligence - by Ryan Greenblatt (1 user)
Monument to the Great Fire of London - Wikipedia (1 user)
Yoshua Bengio on X: "Very insightful video by @tristanharris on the growing risks of AI and the race to develop it—but just as importantly, on the danger of thinking our current path is inevitable. We still have the chance to coordinate globally and create a safer way forward. https://t.co/mAPumsmtLK" / X (1 user)
OpenAI | OpenRouter (1 user)
Daniel Kokotajlo on X: "I agree, that Dario quote was a bit of a shock to me this morning & is a negative update about his character and/or competence." / X (1 user)
Receiver operating characteristic - Wikipedia (1 user)
My Empathy Is Rarely Kind — LessWrong (1 user)
~*~*~WEDNESDAY NIGHT IMPROV~*~*~ (1 user)
machine yearning engineer on X: "reading shakespeare for the first time is crazy because you go “oh that’s where that comes from” every other page" / X (1 user)
Katherine Moore 🌻 on X: "@onchainocean @xlr8harder The crazy dudes are out there too. They just think they’ve invented new physics." / X (1 user)
My Bay Area induction: a three-part story (1 user)
The United Kingdom (1 user)
How quick and big would a software intelligence explosion be? (1 user)
void. on X: "this is why I pay for internet. https://t.co/C2GYcqMNjl" / X (1 user)
ChatGPT on X: "good bot" / X (1 user)
Multiple stage fallacy - LessWrong (1 user)
Eliezer Yudkowsky ⏹️ on X: "@panickssery Carlsmith's analysis giving 5% chance of ASI disaster was a giant flaming Multiple Stage Fallacy, and it reflects poorly on the entire surrounding ecosystem that they funded it or took it seriously." / X (1 user)
Debate with Vitalik Buterin — Will “d/acc” Protect Humanity from Superintelligent AI? - YouTube (1 user)
jack morris on X: "OpenAI hasn’t open-sourced a base model since GPT-2 in 2019. they recently released GPT-OSS, which is reasoning-only... or is it? turns out that underneath the surface, there is still a strong base model. so we extracted it. introducing gpt-oss-20b-base 🧵 https://t.co/3xryQgLF8Z" / X (1 user)
Worse-than-average effect - Wikipedia (1 user)
James Sanders on X: "An approximation I like is that an H100 is roughly a studio apartment https://t.co/amxewhhWii" / X (1 user)
Giving the first £10,000 | Rohan's Rambles (1 user)
About | Oak Hu (1 user)
Hello | Cath Wang (1 user)
Eliezer's Sequences and Mainstream Academia — LessWrong (1 user)
Don't Revere The Bearer Of Good Info — LessWrong (1 user)
Some Problems for Bayesian Confirmation Theory on JSTOR (1 user)
Sam Glover on X: "Seems weird that Leopold's fund has a massive put option on a semiconductor ETF? What's that about? https://t.co/6CNg6ikm09" / X (1 user)
Illusory superiority - Wikipedia (1 user)
Reading List - by Julian Stastny - Redwood Research blog (1 user)
METR on X: "In a new report, we evaluate whether GPT-5 poses significant catastrophic risks via AI R&D acceleration, rogue replication, or sabotage of AI labs. We conclude that this seems unlikely. However, capability trends continue rapidly, and models display increasing eval awareness. https://t.co/fT7qtfJ7C2" / X (1 user)
Lessons from the Iraq War for AI policy — LessWrong (1 user)
Log in | Aer Lingus (1 user)
Planning for Extreme AI Risks - by Josh Clymer (1 user)
My AGI timeline updates from GPT-5 (and 2025 so far) — LessWrong (1 user)
Low-stakes alignment — AI Alignment Forum (1 user)
How can we solve diffuse threats like research sabotage with AI control? (1 user)
Richard Hamming - Wikipedia (1 user)
superfates on X: "okay what I did here was I took a data set of ChatGPT interactions collected in the wild and reversed the "assistant" and "user" tags. fine-tuned llama 8B on some of that data and gave it the ability to message you first. try it at https://t.co/GbRKrZDyLz 😊" / X (1 user)
Phase transitions and AGI — LessWrong (1 user)
Lonely runner conjecture - Wikipedia (1 user)