curius graph
showing 36301-36350 of 160880 pages (sorted by popularity)
The Best of Howard Marks Memos (1 user)
you can't predict you can prepare (1 user)
what's it all about, alpha? (1 user)
Succinctness is Power (1 user)
What the Bubble Got Right (1 user)
Why YC (1 user)
Canadian Jews are being targeted simply because they are Jewish | National Post (1 user)
[2010.11929] An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale (1 user)
The Give-to-Get Model for AI Startups (1 user)
Transformers in Video Processing (Part 1) (1 user)
vivit.pdf (1 user)
Video-R1: Reinforcing Video Reasoning in MLLMs (1 user)
Deep Video Discovery: Agentic Search with Tool Use for Long-form Video Understanding (1 user)
Temporal Chain of Thought: Long-Video Understanding by Thinking in Frames (1 user)
arxiv.org/pdf/2510.22954 (1 user)
century0765.pdf (1 user)
The Way to Wealth by Benjamin Franklin (1 user)
Unsong (1 user)
Psychiatry’s Medical Model: How It Traumatizes, Retraumatizes & Perverts Healing - Mad In America (1 user)
Explore / X (1 user)
The Wall Street Journal on X: "Want guac in your burrito bowl or extra legroom on your flight? A new financial guideline called “the 0.01% rule”—and inspired by a Jay-Z lyric—might help you decide. https://t.co/PfwdSaZ2KB https://t.co/KVUStQgB3Q" / X (1 user)
Simplenote (1 user)
Emergent Misalignment: Narrow finetuning can produce broadly misaligned LLMs (1 user)
Friday, 21 March (1 user)
grsampson.net/PErdoEs.html (1 user)
EIS II: What is “Interpretability”? — AI Alignment Forum (1 user)
Thoughts on AI 2027 — LessWrong (1 user)
2501.17315 (1 user)
Ctrl-Z: Controlling AI Agents via Resampling — LessWrong (1 user)
2410.04332 (1 user)
Kasso Okoudjou | For All (1 user)
pdf (1 user)
2406.19501 (1 user)
“It’s not that bad” — Lisa Ballard Dunlap (1 user)
Developing a compass of bullishness (1 user)
The swings at the park (1 user)
7+ tractable directions in AI control — LessWrong (1 user)
Training-time schemers vs behavioral schemers — LessWrong (1 user)
Evolution provides no evidence for the sharp left turn — LessWrong (1 user)
My AGI safety research—2024 review, ’25 plans — AI Alignment Forum (1 user)
[Intro to brain-like-AGI safety] 1. What's the problem & Why work on it now? — AI Alignment Forum (1 user)
J2P and P2J Ver 1 (1 user)
How can we solve diffuse threats like research sabotage with AI control? — LessWrong (1 user)
Scanned using Book ScanCenter 5033 (1 user)
Secular Solstice - Wikipedia (1 user)
Litany of Gendlin - LessWrong (1 user)
Sorry, I Still Think MR Is Wrong About USAID (1 user)
QuantumMechanicsLiveNotes.pdf (1 user)
Distillation Robustifies Unlearning (1 user)
A dataset of questions on decision-theoretic reasoning in Newcomb-like problems (1 user)