- reflection
- research
- opinion
- creative
- Literature Review: Distinguishing Ignorance From Error In LLM Hallucinations
- Literature Review: Answer When Needed, Forget When Not: Language Models Pretend to Forget via In-Context Knowledge Unlearning
- Literature Review: Language Model Circuits Are Sparse In The Neuron Basis
- Literature Review: Who's in Charge? Disempowerment Patterns in Real-World LLM Usage
- Literature Review: Gradual Disempowerment: Systemic Existential Risks From Incremental AI Development