Why an Air Force colonel — and many other experts — are so worried about the existential risk of AI
On Friday morning, Hamilton told RAS that he was actually describing a hypothetical thought experiment, saying, “We’ve never run that experiment, nor would we need...
AI machines aren’t ‘hallucinating’. But their makers are | Naomi Klein
Clear away the hallucinations and it looks far more likely that AI will be brought to market in ways that actively deepen the climate crisis....
What Is ChatGPT Doing … and Why Does It Work?
But it’s a representation that’s readily usable by the neural net. And then there’s the representation in the neural net of ChatGPT. But it’s a...
Deepfakes, Cheapfakes, and Twitter Censorship Mar Turkey’s Elections
“It was surprising that Erdoğan showed a manipulated video showing Millet Alliance candidate Kemal Kılıçdaroğlu side by side with PKK militants at rallies.” “Opposition supporters...
Science and Enterprise
The team also designed the algorithm to separate variants common to all primates from matched controls and to predict human disease-causing probabilities, identifying more than 4...
A.I. Poses ‘Risk of Extinction,’ Industry Leaders Warn
The open letter, which warns that mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, was signed by more than 350 executives, researchers and engineers working in A.I.,...
ChatGPT Prompt Engineering for Developers
This short course, taught by Isa Fulford (OpenAI) and Andrew Ng (DeepLearning.AI), will describe how LLMs work, provide best practices for prompt engineering, and show...
Writing when tech has broken the web’s social contract
The software industry has never been particularly good at what it does, but it’s been getting worse. With AI, tech has broken the web’s social...
Language models can explain neurons in language models
One simple approach to interpretability research is to first understand what the individual components (neurons and attention heads) are doing. As future models become increasingly...
About LayerNorm Variants in the Original Transformer Paper, and Some Other Interesting Historical Tidbits About LLMs
For instance, in 1991, which is about two-and-a-half decades before the original transformer paper above ("Attention Is All You Need"), Juergen Schmidhuber proposed an alternative...