Large language models appear aligned, yet harmful pretraining knowledge persists as latent patterns. Here, the authors prove current alignment creates only local safety regions, leaving global ...