Implicit in the idea that the model weights have national security implications is the idea that our current AI progress in deep learning-based multimodal models is on a path to... something, and that possessing those weights puts you closer to that than you would be without them. At present, it lets you skip the expensive training, but it doesn't get you a high-end compute cluster to run that model on, nor the scaffolding that makes it possible to interact usefully with the agent. Nor the know-how to make the next advance.
If, on the other hand, this is a paradigm that is going to yield diminishing returns, then open model weights are only a business threat, not a national security one. Perhaps the next stage of AI development requires a complete rethink of architecture, compute structure, training data curation, and so on, in which case possessing the model weights of the latest GPT model would be equivalent to possessing working vacuum tubes while your opponent is building microprocessors.
Honestly, I don't know. The precautionary principle would argue for protecting model weights, but that really depends on whether this current paradigm IS in fact the path to AGI, or something close enough to it that the distinction is without meaning.
Great article, thank you. Do you think Sam Altman really believes these ideas or is it all just a front for other interests? I think the only outcome of a ban is an emboldened China, focused on a technology that is far from bottled up and controlled by any US laws.
Is Altman just naive? Am I naive? Is China incapable of duplicating what the US has been doing?