Neither of these options feels good to me. It's hard to tell which is ultimately worse, since I can imagine similar outcomes whenever irresponsible or malicious actors have access to sufficiently powerful AI.
The main point in favor of open models is that we would start seeing abuse sooner and at smaller scales. That might buy us time to build up an immune system against exploits, by pushing us to prioritize the development of comprehensive AI safety practices.