Oh, that shouldn't be the case. But anyone can easily fine-tune the base model on the same datasets in a couple of hours on a consumer GPU with 8 GB of VRAM and release the result under a commercial license (using https://github.com/johnsmith0031/alpaca_lora_4bit).