I would also argue that most popular/usable LLMs are trained on such a huge data set that the odds of one leveraging only a single source of information in a response are basically zero. The premise of an LLM "using your script to answer a question" isn't very accurate, as it wouldn't even have your script retained in a way that it could reproduce, even if asked.
That said, there are newer LLMs that can cite their sources, and referencing other written works has never required licensing or royalties.
And, sorry if this is long-winded; I just want to get back around to the topic of generative art.
Imagine you had never seen a cat before (unrealistic, I know), and you looked at a handful of drawings, paintings, and photographs of cats to learn what one is.
You then draw a picture of a cat, with that small selection of other people's works as your only frame of reference.
Do you owe credit to the artists and photographers who taught you what a cat looks like?