For Christmas I received an intriguing present from a friend - my very own "best-selling" book.
"Tech-Splaining for Dummies" (fantastic title) bears my name and my photo on its cover, and it has glowing evaluations.
Yet it was completely written by AI, with a couple of basic prompts about me provided by my good friend Janet.
It's an interesting read, and very funny in parts. But it also rambles quite a lot, and is somewhere between a self-help book and a stream of anecdotes.
It mimics my chatty style of writing, but it's also a bit repetitive, and very verbose. It may have gone beyond Janet's prompts in collecting data about me.
Several sentences begin "as a leading technology journalist..." - cringe - which might have been scraped from an online bio.
There's also a weird, repetitive hallucination in the form of my pet cat (I have no pets). And there's a metaphor on almost every page - some more random than others.
There are lots of companies online offering AI book-writing services. Mine was created by BookByAnyone.
When I contacted the president Adir Mashiach, based in Israel, he told me he had sold around 150,000 personalised books, mainly in the US, since pivoting from compiling AI-generated travel guides in June 2024.
A paperback copy of your own 240-page best-seller costs £26. The firm uses its own AI tools to generate them, based on an open-source large language model.
I'm not asking you to buy my book. Actually you can't - only Janet, who created it, can order any more copies.
There is currently no barrier to anyone creating one in anyone's name, including celebrities - although Mr Mashiach says there are guardrails around abusive content. Each book contains a printed disclaimer stating that it is fictional, created by AI, and designed "solely to bring humour and happiness".
Legally, the copyright belongs to the firm, but Mr Mashiach stresses that the product is intended as a "personalised gag gift", and the books do not get sold on.
He plans to expand his range, creating different genres such as sci-fi, and perhaps offering an autobiography service. It's designed to be a light-hearted form of consumer AI - selling AI-generated products to human customers.
It's also a bit scary if, like me, you write for a living. Not least because it probably took less than a minute to generate, and it does, certainly in parts, sound very like me.
Musicians, authors, artists and actors worldwide have expressed alarm about their work being used to train generative AI tools that then churn out similar content based on it.
"We ought to be clear, when we are discussing information here, we really imply human developers' life works," says Ed Newton Rex, founder of Fairly Trained, which campaigns for AI companies to regard creators' rights.
"This is books, this is articles, this is pictures. It's masterpieces. It's records ... The entire point of AI training is to find out how to do something and then do more like that."
In 2023 a song featuring AI-generated voices of Canadian singers Drake and The Weeknd went viral on social media before being pulled from streaming platforms because it was not their work and they had not consented to it. It didn't stop the track's creator trying to nominate it for a Grammy award. And even though the artists were fake, it was still hugely popular.
"I do not believe making use of generative AI for creative purposes must be banned, but I do believe that generative AI for these functions that is trained on people's work without approval should be banned," Mr Newton Rex includes. "AI can be extremely effective but let's develop it ethically and relatively."
In the UK some organisations - including the BBC - have chosen to block AI developers from trawling their online content for training purposes. Others have decided to collaborate - the Financial Times has partnered with ChatGPT creator OpenAI, for example.
The UK government is considering an overhaul of the law that would allow AI developers to use creators' content on the web to help develop their models, unless the rights holders opt out.
Ed Newton Rex describes this as "madness".
He points out that AI can make advances in areas like defence, healthcare and logistics without trawling the work of authors, journalists and artists.
"All of these things work without going and changing copyright law and messing up the incomes of the country's creatives," he argues.
Baroness Kidron, a crossbench peer in the House of Lords, is also strongly against removing copyright law for AI.
"Creative industries are wealth creators, 2.4 million jobs and a great deal of pleasure," says the Baroness, who is also an advisor to the Institute for Ethics in AI at Oxford University.
"The federal government is weakening among its finest performing markets on the vague pledge of growth."
A government representative stated: "No move will be made until we are absolutely positive we have a useful strategy that delivers each of our objectives: increased control for ideal holders to help them accredit their content, access to top quality material to train leading AI designs in the UK, and more openness for best holders from AI designers."
Under the UK government's new AI plan, a national data library containing public data from a wide range of sources will also be made available to AI researchers.
In the US the future of federal rules to regulate AI is now up in the air following President Trump's return to the White House.
In 2023 Biden signed an executive order that aimed to improve the safety of AI with, amongst other things, companies in the sector required to share details of the workings of their systems with the US government before they are released.
But this has now been rescinded by Trump. It remains to be seen what he will do instead, but he is said to want the AI sector to face less regulation.
This comes as a number of lawsuits against AI firms, and particularly against OpenAI, continue in the US. They have been brought by everyone from the New York Times to authors, music labels, and even a comedian.
They claim that the AI firms broke the law when they took their content from the web without their consent, and used it to train their systems.
The AI firms argue that their actions fall under "fair use" and are therefore exempt. There are a number of factors which can constitute fair use - it's not a straightforward definition. But the AI sector is under increasing scrutiny over how it gathers training data and whether it should be paying for it.
If all of this wasn't enough to ponder, Chinese AI firm DeepSeek has shaken the sector over the past week. It became the most downloaded free app on Apple's US App Store.
DeepSeek claims that it developed its technology for a fraction of the cost of the likes of OpenAI. Its success has raised security concerns in the US, and threatens America's current dominance of the sector.
As for me and a career as an author, I think that at the moment, if I really want a "bestseller" I'll still have to write it myself. If anything, Tech-Splaining for Dummies highlights the current weakness of generative AI tools for larger projects. It is full of errors and hallucinations, and it can be quite difficult to read in parts because it's so verbose.
But given how quickly the tech is evolving, I'm not sure how long I can remain confident that my considerably slower human writing and editing skills are better.