The increasing power and ease with which AI tools can create fake information has many people worried that consumers will be more susceptible than ever to scams on the Internet.
Harder to spot a scam
False information will be harder to spot because the telltale signs of offshore spam, such as misspellings, grammatical errors, and awkward phrasing, will no longer be present.
New scams, created with AI tools, will be well written and presented in perfect English.
That writing ability, combined with the ability to spoof voices and faces, means it won't be long before misinformation is spewed in the voices of reputable figures who are not, in fact, saying what they appear to be saying.
The coming flood of misinformation to consumers
Chatbots make it remarkably easy to create convincing information that is 100% wrong.
As one person put it, it will find you the answer you’re looking for, whether it is correct or not.
As an experiment, I gave it my name, and it gave me information about myself that was 100% incorrect.
- It said that I had founded LegalZoom. That is not correct.
- It said I attended law school at Loyola University. That is also not correct.
- When I asked again, it gave me a totally different answer.
How do we know when a chatbot has given us the correct answer?
How does the consumer cope with all this?
That’s a very good question.
Sites like LegalConsumer.com hope to remain a beacon of truth, and we will do our best to prevent misinformation from spreading by presenting only accurate information on this website.
Will legislation help?
Can laws and regulations help prevent the spread of misinformation aided by AI?
Should the creators of such technologies leave it to consumers to defend themselves against what predators can do with these tools? Or do the inventors have some responsibility to build in safeguards?
We are entering a time of either real progress or real chaos. Time will tell.