AI will soon run out of information to consume and enter an even more worrisome stage.
By Valerie Hudson
ChatGPT debuted in the autumn of 2022, and it took less than a year for experts to worry that artificial intelligence more broadly could represent an existential threat to humankind. In March 2023, more than 33,000 experts signed an open letter asking Big AI to “pause” further development and testing until that threat could be gauged, prevented and mitigated. The U.S. Senate held hearings with AI developers in May 2023 and called for strict government regulation.
It’s been almost a year now, and it’s time to admit our lawmakers are not up to the challenge. While the European Union has begun to implement regulations on AI, there’s very little that Congress or the executive branch can show for all the justifiable hand-wringing on the issue. President Joe Biden issued an executive order to federal agencies in late 2023, and the result has been a lot of what the administration itself characterizes as “convening,” “reporting” and “proposing.”
The states have begun, in inconsistent and piecemeal fashion, to try to step into the breach left by federal inaction. For example, several states have now criminalized deepfake porn and given victims the right to sue both its makers and its distributors.
While regulation is struggling to get off the ground, Big AI has been plowing ahead with little restraint. The only example of restraint involving Big AI that I’ve been able to identify since ChatGPT’s debut is OpenAI’s recent decision not to release its voice imitation software to the public. I suspect the company justifiably anticipated a wave of lawsuits.
But Big AI is not pausing its efforts, and it’s not waiting for proposed regulation. Indeed, Big AI has been using this period of regulative impotence to dream big and to break down or circumvent any existing fences meant to keep it from its grandiose visions.
Consider what investigative reporters at The New York Times recently uncovered. In a shocking exposé, they found that Google, OpenAI and Meta are doing things the companies wanted no one to know about. Large language models (also known as LLMs) must be constantly fed more text...
https://www.deseret.com/opinion/2024/04/09/ai-regulation-synthetic-data-google-youtube/