
That is the chilling message of a public statement signed by 350 prominent tech executives and AI scientists. Signatories include senior executives from Microsoft, Google, and OpenAI (the company that developed ChatGPT).
Needless to say, there is great excitement over the technology, yet at the same time there is mounting fear that AI could spin out of control.
It’s not the first time prominent tech figures have publicly voiced their concern. In March, AI experts including Elon Musk issued an open letter urging a six-month moratorium on AI development, to give the industry sufficient time to set safety standards.
I am beginning to notice the harmful effects of AI surfacing. Just this week, there was a news report about a lawyer who presented bogus cases as precedents in court while representing a client. It turned out that he had relied on the research of his colleague, who had used ChatGPT to generate cases that didn’t actually exist.
As I thought about this news, I started to worry. As platforms like ChatGPT grow in popularity, false information of this kind could be generated at enormous scale. In our increasingly paperless world, could AI-generated bogus cases eventually find their way into legitimate legal sources, mixing factual records with false information? That’s a scary thought.