Guide to Safe & Responsible Use of Artificial Intelligence
- Sultan Kiani
You have probably come across a recent viral incident in which a Pakistani newspaper published an AI-generated article without even removing the ChatGPT prompt. This is not an isolated workplace blunder. Read on to learn more about using AI safely.

The internet is flooded with hilarious memes inspired by real incidents in which people ended up in similarly embarrassing situations. Improper use of AI in office work can backfire, sometimes even costing employees their jobs. Embarrassment, however, is the least of the concerns: AI blunders in critical fields can lead to far deadlier consequences.
This month’s Safety article highlights some harrowing stories in which AI has been linked to catastrophic incidents, some of which proved fatal.
Misleading Healthcare Advice
For years, people have looked up health information on sites like WebMD and Mayo Clinic. Today, AI chatbots can answer health-related questions, making information more accessible and seemingly authoritative. Nonetheless, AI-generated health advice, like any online health information, should be used for educational purposes only, never as a substitute for professional medical consultation. Yet people frequently ignore this rule and pay the price. Two such documented mishaps are highlighted below.
Horrible Misdiagnosis
Many Reddit users have shared stories about checking their symptoms on WebMD and convincing themselves they had cancer or worse. AI chatbots can err in the opposite direction: they can wrongly rule out cancer in real patients, with dangerous consequences.
The case of Warren Tierney is a recent and tragic example of how trusting AI for health-related decisions can prove costly. This 37-year-old man from Ireland experienced throat pain and difficulty swallowing, so he asked a chatbot for a diagnosis. After assessing the symptoms, ChatGPT provided a list of possible causes. Tierney then asked whether he had cancer, and ChatGPT confidently replied, “highly unlikely,” offering reassurance. As he began taking medication, his condition improved for a while, seemingly confirming the AI’s assessment.
That relief was short-lived. His symptoms returned and worsened, forcing a hospital visit. Only then did Tierney discover he had stage 4 esophageal cancer. The disease is terminal, and he is not expected to live long. An earlier diagnosis could have improved his chances of recovery, but by then it was too late.
Bromide Salt Poisoning
While it is good to focus on nutrition, wrong dietary advice can do more harm than good. Annals of Internal Medicine documented a case of bromide poisoning linked to AI misuse. It began when a 60-year-old man relied on AI to find a healthier substitute for table salt. ChatGPT offered a list of alternatives, including sodium bromide.
The individual then decided to replace regular salt, sodium chloride (NaCl), with sodium bromide (NaBr) without considering the risks. Soon, his health deteriorated due to bromism, eventually requiring hospitalization. Luckily, he fully recovered after receiving medical treatment.
Lessons Learned: Always consult a qualified physician about your health concerns. Never make major dietary changes without guidance from a medical professional or certified nutritionist. Always inform them about your medical history, including any pre-existing conditions or allergies.
Autonomous Vehicles Turn Deadly
Artificial intelligence is turning the long-held dream of self-driving vehicles into reality, a major milestone in transportation. However, it is too early to celebrate. The technology is still error-prone, and there have been reports of multiple fatal accidents. The 2018 case of Elaine Herzberg serves as a warning about the risks of AI-powered autonomous vehicles.
It happened on March 18, 2018, in Tempe, Arizona (USA), when 49-year-old Elaine Herzberg was struck and killed by an Uber self-driving test vehicle while pushing her bicycle across the road. Although she crossed outside a crosswalk at night, the vehicle’s automatic emergency braking had been disabled and its software failed to recognize her in time, turning an avoidable collision into a fatal one. NHTSA data has since linked driver-assistance and automated-driving systems to hundreds of crashes and multiple fatalities in the United States alone. Additionally, several online videos show how AI-based autonomous vehicles can fail to detect real-life road hazards, exposing critical flaws.
Lessons Learned: Tesla and other carmakers offering autopilot-style driver assistance warn drivers to stay alert, because these systems are designed to assist with safe driving, not to replace the driver entirely. The technology needs further improvement before it can be trusted with minimal or no human supervision.
Criminal Use of AI
The internet is flooded with edited images and videos of celebrities. Some of these AI-generated creations look hilarious, while others stand out for their creativity. Deepfake videos combine advanced AI video generators with voice-cloning software to look convincingly realistic.
The fair use of such content for entertainment purposes may be tolerated, but some cybercriminals are now using the same tools as weapons. This is where the technology takes a dark turn. These tools can fuel false allegations, ruin someone’s reputation, and cause irreversible damage. They can also jeopardize cybersecurity, enable identity theft, and result in financial losses in a variety of ways.
One shocking example was a 2020 fraud case in the UAE, where criminals used an AI-generated voice clone of a company director to authorize bank transfers and got away with $35 million. In 2023, a hoax featuring an AI-generated image of an explosion near the Pentagon caused panic and contributed to a brief stock-market dip. These incidents show how criminals armed with AI tools can outsmart even robust security systems.
Lessons Learned: Governments and technology companies need strong alliances to track down those involved in such illicit activities. Regular internet users also need to be vigilant. Avoid falling for deepfakes and strengthen your digital privacy and security practices.
Remember: AI is developed by humans to assist humans in improving work efficiency and making life easier. However, AI cannot fully replace humans and must always be used responsibly.
Fun Fact
I asked ChatGPT for a list of AI-related incidents, and it responded with examples, including the deadly Boeing 737 MAX crashes linked to MCAS failures. Anyone unfamiliar with the technology might have included this in an article, but my experience as an aviation enthusiast told me it was out of place. MCAS (the Maneuvering Characteristics Augmentation System) is flight-control software designed to keep the aircraft stable, but it is not an AI system: it applies fixed rules to angle-of-attack sensor inputs rather than learning or adapting like modern AI models.
This is yet another example of why you must not blindly trust artificial intelligence. It can make unimaginable errors.