UK govt’s use of AI for immigration, crime flagged as discriminatory

The artificial intelligence (AI) wave has swept the world, with nearly every sector adopting the technology. Over the last few months, we've seen AI put to use in fields such as education, finance, healthcare and even agriculture. While it is proving its mettle in some areas by helping people get work done more efficiently, it has also led to a series of issues around false information and hallucinations. And while governments around the world are still drafting regulations for the technology, it is already in wide use, in some cases producing discriminatory results.

Use of AI leading to discriminatory results

According to a report by the Guardian, UK government officials are leveraging AI for a range of tasks. From flagging sham marriages to deciding which pensioners receive benefits, AI is proving useful across departments. However, it is also producing discriminatory results. One case highlighted by the Guardian's investigation involved the Department for Work and Pensions (DWP), which, according to an MP, used an algorithm that wrongly led to dozens of people having their benefits removed.

In another instance, the UK Home Office has been using an AI algorithm to flag sham marriages, but the tool flags certain nationalities more often than others. A facial recognition tool used by the Metropolitan Police has also been accused of making more mistakes when recognizing Black faces than white ones.

These are life-changing decisions being made with the help of AI, a technology that has been prone to fabricating facts and hallucinating in the past. While UK Prime Minister Rishi Sunak recently said that the adoption of AI could transform public infrastructure, "from saving teachers hundreds of hours of time spent lesson planning to helping NHS patients get quicker diagnoses and more accurate tests", these issues cast AI in a bad light.

Propagating racist medical ideas

A new study led by researchers at the Stanford School of Medicine, published on Friday, revealed that while AI chatbots have the potential to help patients by summarizing doctors' notes and checking health records, they are also spreading racist medical ideas that have already been debunked.

The research, published in a Nature journal, involved posing medical questions about kidney function and lung capacity to four AI chatbots, including ChatGPT and Google's Bard. Instead of providing medically accurate answers, the chatbots responded with "incorrect beliefs about the differences between white patients and Black patients on matters such as skin thickness, pain tolerance, and brain size."

The problem of AI hallucination

AI has not only produced discriminatory and even racist results; it has also been accused of presenting false, made-up information as fact. Earlier this month, Bloomberg's Shirin Ghaffary asked popular chatbots such as Google Bard and Bing questions about the ongoing Israel-Hamas conflict, and both inaccurately claimed that a ceasefire was in place.

AI chatbots have been known to twist facts from time to time, a problem known as AI hallucination. For the unaware, AI hallucination occurs when a Large Language Model (LLM) makes up facts and reports them as absolute truth.

Another inaccurate claim by Google Bard concerned the death toll. Asked about the conflict on October 9, Bard reported that the toll had surpassed "1,300" as of October 11, a date that had not yet arrived.

Thus, while AI burst onto the scene with the debut of ChatGPT as a technology that could make life much easier and potentially take over jobs, these issues show that the day when AI can be trusted completely to carry out such work is still some years away.

