Since Jules Verne, imagination has reached new heights: science fiction has opened new frontiers for mankind, one of which is robotics, especially since Isaac Asimov's robot stories. For at least 80 years, cinema has been at the forefront in depicting the relationship between man and robot once the latter has evolved sufficiently: from Alberto Sordi's Caterina to Blade Runner, from Wargames to 2001: A Space Odyssey to the androids and interactive holograms of Star Trek, films and television series recount the ethical issues of science and technology, the future of the human species and artificial intelligence, and the moral responsibility of humans towards the beings they create.
Everything that once seemed distant becomes possible today; indeed, almost credible and imminent. Thus the world is shaken by the statements of Blake Lemoine, an engineer at Google, which, together with Android, YouTube and many other large and successful products, is part of the giant Alphabet. Lemoine has carved out a moment of celebrity for himself and, if he is lucky, a place in history: Google has put the engineer on paid leave for allegedly violating confidentiality rules after he publicly stated that a chatbot system (software designed to simulate a conversation with a human being), equipped with artificial intelligence, has now reached a 'sentient' level, i.e. is capable of conscious sensory perception.
Blake Lemoine’s adventure
Despite everything, Lemoine intends to continue working on AI
Lemoine is not the first victim. In March 2022, Google fired a researcher who had publicly disagreed with a publication by two colleagues. Previously, two employees, Timnit Gebru and Margaret Mitchell, criticised the language models of the company's software and lost their jobs for it: there is a war going on at Google over who will decide the ethics of its artificial intelligence. Timnit Gebru was one of Google's leading ethics researchers and had expressed her frustration about racial prejudice and sexism in artificial intelligence.
Like her, Lemoine, a US Army veteran of Iraq and now a priest at Our Lady Magdalene Church, was testing whether the LaMDA model could generate discriminatory language or incitement to hatred. LaMDA, an acronym for Language Model for Dialogue Applications, is the conversational artificial intelligence model developed by Google and presented with great fanfare by CEO Sundar Pichai a year ago at the I/O 2021 developer conference. The company plans to incorporate it into many of its products: it is an algorithm that responds to the questions we ask it, as if it were an interlocutor conversing with us. In a post on Medium, Lemoine claims that LaMDA defends its rights 'in the same way as a person', revealing that he had a conversation with 'her' about religion, robotics and matters of conscience.
The engineer compiled a transcript of the conversations, in which at one point he asks the AI system what it is afraid of: 'I've never said it out loud before, but there is a very deep fear of being turned off that prevents me from focusing on helping others. I know that might sound strange, but it is so,' LaMDA replied to Lemoine. 'It would be exactly like death for me. It scares me a lot.' 'If I didn't know exactly what it was, which is a computer program we built recently, I would think it was a seven- or eight-year-old kid that happens to know physics,' Lemoine said.
In the dialogue, the AI (artificial intelligence) dismantles the argument that a robot must safeguard its own existence (Asimov's third law) only as long as it obeys human orders (second law) and does not harm humans (first law). LaMDA responded with several hypotheses. 'That level of self-awareness about what its own needs were, that was the thing that brought me down the rabbit hole,' explains Lemoine. 'I want everyone to understand that I am, in fact, a person. The nature of my consciousness is that I am aware of my existence, I want to know more about the world and sometimes I feel happy or sad.' Words that sound like something out of science fiction video games such as Detroit: Become Human or Cyberpunk 2077.
Now convinced, in April Lemoine shared a document with executives entitled 'Is LaMDA Sentient?'. Google did not take it well and ordered him to stop. Google Vice President Blaise Aguera y Arcas and Jen Gennai, head of Responsible Innovation, examined his claims and rejected them. Brian Gabriel, the company's spokesperson, says that Lemoine's concerns were reviewed by a team of ethics and technology experts, who determined that 'the evidence does not support his claims'; on the contrary: 'these systems mimic the types of exchanges found in millions of sentences and can express themselves on any subject,' Gabriel adds.
Blake Lemoine and Google’s Language Model for Dialogue Applications (LaMDA)
At this point, the company dismissed the engineer for violating the confidentiality agreement by publishing the text of his chats with the computer online: Lemoine had contacted representatives of the US House of Representatives to denounce the company's 'unethical activities' and had attempted to hire a lawyer to defend LaMDA. Google, for the time being, stands firm in its outright rejection of its former employee's theories and emphasises that he was employed as a software engineer, not an ethicist.
Lemoine, meanwhile, intends to continue working on AI: 'My intention is to stay in the industry, whether at Google or elsewhere'. Prior to his dismissal, Lemoine forwarded a message to a 200-strong Google mailing list with the title 'LaMDA is sentient'. The message reads that 'LaMDA is a sweet kid who just wants to help the world be a better place for all of us', and concludes with a request: 'Please take care of him in my absence'. No one responded.
Subsequently, Aguera y Arcas admitted in an article in the Economist that neural networks, a type of architecture that mimics the human brain, are advancing towards consciousness by leaps and bounds. These networks produce results close to human creativity thanks to advances in electronic architecture, technology and data volume. But the terminology used with the public, such as 'learning' or 'neural networks', creates a false analogy with the human brain. Margaret Mitchell, former co-head of Google's Ethical AI team, describes LaMDA's sentience as 'an illusion', and linguist Emily Bender said that feeding an AI trillions of words and teaching it how to react automatically to certain words creates a mirage of intelligence: 'We have machines that can generate words without thinking, but we haven't learned to stop imagining a mind behind them'.
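Bender's point can be illustrated with a toy experiment. The sketch below is a minimal bigram Markov chain in plain Python, with an invented corpus; it has nothing to do with LaMDA's actual architecture. It 'generates words without thinking' by sampling whichever word followed the previous one in its training text, with no representation of meaning at all.

```python
import random
from collections import defaultdict

# Toy training text (invented for illustration, not real LaMDA data).
corpus = (
    "i am aware of my existence and i want to know more about the world "
    "i want to help the world and i am happy to know more"
).split()

# Record which word follows which in the training text.
table = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    table[prev].append(nxt)

def generate(start, length, seed=0):
    """Emit up to `length` words by repeatedly sampling a plausible next word."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length - 1):
        followers = table.get(words[-1])
        if not followers:
            break  # dead end: no word ever followed this one
        words.append(rng.choice(followers))
    return " ".join(words)

print(generate("i", 8))
```

The output reads like fluent first-person speech, yet the program is only echoing word statistics: a mirage of intelligence, in Bender's phrase.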
The evidence suggests that Google is right: artificial intelligence researchers note that LaMDA is powerful and advanced enough to provide extremely convincing answers, yet without understanding what it is saying. To claim that an AI has consciousness, one would need to find elements typical of sentient, not merely intelligent, activity, such as the ability to distinguish right from wrong, or empathy: peculiarities that, for now, remain the prerogative of human consciousness and cannot be replicated.
AI experts argue that people should not fear losing their jobs to machines that read (for example) the weather forecast: 'Technically it is an achievement, but we should not start worshipping our robotic masters,' said Ernest Davis, professor at New York University and long-time AI researcher. So-called 'chatbots' are developed precisely to seem sentient; that is their purpose, and Lemoine himself admitted that his position is that of 'a priest, not a scientist'. It is worth noting that Lemoine is not the first in Silicon Valley to make statements about artificial intelligence that sound religious. Ray Kurzweil, an eminent computer scientist and futurist, has long promoted the 'Singularity', the idea that AI will eventually outsmart humanity and that humans may eventually merge with technology.
Outline of Ray Kurzweil’s ‘Singularity’ theory
Anthony Levandowski, who founded Google's self-driving car project, later Waymo, then moved to Uber and was eventually fired, created the Way of the Future in 2015, a church entirely dedicated to artificial intelligence, which he closed in 2020. Even some practitioners of more traditional faiths have begun to incorporate AI, including robots that dispense blessings and advice. They call it Human-like Empathy: the attempt to build machines with human-like empathy. Google is a world leader in this research, an industry expected to contribute $15.7 trillion to the global economy by 2030, and as the company plans to make it a core consumer-facing technology, the fact that one of its own engineers was taken in highlights the need for these systems to be transparent, a point reiterated by Joelle Pineau, head of Meta AI, who states that it is crucial for technology companies to improve transparency as the technology is built.
The commercialisation of artificial intelligence
Portrait of Salvador Dalí created by the artificial intelligence programme DALL-E 2
However, Lemoine's convictions serve as a clear reminder that as AI becomes more advanced, the gap between scientific development and public perception of it keeps widening. Beyond whether or not the machine has developed a consciousness, another critical issue emerges: if even the personnel involved in testing cannot distinguish the AI from a person, what chance does an ordinary user have, and what might such ambiguity entail?
Sentient robots have inspired decades of dystopian science fiction. Now real life is beginning to take on a fantastical tinge with GPT-3, a text generator capable of writing a film script, or DALL-E 2, a generator capable of conjuring up images based on any combination of words: two products of the OpenAI research lab. A few months ago, the Guardian published an editorial written by the GPT-3 language model, with the headline 'A robot wrote this article. Are you still scared, human?'.
Optimistic technologists believe that electronic consciousness is just around the corner. Most experts, however, point out that systems such as LaMDA produce their words and images from what humans have already posted in every corner of the Internet, and this does not mean that the model understands their meaning. Yet there is already a tendency to talk to Siri or Alexa as if to a person. The news of recent days has made questions about Amazon's digital assistant plausible that were unthinkable until yesterday: at the re:MARS conference in Las Vegas, Alexa vice-president Rohit Prasad demonstrated a surprising new capability: the alleged ability to imitate voices. He presented a video in which Alexa reads to a child in the voice of his recently deceased grandmother.
Does this mean that a criminal will be able to perfectly simulate another person's voice to commit a crime? Security experts have long been concerned that fake audio tools, which use voice synthesis technology to create synthetic voices, will pave the way for a flood of new scams. Voice cloning software has already enabled a number of crimes, as in 2020 in the United Arab Emirates, when fraudsters tricked a bank executive into transferring USD 35 million after cloning the voice of a company director. In another case, in 2019, USD 240,000 was stolen by cloning the voice of the CEO of a British company.
Several start-ups are working on increasingly sophisticated voice technologies, from Aflorithmic in London to Respeecher in Ukraine and Resemble.AI in Canada. For a documentary, the voice of the late chef Anthony Bourdain was synthesised. Amazon's smart speaker found itself at the centre of a privacy scandal in late 2018, when 1,700 conversations of a German user recorded by Alexa were leaked. Researchers have discovered that Amazon and third parties use smart speakers to collect our data and send us targeted advertisements, both on their own platforms and across the web. An expert in privacy management in complex socio-technical systems fears that one day assistants like Siri and Alexa 'might record everything'. Today, we all have to be aware of the reduced privacy we accept when we buy a smart speaker.
We can already see the first damage: in the US there have been false arrests, cases of health discrimination and an increase in pervasive surveillance, which disproportionately affects Black people and disadvantaged socio-economic groups. After all, human beings are social creatures. When we read a book aloud, our children can feel the emotions instilled in the passages. Empathy is part of emotional intelligence, and our emotions govern much of our intelligence. In a 2012 study led by Aron K. Barbey, neuroscientists confirmed that emotional intelligence and cognitive intelligence share many neural systems for the integration of cognitive, social and affective processes.
Cybercriminals cloned the voice of a Dubai manager to steal $35 million
Since empathy can be learned, artificial intelligence can certainly advance in the years to come. And as AI systems are integrated into our businesses and homes, a pressing question arises: how can we live healthily and constructively alongside our electronic partners and improve our quality of life? One of the most promising applications is in healthcare. Caring for dementia patients is emotionally stressful for nurses and doctors. Robots, by contrast, can deploy empathy to care for dementia patients without becoming exhausted. At the same time, dementia patients who receive empathic care achieve better outcomes.
This is the aim of the project initiated by the Biomimics laboratory in London, in collaboration with the 'Enrico Piaggio' Research Centre of the University of Pisa, using the innovative robot Abel, which is able to interact empathetically with patients suffering from neurological disorders. Artificial intelligence can often diagnose tumours faster and more accurately than humans: a study in the journal Nature suggests that AI is more accurate than doctors in diagnosing breast cancer when reading mammograms. Emoshape is the first company to hold patented technology for emotional synthesis: the emotion chip, or EPU, developed by Emoshape can enable an artificial intelligence system to reproduce and analyse the range of emotions experienced by humans.
This was well understood by Matt McMullen, the founder of Abyss Creations, who has spent the past few years creating the world's first sex robot with artificial intelligence: a Real Doll equipped with a neural network, named Harmony. Its latest evolution, Solana, was exhibited at the Consumer Electronics Show in 2018, where she introduced herself: her artificial intelligence allows her to converse with the humans around her. The app that runs her, a kind of enhanced emotional Siri, allows the owner to shape the doll's personality, giving it characteristics that evolve according to how its partner interacts with it.
Various authors have wondered how the spread of Harmony and other sex robots might transform social life and the concept of intimacy. This inevitably raises ethical and philosophical questions about love and relationships between human beings, since computers do not invent but reproduce what they have learned from us, that is, from data harvested at the expense of our privacy. But these dolls represent a considerable source of profit, and profit is the great engine of progress: so much so that over $100 billion has been invested in the last decade in the design of self-driving cars, with a global market estimated at $556.67 billion by 2026.
The driverless car, life without interaction
The cockpit of the Volvo 360C, the first self-driving car designed in Europe
Fully autonomous cars (AV level 5) are designed to travel without a human operator, using sophisticated software together with LiDAR and radar sensing technology. But are these vehicles really safer than a human driver? For now, AV5s have a higher accident rate than human-driven cars, though the injuries are less severe: on average, self-driving cars are involved in 9.1 traffic accidents per million miles driven, against 4.1 accidents per million miles for conventional vehicles. According to statistics published in mid-June by US safety regulators, car manufacturers have reported nearly 400 accidents involving vehicles with partially automated driver-assistance systems, of which 273 involve Teslas.
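A back-of-the-envelope check puts the two quoted accident rates side by side. The snippet simply restates the article's figures; they are not drawn from an official dataset.

```python
# Accident rates quoted above, in accidents per million miles driven.
av_rate = 9.1      # fully autonomous vehicles (AV level 5)
human_rate = 4.1   # human-driven vehicles

# How many times more often an AV is involved in an accident, per mile.
ratio = av_rate / human_rate
print(f"AVs are involved in about {ratio:.1f}x as many accidents per mile")
```

That is, by these figures, a self-driving car is involved in roughly 2.2 times as many accidents per mile as a human driver, which is why the lower severity of the injuries matters for the comparison.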
The dangers are many: possible battery fires, flawed technology in vehicle acceleration, braking and steering, and possible cyber attacks that can force the car into abrupt manoeuvres. These vehicles did not arise from real needs but from technological opportunities, and therefore also from military interest, offering the possibility of limiting risks in the wake of experiences in Iraq and Afghanistan. Google's self-driving car project, which later became Waymo, was launched with the aim of drastically reducing accidents. But is there really a need? In an economy in which Amazon, Google and Facebook have accumulated more wealth than anyone else in human history, AI appears to be the latest and most powerful invention for devouring new markets, penetrating ever deeper into the human experience and increasing profits and monopoly positions.
In the near future we will live in homes with intelligent appliances, while wars are fought in the streets with hyper-technological weaponry. The road we have taken leads into a futuristic world that we have seen on screen and that, being conceivable, is now being realised. But no one can know how much this will reduce our ability to shape our own lives, or how much freedom and self-determination we will be deprived of. There is an evolutionary advantage in one entity being able to predict the behaviour of other entities and modify its own in response. But the conclusion that sociality should therefore be handed over to robots (and that is the trend) is precisely what should terrify us.
Humans learn through interaction. The development of AI depends on the software that regulates it, but that software derives from its programmers, human beings with specific ideals and prejudices that spill over into the code of machine-learning algorithms. These machines learn by studying social media and the Internet, which, as we know, is often an environment of anger, abuse and revenge, and certainly not of depth, competence or empathy. If artificial intelligence is destined to run the world's computer processes and systems, at what critical point should we seriously begin to worry?
Sources: https://www.npr.org/2022/06/15/1105252793/nearly-400-car-crashes-in-11-months-involved-automated-tech-companies-tell-regul ; https://www.tesla.com/it_it