At the end of the 1960s, the Advanced Research Projects Agency (ARPA) of the US Department of Defense launched ARPANET, the interconnected communication project that was the forerunner of the Internet. Things have changed a great deal since then: the Internet's explosive growth has changed the world and our perception of it. The network is now used daily by at least 5 billion people, hosts almost 2 billion websites, reaches every corner of the planet, offers an incalculable number of services and is the repository of an impressive amount of data.
In recent years it has continued to accumulate, at an exponential rate, information concerning every aspect of our daily lives, starting with the traces of our own use: personal data, photos, thoughts, purchases, movements, conversations and much more. This information occupies different layers of the network, from the public one, accessible to anyone who connects, to the most hidden and secret, defended by more or less complex, almost sci-fi barriers.
Although there are organisations – those that analyse search engines, for example – committed to ensuring a minimum of transparency, the sheer vastness of the content still generates enormous problems, and the whole is now so complex as to be unknowable. This huge mass of information is labelled 'unstructured'. If methods were somehow devised to link it together, to make it dialogue within a kind of intelligent structure, what would be the result? Power over information and global communication. A terrifying prospect, and one that has so far lived in the dreams of Elon Musk, the South African-born multi-billionaire who bought Twitter to this end. But that alone will not be enough for him.
Recent years have seen the publication of studies designed to collect information from the net intelligently, by developing increasingly sophisticated algorithms. The interpretation of this information would produce real-time, predictive scenarios relating to the specific dynamics of markets, finance, health and security and, most interestingly, would investigate human behaviour. There are obviously many hypotheses of use: profiling users in the business world of a given geographical area or social class; or, in medicine or security, improving prevention in the event of attacks. It is not difficult to imagine the doubts raised by this branch of science: the boundary of lawfulness in collecting and using personal information is very thin, especially when one sets out to profile billions of individuals.
The ascent into the world of artificial intelligence
The network of Voyager Labs Ltd
Avi Korenblum, born in Israel in June 1966, founded a small software company in 2010, after 20 years of service in Israeli intelligence and experience representing the Mossad at IDT Corporation, Verbx Communications and Fusion Telecommunications International. His specific mission is what would become his fortune: to fish unstructured information out of the vast sea of the Internet and transform it, on the basis of specific requests, into organised and comprehensible answers. One of his travelling companions is the Briton Jonathan Leon Joffe, former director of QTEC Systems Limited, who would run the start-up, christened Voyager Labs Ltd.
Not much is known about the first years of activity: the startup was launched in stealth mode, an arrangement that allows maximum confidentiality for both its activities and its accounts. Voyager Labs made itself heard in November 2016, when it came out of stealth mode. During its incubation period, the small startup raised more than $100 million in funding from backers such as British businessman Ronald Cohen, British entrepreneur Lloyd Marshall Dorfman, OCAPAC Holding – an Irish financing vehicle owned by Oracle – and Chinese tycoon Li Ka-Shing's Horizons Ventures. The company has a research and development centre in Tel Aviv and offices in New York, Washington and London, employs around 130 people – most of them scientists, data analysts and artificial intelligence experts – and has a portfolio of (secret) customers in every sector of global economic life.
The company has three departments: Voyager Analytics, Voyager Finance and Voyager eCommerce. All products share the same philosophy: collect as much unstructured information as possible, social media included, and turn it into structured pictures through processing driven by complex algorithms. According to DoiT International, which provides IT support to Voyager Labs (large cloud spaces to store its data securely), the company collects an average of 10 billion pieces of data per day, amounting to some 10 TB – an impressive figure.
In each of its public presentations, Voyager Labs is at pains to stress that its software collects only public information and therefore operates in total transparency. Critics say otherwise, as can be seen in the forums of the National Defense Industrial Association (NDIA) or in the seminars organised by ISS World Middle East (a trade gathering of cyber-intelligence and surveillance companies), which speak of 'usable and previously unreachable information by analysing and understanding huge amounts of open, deep and obscure Web data'. A truly disturbing presentation.
A controversial former CIA agent
The swearing-in ceremony of Leon Panetta as CIA Director, with Stephen Kappes (Voyager Labs), Dennis Blair, Sylvia Panetta and Joe Biden
Success was immediate, and it needed political cover. In December 2015, some months before the company's public unveiling, Stephen Robert Kappes joined Voyager Labs. Born on 22 August 1951 in Cincinnati, Kappes is married with two children, speaks Farsi and Russian, and holds a Bachelor's and a Master's of Science in Pathology from Ohio State University. After graduation he enlisted in the Marine Corps (1976) and rose rapidly through the ranks until, in 1981, he joined the CIA, where he held numerous positions of responsibility: division chief in New Delhi, Frankfurt, Kuwait City, Moscow and Pakistan, then in the Near East and South Asia until 2000; he returned to America as Deputy Assistant Director for Counterintelligence Operations and then, in 2004, took over from Deputy Director of Operations James Pavitt.
In this role he was one of the supervisors, together with Pavitt, of CIA operations during the compilation of the controversial report on weapons of mass destruction in Iraq. He resigned after the mess became public knowledge, followed a few days later by the Deputy Director, John E. McLaughlin. In May 2006 Kappes was recalled and nominated Deputy Director of the CIA by John Negroponte, with the objective of relaunching the agency's image after years of scandals. He did not succeed, because several intelligence bigwigs were against him, accused him of 'gross insubordination' and worked for his removal.
In 2009, Kappes was convicted in absentia by an Italian court for having directed, in 2003, the kidnapping, rendition and torture of the Egyptian citizen Abu Omar, seized in Milan by CIA agents and transferred to Egypt, where he was tortured. His style was already well known: he had asked Obama to 're-establish secret prisons and use aggressive interrogation methods'. On 14 April 2010, Kappes resigned in a heated atmosphere: an operation he was in charge of had turned into a massacre through 'carelessness', namely a failure to carry out a search. On 30 December 2009, a Jordanian wearing an explosive vest had been let into the CIA's Afghan base; seven CIA officers died, including the base chief, and Kappes was investigated.
It was not an honourable exit: for many observers, he is one of the principal architects of the numerous failures of the American intelligence agency. Back in April 2005, during his earlier break from the CIA, he had joined ArmorGroup International as Director and member of the Board of Directors. He was subsequently appointed Director of Qtec Analytics Ltd (January 2013) and of Quest Global Holdings Ltd, two cybersecurity companies. In December 2015 he began his venture with Voyager Labs.
Voyager Analytics under indictment
In April 2020, a scandal swept through the Colombian security forces: officials had spied on the 2016 talks with the Revolutionary Armed Forces of Colombia (FARC) rebels, which ended in a peace agreement. The military improperly collected information on more than 130 people – journalists from foreign and domestic media, politicians and NGO representatives – in an attempt to profile their behaviour. The espionage operation used software from Voyager Labs, purchased a couple of years earlier. On 1 May 2020, the Colombian Minister of Defence, Carlos Holmes Trujillo, announced the expulsion of 11 officers, while one general voluntarily resigned.
In August 2020, Voyager Labs became involved in the historic conflict between the Chilean government and the indigenous Mapuche people. In the crosshairs was Operation Hurricane, in which the Chilean Carabineros, through searches and arrests, attempted to intimidate the Mapuche community, fabricating false evidence to blame certain families for terrorism. The operation was carried out with the involvement of the Chilean Specialised Operative Intelligence Unit, using Voyager Labs software to profile Mapuche leaders.
A similar case occurred in Spain: in January 2021, the CTTI, the Centre for Telecommunications and Information Technologies of the Generalitat de Catalunya, spent 1.5 million euros to acquire two platforms, Voyager Analytics and Voyager Check, for the official purpose of refining the fight against jihadist terrorism. But there is deep concern that such tools could be used for abusive surveillance of ordinary communities: the Mossos d'Esquadra (Catalonia's police force) are no strangers to hacking and espionage. The list of violations is long. Going back in time, CESICAT, the Centre for Information Security of Catalonia (no longer operational), ended up accused of profiling social activists and journalists and spying on their behaviour; a parliamentary enquiry led to calls for the resignation of the then Minister of the Interior, Felip Puig.
Last but not least comes the Italian criminal investigation: in a November 2019 inquiry by the Prosecutor's Office of Florence, the main suspects are the lawyer Alberto Bianchi and his friend Marco Carrai, charged with influence peddling and illegal party financing. At the centre of the scandal is the Open Foundation, created by Bianchi himself to finance the political activities of Matteo Renzi, former Prime Minister and founder of the Italia Viva party. Towards the end of 2021 it emerged that Carrai allegedly met Avi Korenblum, founder of Voyager Labs, in 2016 to buy the company's software with the aim of 'monitoring and influencing the campaign' for a constitutional referendum that, if approved, would have profoundly changed the Constitution. Renzi lost the referendum, resigned, and his political trajectory quickly began to decline.
Is the LAPD violating the First Amendment?
Criminal organisations are increasingly using social networks as a communication vehicle. Voyager Labs offers police analysis tools that promise to foil criminal acts
The use of social media against crime is on the rise: a survey conducted by the International Association of Chiefs of Police in 2017 found that 70% of police departments use social media to monitor their citizens. Voyager Labs' software has also been purchased by the Los Angeles Police Department – a body that has long used highly contested tools, such as Geofeedia, which processes the geographical positioning of individuals by exploiting social media, or Media Sonar and Dunami, two pieces of software used for individual profiling through data collected on the net. The information from these software packages is managed globally by Palantir's Gotham platform, one of the most controversial and powerful tools used by police forces around the world.
The suspicion that police forces are making unscrupulous use of this software is very high. In January 2020, the Brennan Center for Justice asked the Los Angeles Police Department to report on the methods used to gather information on individuals, groups and activities through the use of social media such as Facebook, Twitter and Instagram. The response arrived on 24 February, but was deemed partial and unsatisfactory. On 17 November 2020, the Brennan Center filed a lawsuit in California Superior Court against the police department, forcing it to produce additional documentation: between March and September 2021, 12 files with a total of about 10,000 pages arrived in response.
From the documents – which the Brennan Center made public in November 2021 – a picture emerges that borders on a violation of the First Amendment. The department confirms that it makes massive use of investigative tools on social networks: it intervenes undercover using fake profiles, collects personal information of all kinds, and exploits predictive software such as Voyager Analytics. It is above all the way these tools operate that raises doubts, in both collection methods and outputs: information is acquired without leaving a trace; it is used to reconstruct closed social profiles and closed groups (thus violating the platforms' rules), to analyse the relationships between profiles, and to identify the users 'most involved in a given position: emotionally, ideologically and personally'. All this without any auditing of procedures or personal accountability for how data is collected, used and stored.
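Voyager Labs' actual algorithms are secret, but the idea of mapping relationships between profiles and ranking the users "most involved" can be sketched in a few lines. The sketch below is purely illustrative – the user names and interaction pairs are invented, and real systems use far more elaborate graph metrics than simple degree centrality:

```python
# Toy illustration (not Voyager Labs' method): rank profiles by how many
# recorded interactions they appear in - a crude "involvement" proxy.
from collections import defaultdict

# Hypothetical (user, user) pairs: replies, tags, shares between profiles.
interactions = [
    ("ana", "ben"), ("ana", "carl"), ("ana", "dana"),
    ("ben", "carl"), ("eve", "dana"),
]

# Count how many interactions each profile takes part in (degree centrality).
degree = defaultdict(int)
for a, b in interactions:
    degree[a] += 1
    degree[b] += 1

# Sort profiles from most to least connected.
ranking = sorted(degree.items(), key=lambda kv: kv[1], reverse=True)
print(ranking[0])  # ('ana', 3)
```

Even this toy version shows why civil-liberties groups object: whoever controls the interaction data decides who counts as "most involved", with no notice to the people being ranked.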
In a Voyager Labs document describing an operation against Islamic terrorism, the company admits that its tools can automatically monitor and classify people according to their risk of becoming Islamic fundamentalists. The results are colour-coded (green, orange and red) on the basis of 'artificial intelligence calibrations' – information obtained within minutes, without human involvement, and without the suspected citizen being able to defend himself.
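Mechanically, such a traffic-light classification reduces to mapping an opaque score onto a handful of labels. The sketch below is a hypothetical reconstruction – the score scale and cut-off values are invented, since the document describes only the colour output, not the 'calibrations' behind it:

```python
# Hypothetical sketch of a traffic-light risk classifier. The 0-1 score and
# the 0.4 / 0.7 thresholds are invented for illustration; the point is that
# a person's label hinges entirely on unexamined cut-offs, with no human
# review and no way for the person scored to contest the result.
def colour_code(risk_score: float) -> str:
    """Map a 0-1 risk score to a green/orange/red label (assumed cut-offs)."""
    if risk_score >= 0.7:
        return "red"
    if risk_score >= 0.4:
        return "orange"
    return "green"

print(colour_code(0.82))  # red
print(colour_code(0.41))  # orange
print(colour_code(0.10))  # green
```

A score of 0.39 versus 0.41 puts the same person in a different colour band – which is exactly the arbitrariness critics of automated risk scoring point to.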
7 July 2016: thousands of people take to the streets in New York and other American cities to shout 'Black Lives Matter' – protests that police monitored with undemocratic surveillance software such as Geofeedia
A frightening method, if one considers that, to be flagged as a probable Islamic terrorist, it is enough to have a forgotten Facebook connection, perhaps created years earlier, with someone who once typed "Allah is great". So frightening that even a representative of the LAPD expressed concern, both at the risk that Voyager's monitoring mechanisms could become unmanageable and at the memory of the violent protests triggered by law enforcement's use of Geofeedia in 2016.
No one can know how many people around the world have been classified as Islamic terrorists. This information is held by Voyager Labs and is not accessible to any legal authority: it is a secret asset with immense destructive power because, in the hands of the intelligence services, it can send anyone to Guantanamo without accountability. This strikes at the fundamental principle of the democratic concept of justice: those proven guilty of a crime should be punished – never those merely suspected of knowing someone who might one day commit one.
The Brennan Center believes that these methodologies can easily produce aberrant results. Arrests based on artificial intelligence findings risk indiscriminately targeting innocent activists and communities of colour. On 7 December 2021, the Brennan Center again asked the Office of Intelligence and Analysis of the US Department of Homeland Security and Immigration and Customs Enforcement for information on the use of tools developed by Voyager Labs, ShadowDragon and Logically Inc. The requests, variously motivated, went unanswered.
Facebook does not like Voyager Labs
Facebook asks LAPD to suspend practices that violate platform rules
At the beginning of 2018, Zuckerberg found himself dealing with the hurricane that hit his creation, Facebook: the Cambridge Analytica scandal. Accused of failing to protect the data of more than 87 million users (data stolen for use in political propaganda), Facebook developed a strong sensitivity about protecting its users' personal information, not least because the affair cost it a good $643,000 in legal settlements. One of its moves was to close the accounts of members of the Cybersecurity for Democracy team in August 2021: the researchers were collecting information from the network for research on Covid-19, using a browser extension to circumvent detection systems and gather data such as usernames, ads and links to user profiles, some of which were not publicly visible. According to Facebook, this collection, like Cambridge Analytica's, violated users' privacy.
In November 2021, it emerged that Facebook did not like how casually the Los Angeles Police Department was using Voyager Labs' software. According to Facebook, such software violates privacy, but the most serious accusation was aimed at police officers using fake accounts to act undercover – a blatant violation of the platform's standards, and a practice hardly unique to the Californian police. Facebook (Meta) demanded that these activities cease. In vain. It was not the first time Facebook had ordered a police force to suspend investigations conducted through fake accounts: it had already happened in 2018 with the Memphis police, and in that case Facebook managed to have the accounts suspended.
According to documents published by the Brennan Center, Twitter was also used for indiscriminate collection, via a monitoring tool called ABTShield, developed by the Polish firm EDGE NPD. ABTShield collected so many tweets that the software itself went haywire, forcing EDGE NPD to halt its activities; even so, a huge volume of data continued to be gathered throughout the test period – almost 2 million tweets delivered, an average of about 70,000 per day. According to EDGE NPD, the software collected about 200 million tweets in all during the trial, passing only a fraction of them to the LAPD. The rest remains in the company's databases, and who knows how, and by whom, it will be used.
The big bluff
Deep learning applied in the detection of violent behaviour through images
"This is hyperbolic artificial intelligence marketing. The more they brag, the less I believe it," says Cathy O'Neil, data scientist and algorithm auditor, convinced that Voyager Labs' promises have nothing scientific about them: "They're telling us, 'We can see if someone has criminal intent.' No, you can't. Not even in the case of people who commit crimes can you predict that they have criminal intentions." One example is the use of predictive policing in Santa Cruz, California, from 2011: after nine years, the city council voted unanimously to ban it, because the method exacerbated racial inequalities. As if proof were needed.
The idea that there are predictive risk indicators that can be used successfully is discredited by decades of academic research. There is no evidence that such software has brought benefits in the fight against terrorism and crime, but on the contrary, it has proven to be discriminatory, divisive and destructive to communities. Particularly in the area of Islamic terrorism, where Voyager Labs has worked most intensively, analysis techniques are based on concepts that have been shown to be totally aberrant, such as considering Islamic radicalisation an automatism that leads to terrorism.
In 2017, Voyager Labs was named among the 'Cool Vendors in AI for Banking and Investment Services 2017' by the American giant Gartner. In November of the same year, it also received the 2017 European New Product Innovation Award from the prestigious consultancy Frost & Sullivan. In September 2020 it won the 2020 Open Source Technology Innovation of the Year Award from the OSMOSIS Institute, which trains investigators, researchers, journalists and cyber-intelligence analysts in Open Source Intelligence techniques and practices. In November 2020, it was awarded 'Best AI Industry Solution for Intelligence' in the AI Breakthrough Awards 2020, conducted by the market intelligence organisation of the same name.
The accolades are there, but real evidence of effectiveness is lacking. Yet there are those who believe in it and are betting heavily on predictive techniques: in November 2021, Voyager Labs arrived in Japan and, in synergy with Terilogy Worx, signed an important agreement with a Japanese government agency to supply its platforms for combating terrorism and crime. The signing came a few months after another important agreement, this time with Microsoft: Voyager Labs joined the 'Microsoft One Commercial Partner' programme, under which its platforms are marketed jointly with Microsoft's global sales teams across 170 countries.
Voyager Labs therefore continues to find customers. One can only hope that the method matures and becomes effective – and, above all, that it causes as little collateral damage as possible. When military technology aims not to help but to replace human intelligence, disquiet takes the place of fascination: the risk of making irreparable and inhuman mistakes is dramatically real. Expressing an opinion cannot in itself be considered a crime or a precursor to violence. In the past, fighting for civil rights or for women's right to vote was considered a terrorist expression. This is the way to repress ideas, not violence.
 https://sirronaldcohen.org/ ; https://insidebigdata.com/2016/11/02/voyager-labs-emerges-from-stealth-mode-with-next-gen-cognitive-computing-deep-insights-platform/
 https://www.forbes.com/profile/lloyd-dorfman/?sh=5d24653b227f ; https://insidebigdata.com/2016/11/02/voyager-labs-emerges-from-stealth-mode-with-next-gen-cognitive-computing-deep-insights-platform/
 CAMBRIDGE ANALYTICA: I CRIMINALI CHE CI CONVINCONO A VOTARE TRUMP | IBI World Italia