The company, which has worked closely with the UK government on artificial intelligence safety, the NHS and education, is also developing AI for military drones.
Faculty AI, a consultancy, has “experience in developing and deploying AI models on UAVs (unmanned aerial vehicles)”, according to a defense industry partner.
Faculty has emerged as one of the most active companies selling AI services in the UK. Unlike OpenAI, DeepMind, Anthropic and others, it does not develop models of its own, focusing instead on reselling models, notably OpenAI’s, and consulting on their use in government and industry.
Faculty became particularly well known in the UK after working on data analysis for the Vote Leave campaign before the Brexit vote. Boris Johnson’s former adviser Dominic Cummings then gave the company government work during the pandemic and included its chief executive, Marc Warner, in meetings of the government’s scientific advisory committee.
Since then, the company, officially called Faculty Science, has been testing AI models for the UK government’s AI Safety Institute (AISI), which was set up in 2023 under former prime minister Rishi Sunak.
Governments around the world are racing to understand the safety implications of artificial intelligence after rapid improvements in generative AI sparked a wave of hype about its potential.
Weapons companies are keen on the possibility of equipping drones with AI, from “loyal wingmen” that can fly alongside fighter jets to loitering munitions that can already wait for a target before firing.
The latest technological developments raise the prospect of drones that can track and kill without a human “in the loop” to make the final decision.
In a press release announcing its partnership with London-based Faculty, the British startup Hadean wrote that the two companies are working together on “human subject identification, tracking object movement, and exploring autonomous swarming development, deployment and operations”.
It is understood that Faculty’s work with Hadean did not involve weapons targeting. However, Faculty declined to answer questions about whether it is working on drones capable of applying lethal force, or to give details of its defense work, citing non-disclosure agreements.
“We help our defense partners develop novel AI models to create safer and more robust solutions,” a Faculty spokesperson said, adding that the company has “strict ethics policies and internal processes” and follows the Ministry of Defence’s ethical guidelines on AI.
The spokesperson said Faculty has a decade of experience in AI safety, including work on countering child sexual abuse and terrorism.
The Guardian’s ultimate owner, the Scott Trust, is an investor in Mercuri VC (formerly GMG Ventures), which holds a minority stake in Faculty.
“We have been working on AI safety for 10 years and are the world’s leading experts in this field,” the spokesperson said. “That is why we are trusted by governments and model developers to ensure frontier AI is safe, and by defense clients to apply AI ethically to help keep the public safe.”
Many experts and politicians have urged caution before introducing autonomous technologies into the military. In 2023, a House of Lords committee called on the UK government to seek a treaty or non-binding agreement clarifying how international humanitarian law applies to lethal drones. In September, the Green Party called for legislation to outlaw lethal autonomous weapons systems entirely.
Faculty continues to work closely with AISI, a position in which its judgments could influence UK government policy.
In November, AISI contracted Faculty to research how large language models are “used to facilitate crime and other undesirable conduct”. AISI said the winner of the contract would “become a key strategic collaborator on AISI’s security team, directly contributing critical information to AISI’s system security models”.
Faculty also works directly with OpenAI, the startup that set off the latest wave of AI hype, to use its ChatGPT models. Experts have previously raised concerns about a potential conflict of interest in Faculty’s work with AISI, according to the news website Politico. Faculty tested OpenAI’s o1 model before its release but has not given details of which other companies’ models it has tested.
The government has previously said that Faculty AI’s work with AISI is consistent with its other business, crucially because the company does not develop models of its own.
Green Party peer Natalie Bennett said: “The Green Party has long expressed serious concerns about the ‘revolving door’ between industry and government, raising issues from gas company staff being seconded to work on energy policy to former defense ministers going to work for weapons companies.”
“It is a serious concern that a single company has taken on numerous government contracts for AI work while also collaborating with the AI Safety Institute on testing large language models. It is not so much ‘poacher turned gamekeeper’ as playing both roles at the same time.”
Bennett also stressed that the UK government is “not yet fully committed” to ensuring humans are kept in the loop for autonomous weapons systems, as recommended by the Lords committee.
Faculty, whose largest shareholder is a Guernsey-registered holding company, has also sought to cultivate close ties across the UK government, securing contracts worth at least £26.6m, according to government disclosures. These include contracts with the NHS, the Department of Health and Social Care, the Department for Education and the Department for Culture, Media and Sport.
These contracts are a significant source of income for the company, which generated sales worth £32m in the year to 31 March. It made a loss of £4.4m over that period.
Albert Sánchez-Graells, a professor of economic law at the University of Bristol, warned that the UK is relying on technology companies’ “self-restraint and responsibility in the development of AI”.
“Companies supporting AISI’s efforts need to avoid systemic conflicts of interest arising from their work for other government departments and from their broader market-based AI business,” Sánchez-Graells said.
“Companies with a portfolio of AI activities as broad as Faculty’s have questions to answer about how they ensure their advice to AISI is independent and impartial, and how they avoid exploiting that knowledge in their other work.”
The Department for Science, Innovation and Technology declined to comment, saying it does not discuss the details of individual commercial contracts.