In April, Hogarth authored a viral piece for the Financial Times, in which he argued that the corporate race toward “God-like” AI posed a risk that could, in the worst-case scenario, “usher in the obsolescence or destruction of the human race.” Two months later, the U.K. government announced it had appointed the 41-year-old Brit as the head of the U.K.’s new AI safety initiative, now known as the Frontier AI Taskforce.

The £100 million ($126 million) investment, the largest by any state into the field of AI safety according to Hogarth, is part of a wider push by the U.K. to cast itself as a leader in the move toward global norms around the use and regulation of AI systems. In November, the country will host a first-of-its-kind AI summit convening international policymakers with top CEOs and safety researchers. Under Hogarth’s guidance, the U.K.’s AI Taskforce aims to build capacity inside government to do the kinds of safety research currently possible only in industry. The Taskforce is “the most significant effort globally, in any nation-state, at tackling AI safety,” Hogarth tells TIME, in an interview at the Taskforce’s headquarters, located in the same Westminster building as the U.K. government. “We are putting researchers in the Taskforce working on AI safety on a similar footing to those working on AI safety in tech companies.”

Although his FT essay warned of the longer-term risks of superintelligent AI, Hogarth says his Taskforce will prioritize safety research into near-term risks. “There is an enormous amount of capital and energy being poured into making more capable coding tools,” he says by way of example. “Those same tools will augment the potential for the development of various kinds of cyberattacks. That’s an issue that is growing in risk, and the kind of thing that we’re looking into.” Biosecurity, given the risks of AI making it easier to design and synthesize dangerous pathogens, is another area that will be a focus, he says.

While £100 million sounds like a lot of money, it pales in comparison to the budgets of the leading AI companies. OpenAI raised $300 million at its last fundraising round alone, in April, putting its total valuation at $28.7 billion. Google spent $39.5 billion on research and development in 2022. And salaries for world-leading machine-learning researchers can run to several million dollars per year.

Hogarth acknowledges it’s impossible for the U.K. to compete with the tech giants when it comes to training “foundation” AI models at the frontier of capability (“people are spending $100 million just on a single training run”), but he believes it’s still possible on a government budget to meaningfully contribute to safety research, which is “much less capital-intensive.”

Another reason safety research is still mostly done inside AI companies is that AI labs tend to guard the confidential “weights” and training datasets of their most powerful models, partly because they are easily replicable trade secrets, but also because of real concerns about the dangers of their proliferation. To do meaningful safety work at the same standard as industry researchers, Hogarth’s Taskforce will need to secure access. In June, Prime Minister Rishi Sunak announced that three world-leading AI labs (OpenAI, Google DeepMind, and Anthropic) had committed to give the U.K. “early or priority access” to their models for safety-research purposes. But in his interview with TIME two months later, Hogarth declined to share details about what type of access, if any, the government has secured.

When it comes to attracting talent, though, Hogarth says neither budget nor Big Tech competition has been a barrier. “There’s a community of researchers who’ve been waiting for an opportunity to do this kind of public service,” he says. (In his role as a partner at the investment fund he co-founded, Plural, he has also agreed to divest from his stakes in AI companies that are “building foundation models or foundation-model safety tools,” according to the U.K. government.) Notably, Hogarth is working without pay, because he sees it as a “critical public service.” When it comes to staffing up the rest of the Taskforce, Hogarth says lower salaries haven’t dissuaded many formidable machine-learning researchers from joining.