AI Weekly: DeepMind’s AlphaCode, Automatic Age Verification, and New Open Language Model

This week in AI, DeepMind detailed a new code generation system, AlphaCode, which it claims is competitive with top human programmers. UK supermarket chains announced they will begin trialing automatic age estimation systems that guess customers’ ages when they buy alcohol. And EleutherAI, a research group focused on open-sourcing high-performance AI systems, released GPT-NeoX-20B, a language model that’s among the largest publicly available models of its kind.

AlphaCode is one of the most sophisticated examples yet of machine programming, that is, tools that automate software development and maintenance processes. DeepMind claims it can write “competition-level” code, placing within the top 54.3% of participants on average across 10 recent contests on the Codeforces competitive programming platform.

The applications of machine programming are vast, which is why there’s excitement around it. According to a Cambridge University study, at least half of developers’ efforts are spent on debugging, which costs the software industry an estimated $312 billion per year. Even migrating an existing codebase to a more efficient language can command a princely sum. For example, the Commonwealth Bank of Australia spent roughly $750 million over five years to convert its platform from COBOL to Java.

AI-based code generation tools like AlphaCode promise to reduce development costs while allowing coders to focus on creative, less repetitive tasks. But AlphaCode isn’t perfect. Besides being expensive to maintain, it doesn’t always produce correct code and could, if similar systems are any indication, contain problematic biases. Moreover, if it were made publicly available, malicious actors could misuse it to create malware, circumvent programming tests, and deceive cybersecurity researchers.

“[W]hile the idea of giving the power to program to people who can’t program is exciting, we have a lot of problems to solve before we get there,” said Queen Mary University of London AI researcher Mike Cook.

Automatic age verification

Three supermarket chains in the UK – Asda, Co-op, and Morrisons – are using cameras to estimate customers’ ages in a test conducted with the Home Office, the UK department responsible for immigration, security, and law and order. The technology, which is already in use at Aldi’s checkout-free store in London, guesses the age of customers who consent, using algorithms trained on “a database of anonymous faces,” according to the BBC. If it judges that a customer looks under 25, they’ll need to show ID to a member of staff.

Yoti – the company providing the technology – claims it has been tested on over 125,000 faces and estimates age to within an average of 2.2 years. But while Yoti says it doesn’t perform facial recognition or retain the images it captures, the system still raises ethical concerns.

Age estimation systems, like other AI systems, can amplify whatever biases are present in the data used to develop them. One study highlighting the effect of makeup, which can cover up signs of aging like age spots and wrinkles, found that age estimation software tends to be more accurate for men than for women. The same research found that the software overestimates the ages of younger people who aren’t Caucasian, underestimates the ages of older Asian and Black people, and can even be influenced by whether or not someone is smiling.

In an interview with Wired, Yoti co-founder and CEO Robin Tombs admitted that the company doesn’t know which facial features its AI uses to determine people’s ages. And although he pointed out that Yoti’s training dataset of “hundreds of thousands” of faces is “representative across skin tones, ages and genders,” and that the company’s internal research found comparable error rates across demographic groups, the academic literature would seem to suggest otherwise. Yoti’s own white paper shows that the technology is least accurate for older women with darker skin tones.

A misjudged age at the supermarket may be nothing more than an inconvenience (and a possible embarrassment). But it could help normalize the technology, paving the way for more problematic applications elsewhere. Daniel Leufer, a Europe policy analyst specializing in AI at civil liberties group Access Now, told Wired that regulators should consider who these systems are likely to fail when weighing potential use cases. “Typically, the answer is people who are regularly rejected by other systems,” he said.

Open source language model

EleutherAI released its newest language model, GPT-NeoX-20B, on Wednesday as part of its mission to broaden access to high-performance text-generating AI. Available now through an API and due to be open-sourced next week, GPT-NeoX-20B outperforms other public language models in several areas while generally being cheaper to deploy, according to EleutherAI.

GPT-NeoX-20B – which was developed on infrastructure provided by specialist cloud provider CoreWeave – was trained on EleutherAI’s 825GB text dataset and contains 20 billion parameters, roughly nine times fewer than OpenAI’s 175-billion-parameter GPT-3. In machine learning, parameters are the parts of the model learned from historical training data. Generally speaking, in the language domain, the correlation between parameter count and sophistication has held up remarkably well.
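To make the idea of a parameter count concrete, here is a minimal, purely illustrative PyTorch sketch (not EleutherAI’s code, and the layer sizes are arbitrary) that tallies the learnable parameters in a toy feed-forward block; GPT-NeoX-20B applies the same bookkeeping at a vastly larger scale.

    # Illustrative only: count the learnable parameters of a toy model.
    # This is the same quantity that reaches roughly 20 billion in GPT-NeoX-20B.
    import torch.nn as nn

    # A toy stand-in for one transformer feed-forward block (sizes are arbitrary).
    toy_block = nn.Sequential(
        nn.Linear(1024, 4096),  # expansion layer: 1024*4096 weights + 4096 biases
        nn.GELU(),
        nn.Linear(4096, 1024),  # projection layer: 4096*1024 weights + 1024 biases
    )

    num_params = sum(p.numel() for p in toy_block.parameters() if p.requires_grad)
    print(f"Toy block parameters: {num_params:,}")  # roughly 8.4 million
    # GPT-NeoX-20B stacks far larger layers to reach ~20,000,000,000 such parameters,
    # still about nine times fewer than GPT-3's 175 billion.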

EleutherAI does not claim that GPT-NeoX-20B solves any of the major problems plaguing current language models, including bias and toxicity. But the group argues that the benefits of releasing the model – and others like it – outweigh the risks. Language models can cost millions of dollars to train from scratch, and inference – that is, actually running the trained model – is another hurdle. One estimate pegs the cost of running GPT-3 on a single Amazon Web Services instance at a minimum of $87,000 per year.

“From spam and astroturfing to chatbot addiction, the use of these models can already manifest clear harms today, and we expect the alignment of future models to be critically important. We believe that accelerating safety research is extremely important,” EleutherAI co-founder Connor Leahy said in a statement.

EleutherAI’s previous models have already spawned brand-new AI-as-a-service startups. If history is any indication, GPT-NeoX-20B will do the same.

For AI coverage, send news tips to Kyle Wiggers – and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine.

Thanks for reading,

Kyle Wiggers

Senior AI Writer
