According to a source and internal correspondence, dozens of contractors who helped train the OpenAI language models that power ChatGPT were laid off in March.
According to Slack screenshots viewed by Insider, San Francisco-based Invisible Technologies laid off 31 contractors on March 16. The cuts came even as OpenAI’s ChatGPT became a global hit and OpenAI itself continued to hire.
The Slack screenshots show that hundreds of Invisible contractors work as “advanced AI data trainers” on OpenAI’s GPT models. According to an Invisible contractor, who requested anonymity because of a non-disclosure agreement, the data trainers’ work improves the models’ coding, creative writing, and conversational abilities. Insider verified the contractor’s identity and employment.
Invisible’s vice president of operations, Kamron Palizban, addressed the layoffs at an all-staff meeting in March. According to a recording of the meeting obtained by Insider, he said OpenAI intended to cut contractors because of shifting business needs. During the discussion, Palizban said many of the laid-off contractors worked on projects that didn’t provide enough return on investment for OpenAI.
Invisible Technologies and OpenAI declined to comment.
The layoffs came after OpenAI reportedly recruited about 1,000 contractors around the world.
Invisible’s partnership with OpenAI offers a look at ChatGPT’s data-training methods, which have largely been kept private.
OpenAI’s contract with Invisible was adjusted following a six-month staffing increase reported by Semafor. Sources told Semafor that by January, OpenAI had employed around 1,000 data-labeling contractors in Eastern Europe and Latin America.
Invisible’s layoffs came two months after Microsoft invested $10 billion in OpenAI. And Invisible isn’t OpenAI’s only contracting partner.
According to Time, the San Francisco-based contracting firm Sama ended its partnership with OpenAI in February 2022 after finding that its Kenyan data labelers were being asked to review content depicting sexual abuse, hate speech, and violence.
In a statement to Time, OpenAI said, “Classifying and filtering harmful [text and images] is a necessary step in minimizing the amount of violent and sexual content included in training data and creating tools that can detect harmful content.”
The life of an AI data trainer
The Invisible contractor said a data trainer’s main duty is to review conversations between the AI and users for content that is illegal, contains private information, is offensive, or is riddled with errors. The contractor described a typical day to Insider:
To start a shift, trainers check their team’s assigned tasks in an internal work browser. A task might read “Have a conversation about a random topic with browsing disabled,” prompting the trainer to type a question into a message box.
The model then generates four responses to the query. Using a drop-down menu, contractors evaluate each response for factual mistakes, spelling and punctuation errors, and harassment. According to a presentation the contractor shared with Insider, they then rate each response on a scale of one to seven, with seven signifying a “basically perfect” answer.
Next, contractors write and submit what they consider an ideal response. The contractor said the output then goes to quality checkers at OpenAI and Invisible, and the cycle repeats with the next task.
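The workflow the contractor described can be pictured as a simple data record: a prompt, four rated responses, and the trainer’s own written answer. The sketch below is purely illustrative — every field name and the validation rule are assumptions based on the description above, not OpenAI’s or Invisible’s actual schema or tooling.

```python
# Hypothetical sketch of one annotation task as described by the contractor.
# All names and structures are illustrative assumptions, not a real schema.
from dataclasses import dataclass, field


@dataclass
class ResponseRating:
    response_text: str
    issues: list[str] = field(default_factory=list)  # e.g. "factual error", "spelling"
    score: int = 7  # 1-7 scale; 7 means a "basically perfect" answer


@dataclass
class AnnotationTask:
    prompt: str                    # e.g. "Have a conversation about a random topic"
    ratings: list[ResponseRating]  # one rating per model response (four per task)
    ideal_response: str = ""       # the trainer's own written answer

    def is_complete(self) -> bool:
        # A finished task has four rated responses, scores within the
        # 1-7 range, and a non-empty ideal response from the trainer.
        return (
            len(self.ratings) == 4
            and all(1 <= r.score <= 7 for r in self.ratings)
            and bool(self.ideal_response.strip())
        )


task = AnnotationTask(
    prompt="Have a conversation about a random topic with browsing disabled",
    ratings=[ResponseRating(response_text=f"model reply {i}") for i in range(4)],
    ideal_response="A trainer-written reference answer.",
)
print(task.is_complete())  # True once all four ratings and the ideal answer exist
```

A completed task would then be submitted for review by quality checkers, per the contractor’s account.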
“They’re in a stage where they’re on the cusp of getting a lot more clarity on where they’re going,” Palizban said of OpenAI during the meeting.
Grace Matelich, a partner and operations manager at Invisible, said in the recorded meeting that contractors were selected for layoffs based on “quality” and “throughput” metrics.
According to the recording, some contractors who fell short of those metrics, along with those who were onboarded but didn’t “hit their bar for certification,” were laid off, while many others were offered the chance to join another OpenAI team. “If you’re still here today, I want you to know it’s because we have faith and trust in your ability to operate with excellence,” Matelich added.