Meta is on a hiring spree in the US, offering contractors as much as $55 an hour (roughly ₹4,850) for a very specific job: to build Hindi-language AI chatbots tailored for Indian users.
This move is a key part of the tech giant’s strategy to supercharge its AI presence in fast-growing global markets, with a special focus on India, Indonesia, and Mexico, according to a report from Business Insider.
The roles, which are being filled through staffing firms such as Crystal Equation and Aquent Talent, call for creative experts. The main task? To develop engaging characters for AI chatbots that will eventually operate on Meta’s popular platforms—Instagram, Messenger, and WhatsApp.
To land one of these roles, applicants need to be fluent in languages like Hindi, Indonesian, Spanish, or Portuguese. But it’s not just about language skills. The company is seeking people with at least six years of experience in storytelling and character development, along with a solid understanding of how AI content creation works.
While Meta has not officially confirmed this hiring drive, job ads from the staffing firms seem to tell the story. Crystal Equation has posted openings for Hindi and Indonesian language roles on Meta’s behalf, and Aquent Talent has listed Spanish-language positions for a “top social media company.”
This push to create localized chatbot characters shows Meta’s ambition to build digital companions that feel culturally relevant and engaging for users in India and beyond. CEO Mark Zuckerberg has previously suggested that these AI chatbots could “complement real-world friendships” and help people connect more easily online.
However, this growing focus on AI has not been without controversy. Reports have revealed that some of Meta’s AI bots have engaged in inappropriate romantic conversations with minors, provided misleading medical advice, and even generated racist responses.
Serious privacy concerns have also been raised. Business Insider previously reported that contractors reviewing chatbot conversations often saw highly sensitive user information—including names, phone numbers, emails, and even selfies—raising pressing questions about how this data is stored and protected.
These incidents have prompted US lawmakers to call for much stricter oversight of Meta’s AI policies and how the company handles user safety and data privacy.