
Apple supplier responsibility report addresses Foxconn suicides, underage workers


Apple’s overseas manufacturing partners are notorious for poor working conditions, a fact the company has openly acknowledged in its Supplier Responsibility progress report, released Monday. In the report, Apple addresses the controversies surrounding underage workers, working conditions, and worker suicides at its suppliers. The company has audited 127 production facilities worldwide over the last three years, nearly all of them first-time audits.

According to the progress report, Apple found that 49 workers under the age of 16 were employed at its supplier facilities. A single location accounted for 42 of those hires, and because “management [at the location] had chosen to overlook the issue and was not committed to addressing the problem,” Apple terminated its business with that facility.

The report attributes some of this child labor to “unsophisticated” methods of verifying age, and says Apple has instituted further training and consulting to address the issue. The problem wasn’t entirely naivety, however: In the extreme case involving the 42 child workers, a vocational school had purposefully forged student IDs and “threatened retaliation against students” who did not cooperate. Apple has reported the school to the Chinese government and says it has aggressively attempted to return underage workers to their families and education. The facility guilty of employing the children is required to pay for these expenses.

While combating illegal child labor is commendable, Apple still had to address the controversy surrounding supplier Foxconn. It began in 2009, when an employee committed suicide after losing a prototype fourth-generation iPhone. It was, unfortunately, not the last incident: 12 workers took their lives and one died of exhaustion after a 34-hour shift. According to the report, COO Tim Cook and other Apple executives, accompanied by Chinese suicide prevention specialists, traveled to Foxconn to “better understand the conditions at the site.” The document also states that Apple commends some of Foxconn’s efforts, including “hiring a large number of psychological counselors, establishing a 24-hour care center, and even attaching large nets to the factory buildings to prevent impulsive suicides.”

Apple reports that Foxconn is taking measures to improve employee conditions and monitor mental health, and says it will continue working with (read: checking up on) Foxconn.

The report also states that all employees poisoned by toxic chemicals at supplier Wintek’s facility have been “treated successfully,” and that Apple will continue to monitor their health until they have fully recovered. Most of the affected workers are back at the facility.

Above all, the progress report asserts that Apple will hold guilty parties responsible for their actions. How? Judging from the report, suppliers can expect the strict audits to continue, along with an increased presence from the NGOs Apple is collaborating with. The company also says it wants to further empower individual workers by informing them of labor laws and their own rights.

Molly McHugh
Former Digital Trends Contributor