New York City’s Local Law 144 of 2021 is a groundbreaking local law regulating the use of artificial intelligence (AI) and automated decision-making tools (known as Automated Employment Decision Tools, or AEDTs) in hiring and employment practices. The law took effect on January 1, 2023, with enforcement beginning on July 5, 2023, and requires companies to ensure transparency and fairness when using AI for employment decisions such as hiring and promotions.

Key Provisions of NYC Law 144

1. Bias Audits:
Employers and employment agencies using AEDTs to evaluate candidates for hiring or promotion must have the tools undergo a bias audit conducted within one year prior to use, performed by an independent auditor. The audit assesses whether the tool produces a disparate impact on candidates by sex and by race/ethnicity, using the demographic categories employers already report to the EEOC. Concretely, the auditor calculates the selection rate for each category and the impact ratio (each category’s selection rate divided by the rate of the most-selected category), typically using the employer’s historical data from the tool.
2. Notification Requirements:
Employers must notify candidates (or employees, in the case of promotions) that an AEDT will be used in the hiring or promotion process. The notice must describe the job qualifications and characteristics the tool assesses and must be provided at least ten business days before the tool is used. Candidates may request an alternative selection process or a reasonable accommodation, although Law 144 itself does not obligate the employer to provide one.
3. Disclosure of Results:
Employers are required to publish a summary of the most recent bias audit results, including the audit date and the selection rates and impact ratios, on the employment section of their website before using the tool. This transparency requirement helps hold companies accountable for using fair tools.
4. Data Retention and Reporting:
Companies must retain audit reports and ensure proper documentation of how AI and automated tools are applied. This allows regulatory bodies to review compliance if necessary.
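The selection-rate and impact-ratio calculation at the heart of a bias audit can be sketched in a few lines. The following is a minimal illustration with hypothetical data; the categories, counts, and function name are inventions for the example, not DCWP-prescribed code:

```python
# Illustrative sketch of the selection-rate / impact-ratio calculation
# central to a Law 144 bias audit. The categories and counts below are
# hypothetical example data, not real audit results.

# (category, applicants evaluated by the AEDT, applicants selected)
historical_data = [
    ("Male",   1000, 240),
    ("Female", 1000, 180),
]

def impact_ratios(data):
    """Return {category: (selection_rate, impact_ratio)}."""
    rates = {cat: selected / total for cat, total, selected in data}
    best = max(rates.values())  # rate of the most-selected category
    return {cat: (rate, rate / best) for cat, rate in rates.items()}

for cat, (rate, ratio) in impact_ratios(historical_data).items():
    print(f"{cat}: selection rate {rate:.2f}, impact ratio {ratio:.2f}")
```

In this hypothetical data, the most-selected category always has an impact ratio of 1.0, and every other category’s ratio shows how far its selection rate falls below that benchmark.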

How a Company’s HR Department Using AI Can Be Found Not in Compliance with NYC’s Law 144

There are several ways a company’s HR department could violate NYC’s Law 144. Below are examples of non-compliance, along with explanations of how these issues could arise:
1. Failure to Conduct Annual Bias Audits
Example: A company uses an AI tool to screen resumes for job applicants. However, the company does not conduct an annual bias audit to check for disparate impacts on candidates based on race, gender, or ethnicity.
Non-Compliance: NYC Law 144 requires an annual bias audit to ensure that the AI tools are not disproportionately affecting certain protected groups. Failing to conduct this audit can lead to claims of biased hiring practices and legal penalties.
Impact: If a pattern emerges showing that women or minority groups are disproportionately rejected by the AI tool, the company could be found in violation of both Law 144 and broader anti-discrimination laws.
2. Using AI Tools Without Notifying Candidates
Example: An HR department implements an AI-driven tool that ranks candidates based on their qualifications and experience. However, candidates are not informed that AI is being used in the hiring process.
Non-Compliance: Law 144 mandates that companies provide candidates with advance notice when AEDTs are being used. Failing to notify applicants at least ten business days before their data is processed by the tool constitutes a violation of the law.
Impact: The company could face penalties, as well as complaints from candidates who may feel their hiring process was unfair or opaque.
3. Inadequate or Misleading Disclosure of Bias Audit Results
Example: The company posts a general statement on its website saying that its hiring AI tool has been audited for bias but does not provide any specifics on the results of the audit or information on any corrective actions taken to address identified disparities.
Non-Compliance: Law 144 requires publication of a summary of the bias audit’s results, including the audit date and the selection rates and impact ratios. Merely claiming that an audit has been conducted without providing this information fails the law’s transparency requirements.
Impact: This can lead to regulatory scrutiny, especially if there are complaints from candidates or employees about biased hiring outcomes. The company may also face fines and demands to correct its practices.
4. Using Unapproved or Non-Compliant AI Tools
Example: A company adopts a hiring tool developed by a third-party vendor. The vendor claims the tool is AI-based but has not had the required bias audit conducted to assess its impact on different demographics.
Non-Compliance: If the tool has not undergone a proper bias audit, the company using it will be held responsible for violating Law 144, even though the tool was developed by a third-party vendor. Employers must ensure the tools they use comply with the law.
Impact: The company could be found liable for using a tool that produces biased outcomes, potentially leading to both reputational damage and legal fines.
5. Improper Documentation or Data Retention
Example: After conducting a bias audit, the HR team fails to retain detailed records of the audit, or the data used for analysis is incomplete. Later, a regulatory body requests a review of the audit results, and the company is unable to provide adequate documentation.
Non-Compliance: Demonstrating compliance with Law 144 requires proper retention of audit documentation. Incomplete records could result in a company being deemed non-compliant during an investigation.
Impact: This may lead to fines, audits, or other penalties. Additionally, it opens the door for further regulatory scrutiny of other AI-driven processes within the company.
6. Bias Found, But No Corrective Action Taken
Example: During a bias audit, it is discovered that the AI tool has a 15% higher rejection rate for minority candidates compared to white candidates. Despite this finding, the company does not take steps to rectify the tool’s algorithm or implement corrective actions.
Non-Compliance: Law 144 itself mandates the audit and its disclosure rather than a fix, but knowingly continuing to use a tool with a documented disparate impact exposes the company to liability under the NYC Human Rights Law and federal anti-discrimination laws.
Impact: The company could face legal action from rejected candidates or government regulators for knowingly using a biased tool without taking steps to mitigate its discriminatory effects.
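A finding like the one in the last example is often evaluated against a review threshold. The sketch below assumes the EEOC’s “four-fifths” rule of thumb (an impact ratio below 0.80 triggers review); Law 144 itself sets no numeric cutoff, and the data and function name here are hypothetical illustrations:

```python
# Hypothetical compliance check: flag categories whose impact ratio
# falls below a chosen review threshold. Law 144 does not mandate a
# numeric cutoff; 0.80 (the EEOC "four-fifths" rule of thumb) is used
# here purely for illustration.

THRESHOLD = 0.80

def flag_for_review(impact_ratios, threshold=THRESHOLD):
    """Return the categories whose impact ratio warrants corrective review."""
    return [cat for cat, ratio in impact_ratios.items() if ratio < threshold]

# Example: impact ratios from a hypothetical audit.
audit_ratios = {"Group A": 1.00, "Group B": 0.72, "Group C": 0.91}
print(flag_for_review(audit_ratios))  # Group B falls below the 0.80 threshold
```

A company that computes such flags but never acts on them would be in the position described in example 6: aware of a disparity and still using the tool unchanged.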

How is this policed and enforced?

1. Department of Consumer and Worker Protection (DCWP) Oversight
  • The New York City Department of Consumer and Worker Protection (DCWP) is the main regulatory body tasked with enforcing Law 144.
  • Investigations and Audits: DCWP has the authority to investigate companies to ensure compliance with the law. They can initiate an audit of a company’s AI tools or request bias audit reports to verify whether the tools have been evaluated for potential biases.
  • Fines and Penalties: The law empowers DCWP to impose civil penalties on companies that fail to comply: up to $500 for a first violation (and for each additional violation on the same day), and $500 to $1,500 for each subsequent violation. Each day a non-compliant tool is used counts as a separate violation, and failure to provide the required notice is a separate violation as well.
2. Annual Bias Audit Requirements
  • Companies must ensure that the AI or automated tools they use for employment decisions are subject to an annual bias audit. This creates an incentive for businesses to stay compliant, because a summary of the audit results must be publicly posted on the employer’s website.
  • DCWP or other regulatory bodies may conduct random audits or follow up on specific reports of non-compliance to ensure companies have conducted the required bias audits.
3. Transparency and Public Accountability
  • The law requires companies to publish a summary of bias audit results on their website. This transparency allows for public scrutiny and can lead to complaints or enforcement actions if candidates or employees believe the company is not compliant.
  • Third-party advocacy groups or civil rights organizations may also monitor published audit results to identify potential discrimination and report violations to authorities.
4. Candidate Complaints
  • Individual complaints can be a major trigger for enforcement. Job applicants or employees who feel they were unfairly impacted by AI-driven hiring tools can file complaints with the DCWP or other relevant agencies.
  • Complaints can lead to formal investigations into the company’s hiring practices, including whether proper audits were conducted and whether the company properly notified candidates about the use of AI tools.
5. Private Lawsuits
  • Law 144 itself does not create a private right of action, so individuals cannot sue under the law directly. However, candidates who believe an AI hiring tool discriminated against them can still bring claims under the New York City Human Rights Law or federal anti-discrimination statutes.
  • In such suits, plaintiffs can point to non-compliance with Law 144, such as the absence of a bias audit or audit results showing disparate impact, as evidence supporting their discrimination claims.
6. Spot Checks and Data Requests
  • The DCWP or other authorities can request a company to provide documentation of bias audits or evidence of candidate notification at any time. Failure to provide this documentation could lead to penalties.
  • If a company fails to maintain adequate documentation or does not keep records of the bias audit, this would be considered a violation, leading to fines and regulatory action.
7. Third-Party Bias Auditors
  • Law 144 requires that bias audits be performed by an independent auditor: one who was not involved in developing, distributing, or using the AEDT and has no financial interest in it. In practice, companies typically engage third-party auditing firms to examine their tools for disparate impacts on protected groups and produce the required reports.
  • DCWP’s implementing rules define these independence criteria, which helps ensure that audits are accurate and unbiased.
8. Periodic Revisions and Expansion of Law
  • As AI technologies evolve, there may be future amendments to NYC Law 144 to expand its scope or further refine enforcement mechanisms. The DCWP, in conjunction with other governmental bodies, may issue guidance updates or more stringent oversight policies as AI tools become more pervasive.

Summary of Enforcement:

  • Main Enforcer: The Department of Consumer and Worker Protection (DCWP).
  • Mechanisms: Random audits, candidate complaints, public bias audit results, and private lawsuits.
  • Penalties: Up to $500 for a first violation; $500 to $1,500 for each subsequent violation, with each day of non-compliant use counting separately.
  • Accountability: Companies must keep detailed documentation and ensure transparency, with bias audits being central to compliance.