The increasing adoption of artificial intelligence (AI) in decision-making processes has raised significant concerns regarding algorithmic bias and legal accountability. This study examines the regulatory challenges and enforcement gaps in addressing AI bias, with a particular focus on Indonesia’s legal landscape. Through a comparative analysis of AI governance frameworks in the European Union, the United States, China, and Indonesia, this research identifies key deficiencies in Indonesia’s regulatory approach. Unlike the EU’s AI Act, which incorporates risk-based classification and strict compliance measures, Indonesia lacks a dedicated AI legal framework, resulting in limited enforcement mechanisms and unclear liability provisions. The findings highlight that transparency mandates alone are insufficient to mitigate algorithmic discrimination, as weak enforcement structures hinder effective regulatory oversight. Furthermore, the study challenges the notion that global AI regulatory harmonization is universally applicable, emphasizing the need for a context-sensitive hybrid model tailored to Indonesia’s socio-legal environment. The research suggests that Indonesia adopt a comprehensive AI legal framework, strengthen its regulatory institutions, and promote interdisciplinary collaboration between legal experts and AI developers. Future research should focus on empirical case studies, the development of context-specific AI accountability models, and the role of public engagement in AI bias mitigation. These efforts will be essential in shaping AI governance strategies that ensure fairness, transparency, and accountability throughout Indonesia’s digital transformation.