Responsible AI: What the EU’s AI Act Teaches Us about Governance, Ethics and Sustainability
Broadstone Sustainability Consulting
5/22/2025 · 2 min read


A Global Framework for AI Regulation
Approved in 2024, with its provisions applying in stages since February 2025, the EU's AI Act is the world's first comprehensive legal framework for artificial intelligence. It establishes clear guidelines for the safe, ethical and transparent use of AI systems, based on a risk-based approach. As AI advances across sectors ranging from logistics and healthcare to law and social services, there is an urgent need to integrate technological innovation with principles of human rights, data protection and social justice.
What is already in force?
Since February 2025, AI practices deemed to pose an unacceptable risk have been banned, including:
· Social scoring of individuals;
· Emotional monitoring of workers;
· Behavioral manipulation of vulnerable users.
These practices are seen as direct threats to fundamental rights and human dignity.
What will change in August 2025?
As of August 2, 2025, transparency obligations for general-purpose AI models, such as those used in natural language processing, content generation, and decision automation, will come into force. The requirements include (see the illustrative sketch after this list):
· Informing whether content was generated by AI;
· Publishing technical documentation on their operation and risks;
· Ensuring traceability of data and training processes.
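The Act does not prescribe a specific technical format for these disclosures. Purely as an illustration of the kind of record keeping involved, the sketch below attaches provenance metadata to a piece of AI-generated content; every field name and value is an assumption made for this example, not a regulatory requirement.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative only: one way an organization might record provenance for
# AI-generated content. The AI Act does not mandate this format; all field
# names here are assumptions for the example.
@dataclass
class ContentProvenance:
    content_id: str
    ai_generated: bool              # disclosed to end users
    model_name: str                 # which general-purpose model produced the content
    training_data_reference: str    # pointer to documentation on training data and processes
    generated_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Example: labeling a text produced by a hypothetical in-house model.
record = ContentProvenance(
    content_id="blog-post-0142",
    ai_generated=True,
    model_name="internal-llm-v2",
    training_data_reference="docs/model-cards/internal-llm-v2.md",
)
print(record)
```

A record of this kind can serve both purposes at once: the user-facing disclosure that content was AI-generated, and the internal traceability of which model and training documentation lie behind it.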
Opportunities for ESG, Compliance, and Human Rights
For companies operating in different regions, including outside Europe, the AI Act sets a new benchmark for good practice. Even where there is no direct legal obligation, it is likely to influence regulatory frameworks in other countries and to become a reference for:
· Socio-environmental due diligence in technology;
· Ethical compliance in global supply chains;
· Rights-centered corporate governance.
How can organizations prepare?
Adopting principles such as transparency, explainability, non-discrimination and human oversight is the safest path. Mapping existing AI systems, assessing their impacts on vulnerable groups and establishing clear internal policies are key steps.
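As a starting point for that mapping exercise, a simple internal inventory of AI systems can make risks visible and reviewable. The sketch below is a minimal, assumed example of such an inventory; the risk labels and field names are simplifications for illustration, and real classifications must follow the Act's own criteria and appropriate legal advice.

```python
from dataclasses import dataclass

# Illustrative sketch of an internal AI system inventory. The risk labels
# below are simplified examples, not the Act's legal categories.
@dataclass
class AISystemRecord:
    name: str
    purpose: str
    risk_level: str            # e.g. "unacceptable", "high", "limited", "minimal"
    affected_groups: list[str]
    human_oversight: bool

inventory = [
    AISystemRecord("cv-screening", "Ranks job applications", "high",
                   ["job applicants"], human_oversight=True),
    AISystemRecord("chat-assistant", "Answers customer questions", "limited",
                   ["customers"], human_oversight=False),
]

# Flag systems that need priority review: higher-risk or lacking human oversight.
needs_review = [s for s in inventory
                if s.risk_level in ("unacceptable", "high") or not s.human_oversight]
for system in needs_review:
    print(f"Review: {system.name} ({system.risk_level})")
```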
Conclusion
The AI Act is not just about technology. It is about accountability, equity and how we want to shape the future of innovation. Organizations that proactively embrace this vision will be better prepared to face risks and generate sustainable value.
At Broadstone, we continue to closely monitor these milestones and foster dialogue on the ethical use of technology in the service of social development.
To keep up with our content and reflections, follow Broadstone on LinkedIn and visit our website.
