Ijeoma Mbamalu, Chief Technology Officer, ACLU

A growing number of organizations are ceding their decision-making authority to artificial intelligence, or AI, systems. From who gets a job, to who gets a grant, to who gets investigated by child welfare agencies, to who receives social services and who is paroled, companies and governmental entities are delegating decisions that often require human oversight and context to a machine or algorithm. This movement to replace human judgment with AI carries life-altering implications that must be addressed.

If AI’s purpose is to provide opportunity and create efficiencies, how much thought and care is given to the communities most adversely impacted by this technology? Whose lives may be upended by its use or misuse? Given the widespread fixation on purported opportunities and the claimed promise of automation that AI can offer, it can be challenging to stay clear-eyed about the impacts of this emerging technology.

Responsible AI design, deployment, and adoption is more critical now than ever. It is essential not only to include the voices of communities often left behind in the design of these technologies, but also to ensure that any AI deployment incorporates ongoing monitoring to evaluate whether the AI solution remains the best tool or approach to the problem.

This month, at the ACLU’s first Civil Rights in the Digital Age (CRiDA) AI Summit, we’re convening civil society, nonprofit, academia, and industry leaders to carefully consider how to center civil rights and liberties in our digital age, especially in the design, deployment, and evaluation of AI systems.

Centering Civil Rights in AI Design and Deployment

AI is often marketed as more objective and less discriminatory than the status quo. However, we've consistently seen how AI systems used in areas like hiring, policing, and social services risk exacerbating or amplifying discrimination based on race, sex, disability, and other protected characteristics. For years, the ACLU has called on political leaders to take concrete steps to bring civil rights and equity to the forefront of AI policymaking. We’ve fought in courts and in communities to address AI’s systemic harms and, within our organization, we work to meet the same standards.

We understand that AI can be an asset to organizations' work if implemented thoughtfully and responsibly. For example, if designed and governed carefully with appropriate guardrails, AI systems could be used to support critical accessibility technologies, like screen readers or voice assistants. But the same AI systems can also have the opposite impact—harming rather than helping marginalized communities—when they are not designed and deployed carefully.

Putting Responsible AI Principles into Action

One of the ways we ensure meaningful adoption of AI technologies in our work is by grounding our approach in carefully selecting the right tool (AI or not) for the job at hand. More haphazard approaches, such as indiscriminately applying the latest generative AI model to a given task, risk relying on techniques and approaches that are not appropriate and that can lead to serious harms, including those related to privacy, security, fairness, and more. We think it is critical to balance the benefits AI systems may provide against their possible risks, and we know that when it comes to AI adoption, moving carefully and intentionally is often much more fruitful than moving fast and breaking things.

At the ACLU, we have developed a risk-based approach that guides how we continuously evaluate and adopt AI tools in our work. We also evaluate AI systems we are considering procuring against our privacy, security, fairness, and transparency values, and we closely follow emerging research on the impacts of AI systems, especially generative AI systems.

Acting To Protect Civil Rights in the Age of AI

While we live in a digital age the Founding Fathers couldn’t have possibly imagined, we must ensure AI aligns with the core liberties and protections the Constitution provides.

As part of this journey, we are also staging the ACLU’s first-ever Civil Rights in the Digital Age (CRiDA) AI Summit on July 10 in New York City. CRiDA will educate the public on how a century-old civil rights organization is using a two-pronged values-based social contract on its journey to AI adoption. One is built on trust between ACLU’s tech and program teams to adopt AI responsibly. The other is built on trust with the public—showing how our technical, legal, and policy expertise helps ensure AI protects rights and serves justice for all.

If you are interested in watching some of the CRiDA panels, please click here.

Date

Wednesday, July 2, 2025 - 1:45pm

Featured image

The camera focuses on a sign held by a demonstrator that reads, "REGULATE AI KEEP THE FUTURE HUMAN."



Teaser subhead

As AI increasingly makes decisions in hiring, policing, and social services, the ACLU’s Civil Rights in the Digital Age Summit focuses on promoting responsible AI design to ensure technology protects rights and serves justice for all.


Disability Pride is back again! This year, we will celebrate the 35th Anniversary of the Americans with Disabilities Act.

 

Accessibility Information: 

  • The pavilion is a hard, flat, paved surface. Pavement extends beyond the pavilion before the surface of the park turns to level grass. There is a packed, gravel trail to the right of the pavilion. 
  • ASL interpreters will be available throughout the event.
  • Accessible parking will be marked and there will be a drop-off area next to the pavilion. Parking at the venue is limited, so carpooling is encouraged.
  • DRM provides masks at all agency-sponsored events and will make masks available to all attendees.
  • The event is scent-free.
  • Ear plugs will be available.
  • An accessible portable bathroom is available next to the parking area.
  • A map of the event will be available closer to the event.
  • We will be live streaming the speaker portion of the event! 
  • Other accessibility questions? Contact [email protected]

Directions

We recommend using “Mill Park Parking, Augusta, ME” as your GPS location. You can also use the following coordinates: 44°19’16.0″N 69°46’22.2″W

Event Date

Friday, July 18, 2025 - 11:00am to 2:00pm


Venue

Mill Park, Augusta

Address

Mill Park
Augusta, ME 04330
United States


