
Ethically Engineered Tools for Intelligent, Human-Centered Care
The work we do in human services deserves tools that honor judgment, protect boundaries, and strengthen the clarity professionals already bring. Aluma builds AI-enhanced systems rooted in ethics, reflection, and deep respect for the people behind the work.
Our Story
Aluma began with a simple belief: the future of care must be built with intention. Not speed. Not hype. Not shortcuts. Intention.
After nearly twenty years working across mental health, healthcare, hospice, and the legal system, I saw firsthand how documentation, decision-making, and systemic pressures shape real lives. I also saw how AI, if left unbounded, could intensify those pressures rather than relieve them.
I created Aluma to answer a question that had been growing louder across the field:
How do we integrate AI into human-centered work without losing the judgment, ethics, and professional grounding that define our practice?
Aluma is my response.
A space where technology is held to a higher standard.
A place where ethical clarity leads and innovation supports, never overrides, the human beings doing the work.
Meet Aluma's Founder

Ashleigh Gardner-Cormier, LMSW
Ashleigh Gardner-Cormier is a licensed social worker, author, and builder of frameworks that protect ethical judgment in AI-assisted environments. With nearly two decades of experience across mental health, healthcare, hospice, and the legal field, she has developed a practice informed by real-world systems, reflective discipline, and a deep respect for human complexity.
Ashleigh is the creator of AI-Integrated Reflective Practice (AIRP) and the ARP Framework, two emerging approaches designed to safeguard boundaries and preserve professional discernment in an era of accelerating automation. Her work sits at the intersection of ethics, technology, and care—helping practitioners think clearly in environments that often move too quickly.
Founder & CEO, Aluma
Our Approach: Ethical by Design
Aluma’s approach is grounded in a simple but essential truth:
AI does not get to decide what matters—people do.
We build tools that slow the work down just enough for clarity to re-enter. Tools that support judgment instead of competing with it. Tools that respect the boundaries, emotions, cultural considerations, and ethical responsibilities that shape human-centered professions.
Our approach is structured, intentional, and deeply reflective.
We don’t chase the newest trend in AI; we examine the conditions under which AI should—and should not—be used. We design for transparency, alignment, and professional safety.
At Aluma, ethical clarity isn’t an add-on.
It’s the architecture.
The "Big 4" of the Aluma Ecosystem
Ethical Foundation
Everything we build is anchored in professional ethics.
Aluma centers dignity, privacy, non-maleficence, transparency, and the foundational belief that human judgment must remain primary. These principles are not a footnote—they drive every system decision we make.
The ARP Framework
The ARP Framework (Awareness, Reflection, and Pause) is a boundary-protection method designed to keep professionals grounded in their own judgment while interacting with AI. It introduces intentional stopping points that prevent drift, support clarity, and ensure the human remains the final interpreter of meaning.
The Aluma Philosophy
Aluma’s philosophy is simple:
Technology should deepen understanding, not replace it.
We believe AI should illuminate—not blur—the moral, emotional, and human layers that shape real practice. Our philosophy rejects automation for automation’s sake. Instead, we pursue responsible innovation guided by context, culture, and care.
Our Values
- Integrity — We build what we can stand behind.
- Clarity — We slow the work down so meaning can re-emerge.
- Human Judgment — Professionals remain the decision-makers.
- Boundary Awareness — Every tool must protect the human behind it.
- Reflective Practice — Quality thinking is a safeguard, not a luxury.
- Ethical Responsibility — Technology must honor the lives it touches.
Where Our Work Lives

Newsletter
Messy Ethics / Intelligent Care is where I explore the emerging questions shaping AI, documentation, ethics, and human-centered practice. Each issue slows the conversation down, challenges assumptions, and gives professionals a grounded place to think in a rapidly shifting landscape.


