As a UX engineer, I often found the process of logging accessibility issues to be time-consuming and inefficient. I'd long wanted to build a tool that would streamline this workflow: one that simplifies issue reporting and automatically generates clear, useful reports.
Accessibility issues are typically logged manually, often using spreadsheets. While tools exist to assist, many are unintuitive or cumbersome, especially when dealing with large volumes of issues.
Additionally, accessibility reports can be overwhelming for non-technical stakeholders. They often require extra explanation to be actionable, which creates communication friction and slows down remediation.
This Reddit discussion on logging accessibility issues highlights many of these shared frustrations within the accessibility community.
"I’m just looking to see what was the best way to present them [a11y issues]. I have been using Excel so far but I’m searching for something a little bit more polished."
alekeuwu, Reddit user
In my current role, I had been using a makeshift system powered by AI to log issues and generate reports. Initially, this was done manually using multiple ChatGPT sessions. As I refined this workflow, it became clear that others could benefit from a more unified experience, so I began designing an integrated UI around it.
Drawing from my own usage patterns, I adopted an agent-based architecture inspired by Anthropic's Parallelization workflow.
My system is built around four AI agents, each with a specific role in the pipeline.
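To illustrate the parallelization pattern, here is a minimal sketch of how four role-specific agents could run concurrently on a single issue. The role names (`classifier`, `describer`, `remediator`, `summarizer`) are hypothetical placeholders, not the app's actual agents, and the LLM call is simulated:

```python
import asyncio

# Hypothetical agent roles -- illustrative names, not the app's actual agents.
async def run_agent(role: str, issue: dict) -> tuple[str, str]:
    # In the real system, each agent would call an LLM with a role-specific
    # prompt; here the work is simulated.
    await asyncio.sleep(0)  # placeholder for the LLM call
    return role, f"{role} output for issue #{issue['id']}"

async def enrich_issue(issue: dict) -> dict:
    roles = ["classifier", "describer", "remediator", "summarizer"]
    # Parallelization: all four agents run concurrently on the same issue,
    # and their outputs are merged into the enriched record afterwards.
    results = await asyncio.gather(*(run_agent(r, issue) for r in roles))
    return {**issue, **dict(results)}

if __name__ == "__main__":
    enriched = asyncio.run(enrich_issue({"id": 1, "title": "Missing alt text"}))
    print(sorted(enriched))
```

The key property of the pattern is that each agent works from the same input independently, so total latency is roughly that of the slowest agent rather than the sum of all four.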
The application's data model enables users to categorize and manage accessibility issues across multiple levels of context.
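As a rough sketch of what "multiple levels of context" could look like, here is a hypothetical project → page → issue hierarchy. The field names and levels are assumptions for illustration; the actual schema may differ:

```python
from dataclasses import dataclass, field

# Hypothetical data model -- the actual schema may differ.
@dataclass
class Issue:
    title: str
    wcag_criterion: str  # e.g. "1.1.1 Non-text Content"
    severity: str        # e.g. "critical", "moderate", "minor"

@dataclass
class Page:
    url: str
    issues: list[Issue] = field(default_factory=list)

@dataclass
class Project:
    name: str
    pages: list[Page] = field(default_factory=list)
```

Nesting issues under pages and pages under projects lets a report be filtered or summarized at whichever level of context a stakeholder needs.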
The user interface for this application was designed to be as simple and intuitive as possible. Think of it as a highly stripped-down version of JIRA that can be used effortlessly by both technical and non-technical users to log accessibility issues.
During the high-fidelity design phase, accessibility of the UI itself was a top priority. Large, bold fonts, clear focus and hover states, and high-contrast color palettes were employed to ensure usability for individuals with visual impairments.
Keyboard navigation was also a key focus. Logging issues can be tedious, so every possible comfort and efficiency improvement was considered to make the experience more fluid and accessible.
The technology stack includes Next.js and Tailwind CSS for the frontend, with FastAPI and PostgreSQL powering the backend. The frontend communicates with the backend via GraphQL.
The initial issue enrichment, handled by an LLM, is implemented within the Next.js application. This task is relatively lightweight and doesn't require routing through the entire backend pipeline. Additionally, issue images are stored on Cloudinary, with image URLs saved to the database, all managed directly from the frontend.
The remaining AI agents operate on the FastAPI server, running asynchronously in the background. These processes are more resource-intensive and prepare enriched content ahead of time. By the time users access reports or summaries, the data has already been processed and is ready to review.
The core functionality of the application is live. Users can sign in and log issues using the AI-powered enrichment system, and a public demo with credentials will be available soon. The next focus is user testing, to ensure the issue-logging experience is fast, efficient, and accurate, since it is the foundation of this tool.