Opened 3 years ago
Last modified 3 months ago
#33620 assigned New feature
Accessibility in pipeline
Reported by: | Sarah Abderemane | Owned by: | Tushar |
---|---|---|---|
Component: | contrib.admin | Version: | dev |
Severity: | Normal | Keywords: | accessibility, ux, ui, admin |
Cc: | Tom Carrick | Triage Stage: | Accepted |
Has patch: | no | Needs documentation: | no |
Needs tests: | no | Patch needs improvement: | no |
Easy pickings: | no | UI/UX: | yes |
Description
Automate part of the process by including tools such as aXe into the CI pipeline, to make sure new changes come with at least the very basic accessibility considerations.
This is part of ticket #31617
Change History (18)
comment:1 by , 3 years ago
Resolution: | → duplicate |
---|---|
Status: | assigned → closed |
comment:2 by , 3 years ago
Ooops. I asked Sarah to open a separate ticket, because the multiple conversations around the large scope accessibility tickets were getting too hard to follow. Sorry 😬
Either way, thanks for picking this up Sarah. Yes, adding the CI tools is a good step to help us move forward.
comment:3 by , 3 years ago
Resolution: | duplicate |
---|---|
Status: | closed → new |
Triage Stage: | Unreviewed → Accepted |
Fine, a separate ticket works for me too :D
comment:4 by , 2 years ago
Just seeing this now – this does feel like something that warrants its own ticket in my opinion. Evaluating the right tool is going to be a big task in its own right, let alone setting it up locally, and in CI.
I can provide a few options I would recommend considering:
- axe-selenium-python seems worth considering, since Django already uses Selenium. I haven’t worked with Selenium in years myself, so I can’t really judge how good a fit this is. At a high level, all this does is inject the Axe script onto the page and run it, with a nice API to configure Axe and get a useful report.
- Puppeteer + Axe is what I have the most experience with myself, generally running within Jest to structure a test suite. This is very similar to the above but with JS tooling rather than Python.
- Playwright + Axe is also popular these days. Unlike Puppeteer, Playwright comes with its own testing framework.
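As a rough sketch of the axe-selenium-python flow described above (the `Axe` class and its `inject`/`run` methods follow that package's documented API; the helper names and the pre-existing Selenium `driver` are assumptions for illustration):

```python
def summarize_violations(violations):
    """Reduce an Axe "violations" list to (rule id, affected node count) pairs."""
    return [(v["id"], len(v["nodes"])) for v in violations]

def audit_page(driver):
    """Inject axe-core into the page currently loaded in `driver` and audit it."""
    # Deferred import: only needed when actually driving a browser.
    from axe_selenium_python import Axe

    axe = Axe(driver)
    axe.inject()         # add the axe-core script to the page
    results = axe.run()  # dict with "violations", "passes", "incomplete", ...
    return summarize_violations(results["violations"])
```

The summary step is separate from the browser-driving step so the report shape can be unit-tested without Selenium.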
One important aspect is the need to test pages protected by login. To test public-facing pages, we could also consider:
- Pa11y, an opinionated "Puppeteer + Axe" CLI tool.
- Evaluatory, same but with Playwright
My demo test suite uses Pa11y + Puppeteer + Lighthouse. Since both Pa11y and Lighthouse are built upon Puppeteer, they can be used via their JS API so we can first use a Puppeteer script to log in, then hand over to those tools.
---
To go through this, I’d recommend the following initial steps:
- Confirm what exactly is needed between a testing tool (write assertions, break the build if fails) and a reporting tool (tests a given page and reports results)
- Check whether there are any restrictions on what can run in CI (I can’t see whether Selenium tests run there)
- Decide whether to use a Python Selenium integration or a JS-based tool (looks like Django already uses Puppeteer?).
comment:5 by , 2 years ago
Thank you Thibaud for your suggestions and recommendations!
- Confirm what exactly is needed between a testing tool (write assertions, break the build if fails) and a reporting tool (tests a given page and reports results)
My idea was to do both: test fields separately, and also test the whole admin to see which improvements the admin templates need to stay accessible. I may be wrong, though; I'm not sure how to correctly test fields/inputs separately.
- Check whether there are any restrictions on what can run in CI (I can’t see whether Selenium tests run there)
Actually, there are JavaScript tests run with QUnit, and those already run in the pipeline; I would just add another workflow dedicated to accessibility. I started to implement Axe with QUnit, which may not be the best way; I can go for another solution as you have suggested.
As for restrictions, that's not a problem. Tom suggested I use his Django admin demo; I can run it in the pipeline and launch a reporting tool on it. I can show the results in the console, and even write a report to a file kept as an artifact for a defined period if we want. I just have to pay attention to the duration of the job.
- Decide whether to use a Python Selenium integration or a JS-based tool (looks like Django already uses Puppeteer?).
I think QUnit is run via Puppeteer with Grunt; that's why I based my initial implementation on that tool, but we can decide to go for something else if it's not a good fit.
Since both Pa11y and Lighthouse are built upon Puppeteer, they can be used via their JS API so we can first use a Puppeteer script to log in, then hand over to those tools. => Yes, I think that with the reporting tool I can go for it, since Puppeteer is already in the project.
I don't have any particular opinion about this.
comment:6 by , 2 years ago
Cc: | added |
---|
comment:7 by , 2 years ago
Owner: | changed from | to
---|---|
Status: | new → assigned |
Hey, I was digging into this and was able to get aXe running with the current testing suite using axe-selenium-python. I am not sure about the scope of checking the accessibility of the admin panel, or which views and admin page states we are looking to test.
There could be 0 violations on the admin landing page, and then we might get violations when we visit a specific group, table, or other section of the admin dashboard.
What I have in mind is: after every Selenium test runs, run the aXe accessibility test as well, probably by using a decorator. Please share your views on how you would want it done.
cc @Carlton Gibson
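A minimal sketch of that decorator idea (the `self.selenium` attribute mirrors Django's Selenium test cases, and the `Axe` calls follow the axe-selenium-python README; the decorator name itself is hypothetical):

```python
import functools

def also_check_a11y(test_method):
    """Run the wrapped Selenium test, then audit the final page state with Axe."""
    @functools.wraps(test_method)
    def wrapper(self, *args, **kwargs):
        result = test_method(self, *args, **kwargs)
        # Audit whatever page the test finished on.
        from axe_selenium_python import Axe

        axe = Axe(self.selenium)
        axe.inject()
        report = axe.run()
        self.assertEqual(
            report["violations"], [], axe.report(report["violations"])
        )
        return result
    return wrapper
```

Applied as `@also_check_a11y` on individual test methods, this keeps the accessibility check opt-in per test rather than forcing it on the whole suite at once.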
comment:8 by , 2 years ago
Hi Tushar,
Thank you for your interest in the subject :)
For people reading this ticket: part of the discussion happened outside the ticket. Thibaud, Tom and I agreed that the admin should be tested whenever there is a change to it. We should have a page (or something similar) to test each component (inputs, search, ...), but also test the full admin to get the best accessibility and avoid regressions in the future.
IMO, from the pipeline we should be able to generate a report which could be downloaded as an artifact on GitHub Actions.
I didn't go further on the subject; I was trying to launch the admin from the project itself and avoid using another repo or extra dependencies for that. It's still in progress (and I haven't had time to check more yet).
What I have in mind is: after every Selenium test runs, run the aXe accessibility test as well, probably by using a decorator. Please share your views on how you would want it done.
I'm not very familiar with Selenium, but I think you can do assertions directly in the tests. I suspect some tests may not be very useful for accessibility testing even though they serve the purpose of the initial test, and vice versa. To me, it really depends on the Selenium test.
Do you have the code somewhere I can check? I'll be happy to help you with this.
comment:9 by , 2 years ago
I have made a draft PR to get some comments and better clarity on what could be done: https://github.com/django/django/pull/16372. Linking it here manually, as the issue tracker is not picking it up (I probably did something wrong).
comment:10 by , 2 years ago
Thanks Tushar, this looks interesting. Let's see what folks say.
I adjusted the PR title to start Fixed #33620 -- ...
which allows Trac to pickup that it links to this issue.
comment:11 by , 2 years ago
pa11y seems much better than using axe directly. pa11y can be called directly from the CLI on URLs, we can use axe-core as a pa11y runner, it outputs a decently formatted result, and it has options to customize results even further. Apart from that, I also see pa11y-dashboard for visualizing results if needed.
After every UI change, writing a helper function that calls pa11y on the current Selenium instance will not take much effort.
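As a sketch of such a helper, assuming the pa11y CLI is installed on the runner (`--runner axe` and `--reporter json` are real pa11y CLI options; the helper names and the split into a parse step are my own illustration):

```python
import json
import subprocess

def run_pa11y(url):
    """Call the pa11y CLI with the axe runner and parse its JSON report."""
    # pa11y exits non-zero when it finds errors, so don't use check=True.
    proc = subprocess.run(
        ["pa11y", "--runner", "axe", "--reporter", "json", url],
        capture_output=True,
        text=True,
    )
    return json.loads(proc.stdout)

def error_codes(issues):
    """Keep only 'error'-type issues from a pa11y report, returning their codes."""
    return sorted({issue["code"] for issue in issues if issue["type"] == "error"})
```

Separating the parsing from the subprocess call means the "which issues fail the build" policy can be tested without a browser or a live server.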
I highly favour reusing the existing admin tests for accessibility, as compared to writing and maintaining a separate set of accessibility tests.
One issue I see with using pa11y as described above is redundancy in the results: every time we call pa11y, it scans the whole page instead of only the new UI change.
pa11y has a rootElement option to test only a subset of the DOM, but this requires an id to identify the elements, and I doubt one is present on every UI piece of the admin.
Another option, rather than writing a whole accessibility testing suite, is to iterate over all the .html files in the project and run pa11y over them to generate results. This might give us a lot of false positives, like missing head or title tags, because the .html files are template fragments.
Let me know your thoughts on the above points before I go ahead with something.
comment:12 by , 22 months ago
Hi Tushar,
Sorry for the late reply.
Pa11y seems a good option to me, it's well maintained and Thibaud already suggested it previously so I think you can go ahead.
Another option, rather than writing a whole accessibility testing suite, is to iterate over all the .html files in the project and run pa11y over them to generate results. This might give us a lot of false positives, like missing head or title tags, because the .html files are template fragments.
I think it's a very good idea to reuse the existing test suite rather than maintaining a whole new one, but pa11y should be run on rendered pages, and we may need specific test cases to correctly test accessibility for the whole page. A small change can affect the rest of the page, and we should be aware of that, in my opinion.
I suggest using the existing tests where possible; when a dedicated, separate test is needed, we will add it to an accessibility testing suite.
comment:13 by , 16 months ago
Hi folks, I have made a draft PR to communicate my approach, would love to have some feedback there. PR link - https://github.com/django/django/pull/17074
comment:14 by , 15 months ago
Following up from #16372#issuecomment-1357132971, particularly with another new approach proposed by @Tushar in #17074, here are the decisions I would recommend making before we proceed with coding, ordered by how much I think they influence one another:
- Test outcome: whether the accessibility checks will fail the build if there are issues, or only provide a warning / reporting
- Test Output: either pass/fail of specific checks, or JSON or HTML report(s), or both, or something in-between
- Test suite: reuse of the existing Selenium tests (like in #16372, #17074), or something separate altogether (like in #16115).
- Accessibility checker: Axe, Pa11y (HTML Code Sniffer + Axe), IBM Equal Access, Alfa, etc
- Configuration of the checks: either "run all supported checks", or opt-in / allow-list "run specific subset that pass", or opt-out / disallow-list "run all minus the ones that fail".
---
The ideal scenario to me would be for previously-undetected accessibility issues to fail the build, even if that means only running a subset of possible accessibility checks in CI. With this in mind, I think the best setup is:
- Test outcome: fail the build
- Test output: pass/fail of specific checks in the command line output
- Test suite: reuse of the existing Selenium tests (for consistency, and because we might as well)
- Checker: Axe (most configurable, usable in lots of different scenarios)
- Configuration: opt-out – run all checks where possible. Exclude problematic elements or checks as needed, scenario-by-scenario
With this in mind, of the three attempts so far my preference is for #16372 with axe-selenium-python. If we have concerns about its maintenance (#16372#issuecomment-1343332628), I’d say we should investigate the specific concerns further and either re-implement the functionality (200 LOC) in django/django, or fork, or approach someone at Mozilla to discuss maintenance. cc @smithdc1 who raised this initially – and we also discussed this together in Edinburgh.
I’m also interested in having more comprehensive automated reports of Django’s accessibility, though thibaudcolas/django_admin_tests has worked well enough so far that it didn’t seem as pressing to me. If we wanted full reports as an "official" implementation, I’d recommend proceeding with #16115. Ideally towards the goal of having a full-blown official demo site we could use to troubleshoot those issues in a reliable way, rather than contributors having to come up with their own test suite (cf. knyghty/django-admin-demo).
comment:15 by , 15 months ago
To bring other conversations in here as well: @David also shared this blog post, https://blog.pamelafox.org/2023/08/accessibility-snapshot-testing-for.html, which seems like a good approach for setting up a baseline and testing against it. Also, a single Selenium test can contain multiple moments that should be tested for accessibility, for example when a new UI element becomes visible or is deleted.
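The snapshot idea could be as simple as diffing each run against a stored baseline of known issues (a hypothetical sketch, keying each violation on its rule id plus the CSS target of each affected node, which is a reasonably stable identity across runs):

```python
def fingerprints(violations):
    """Identify each Axe violation by (rule id, CSS target) per affected node."""
    return {
        (v["id"], tuple(node["target"]))
        for v in violations
        for node in v["nodes"]
    }

def new_violations(current, baseline):
    """Return issues present in this run but absent from the stored snapshot."""
    return fingerprints(current) - fingerprints(baseline)
```

Only `new_violations` would fail the build; the baseline file itself could live in the repo and be regenerated deliberately when known issues are fixed or accepted.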
comment:16 by , 14 months ago
Sarah, Tom and I met a couple weeks ago to discuss this and are generally in agreement that #16372 with axe-selenium-python is the best approach for the time being.
As far as snapshot tests, that sounds great to me if we’re ok with using snapshots in Django :)
comment:17 by , 9 months ago
We have an update! After contacting the DSF board back in November 2023, we now have our answer that moving axe-selenium-python to the Django org in GitHub wouldn’t be appropriate. As it’s not a Django-related package, we don’t want to risk its maintenance falling to the Django ops team.
So -- the accessibility team will proceed another way. Currently we’re planning to still try to take over maintenance of the package, but in a separate GitHub organization so it’s clearer whose role it is to maintain it.
@Tushar would you be up for helping with maintenance as well?
edit: I see you already said "yes" in GitHub :)
comment:18 by , 3 months ago
I've been looking into screen readers a bit; guidepup might be worth investigating, if we can integrate it into some Selenium tests in our pipeline.
IMO a separate ticket is not needed; please use
Refs #31617 --
in your patch.