DCMI Accessibility Community

Accessibility Metadata: a rich mix of standards

Liddy Nevile

ACRI, Latrobe University, Australia
[email protected]

Abstract

People with disabilities, like others, access the web with the usual range of devices, including wireless palm devices, phones, voice input, and so on. All people change their needs and circumstances, including their motivation to tolerate difficulties with access. In this paper, we consider the problems of providing metadata to help with matching web content to people's devices and relating devices to people's immediate needs. An aim of this work is to discover whether a single accessibility metadata element could be used to support discovery and on-the-fly repair of resources, especially assisting those with disabilities. It must handle the dual role of discovery and execution, while supporting profiling of users' immediate access needs. The necessary protocols are emerging, but their use must be integrated appropriately if they are to be effective for all, including those with disabilities. In this paper, we bring together the variety of relevant standards that may need to be investigated in order to develop such a metadata element.

Introduction

Accessibility metadata is, put simply, metadata that describes the accessibility of resources and services, usually those on, or available through, the web.

Web content accessibility became a topic in the mid-nineties. It was realised that much of the content of the new 'web' was not accessible to people who did not use standard web GUI browsers, the same technology that was making the web attractive and available to naïve computer users. Many people complained, at that time, that they could not download the 'much too big' files, or that the colours were not consistent, but many other people suddenly found that the same technology that had enabled them to rejoin society was alienating them again. In particular, low-vision and blind people, people with motor coordination problems, and others who could not use a mouse on a computer screen for one reason or another, were suddenly not able to use their computers as their life-style-support machines. Additionally, people who depended, for one reason or another, on screen readers were often being read content that was unrecognisably jumbled, according to the GUI layout specifications of the author.

The World Wide Web Consortium (W3C, [1]) responded by establishing the Web Accessibility Initiative (WAI) to work on what was making web content inaccessible. Since that time, W3C WAI has developed extensive guidelines advising how to make content accessible to all devices, so that those using non-standard combinations of hardware and software can access web content, or content available through the web, if their devices are standards (or recommendations) compliant. This work is open to all and is international, partly due to its special funding structures. The immediate aim was to avoid content that would be totally inaccessible to some, before working on making all content more generally accessible.

The W3C WAI works on how to make offending content accessible, often by repairing it. The working groups concentrate not only on what accessibility should be but also on the authoring and user access tools. The WAI Authoring Tools Accessibility Working Group, for instance, has emphasised how to make authoring tools productive of accessible content, even when the author is not aware of what is necessary. Such emphases were chosen to get relief as quickly as possible for those who were totally dependent on the improvements, realising that authoring was becoming more complex and more people would soon be using authoring tools.

Repairing an inaccessible element of a page, identifying inaccessible content, and techniques for making accessible content are all under control, if not yet completely documented. What is required now is work on metadata to perform a number of roles. Of course, discovery is a primary goal. Finding a resource or service is an on-going problem on the web, and all the usual difficulties operate when people with special needs use the web. Everyone has a need for information that suits their purposes at the time they seek it. This may occur in special circumstances, such as when people working underground in a mine, perhaps in protective clothing, need to access information (possibly how to deal with a leak) without using their hands, and so without keyboards. These users would possibly need to be able to use their voice-controlling software to use the command-key navigation of a web page. They will need to know if the page is properly constructed so that such navigation is possible. If it is not well constructed, they may need to know:

  • how it is constructed so they can determine if they will be able to get to the information in some compromised way, or
  • if they can expect some transformation application to make it accessible or finally,
  • if there is no hope of access to this content for them.

The challenge is to find a suitable way of expressing and disseminating accessibility metadata and to make it available as soon as possible.

An accessible, media rich resource or service

Imagine a media-rich resource: a web page with some video, some images, a sound clip, some text, an invitation to submit comments, contact information for the author and maybe some text in a foreign language. Such a resource might be a page reporting on a family holiday for those who were left at home.

The resource might qualify as AAA compliant with the W3C Web Content Accessibility Guidelines (see http://w3.org/WAI/WCAG/ ). In order to achieve this rating, the individual objects within the page, and the page itself would comply with what might seem like a vast number of W3C recommendations:

  • the page content would be marked up in XHTML with an appropriate DTD reference,
  • The page layout would be specified in CSS2, the Cascading Style Sheets language, referenced by an appropriate LINK element in the head of the document;
  • The page may expect several transformations for different types of devices and be associated with suitable XSLT transformations.
  • The main language of the resource would be indicated by the LANG attribute on the root element of the document;
  • The text would be written in clear language, or where literary expressions were used, alternative forms of expression would be available, possibly indicated by a RUBY tag;
  • The foreign language section would be marked with its own LANG attribute where it appeared in the resource, and possibly the foreign text would be marked up with Ruby annotations for the benefit of those who do not understand it;
  • The links would all be identified by text or images that differentiate them one from another, and give some indication of their destination, by the provisions of descriptive tags;
  • The images would all have ALT attributes, alternate text, giving a concise description of their role in the resource and what they, as images, depict;
  • and each image that contributes to the 'story' of the resource would have a LONGDESC, or long description, that would provide an alternative text rendering of the image,
  • and each image would be in SVG format, so that it could be rendered in whatever size suited the user, including providing for applications that allow for zooming in on sections of the image,
  • and have appropriate discovery and other metadata, probably Dublin Core-based (DCMES), expressed in RDF, Resource Description Framework,
  • but also with sections of the image mapped out (as in a graphics application) so individual elements of the image could be identified and manipulated graphically, as well as tagged with precise descriptive metadata, and maybe also referring the user to WordNet to obtain a definition of the object; and
  • each video clip would be well-structured as a composite object containing the QuickTime video, with Magpie captions for both the dialogue and descriptions of the imagery, and possibly sign-language alternatives, and maybe a set of SVG still images, but all synchronised by SMIL, Synchronised Multimedia Integration Language,
  • and so would the sound files, offering text alternatives,
  • and each item would be under user control, with facilities for selecting the timing and order of access to the resource elements,
  • and so on...
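Several of the requirements above can be illustrated in XHTML markup. The following is a sketch only, assuming the holiday-page scenario: the filenames, title and descriptions are invented for illustration.

```xml
<!-- Illustrative sketch: filenames, title and text are invented -->
<html xmlns="http://www.w3.org/1999/xhtml" lang="en" xml:lang="en">
  <head>
    <title>Our Family Holiday in Cooktown</title>
    <!-- layout kept in a style sheet, separate from content -->
    <link rel="stylesheet" type="text/css" href="holiday.css" />
  </head>
  <body>
    <h1 lang="it" xml:lang="it">Belle Vacanze!</h1>
    <!-- alt gives a concise description; longdesc points to a
         full text rendering for those who cannot view the image -->
    <img src="harbour.svg"
         alt="Cooktown Harbour at sunset"
         longdesc="harbour-description.html" />
  </body>
</html>
```

The lang attributes mark both the document's main language and the switch to Italian, so that a screen reader can change pronunciation rules at the right point.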

Such a resource would be useful to almost anyone with any device; it would be, as suggested, Level AAA accessible according to W3C recommendations. Such compliance would depend, however, upon compliance with all the individual standards as well as with the accessibility standards and those involved in the representation of the information about all this compliance.

Metadata for the Accessible Resource or Service

The problem is then, how should the accessibility qualities of the resource, qualities that do not relate to the subject matter of the content, be represented for the benefit of users and devices?

The problem does not appear in isolation from the related question of how the user's or device's needs should be represented so that the user and the resource can be matched appropriately.

The Human, Political and Social Dimensions

Typically, this combination offers more than a complex technical problem.

Technical solutions are often not sensitive to people's emotional needs, and in some circumstances this is not acceptable. Given that device independence can be about the provision of access for people with disabilities, people's feelings are often relevant in this context. People with disabilities do not like to be identified as such, for many good reasons. People with no disabilities would not like to be labelled as if they have them, when they are just using their senses in a concurrent activity. People's needs vary not only according to their available assistive technologies, but also according to their motivation and interest in the particular resource to which they seek access.

Humans, using different devices, change their needs according to their circumstances. In particular, people using assistive technologies (AT) may have undergone a significant orientation process, to learn to use their AT. Once familiar with the way their device handles materials, it may not be the most helpful thing for them to find assistive encoding as elements of the resource, especially if these attempt to bypass the user's own AT. On the other hand, people may not always have access to their ATs, and so may appreciate the ability to transform the resource for whatever device they are using at the time. This means that people vacillate between wanting direct and compatible access; direct access being available when the resource offers the necessary transformation and compatible access being operative when the transformations are effected by the assistive technology.

Needs are more politically appropriately identified in terms of available technologies than of people, and so the focus is moved to identification of devices' needs. This partly reduces the problem to a technical one, but it introduces problems associated with privacy and security. These, in turn, are not simple technical problems, as they have a strong and complex social dimension of their own.

Describing the Accessibility of the Resource

Choosing what to say about a resource is not easy.

W3C have developed a comprehensive set of solutions to the accessibility problems that have been identified in the last few years. Together, these solutions make resources very much more accessible than they might have been.

Individual needs are not easily ascertained and related to these solutions. Such needs vary according to the motivation and circumstances of the person, the type of resource, the available technology, the skills of the user, and the level of granularity required. Resources are rarely required in their entirety, even though at different times all of their content will need to be accessed.

Defining immediate abilities and needs probably means a real-time atomic analysis of the user's needs and this will have to be related to a real-time atomic analysis of the resource's capabilities. This will require a comprehensive report of the resource's capabilities that will need to have been pre-defined. The information involved will be extensive, given the many possible combinations of user needs profiles.

Currently, there is not a well-specified format for such descriptions, but one can imagine formats being developed in the near future from the available accessibility specifications.

Conveying the Resource Profile

It is an accessibility requirement that the content of the resource be well described for discovery purposes. This description is possibly available in Dublin Core™ format, and there may be some additional metadata for public search engines and the like, but it is the accessibility description that is of concern here.

Each of the elements of the resource described above will need to be identified, and their accessibility described, in order to develop an accessibility description of the entire resource. We have assumed that this is a Level AAA compliant resource. It may be considered adequate to describe it as such in an HTML META tag, but why? Simply adding a META tag with this information is not likely to be of much benefit to anyone, because this information is unlikely to be used by devices to manage transformations.

There is not a suitable Dublin Core™ element (yet), so there will be no place for the information beyond notification of the format of the resource and its components. For instance, notification that the resource contains SVG, and QuickTime, does not convey how well these formats were used, or that their combined use has resulted in high levels of accessibility.

For local purposes, that is, within a close environment, a local META tag may be defined for the purpose of conveying this information, but that will not work beyond the locality unless there is a publicly available namespace that can be used to interpret the element. Such a description will typically need to be in XML and associated with a namespace, but it may not be sought, and so may still be of little use beyond the immediate locality.
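Such a locally defined element might be sketched as follows. Both the META name and the namespace URI here are invented for illustration; neither would be understood outside an environment that documented them.

```xml
<!-- Hypothetical local convention: the META name 'accessibility'
     is only meaningful where it is locally documented -->
<meta name="accessibility" content="WCAG 1.0 AAA" />

<!-- A namespaced XML equivalent; the namespace URI and element
     name are likewise invented for illustration -->
<acc:conformance xmlns:acc="http://example.org/accessibility#"
                 acc:guidelines="WCAG 1.0"
                 acc:level="AAA" />
```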

For interoperable purposes, an RDF description of the accessibility of the resource might be specified. This, again accompanied by a declared namespace, may be useful, but it is unlikely that it will be found and used.

An EARL description, on the other hand, may be sought and if found, be very useful. An EARL (Evaluation and Report Language) description, in constrained RDF, has all the benefits of the RDF description, with the additional merit of a format that is designed for evaluation and repair tools.
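An EARL assertion about the holiday page might be sketched as follows. The namespace and property names are based on early EARL drafts, and both they and the URIs should be read as illustrative rather than definitive.

```xml
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:earl="http://www.w3.org/2001/03/earl/0.95#">
  <!-- Asserts that a (hypothetical) page passes WCAG 1.0, Level AAA -->
  <earl:Assertion>
    <earl:testSubject rdf:resource="http://example.org/holiday.html" />
    <earl:testCase rdf:resource="http://www.w3.org/TR/WCAG10" />
    <earl:result>
      <earl:TestResult>
        <earl:validity>Pass</earl:validity>
      </earl:TestResult>
    </earl:result>
  </earl:Assertion>
</rdf:RDF>
```

Because this is constrained RDF, an evaluation and repair tool can extract the subject, the test and the result without having to understand a local convention.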

Moving beyond Discovery

As shown above, the complexity of the accessibility description makes machine interpretation of it necessary, but machine-readability may also facilitate on-the-fly modification or repair of the resource. In many cases, the tools used to evaluate accessibility can increase the accessibility of the resource by automated activity, such as by identifying the language of the resource, by linearising a table, etc.

On-the-fly repair and transformation of resources is, of course, preferable to the provision of alternative materials, although that is an option often employed when the original resource is not considered sufficiently accessible.

Access to Alternative Accessible Resources

Alternative materials come in two modes: equivalent and plain alternative.

Equivalent materials are those which present the same content in an alternative mode, such as closed captions to accompany a film. In such a case, the alternative mode is text, replacing sound. Local language text captions to accompany a foreign language film might be considered equivalent alternatives to the foreign speech: they are never the same as the original but they serve the purpose and needs of the user.

In some cases, the alternative material will not be of much use to a user, as when a concept is being explained and the user is blind, so that descriptions of a diagram may not be as useful as alternative explanations based on different, more suitable, metaphors.

Describing the Abilities of the User

As noted above, it is more appropriate to describe the user's devices than the user's abilities. It is not sufficient, however, to just do this. The user needs to be able to make decisions at the time about what resources to use, and in what modality.

For descriptions of devices, W3C developed the Composite Capability/Preference Profiles (CC/PP) ( http://www.w3.org/Mobile/CCPP/ ) recommendation. This profile contains information about the device in a format that can be conveyed to user and server agents. Use of such a profile is demonstrated by the University of Toronto's Barrier-Free project. Users carry smart cards that contain device profiles that can be established on computers previously equipped with a range of assistive technologies. (CC/PP has not been used extensively yet, but appears to be coming back into focus in the context of web services.)

While CC/PP may provide the protocol for conveying the device requirements, live negotiation of the requirements, fitting the resource being accessed, may also be required. Applications for such activities have not yet been used in the accessibility context, but they are anticipated and will probably be developed in XML.
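A CC/PP profile fragment might be sketched like this. The structure of components carrying attribute-value pairs follows CC/PP, but the particular component and attribute vocabulary here is invented for illustration.

```xml
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:ccpp="http://www.w3.org/2002/11/08-ccpp-schema#"
         xmlns:ex="http://example.org/deviceprofile#">
  <rdf:Description rdf:about="http://example.org/profile#MyDevice">
    <ccpp:component>
      <rdf:Description rdf:about="http://example.org/profile#Display">
        <!-- Attributes a server could use to select or transform
             content for this user's current device -->
        <ex:screenReader>true</ex:screenReader>
        <ex:imagesSupported>false</ex:imagesSupported>
      </rdf:Description>
    </ccpp:component>
  </rdf:Description>
</rdf:RDF>
```

A server receiving such a profile could, for example, deliver the text alternative of an image to a device that reports no image support.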

Accessibility Aspects of Relevant Standards

It is not merely coincidental that the 'standards' considered above were chosen. They are, almost without exception, not standards but W3C recommendations, all well defined and with information available from the main W3C website. W3C recommendations have the two distinct advantages that they are, as far as possible, interoperable and accessibility-promoting. The following table makes these features, and their relevance to the context, explicit.

Specification Accessibility Advantages

  • WCAG: Web Content Accessibility Guidelines provide not only comprehensive guidelines, but also sample implementation techniques and compliance check-points.
  • XHTML: XHTML is a useful alternative to HTML as it offers both the simplicity of HTML and the integration potential of XML, particularly supporting the separation of content and presentation and the use of applications to transform content types.
  • CSS: CSS is the language used to facilitate the use of cascaded, or prioritised, style sheets, further supporting the separation of content and presentation.
  • XSLT: XSLT is the language used to perform transformations on style sheets. These can include transformations that introduce alternative content into resources when what is already available is not suitable in the context, or for the user.
  • RUBY: RUBY is an annotation language conventionally used to explain versions of languages such as those that may be required when Japanese text is involved. This annotation language is proposed as useful also for explaining levels of language in the accessibility context.
  • SVG: SVG offers compression, as other image formats do, but also the advantages of vector as opposed to raster imagery. In particular, SVG images are text-based and have a compound format that allows for the inclusion of metadata that can be exploited by applications to adjust the image if required.
  • SMIL: SMIL is yet another multimedia integration language but has the added advantage that it is non-proprietary and interoperable. It is particularly useful when users need to adjust the integration and timing of interactive multimedia elements.
  • XML: XML, the meta-language in which most of the languages referred to have been developed, provides the glue that allows the proposed languages to operate as first-class objects in the interoperable environment.
  • RDF: RDF is a deliberately constrained language that has more structure than standard XML in order to facilitate logic-based applications.
  • EARL: EARL is constrained RDF, designed to provide rich statements in RDF that convey important information about the evaluator and the evaluation process along with a compliance report.
  • CC/PP: CC/PP is designed to match devices with resources, particularly in the context of web services. It is suitable for use in the accessibility arena and, if used, will increase the integration of accessibility solutions into the mainstream.
  • DCMES: DCMES is a widely deployed metadata standard and the foundation of many sophisticated and extensively specialised metadata schemata.

A Typical Accessible Page

Finally, let us consider a typical web resource comprised of a range of media. The following page contains the sort of resources commonly found on such pages, with an explanation of a few of the kinds of additional care that need to be taken to produce a fully accessible resource.

The mock-up below pairs each example of a content type on a pretend page (with a menu frame down one side) with an explanation of its accessibility needs.

  • The document 'head': The 'hidden' texts in the document's 'head'. The document should be in XML (possibly XHTML), with clear identification of the language of the text. The title, which is often used for discovery, should be sensible in this context, eg not just 'Home Page'. There should be other metadata for discovery purposes so the user can check that this resource is of interest, after all.
  • Navigation bar ("About | Search | Contacts | …"): The 'navbar' needs to have links carefully labelled but also with alternative text that describes their destination. The navbar should support the use of command keys as an alternative to the mouse. There should be a link that allows users to jump over the navbar, if necessary. If images, rather than text, are used for the navbar, the images should be tagged with the link destination as well as any descriptions necessary to explain their role within the content.
  • Main heading ("Belle Vacanze!"): The use of a foreign word in a page that is previously labelled as being in English should be clearly identified. As this is the main heading for the page, it should be the first expression marked as a heading of level 1.
  • Image (photo of Cooktown Harbour at sunset): Images that contain information should not only be given a brief tag indicating this, but they should also have a detailed description available for those who cannot view them. As this image links to a video of the holiday, there should be several forms of this content available: the video, an audio file that can be used when a screen is not being used (eg when a telephone is being used), possibly a text file for use on devices such as Braille screen readers, and maybe even a signed version for deaf users. The choice of media may need to be determined by reference to metadata about the user's devices, and the selected media may need to be coordinated. These two ideas introduce both XSLT and SMIL (as described above).
  • Story text ("This picture is of just one of the places …"): Stories and other forms of text should be formatted so that the text is available to all devices. This means that any formatting or layout should be achieved through the use of style sheets, not direct formatting that commits the user to the author's view.
  • Address ("Our address: …"): Addresses are useful to users and should be marked as addresses, so that browsers can immediately locate them wherever they are on a page.
  • Menu frame: Note that a frame, such as a marginal menu frame, is actually another web page, and users can be confused if movement from one frame to another is not clearly marked.
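The coordination of video, captions and audio description mentioned above is the kind of task SMIL is designed for. A minimal sketch, with invented file names, might look like this:

```xml
<!-- Sketch only: file names are invented. The switch element lets
     a player select captions or audio description according to the
     user's system preferences. -->
<smil xmlns="http://www.w3.org/2001/SMIL20/Language">
  <body>
    <par>
      <video src="holiday-video.mov" />
      <switch>
        <textstream src="captions-en.rt" systemCaptions="on" />
        <audio src="audio-description-en.wav" systemAudioDesc="on" />
      </switch>
    </par>
  </body>
</smil>
```

The par element plays the video and the chosen alternative in parallel, keeping them synchronised under user control.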

It is not only the guidelines and formats themselves that are relevant, but the degree to which they have been used correctly, that has to be reported by accessibility metadata. This could involve up to 100 comments, and unless they are encoded and interoperable, they will not be understandable to the user agents that need them, let alone the humans who may choose to read and interpret them.

Conclusion

Accessibility metadata is emerging as a field of interest. It will greatly assist those with special needs because of disabilities they have. It will also benefit the more fortunate, including many who consider themselves able-bodied but, for instance, disable their eyes by using them for driving while they access an electronic, web-based map. This accessibility metadata will inevitably have to cover the range of complexity associated with the accessibility of resources and services, and so a large number of formats. Interoperability of these formats will be essential to the effectiveness of the metadata. The use of standard formats thus appears essential. Such formats can be expected to include: HTML, XHTML, XML, DTD, CSS, XSLT, RUBY, SVG, DCMES, RDF, SMIL, EARL, and CC/PP, for a start. Such a mix will not be effective or rich unless it is well integrated and interoperable.

References

All recommendations referred to in this paper are from either the W3C website at http://w3.org/ or the Dublin Core™ Metadata Initiative website at http://dublincore.org/ .