This draft is out of date; an update that addresses new research and new technologies is being worked on at Symbol users with Speech, Language and Literacy Difficulties.
Voice Output Communication Aids (VOCAs) are devices that allow pictures, symbols or words to be used with speech synthesis. The devices may be specialist items with built-in systems carrying the symbols, or a generic computer, tablet or mobile phone running specialist software or apps. The symbols are activated by single or multiple touches, by eye gaze dwelling on a chosen image, or by manual use of a keyboard or switches. An external input device usually allows step-by-step scanning up and down the rows and columns of symbols on a grid. Symbols and word choices are personalised to suit the user, representing known objects, actions and happenings.
Users of Augmentative and Alternative Communication (AAC) based on images or pictographs as symbols tend to have no speech or language, very unintelligible speech, or difficulties expressing themselves, and may also need reading support. Individuals may also have severe mobility and dexterity disabilities and/or cognitive impairments. Depending on the skills of the user, the environment and the task in hand, symbols may represent a phrase or whole sentence of speech, or be made up of individual parts of speech to aid sentence making. Where literacy skills are a challenge, online communication becomes an issue because there are no accurate symbol-to-text or text-to-symbol sets that can be used and translated across all symbol systems.
Symbol users may be unable to cope with large amounts of online material. Depending on their abilities, this may limit the degree to which they can manage text-based content.
Symbol users may find it hard to plan a route through web pages unless navigation is clear.
They may have other physical and cognitive impairments affecting their understanding and use of navigation controls.
AAC users may also be overwhelmed by the number of interactions required to complete tasks.
Symbol users may find that large amounts of text reduce attention, whereas symbol-based content helps concentration.
Symbol users may find dense text-based content with audio output confusing, whereas audio combined with symbols may be helpful.
Symbol users may not comprehend the meaning of text-based content and may fail to act correctly when warnings or other interactive items appear on the web.
Symbol users may fail to recognise images, such as symbols or icons, that are not in their known set, and may be unable to use the web as intended when they cannot find understandable icons or imagery providing easy-to-use navigation or content.
The strategy of mapping symbol sets to a common vocabulary may allow developers to incorporate helpful symbols, knowing these can be automatically presented to the user in their own symbol language. This schema would also allow symbol users to enter their own symbols in an edit box or form, knowing they can be recognised by other symbol users through automatic translation. The primary solutions for mapping symbols for symbol users can be divided into three user scenarios. In all of these scenarios, the symbols presented to symbol users could be alternative and adapted to the user's preferences.
One option for interoperable symbol mapping uses the proposed syntax aria-concept="uri" to reference the target concept URI, as in the example below.
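The markup here is a hedged sketch only: the image file name and the concept URI are illustrative assumptions (reusing the cc-cat-1001 concept code from the CCF example that follows), not registered identifiers.

    <!-- Hypothetical markup: the concept URI is an assumed example -->
    <img src="cat.png" alt="cat"
         aria-concept="http://example.org/concepts/cc-cat-1001">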
This aria-concept syntax could take advantage of the Concept Coding Framework (CCF). Due to the lack of standardised encoding schemes and common practices, it is difficult to reference and exchange symbols as an alternative or complement to character-based text. The CCF therefore provides a common framework to link and map these symbols (currently Blissymbolics and ARASAAC symbols, and eventually other alternative representations) based on concept coding. For example, the presentation of the cat symbol (concept coding: cc-cat-1001) in this scenario would be as follows: for the cat concept cc-cat-1001, different symbol providers offer different symbols to represent the same concept.
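A hypothetical rendering of that mapping (the file paths and concept URI are assumed for illustration) could tie the Blissymbolics and ARASAAC renderings of "cat" to the same concept code, so a user agent could substitute whichever rendering matches the user's preferred symbol set:

    <!-- Blissymbolics rendering of the concept cc-cat-1001 (paths are illustrative) -->
    <img src="symbols/bliss/cat.png" alt="cat"
         aria-concept="http://example.org/concepts/cc-cat-1001">

    <!-- ARASAAC rendering of the same concept -->
    <img src="symbols/arasaac/cat.png" alt="cat"
         aria-concept="http://example.org/concepts/cc-cat-1001">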
In order to map symbols from different dictionaries, one approach could be the use of an ontology and Semantic Web technologies to enable interoperability of symbol datasets.
An ontology is a formal specification of a shared conceptualization. On the Semantic Web, a vocabulary can be considered a special form of (usually lightweight) ontology, or sometimes merely a collection of URIs with a (usually informally) described meaning. Linked Data describes a recommended best practice for exposing, sharing and connecting pieces of data, information and knowledge on the Semantic Web using URIs and RDF (the Resource Description Framework, the W3C-recommended metadata data model). The Concept Coding Framework working group provides the multilingual and multi-modal lexical ontology (the Lexicon Model for Ontologies) of the CCF, to be made available in a Linked Open Data (LOD) format. It also allows users to search for symbols based on different concepts and metadata, such as language, symbol dataset and localization. The following example demonstrates using RDFa to represent the mapping between symbols based on concept coding.
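The sketch below is a minimal RDFa illustration of such a mapping; the vocabulary terms (Concept, label, hasSymbol), URIs and file paths are assumptions made for illustration rather than terms defined by the CCF ontology itself.

    <div vocab="http://example.org/ccf/" typeof="Concept"
         resource="http://example.org/concepts/cc-cat-1001">
      <span property="label" lang="en">cat</span>
      <!-- The same concept is linked to symbols from two different providers -->
      <img property="hasSymbol" src="symbols/bliss/cat.png" alt="cat (Blissymbolics)">
      <img property="hasSymbol" src="symbols/arasaac/cat.png" alt="cat (ARASAAC)">
    </div>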
This approach mainly requires the symbol dataset providers to publish the symbols and their concepts as "Linked Open Data". In line with one of the four Linked Data principles, URI naming for the concepts could provide the target concept link described in the first solution. It could also provide alternative symbols for the same concept based on preferred properties, such as localization, language and colour.
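For example, a provider publishing its data this way could attach such properties to each symbol resource so that a user agent can select the variant matching the user's preferences. The RDFa sketch below again uses assumed vocabulary terms, URIs and file paths purely for illustration; it is not part of any published symbol dataset.

    <div vocab="http://example.org/ccf/" typeof="Concept"
         resource="http://example.org/concepts/cc-cat-1001">
      <!-- Two symbol variants for the same concept, distinguished by a language property -->
      <div property="hasSymbol" typeof="Symbol"
           resource="http://example.org/symbols/cat-en">
        <span property="language">en</span>
        <img src="symbols/cat-en.png" alt="cat">
      </div>
      <div property="hasSymbol" typeof="Symbol"
           resource="http://example.org/symbols/cat-sv">
        <span property="language">sv</span>
        <img src="symbols/cat-sv.png" alt="katt">
      </div>
    </div>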