
TactileNet: Bridging the Accessibility Gap with AI-Generated Tactile Graphics for Individuals with Vision Impairment

Project Overview

TactileNet is an AI-driven framework for generating tactile graphics for individuals with vision impairment, aimed at closing the accessibility gap in education. Leveraging a purpose-built dataset and fine-tuning methods such as Low-Rank Adaptation (LoRA) and DreamBooth, TactileNet automates the creation of high-quality tactile images, streamlining a design process that is otherwise manual and costly. Overall, the project demonstrates how generative AI can make educational resources more inclusive and effective for learners with vision impairment.

Key Applications

TactileNet - a dataset and AI-driven framework for generating tactile graphics

Context: Accessibility in education for visually impaired learners

Implementation: Utilizes text-to-image Stable Diffusion models fine-tuned with LoRA and DreamBooth to create tactile graphics from text prompts.
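To make the LoRA part of this pipeline concrete, the sketch below illustrates the core low-rank idea in plain NumPy: instead of updating a full pretrained weight matrix, LoRA trains two small factor matrices whose product is added to the frozen weight. This is an illustrative sketch only, not TactileNet's training code; the dimensions, rank, and scaling factor are assumed values, and a real setup would apply this inside a Stable Diffusion model via a library such as Hugging Face PEFT/Diffusers.

```python
import numpy as np

# LoRA sketch (illustrative; dimensions and rank are assumptions, not
# values from the TactileNet paper). A frozen weight W (d_out x d_in)
# is adapted as W + (alpha / r) * B @ A, where only the low-rank
# factors B (d_out x r) and A (r x d_in) are trained.

d_out, d_in, r, alpha = 768, 768, 8, 16

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))    # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01 # small random init
B = np.zeros((d_out, r))                  # zero init: adaptation starts as a no-op

W_adapted = W + (alpha / r) * B @ A       # equals W before any training step

# LoRA trains r*(d_out + d_in) parameters instead of d_out*d_in
full_params = d_out * d_in
lora_params = r * (d_out + d_in)
print(f"trainable fraction: {lora_params / full_params:.4f}")
```

Because B is initialized to zero, the adapted weight equals the pretrained weight at the start of fine-tuning; here only about 2% of the parameters of the full matrix would be trained, which is what makes fine-tuning large diffusion models on a modest tactile-graphics dataset tractable.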

Outcomes: Achieved 92.86% adherence to accessibility standards, demonstrating high fidelity and structural similarity to expert-designed tactile images.
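The structural-similarity comparison mentioned above is typically measured with SSIM. As a rough illustration of what that metric computes, the sketch below implements a simplified global SSIM in NumPy (a single window over the whole image, whereas standard SSIM averages over sliding windows); it is not the paper's evaluation code, and the constants follow the common default choices.

```python
import numpy as np

def ssim_global(x, y, data_range=1.0):
    """Simplified global SSIM over a whole image.

    Standard SSIM averages a windowed version of this formula; the
    single-window variant here is only a rough proxy, shown to
    illustrate the metric's structure (luminance, contrast, covariance).
    """
    c1 = (0.01 * data_range) ** 2  # stabilizing constants
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2)
    )

rng = np.random.default_rng(1)
img = rng.random((64, 64))
print(ssim_global(img, img))        # identical images score 1.0
print(ssim_global(img, 1.0 - img))  # an inverted image scores far lower
```

Identical images score exactly 1.0, and dissimilarity drives the score down, which is why SSIM is a natural way to compare generated tactile graphics against expert-designed references.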

Challenges: The scarcity of high-quality paired datasets, the complexity of tactile graphic requirements, and the need for further refinement of generated images.

Implementation Barriers

Technical Barrier

Scarcity of paired datasets for training effective AI models for tactile graphic generation.

Proposed Solutions: Developing a comprehensive dataset (TactileNet) that integrates high-quality tactile images and textual descriptions.

Implementation Barrier

High costs associated with refreshable tactile displays and the complexity of tactile graphic design, leading to challenges in production.

Proposed Solutions: Automating tactile graphic generation to reduce dependency on manual design, thus lowering production costs.

Project Team

Adnan Khan

Researcher

Alireza Choubineh

Researcher

Mai A. Shaaban

Researcher

Abbas Akkasi

Researcher

Majid Komeili

Researcher

Contact Information

For information about the paper, please contact the authors.

Authors: Adnan Khan, Alireza Choubineh, Mai A. Shaaban, Abbas Akkasi, Majid Komeili

Source Publication: View Original Paper

Project Contact: Dr. Jianhua Yang

LLM Model Version: gpt-4o-mini-2024-07-18

Analysis Provider: OpenAI
