We have known for some time now that no attempt to bridge the digital divide, or to leverage the power of information and communication technology to reach those left out of the ‘digital revolution', will be complete without surmounting the critical language barrier.
Indeed, the predominant language of technology, as we know, is English. And unless this changes, large sections of our population will find it impossible to navigate interfaces, interact with these gadgets and, in essence, utilise the potential of various communication technologies — be it on a simple computer or the now-ubiquitous mobile phone.
While Indic language computing — though still glitchy and rough around the edges — has indeed grown over the past decade, has it been able to keep pace with the large technological leaps in media, operating systems and device options? Right from the start, Indic computing has been a tough nut to crack. This is largely understandable, given the multiplicity of languages and scripts, the technical challenges posed by large alphabet sets and consonant-vowel combinations or conjunct characters, and the non-linearity of our scripts.
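The conjunct behaviour and non-linearity mentioned here can be seen directly in how Unicode stores Indic text. A minimal Python sketch (the Devanagari characters chosen are only illustrative):

```python
import unicodedata

# The Devanagari conjunct "ksha" (क्ष) is stored as three code points:
# the consonant KA, the virama (which suppresses the inherent vowel),
# and the consonant SSA -- the renderer must fuse them into one glyph.
ksha = "\u0915\u094D\u0937"  # क + ् + ष
for ch in ksha:
    print(f"U+{ord(ch):04X} {unicodedata.name(ch)}")

# Non-linearity: in "कि" (ki) the vowel sign I is stored AFTER the
# consonant but is drawn to its LEFT, so storage order differs from
# visual order -- something linear Latin-script rendering never faces.
ki = "\u0915\u093F"
print(len(ki))  # 2 code points, rendered as a single visual unit
```

This is why an Indic text stack needs a shaping engine between the stored code points and the screen, not just a font.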
Over the years, significant research has gone into Indic language computing. However, the lack of a standard practice has resulted in a largely “fragmented” approach to basic things such as input methods, efficient search or even creating platform-independent fonts. Though the government released InScript (a common keyboard layout) in the late 1980s, many found the mapping non-intuitive, and so several other input methods evolved.
While localised products released by the Centre for Development of Advanced Computing (CDAC) and initiatives by the Free and Open Source community on localising desktops, operating systems and applications contributed greatly to localisation, much is left to be desired when it comes to adoption. Government agencies such as CDAC have also drawn flak for not doing their bit to accelerate the process, choosing instead to “lock up” a lot of the innovation with patents and licences. Another factor that slowed down localisation considerably was the non-adoption of standards; in Karnataka, for instance, the Government is yet to adopt Unicode for e-governance and data storage.
On the desktop, input methods such as InScript struggled to gain ground, and were saved to some extent only by the transliteration tools (which convert text entered in English into the Indic script) that became popular. IT and Internet majors such as Microsoft and Google latched on to this and released free transliteration products, and other efforts such as Baraha for Kannada also made their mark. However, implementing these on mobile phones was a different story altogether, presenting a new and more complex set of challenges.
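Transliteration tools of this kind typically work by greedy longest-match substitution over a rule table. The sketch below is a toy illustration of that approach; the mapping table is an invented subset for demonstration and does not reflect Baraha's or Google's actual schemes:

```python
# A toy Latin-to-Devanagari transliterator illustrating greedy
# longest-match rules. The RULES table is a hypothetical subset.
RULES = {
    "kha": "ख", "ka": "क", "ga": "ग", "na": "न", "ra": "र", "da": "द",
    "k": "क्", "g": "ग्", "n": "न्", "r": "र्", "d": "द्",
    "aa": "ा", "i": "ि",
}

def transliterate(text: str) -> str:
    out, i = [], 0
    while i < len(text):
        # Try the longest rule first so "kha" wins over "k" + "ha".
        for size in (3, 2, 1):
            chunk = text[i:i + size]
            if chunk in RULES:
                out.append(RULES[chunk])
                i += len(chunk)
                break
        else:
            out.append(text[i])  # pass unknown characters through
            i += 1
    return "".join(out)

print(transliterate("kannada"))  # क + न् + न + द
```

Real tools add context-sensitive rules (inherent-vowel handling, candidate ranking), but the longest-match core is the same idea.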
Mobile phone space
In the cellphone space, these efforts have been largely fragmented. That over 50 letters had to be mapped to around 12 keys was a problem to begin with, and each company tried to do its own mapping, or purchase related IP, says Arjuna Rao Chavala, a localisation expert. So, while local language support was deployed on basic feature phones (phones priced at the low end), little effort was made to market it.
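The 12-key mapping problem can be illustrated with a multi-tap sketch, the way Latin text was typed on pre-smartphone keypads. The key assignments below are hypothetical, not any handset's actual layout, which is precisely the fragmentation being described:

```python
# Multi-tap decoding on a 12-key pad: pressing a key repeatedly cycles
# through the letters assigned to it. With 50-odd Indic letters, each
# key must carry many letters, and every vendor chose its own layout.
# This assignment (one varga per key) is purely illustrative.
KEYPAD = {
    "2": "कखगघङ",   # velar consonants on a hypothetical key 2
    "3": "चछजझञ",   # palatals on key 3
    "4": "टठडढण",   # retroflexes on key 4
}

def multitap(presses: str) -> str:
    """Decode runs of identical key presses, e.g. '22' -> 2nd letter on key 2."""
    out, i = [], 0
    while i < len(presses):
        j = i
        while j < len(presses) and presses[j] == presses[i]:
            j += 1  # measure the run of identical presses
        letters = KEYPAD[presses[i]]
        out.append(letters[(j - i - 1) % len(letters)])
        i = j
    return "".join(out)

print(multitap("22"))  # second letter on key 2
```

Even this toy ignores real complications such as vowel signs, conjuncts and typing two consecutive letters from the same key, each of which vendors solved differently.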
As for high-end phones, language support was not provided until recently. Yes, it was simple to go into the settings and enable language support, but people were not told that the feature existed. Now, several Android phones, many Samsung models and the iPhone come with language support. In 2009 and 2010, there was an industry attempt to evolve a standard, with several companies joining hands to work with the Centre for Excellence in Wireless and Internet.
A major impediment to adopting Indic language computing has been the input interface. A potential way around this hurdle is presented by ‘touch screen' technology. With the crashing price points of touch devices and the entry of tablets with higher-resolution screens, developing an input standard for them can help speed up deployment, says Mr. Chavala, who is part of an IEEE working committee drafting standards for virtual keyboards on mobile phones and tablets with touch interfaces. Desktops too, he believes, could use this technology with the new ‘touch' mice that have entered the market. “This could be a game-changer,” he says.
With multiple platforms (mobile phones and the cloud, for instance) and new trends in representation (such as HTML5), multimedia and voice-enabled devices, there is a demand for fresh standards.
Jitendra Shah, an academic who works on Indic font development, believes that not enough of the Government's efforts are focussed on this, either on regular desktops or on mobile phones. “A lot of the ‘action' that is being talked about is simply being driven by corporates, who come in with limited vested interests,” he says. Mr. Shah believes a framework for standardisation and adoption must be defined by the Government.
Today, in the mobile phone space, Mr. Shah believes, companies are coming up with their own solutions, hoping the market will force people to adopt them, after which they can “extract royalties”. This is similar to what happened before Unicode existed. Yet even with the evolution of Unicode, a solution to this problem has not emerged. “Perhaps that is because Unicode is not as font-independent as it should have been. It requires graphics packages to modify their libraries, which still use proprietary fonts and libraries; this is an additional barrier that the developers of these software have not felt compelled to cross,” Mr. Shah says. He points out similar problems in other areas, for instance in GIS software such as GeoServer and MapServer.