NOTEBOOK | Comment

Piggybacking on open source and open data


How the learning curve for political and data journalists got flattened

Elections and electoral results occupy a key slot among the events that animate our regular coverage. From ground reportage to data-based analyses, The Hindu strives to bring to its readers all the features of what is probably the most vibrant aspect of India’s electoral democracy.

Today, thanks to the Election Commission’s website and advanced data visualisation tools, it is possible to find out the rural-urban difference in voter choice, see the regional break-up of the mandate, and obtain minute-by-minute coverage of electoral results. Most TV channels depend on third-party agencies for regular updates from polling booths during their live coverage of election results, but newspaper websites rely extensively on EC data. This method is slower than agency updates, but it is more thorough and accurate.

The print edition is the best place to provide context and analyses of electoral results. For example, geographical analyses of the recent Jharkhand results using Census data showed that it was not just traditional weakness in rural areas or lack of adequate tribal support that put paid to the ruling BJP’s hope of retaining power, but an urban backlash too.

The creation of both live and static electoral maps is not an altogether difficult exercise for a software coder, but it presents a steep learning curve for a political journalist, whose job is to interpret the data and present conclusions. When I first wanted to present electoral data in a readable constituency map format nearly eight years ago, there were many technical hurdles. Downloading live electoral data from the EC website required knowledge of web scraping tools. Visualising this information in the form of both static and live electoral maps required a familiarity with web GIS tools. There was also the additional and seemingly insurmountable problem of having no shapefiles (map contours) for newly delimited constituencies. The EC had provided shapefiles for seats before the delimitation exercise in 2009, but there was no non-proprietary shapefile available for the post-2009 constituency boundaries.

However, thanks to an open source data community, DataMeet, this problem was sorted out. Around 2013, a hackathon involving geocoders and GIS specialists managed to recreate Assembly and parliamentary shapefiles, which were released under a Creative Commons licence for public use. Once shapefiles were available, the job of visualising data became much easier thanks to free tools such as Google Fusion Tables. I also managed to write a web scraper for the ECI website using my rudimentary programming skills, aided by similar code available in repositories such as GitHub. It was a web programmer from Iceland who helped me refine the code.
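The kind of scraper described above can be sketched in a few lines. This is a minimal illustration, not the actual ECI scraper: the table layout, column order, and the `sample_html` snippet are invented for the example, and a real scraper would fetch the live page (for instance with `urllib`) and adapt the parser to the actual markup.

```python
# Sketch of scraping a constituency-wise results table from an
# ECI-style HTML page, using only the standard library.
# NOTE: the table structure and sample_html below are hypothetical.
from html.parser import HTMLParser

class ResultsTableParser(HTMLParser):
    """Collects the text of <td> cells, grouped row by row."""
    def __init__(self):
        super().__init__()
        self.in_cell = False
        self.current_row = []
        self.rows = []

    def handle_starttag(self, tag, attrs):
        if tag == "td":
            self.in_cell = True

    def handle_endtag(self, tag):
        if tag == "td":
            self.in_cell = False
        elif tag == "tr" and self.current_row:
            self.rows.append(self.current_row)
            self.current_row = []

    def handle_data(self, data):
        if self.in_cell and data.strip():
            self.current_row.append(data.strip())

# Hypothetical snippet mimicking a constituency results table.
sample_html = """
<table>
  <tr><td>Ranchi</td><td>Candidate A</td><td>120345</td></tr>
  <tr><td>Dumka</td><td>Candidate B</td><td>98012</td></tr>
</table>
"""

parser = ResultsTableParser()
parser.feed(sample_html)
results = [(seat, cand, int(votes)) for seat, cand, votes in parser.rows]
print(results)
```

Once the rows are in a plain Python list like this, they can be written out as a CSV and fed straight into a mapping or charting tool.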

Today, The Hindu provides live data in maps on its website, besides deep and structured analyses of electoral results in print after every election. Comparisons with past data are also readily done using EC data available in easily reusable forms in repositories such as Ashoka University’s Trivedi Centre for Political Data. The presence and the work of the open source data and coding communities who provided the stepping stones for us is the key reason the process is now seamless and relatively easy.
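A typical comparison with past data of the sort mentioned above is a vote-share swing calculation across two elections. The sketch below uses invented figures purely for illustration; real inputs would come from EC results or datasets such as those published by the Trivedi Centre.

```python
# Sketch: computing a party's per-constituency vote-share swing
# between two elections. All figures below are hypothetical.
import csv
import io

csv_2014 = """constituency,party,vote_share
Ranchi,BJP,48.2
Dumka,BJP,41.5
"""
csv_2019 = """constituency,party,vote_share
Ranchi,BJP,44.0
Dumka,BJP,43.1
"""

def share_by_seat(text):
    """Map each constituency to its vote share, read from CSV text."""
    reader = csv.DictReader(io.StringIO(text))
    return {row["constituency"]: float(row["vote_share"]) for row in reader}

old, new = share_by_seat(csv_2014), share_by_seat(csv_2019)
swing = {seat: round(new[seat] - old[seat], 1)
         for seat in old if seat in new}
print(swing)  # a negative value indicates a fall in vote share
```

The same join-then-subtract pattern scales to a full state once the two result files share consistent constituency names.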



Printable version | Jan 24, 2020 8:26:30 AM | https://www.thehindu.com/opinion/op-ed/piggybacking-on-open-source-and-open-data/article30462959.ece
