21 Sep 2015


GIPO Observatory Tool Almost Here

By: luis.meijueiro

The goal of developing a tool that can decode, contextualise and analyse thousands of pieces of data around a wide set of Internet governance topics is close to becoming a reality. As a final product, the GIPO Observatory Tool will provide powerful insights into online information related to cyber security, jurisdiction, e-commerce and several other important Internet-related issues. The project is currently in its first beta-testing phase and the second “iteration” of a four-part software development plan that would have the Observatory Tool ready for public use as early as 2016.

The first development iteration began in January 2015 and was dedicated to coding the tool's main functionalities. This meant ensuring that the Observatory Tool could automatically collect Internet governance (IntGov) related information online, analyse its content and extract its main meaning in the form of keywords (a rough sketch of this step follows below). The storage of the collected information and its metadata also had to be coded in order to facilitate its publication as “Open Data”. In addition, programmers had to give users the ability to search the collected data against multiple criteria, as well as evaluate the content and express their opinions on its quality via up/down votes and comments.
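To make the keyword-extraction step concrete, here is a minimal sketch in Python. The simple term-frequency approach, the stopword list and the function name extract_keywords are all illustrative assumptions for this example; the post does not describe GIPO's actual algorithm.

```python
# Minimal sketch of keyword extraction by term frequency.
# All names and the approach itself are illustrative assumptions,
# not GIPO's actual implementation.
import re
from collections import Counter

# Small illustrative stopword list; a real system would use a fuller one.
STOPWORDS = {
    "the", "a", "an", "and", "or", "of", "to", "in", "on", "for",
    "is", "are", "was", "were", "that", "this", "with", "as", "by", "its",
}

def extract_keywords(text, top_n=10):
    """Return the top_n most frequent non-stopword terms in text."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 2)
    return [word for word, _ in counts.most_common(top_n)]

if __name__ == "__main__":
    sample = (
        "Internet governance covers cyber security, jurisdiction and "
        "e-commerce; cyber security is a recurring Internet governance topic."
    )
    print(extract_keywords(sample, top_n=5))
    # e.g. ['internet', 'governance', 'cyber', 'security', ...]
```

A production pipeline would normally add proper tokenisation, lemmatisation and TF-IDF-style weighting, but the shape of the task is the same: turn each collected document into a small set of representative keywords that can be stored as metadata.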

Once programming concluded in May 2015, a testing phase began in June 2015. The development status of the project was presented during a workshop at EuroDig 2015 in Sofia, Bulgaria. All the previously mentioned functionalities were available and some of the most illustrative ones were demoed for the audience. Members of the Advisory Group and personnel from the European Commission received early access to the first beta version of the tool.

In that same month, the second development iteration began in order to complete all functionalities planned for the public release of the tool. New functionalities included a fetcher module that retrieves content directly from HTML web pages (i.e. web scraping; see the sketch below), the software components that allow the whole platform to scale, and a visualisation module that enriches the tool's homepage so that users have the option of viewing categorical data in the form of a treemap.
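As a rough illustration of what such a fetcher does, the sketch below downloads a page and extracts its title and paragraph text into a small record. The requests/BeautifulSoup stack, the example URL and the record fields are assumptions made for this sketch, not the project's actual code.

```python
# Minimal sketch of an HTML fetcher (web scraper).
# Library choice, URL and field names are illustrative assumptions.
import requests
from bs4 import BeautifulSoup

def fetch_page(url, timeout=10):
    """Download a web page and return a small metadata record."""
    response = requests.get(url, timeout=timeout)
    response.raise_for_status()  # fail loudly on HTTP errors
    soup = BeautifulSoup(response.text, "html.parser")
    title = soup.title.get_text(strip=True) if soup.title else ""
    # Join the visible paragraph text into one body string.
    body = " ".join(p.get_text(" ", strip=True) for p in soup.find_all("p"))
    return {"url": url, "title": title, "text": body}

if __name__ == "__main__":
    record = fetch_page("https://example.com")
    print(record["title"])
```

A record like this is the natural input to the keyword-extraction step sketched earlier: fetch, extract keywords, then store both the content and its metadata for search and visualisation.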

The first public release of the platform is scheduled for the end of 2015. In the meantime, keep track of the development of this groundbreaking tool by viewing the timeline below.
