Round table, Sciences Po Paris, 6 December 2018, 6 pm
For social science research to take full advantage of large digital databases, one obstacle remains to be removed: access to these data is limited, unevenly distributed, and surrounded by legal and ethical uncertainty. We propose to discuss this on the occasion of the publication of the special issue of the Revue Française de Sociologie on “Big data, sociétés et sciences sociales” (no. 59/3). This round table brings researchers together with other public and private stakeholders:
- Garance Lefèvre, Senior Policy Associate, Uber
- Roxane Silberman, Scientific Advisor, Centre d’Accès Sécurisé aux Données (CASD)
- Sophie Vulliet-Tavernier, Director of Public and Research Relations, Commission Nationale de l’Informatique et des Libertés (CNIL)
- The authors of the special issue.
Moderators: Gilles Bastin (Univ. Grenoble Alpes) and Paola Tubaro (CNRS), coordinators of the special issue.
Admission is free, subject to seat availability: to register, click here.
Getting there: Sciences Po, salle Goguel. Entrance at 27 rue Saint-Guillaume, 75007 Paris (cross the garden and take the lift to the top floor). The round table is organized by the Revue Française de Sociologie in collaboration with the Presses de Sciences Po. It will be followed by a reception.
I am now in Montréal, where I participated, last Friday, in a panel on Open Data at “Science & You” international conference. It was interesting for me to reflect on how the picture has changed since my previous panel on the same topic – in Kiev in 2012. Back then, we were busy trying to convince public administrations that data opening was good for transparency and could help improve services to communities. Since then, a lot of attempts have been made in numerous countries – local authorities often pioneering the process, followed only later by central governments (one example cited in my panel was Québec City). What is made open is typically information from public registers (first names of newborns, records of road accidents) and increasingly, from technological devices and sensors (bus traffic information).
There are some conditions to be met for a dataset to be called “open”:
- Technically, it needs to be “raw”, detailed, digital and reusable. The French Interior Ministry released results of the first round of the recent presidential elections within a few days, at polling station level. This is sufficiently detailed (with over 69,000 polling stations throughout the country), raw (allowing aggregations, comparisons etc.), and digital/reusable (so much so that the newspaper Le Monde could develop a user-friendly application to let readers easily check results in their neighborhoods). Some would also insist that “open” data should be released in non-proprietary formats (better .csv than .xls, for example).
- Legally, the data must come with a license that allows re-use by third parties (typically within the Creative Commons family). Ideally, no type of reuse should be ruled out (including somewhat controversially, commercial / for-profit reuse).
- Economically, the data should be available to all for free (or at least with minimal charges if data preparation requires extra work or expenses).
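The “raw, detailed, reusable” condition above can be made concrete with a minimal Python sketch. The file layout and column names here are invented for illustration, not the Interior Ministry’s actual schema; the point is simply that detailed, raw, open data can be re-aggregated by anyone, at any level they choose.

```python
import csv
import io
from collections import defaultdict

# Hypothetical extract of polling-station-level results, in the open
# .csv format discussed above (columns are illustrative only).
raw = """departement,bureau,candidate,votes
75,0001,A,120
75,0001,B,80
75,0002,A,90
13,0001,A,60
13,0001,B,110
"""

# Because the data are raw and detailed, third parties can
# re-aggregate them freely -- here, total votes per candidate.
totals = defaultdict(int)
for row in csv.DictReader(io.StringIO(raw)):
    totals[row["candidate"]] += int(row["votes"])

print(dict(totals))  # {'A': 270, 'B': 190}
```

The same few lines could just as easily aggregate by département or compare stations, which is exactly what “reusable” means in practice.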
While in the past few years a lot of thought was devoted to the “ideal” conditions for data opening and how it would positively affect public services, the data landscape has now changed significantly.
Continue reading “Open Data: What’s new in 2017?”
Some time ago, I wrote a post on ethical issues in research with secondary data – a somewhat grey area, where students and scholars often feel guidance is insufficient. Even more complex is research with internet data – neither primary nor secondary strictly speaking, but “big” data. A recent case fuelled an international debate on how researchers should deal with data that are, apparently, accessible to all on the web: a Danish graduate student published a large dataset of users of the online dating site OkCupid (he apparently did so without any institutional backing, and Aarhus University, where he studies, is now on the case). Michael Zimmer, a specialist of information studies and the policy and ethics of online research, aptly summarizes the issues in a recent Wired article:
- Don’t say that “the data are already public”. The fact that OkCupid users knowingly share some personal information does not mean they consent to it being used for purposes other than interactions with other users on that site. By scraping data, one may be able to put together the whole history of users’ presence on that platform, revealing more of their life or personality than they themselves are aware of. More dangerously, data extracted in this way might in some cases be matched with other information, thereby potentially becoming much more disclosive than the persons concerned ever intended or agreed to. And the disclosure may be aggravated by releasing the data outside the platform.
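The matching risk can be sketched in a few lines of Python (all records and field names below are invented for illustration): an “anonymous” scraped dataset, joined with a separate public register on shared quasi-identifiers, may reveal exactly who said what.

```python
# Minimal sketch of a linkage attack. The scraped data carry no names,
# but quasi-identifiers (zip code, birth year, sex) remain.
scraped = [
    {"zip": "75007", "birth_year": 1985, "sex": "F", "answer": "sensitive"},
    {"zip": "69001", "birth_year": 1990, "sex": "M", "answer": "sensitive"},
]
# A separate, openly available list that does include names.
register = [
    {"name": "Alice", "zip": "75007", "birth_year": 1985, "sex": "F"},
    {"name": "Bob",   "zip": "13001", "birth_year": 1990, "sex": "M"},
]

def link(scraped, register):
    """Match scraped records to named ones on shared quasi-identifiers."""
    keys = ("zip", "birth_year", "sex")
    index = {tuple(r[k] for k in keys): r["name"] for r in register}
    return [(index.get(tuple(s[k] for k in keys)), s["answer"])
            for s in scraped]

print(link(scraped, register))
# [('Alice', 'sensitive'), (None, 'sensitive')]
```

The first record is re-identified even though the scraped dataset, taken alone, looked anonymous; this is why “the data are already public” is no defence.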
Continue reading “Ethical issues in research with online data”
This week was World Statistics Day, celebrated at the UN and in individual countries around the world. While celebrating the successes of official statistics throughout its history of producing vital information for governments and citizens, this time much of the debate focused on its – more uncertain – future. The landscape is rapidly changing, swiftly shifting from a data-scarce to a data-rich world, from structured to unstructured data, from the quasi-monopoly of official statisticians on the production of information to fierce competition, from pure statistics to multi-disciplinarity and the rise of so-called “data science”. There are obvious opportunities, but also formidable challenges, and it is always difficult for large organisations (such as statistical institutes) to adapt.
The President of the IAOS urged official statisticians to stick to the UN-backed Fundamental Principles of Official Statistics as a guide. She focused on the efficiency and ethics of engaging with users and the private sector, combined with the rigour of methods, to deliver “better data for better lives” (the slogan of the day).
Continue reading “World Statistics Day 2015”
It is often believed that use of secondary data relieves the researcher from the burden of applying for ethical approval – and sometimes, from thinking about ethics altogether. But the whole process of research involves ethical considerations, whether or not any primary data collection is involved. This starts from the initial design of the study, which should aim at the public good (and at the very least should do no harm) and continues until communication of results, which should ensure transparency, publicness and replicability. More specifically, what ethical issues will the data collection and analysis stages involve, when secondary data are used?
Secondary data are usually defined as those that were collected as part of a different research project, for purposes other than those of the present study. They may be official statistical data (census for example, but also, increasingly, administrative data), data gathered by commercial operators (time series of stock prices for example), and researchers’ data from past projects. They are more often quantitative, although secondary analysis of qualitative data is becoming more and more common.
Weighing risks and benefits
Use of secondary data is, in itself, a highly ethical practice: it maximizes the value of any (public) investment in data collection, it reduces the burden on respondents, it ensures replicability of study findings and therefore, greater transparency of research procedures and integrity of research work. But the value of secondary data is only fully realized if these benefits outweigh the risks, notably in terms of re-identification of individuals and disclosure of sensitive information.
For this to happen, use of secondary data must meet some key ethical conditions:
- Data must be de-identified before release to the researcher
- Consent of study subjects can be reasonably presumed
- Outcomes of the analysis must not allow re-identifying participants
- Use of the data must not result in any damage or distress
Continue reading “Research ethics in secondary data: what issues?”
The recent VW emissions scandal says it all: even a large company can’t get away with behaviours that disrespect key societal values. Protection of the environment is among these values today, so much so that not only public authorities step in to defend it, but even markets punish the transgressors.
Data protection is not (yet) such a value. Admittedly, some associations, individuals, and government officials fight for it, but the larger public is still unsure. It’s not that people don’t care, but that uncertainty as to what data are actually collected, for what usages, and by whom, is overwhelming; and it becomes difficult to identify the best course of action.
In this context, a new initiative is most welcome: an open letter on “Data for Humanity“, initiated by two scholars of the University of Frankfurt, pleads for a more responsible use of data. The message is simple: Do no harm. And if you can, on top of it, do something good. It’s so simple, and so necessary.
Sure, the world won’t change after this letter, but it will be a first step. Even the promotion of environmental protection started with simple, basic declarations, 30-40 years ago; and it was by insisting and persevering that it finally entered everybody’s consciousness.
If you are a researcher in economics, demography, sociology, geography or political science, you may have experienced the frustration of discovering a relevant data resource and being denied access to it — typically on the grounds that data release would violate the confidentiality of data subjects. Or you may have heard of fantastic analyses — with all the fancy new statistical and econometrics tools and software that are increasingly in fashion today — done with large amounts of very detailed microdata, but you have no clue how to do anything like that yourself. Maybe you have tried to look at the website of some public administration that likely holds the data you want — like labor market or business data — but could not figure out how to ask for these data in the first place. And if you ever tried to access data from two or more different countries, you probably found the task of even finding out how to apply in different systems daunting.
Now, there is a great opportunity for you to get closer to your goal. The European project “Data without Boundaries” (DwB) offers social scientists from across Europe funding, information and support to access household surveys and business data from public-sector records in special Research Data Centers in France, Germany, the Netherlands and UK. These are microdata at individual level, highly detailed; they cannot be publicly released, but access can be legally given for scientific and statistical research purposes.
Both confirmed researchers and PhD students are welcome to apply, and should do so in a country different from the one where they reside. There is a preference for comparative, cross-country projects. The deadline is 15th October 2013.
For more information, see the call for proposals on the DwB website.
This is part of a broader policy effort to improve researchers’ access to data in Europe, to enhance capacity to produce science-based understanding of society across the continent.
The “open data” movement is radically transforming policy-making. In the name of transparency and openness, the UK, US and other governments are releasing large amounts of records. It is a way to hold the government to account: in the UK, for example, all lobbying efforts in the form of meetings with senior officers are now publicly released. Data also enable the public to make more informed decisions: for example, using apps from public transport services to plan their journeys, or tracking indicators of, say, crime or air pollution levels in their area to decide where to buy property. Data are provided as a free resource for all, and businesses may use them for profit.
The open data movement is not limited to the censuses and surveys produced by National Statistical Institutes (NSIs), the public-sector bodies traditionally in charge of collecting, storing and analyzing data for policy purposes. It extends to other administrations such as the Department for Work and Pensions or the Department for Education in the UK, which also gather and process data, though usually through a different process, not using questionnaires but rather registers.
Continue reading “Data in the public sector: Open data and research data”