Small data and big models: Sunbelt 2014

Uh, it’s been a while… I should have written more regularly! All the more so as many things have happened this month, not least the publication of our book on the End-of-Privacy hypothesis. Well, I promise, I’ll catch up!

Meanwhile, a short update from St Pete Beach, FL, where the XXXIV Sunbelt conference is just about to end. This is the annual conference of the International Network for Social Network Analysis. In the last few years, I had noticed some sort of tension between (let's call it that — no offense!) the old school, people using data from classical sources such as surveys and fieldwork, and big data people, usually from computer science departments and rather disconnected from the core of top social network analysts, most of whom come from the social sciences. This year, though, the tension was much less apparent, or at least I did not find it so overwhelming. There weren't many sessions on big data this time, but a lot of progress on the old-school side — which is in fact renewing its range of methods and tools very fast. No more bare descriptive statistics of small datasets, as in the early days of social network analysis, but ever more powerful statistical tools allowing statistical inference (very difficult with network data — I'll come back to that in a future post), hypothesis testing, and very advanced forms of regression and survival analysis. In this sense, a highly interesting conference indeed. We can now do theory-building and modeling of networks at a level never experienced before, and we don't even need big data to do so.

The keynote speech by Jeff Johnson, interestingly, focused on the contrast between big and small data. Johnson has strong ethnographic experience with small data, including in very exotic settings such as scientific research labs at the South Pole and fisheries in Alaska. He combined social network analysis techniques, sometimes using highly sophisticated mathematical tools, with fieldwork observation to gain insight into, among other things, the emergence of informal roles in communities. His key question here was: can we bring ethnographic knowing to big data? And if so, how?

My own presentation (apart from a one-day workshop I offered on the first day, where I taught the basics of social network analysis) took place this afternoon. I realize, and I am pleased to report, that it was in line with the small-data-but-sophisticated-modeling mood of the conference. It is a work derived from our research project Anamia, using data from an online survey of persons with eating disorders to understand how the body image disturbances that affect them are related to the structure of their social networks. The data were small, because they were collected as part of a questionnaire; but the survey technique used was advanced, and the modeling strategy was quite complex. For those who are interested in the results, our slides are here:

Network data, new and old: from informal ties to formal networks

Network data are among those that are changing fastest these days. When I say I study social networks, people almost automatically think of Facebook or Twitter — without necessarily realizing that networks have been around for, well, the whole history of humanity, long before the internet. Networks are just systems of social relationships, and as such, they can exist in any social context — the family, school, workplace, village, church, leisure club, and so forth. Social scientists started mapping and analysing networks as early as the 1930s. But people didn't think of their social relationships as "networks" and didn't always see themselves as "networkers", even if they did invest a lot in their relationships, were aware of them, and cared about them. The term, and the systemic configuration, were just not familiar. There was something inherently informal and implicit about social ties.

What has changed with Facebook and its homologues is that the network metaphor has become explicit. People are now accustomed to talking about "networks" and to thinking in systemic terms, seeing their own relationships as part of a more global structure. Network ties have become formal — you have to make a clear choice and take action when you add a "friend" on Facebook or "follow" someone on Twitter; you have a list of your friends/followers/followees (whatever the specific terminology is) and can monitor changes in this list. You know who the friends of your friends are, and can keep track of how many people viewed your profile, included you in their "lists", or mentioned you in their tweets. Now everyone knows what networks are — so if you are a social network researcher and conduct a survey like in the old days, you won't fear that your respondents may misunderstand. In fact, you may not even need to do a survey at all — the formal nature of online ties, digitally recorded and stored, makes it possible to retrieve your network information automatically. You can just mine network tie data from Facebook, Twitter, or whatever service your target populations happen to be using.
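To make the formality of these ties concrete, here is a minimal sketch (using the networkx library; the accounts and ties are invented for illustration) of how a "follow" relation can be represented as a directed graph, where every tie is an explicit, recorded choice:

```python
import networkx as nx

# A directed graph: an edge (a, b) means "a follows b".
# These accounts and ties are invented for illustration.
follows = [
    ("alice", "bob"),
    ("alice", "carol"),
    ("bob", "carol"),
    ("dave", "alice"),
]

G = nx.DiGraph()
G.add_edges_from(follows)

# Because ties are formally recorded, the counts are exact:
# out-degree = accounts you follow, in-degree = your followers.
print(G.out_degree("alice"))  # accounts alice follows
print(G.in_degree("carol"))   # carol's followers
```

Nothing here requires asking respondents anything: the network structure is already stored as data, which is precisely the shift described above.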

Continue reading “Network data, new and old: from informal ties to formal networks”

Training in European data: EU-SILC

Official statistical surveys are still the best sources of data in terms of quality. In practice, they are the only ones that apply random sampling, and the legal obligation to respond makes the actual sample very close to the targeted one. No other approach to data collection can hope to do as well.

The European Union Statistics on Income and Living Conditions (EU-SILC) is an instrument aimed at collecting timely and comparable cross-sectional and longitudinal multidimensional microdata on income, poverty, social exclusion and living conditions. It started in 2003 with a small group of participating countries and was enlarged in 2004. It is one of the richest sources of information on the daily life conditions of Europeans.

EU-SILC data are available for research use, but many barriers exist and these data are actually underutilized. On the one hand, the fact that access is legally authorised does not make it practically straightforward – the application process can be lengthy and costly. On the other hand, the very handling of data requires some specific knowledge and skills.

The Data without Boundaries European initiative, which aims to improve research access to official data, is organising a training programme on EU‐SILC with a specific focus on the longitudinal component. Local organization lies with Réseau Quetelet, and the training course is hosted by GENES (Groupe des Écoles Nationales d'Économie et Statistique), both in Paris, France.

Continue reading “Training in European data: EU-SILC”


#bigdataBL

On Friday last week, the British Sociological Association (BSA) held an event on "The Challenge of Big Data" at the British Library. It was interesting, stimulating and relevant – I was particularly impressed by the involvement of participants and the very intense live-tweeting, never so lively at a BSA event! And people were particularly friendly and talkative, both on their keyboards and at the coffee tables… so in honour of all this, I am choosing the hashtag of the day, #bigdataBL, as my title here.


Some highlights:

  • The designation of “big data” is from industry, not (social) science, said a speaker at the very beginning. And it is known to be fuzzy. Yet it becomes a relevant object of scientific inquiry in that it is bound to affect society, democracy, the economy and, well, social science.
  • Big-data practices change people’s perception of data production and use. Ordinary people are now increasingly aware that a growing range of their actions and activities are being digitally recorded and stored. Data are now a recognized social object.
  • Big data needs to be understood in the context of new forms of value production.
  • So, social scientists need to take note (and this was the intended motivation of the whole event). The complication is that Big Data matter for social science in two different ways. First, they are an object of study in themselves – what are their implications for, say, inequalities, democratic participation, or the distribution of wealth? Second, they offer new methods to be exploited to gain insight into a wide range of (traditional and new) social phenomena, such as consumer behaviours (think of Tesco supermarket sales data).
  • Put differently, if you want to understand the world as it is now, you need to understand how information is created, used and stored – that’s what the Big Data business is all about, both for social scientists and for industry actors.

Continue reading “#bigdataBL”

Big Data and social research

Data are not a new ingredient of socio-economic research. Surveys have served the social sciences for a long time; some of them, like the European Social Survey, are (relatively) large-scale initiatives, with multiple waves of observation in several countries; others are much smaller. Some of the data collected were quantitative, others qualitative, or mixed-methods. Data from official and governmental statistics (censuses, surveys, registers) have also been used a lot in social research, owing to their large coverage and good quality. These data are ever more in demand today.

Now, big data are shaking this world. The digital traces of our activities can be retrieved, saved, coded and processed much faster, much more easily and in much larger amounts than surveys and questionnaires ever allowed. Big data are primarily a business phenomenon, and the hype is about the potential gains they offer to companies (and allegedly to society as a whole). But, as researcher Emma Uprichard says very rightly in a recent post, big data are essentially social data. They are about people: what they do, how they interact, how they form part of groups and social circles. A social scientist, she says, must necessarily feel concerned.

It is good, for example, that the British Sociological Association is organizing a one-day event on The Challenge of Big Data, and it rightly insists that members must engage with it. This challenge goes beyond the traditional qualitative/quantitative divide and the underrepresentation of quantitative methods in British sociology. Big data, and the techniques to handle them, are not statistics, and professional statisticians have trouble with them too. (The figure below is just anecdotal, but clearly suggests how a simple search on the Internet identifies Statistics and Big Data as unconnected sets of actors and ties.) The challenge has more to do with the a-theoretical stance that big data seem to involve.


Continue reading “Big Data and social research”

The fuzziness of Big Data

Fascinating as they may be, Big Data are not without problems. Size does not eliminate the problem of quality: because of the very way they are collected, Big Data are unstructured and unsystematized, the sampling criteria are fuzzy, and classical statistical analyses do not apply very well. The more you zoom in (the more detail you have), the more noise you find, so that you need to aggregate data (that is, to reduce a "big" micro-level dataset to a "smaller" macro one) to detect any meaningful tendency. Analyzing Big Data as they are, without any caution, increases the likelihood of finding spurious correlations – a statistician's nightmare! In short, processing Big Data is problematic: although we do have sufficient computational capacity today, we still need to refine appropriate analytical techniques to produce reliable results.
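The spurious-correlation point is easy to demonstrate by simulation. The sketch below (plain NumPy; all numbers are synthetic, and the variable counts are arbitrary choices of mine) generates a target variable and a thousand purely random predictors, then looks for the strongest correlation among them: with enough variables, an apparently notable association shows up by chance alone, even though no real relationship exists.

```python
import numpy as np

rng = np.random.default_rng(42)

n_obs, n_vars = 100, 1000

# A target and 1000 predictors, all pure noise: there is no real
# relationship anywhere in this dataset by construction.
y = rng.standard_normal(n_obs)
X = rng.standard_normal((n_obs, n_vars))

# Correlation of each predictor with the target.
corrs = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(n_vars)])

# The strongest correlation found purely by chance.
best = np.abs(corrs).max()
print(f"strongest chance correlation: {best:.2f}")
```

With many variables and comparatively few observations, screening everything against everything is guaranteed to "find" something – which is exactly why unguarded analysis of Big Data is a statistician's nightmare.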

In a sense, the enthusiasm for Big Data is diametrically opposed to another highly fashionable trend in socioeconomic research: the use of randomized controlled trials (RCTs), as in medicine, or at least quasi-experiments (often called "natural experiments"), which enable collecting data under controlled conditions and make it possible to detect causal relationships much more clearly and precisely than in traditional, non-experimental social research. These data have a lot more structure and scientific rigor than old-fashioned surveys – just the opposite of Big Data!

This is just anecdotal evidence, but do a quick Google search for images of RCTs vs. Big Data. Here are the first two examples I came across: on the left are RCTs (from a dentistry course), on the right is Big Data (from a business consultancy website). The former conveys order, structure and control; the latter, a sense of being somewhat lost, or of not knowing where all this is heading… Look for other images; I'm sure the great majority won't be that different from these two.


Continue reading “The fuzziness of Big Data”

What is data?

All the hype today is about Data and Big Data, but the notion may seem a bit elusive. My students sometimes struggle to understand the difference between "data" and "literature", perhaps because of the unfortunate habit of calling library portals "databases". Even colleagues are sometimes uncomfortable with the notion of data (whether "big" or "small") and the breadth it is now taking on. So, a definition can be helpful.

Data are pieces of unprocessed information – more precisely, raw indicators, or basic markers, from which information is to be extracted. Untreated, they hardly reveal anything; subjected to proper analysis, they can disclose the inner workings of some relevant aspects of reality.

The “typical” example of socioeconomic data is the observations/variables matrix, where each row represents an observation – an individual in a population – and each column represents a variable – a particular indicator about that individual, for example age, gender, or geographical location. (In truth, data types are more varied and may also include unstructured text, images, audio and video; but for the sake of simplicity, let’s stick to the matrix here.)
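As an illustration, such a matrix can be sketched in a few lines of Python (using pandas; the individuals and values below are invented):

```python
import pandas as pd

# Each row is an observation (an individual in the population);
# each column is a variable (an indicator about that individual).
# All values are invented for illustration.
data = pd.DataFrame(
    {
        "age": [34, 27, 51],
        "gender": ["F", "M", "F"],
        "location": ["Paris", "London", "Milan"],
    },
    index=["person_1", "person_2", "person_3"],
)

print(data.shape)                   # (rows = observations, columns = variables)
print(data.loc["person_2", "age"])  # one cell: one indicator for one individual
```

A single cell of the matrix is a raw indicator in the sense above: on its own it says little, but analysed across the whole matrix it can reveal patterns in the population.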


Continue reading “What is data?”

Data in the public sector: Open data and research data

The “open data” movement is radically transforming policy-making. In the name of transparency and openness, the UK, US and other governments are releasing large amounts of records. It is a way to hold the government to account: in the UK, for example, all lobbying efforts in the form of meetings with senior officers are now publicly released. Data also enable the public to make more informed decisions: for example, using apps from public transport services to plan their journeys, or tracking indicators of, say, crime or air pollution levels in their area to decide where to buy property. Data are provided as a free resource for all, and businesses may use them for profit.

The open data movement is not limited to the censuses and surveys produced by National Statistical Institutes (NSIs), the public-sector bodies traditionally in charge of collecting, storing and analyzing data for policy purposes. It extends to other administrations such as the Department for Work and Pensions or the Department for Education in the UK, which also gather and process data, though usually through a different process, not using questionnaires but rather registers.

Continue reading “Data in the public sector: Open data and research data”

Hallo world – a new blog is now live!

Hallo Data-analyst, Data-user, Data-producer or Data-curious — whatever your role, if you have the slightest interest in data, welcome to this blog!

This is the first post and, as is customary, it should say what the whole blog is about. Well, data. Of course! But it aims to do so in an innovative, and hopefully useful, way.


Continue reading “Hallo world – a new blog is now live!”