Working with Migrants’ Remittances Data: How to Master a Big Project with a Small Team


By David Bauer, journalist and digital strategist at the Swiss online newspaper TagesWoche

Migrants from all over the world sent back more than $500 billion to their home countries last year. This number has risen sharply since 2000 and it keeps rising. We wanted to know more about this and thought it would be a great story to tell, both in our online publication and in our weekly print issue, which are both produced out of the same newsroom in Basel, Switzerland.

The whole project has been quite a ride, for us anyway, since TagesWoche hadn't done anything quite like this before. If you are a journalist working in a smaller newsroom and don’t have a team of developers and graphic designers, this post describes one way in which you can get your project done and learn things along the way.

The project started with a team of three that had never worked together before, and, seven weeks later, we published this:

The project was a big hit for our newsroom:

  • The story received ten times the visits we average for big stories on TagesWoche.
  • The visualisation was republished by Austria's leading news site, standard.at.
  • We are currently working with a TV station to make an adaptation of the project for their purposes.


Screenshot of ‘The Incredible Rise of Migrants’ Remittances' intro page, TagesWoche

How the project got done: The hackathon format

It all began at the end of March with a data journalism hackathon initiated by the Global Editors Network (GEN). They had invited nine teams from German, Austrian and Swiss news outlets to Vorarlberg, Austria. GEN had defined a theme, global migration, but apart from that we were free to build whatever we liked.

Each team was made up of a journalist (that was my part), a programmer and a designer. This setup proved to be my first challenge: in our newsroom, we had neither front-end designers nor programmers, so I invited two freelancers to come with me. We had one conversation beforehand, but the hackathon was in fact where we first met.

While searching for a good hackathon topic with a colleague of mine, we soon became intrigued by remittances: money sent back home by migrants.

There have been a number of excellent visualisations of remittance flows for one specific year, like the one in The Guardian and, more recently, the one by Al Jazeera with a focus on Gulf states. There was, however, no visualisation of the development of remittances over time. And, after a quick look at the data, it became clear to us that this was where the big story lay.

Where to find and how to work with remittances data

Remittances are fairly well documented, and, thanks to the World Bank, the data is easily accessible and well structured. A big plus.

Unfortunately, though, the complete matrix of remittance flows - showing the exact amounts flowing between all pairs of countries - is only available for 2010 and 2011, which is probably why all other visualisations had focused on those years. What we did have were total inflows and outflows for all countries dating back to 1970 (all data provided by the World Bank). In retrospect, I'd say this restriction helped us focus on the real story. You should not show everything just because you have the data - and we sure would have been tempted to do exactly that.

We decided to focus on inflows, i.e. on the receiving countries, and compare remittances with a different type of money inflow that receives a lot more public attention: foreign aid, using data provided by the OECD. Our first rough calculations indicated that remittances had well overtaken foreign aid after 2000 and had grown to almost four times the size (which turned out to be a bit off, but might still come true in the next few years).

We needed one more dataset: migration stocks, which we got from the World Bank. This allowed us to show, for each country, where its emigrants lived and how much of the increase in remittances was due to the increase in the number of migrants.

Many of the difficulties we had to overcome had to do with the data that we had access to:

  • Since 1970, some countries have ceased to exist and others have come into existence. We couldn't fully account for this. So we decided to work with today's countries, which meant that remittances to countries like the Soviet Union or Yugoslavia are not included at all. Since remittances had remained fairly low until the mid-‘90s, that didn't distort the big picture.
  • Migration stocks are only recorded every 10 years. We had long discussions on whether to interpolate, knowing that migration flows usually don't follow linear patterns. We opted to interpolate anyway, as not being able to show migration stocks for most years in the timeline would have been the bigger evil.
  • The OECD data on foreign aid was inflation adjusted; the remittances data wasn't. Picking the right deflator proved more complex than it first seemed, but with a little help from the OECD we managed to find it.
  • We didn't have the latest remittances data (for 2012) when we started working on the project, but knew it would become available any time soon. Eventually, it was released a mere three days before our launch. Because we were prepared for it and had structured our data repository accordingly, we were able to easily feed the visualisation with the latest data.
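To make the interpolation step concrete: filling the years between two decennial observations is a straightforward linear interpolation. The sketch below is illustrative only - the function and field names are made up and not from the actual project code.

```javascript
// Migration stocks are only recorded every ten years, so we estimate the
// years in between by linear interpolation (knowing this smooths over
// non-linear migration patterns).
function interpolateStocks(decennial) {
  // decennial: array of { year, stock } observations, sorted by year
  const yearly = [];
  for (let i = 0; i < decennial.length - 1; i++) {
    const a = decennial[i];
    const b = decennial[i + 1];
    const span = b.year - a.year;
    for (let y = a.year; y < b.year; y++) {
      const t = (y - a.year) / span; // fraction of the way from a to b
      yearly.push({ year: y, stock: a.stock + t * (b.stock - a.stock) });
    }
  }
  yearly.push(decennial[decennial.length - 1]); // keep the last observed point
  return yearly;
}
```

With observations of 100 in 1990 and 200 in 2000, this yields 150 for 1995 - a reasonable placeholder, but, as noted above, only an estimate.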
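The deflator step, in turn, boils down to rescaling each nominal value by the ratio of the base year's price index to that year's index. A minimal sketch, with made-up index values and names that are not from the actual project:

```javascript
// The OECD aid figures were already inflation adjusted; the nominal
// remittances series had to be deflated to constant dollars before the
// two could be compared.
function toConstantDollars(nominal, deflator, baseYear) {
  // nominal:  { year: amount in current USD }
  // deflator: { year: price index }, with deflator[baseYear] as the reference
  const base = deflator[baseYear];
  const real = {};
  for (const year of Object.keys(nominal)) {
    real[year] = nominal[year] * (base / deflator[year]);
  }
  return real;
}
```

For example, with a price index of 80 in 2000 and 100 in 2010 (base year 2010), a nominal 100 in 2000 becomes 125 in constant 2010 dollars.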

How we visualised the data

Picking the tools for our visualisation was an easy task. We had positive experiences with D3.js in the past and it provided us with all the functionality and flexibility we wanted (if you want more about how we used D3.js to visualise the data, you should ask our programming wizard Ilya).
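At the heart of a D3-style timeline chart is nothing more than a pair of linear scales mapping data space (years, dollars) into pixel space. The sketch below shows that mapping in plain JavaScript; the domains and pixel ranges are illustrative, not the production values.

```javascript
// A linear scale maps a value v from the data domain [d0, d1]
// onto the pixel range [r0, r1].
function linearScale([d0, d1], [r0, r1]) {
  return v => r0 + ((v - d0) / (d1 - d0)) * (r1 - r0);
}

const x = linearScale([1970, 2012], [0, 840]); // years -> horizontal pixels
const y = linearScale([0, 500e9], [400, 0]);   // USD -> vertical pixels (inverted, as SVG y grows downwards)
```

In D3 itself, `d3.scaleLinear()` provides exactly this, plus axis ticks and nice rounding on top.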

So while my two colleagues started working on a prototype and the design, I wrote a brief outlining what our coverage around the data visualisation should look like. I wanted to make this an example of how we understand data journalism: as a process, not as a product. So while data was at the core of our coverage, the end products needed to be diverse. We had, obviously, the big visualisation. We wanted it to work as a standalone element, so we decided to add an introduction that highlights some of the most interesting findings and guides users into the interactive. We then needed a second visualisation to highlight Switzerland as a typical country from which migrants send money back home. We also needed text and images to show the people behind the numbers; two journalists from our staff joined us after the hackathon to do most of that reporting. Finally, I set us a deadline: we wanted the piece published by the end of April. This proved to be a good idea, although we slightly missed the deadline. You don't want to lose the drive of a hackathon.

By the end of the hackathon, we had a working prototype, an almost final design and a clear idea of what to do next. After that, it was mainly a matter of execution and attention to detail. The whole project went surprisingly smoothly, but, of course, you'll always find yourself killing bugs and working on final touches well into the night before publication...

Summing up, this is what it took:

  • Seven weeks from zero to publication.
  • Five people: one journalist/project manager, one programmer, one designer, two journalists.
  • Roughly 35 person-days.


Screenshot of migrants’ remittances to India, TagesWoche

Eight lessons for running a big data project with a small team

Finally, these are our main takeaways and our advice to others who might want to work on similar projects.

  1. Kick off the project in a hackathon-like setting. It's great if you can start your project in a bigger hackathon where you get feedback from others and feel the buzz of lots of people building things. But you can also start your own hackathon. What is important is total focus, and to not be distracted by anything else. After two or three days, a lot of work will still be ahead of you, but you're deep into the project and you see where it's going. And you have something to show to others to get them excited.
  2. Appoint a project manager, ideally a journalist with advanced technical know-how - in our case, a wannabe-coder. It proved very valuable and efficient to have someone (me) take care of management and communication so that everyone else could focus on their work.
  3. Write a project brief which details what you plan to do and what each step is useful for. It helps structure the project and identify challenges and opportunities, and it serves as a briefing once you need to bring additional people on board.
  4. Don't just use datasets, talk to the data providers. OECD and the World Bank helped us spot and explain irregularities in the data and confirmed to us that we were crunching the data the right way (which is especially helpful when the results surprise you).
  5. Let numbers tell their story, but find the people behind the numbers to tell theirs. A visualisation is not the only way to present data journalism.
  6. Build the application with further development in mind, so that others can plug in additional data and tell their own stories with it. And, of course, so that you can build on it yourself.
  7. Give yourself a deadline, but take your time. In a project as complex as this one, turbulence is to be expected. We could have made the original deadline, but at the expense of getting it right. Done is often better than perfect, but not if you've invested that much time (and the deadline is not a matter of life and death).
  8. If in doubt, just try it.