Running a Prototyping Sprint with BigDataOcean

The BigDataOcean project had already completed its kick-off meeting, where major challenges for the project emerged, and the consortium had to prioritise its actions and focus on what is meaningful for the pilots. BigDataOcean runs as an Innovation Action in Horizon 2020, and should therefore focus on releasing results that are close to a product. For that reason, at NTUA, we decided to test a promising and emerging prototyping methodology called “Sprint”, introduced by Google Ventures. The idea behind the methodology is to develop and test a prototype in five days.

Of course, the limitations imposed by an EU-funded project, with partners all around Europe, did not allow us to “go big” from day one and include every partner in the process. For that reason, we decided to “go lean” and test it at a smaller scale, measure the positive outcomes and the difficulties, and reconsider larger-scale prototyping sessions in the near future. In this blog post, we describe the process we ran, the outcomes, and the lessons learned through our prototyping experience.

To give some background: only members of the DSS Lab team from NTUA participated in the process, meaning that everyone had prior knowledge of the project and some initial work had already been done individually. Seven people participated in total, all researchers on the team; nevertheless, diversity came from the different perceptions and backgrounds of the team members, covering both business and technical expertise. The team did not book a full week; instead, a single day was booked initially to test the methodology. Because of time limitations in meeting the project deadlines, it was decided to compress two days into one, if possible.

Day 1

The team came together on Day 1 to define the project’s high-level goals and the sprint questions. The team followed the schedule, but because of its previous experience it ran faster than expected. By the end of “Day 1” we had managed to design the customer map (see picture below) and to describe the “Long Term Goal” and the “Sprint Questions”, which are:


Sprint Questions

  • Will corporations be willing to share their Business Data?
  • Will authorities collaborate to make available their Data?
  • Who will curate the data?
  • How will the value of the datasets be measured?
  • Will the service be fast enough to cover users’ needs?
  • Will the users be able to find homogeneous data on what they look for?
  • Why will users keep coming back after the first time (return visits)?
  • Will users trust us?

The team then went through the “experts” session, reviewing all the pilot cases that had been presented in the kick-off meeting and described in the Description of Action. The team then initiated the “How Might We” (HMW) questions, refined the map, consolidated the stakeholders into fewer groups, and chose the most important question (i.e. “How will the value of the datasets be measured?”) to run through the prototyping phase, as well as the step to focus on (i.e. “View Resources Preview”) — see the red areas in the picture below.

The questions and the map that the team came up with

As the team had already prepared a relevant State-of-the-Art analysis, it was easy to go through existing solutions, with each of us presenting what we liked from the different solutions (see image below).

Topics the team extracted from relevant tools in the area

The team was then able to allocate who would design what during the ideation phase, where different designs would emerge. After allocating members to the different steps, it was agreed that the design round would run “offline” before the next meeting, which would move on to prototyping. The next meeting was set for the following week, and the team left the room tired but content with the outcomes of the first-day meeting.

Day 2

The second day started with the “Museum” session, where every solution was placed on the wall, and each of us commented on them and highlighted the most promising ideas (see picture below). Then came the “speed critique”, where the “facilitator” described what each design presented and tried to answer the questions from the post-its. Each “creator” then stepped up to fill in any remaining gaps, and the team voted on the most promising ideas. The “decider” selected the three most promising “screens” to implement in the prototyping phase. Those screens were:

a) A preview screen for the data with multiple tabs
b) A report-synthesis screen driven by tables of raw data
c) A report-synthesis screen enabled by a code library for running analyses

There were various other ideas and concepts that our team came up with, such as an intuitive landing page with advanced search options, or a “requesting” page. Nevertheless, we set those aside for this phase and will focus on how data preview and reporting can engage users on the platform.

A design voted for by the Decider

The team decided to implement the storyboard and the prototypes offline, and then visit the pilot users to test those ideas. That concluded the experimental phase of using the “Sprint” methodology on a collaborative, EU co-funded project.


The team was tired at the end of the second day, but happy with the outcomes. Typically, our organisation engages its members through meetings; we then choose the individuals who will design the mock-ups, and finally come together to discuss them. This was the first time that every member of our team participated actively throughout the whole process, and ideas we had never imagined emerged through this prototyping methodology. The input for the mock-ups is valuable, but it still needs to be communicated to the partners before reaching a final agreement.

Of course, there were many difficulties. It was hard for the team not to use its devices, and during the breaks we kept working, answering emails and even debugging projects. The team was reluctant at the beginning of each session, but by the end it was impressed by the outcomes and the usefulness of each one. The experts (i.e. the pilots) could not be present at the meeting, as they are also the users of the platform who will interact with the prototype; still, we managed to play a role game, going through their presentations to break down their motives and needs. What the team really disliked initially were the time limitations and running “brainstorming” meetings against the clock; surprisingly, the final outcomes were better than expected, and the team was tired (especially mentally) but excited.

A picture of our team in front of the “museum” wall, after the voting session

It has to be noted that the prototyping phase has not yet been completed, but more people are now aware of the advantages of this process, and for the upcoming challenge it will be easier to engage seven people in a full-week “meeting”. Adapting the methodology to the partners of an EU co-funded project is an even greater challenge, with multiple obstacles to overcome (e.g. operational, organisational, geographical). However, the first outcomes are promising, the team is positive about the upcoming phases of the project, and we wanted to share our experience with you in order to help change the state of mind around research and collaborative projects in Europe.

We will share with you the outcomes of our Sprint when they are available!
