Developing the World Service archive prototype
The timespan for developing the prototype was very short. We wanted to demonstrate it at the IBC 2012 convention in early September, where our section had been offered a booth in the Future Zone, which gave us two months.
Quite early on, we realised that to get to something viable in such a short time, we needed to involve a small user community from the start. To do so, we put a first version of the prototype online very quickly (mid-July), pointed a few hundred users from the Global Minds listener panel at it, and started gathering feedback. This feedback helped us understand user needs more precisely and prioritise features of the prototype. We will describe our development process during these two months in two blog posts: this one focuses on the engineering work and the next one on the user experience of the prototype.
We also decided quite early on to work on top of a triple store (namely, the simple and fast 4store). Triple stores are a good match for the problem we were trying to solve: prototyping quickly on top of data extracted from a wide range of sources (automated tagging tools working from audio or text, contributor identification tools, image data, the original World Service database, etc.). All those tools can simply push text, RDF serialised as Turtle, to the store, without any assumption about how that data is going to be used. Custom feeds can then be generated from the store using SPARQL queries executed by the prototype. We also use Apache Solr for our search, indexing the textual data available in the triple store. All the audio (around 70,000 programmes) is served from Amazon S3.
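For example, a custom feed of programmes sharing a given tag can be expressed as a single SPARQL query. The sketch below runs such a query from Ruby with the sparql-client gem (described in the next paragraph); the endpoint URL, the dc:subject/dc:title predicates and the example tag URI are illustrative assumptions, not necessarily the vocabulary the prototype uses.

```ruby
require 'sparql/client'

# Hypothetical 4store SPARQL endpoint; the URL is an assumption.
ENDPOINT = SPARQL::Client.new('http://localhost:8080/sparql/')

# Return the URIs and titles of programmes tagged with a given
# resource. The predicates used here are illustrative assumptions.
def programmes_tagged_with(tag_uri, limit = 20)
  query = <<-SPARQL
    PREFIX dc: <http://purl.org/dc/terms/>
    SELECT ?programme ?title WHERE {
      ?programme dc:subject <#{tag_uri}> ;
                 dc:title ?title .
    }
    LIMIT #{limit}
  SPARQL
  ENDPOINT.query(query).map { |s| [s[:programme].to_s, s[:title].to_s] }
end

programmes_tagged_with('http://dbpedia.org/resource/Nelson_Mandela')
```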
On top of Solr and the triple store, we built a Ruby on Rails 3 application using the sparql-client gem (to which we contributed support for some new SPARQL 1.1 features) and a simple library called easy_sparql, which inflates model objects from the results of SPARQL queries. To run integration tests we use the lightweight RedStore triple store, for which we built Ubuntu packages in our PPA. We store all the user contributions (e.g. new tags, validations and invalidations of tags) in a relational database; each contribution points to the URI of an item described within the triple store. We also used CoffeeScript to write our JavaScript, dalli for storing expensive computations in memcache, devise and cancan for managing users, authentication and authorisation, rsolr for talking to Solr, choices for configuration management, mina, Jenkins and Foreman for continuous integration and deployment, and aws-s3 for accessing our audio content on Amazon S3. When users make edits in the prototype, a Resque job re-indexes the corresponding data in Solr, as sketched below. We also wrote two new gems, onthisday and inthenews, which parse content from Wikipedia.
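To give a flavour of that reindexing step, a Resque job along the following lines would do the work. This is a minimal sketch only: the queue name, Solr URL, document fields and the way fresh data reaches the job are assumptions, not the prototype's actual code.

```ruby
require 'resque'
require 'rsolr'

# Background job that re-indexes one programme in Solr after a user
# edit. Queue name, Solr URL and field names are assumptions.
class ReindexProgramme
  @queue = :solr_indexing

  def self.perform(programme_uri, title, tags)
    solr = RSolr.connect(url: 'http://localhost:8983/solr')
    # Overwrite the Solr document for this programme with the fresh
    # data the enqueuing code read from the triple store.
    solr.add(id: programme_uri, title: title, tags: tags)
    solr.commit
  end
end

# Enqueued whenever a user adds, validates or invalidates a tag, e.g.:
# Resque.enqueue(ReindexProgramme, uri, title, tags)
```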

Where possible, we tried to ensure that the audience-facing site worked across a range of modern devices. To help with this, we used Twitter's open-source Bootstrap framework to provide some baseline user interface (UI) elements, although the design required us to make adjustments to fit the BBC's Global Experience Language (GEL).
We followed the Scalable and Modular Architecture for CSS (SMACSS) approach, organising our CSS into base styles, layouts and reusable modules. The LESS CSS preprocessor helped here, letting us share and reuse common values such as theme colours, along with helper functions.
The audio player went through several iterations, starting with the audio.js library. Audio.js is simple and easy to use, but we found it didn't give us the control we required, so we implemented a custom UI on top of the very flexible SoundManager 2 library. The new UI allows the media player to scale dynamically to the width of the viewport, something we couldn't find in many existing players.
The image picker and homepage carousel (in earlier versions) used the Swipe 2 library, which implements a lightweight carousel using CSS3 animations and supports touch gestures, improving the experience on tablet devices.
All our components fire events that are listened to by the page controllers, allowing us to easily collate user actions in our analytics system. This gives us feedback on how users interact with the site, so that we can improve the user experience, develop new features and build richer metadata.