Whose Product Is It, Actually? at ProductCamp London

A crucial part of my product management work at PeerIndex has been coordinating with the many stakeholders in our product. I’ve needed to keep lots of people in the loop with various levels of detail and input: having one-to-one chats, presenting to the whole company, putting roadmaps and wireframes up on the walls for people to view, and inviting people to add comments and ideas on HipChat and Yammer.

I keep hearing other product managers at ProductTank saying they find this one of the hardest parts of the job. I remember one presenter being asked what they’d have done differently on a project, and answering that although they’d started out thinking they were dedicating too much time to stakeholders, on reflection they’d have dedicated twice as much.

Working with stakeholders seems to be a very under-discussed topic given how important it is. So at the ProductCamp Unconference last weekend I ran a session on it, and about fifteen people came along.

Whose product is it, actually?

I opened by saying that while stakeholder relationships could bring some of the biggest complications to products, strong relationships unlock tremendous productivity, opportunities and insights. For most of an hour we shared ideas, strategies and war stories of stakeholder management. We talked confidentially so I won’t share details here, but I enjoyed speaking frankly with a wide group of product managers. And I was delighted to be awarded the best session prize at the end of the day – thanks everyone!

Best session prize

Responsive They Work For You hack

Heard of TheyWorkForYou? It’s a political web site that lets you look up MPs and learn about their voting record, interests, expenses and more.

When I asked this at Rewired State’s National Hack The Government day, most of the room raised their hands. It’s hugely popular and has a remarkable history. Built in 2004 at an NTK conference, it’s won a New Statesman New Media Award, had development supported by the Department for Constitutional Affairs, been referenced in the House of Commons and House of Lords, appeared on Channel 4, been replicated around the world and was called “the most amazing, subversive piece of political webware I’ve ever seen” by Cory Doctorow.


Wow. All this and remaining relevant for ten years on the internet is very impressive, but unfortunately it doesn’t look so good on a mobile in 2013.

TheyWorkForYou in Mobile Safari

Much as I love diving in and making quick, fun hacks on hackdays, for Rewired State I set myself the challenge of making something that would have lasting value and remain useful after the event. A responsive front end for TheyWorkForYou seemed a great project. My first idea was to build one and make a pull request to the GitHub repo.

Heritage PHP

Ah. Turns out TheyWorkForYou is written in what we might kindly describe as heritage PHP, which makes it very hard to adjust the front end.

But TheyWorkForYou has an API. If the data’s there, I can read it and build a new front end on top, right? Ah. Turns out the API also has a lot of… heritage.

Heritage JSON

I realised I couldn’t work through much of this data and code in a day, so I looked for the single most interesting feature of the site to convert to mobile. I think it’s the voting record: it’s intriguing to look up how MPs vote and see the issues they actually support. Did you know Nick Clegg was very strongly for tuition fees? Ok… maybe not the best example.

Nick Clegg's votes

Well, having worked out a clear and achievable goal, I dived in and started hacking. The result’s live at is.gd/twfym. It looks like this:

They Work For You Mobile

There’s also a spin-off hack. I had to parse the TheyWorkForYou API data for my own use, so I made a public endpoint to my data: /mp_api/<member_id>/

There’s only one call; it returns information about an MP and their voting records. Here’s how my JSON response for Nick Clegg looks:


My source code’s on GitHub at https://github.com/ollieglass/theyworkforyou-responsive. I’m pleased to have turned it around so quickly, but it’s very rough. There are some especially hacky things in there with the images and MP lookup, we might kindly describe it as stream-of-consciousness Python. If I’m lucky, perhaps it’ll get upgraded to heritage status one day.

Everyone presented their hacks at the end of the day, and I was delighted to win a prize from MySociety. It was an honour to give something back and have Tom Steinberg recognise my contribution. MySociety’s projects were a huge influence and inspiration to me when I first saw them, back in the early 2000s when I was reading NTK and writing Perl scripts for a living.


Thanks to everyone I met at the event, Amy Whitney for giving me a hand with the design, and the teams from Rewired State, MySociety, the Taxpayers’ Alliance and Government Digital Service for a great day.

Update Alex Pestell has built a Chrome extension that uses some of my code.

Nice one Alex!

PeerIndex product launch

In August 2012 I joined PeerIndex as growth hacker. I put a measurement framework in place (based on Dave McClure’s Pirate Metrics), recorded and analysed the conversion funnels across the peerindex.com site, asked users about their understanding of PeerIndex and what they wanted from us, and ran tests to discover how our audience would respond to different messages.

If you ever met Dee on the homepage, you saw one of those tests!

Original peerindex.com homepage

Dee, star of my last homepage A/B test

It became clear pretty quickly that social media analytics were highly valuable, but for a limited audience. I started experimenting with some bold new features, and in December 2012 I was asked to lead a team of six to ideate, re-design and build the consumer product. Fast forward to February 26th 2013: now working as PeerIndex’s product manager, I’m proud to release the new peerindex.com.




The consumer proposition is simple and has broad appeal. Your retweets and likes earn you influence, which gives you discounts on products. It’s had good responses in tests, and over the next few iterations I hope to tighten the messaging, functions and whole product vision even further. I hope you enjoy it!

A few people noticed and wrote some lovely things about us.

“Clearly social influence plays a key role in purchases but social influence has yet to have an effect on product discounts… until now. Now with launch of PeerIndex’s new site, this promises product discounted based on your social influence” eConsultancy

“From today, people whose views are respected by their online friends will be offered discounts of up to 50 per cent on hundreds of products in the hope that they might mention them when they next log in.” The Telegraph

“PeerIndex, the company that measures brand influence on social media, has launched a new service that allows brands to reward consumers with free products and exclusive discounts for their social media contributions, with the aim of generating word-of-mouth on a large scale” The Drum

“Social influence company PeerIndex is launching a new service designed to drive word of mouth marketing on social networks in exchange for perks and discounts” Marketing magazine

“… a new service from PeerIndex means Twitter users can bag a range of freebies and discounts.” Love Money

There’s also been some great buzz on Twitter…





It’s great to see the new product being so warmly received. In a startup there’s always a thousand and one things you could do – I think we did the right one, and I’m looking forward to pressing on with designing, building and iterating through the rest of our ideas!

It’s also been a pleasure and privilege to work with such smart and varied people. Thanks so much to the whole product team:

  • Hugh Hopkins (commercial) sourced over 350 items in two months, stellar work!
  • Natalie Rooke (UI/UX design) made it look so marvellous at so many screen sizes.
  • Mike Fox (lead dev) made it work, made a million and one tweaks to make it fast, responsive and quick for us all to develop.
  • Sid Karunaratne (back-end dev) also made it work, did some very smart stuff with Hugh to parse and import the product data (we’ll tell you war stories over a pint).

p.s. my personal favourite tweet – Linda Sandvik totally gets it 🙂


Akin To – explore music through adjectives

There are a lot of music recommendation systems out there that give pretty obvious and predictable suggestions. They suggest the Beatles if you like the Stones, and Aphex Twin if you’re a Squarepusher fan. I find them pretty dull and uninspiring.

Some of the best music recommendations I’ve had have been very wild jumps, to music with similar qualities in very different genres. A friend once recommended John Coltrane when I said I liked Squarepusher. Years later another guy suggested I listen to Steve Reich when I said I liked Coltrane. I wanted to try and capture some of what was going on in their recommendations.

Akin To is my attempt at a more imaginative and literary kind of music discovery, letting you explore and compare music through the adjectives in album reviews. Try searching for “cinematic”, “enigmatic” or “space-age” music and you’ll see albums described with those words. Look up an album or artist you like to see other music that’s described similarly.

I’ll share a few buzzwords and bullet points about the creative and technical challenges of making it:

  • I used Python’s Natural Language Processing Toolkit to detect adjectives in the reviews.
  • The similarity between two albums is based on the number of adjectives their reviews have in common, and how unusual those adjectives are.
  • The web app’s built with Django on the back-end, Bootstrap, SASS and JavaScript for the responsive front-end, all served by Heroku.
  • There’s a graph database underneath it, implemented with MySQL. Mmm, graphs.
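To illustrate the similarity idea, here’s a toy version of that scoring – not Akin To’s actual code, just a sketch in which rare shared adjectives count for more, using a simple inverse-document-frequency-style weight I’ve chosen for the example:

```python
import math
from collections import Counter

def similarity(adjs_a, adjs_b, corpus):
    """Score two albums by shared adjectives, weighting rare ones higher."""
    # How many reviews in the corpus each adjective appears in
    doc_freq = Counter(adj for review in corpus for adj in set(review))
    shared = set(adjs_a) & set(adjs_b)
    # A rare adjective (low document frequency) contributes more
    return sum(math.log(len(corpus) / (doc_freq[adj] or 1)) for adj in shared)

# Tiny made-up corpus: each review is the set of adjectives found in it
corpus = [
    {'cinematic', 'lush', 'warm'},
    {'cinematic', 'enigmatic'},
    {'warm', 'lush'},
]
score = similarity({'cinematic', 'enigmatic'}, {'cinematic', 'lush'}, corpus)
```

Sharing the common “cinematic” gives a small score; sharing the rarer “enigmatic” would give a bigger one.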

It’s an experimental project and I’d love to hear what you think about it, please leave a comment! And if you’d like to show your thanks and support further development, why not sponsor your favourite adjective?


Huge thanks to Pitchfork, whose reviews fuel Akin To’s engine. They have all the best adjectives, and their writing is perfect material for a project like this.

Thanks to everyone who helped me make it. Jamie Matthews inspired me to learn Django a few years ago and I’ve never looked back, and Tomek Kopczuk helped me with a particularly tricky database issue. Clare Sutcliffe did all of the beautiful mobile and desktop UX and design that you see on the site. The concept, algorithms, full stack development and everything else, especially the mistakes, are my own.

Topsy Tracker – see who’s tweeting about your blog posts

Like to know when people are tweeting about your blog posts? Sure you do! Topsy give these neat reports on who’s sharing your links, like so:

Topsy report for my last post

See how the address is basically my blog post’s address, with topsy.com at the front?


I wondered how easily I could write a script to open browser tabs for my last ten posts. Turns out, very easily:
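It goes something like this – a sketch rather than the exact gist, with the `topsy_url` helper name being mine and the placeholder settings matching the setup section:

```python
import webbrowser

try:
    import xmlrpclib  # Python 2
except ImportError:
    import xmlrpc.client as xmlrpclib  # Python 3

WORDPRESS_BLOG_NAME = 'myblogname'
WORDPRESS_USERNAME = 'my_name'
WORDPRESS_PASSWORD = 'my_password'


def topsy_url(post_url):
    """Turn a blog post URL into its Topsy report URL."""
    # http://ollieglass.com/post -> http://topsy.com/ollieglass.com/post
    address = post_url.split('://', 1)[-1]
    return 'http://topsy.com/' + address


if __name__ == '__main__':
    # Fetch the last ten posts from WordPress over XML-RPC
    server = xmlrpclib.ServerProxy('http://ollieglass.com/xmlrpc.php')
    posts = server.metaWeblog.getRecentPosts(
        WORDPRESS_BLOG_NAME, WORDPRESS_USERNAME, WORDPRESS_PASSWORD, 10)

    # Open each post's Topsy report in the default browser
    for post in posts:
        webbrowser.open(topsy_url(post['link']))
```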


The comments say it all really: the script reads posts from WordPress, changes the URLs to Topsy URLs, and opens them all in the default browser. For me, that means they open in new tabs in Safari. Because those two imports are standard Python libraries, this code should “just work” if you have Python, i.e. it should work straight out of the box on any Mac. Here’s how you can use it.

Get set up with the Topsy Tracker

First download the code above by right-clicking on “view raw” and downloading it as a file. Move the file to your desktop, right-click and open it with TextEdit.

Change ollieglass.com to your blog’s address, and change the WORDPRESS_BLOG_NAME, WORDPRESS_USERNAME, and WORDPRESS_PASSWORD settings to your details. Leave the quotes around them! They should end up looking like this:

server = xmlrpclib.ServerProxy('http://myblogaddress.com/xmlrpc.php')
result = server.metaWeblog.getRecentPosts('myblogname', 'my_name', 'my_password', 10)

Ok, script’s ready to go!

Running the Topsy Tracker

Open Terminal – use Spotlight if you don’t know where your terminal app is!

Best way for a civilian to find Terminal

Type cd Desktop and press enter. Then python topsy_tabs.py, and enter again. Voila.

Thanks, Topsy!

User experience design at PeerIndex – complications with two social logins

I gave a talk at UX Cafe on a user experience challenge we have at PeerIndex. There are two social logins on our homepage, Facebook and Twitter. Some users create an account with one, then come back and log in with the other. This creates a new account, which causes some confusion. My slides tell the story…

[slideshare id=16282894&doc=peerindexsocialloginissue-130131161837-phpapp01]

I got some helpful feedback from Jenny Grinblo, Mike Atherton and Mark Doolan on the panel, and many people in the audience. There was no definitive answer or best practice for this, but there were some great suggestions. I think I’ve remembered most of them! Here’s a summary:

  • Give users an opportunity to link other networks during onboarding. This would prevent the second account from being made in the first place.
  • Incentivise this with a clearer value proposition. PeerIndex could make the benefits of linking several accounts much clearer.
  • Check names and email addresses when accounts are made. If we’ve seen them before, ask users if the other accounts are theirs.
  • Related, we could make it clear users are creating new accounts when they’re signing in for the first time.
  • Prompt users to add their other accounts after onboarding. LinkedIn and others do a great job of softly nagging you to complete your profile. PeerIndex could do a better job of this.

See that grey F? That’s our nag. We’re a bit too soft.

  • We could improve the messaging when connecting an account fails. We need to tell users a PeerIndex account already exists for that social account, and that it might be theirs.
  • Most radically, we might drop the social login altogether. We could invite users to link their social accounts after signing up with PeerIndex. We’d need a very strong and clear value proposition to do this without seriously impacting our signups.
  • Similarly, we could drop one of the logins, and have users link that account after joining.

Everyone had plenty to say about PeerIndex in general as well. Thank you for your thoughts and kind words, we’re working on some new designs which we hope will address many of these points. And I’m glad you’re enjoying the perks. You’re welcome, you earned them!

If you’ve read this far, let me pitch you a job with us 😉 I’d like to find a UX designer / front-end developer to join me on the product team. If you’re interested, have a look at the PeerIndex careers page, and give me a call if you’d like to know more about it.

Update: thanks Mike Fox for remembering some of the points made – glad you were there!

Instagram campaign reporting, part 2

Also looks cool at night

Barely a week goes by without me shopping in Urban Outfitters. Or window-shopping, or thinking about shopping there, or hearing about something they’re doing. I ran through that conversion funnel, and am happily hanging out in the retention / referral / revenue zone. I got to thinking about who else was in there with me, and whether the Instagram data I looked at last time could reveal who the top Urban Outfitters fans are.

The funnel

In this blog post I build on the ideas in my previous Instagram campaign reporting piece, developing the code to capture more data, find the most engaged fans and produce an informative HTML report.

This is a much more technical post than part one. I focus on the architecture needed to get results and how to hack it together quickly, skipping over a lot of material at a fairly high level. If you’re learning how to write hacks like this, try reading each section and writing your own code to get the same results. Then look at my code and see how our approaches compare. If you just want the results, the code is all there for you to run.

Capturing all the Instagram data

In my earlier code I downloaded the Instagram data to a JSON file. It’s a trivial step from here to store it in a Mongo database, which gives us more powerful and flexible tools to analyse it with. If you’re on a Mac, I recommend installing Mongo with Homebrew. It’s quick and neat, and easy to keep up to date.

If you’re not, how did you come to be reading some hipster growth hacker’s blog post about tracking Urban Outfitters on Instagram? Didn’t I throw enough buzzwords and buzz-brands in there to put you off? I thought I was good at audience targeting, but you’ve thrown me. I don’t know what to say to you.

Getting set up

You can get everything you need with these commands:

ruby -e "$(curl -fsSkL raw.github.com/mxcl/homebrew/go)"
brew install mongodb
git clone git@github.com:ollieglass/instagram-campaign-tracker.git
cd instagram-campaign-tracker
pip install -r requirements.txt

They will install Homebrew, install Mongo, clone my GitHub repo and install the Python dependencies, in that order. Now you’re good to go.

Downloading and saving to Mongo

The download_data.py script has two parameters at the top: the campaign hashtag and an Instagram API access token.

CAMPAIGN = "uostyle"

A quick hack to get an API key: visit Instagram’s web API console, change authentication to OAuth 2 and make any request. Copy the access token from the request details in the left hand panel and paste it into the script. Quicker than registering for a key 😉

Get an access token from the Instagram API console

Setting the CAMPAIGN variable will change the hashtag that’s searched for and the Mongo collection the data is saved to, so you can collect data from many campaigns in your database.

The script downloads and stores all of the photo data it can get. You should see output like this as it paginates through the data:

download_data.py output

Yeah, it just errors when it’s finished. It’s a hack, not some enterprise CRM solution!

Finding top users in Instagram data

Now run find_top_fans.py. It will run through all of the photos in the “uostyle” collection, and record a count of each username that features in a like, comment, caption or as a photo creator. This script also has a CAMPAIGN variable you can change to analyse different campaigns.

Every username appearance is counted, and the top ten most frequently occurring are printed to the terminal.
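The counting is a natural fit for `collections.Counter`. Here’s a sketch of the idea (not the exact script – caption mentions are left out for brevity, and the field names follow the shape of Instagram’s media JSON):

```python
from collections import Counter

def count_username_appearances(photos):
    """Tally every username appearing as a photo creator, liker or commenter."""
    counts = Counter()
    for photo in photos:
        counts[photo['user']['username']] += 1
        for liker in photo.get('likes', {}).get('data', []):
            counts[liker['username']] += 1
        for comment in photo.get('comments', {}).get('data', []):
            counts[comment['from']['username']] += 1
    return counts

# Tiny made-up sample in the same shape as the API data
photos = [
    {'user': {'username': 'alice'},
     'likes': {'data': [{'username': 'bob'}, {'username': 'carol'}]},
     'comments': {'data': [{'from': {'username': 'bob'}}]}},
    {'user': {'username': 'bob'},
     'likes': {'data': []},
     'comments': {'data': []}},
]
top_ten = count_username_appearances(photos).most_common(10)
```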

Find top fans - output

This is just a quick script to see what kind of volume we have. Let’s do something a little more sophisticated and count each type of user engagement.

Recording user engagement

The analyse_campaign_users.py script first records the details of every user into a new collection. Then it counts each engagement type – likes, photos, comments, captions and a total – and records them to a stats object, one per user.

The new collection name is the campaign name with “_users” appended, so uostyle’s users are in “uostyle_users”. You can view the contents of this collection in the database. Run mongo to open a database shell, then use instagram to switch to our database, show collections to see each campaign’s collection and its users collection, and db.uostyle_users.find().limit(1) to see a user.

Reporting the most engaged users

A modern web templating enthusiast, surely

Let’s get this data off the command line and into the browser. Here’s a minimal HTML report template that cycles through a dictionary of users, rendering basic information about them, a list of stats and a link to their Instagram profile. It’s written for Jinja2, in the handlebars syntax that all the cool templating libraries use these days. There are some filters in there to sort and capitalise the user data.
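A minimal sketch of such a template, with field names like full_name and stats being my guesses rather than the exact ones:

```html
<h1>Top fans</h1>
{% for username, user in users | dictsort %}
  <h2>
    <a href="http://instagram.com/{{ username }}">{{ user.full_name | capitalize }}</a>
  </h2>
  <ul>
  {% for stat, count in user.stats | dictsort %}
    <li>{{ stat | capitalize }}: {{ count }}</li>
  {% endfor %}
  </ul>
{% endfor %}
```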

Top fans template

All we need now is a script to collect the data from Mongo, feed it to the template and render it – enter produce_top_fans_report.py. This does just that, and UTF-8 encodes the result so it will print to the terminal without errors, letting you pipe the output to a file. Run it with python produce_top_fans_report.py > report.html to create your report. Click through on the image to see one I generated earlier.

Instagram campaign report in glorious HTML

You’ll notice there are still issues with unicode names, but I didn’t want to get into unicode wrangling with Python. Sorry leannelimwalker, maybe in the next version.

A quick trick for hosting flat HTML files is to put them in your public Dropbox folder. Visit the web interface, right-click and Dropbox will give you a public URL for the file.

Serving HTML with Dropbox


So there you go. A bit more work has given me the foundations for a scalable, multi-campaign reporting system. There’s more data in the API to segment and learn from; dates and locations would be good. With a bit more work I could also mine the data for insights like cliques of users and optimal posting times to get comments. It could also be linked to other APIs – I expect most people have the same username on Instagram and Twitter, so you can probably see where my hacker mind is going with that one.

I’d love to hear your thoughts. Please leave a comment, and feel free to use and develop the code.

Instagram campaign reporting

I saw this sticker in my local Urban Outfitters’ changing room for a new Instagram campaign. You take a photo of yourself trying on an outfit and post it to Instagram with the hashtag #uostyle. Urban Outfitters’ @uoeurope account might then share it on their account. There’s so much that makes this a great campaign idea:

  • Urban Outfitters become a bigger part of the online fashion conversation.
  • The stores become a relevant part of that, at a time when offline retail is struggling to compete with online.
  • The hashtag paves a desire path for people who are already sharing changing room photos, and encourages new people to join in.
  • The shared photos generate content for @uoeurope, and build up the audience for it and #uostyle.
  • The chance of being shared both amplifies the user voice and acts as an intermittent reinforcement game mechanic.
  • Qualified prospects are captured from Instagram for Urban Outfitters’ social CRM.
  • The overheads are really low. The hashtag doesn’t need moderation and sharing photos on the @uoeurope account doesn’t take long.

It’s a really smart social campaign and I love the thinking behind it, but I want to know how it’s actually doing. What’s the activity and engagement like? Can we see some performance stats? Sure! Now Instagram have an API out, we can get the data we need to see the activity for ourselves.

In the following sections of this post I’ll show you how to search for this data using Instagram’s API console, what’s in the raw data Instagram returns, and how to collect all the data you need to track a campaign. I’ll finish by looking up the stats of this Urban Outfitters campaign so you can see how it’s working out. I hope this piece helps you better understand the data available from Instagram’s API and assess it as a social campaign platform.

Viewing Instagram hashtag activity with the API console

Here’s a quick guide to getting hashtag data from the API console:

1. Visit Instagram’s API console, click the authentication box, select OAuth 2 and login with your Instagram account details.

2. Click the arrow on the left hand side to bring up the list of API calls, and select tags/{tag-name}/media/recent from the menu.


3. In the Request URL bar, replace {tag-name} with the name of the tag we’re interested in, uostyle.

4. Click Send and the API console will fetch the results for you.

5. When it’s finished working, the response section will show raw Instagram JSON data about the photos featuring the hashtag. Let’s have a look at what this data means.

Instagram data anatomy

Here’s a field by field breakdown of the information in an Instagram photo’s JSON data. The interesting parts for us are:

  • List of hashtags in the photo’s caption
  • Location of the photo – latitude and longitude, and sometimes a location name
  • List of comments on the photo, each with the text of the comment and details about the comment’s author
  • Date and time the photo was created
  • Link to view the photo on the web
  • Count of likes, with details of each user who liked the photo
  • Links to the image in thumbnail (150×150), low resolution (306×306) and standard (612×612) sizes
  • Photo’s caption
  • Details of the user who posted the photo – their username, website, bio, profile picture and full name

Many of these fields also have ids, so we can cross-reference them and link data across users. As well as the photo data, the response contains pagination data so we can get further pages of results, and some metadata to handle rate limits on the API.

Measuring and reporting an Instagram campaign

So that’s the contents of a hashtag search. How can we go from this to a full campaign report? Basically we need to:

1. Download all the data we can get on the hashtag by paginating through the results
2. Get the stats we’re interested in from the data, like the number of photos shared and the number of likes
3. Report this in a digestible format, like a table or chart

Here’s some Python code that does just that. You’ll need a basic knowledge of git and Python to use them, but they’re very straightforward. The first script downloads Instagram data, the second parses it and outputs some interesting stats.
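To give a flavour of the parsing step, here’s a sketch of the kind of stats pass the second script makes over the downloaded data – the field names follow Instagram’s media JSON, but this isn’t the exact code from the repo:

```python
def campaign_stats(photos):
    """Summarise a list of Instagram media dicts into campaign-level stats."""
    users = {p['user']['username'] for p in photos}
    likers = {liker['username']
              for p in photos
              for liker in p.get('likes', {}).get('data', [])}
    likes = sum(p.get('likes', {}).get('count', 0) for p in photos)
    return {
        'photos': len(photos),
        'users': len(users),
        'likers': len(likers),
        'likes': likes,
    }

# Tiny made-up sample in the same shape as the API data
photos = [
    {'user': {'username': 'amy'},
     'likes': {'count': 2, 'data': [{'username': 'ben'}, {'username': 'cat'}]}},
    {'user': {'username': 'ben'},
     'likes': {'count': 1, 'data': [{'username': 'cat'}]}},
]
stats = campaign_stats(photos)
```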

They’re very quick and dirty; the biggest hack in there is that I’m only downloading a few pages of data and manually stopping the pagination rather than waiting until the end is reached. But I wanted to spend five minutes on this for a quick snapshot of campaign activity, not produce a comprehensive report. Totally caveat emptor if you use them yourself – feel free to fork and develop them further.

How’s Urban Outfitters’ campaign doing then? Here’s what I found:

Instagram activity between 2012-01-24 22:53:30 and 2012-11-04 21:09:43
Campaign duration: 284 days, 22:16:13
Photos shared: 442
Users: 442
Likers: 2633
Likes: 10915

That’s about two photos and ten likes per day. Now I have no idea what the scope of this campaign is – perhaps it’s just running in the one store I visited in Shoreditch – so I’m not really in a position to comment on how well it’s doing. But close to eleven thousand likes looks good, that’s a pretty engaged audience they have there. Nice work!

Update I wrote a second piece that builds on this work. The code from this post is now in an “old_version” folder in the repository.

Data Visualization: How can I make a visualization of a startup’s refer-a-friend program?

I just answered this question over on Quora (Ollie Glass’s answer to: Data Visualization: How can I make a visualization of a startup’s refer-a-friend program?). If you can write basic SQL and use a scripting language like Python or Ruby, it’s easy to throw a quick and dirty network visualisation together.

I’m assuming you have a database table of your users, with each user having an id and a referrer id. Create a script that loops over every row in the database with a referrer id, printing the user id and referrer id separated by two hyphens. Put this in a text file and add “graph referrers {” to the top and “}” to the bottom.

Here’s Python code that does exactly that, just change the database details to work with your setup.
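A sketch of that script – I’ve used sqlite3 here as a stand-in for whatever database driver you have, and assumed a users table with id and referrer_id columns:

```python
import sqlite3

def referrer_graph(rows):
    """Render (user_id, referrer_id) pairs as a Graphviz dot graph."""
    lines = ['graph referrers {']
    for user_id, referrer_id in rows:
        lines.append('%s -- %s' % (user_id, referrer_id))
    lines.append('}')
    return '\n'.join(lines)

if __name__ == '__main__':
    # Swap this connection for MySQLdb, psycopg2, etc. as appropriate
    conn = sqlite3.connect('app.db')
    rows = conn.execute(
        'SELECT id, referrer_id FROM users WHERE referrer_id IS NOT NULL')
    print(referrer_graph(rows))
```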

Run this and pipe the output to a file: on the command line, use python vis.py > vis.dot. Your .dot file should look like this:

graph referrers {
a -- b
b -- c
e -- b
… etc.
}

Open this in Graphviz and voila, you have a basic network visualization like this:

Network visualisation with Graphviz

If your file is so large that it crashes Graphviz, or you want to do something a bit prettier, try Gephi. It’ll open the file you just made and let you create visualisations that look more like this:

Network visualisation with Gephi

Here’s their step-by-step tutorial to get you going; it’s pretty straightforward.