Function Space was a social learning network for science that ran from 2013 to 2016. As a designer, it was the biggest product I've ever worked on in terms of scale, features and multi-device experiences. I've already published a case study on how we redesigned the platform from its first version to a more personalised and adaptive second version. This case study focuses on some of the innovative solutions we shipped on Function Space.
The Problem Space
One of the major challenges was that our users spanned a wide spectrum, from high school students to doctoral researchers and working professionals. Even though we had introduced better personalisation, users still felt lost in the ocean of content and the numerous features available on the platform. This resulted in an uneven distribution of users across different sections of the platform: some sections saw high engagement while others suffered huge bounce rates or, in some cases, outright abandonment. We realised that the platform felt disconnected: it had information architecture issues, some features were used only by advanced users, and a chunk of content remained unexplored.
Content discoverability issues
Our analytics showed that a significant portion of the content on the platform remained undiscovered. During onboarding, users could personalise the platform by subject and skill level. While this made content more relevant, it hurt discoverability: the feed was chronological, and not all subjects received equal participation from users. We were also using a third-party search engine that had indexing issues and fetched only a limited number of results, which made for a sub-optimal search experience.
Information architecture issues
We conducted qualitative interviews to understand the issues our users faced. Most of them were unable to find related discussions and news articles, which resulted in higher bounce rates and dissatisfaction with the platform. This was, at its core, a taxonomy problem. For the uninitiated, taxonomy deals with organising and classifying information and features based on the similarities and differences between the concepts behind them.
Being a science platform, it was important to provide equation-writing capabilities so that users could express their opinions with scientific rigour. However, writing equations digitally requires learning LaTeX, which has a specific syntax and numerous mathematical and scientific symbols. This was intimidating for new users.
Through user interviews and behavioural analytics, we realised that 22% of our users were not engaging with discussions that had more than 50 comments. The common feedback was that they lost interest midway and abandoned the discussion without reading all the comments. Some users also complained about less interesting content on the platform.
The Solution Space
Taxonomy with card sorting and auto-tagging
One of the first tasks we accomplished was creating proper taxonomies for all the content on the platform. We ran three closed card sorting exercises with different groups to identify categories for discussions. With this learning, we used natural language processing to auto-tag all the discussions on the platform.
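To make the idea concrete, here is a minimal sketch of auto-tagging against a fixed taxonomy. The categories and keywords below are hypothetical stand-ins for the ones we derived from card sorting, and the real system used a trained NLP model rather than simple keyword lookup:

```python
# Hypothetical taxonomy: category -> keywords. The production system used
# card-sorting-derived categories and a trained classifier instead.
TAXONOMY = {
    "quantum-mechanics": {"qubit", "superposition", "entanglement", "wavefunction"},
    "astrophysics": {"supernova", "galaxy", "redshift", "pulsar"},
}

def auto_tag(text: str) -> list[str]:
    """Return taxonomy tags whose keywords appear in the text."""
    words = set(text.lower().split())
    return sorted(tag for tag, keywords in TAXONOMY.items() if words & keywords)

print(auto_tag("Observed a supernova with strong redshift"))  # ['astrophysics']
```

Even this naive version captures the key design decision: every discussion gets machine-assigned tags from a single shared taxonomy, so related content can be linked without manual curation.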
We introduced an algorithmic feed for displaying all relevant content on the social network, along with an option to toggle between a preferred-subjects feed and a peer feed. The preferred-subjects feed showed only content tagged with the topics a user selected during onboarding, while the peer feed showed only comments and discussions posted by people in the user's network.
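The two feed modes boil down to two different filters over the same post stream. A minimal sketch, with a hypothetical post schema (the real feed also applied ranking on top of filtering):

```python
def build_feed(posts, mode, preferred_subjects, peer_ids):
    """Filter the global post stream for one of the two feed modes.

    posts: list of dicts with 'author_id' and 'tags' (hypothetical schema).
    """
    if mode == "subjects":
        # Keep posts sharing at least one tag with the user's chosen subjects.
        return [p for p in posts if set(p["tags"]) & preferred_subjects]
    if mode == "peers":
        # Keep posts authored by people in the user's network.
        return [p for p in posts if p["author_id"] in peer_ids]
    raise ValueError(f"unknown feed mode: {mode}")

posts = [
    {"author_id": 1, "tags": ["astrophysics"]},
    {"author_id": 2, "tags": ["topology"]},
]
print(len(build_feed(posts, "subjects", {"astrophysics"}, set())))  # 1
print(len(build_feed(posts, "peers", set(), {2})))                  # 1
```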
Smart search with auto-scoping
Being a content based social network, search was an important feature for us and we dedicated a lot of time and effort to it.
We used Sphinx, an open source search server, to build our engine and introduced auto-scoping and auto-complete, much like macOS Spotlight, to provide a more efficient and robust search experience.
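Auto-scoping means the search box works out which section of the platform a query belongs to before hitting the engine. The sketch below shows one simple form of this, recognising an explicit `scope:` prefix; the scope names are hypothetical, and the real implementation sat in front of Sphinx rather than replacing it:

```python
# Hypothetical scopes; the real platform scoped queries to sections
# such as discussions, articles, and people.
SCOPES = {"discussions", "articles", "people"}

def auto_scope(query: str):
    """Split an explicit 'scope: terms' prefix, defaulting to all scopes."""
    head, sep, rest = query.partition(":")
    if sep and head.strip().lower() in SCOPES:
        return head.strip().lower(), rest.strip()
    return "all", query.strip()

print(auto_scope("people: marie curie"))  # ('people', 'marie curie')
print(auto_scope("black holes"))          # ('all', 'black holes')
```

Scoping before searching keeps the result list short and relevant, which was exactly the problem with our previous third-party engine.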
The point here is that when designing search, you have to consider the technology, the technical expertise of your team, and the cost. It's not always necessary to build your own engine, but if search is critical to your business, it's worth investing time and money in it. In fact, it's always helpful to build a service blueprint when you're designing search.
Equation caller and visual equation editor
Writing equations was a critical function on the platform, so we decided to build enough flexibility into this feature that users with different skill levels could easily write equations in discussions.
Advanced users could always write LaTeX directly. For intermediate users, we introduced a visual equation editor to build equations without the hassle of remembering LaTeX syntax. And for beginners, we made a custom equation caller: you just had to start typing the name of an equation and the system automatically wrote the LaTeX for it. The equation database was crowdsourced, so users could add new equations to it.
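At its core, the equation caller is a prefix lookup into a name-to-LaTeX table. A minimal sketch, with a few illustrative entries standing in for the crowdsourced database:

```python
# Illustrative entries; the real database was crowdsourced and much larger.
EQUATIONS = {
    "schrodinger equation": r"i\hbar\frac{\partial\psi}{\partial t} = \hat{H}\psi",
    "mass-energy equivalence": r"E = mc^2",
    "newton's second law": r"F = ma",
}

def suggest(prefix: str) -> list[tuple[str, str]]:
    """Return (name, latex) pairs whose name starts with the typed prefix."""
    p = prefix.lower().strip()
    return [(name, tex) for name, tex in sorted(EQUATIONS.items())
            if name.startswith(p)]

print(suggest("mass"))  # [('mass-energy equivalence', 'E = mc^2')]
```

Because the lookup is by human-readable name, a beginner never has to see the LaTeX until it is already written for them.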
Vector discussions
Online discussions are “scalar” in nature: they have magnitude in terms of the number of comments, but they don’t give you a sense of where the discussion is going unless you analyse every comment. That’s precisely what we decided to do at Function Space. We used natural language processing to analyse all the comments and auto-tag them by subject matter; the max entropy classifier in NLTK was used to categorise comments against a set of pre-defined tags.
All the comments were visualised in a time-series graph based on their tags. This gave a true picture of where a discussion was heading in terms of subject matter. So we had magnitude as well as direction, thereby leading to “vector” discussions.
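The time-series graph is, underneath, just tag counts per time bucket. A minimal sketch of that aggregation, using a hypothetical `(timestamp, tag)` comment schema in place of the classifier's output:

```python
from collections import Counter
from datetime import datetime

def tag_timeline(comments, bucket="%Y-%m-%d"):
    """Count tags per time bucket; this is the data behind the graph.

    comments: list of (timestamp, tag) pairs (hypothetical schema,
    standing in for classifier-tagged comments).
    """
    timeline = {}
    for ts, tag in comments:
        key = ts.strftime(bucket)
        timeline.setdefault(key, Counter())[tag] += 1
    return timeline

comments = [
    (datetime(2015, 3, 1, 10), "thermodynamics"),
    (datetime(2015, 3, 1, 12), "statistical-mechanics"),
    (datetime(2015, 3, 2, 9), "statistical-mechanics"),
]
timeline = tag_timeline(comments)
print(timeline["2015-03-02"]["statistical-mechanics"])  # 1
```

Plotting each tag's counts over the buckets shows the drift of a discussion from one subject to another, which is the “direction” that turns a scalar comment count into a vector.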
The Results
Vector discussions and the better feed led to a good increase in our engagement numbers: average session duration increased by 9 minutes, the bounce rate decreased by 15%, and our overall satisfaction scores increased by 3 points. Smart search led to more search queries with fewer clicks to find relevant content. We were now also able to gauge content requirements for a particular topic by analysing search queries and tag clicks. We also achieved good results in our usability and task analysis tests.
Overall, these were some great “interaction wins” for the platform.