
Dissecting Linguapon Part 1

Posted on 1/22/2025

10 minute read

Hello Linguaponearners, I hope 2025 brings huge progress on your language learning goals!


In this series of blogs, I'll dive into the details of Linguapon's learning modules. For each module, I'll talk about the idea and methodology behind it, some technical details, the vision, and how it links back to the four challenges that guide Linguapon's direction, as described in the first blog.


There are four key modules in Linguapon: Explore, (Translation) Bank, Test Me and Community. I'll start with Test Me.


Blog 3 Image 1



The Idea


As its name suggests, the idea is simply to extract a list of sentences and vocabulary items from a source and present a quiz system for learners to test their knowledge against. It is a low-stakes quiz system where learners challenge themselves in a target language against a set of vocabulary items. Questions answered incorrectly slightly affect the mechanical outcome of the quiz, but should not deter the learner from continuing. If the learner answers a set number of questions correctly (not necessarily in a row), a reward is presented and saved against their account.


To keep the quizzes non-repetitive, there is a variety of question types that test a range of skills. Question types do not necessarily equal each other in difficulty - for instance, one question type may ask the user to translate a word, whilst another may ask the user to comprehend a few sentences and identify the correct usage of the word within them. There could also be mechanical difficulty variations of the same question type.




The Methodology


The learner is presented with a form for customising the test session before starting one.


Blog 3 Image 2

(At the top of the form, you can see our friend Morby, who is more than happy to see you wanting to learn)


A source may be selected - that is, where the list of vocabulary items that the learner is tested on comes from. The 'Translation Bank' source integrates directly with the learner's personal Translation Bank - basically a storage unit for all of their saved vocabulary items, which they can additionally categorise into customised lists, or 'collections' in Linguapon jargon. Alternatively, the learner may simply select another source, such as a pre-made or user-submitted collection. Picking the source cascades into a choice of target language and, optionally, the collections that are available. A simple example of how this flow resolves: the learner selects a collection from their Translation Bank with a target language of Portuguese. The test session that is created would then generate questions based specifically on the vocabulary items in that collection.
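To make the cascade concrete, here is a minimal sketch of how the source, language and collection choices could resolve into an item pool. The type and function names are hypothetical, not Linguapon's actual code.

```typescript
// Hypothetical shapes for illustration - not Linguapon's actual schema.
interface VocabItem {
  text: string;
  targetLanguage: string;  // e.g. "pt" for Portuguese
  collectionId?: string;   // set if the item belongs to a collection
}

interface TestSessionOptions {
  source: "translation-bank" | "premade" | "community";
  targetLanguage: string;
  collectionId?: string;   // optional narrowing to one collection
}

// Resolve the pool of items the session can draw questions from.
function resolveItemPool(allItems: VocabItem[], opts: TestSessionOptions): VocabItem[] {
  return allItems.filter(
    (item) =>
      item.targetLanguage === opts.targetLanguage &&
      (opts.collectionId === undefined || item.collectionId === opts.collectionId)
  );
}
```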


Blog 3 Image 3

Difficulty allows parameters of the quiz session to be tweaked to show easier or harder variations of question types, or to leave in or filter out difficult items derived from the source. For example, if English were the target language, the difficulty were set to Easy, and the source contained a sentence like 'The organization's altruistic efforts have positively impacted thousands of lives.', that sentence would likely not show up at all.
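As a rough sketch of that filtering, assuming each item carries some derived difficulty score (how the score is computed is out of scope here, and the thresholds are made up):

```typescript
type Difficulty = "easy" | "medium" | "hard";

// Illustrative cut-offs - the real thresholds are an assumption.
const MAX_SCORE: Record<Difficulty, number> = { easy: 3, medium: 6, hard: 10 };

interface ScoredItem {
  text: string;
  difficultyScore: number; // 1 (simple) to 10 (advanced), derived per item
}

// On Easy, an advanced sentence like the example above would be dropped.
function filterByDifficulty(items: ScoredItem[], difficulty: Difficulty): ScoredItem[] {
  return items.filter((item) => item.difficultyScore <= MAX_SCORE[difficulty]);
}
```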


Blog 3 Image 4

(Just a mock-up! This is an idea of what difficulties might look like)


The question types test a range of reading, comprehension, (in future, listening) and writing skills. These can be further categorised by input style, such as multiple choice, freeform input, or clicking word pills to match X with Y or to assemble Z. Which question appears next is completely random, subject to the parameters on the form. The current caveat of this randomness is that if a learner gets a question wrong, the same question will not necessarily reappear later in the session for them to correct themselves. I designed it this way to maximise the freshness of each test session and maintain the momentum of challenge.
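A minimal sketch of that selection logic, assuming the form parameters simply enable or disable question types:

```typescript
type QuestionType = "translate-word" | "identify-usage" | "assemble-sentence" | "fill-blank";

// Draw the next question type uniformly at random from whatever the
// form parameters left enabled. Each draw is independent, which is why
// an incorrectly answered question won't necessarily come back around.
function nextQuestionType(enabledTypes: QuestionType[]): QuestionType {
  return enabledTypes[Math.floor(Math.random() * enabledTypes.length)];
}
```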


Blog 3 Image 5
Blog 3 Image 6

Some questions use AI to generate possible answers for certain question types. Multiple choice is a great example application of AI completion, as you will see later. As a disclaimer, I am wary of over-relying on AI when designing question types. It is particularly tempting to go off the rails with AI and design question types that sound great and wacky. Fundamentally, using AI to carry out major roles in answer validation has to be approached carefully, because generated output is not always faultless, and sometimes not helpful.


Blog 3 Image 7

For every three questions answered correctly, the learner's keys open the lock to a random Linguapon character - the central reward system unique to Linguapon. The prospect of answering enough questions to hit that three-answer milestone gives the sense of aiming for a small goal - something which should only take a minute or two.
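The milestone check itself is simple - a sketch, assuming a running count of correct answers:

```typescript
const KEYS_PER_LOCK = 3; // three correct answers open one lock

// Returns true when the latest correct answer completes a set of three,
// i.e. when a random Linguapon character should be awarded.
function unlocksCharacter(totalCorrect: number): boolean {
  return totalCorrect > 0 && totalCorrect % KEYS_PER_LOCK === 0;
}
```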


Blog 3 Image 8

Test sessions are easy to pick up and put down. When the learner leaves the page, the progress of that test session is saved and can be revisited at any time. By implementing it this way, I hypothesise that learners are more likely to want to jump back into the same test session as soon as they access the platform. If learners can then spend a minute or two answering questions and earn a Linguapon, they are inherently engaged with learning or reinforcing concepts in their target language.
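A sketch of the pick-up/put-down behaviour. Since progress is saved against the account, in practice this would persist server-side; localStorage stands in here for brevity, and the state shape is an assumption.

```typescript
interface SessionState {
  sessionId: string;
  correctCount: number;
  remainingItemIds: string[];
}

// Persist progress when the learner leaves the page...
function saveSession(state: SessionState): void {
  localStorage.setItem(`testme:${state.sessionId}`, JSON.stringify(state));
}

// ...and restore it when they come back, so the session resumes mid-way.
function resumeSession(sessionId: string): SessionState | null {
  const raw = localStorage.getItem(`testme:${sessionId}`);
  return raw ? (JSON.parse(raw) as SessionState) : null;
}
```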


Blog 3 Image 9



Technical Details


Integration with Translation Bank


The items in the Translation Bank are centrally accessed across Linguapon. When Translation Bank is selected as the source on the options form, a call is made to pull the data all at once. Once the source resolves and all of the user's saved metadata has been fetched, the list of language options is presented, alongside the collections associated with each language.


There are two grouping mechanics happening in the background - one simply categorises each item by its target language; the other performs a separate query to fetch the personal bank collections known to the account and associates each item with the collection it belongs to.
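A sketch of those two passes, using hypothetical shapes similar to the earlier ones:

```typescript
interface BankItem {
  id: string;
  targetLanguage: string;
  collectionId?: string; // filled in by the separate collections query
}

// Pass 1: bucket every item by its target language.
function groupByLanguage(items: BankItem[]): Map<string, BankItem[]> {
  const groups = new Map<string, BankItem[]>();
  for (const item of items) {
    const bucket = groups.get(item.targetLanguage) ?? [];
    bucket.push(item);
    groups.set(item.targetLanguage, bucket);
  }
  return groups;
}

// Pass 2: associate each item id with the collection it belongs to.
function mapItemsToCollections(items: BankItem[]): Map<string, string> {
  const byItem = new Map<string, string>();
  for (const item of items) {
    if (item.collectionId !== undefined) byItem.set(item.id, item.collectionId);
  }
  return byItem;
}
```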


This same process also takes place when the test session is resumed, i.e. when the user revisits the same test session as before.


Items in the Translation Bank can be added or removed, and likewise collections within a Translation Bank can be added or removed. Whenever a change like this happens, the test session dynamically updates the pool of items it can draw question types from, to keep it in sync with the Translation Bank.


Preventing misuse


If a user decided to keep only one item in their Translation Bank and attempted to test themselves on it, that item could be answered correctly over and over. This is an example of misuse. To circumvent this issue, an item is blocked for a certain amount of time after the third time its text appears against the user's account. The blocking mechanism is managed by a lightweight token that contains the item text and an expiry timestamp. Once the expiry time elapses, the token is deemed stale and is cleared from the system on the next fetch. This token is checked whenever a session is created or resumed, to determine the pool of test items.
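A sketch of what that token and check might look like; the block duration and storage mechanism are assumptions:

```typescript
interface BlockToken {
  itemText: string;
  expiresAt: number; // Unix timestamp in ms
}

const APPEARANCE_LIMIT = 3;
const BLOCK_DURATION_MS = 24 * 60 * 60 * 1000; // illustrative: 24 hours

// Record an appearance of an item; issue a block token on the third one.
function recordAppearance(
  appearances: Map<string, number>,
  tokens: BlockToken[],
  itemText: string
): void {
  const count = (appearances.get(itemText) ?? 0) + 1;
  appearances.set(itemText, count);
  if (count >= APPEARANCE_LIMIT) {
    tokens.push({ itemText, expiresAt: Date.now() + BLOCK_DURATION_MS });
  }
}

// On session create/resume: ignore stale tokens, then exclude blocked items.
function filterBlockedItems(itemTexts: string[], tokens: BlockToken[]): string[] {
  const now = Date.now();
  const active = new Set(
    tokens.filter((t) => t.expiresAt > now).map((t) => t.itemText)
  );
  return itemTexts.filter((text) => !active.has(text));
}
```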


Use of AI


At the time of writing, Test Me has one application of AI: question completion. The idea is simple: to construct a question type where the user is asked to find the correct usage of a word from a choice of four sentences, present a word and form a prompt to generate three sentences with incorrect uses of that word and one sentence that uses it correctly.


The prompt additionally requests a hint corresponding to each sentence option. The question type presents these hints to the user after an answer is selected; they simply show the translation of each sentence in the learner's native language.
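As an illustration, a prompt for this question type could be assembled along these lines. The wording is purely a guess at the shape of such a prompt, not the prompt Linguapon actually uses:

```typescript
// Build a prompt asking for one correct and three incorrect usages of a
// word, each paired with a native-language translation as the hint.
function buildUsagePrompt(word: string, targetLanguage: string, nativeLanguage: string): string {
  return [
    `Generate 4 sentences in ${targetLanguage} that each contain the word "${word}".`,
    `Exactly 1 sentence must use the word correctly; the other 3 must use it incorrectly.`,
    `For each sentence, include a hint: its translation in ${nativeLanguage}.`,
    `Respond with JSON: [{ "sentence": string, "correct": boolean, "hint": string }].`,
  ].join("\n");
}
```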


Having fiddled around with different prompts on both OpenAI and Google's Vertex AI, I had more luck with Vertex AI, which generated more accurate and more convincing-sounding sentences.


Take the example below. The question here is in Cantonese and used OpenAI to generate the sentence options. In Cantonese, a sentence in spoken, conversational form can be very different from a sentence of equivalent meaning in written form. These generated sentences are akin to the written form, which in Cantonese would generally not be useful unless one is specifically learning to gain exposure to articles and documents. For example, the sentence '今天的天上有很多雲' ('There are many clouds in the sky today') seems uncharacteristic if you were practising Cantonese for daily use - it uses words that you would not come across in conversation.


Blog 3 Image 10

This took a couple of hours of tweaking the prompt before I finally ditched OpenAI for this purpose altogether.




The Callback


Test Me integrates with all of the other modules of Linguapon to create a holistic experience that addresses elements of all four challenges. Out of those four, I would say Test Me addresses two in particular.


Test Me doesn't just allow the learner to interact with the words and sentence items they've saved; each item in Test Me is presented in visualisations that you may not find in traditional textbooks. AI generates incorrect usages of word items, items are blanked out, and sentences can be visualised out of order and need to be reconstructed. The powerful part of this solution is that these three examples can easily be three different visualisations of the same sentence.


Test Me also addresses boredom by instantiating fresh sessions, offering short-term goals for earning rewards, and making sessions easy to bounce out of and back into. I don't want the user to feel 'trapped' into having to test everything in one go. I personally have had difficulty finding the motivation to learn and retain vocabulary because I don't find it an interesting enough task. I believe that breaking knowledge testing down in a way that balances flexibility, variation and challenge tackles my own perception of the task as a whole being 'boring'. The reward at the end is the cherry on top.




The Vision


In the shorter term, I consider the Test Me system a messenger that informs the other modules which words, sentences and question types the system thinks the learner has a strong or weak understanding of, based on their performance. In turn, this will improve and personalise the experience of the other modules.


Test Me at its core is a form of gamification, and I also really like the idea of incorporating a scoreboard. A scoreboard adds a competitive element that resonates with me personally, and it automatically ties into the community and social aspect of Linguapon that I am aiming to cultivate.


Blog 3 Image 11

(Just a mock-up!)


In the longer term, I want to build the most engaging knowledge testing platform - one that prioritises what the user wants to know rather than what the user should already know. What do I mean by that?


In a traditional quiz system, we are used to answering questions while the question bank stays the same whether we get them right or wrong. My vision for the Test Me session is a smart system that dynamically personalises itself to the learner's strengths and weaknesses. For example, it could adjust the difficulty of questions, filter out question types that are unsuitable for the user, or phase in and filter out test items based on the learner's knowledge.
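As a purely speculative sketch of that idea, weaker areas could be surfaced more often by weighting question types inversely to the learner's recent accuracy:

```typescript
// Speculative: weight each question type by (1 - accuracy) so the types
// the learner struggles with are drawn more often.
function pickWeightedType(accuracyByType: Map<string, number>): string {
  const entries = [...accuracyByType.entries()].map(
    ([type, accuracy]) => [type, 1 - accuracy] as const
  );
  const totalWeight = entries.reduce((sum, [, weight]) => sum + weight, 0);
  let r = Math.random() * totalWeight;
  for (const [type, weight] of entries) {
    r -= weight;
    if (r <= 0) return type;
  }
  return entries[entries.length - 1][0]; // fallback for rounding edge cases
}
```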


Blog 3 Image 12

Something like this... a smart self-aware quiz system...


The Test Me module remains in my overall vision for Linguapon as a crucial module that bridges the gap between learner exploration on the platform and the reward system. It is a bridge that cannot be collapsed by lazy design. Each question type teaches and tests learners in some meaningful way. I do not intend for Test Me to ever be a mindless click-for-reward system.


Written by Elvin, Linguapon Admin