National Digital Service Assistant (NDSA) aims to provide an improved information-access gateway for citizens and government employees of Bangladesh. Existing e-governance services depend on a traditional, human-centric, one-to-one question-answering approach to deliver information. We wanted to reduce this communication overhead by replacing the human service provider with an artificially intelligent virtual agent.
Objectives of the project
- Developing an intelligent virtual assistant for seven e-governance services.
- Ensuring 24/7 availability for smooth information access.
- Generating close-to-human-level replies to different queries from users.
- Providing support for Bangla language understanding.
- Increasing citizen satisfaction while interacting with an e-governance platform.
Technical challenges we faced and how we solved them
Lack of access to a structured dataset
While developing a machine learning-based system, the first challenge we faced was the shortage of relevant datasets. For our domain-specific knowledge base, there was no structured data available to train a system on, and building one is a huge, time-consuming task. To solve this problem, we worked continuously with the specific vendor of each service. We collected their user perspectives and expectations, which helped us identify the most common use cases for the virtual assistants. We then ran several statistical analyses to shape the collected data into a balanced dataset.
Incorporating a Bangla word representation model
Bangla is one of the most widely spoken languages, but in terms of machine interpretability it is still considered a low-resource language. As all of the virtual assistants interact only in Bangla, we had to deal with the complexity of representing Bangla words in a machine-interpretable format.
User Interface
There are several ways to represent a word in a machine-interpretable format. Depending on our problem space, we ran a research and development phase to find the most feasible way to solve the problem with the resources available. We found that existing pre-trained word vectors are very useful for representing words in machine learning models. Here we faced a trade-off: instead of building a custom word vector model (which would be expected to perform well on our problem), we chose to use a pre-trained model and save the weeks of data collection and training that a new model would require.
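As a rough sketch of this choice, publicly available pre-trained Bangla word vectors (for example, the fastText vectors) can be loaded directly and used to embed query tokens; the file name, example words, and helper function below are illustrative assumptions rather than our exact setup.

```python
# Minimal sketch: using pre-trained Bangla word vectors with gensim instead of
# training a custom embedding model. The vector file (e.g. fastText's cc.bn.300.vec)
# and the example words are assumptions for illustration.
from gensim.models import KeyedVectors

# Load the text-format pre-trained vectors once at startup.
vectors = KeyedVectors.load_word2vec_format("cc.bn.300.vec", binary=False)

def embed_tokens(tokens):
    """Map Bangla tokens to their pre-trained vectors, skipping out-of-vocabulary words."""
    return [vectors[token] for token in tokens if token in vectors]

# Example: embed a tokenized user query before it reaches the intent model.
query_tokens = ["পাসপোর্ট", "নবায়ন"]  # "passport", "renewal"
embedded = embed_tokens(query_tokens)
print(len(embedded), "tokens embedded, dimension:", vectors.vector_size)
```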
Intent recognition and entity extraction
Recognizing intents and extracting entities from natural language text is an ongoing research problem in the natural language processing domain. There is no predefined logic that solves this easily, and the rules used to break the problem down vary from language to language because of differing syntactic and lexical properties.
We collected entity groups from the respective service providers, and in the meantime all intents were evaluated by humans. This process helped narrow the focus to service-related intents and entities. We also used some rule-based statistical approaches to remove unnecessary words from queries, which had a significant impact on intent detection.
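The word-filtering step can be illustrated with a small sketch: a hand-curated stopword list is applied to each query before it reaches the intent classifier. The stopword set and example query below are made-up samples, not the rules used in production.

```python
# Minimal sketch of the rule-based clean-up applied to queries before intent detection.
# The stopword list is a tiny illustrative sample, not the production rule set.
BANGLA_STOPWORDS = {"আমি", "আমার", "কি", "এর", "জন্য"}  # "I", "my", "what", "of", "for"

def clean_query(query):
    """Tokenize on whitespace and drop words that carry little intent signal."""
    return [token for token in query.strip().split() if token not in BANGLA_STOPWORDS]

# Example: the filtered tokens are what the intent model actually sees.
print(clean_query("আমি কিভাবে পাসপোর্ট নবায়ন করব"))  # "How do I renew my passport?"
```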
Conversation flow design
Conversation plays an important role in human communication; it is one of the most primitive activities of human beings. To maintain a seamless conversation, processing user intention in real time was the most crucial part of making the virtual assistants more human-like.
To keep the conversation human-like, we implemented several chit-chat conversation flows; meanwhile, our main chat engine was developed to handle both happy and sad paths. A happy path means user queries are answered the way a human agent would answer them, and a sad path means the exact opposite. For handling sad paths, we designed a few fallback mechanisms to keep the conversation alive and the information flow persistent.
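One simple way to realize such a fallback is a confidence threshold on the intent model's prediction; the threshold, the placeholder `intent_model` and `answer_for` callables, and the fallback replies below are assumptions used only to illustrate the idea.

```python
# Minimal sketch of the happy-path / sad-path split, assuming the intent model
# exposes a confidence score. Threshold and fallback texts are illustrative only.
FALLBACK_THRESHOLD = 0.6

FALLBACK_REPLIES = [
    "দুঃখিত, প্রশ্নটি বুঝতে পারিনি। একটু অন্যভাবে লিখবেন কি?",   # "Sorry, I didn't understand. Could you rephrase?"
    "আপনি কি সেবা সম্পর্কিত প্রশ্নটি আরেকবার করবেন?",            # "Could you ask the service-related question again?"
]

def respond(query, intent_model, answer_for):
    """Answer on the happy path; fall back gracefully on the sad path."""
    intent, confidence = intent_model.predict(query)   # hypothetical model interface
    if confidence >= FALLBACK_THRESHOLD:
        return answer_for(intent)                      # happy path: answer like a human agent
    # Sad path: keep the conversation alive with a clarifying reply.
    return FALLBACK_REPLIES[hash(query) % len(FALLBACK_REPLIES)]
```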
Retrieving external domain information
Because the virtual assistants varied from service to service, retrieving the correct domain-specific information while interacting with the end-user was a complex task. Moreover, most of the information was only available in a raw, unstructured format.
We handled this problem using the existing APIs of each service. Some APIs were used to scrape relevant information based on user queries, while others provided user-specific information.
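A simplified sketch of the user-specific case is shown below; the endpoint, query parameter, and response fields are hypothetical placeholders rather than the actual service APIs.

```python
# Minimal sketch of fetching domain-specific information from a service API
# based on an entity extracted from the chat. Endpoint and fields are hypothetical.
import requests

SERVICE_API = "https://example-service.gov.bd/api/status"  # placeholder endpoint

def fetch_application_status(application_id):
    """Ask the vendor's API for user-specific information (e.g. an application status)."""
    response = requests.get(SERVICE_API, params={"id": application_id}, timeout=10)
    response.raise_for_status()
    return response.json().get("status", "unknown")
```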
Automating the data collection and acquisition pipeline
The virtual assistants were continuously interacting with end-users and sometimes failed to generate the best response because of data and knowledge limitations. Collecting relevant data is a tiresome job when done manually, so we adopted an automated pipeline for data collection and acquisition.
Overall NDSA Architecture
As data is the core part of the system, we heavily focused on collecting new samples and importing them into the running system. To keep the data noise-free, we included human support in this process. We set up an additional administrative panel for each virtual assistant where previous user queries are shown on a dashboard. A person with enough domain-specific knowledge can then go through all the wrongly answered questions and further enrich the dataset using the administrative modules.
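The sketch below shows one way such a review queue could be fed; the table layout and column names are assumptions, not the actual admin panel schema.

```python
# Minimal sketch of queuing wrongly answered or low-confidence queries for human
# review in the admin panel. Table and column names are illustrative assumptions.
import sqlite3
from datetime import datetime

conn = sqlite3.connect("review_queue.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS review_queue "
    "(query TEXT, predicted_intent TEXT, confidence REAL, created_at TEXT)"
)

def queue_for_review(query, predicted_intent, confidence):
    """Store a query the bot handled poorly so a domain expert can relabel it later."""
    conn.execute(
        "INSERT INTO review_queue VALUES (?, ?, ?, ?)",
        (query, predicted_intent, confidence, datetime.utcnow().isoformat()),
    )
    conn.commit()
```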
Real-time chatbot learning and training infrastructure
Real-time chatbot training is a serious concern for a system that must keep evolving by learning from the data retrieved through the acquisition pipeline. We also applied current machine learning practices to ensure the models are retrained at regular intervals.
The data acquisition pipeline improves the quality of the data in our system, but we still need to train our models on this new data. To automate this task, we again used the admin panel, where we set up a separate module to train and restart the existing virtual assistants.
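Conceptually, that module boils down to a retrain-and-reload step that can be triggered on demand or on a schedule; `train_model` and `reload_assistant` below are hypothetical placeholders for the actual training and restart code.

```python
# Minimal sketch of the retrain-and-restart step exposed through the admin panel.
# train_model and reload_assistant are placeholders for the real implementations.
def retrain_and_restart(train_model, reload_assistant, data_path="training_data.json"):
    """Train a fresh model on newly acquired data, then swap it into the running bot."""
    new_model_path = train_model(data_path)   # writes a new model artifact to disk
    reload_assistant(new_model_path)          # restart the virtual assistant with it

# Example: call this from a scheduled job or from a button in the admin panel.
# retrain_and_restart(train_model=my_trainer, reload_assistant=my_reloader)
```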
Access for human supervision with different administrative privileges
Sometimes we need human supervision to take control of the virtual assistants. This is necessary for making important decisions and circulating information easily, and humans are also needed to change specific behaviors of an already trained intelligent system. Although we have developed an artificially intelligent virtual agent, many tasks still require human supervision. We gathered the requirements from the vendors and compiled all of these functionalities into a single admin panel, providing different user roles and maintenance facilities. All processes are automated, so any admin can control the virtual assistant without understanding the technical jargon.
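A minimal sketch of how such role separation could look is given below, using Flask purely for illustration; the route, role names, and the header-based role check are assumptions rather than the actual admin panel implementation.

```python
# Minimal sketch of role-based access in the admin panel. Route names, roles, and
# the way the caller's role is resolved are illustrative assumptions.
from functools import wraps
from flask import Flask, abort, request

app = Flask(__name__)

def require_role(role):
    """Allow a view only for admins holding the given role."""
    def decorator(view):
        @wraps(view)
        def wrapped(*args, **kwargs):
            # Hypothetical: the caller's role arrives in a header set by the auth layer.
            if request.headers.get("X-Admin-Role") != role:
                abort(403)
            return view(*args, **kwargs)
        return wrapped
    return decorator

@app.route("/admin/retrain", methods=["POST"])
@require_role("superadmin")
def trigger_retrain():
    # Only a super admin may trigger retraining of the assistant.
    return {"status": "retraining started"}
```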
Seamless API communication with frontend and backend
The virtual assistants work on the specific services' websites, which are hosted on completely different servers. The challenge here is managing smooth communication under a large user load with a fast response rate.
We always wanted to keep the whole system isolated from the main service to avoid the workload of merging two different systems. So we hosted our chat user interface on a different server and established a separate gateway to the chat engine backend. The main remaining challenge was to incorporate the whole thing into the vendor's service page. To resolve this, we used an iframe to pass all the resources to the main service page; the chat UI and engine both stay isolated from the main system, which makes future work easier.
Deployment Server Architecture
Maintaining security and load balancing for a huge user base raised multiple issues while designing the deployment architecture, which includes the front end and back end of both the virtual assistant and the admin panel for every service. We designed the whole server architecture in two layers. In the first layer, we set up a load-balancing server whose job is to route incoming requests to specific servers; this is a public server, so it acts as the users' gateway for interacting with the virtual assistants. In the second layer, we placed all the servers running the virtual assistants and admin panels. The second layer is a collection of private servers, so requests from outside cannot reach them without passing through the first layer. This two-layer server architecture ensures a more secure and isolated environment.
Collaboration Challenges
Our chatbot is implemented as plug-and-play for the different services. The actual web applications for those services are maintained by different third parties, and our hosting server is provided by a2i.
Time-consuming integration on the main web app
After the completion of our chatbot and API, the responsible third-party companies need to implement and integrate them into the main application. So making the product visible is delayed or put on hold depending on the respective companies' availability.
Server configuration and maintenance issues
As our hosting server is provided by a2i, we are heavily dependent on them whenever core-level changes are needed on the server. Because we had limited access and were given a VPN connection to reach those servers, we regularly faced problems connecting to the private servers. Any core-level change is queued for a long time; it took several days, sometimes several weeks, to get changes applied on the server, depending on the availability of the responsible persons.
Server Routing Architecture
Beyond the long-term development work, we value this project differently: it is one of the most research-oriented projects we have encountered recently, with many decisions to make among different choices and perspectives. Combining the traditional agile development cycle with a continuously evolving data pipeline makes the whole process more holistic. We are still concentrating on making the existing machine learning models more robust. Our future goal is to design more human-centric virtual assistants with improved, personalized conversation flow. As this field is still evolving through trial and error, we expect to get more accurate results in the near future with continuous development and a larger dataset.
Contributor: MD Nayem Uddin