Ruth Natural Language Understanding
Welcome to the RUTH NLU documentation. RUTH is an open-source Natural Language Understanding (NLU) framework developed by puretalk.ai. It is a Python module that allows you to parse natural language sentences and extract information from them. RUTH is a CLI-based tool that can be used to train and test models.
Quick Installation
$ pip install ruth-nlu
Manual Installation
Manual installation is not recommended. If you want to install from source, clone the repository and run the following commands.
$ git clone https://github.com/prakashr7d/Research-implementation-NLU-engine.git
$ cd Research-implementation-NLU-engine
$ python setup.py install
Installation Using Makefile (for Linux or macOS Users)
A Makefile is a file that contains a set of directives used by the make build automation tool to generate executables and other non-source files of a program from the program's source files.
$ git clone https://github.com/prakashr7d/Research-implementation-NLU-engine.git
$ cd Research-implementation-NLU-engine
For Ubuntu,
$ sudo apt-get install make
$ make bootstrap
For Mac,
$ brew install make
$ make bootstrap-mac
After building the dependencies, install RUTH using the Makefile
$ make install
PyTorch installation with GPU support
$ pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu116
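To verify that the GPU build is active, a quick check in Python can help (a minimal sketch, assuming torch is already installed in the current environment):

import torch

# True if the CUDA-enabled build was installed and a compatible GPU/driver is visible.
print(torch.cuda.is_available())
# CUDA version the wheel was built against, e.g. "11.6" for the cu116 index above.
print(torch.version.cuda)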
Documentation
Getting Started
The main objective of this library is to extract information by parsing sentences written in natural language. To get started with RUTH, follow the steps below. Run the following command to build an initial project with data and a default pipeline file.
$ mkdir project_name
$ ruth init
Output
The project will be initialized with the following structure.
├── data
│   └── example.yml
└── pipeline.yml
The project will be created with example data and a pipeline file.
CLI
RUTH has a CLI interface for training and testing models. To get started with the CLI, run the following command
$ ruth --help
This will list the available commands and options
usage: $ ruth [-h] [-v] {train,test} ...
Training
To train the model, run the following command
$ ruth train -p path/to/pipeline.yaml -d path/to/dataset.json
Parameters
-p, --pipeline Pipeline file
-d, --data dataset path
Saving Trained Models
After training, the model will be saved in the models folder in the current directory.
Dataset format
RUTH uses a YAML file to store the training data. The YAML file should have the following syntax.
Example
version: "0.1"
nlu:
- intent: ham
examples: |
- WHO ARE YOU SEEING?
- Great! I hope you like your man well endowed. I am <#> inches
- Didn't you get hep b immunisation in nigeria.
- Fair enough, anything going on?
- Yeah hopefully, if tyler can't do it I could maybe ask around a bit
- intent: spam
examples: |
- Did you hear about the new Divorce Barbie? It comes with all of Ken's stuff!
- I plane to give on this month end.
- Wah lucky man Then can save money Hee
- Finished class where are you.
- K..k:)where are you?how did you performed?
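Before training, it can be handy to sanity-check a dataset file by loading it with PyYAML. The short sketch below is not part of the RUTH CLI; the data/example.yml path simply reuses the file created by ruth init:

import yaml  # pip install pyyaml

# Load the training data and count the examples listed under each intent.
with open("data/example.yml", "r", encoding="utf-8") as f:
    dataset = yaml.safe_load(f)

for block in dataset["nlu"]:
    # "examples" is a YAML block scalar with one "- ..." entry per line.
    examples = [line.lstrip("- ").strip()
                for line in block["examples"].splitlines() if line.strip()]
    print(block["intent"], len(examples))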
Pipeline
RUTH is a pipeline-based NLU engine. It has three basic components:
- Tokenizer
- Featurizer
- Intent Classifier

The pipeline-data.yml file is used to define the pipeline and its components. Below is an example of a pipeline-basic.yml file for a Support Vector Machine (SVM) based intent classifier with a CountVectorizer-based featurizer.
task:
  pipeline:
    - name: "WhiteSpaceTokenizer"
    - name: "CountVectorFeaturizer"
    - name: "SVMClassifier"
Example of a pipeline file for a BERT-based tokenizer and intent classifier using the bert-base-uncased model.

task:
  pipeline:
    - name: "HFTokenizer"
      model: "bert-base-uncased"
    - name: "HFClassifier"
      model: "bert-base-uncased"
Parsing
To parse the text, run the following command
$ ruth parse -m path/to/model_dir -t "I want to book a flight from delhi to mumbai"
Parameters
-m, --model_path model file (optional)
-t, --text text message (required)
If the model path is not provided, the parse command will use the latest model in the models directory as the default model.
Testing
To test the model performance, run the following command
$ ruth evaluate -p path/to/pipeline-basic.yml -d path/to/dataset
Parameters
-p, --pipeline Pipeline file
-d, --data dataset file
-o, --output_folder Folder to save the result as a PNG file (optional)
-m, --model_path Model path (optional)
If the model path is not provided, the evaluate command will use the latest model in the models directory as the default model.
If the output folder is not provided, the results will be saved in a results folder in the current working directory.
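For readers who want to build a similar PNG report outside the CLI, a confusion matrix over predicted intents is one common choice. The sketch below uses scikit-learn and matplotlib with made-up labels; it only illustrates the idea and is not what ruth evaluate produces internally:

import os

import matplotlib
matplotlib.use("Agg")  # render to a file without needing a display
import matplotlib.pyplot as plt
from sklearn.metrics import ConfusionMatrixDisplay

# Illustrative true vs. predicted intents.
y_true = ["ham", "spam", "ham", "spam", "ham"]
y_pred = ["ham", "spam", "spam", "spam", "ham"]

os.makedirs("results", exist_ok=True)
ConfusionMatrixDisplay.from_predictions(y_true, y_pred)
plt.savefig("results/confusion_matrix.png")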
Deployment
RUTH uses FastAPI to serve the model as a REST API. To deploy the model, run the following command
$ ruth deploy -m path/to/model_dir
Parameters
-m, --model_path model file (required)
-p, --port port number (optional)
-h, --host host name (optional)
Output
INFO: Started server process [1]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://localhost:5500 (Press CTRL+C to quit)
API
Once the model is deployed, you can use the following API to parse the text
POST /parse
{
  "text": "Hello ruth!"
}
Output
{
  "text": "Hello ruth!",
  "intent_ranking": [
    {
      "name": "greet",
      "accuracy": 0.9843385815620422
    },
    {
      "name": "how_are_you",
      "accuracy": 0.0017248070798814297
    },
    {
      "name": "voice_mail",
      "accuracy": 0.0008955258526839316
    }
  ],
  "intent": {
    "name": "greet",
    "accuracy": 0.9843385815620422
  }
}
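Once the server is up, the same endpoint can also be called from Python. Here is a minimal sketch using the requests library, assuming the default host and port shown in the deployment log above:

import requests

# POST the text to the deployed /parse endpoint.
response = requests.post(
    "http://localhost:5500/parse",
    json={"text": "Hello ruth!"},
)
result = response.json()

# Top intent and its score, matching the shape of the sample response above.
print(result["intent"]["name"], result["intent"]["accuracy"])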