
Textify (the name comes from the prefix of "Text" and the suffix of "Classify") is a high-level framework built on TensorFlow for text classification. While text classification is the main task of this toolkit, it has also been designed to support other NLP tasks.

Textify is used to implement the following models: Character-level Convolutional Networks for Text Classification. You can find the implementation in this repo, CharCNN.
Textify provides a framework consisting of two main API layers. The textify.data module enables you to build input pipelines from simple, reusable pieces. As Textify is designed mainly to support text classification and other NLP tasks, we provide some predefined data layers. First, we describe the abstract class DataLayer.
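The full DataLayer interface is not reproduced in this excerpt. Purely as an illustration of the abstract-base-class pattern such a class typically follows, here is a minimal sketch; the method names _build_dataset and input_fn and the batching step are assumptions made for this example, not Textify's documented API.

```python
import abc


class DataLayer(abc.ABC):
    """Illustrative sketch only: concrete data layers turn raw sources
    into a tf.data.Dataset. Names below are assumed, not Textify's API."""

    def __init__(self, batch_size=32):
        self.batch_size = batch_size

    @abc.abstractmethod
    def _build_dataset(self):
        """Return an unbatched tf.data.Dataset built from the configured sources."""

    def input_fn(self):
        # Shared behaviour lives in the base class; subclasses only decide
        # how individual samples are read and parsed.
        return self._build_dataset().batch(self.batch_size)
```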
The default data layer is designed to build an input pipeline for word-based text classification.

features_source: A tf.string tensor containing one or more filenames. Each line in the file represents one sample.

labels_source (Optional): If None, the data layer only works in inference mode. Otherwise, the input pipeline will be prepared as a labeled data pipeline; in this case the data layer is meant to be used in train or eval mode. The labels_source must be text file(s) in which each line represents the class that the corresponding sample in the features_source file belongs to.
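As a rough illustration of the behaviour described above (a labeled pipeline when labels_source is given, an unlabeled one otherwise), here is a minimal sketch using plain tf.data rather than Textify's own classes. The function name build_pipeline, the default batch size, and the assumption of integer class ids are hypothetical; only the file formats, one sample per line and one class per line, follow the description.

```python
import tensorflow as tf


def build_pipeline(features_source, labels_source=None, batch_size=32):
    # Each line of features_source is one sample; split it into word tokens.
    # A real word-based pipeline would also map tokens to vocabulary ids.
    features = tf.data.TextLineDataset(features_source)
    features = features.map(lambda line: tf.strings.split(line))

    if labels_source is None:
        # No labels: inference-mode (unlabeled) pipeline.
        dataset = features
    else:
        # Line i of labels_source is the class of sample i in features_source,
        # so the two datasets are zipped together (train/eval mode).
        # Assumes integer class ids, one per line.
        labels = tf.data.TextLineDataset(labels_source)
        labels = labels.map(lambda s: tf.strings.to_number(s, out_type=tf.int32))
        dataset = tf.data.Dataset.zip((features, labels))

    # Pad token sequences so samples of different lengths can share a batch.
    return dataset.padded_batch(batch_size)
```

Calling build_pipeline with both a features file and a labels file yields (tokens, label) batches for training or evaluation, while calling it with a features file alone yields unlabeled token batches for inference.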