Introduction

An NLP tokenization algorithm implemented as a trainable layer for neural networks.

This is the documentation for a package built around an experimental "tokenization layer": a tokenization algorithm implemented as a neural-network layer. It trains jointly with a model on an NLP task, learning to produce the tokens best suited to that task.
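To make the idea concrete, here is a minimal sketch of what a forward pass through such a layer might look like. This is not the package's API; all names (`char_embed`, `slot_queries`, `tokenize`) and the soft-assignment scheme are illustrative assumptions. The key property is that every step is differentiable, so the character-to-token grouping can be learned from the downstream task loss.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical parameters (learnable in a real model; random here).
rng = np.random.default_rng(0)
vocab_size, embed_dim, n_slots = 128, 16, 4
char_embed = rng.normal(size=(vocab_size, embed_dim)) * 0.1   # per-character embeddings
slot_queries = rng.normal(size=(n_slots, embed_dim)) * 0.1    # one query per token slot

def tokenize(text):
    """Softly assign each character to one of n_slots 'tokens'."""
    ids = np.array([ord(c) % vocab_size for c in text])
    chars = char_embed[ids]                            # (len, embed_dim)
    scores = softmax(slot_queries @ chars.T, axis=-1)  # (n_slots, len), differentiable
    return scores @ chars                              # (n_slots, embed_dim) "token" vectors

tokens = tokenize("hello world")
print(tokens.shape)  # (4, 16)
```

Because the pooling weights come from a softmax rather than hard segment boundaries, gradients flow from the task loss back into the embeddings and queries, which is what lets the tokenization adapt to the task.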

Note, however, that this is only a concept; in its current state, this layer should not be used for any real tasks.
