I. Introduction
A restricted Boltzmann machine (RBM) [1] is a probabilistic model that employs a layer of hidden variables to achieve highly expressive marginal distributions. RBMs are an unsupervised learning technique and have been extensively explored and applied in various fields, such as pattern recognition [2][3] and signal processing [4]. However, conventional RBMs are designed for vector data and cannot directly deal with matrices and higher-dimensional data, which are common in real life. For example, grayscale images are second-order tensors (i.e., matrices), while color images and grayscale videos are third-order tensors. The traditional approach to applying an RBM to such high-dimensional data is to vectorize the data, which has two drawbacks. First, the spatial information in the original data is lost, weakening the model's ability to represent these structural correlations. Second, the fully connected visible and hidden units may cause overfitting, since the intrinsic low-rank property of much real-life data is disregarded.
To address these issues, we propose the matrix product operator (MPO) restricted Boltzmann machine (MPO-RBM), in which both the visible and hidden layers are in tensor form. The link between the visible and hidden layers is represented by an MPO, which is essentially a tensor network representation of a matrix. To train an MPO-RBM, we further describe customized parameter learning algorithms. The MPO-RBM is also compared with the standard RBM and the matrix-variate RBM [5] on tensor inputs in numerical experiments. This article makes the following major contributions:

Compared with standard RBMs, the number of parameters in an MPO-RBM grows only linearly with the order of the tensor instead of exponentially. This alleviates overfitting, as demonstrated through numerical experiments.

Both the visible and hidden layers are represented in tensor form, and therefore useful structural information in the original tensor data is preserved and utilized.

The graphical “language” of tensor network diagrams [7] is introduced to represent how specific quantities (such as the conditional probabilities) are computed. In this way, complicated mathematical expressions are easily understood visually.

Although the data structures are generalized to tensors, the bipartite graph nature of an RBM still applies, together with its fast sampling and inference properties.
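To make the first contribution concrete, here is a small back-of-the-envelope comparison of parameter counts, with hypothetical mode size n, tensor order d, and uniform MPO-rank r (these dimensions are illustrative, not taken from the experiments):

```python
def rbm_params(n, d):
    # Standard RBM on the vectorized tensor: a full n^d x n^d weight matrix,
    # so the parameter count grows exponentially with the order d.
    return (n ** d) * (n ** d)

def mpo_rbm_params(n, d, r):
    # d MPO-cores of shape (r, n, n, r); an upper bound, since the two
    # boundary cores have outer rank 1. The count grows linearly with d.
    return d * r * n * n * r

# Example: mode size 8, uniform MPO-rank 4.
assert rbm_params(8, 3) == 262144      # 512 x 512 weight matrix
assert mpo_rbm_params(8, 3, 4) == 3072 # three 4x8x8x4 cores
```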
II. Related work
Real-life data are often multiway, which has motivated researchers to develop corresponding multiway RBMs. For example, [8] proposed a factored conditional RBM for modeling motion style. In their model, both historical and current motion vectors are considered as inputs so that the pairwise association between them is captured. However, since the visible layer is in vector form, the spatial information in the multiway data is not retained. In [3] a three-way factored conditional RBM was proposed, where a three-way weight tensor is employed to capture the correlations between the input, output, and hidden variables. However, their training data still require vectorization.
The above works both aim to capture interactions among different vector inputs and are hence not directly applicable to matrix and tensor data. The first RBM designed for tensor inputs is the tensor-variate RBM (TvRBM) [6]. In the TvRBM, the visible layer is represented as a tensor, but the hidden layer is still a vector. Furthermore, the connection between the visible and hidden layers is described by a canonical polyadic (CP) tensor decomposition [9]. However, this CP weight tensor has been claimed to constrain the model's representation capability [5].
Another RBM-related model that utilizes tensor inputs is the matrix-variate RBM (MvRBM) [5]. The visible and hidden layers in an MvRBM are both matrices. Nonetheless, to limit the number of parameters, the MvRBM models the connection between the visible and hidden layers through two separate matrices, which restricts the model's ability to capture correlations between different data modes.
All these issues have motivated the MPO-RBM. Specifically, the MPO-RBM not only employs tensorial visible and hidden layers, but also utilizes a general and powerful tensor network, namely an MPO, to connect them. In doing so, the MPO-RBM achieves a more powerful model representation capacity than the MvRBM while greatly reducing the number of model parameters compared to a standard RBM. Note that a mapping between standard RBMs and tensor networks has been described in [10]; however, that work does not generalize the standard RBM to tensorial inputs and is therefore still based on visible and hidden units in vector form.
III. Preliminaries
We review some necessary tensor basics, the MPO tensor decomposition, the standard RBM and its tensorial variants.
III-A. Tensor basics
Tensors are multidimensional arrays that are higher-order generalizations of vectors (first-order tensors) and matrices (second-order tensors). A $d$th-order (also called $d$-way or $d$-mode) tensor is denoted $\mathcal{A} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_d}$, where each entry $a_{i_1 i_2 \cdots i_d}$ is determined by $d$ indices. The numbers $I_1, \ldots, I_d$ are called the dimensions of the tensor. We use boldface capital calligraphic letters $\mathcal{A}, \mathcal{B}, \ldots$ to denote tensors, boldface capital letters $\mathbf{A}, \mathbf{B}, \ldots$ to denote matrices, boldface letters $\mathbf{a}, \mathbf{b}, \ldots$ to denote vectors, and roman letters $a, b, \ldots$ to denote scalars. $\mathbf{A}^T$ and $\mathbf{a}^T$ are the transposes of a matrix $\mathbf{A}$ and a vector $\mathbf{a}$. An intuitive and useful graphical representation of scalars, vectors, matrices, and tensors, named tensor network diagrams, is depicted in Fig. 1. The unconnected edges are the indices of the tensor. We will mainly employ these graphical representations to visualize the tensor networks and operations in the following sections and refer to [7] for more details. We now briefly introduce some important tensor operations.
Definition 1
(Tensor index contraction): A tensor index contraction is the sum over all possible values of the repeated indices in a set of tensors.
For example, the following contraction between a third-order tensor $\mathcal{A} \in \mathbb{R}^{I_1 \times I_2 \times I_3}$ and a fourth-order tensor $\mathcal{B} \in \mathbb{R}^{I_2 \times I_3 \times I_4 \times I_5}$

$$ c_{i_1 i_4 i_5} = \sum_{i_2=1}^{I_2} \sum_{i_3=1}^{I_3} a_{i_1 i_2 i_3}\, b_{i_2 i_3 i_4 i_5} $$

over the $i_2$ and $i_3$ indices results in a third-order tensor $\mathcal{C} \in \mathbb{R}^{I_1 \times I_4 \times I_5}$. The corresponding tensor network diagram for these index contractions is shown in Fig. 2, where the summation over the $i_2$ and $i_3$ indices is indicated by the connected edges. The resulting diagram has three unconnected indices ($i_1, i_4, i_5$), which confirms that $\mathcal{C}$ is of third order.
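In NumPy, this kind of index contraction can be written directly with `einsum`; the shapes below are illustrative:

```python
import numpy as np

# Contract a third-order tensor A (indices i, j, k) with a fourth-order
# tensor B (indices j, k, l, m) over the shared indices j and k.
# The result C has the three unconnected indices (i, l, m).
A = np.random.rand(2, 3, 4)
B = np.random.rand(3, 4, 5, 6)
C = np.einsum('ijk,jklm->ilm', A, B)
assert C.shape == (2, 5, 6)

# Cross-check against tensordot over the same index pairs.
assert np.allclose(C, np.tensordot(A, B, axes=([1, 2], [0, 1])))
```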
III-B. Tensor MPO decomposition
An MPO is essentially a tensor network representation of a matrix. To relate the row and column indices of a matrix to the corresponding multi-indices of the MPO, we need the following definition.
Definition 2
The mapping between a linear index $i$ of a vector $\mathbf{a} \in \mathbb{R}^{I_1 I_2 \cdots I_d}$ and its corresponding multi-index $(i_1, i_2, \ldots, i_d)$ when reshaped into a tensor $\mathcal{A} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_d}$ is

$$ i = i_1 + \sum_{k=2}^{d} (i_k - 1) \prod_{l=1}^{k-1} I_l. \qquad (1) $$
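This mapping is the column-major (little-endian) reshaping convention; in NumPy it corresponds to `order='F'`, up to the 0-based index offset. A minimal sketch with an illustrative shape:

```python
import numpy as np

# Round-trip between a linear index and its multi-index for a vector of
# length I1*I2*I3 reshaped into a tensor of shape (I1, I2, I3).
# Column-major ('F') order matches i = i1 + (i2-1)*I1 + (i3-1)*I1*I2,
# shifted to 0-based indices as used by NumPy.
shape = (2, 3, 4)
linear = 13                                        # 0-based linear index
multi = np.unravel_index(linear, shape, order='F') # -> (1, 0, 2)
back = np.ravel_multi_index(multi, shape, order='F')
assert back == linear
```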
Now suppose we have a matrix $\mathbf{W} \in \mathbb{R}^{I_1 \cdots I_d \times J_1 \cdots J_d}$, where the index mapping (1) is used to split the row index $i$ and column index $j$ into multi-indices $(i_1, \ldots, i_d)$ and $(j_1, \ldots, j_d)$, respectively. After this index splitting we can arrange all matrix entries into a $2d$-way tensor $\mathcal{W}$. Per definition, the corresponding MPO decomposition represents each entry of $\mathcal{W}$ as

$$ w_{i_1 j_1 i_2 j_2 \cdots i_d j_d} = \sum_{r_1, \ldots, r_{d+1}} \mathcal{W}^{(1)}[r_1, i_1, j_1, r_2]\, \mathcal{W}^{(2)}[r_2, i_2, j_2, r_3] \cdots \mathcal{W}^{(d)}[r_d, i_d, j_d, r_{d+1}]. \qquad (2) $$

The “building blocks” of the MPO are the 4-way tensors $\mathcal{W}^{(k)} \in \mathbb{R}^{R_k \times I_k \times J_k \times R_{k+1}}$, also called the MPO-cores. The dimensions $R_1, \ldots, R_{d+1}$ of the summation indices are called the MPO-ranks. Observe that the fourth index of $\mathcal{W}^{(k)}$ coincides with the first index of $\mathcal{W}^{(k+1)}$, which ensures that the right-hand side of (2) results in a scalar. When $R_1 = R_{d+1} > 1$, the MPO is said to have periodic boundary conditions. We will assume throughout the remainder of this article that $R_1 = R_{d+1} = 1$. The tensor network diagram representation of (2) is shown in Fig. 3. All index contractions are again represented as edges connecting two tensors. The main motivation to use an MPO representation of a matrix is to reduce its storage requirement. Indeed, using an MPO representation with maximal MPO-rank $R$ reduces the storage requirement from $\prod_{k=1}^{d} I_k J_k$ down to approximately $\sum_{k=1}^{d} R^2 I_k J_k$. This leads to significant storage savings when $R$ is relatively small. Consequently, all computations are done on the MPO-cores rather than on the whole matrix itself. This could potentially reduce the computational complexity of the learning algorithm, especially if the visible and hidden layers are also represented as matrix product states (MPS, also called tensor trains in the scientific computing community [11]). This extension is left for future work.
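As a sketch of how an MPO stores a matrix, the following evaluates a single entry of a two-core MPO and cross-checks it against the explicitly contracted full matrix. The shapes are illustrative, and row-major index grouping is used for the check (the index-grouping convention only changes how entries are ordered, not the decomposition itself):

```python
import numpy as np

# Two MPO-cores with shapes (1, I1, J1, R) and (R, I2, J2, 1).
R = 3
core1 = np.random.rand(1, 4, 5, R)
core2 = np.random.rand(R, 2, 3, 1)

def mpo_entry(cores, row_idx, col_idx):
    # Entry W[(i1,i2),(j1,j2)]: multiply the rank-index matrices selected
    # by each (row, col) index pair along the chain.
    result = np.eye(1)
    for core, i, j in zip(cores, row_idx, col_idx):
        result = result @ core[:, i, j, :]
    return result[0, 0]

val = mpo_entry([core1, core2], (1, 0), (2, 1))

# Cross-check: contract the cores into the full 8 x 15 matrix, grouping
# row indices as i1*I2 + i2 and column indices as j1*J2 + j2 (row-major).
W = np.einsum('aijb,bklc->ikjl', core1, core2).reshape(4 * 2, 5 * 3)
assert np.isclose(val, W[1 * 2 + 0, 2 * 3 + 1])
```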
III-C. Standard RBM
The standard RBM [1] is a bipartite undirected probabilistic graphical model with one visible layer $\mathbf{v}$ and one hidden layer $\mathbf{h}$, both in vector form. Here we mainly consider binary RBMs, which implies that the entries of both $\mathbf{v}$ and $\mathbf{h}$ attain binary values. The standard RBM assigns the following energy function to a specific joint configuration $(\mathbf{v}, \mathbf{h})$:

$$ E(\mathbf{v}, \mathbf{h}) = -\mathbf{a}^T \mathbf{v} - \mathbf{b}^T \mathbf{h} - \mathbf{v}^T \mathbf{W}\, \mathbf{h}, \qquad (3) $$

where $\mathbf{a}$ and $\mathbf{b}$ are the biases of the visible and hidden layers, respectively, and $\mathbf{W}$ is the mapping matrix. All model parameters together are denoted $\Theta$. Eq. (3) is easily represented as a tensor network diagram, shown in Fig. 4(a). The conditional distributions over the hidden and visible layers can be written as:
$$ P(\mathbf{h} = \mathbf{1} \mid \mathbf{v}) = \sigma(\mathbf{W}^T \mathbf{v} + \mathbf{b}), \qquad (4) $$

$$ P(\mathbf{v} = \mathbf{1} \mid \mathbf{h}) = \sigma(\mathbf{W}\, \mathbf{h} + \mathbf{a}), \qquad (5) $$

where $\sigma(x) = 1/(1 + e^{-x})$ is the logistic sigmoid function, applied elementwise, and $\mathbf{1}$ denotes a vector of ones. Parameter training in a standard RBM is commonly performed with the contrastive divergence (CD) algorithm [1] and its variants [12, 13].
III-D. Matrix-variate and tensor-variate RBMs
Here we briefly introduce the two non-vector RBM models, namely the MvRBM [5] and the TvRBM [6]. The TvRBM employs a tensorial visible layer and keeps the hidden layer in vector form. A low-rank CP tensor decomposition is used to connect the visible and hidden layers. However, this form of connection has also been criticized as limiting the model's capability [5]. The corresponding energy function of the TvRBM is shown in Fig. 4(b) as a tensor network diagram.
In the MvRBM model, both the visible and hidden layers are matrices and are interconnected through two independent weight matrices. This particular construction reduces the total number of parameters, but comes at the cost of a limited representation ability, as the weight matrix over the vectorized layers is constrained to be a Kronecker product of the two matrices. The energy function of the MvRBM is graphically represented as a tensor diagram in Fig. 4(c).
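The Kronecker-product constraint can be checked numerically: over the vectorized layers, two separate mode-wise weight matrices W1 and W2 act as the single matrix W2 ⊗ W1. A minimal NumPy check of the underlying identity vec(W1 V W2ᵀ) = (W2 ⊗ W1) vec(V), with illustrative shapes:

```python
import numpy as np

W1 = np.random.rand(4, 3)   # acts on the rows of V
W2 = np.random.rand(5, 2)   # acts on the columns of V
V = np.random.rand(3, 2)    # matrix-shaped visible layer

# Column-major vectorization turns the two-sided matrix product into a
# single Kronecker-structured matrix-vector product.
lhs = np.kron(W2, W1) @ V.flatten(order='F')
rhs = (W1 @ V @ W2.T).flatten(order='F')
assert np.allclose(lhs, rhs)
```

Every admissible MvRBM weight is thus forced into this Kronecker structure, which is exactly the restriction the MPO generalization removes.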
IV. MPO-RBM
We now describe the proposed MPO-RBM and its customized model parameter learning algorithms.
IV-A. Model definition
In an MPO-RBM, both the visible layer $\mathcal{V}$ and the hidden layer $\mathcal{H}$ are $d$-way tensors. As a result, the weight matrix is now a $2d$-way tensor $\mathcal{W}$, which is represented by an MPO instead. With both the visible and hidden layers being tensors, it is therefore also required that the biases are tensors $\mathcal{A}$ and $\mathcal{B}$, respectively. The energy function of an MPO-RBM is shown in Fig. 4(d) for the specific case $d = 2$. The vertical edges between the different MPO-cores are the key ingredient in being able to express generic weight tensors $\mathcal{W}$, as opposed to the two disconnected weight matrices in the MvRBM model. The corresponding conditional distributions over the hidden and visible layers can be derived from the energy function and are visualized in Figs. 4(e) and (f) for the case $d = 2$. This involves the contraction of the weight MPO with either the hidden or visible layer tensor into a $d$-way tensor, which is then added elementwise to the corresponding bias tensor. The final step in the computation of the conditional probability is an elementwise application of the logistic sigmoid function to the resulting tensor.
IV-B. MPO-RBM learning algorithm
Let $\Theta$ denote the model parameters. The model learning task is then formulated as maximizing the training data log-likelihood

$$ L(\Theta) = \sum_{n=1}^{N} \log P(\mathcal{V}^{(n)}; \Theta) \qquad (6) $$

with respect to the model parameters $\Theta$. Similar to the standard RBM [1], the gradient of the log-likelihood is

$$ \frac{\partial \log P(\mathcal{V})}{\partial \Theta} = \mathbb{E}_{\mathrm{data}}\!\left[\frac{\partial (-E(\mathcal{V}, \mathcal{H}))}{\partial \Theta}\right] - \mathbb{E}_{\mathrm{model}}\!\left[\frac{\partial (-E(\mathcal{V}, \mathcal{H}))}{\partial \Theta}\right], \qquad (7) $$
where $\mathbb{E}_{\mathrm{data}}$ is the data expectation w.r.t. $P(\mathcal{H} \mid \mathcal{V})$, which can be computed by Fig. 4(f), and $\mathbb{E}_{\mathrm{model}}$ is the model expectation w.r.t. $P(\mathcal{V}, \mathcal{H})$, which can be approximately computed by the CD procedure. The main idea of the CD procedure is as follows: first, a Gibbs chain is initialized with one particular training sample $\mathcal{V}_0 = \mathcal{V}^{(n)}$. Figs. 4(e) and (f) are then computed $k$ times in an alternating fashion, which results in the chain $(\mathcal{V}_0, \mathcal{H}_0), (\mathcal{V}_1, \mathcal{H}_1), \ldots, (\mathcal{V}_k, \mathcal{H}_k)$. The model expectation is then approximated from $(\mathcal{V}_k, \mathcal{H}_k)$.
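A minimal sketch of one CD-k gradient estimate for a standard (vector) binary RBM with weight W and biases a, b; the MPO-RBM version replaces the matrix-vector products below with the tensor network contractions of Figs. 4(e) and (f):

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def cd_gradient(v0, W, a, b, k=1):
    """One CD-k estimate of the log-likelihood gradient w.r.t. W.
    A sketch for the standard binary RBM, not the MPO-specific version."""
    # Data term: <v h^T> with h driven by the training sample v0.
    h0 = sigmoid(W.T @ v0 + b)
    v, h = v0, h0
    for _ in range(k):
        # Alternate: sample h ~ P(h|v), then recompute v and h probabilities.
        h_s = (rng.random(h.shape) < h).astype(float)
        v = sigmoid(W @ h_s + a)
        h = sigmoid(W.T @ v + b)
    # Gradient estimate: data expectation minus model expectation.
    return np.outer(v0, h0) - np.outer(v, h)

W = rng.standard_normal((6, 4)) * 0.1
a, b = np.zeros(6), np.zeros(4)
v0 = (rng.random(6) < 0.5).astype(float)
g = cd_gradient(v0, W, a, b, k=1)
assert g.shape == (6, 4)
```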
The derivative of the log-likelihood with respect to the $i$th MPO-core is visualized in Fig. 5. This involves the computation of two fourth-order tensors, obtained by removing the $i$th MPO-core from the two tensor networks and summing over all connected edges. The resulting tensors are then subtracted elementwise from one another. The derivatives of the log-likelihood with respect to the bias tensors are, as in the standard RBM, the differences between the data and model expectations of the visible and hidden layer tensors, respectively.
We mainly use the CD procedure to train the MPO-RBM model. However, instead of updating all the MPO-cores simultaneously with one batch of input training data, we employ an alternating optimization procedure (AOP): only one MPO-core is updated at a time while the others are kept unchanged, using the same batch of input training data. We name this parameter learning algorithm CD-AOP; its pseudocode is given in Algorithm 1. The superiority of this alternating optimization over updating all MPO-cores simultaneously, which we call CD-SU from here on, will be demonstrated through numerical experiments.
Algorithm 1
MPO-RBM learning algorithm (CD-AOP)
Input: Training data of $N$ tensors, the maximum iteration number, the batch size, the momentum, the learning rate, and the CD step $k$.
Output: Model parameters $\Theta$.
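The alternating update order of CD-AOP can be sketched schematically as follows. The gradient routine `grad_fn` is a placeholder for the core-wise contraction of Fig. 5, not an actual implementation; the skeleton only shows that, per batch, each core is updated in turn with the others held fixed, using a momentum term as in the hyperparameter list above:

```python
def train_cd_aop(cores, batches, grad_fn, lr=0.01, momentum=0.5, epochs=1):
    """Schematic CD-AOP loop; grad_fn(cores, k, batch) stands in for the
    CD gradient of the k-th MPO-core (the contraction shown in Fig. 5)."""
    velocities = [0.0 for _ in cores]
    for _ in range(epochs):
        for batch in batches:
            for k in range(len(cores)):
                # Only core k changes in this step; the others stay fixed.
                g = grad_fn(cores, k, batch)
                velocities[k] = momentum * velocities[k] + lr * g
                cores[k] = cores[k] + velocities[k]
    return cores

# Toy usage with scalar "cores" and a constant placeholder gradient.
out = train_cd_aop([0.0, 0.0], batches=[None],
                   grad_fn=lambda c, k, b: 1.0,
                   lr=0.1, momentum=0.0, epochs=1)
assert out == [0.1, 0.1]
```

In contrast, CD-SU would compute all core gradients from the same configuration first and then apply every update at once.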
V. Experiments
In this section, the MPO-RBM is compared with both the standard RBM and the MvRBM. Specifically, we first investigate the performance and scalability of these models as feature extractors for classification, together with the influence of the MPO-ranks on the expressive power of the MPO-RBM. Furthermore, we demonstrate the generative capacity of the MPO-RBM through an image completion task and an image denoising task. For the hyperparameters, we mostly use the same default values as in the MvRBM paper, namely for the momentum and the CD step. For the learning rate, we consider a set of candidate values and choose the one that achieves the best validation accuracy. Moreover, we use the same maximum iteration number and batch size for all methods. All experiments are run in MATLAB on an Intel i5 3.2 GHz desktop with 16 GB RAM.
V-A. Data classification
In the first experiment, we demonstrate the superior classification accuracy of the MPO-RBM on several standard machine learning datasets, namely the Binary Alphadigits dataset (https://cs.nyu.edu/~roweis/data.html), the normalized DrivFace dataset (https://archive.ics.uci.edu/ml/datasets/DrivFace), Arcene (https://archive.ics.uci.edu/ml/datasets/Arcene), and the COIL100 dataset (http://www1.cs.columbia.edu/CAVE/software/softlib/coil100.php). The data dimensions of these datasets vary from 320 to 49152. Since we assume binary inputs in our RBM setting, for non-binary datasets we first use a multi-bit vector to represent each value in the original data. The additional multi-bit dimension is combined with the RGB channel if the dataset consists of color images. Table I shows detailed information on these datasets, while Tables II and III show the visible and hidden layer structures of the different RBM models for each dataset.

Datasets  Original data size  Data value range

Alphadigits  x  Binary 
DrivFace  x   integer 
Arcene  x   integer 
COIL100  x x   integer 
Datasets  RBM  MvRBM  MPO-RBM

Alphadigits  x  x  x 
DrivFace  x  x  x x 
Arcene  x  x  x x 
COIL100  x  x  x x 
Datasets  RBM  MvRBM  MPO-RBM

Alphadigits  x  x  x 
DrivFace  x  x  x x 
Arcene  x  x  x x 
COIL100  x  x  x x 
We randomly separate each of the above four datasets into training, validation, and testing parts. To examine the regime where training sample sizes are insufficient to train a good RBM model, the number of training samples we chose per class is significantly smaller than the data dimension, commonly fewer than 35. The MPO-RBM ranks were chosen empirically from the range 10 to 50, depending on the model input size. The trained RBM models were then employed to extract features from the hidden layer, and these features were used to train a nearest neighbor (NN) classifier for all experiments. Table IV lists the resulting classification errors.

Datasets  RBM  MvRBM  MPO-RBM CD-SU/AOP

Alphadigits  /  
DrivFace  /  
Arcene  /  
COIL100  / 
The restrictive Kronecker product assumption on the weight matrix in the MvRBM explains why it has the worst classification performance on all datasets. The standard RBM also performs worse than the MPO-RBM, which may be caused by overfitting due to the small training sample size. Notably, due to memory constraints, the standard RBM fails to learn the enormous number of parameters in the full weight matrix for the COIL100 dataset, whereas the MPO-RBM still achieves the best classification performance thanks to the general MPO weight structure. Moreover, the CD-AOP algorithm attains a higher classification accuracy than CD-SU, which further indicates that the alternating updating scheme is better suited to our MPO-RBM model.
V-B. Influence of the MPO-rank on classification error
In this experiment, we demonstrate how the expressive power of the MPO-RBM is determined by the MPO-ranks. For this purpose, the MNIST dataset (http://yann.lecun.com/exdb/mnist/) was used to train an MPO-RBM for image classification. Each MNIST sample is a grayscale picture of a handwritten digit {0, ..., 9}, where each pixel value was converted into a binary number. The hidden layer states of the RBM model were again regarded as features extracted from the training data and used in a NN classifier, and the dimensions of the hidden layer were kept fixed across runs. The MPO-rank was varied from 2 up to 200. To reveal how the expressive power of the MPO-RBM depends on the MPO-ranks, these experiments were run for training sample sizes of 3000, 4000, and 5000, randomly chosen from the training set. In addition, 10000 samples of the training set were used for validation. The classification error for each of these runs is shown in Fig. 6 as a function of the MPO-rank. Note that an MPO-RBM with unit MPO-rank is identical to the MvRBM for these matrix inputs. It can be seen that the classification error stabilizes once the MPO-rank is moderately large, which indicates that a low-rank MPO may be powerful enough to model the latent probability distribution of the training data, saving considerable storage and computational resources. We mention that by setting the MPO-ranks to their maximal values, the MPO-RBM attains the same model expression ability as a standard RBM. This experiment, however, shows that a low-rank MPO-RBM is more than enough to model real-life data.
V-C. Image completion and denoising
In this experiment, we show that the MPO-RBM is good at generative modeling by implementing image completion and denoising. We test these generative tasks on the binarized MNIST dataset. Specifically, we randomly choose samples from the 60k training set to train the RBM models and use all test samples to evaluate the generative performance. The standard RBM is set up with a vectorial visible layer and hidden layer, while both the MvRBM and the MPO-RBM are constructed with 2-way tensorial visible and hidden layers. The MPO-rank in this experiment is set to 40, based on the observations from the previous experiment. The visible/hidden layer structure and the total number of weight parameters of each RBM model are listed in Table V.

RBM  MvRBM  MPO-RBM CD-AOP

Visible structure  x  x  x
Hidden structure  x  x  x
# parameters 
In the image completion task, one half of the image is given to the trained RBM models, which then complete the other half. As mentioned in [14], RBM completion performance differs depending on whether the given half image is in the column or row direction. We therefore investigate the completion ability of the different RBM models when given the right half and the bottom half of the image, respectively. Figures 7 and 8 show the completed images of the different RBM models when given the same randomly selected right and bottom halves, respectively. It is clear that the MvRBM is not able to complete the images, which further confirms the necessity of the MPO generalization. Table VI further lists the average PSNR between the completed images and the original binarized images over the whole test set. We found that the MPO-RBM demonstrates image completion performance comparable to a standard RBM, but with far fewer model parameters.
Given half  RBM  MvRBM  MPO-RBM CD-AOP

Right half  dB  dB  dB 
Bottom half  dB  dB  dB 
In the image denoising task, we employ the same trained RBM models from the completion task and randomly add a certain percentage of salt-and-pepper noise to the test set. The average PSNR between the original binarized pictures and the denoised pictures over the whole test set is listed in Table VII. The denoising performance of the MPO-RBM is comparable with a standard RBM at the lowest noise level, and better when higher percentages of noise are added. The smaller number of model parameters in the MPO-RBM may lead to a more robust generative model.
% noise  RBM  MvRBM  MPO-RBM CD-AOP

%  dB  dB  dB 
%  dB  dB  dB 
%  dB  dB  dB 
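For reference, a minimal sketch of the PSNR computation for binarized images with peak value 1; the paper does not spell out its exact PSNR convention, so this is the assumed standard definition:

```python
import numpy as np

def psnr(ref, rec, peak=1.0):
    """PSNR in dB between a reference image and its reconstruction."""
    mse = np.mean((ref.astype(float) - rec.astype(float)) ** 2)
    if mse == 0:
        return np.inf  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.array([[0, 1], [1, 0]])
rec = np.array([[0, 1], [1, 1]])   # one of four pixels flipped
# MSE = 1/4, so PSNR = 10*log10(4) ~ 6.02 dB
assert abs(psnr(ref, rec) - 10 * np.log10(4)) < 1e-9
```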
VI. Conclusion
This paper has proposed the MPO-RBM, which preserves the tensorial nature of the input data and utilizes a matrix product operator (MPO) to represent the weight matrix. The MPO-RBM generalizes existing RBM models to tensor inputs and has better storage complexity, since the number of parameters grows only linearly with the order of the tensor. Furthermore, by representing both the visible and hidden layers as tensors, useful structural information in the original data is retained. To facilitate the exposition of the various computations, tensor network diagrams were introduced and used throughout the paper.
There are several avenues for future research. If both the visible and hidden layers are represented in a matrix product state (MPS) form, then all computations can be done on individual cores. This could significantly improve the computational complexity of the learning algorithm.
Furthermore, analogous to stacking RBMs into a Deep Belief Network (DBN) or Deep Boltzmann Machine (DBM), MPO-RBMs can be stacked into an MPO-DBN or MPO-DBM to extend their expressive power and applicability. Again, such stacking can also gain computational benefits from keeping all layers in MPS form.
References
 [1] G. E. Hinton, “Training products of experts by minimizing contrastive divergence,” Neural computation, vol. 14, no. 8, pp. 1771–1800, 2002.
 [2] H. Larochelle and Y. Bengio, “Classification using discriminative restricted Boltzmann machines,” in Proceedings of the 25th international conference on Machine learning. ACM, 2008, pp. 536–543.

[3] A. Krizhevsky, G. Hinton et al., “Factored 3-way restricted Boltzmann machines for modeling natural images,” in Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 2010, pp. 621–628.
[4] A.-r. Mohamed and G. Hinton, “Phone recognition using restricted Boltzmann machines,” in Acoustics Speech and Signal Processing (ICASSP), 2010 IEEE International Conference on. IEEE, 2010, pp. 4354–4357.
 [5] G. Qi, Y. Sun, J. Gao, Y. Hu, and J. Li, “Matrix variate restricted Boltzmann machine,” in Neural Networks (IJCNN), 2016 International Joint Conference on. IEEE, 2016, pp. 389–395.
[6] T. D. Nguyen, T. Tran, D. Q. Phung, S. Venkatesh et al., “Tensor-variate restricted Boltzmann machines,” in AAAI, 2015, pp. 2887–2893.
 [7] R. Orús, “A practical introduction to tensor networks: Matrix product states and projected entangled pair states,” Annals of Physics, vol. 349, pp. 117–158, 2014.
 [8] G. W. Taylor and G. E. Hinton, “Factored conditional restricted Boltzmann machines for modeling motion style,” in Proceedings of the 26th annual international conference on machine learning. ACM, 2009, pp. 1025–1032.
 [9] T. G. Kolda and B. W. Bader, “Tensor decompositions and applications,” SIAM review, vol. 51, no. 3, pp. 455–500, 2009.
 [10] J. Chen, S. Cheng, H. Xie, L. Wang, and T. Xiang, “Equivalence of restricted Boltzmann machines and tensor network states,” Phys. Rev. B, vol. 97, p. 085104, Feb 2018.
[11] I. V. Oseledets, “Tensor-train decomposition,” SIAM J. Sci. Comput., vol. 33, no. 5, pp. 2295–2317, 2011.
 [12] T. Tieleman, “Training restricted Boltzmann machines using approximations to the likelihood gradient,” in Proceedings of the 25th international conference on Machine learning. ACM, 2008, pp. 1064–1071.
 [13] T. Tieleman and G. Hinton, “Using fast weights to improve persistent contrastive divergence,” in Proceedings of the 26th Annual International Conference on Machine Learning. ACM, 2009, pp. 1033–1040.
[14] Z.-Y. Han, J. Wang, H. Fan, L. Wang, and P. Zhang, “Unsupervised generative modeling using matrix product states,” Physical Review X, vol. 8, no. 3, p. 031012, 2018.