
LSTM: About sequencing the data in 7-RNN_Classifier_example.py #66

Open
rosefun opened this issue Apr 27, 2018 · 0 comments

rosefun commented Apr 27, 2018

I have a dataset X = {x1, x2, x3, ..., xn} with shape [n, m], where x1, x2, ..., xn are the samples of X, and label data y with shape [n, k].
If I use a time window of length 2, then after reshaping
X = tf.reshape(X, [int(n/2), 2, m])
X has shape [n/2, 2, m], i.e. only n/2 sequences are left.
But then I have a problem computing the cost with
cost_rnn = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=y_, labels=y))
because the logits produced from X (n/2 rows) and the labels y (still n rows) no longer have matching shapes.

Does anybody know how to solve this problem?
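One way to make the shapes agree again is to window the labels with the same scheme used for the inputs. Below is a minimal NumPy sketch (not from the repository) under the assumption that each window should be classified by the label of its last sample, so the per-sequence logits and labels both end up with n/2 rows:

```python
import numpy as np

n, m, k = 6, 3, 3            # toy sizes: samples, features per sample, classes
time_step = 2

X = np.arange(n * m, dtype=np.float32).reshape(n, m)      # [n, m] dummy inputs
y = np.eye(k, dtype=np.float32)[np.arange(n) % k]          # [n, k] dummy one-hot labels

# Non-overlapping windows: n // time_step sequences of length time_step.
n_seq = n // time_step
X_seq = X[:n_seq * time_step].reshape(n_seq, time_step, m)  # [n/2, 2, m]

# Align the labels with the windows. One common choice (an assumption here,
# not dictated by the example) is to keep the label of the LAST sample in
# each window, so there is exactly one label row per sequence.
y_seq = y[:n_seq * time_step].reshape(n_seq, time_step, k)[:, -1, :]  # [n/2, k]

print(X_seq.shape, y_seq.shape)   # (3, 2, 3) (3, 3)
```

With y_seq built this way, softmax_cross_entropy_with_logits compares tensors of shape [n/2, k] on both sides, so the cost line works unchanged.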


To restate the question with a concrete example: there is a dataset X = {x1, x2, x3, ..., xn} with shape [n, m], where each xi contains several variables and has shape [m].
For example, X = [[1,10,100],[2,20,200],[3,30,300]] can be viewed as X consisting of the samples x1, x2, ...

The labels are y = {y1, y2, ..., yn} with shape [n, k],
for example Y = [[1,0,0],[0,1,0],[0,0,1]].
If the LSTM sequences this data, say with a time window time_step = 2,
X = tf.reshape(X, [int(n/2), 2, m])

then after sequencing only n/2 sequences of X remain, with shape [n/2, 2, m].
Because the dimensions no longer match, the cost cannot be computed:
cost_rnn = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=y_rnn, labels=y))

How should a dataset like this be handled?
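If halving the number of samples is not acceptable, an alternative (again just a sketch; make_sliding_windows is a hypothetical helper, not part of 7-RNN_Classifier_example.py) is to use overlapping windows, which keeps n - time_step + 1 sequences and still gives one label per window:

```python
import numpy as np

def make_sliding_windows(X, y, time_step=2):
    # Overlapping windows keep n - time_step + 1 sequences instead of n // time_step.
    # Each window is labelled with the label of its last sample (an assumption,
    # not something fixed by the original example).
    n_seq = len(X) - time_step + 1
    X_seq = np.stack([X[i:i + time_step] for i in range(n_seq)])  # [n_seq, time_step, m]
    y_seq = y[time_step - 1:]                                      # [n_seq, k]
    return X_seq, y_seq

# The toy data from the question above.
X = np.array([[1, 10, 100], [2, 20, 200], [3, 30, 300]], dtype=np.float32)
Y = np.eye(3, dtype=np.float32)    # [[1,0,0],[0,1,0],[0,0,1]]

X_seq, Y_seq = make_sliding_windows(X, Y, time_step=2)
print(X_seq.shape, Y_seq.shape)    # (2, 2, 3) (2, 3)
```

Either way, the key point is that the labels have to be windowed together with X, so that the logits and labels passed to the cost function share the same first dimension.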
