I was trying to use the Kennard-Stone algorithm to partition my dataset into training and test sets, but the resulting test set contained only samples of class 2 (the y values are class labels, 1 or 2). Why does this happen, and is there a way to solve it?
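One common remedy (a sketch, not something prescribed by the thread) is to run Kennard-Stone separately within each class and merge the per-class selections, so both classes are guaranteed to appear in both subsets. Here `kenstone(X, k)` is a hypothetical placeholder for whatever KS routine you use, assumed to return the indices of the k selected training samples:

```matlab
% Per-class Kennard-Stone split (sketch).
% Assumption: kenstone(X, k) returns indices of k selected training samples.
nTrainFrac = 0.7;                        % desired training fraction
trainIdx   = [];
for c = [1 2]                            % the two class labels
    idxC = find(y == c);                 % samples belonging to class c
    kC   = round(nTrainFrac * numel(idxC));
    selC = kenstone(X(idxC, :), kC);     % run KS within this class only
    trainIdx = [trainIdx; idxC(selC(:))]; % map back to global indices
end
testIdx = setdiff((1:size(X, 1))', trainIdx);
```

Because KS picks the most mutually distant samples for training, an unstratified run on class-imbalanced or class-separated data can push one entire class into the remainder (the test set); splitting per class avoids that.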

The steps above are just one possible workflow, not a fixed recipe. For example, if your data are known to be of high quality, outlier detection can be omitted.

Regarding the initial value of A: it is, again, data-dependent. The key is not the initial setting of A but the selection of the optimal A by cross-validation. If you are not certain, you can initialize A with a large value, even though the optimal A may turn out to be small, say 3.
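Selecting the optimal A by cross-validation can be sketched as follows, reusing `pls` and `plsval` as they appear later in this thread (their exact signatures may differ in your libPLS version; the second output of `plsval` is assumed to be an RMSEP, as in the snippet below):

```matlab
% Sketch: choose the optimal number of LVs by K-fold cross-validation.
Amax = 15;                          % generous upper bound on LVs to try
K    = 5;                           % number of CV folds
n    = size(X, 1);
fold = mod((1:n)' - 1, K) + 1;      % simple round-robin fold assignment
rmsecv = zeros(Amax, 1);
for A = 1:Amax
    sse = 0;
    for k = 1:K
        tr = fold ~= k;  te = fold == k;
        model = pls(X(tr, :), y(tr), A);               % fit with A LVs
        [~, rmsep] = plsval(model, X(te, :), y(te));   % error on fold k
        sse = sse + rmsep^2 * sum(te);                 % accumulate SSE
    end
    rmsecv(A) = sqrt(sse / n);      % overall CV error for this A
end
[~, optA] = min(rmsecv);            % A with the smallest RMSECV
```

As long as Amax is comfortably above the minimum of the RMSECV curve, its exact value does not matter much; the curve, not the starting point, determines the chosen A.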

1) Outlier detection

2) Cross-validation to choose the optimal number of LVs

3) Build a PLS regression model

I want to know whether these steps are right or not.

I am confused about the selection of the optimal number of LVs: in the example, with A=6 the optimal number of LVs is 6, but with A=30 it is 24.

So I do not know what initial value of A I should set.

Looking forward to your reply!

Best Wishes!

Yours, Jia

Could you tell me how the performance of the libPLS toolbox compares with that of off-the-shelf packages such as The Unscrambler?

Best regards,

Ahmad

How can I save the model created by PLS? I want to use the saved model on unknown samples. Is there a way to save the model by changing the following lines:

PLS=pls(Xcal,ycal,10); %+++ Build a PLS regression model using training set

[ypred,RMSEP]=plsval(PLS,Xtest,ytest); %+++ make predictions on test set
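A minimal save/load sketch (assuming the struct returned by `pls` is self-contained, i.e. it carries everything `plsval` needs), using MATLAB's standard `save`/`load`; `Xnew` and `ynew` are hypothetical new-sample data:

```matlab
PLS = pls(Xcal, ycal, 10);          % build the model as in the thread
save('plsmodel.mat', 'PLS');        % persist the fitted model struct

% Later, possibly in a fresh MATLAB session:
S = load('plsmodel.mat');           % S.PLS holds the restored model
[ypred, RMSEP] = plsval(S.PLS, Xnew, ynew);  % predict the new samples
```

Note that `plsval` as used in this thread expects reference y values; if your unknown samples are truly unlabeled, you would instead apply the model's regression coefficients to `Xnew` directly (check the fields of the `PLS` struct in your toolbox version).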

Looking forward to your reply,

Thanks & Regards,

Kishore
