Step1: Please select an analysis mode.

Binary classification

The process of dividing new observations into two distinct categories based on the features of a set of input data.

Multiclass classification

The process of dividing new observations into at least three distinct categories based on the features of a set of input data.

Regression

The process of exploring the relationships between dependent variables (labels) and a series of independent variables (features).

Survival

The process of establishing the connection between covariates (features) and the time of events (labels) to predict the risk of future events.
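
For intuition, here is a minimal sketch (in Python, using pandas, with purely hypothetical values) of how the label column differs between the four analysis modes; the actual labels come from your own input file.

    import pandas as pd

    # Hypothetical label layouts for the four analysis modes (illustration only).
    binary_labels = pd.Series([0, 1, 1, 0])               # binary: two distinct classes
    multiclass_labels = pd.Series(["A", "B", "C", "A"])   # multiclass: at least three classes
    regression_labels = pd.Series([2.4, 3.1, 0.7, 5.9])   # regression: continuous values
    survival_labels = pd.DataFrame({
        "time": [120, 340, 90, 510],   # time to the event (or to censoring)
        "event": [1, 0, 1, 1],         # 1 = event observed, 0 = censored
    })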


Step2: Please upload the files.



File  OR  ProjectID



File: Please upload a CSV file with features in rows and samples in columns, no larger than 200 MB. Please refer to the sample file for the exact format of the file (a loading sketch follows below).

ProjectID: A unique identifier for each task, which can be used to access the results of analyzed tasks.
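
As a rough illustration of the expected layout (features in rows, samples in columns), the following Python sketch loads such a file with pandas and transposes it into the samples-in-rows orientation that most modelling libraries expect; the file name input.csv is hypothetical.

    import pandas as pd

    # Hypothetical file name; expected layout: features in rows, samples in columns.
    df = pd.read_csv("input.csv", index_col=0)

    # Most modelling libraries (e.g. scikit-learn) expect samples in rows,
    # so transpose before fitting.
    X = df.T
    print(X.shape)  # (n_samples, n_features)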


Step3: Please select a feature normalization method. (Optional)


 MinMax    Z-Score    MaxAbs    None  
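
These options correspond to standard feature scalers; as a rough sketch (assuming scikit-learn, which may differ from the server's own implementation), the three named methods map to MinMaxScaler, StandardScaler and MaxAbsScaler, while None leaves the raw values unchanged.

    import numpy as np
    from sklearn.preprocessing import MinMaxScaler, StandardScaler, MaxAbsScaler

    X = np.array([[1.0, 200.0], [2.0, 300.0], [3.0, 400.0]])  # toy data, samples in rows

    X_minmax = MinMaxScaler().fit_transform(X)    # MinMax: rescale each feature to [0, 1]
    X_zscore = StandardScaler().fit_transform(X)  # Z-Score: zero mean, unit variance per feature
    X_maxabs = MaxAbsScaler().fit_transform(X)    # MaxAbs: divide by the maximum absolute value
    # None: use the raw feature values unchanged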


Step4: Please select a feature selection method.


 ANOVA    MRMR  



 TopK    FSS    BSS  
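
Assuming ANOVA refers to the ANOVA F-test score, TopK to keeping the k highest-scoring features, and FSS/BSS to forward/backward sequential selection (MRMR is usually provided by third-party packages and is omitted here), a scikit-learn sketch of these combinations might look as follows; it is not the server's actual implementation.

    from sklearn.datasets import make_classification
    from sklearn.feature_selection import SelectKBest, SequentialFeatureSelector, f_classif
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=100, n_features=20, random_state=0)  # toy data

    # ANOVA + TopK: keep the k features with the highest ANOVA F-scores
    X_topk = SelectKBest(score_func=f_classif, k=5).fit_transform(X, y)

    # FSS / BSS: forward / backward sequential selection around a base estimator
    fss = SequentialFeatureSelector(LogisticRegression(max_iter=1000),
                                    n_features_to_select=5,
                                    direction="forward")  # use "backward" for BSS
    X_fss = fss.fit_transform(X, y)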


Step5: Please choose a processing method.


Naive Bayes

A simple probabilistic classification model based on Bayes’ theorem with the assumption of independence between features.

SVM

SVM creates a decision boundary, the maximum-margin separating hyperplane, that enables the prediction of labels from one or more features.

RandomForest

An ensemble estimator that fits a number of decision tree classifiers on various sub-samples of the dataset.

Logistic

Logistic regression applies a nonlinear (sigmoid) function to a linear combination of the features to estimate the probability of an event occurring.

KNN

A non-parametric method that assigns labels to samples by a majority vote over the labels of the selected nearest neighbors.

XGBoost

A scalable and highly accurate implementation of gradient boosting that integrates multiple parallel tree models to build a more powerful learner.

lightGBM

A gradient boosting framework that uses tree-based learning algorithms. It uses histogram-based algorithms to speed up training and reduce memory usage.

Adaboost

An ensemble estimator based on multiple weak prediction models. It adjusts the weights of misclassified instances so that subsequent classifiers focus more on difficult cases.

DecisionTree

A non-parametric algorithm that classifies a population into branch-like segments, constructing an inverted tree with a root node, internal nodes, and leaf nodes.

GBDT

An ensemble estimator based on multiple decision tree models. It fits each new decision tree on the residual errors of the previous trees.
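
The classifiers above (Naive Bayes through GBDT) all have counterparts with default parameters in common libraries; the following is a minimal, non-authoritative sketch of fitting several of them, assuming scikit-learn and synthetic data (XGBoost and lightGBM require their own packages and are omitted).

    from sklearn.datasets import make_classification
    from sklearn.ensemble import (AdaBoostClassifier, GradientBoostingClassifier,
                                  RandomForestClassifier)
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=200, n_features=20, random_state=0)  # toy data

    models = {
        "Naive Bayes": GaussianNB(),
        "SVM": SVC(),
        "RandomForest": RandomForestClassifier(),
        "Logistic": LogisticRegression(max_iter=1000),
        "KNN": KNeighborsClassifier(),
        "Adaboost": AdaBoostClassifier(),
        "DecisionTree": DecisionTreeClassifier(),
        "GBDT": GradientBoostingClassifier(),
    }
    for name, model in models.items():
        scores = cross_val_score(model, X, y, cv=5)  # 5-fold cross-validated accuracy
        print(f"{name}: mean CV accuracy = {scores.mean():.3f}")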

LinearRegression

It establishes a linear relationship between the label and the features, minimizing the residual sum of squares between the observed targets and the predicted targets.

SVM

It aims to create a decision boundary, the maximum-margin hyperplane, such that all data points in the set lie at the shortest possible distance from it.

Ridge

A linear regression model whose loss function is the linear least squares function and whose regularization is given by the L2 norm.

Lasso

A linear regression model that minimizes the residual sum of squares with an L1 penalty on the coefficients; it therefore tends to produce some coefficients that are exactly 0 and hence gives interpretable models.

DecisionTree

A non-parametric model that predicts the values of target variables by learning simple decision rules inferred from data features.

XGBoost

A scalable and highly accurate implementation of gradient boosting that integrates multiple parallel tree models to build a more powerful learner.

RandomForest

An ensemble estimator that fits a number of decision tree classifiers on various sub-samples of the dataset.

Adaboost

An ensemble estimator based on multiple weak prediction models. It adjusts the weights of misclassified instances so that subsequent classifiers focus more on difficult cases.

GradientBoost

An ensemble estimator based on multiple decision tree models. It fits each new decision tree on the residual errors of the previous trees.
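
Similarly, the regression models above (LinearRegression through GradientBoost) can be sketched with their scikit-learn counterparts on synthetic data; this is an illustration with default parameters, not the server's actual pipeline (XGBoost again requires its own package).

    from sklearn.datasets import make_regression
    from sklearn.ensemble import (AdaBoostRegressor, GradientBoostingRegressor,
                                  RandomForestRegressor)
    from sklearn.linear_model import Lasso, LinearRegression, Ridge
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVR
    from sklearn.tree import DecisionTreeRegressor

    X, y = make_regression(n_samples=200, n_features=20, noise=0.1, random_state=0)  # toy data

    models = {
        "LinearRegression": LinearRegression(),
        "SVM": SVR(),
        "Ridge": Ridge(),
        "Lasso": Lasso(),
        "DecisionTree": DecisionTreeRegressor(),
        "RandomForest": RandomForestRegressor(),
        "Adaboost": AdaBoostRegressor(),
        "GradientBoost": GradientBoostingRegressor(),
    }
    for name, model in models.items():
        scores = cross_val_score(model, X, y, cv=5, scoring="r2")  # 5-fold cross-validated R^2
        print(f"{name}: mean CV R^2 = {scores.mean():.3f}")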

SurvivalSVM

The support vector machine (SVM) method adapted for survival analysis.

SurvivalTree

A decision tree method adapted for survival analysis.

ExtraSurvivalTrees

An ensemble estimator that fits a number of randomized survival trees on various sub-samples of the dataset.

RandomSurvivalForest

An ensemble estimator that fits a number of survival trees on various sub-samples of the dataset.

GradientBoostingSurvival

Gradient boosting of the Cox proportional hazards loss with regression trees as base learners.
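
The survival models above correspond to estimators available in the scikit-survival (sksurv) package; as a rough sketch on synthetic right-censored data (assuming that package rather than the server's actual backend), labels combine an event indicator with a time, and model.score reports the concordance index.

    import numpy as np
    from sksurv.ensemble import GradientBoostingSurvivalAnalysis, RandomSurvivalForest
    from sksurv.svm import FastSurvivalSVM
    from sksurv.util import Surv

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 10))                     # toy feature matrix
    time = rng.exponential(scale=365.0, size=200)      # toy survival/censoring times
    event = rng.integers(0, 2, size=200).astype(bool)  # True = event observed, False = censored
    y = Surv.from_arrays(event=event, time=time)       # structured labels expected by sksurv

    models = {
        "SurvivalSVM": FastSurvivalSVM(),
        "RandomSurvivalForest": RandomSurvivalForest(n_estimators=50),
        "GradientBoostingSurvival": GradientBoostingSurvivalAnalysis(),  # Cox PH loss by default
    }
    for name, model in models.items():
        model.fit(X, y)
        print(f"{name}: concordance index = {model.score(X, y):.3f}")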


One-Click Analysis: The models are built with default parameters and the data are analyzed using all the models provided.

Customized Analysis: Users can specify the models and parameters to personalize the analysis.


Step6: Email (Optional)



Given the potentially long duration of the analysis, we recommend submitting your email address above. This will enable us to notify you by email once the analysis is completed and provide you with access to your result page.





Please access your task status or results via this link.