diff --git a/README.md b/README.md
index 4f9b68cece..21bc0219e0 100644
--- a/README.md
+++ b/README.md
@@ -17,6 +17,7 @@ Recent released features
| High-frequency trading example | [Part of code released](https://github.com/microsoft/qlib/pull/227) on Jan 28, 2021 |
| High-frequency data(1min) | [Released](https://github.com/microsoft/qlib/pull/221) on Jan 27, 2021 |
| Tabnet Model | [Released](https://github.com/microsoft/qlib/pull/205) on Jan 22, 2021 |
+| TCTS Model | [Released](https://github.com/microsoft/qlib/pull/491) on July 1, 2021 |
Features released before 2021 are not listed here.
@@ -288,6 +289,7 @@ Here is a list of models built on `Qlib`.
- [TFT based on tensorflow (Bryan Lim, et al. 2019)](examples/benchmarks/TFT/tft.py)
- [TabNet based on pytorch (Sercan O. Arik, et al. 2019)](qlib/contrib/model/pytorch_tabnet.py)
- [DoubleEnsemble based on LightGBM (Chuheng Zhang, et al. 2020)](qlib/contrib/model/double_ensemble.py)
+- [TCTS based on pytorch (Xueqing Wu, et al. 2021)](qlib/contrib/model/pytorch_tcts.py)
Your PR of new Quant models is highly welcomed.
diff --git a/examples/benchmarks/TCTS/TCTS.md b/examples/benchmarks/TCTS/TCTS.md
deleted file mode 100644
index ee67ffbeb1..0000000000
--- a/examples/benchmarks/TCTS/TCTS.md
+++ /dev/null
@@ -1,52 +0,0 @@
-# Temporally Correlated Task Scheduling for Sequence Learning
-We provide the [code](https://github.com/microsoft/qlib/blob/main/qlib/contrib/model/pytorch_tcts.py) for reproducing the stock trend forecasting experiments.
-
-### Background
-Sequence learning has attracted much research attention from the machine learning community in recent years. In many applications, a sequence learning task is usually associated with multiple temporally correlated auxiliary tasks, which are different in terms of how much input information to use or which future step to predict. In stock trend forecasting, as demonstrated in Figure1, one can predict the price of a stock in different future days (e.g., tomorrow, the day after tomorrow). In this paper, we propose a framework to make use of those temporally correlated tasks to help each other.
-
-
-
-
-
-
-### Method
-Given that there are usually multiple temporally correlated tasks, the key challenge lies in which tasks to use and when to use them in the training process. In this work, we introduce a learnable task scheduler for sequence learning, which adaptively selects temporally correlated tasks during the training process. The scheduler accesses the model status and the current training data (e.g., in current minibatch), and selects the best auxiliary task to help the training of the main task. The scheduler and the model for the main task are jointly trained through bi-level optimization: the scheduler is trained to maximize the validation performance of the model, and the model is trained to minimize the training loss guided by the scheduler. The process is demonstrated in Figure2.
-
-
-
-
-
-At step $t$, with the current training data, the scheduler chooses a suitable auxiliary task (green solid lines) to update the model (blue solid lines). After a fixed number of steps, we evaluate the model on the validation set and update the scheduler (green dashed lines).
-
-### Dataset
-* We use the historical transaction data for 300 stocks on [CSI300](http://www.csindex.com.cn/en/indices/index-detail/000300) from 01/01/2008 to 08/01/2020.
-* We split the data into training (01/01/2008-12/31/2013), validation (01/01/2014-12/31/2015), and test sets (01/01/2016-08/01/2020) based on the transaction time.
-
-### Experiments
-#### Task Description
-* The main task (see Figure 1) refers to forecasting the return of a stock as follows,
-
-  $r_{t} = \mathrm{price}_{t+2} / \mathrm{price}_{t+1} - 1$
-
-* Temporally correlated task sets: in this paper, three task sets built from different forecasting horizons are used.
-#### Baselines
-* GRU/MLP/LightGBM (LGB)/Graph Attention Networks (GAT)
-* Multi-task learning (MTL): In multi-task learning, multiple tasks are jointly trained and mutually boosted. Each task is treated equally, while in our setting, we focus on the main task.
-* Curriculum transfer learning (CL): Transfer learning also leverages auxiliary tasks to boost the main task. [Curriculum transfer learning](https://arxiv.org/pdf/1804.00810.pdf) is one kind of transfer learning which schedules auxiliary tasks according to certain rules. Our problem can also be regarded as a special kind of transfer learning, where the auxiliary tasks are temporally correlated with the main task. Our learning process is dynamically controlled by a scheduler rather than some pre-defined rules. In the CL baseline, we start from the first auxiliary task, then the next, and gradually move to the last one.
-#### Result
-| Methods | Main task 1 | Main task 2 | Main task 3 |
-| :----: | :----: | :----: | :----: |
-| GRU | 0.049 / 1.903 | 0.018 / 1.972 | 0.014 / 1.989 |
-| MLP | 0.023 / 1.961 | 0.022 / 1.962 | 0.015 / 1.978 |
-| LGB | 0.038 / 1.883 | 0.023 / 1.952 | 0.007 / 1.987 |
-| GAT | 0.052 / 1.898 | 0.024 / 1.954 | 0.015 / 1.973 |
-| MTL (task set 1) | 0.061 / 1.862 | 0.023 / 1.942 | 0.012 / 1.956 |
-| CL (task set 1) | 0.051 / 1.880 | 0.028 / 1.941 | 0.016 / 1.962 |
-| Ours (task set 1) | 0.071 / 1.851 | 0.030 / 1.939 | 0.017 / 1.963 |
-| MTL (task set 2) | 0.057 / 1.875 | 0.021 / 1.939 | 0.017 / 1.959 |
-| CL (task set 2) | 0.056 / 1.877 | 0.028 / 1.942 | 0.015 / 1.962 |
-| Ours (task set 2) | 0.075 / 1.849 | 0.032 / 1.939 | 0.021 / 1.955 |
-| MTL (task set 3) | 0.052 / 1.882 | 0.020 / 1.947 | 0.019 / 1.952 |
-| CL (task set 3) | 0.051 / 1.882 | 0.028 / 1.950 | 0.016 / 1.961 |
-| Ours (task set 3) | 0.067 / 1.867 | 0.030 / 1.960 | 0.022 / 1.942 |
\ No newline at end of file
diff --git a/examples/benchmarks/TCTS/workflow_config_tcts_Alpha360.yaml b/examples/benchmarks/TCTS/workflow_config_tcts_Alpha360.yaml
index 589f4b43ea..89c66f992a 100644
--- a/examples/benchmarks/TCTS/workflow_config_tcts_Alpha360.yaml
+++ b/examples/benchmarks/TCTS/workflow_config_tcts_Alpha360.yaml
@@ -22,11 +22,9 @@ data_handler_config: &data_handler_config
- class: CSRankNorm
kwargs:
fields_group: label
- label: ["Ref($close, -2) / Ref($close, -1) - 1",
- "Ref($close, -3) / Ref($close, -1) - 1",
- "Ref($close, -4) / Ref($close, -1) - 1",
- "Ref($close, -5) / Ref($close, -1) - 1",
- "Ref($close, -6) / Ref($close, -1) - 1"]
+ label: ["Ref($close, -1) / $close - 1",
+ "Ref($close, -2) / Ref($close, -1) - 1",
+ "Ref($close, -3) / Ref($close, -2) - 1"]
port_analysis_config: &port_analysis_config
strategy:
class: TopkDropoutStrategy
@@ -61,11 +59,12 @@ task:
GPU: 0
fore_optimizer: adam
weight_optimizer: adam
- output_dim: 5
- fore_lr: 5e-7
- weight_lr: 5e-7
+ output_dim: 3
+ fore_lr: 5e-4
+ weight_lr: 5e-4
steps: 3
- target_label: 0
+ target_label: 1
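+ # target_label indexes into the label list above; 1 selects "Ref($close, -2) / Ref($close, -1) - 1" as the main task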
+ lowest_valid_performance: 0.993
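+ # training restarts with a new random seed until the best validation loss drops below this value (see the loop in fit())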
dataset:
class: DatasetH
module_path: qlib.data.dataset
@@ -87,7 +86,8 @@ task:
kwargs:
ana_long_short: False
ann_scaler: 252
+ label_col: 1
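+ # evaluate the signal against the second label column (the main task's label) instead of the default column 0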
- class: PortAnaRecord
module_path: qlib.workflow.record_temp
kwargs:
- config: *port_analysis_config
\ No newline at end of file
+ config: *port_analysis_config
diff --git a/qlib/contrib/model/pytorch_tcts.py b/qlib/contrib/model/pytorch_tcts.py
index 9f44ba31cb..bf46660ea6 100644
--- a/qlib/contrib/model/pytorch_tcts.py
+++ b/qlib/contrib/model/pytorch_tcts.py
@@ -9,12 +9,13 @@
import numpy as np
import pandas as pd
import copy
+import random
from sklearn.metrics import roc_auc_score, mean_squared_error
import logging
from ...utils import (
unpack_archive_with_buffer,
save_multiple_parts_file,
- create_save_path,
+ get_or_create_path,
drop_nan_by_y_index,
)
from ...log import get_module_logger, TimeInspector
@@ -60,8 +61,9 @@ def __init__(
weight_lr=5e-7,
steps=3,
GPU=0,
- seed=None,
+ seed=0,
target_label=0,
+ lowest_valid_performance=0.993,
**kwargs
):
# Set logger.
@@ -85,6 +87,9 @@ def __init__(
self.weight_lr = weight_lr
self.steps = steps
self.target_label = target_label
+ self.lowest_valid_performance = lowest_valid_performance
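+ # only the optimizer names are stored here; the optimizer objects are built in training(), so every restart gets fresh state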
+ self._fore_optimizer = fore_optimizer
+ self._weight_optimizer = weight_optimizer
self.logger.info(
"TCTS parameters setting:"
@@ -113,40 +118,6 @@ def __init__(
)
)
- if self.seed is not None:
- np.random.seed(self.seed)
- torch.manual_seed(self.seed)
-
- self.fore_model = GRUModel(
- d_feat=self.d_feat,
- hidden_size=self.hidden_size,
- num_layers=self.num_layers,
- dropout=self.dropout,
- )
- self.weight_model = MLPModel(
- d_feat=360 + 2 * self.output_dim + 1,
- hidden_size=self.hidden_size,
- num_layers=self.num_layers,
- dropout=self.dropout,
- output_dim=self.output_dim,
- )
- if fore_optimizer.lower() == "adam":
- self.fore_optimizer = optim.Adam(self.fore_model.parameters(), lr=self.fore_lr)
- elif fore_optimizer.lower() == "gd":
- self.fore_optimizer = optim.SGD(self.fore_model.parameters(), lr=self.fore_lr)
- else:
- raise NotImplementedError("optimizer {} is not supported!".format(fore_optimizer))
- if weight_optimizer.lower() == "adam":
- self.weight_optimizer = optim.Adam(self.weight_model.parameters(), lr=self.weight_lr)
- elif weight_optimizer.lower() == "gd":
- self.weight_optimizer = optim.SGD(self.weight_model.parameters(), lr=self.weight_lr)
- else:
- raise NotImplementedError("optimizer {} is not supported!".format(weight_optimizer))
-
- self.fitted = False
- self.fore_model.to(self.device)
- self.weight_model.to(self.device)
-
def loss_fn(self, pred, label, weight):
loc = torch.argmax(weight, 1)
@@ -258,11 +229,9 @@ def test_epoch(self, data_x, data_y):
def fit(
self,
dataset: DatasetH,
- evals_result=dict(),
verbose=True,
save_path=None,
):
-
df_train, df_valid, df_test = dataset.prepare(
["train", "valid", "test"],
col_set=["feature", "label"],
@@ -274,7 +243,62 @@ def fit(
x_test, y_test = df_test["feature"], df_test["label"]
if save_path == None:
- save_path = create_save_path(save_path)
+ save_path = get_or_create_path(save_path)
+ best_loss = np.inf
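+ # retrain from scratch, re-seeding the RNGs after each failed attempt, until the best validation loss reaches lowest_valid_performance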
+ while best_loss > self.lowest_valid_performance:
+ if best_loss < np.inf:
+ print("Failed! Start retraining.")
+ self.seed = random.randint(0, 1000) # reset random seed
+
+ if self.seed is not None:
+ np.random.seed(self.seed)
+ torch.manual_seed(self.seed)
+
+ best_loss = self.training(
+ x_train, y_train, x_valid, y_valid, x_test, y_test, verbose=verbose, save_path=save_path
+ )
+
+ def training(
+ self,
+ x_train,
+ y_train,
+ x_valid,
+ y_valid,
+ x_test,
+ y_test,
+ verbose=True,
+ save_path=None,
+ ):
+
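+ # build fresh forecasting (GRU) and task-weighting (MLP) networks so each training attempt starts from a new initialization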
+ self.fore_model = GRUModel(
+ d_feat=self.d_feat,
+ hidden_size=self.hidden_size,
+ num_layers=self.num_layers,
+ dropout=self.dropout,
+ )
+ self.weight_model = MLPModel(
+ d_feat=360 + 2 * self.output_dim + 1,
+ hidden_size=self.hidden_size,
+ num_layers=self.num_layers,
+ dropout=self.dropout,
+ output_dim=self.output_dim,
+ )
+ if self._fore_optimizer.lower() == "adam":
+ self.fore_optimizer = optim.Adam(self.fore_model.parameters(), lr=self.fore_lr)
+ elif self._fore_optimizer.lower() == "gd":
+ self.fore_optimizer = optim.SGD(self.fore_model.parameters(), lr=self.fore_lr)
+ else:
+ raise NotImplementedError("optimizer {} is not supported!".format(self._fore_optimizer))
+ if self._weight_optimizer.lower() == "adam":
+ self.weight_optimizer = optim.Adam(self.weight_model.parameters(), lr=self.weight_lr)
+ elif self._weight_optimizer.lower() == "gd":
+ self.weight_optimizer = optim.SGD(self.weight_model.parameters(), lr=self.weight_lr)
+ else:
+ raise NotImplementedError("optimizer {} is not supported!".format(self._weight_optimizer))
+
+ self.fitted = False
+ self.fore_model.to(self.device)
+ self.weight_model.to(self.device)
best_loss = np.inf
best_epoch = 0
@@ -291,7 +315,8 @@ def fit(
val_loss = self.test_epoch(x_valid, y_valid)
test_loss = self.test_epoch(x_test, y_test)
- print("valid %.6f, test %.6f" % (val_loss, test_loss))
+ if verbose:
+ print("valid %.6f, test %.6f" % (val_loss, test_loss))
if val_loss < best_loss:
best_loss = val_loss
@@ -316,6 +341,8 @@ def fit(
if self.use_gpu:
torch.cuda.empty_cache()
+ return best_loss
+
def predict(self, dataset):
if not self.fitted:
raise ValueError("model is not fitted yet!")
diff --git a/qlib/workflow/record_temp.py b/qlib/workflow/record_temp.py
index fc71b3f9a2..cf30bfad52 100644
--- a/qlib/workflow/record_temp.py
+++ b/qlib/workflow/record_temp.py
@@ -227,10 +227,11 @@ class SigAnaRecord(SignalRecord):
artifact_path = "sig_analysis"
- def __init__(self, recorder, ana_long_short=False, ann_scaler=252, **kwargs):
+ def __init__(self, recorder, ana_long_short=False, ann_scaler=252, label_col=0, **kwargs):
super().__init__(recorder=recorder, **kwargs)
self.ana_long_short = ana_long_short
self.ann_scaler = ann_scaler
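+ # label_col chooses which label column to evaluate against; the default 0 keeps the previous single-column behaviour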
+ self.label_col = label_col
def generate(self, **kwargs):
try:
@@ -243,7 +244,7 @@ def generate(self, **kwargs):
if label is None or not isinstance(label, pd.DataFrame) or label.empty:
logger.warn(f"Empty label.")
return
- ic, ric = calc_ic(pred.iloc[:, 0], label.iloc[:, 0])
+ ic, ric = calc_ic(pred.iloc[:, 0], label.iloc[:, self.label_col])
metrics = {
"IC": ic.mean(),
"ICIR": ic.mean() / ic.std(),
@@ -252,7 +253,7 @@ def generate(self, **kwargs):
}
objects = {"ic.pkl": ic, "ric.pkl": ric}
if self.ana_long_short:
- long_short_r, long_avg_r = calc_long_short_return(pred.iloc[:, 0], label.iloc[:, 0])
+ long_short_r, long_avg_r = calc_long_short_return(pred.iloc[:, 0], label.iloc[:, self.label_col])
metrics.update(
{
"Long-Short Ann Return": long_short_r.mean() * self.ann_scaler,