CRAN Package Check Results for Package lightgbm

Last updated on 2020-12-04 00:48:25 CET.

Flavor                             Version  Tinstall  Tcheck   Ttotal  Status  Flags
r-devel-linux-x86_64-debian-clang  3.1.0      305.89  126.23   432.12  OK
r-devel-linux-x86_64-debian-gcc    3.1.0      248.67  123.44   372.11  OK
r-devel-linux-x86_64-fedora-clang  3.1.0                       808.04  NOTE
r-devel-linux-x86_64-fedora-gcc    3.1.0                       594.33  OK
r-devel-windows-ix86+x86_64        3.1.0      671.00  159.00   830.00  NOTE
r-patched-linux-x86_64             3.1.0      285.22  119.76   404.98  OK
r-patched-solaris-x86              3.1.0                      1064.10  NOTE
r-release-linux-x86_64             3.1.0      280.28  111.90   392.18  OK
r-release-macos-x86_64             3.1.0                               NOTE
r-release-windows-ix86+x86_64      3.1.0      498.00  164.00   662.00  NOTE
r-oldrel-macos-x86_64              3.1.0                               NOTE
r-oldrel-windows-ix86+x86_64       3.1.0      553.00  163.00   716.00  ERROR

Check Details

Version: 3.1.0
Check: installed package size
Result: NOTE
     installed size is 49.5Mb
     sub-directories of 1Mb or more:
       libs  48.9Mb
Flavors: r-devel-linux-x86_64-fedora-clang, r-devel-windows-ix86+x86_64, r-patched-solaris-x86, r-release-macos-x86_64, r-release-windows-ix86+x86_64, r-oldrel-macos-x86_64, r-oldrel-windows-ix86+x86_64
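
The 48.9Mb under libs/ is the package's compiled code (its shared library). As a rough illustration only (this is not CRAN's own check code), the size reported in this NOTE can be approximated locally from an existing installation:

     # sketch: sum the sizes of all files under the installed lightgbm directory
     pkg_dir <- system.file(package = "lightgbm")
     files   <- list.files(pkg_dir, recursive = TRUE, full.names = TRUE)
     sizes   <- file.info(files)$size
     sprintf("installed size is %.1fMb", sum(sizes) / 1024^2)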

Version: 3.1.0
Check: running tests for arch ‘i386’
Result: ERROR
     Running 'testthat.R' [18s]
     Running the tests in 'tests/testthat.R' failed.
     Complete output:
     > library(testthat)
     > library(lightgbm)
     Loading required package: R6
     >
     > test_check(
     + package = "lightgbm"
     + , stop_on_failure = TRUE
     + , stop_on_warning = FALSE
     + )
     [LightGBM] [Info] Number of positive: 3140, number of negative: 3373
     [LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.002185 seconds.
     You can set `force_row_wise=true` to remove the overhead.
     And if memory is not enough, you can set `force_col_wise=true`.
     [LightGBM] [Info] Total Bins 214
     [LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
     [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.482113 -> initscore=-0.071580
     [LightGBM] [Info] Start training from score -0.071580
     [1] "[1]: train's binary_logloss:0.314167 test's binary_logloss:0.317777"
     [1] "[2]: train's binary_logloss:0.187654 test's binary_logloss:0.187981"
     [1] "[3]: train's binary_logloss:0.109209 test's binary_logloss:0.109949"
     [1] "[4]: train's binary_logloss:0.0755423 test's binary_logloss:0.0772008"
     [1] "[5]: train's binary_logloss:0.0528045 test's binary_logloss:0.0533291"
     [1] "[6]: train's binary_logloss:0.0395797 test's binary_logloss:0.0380824"
     [1] "[7]: train's binary_logloss:0.0287269 test's binary_logloss:0.0255364"
     [1] "[8]: train's binary_logloss:0.0224443 test's binary_logloss:0.0195616"
     [1] "[9]: train's binary_logloss:0.016621 test's binary_logloss:0.017834"
     [1] "[10]: train's binary_logloss:0.0112055 test's binary_logloss:0.0125538"
     [1] "[11]: train's binary_logloss:0.00759638 test's binary_logloss:0.00842372"
     [1] "[12]: train's binary_logloss:0.0054887 test's binary_logloss:0.00631812"
     [1] "[13]: train's binary_logloss:0.00399548 test's binary_logloss:0.00454944"
     [1] "[14]: train's binary_logloss:0.00283135 test's binary_logloss:0.00323724"
     [1] "[15]: train's binary_logloss:0.00215378 test's binary_logloss:0.00256697"
     [1] "[16]: train's binary_logloss:0.00156723 test's binary_logloss:0.00181753"
     [1] "[17]: train's binary_logloss:0.00120077 test's binary_logloss:0.00144437"
     [1] "[18]: train's binary_logloss:0.000934889 test's binary_logloss:0.00111807"
     [1] "[19]: train's binary_logloss:0.000719878 test's binary_logloss:0.000878304"
     [1] "[20]: train's binary_logloss:0.000558692 test's binary_logloss:0.000712272"
     [1] "[21]: train's binary_logloss:0.000400916 test's binary_logloss:0.000492223"
     [1] "[22]: train's binary_logloss:0.000315938 test's binary_logloss:0.000402804"
     [1] "[23]: train's binary_logloss:0.000238113 test's binary_logloss:0.000288682"
     [1] "[24]: train's binary_logloss:0.000190248 test's binary_logloss:0.000237835"
     [1] "[25]: train's binary_logloss:0.000148322 test's binary_logloss:0.000174674"
     [1] "[26]: train's binary_logloss:0.000120581 test's binary_logloss:0.000139513"
     [1] "[27]: train's binary_logloss:0.000102756 test's binary_logloss:0.000118804"
     [1] "[28]: train's binary_logloss:7.83011e-05 test's binary_logloss:8.40978e-05"
     [1] "[29]: train's binary_logloss:6.29191e-05 test's binary_logloss:6.8803e-05"
     [1] "[30]: train's binary_logloss:5.28039e-05 test's binary_logloss:5.89864e-05"
     [1] "[31]: train's binary_logloss:4.51561e-05 test's binary_logloss:4.91874e-05"
     [1] "[32]: train's binary_logloss:3.89402e-05 test's binary_logloss:4.13015e-05"
     [1] "[33]: train's binary_logloss:3.24434e-05 test's binary_logloss:3.52605e-05"
     [1] "[34]: train's binary_logloss:2.65255e-05 test's binary_logloss:2.86338e-05"
     [1] "[35]: train's binary_logloss:2.19277e-05 test's binary_logloss:2.3937e-05"
     [1] "[36]: train's binary_logloss:1.86469e-05 test's binary_logloss:2.05375e-05"
     [1] "[37]: train's binary_logloss:1.49881e-05 test's binary_logloss:1.53852e-05"
     [1] "[38]: train's binary_logloss:1.2103e-05 test's binary_logloss:1.20722e-05"
     [1] "[39]: train's binary_logloss:1.02027e-05 test's binary_logloss:1.0578e-05"
     [1] "[40]: train's binary_logloss:8.91561e-06 test's binary_logloss:8.8323e-06"
     [1] "[41]: train's binary_logloss:7.4855e-06 test's binary_logloss:7.58441e-06"
     [1] "[42]: train's binary_logloss:6.21179e-06 test's binary_logloss:6.14299e-06"
     [1] "[43]: train's binary_logloss:5.06413e-06 test's binary_logloss:5.13576e-06"
     [1] "[44]: train's binary_logloss:4.2029e-06 test's binary_logloss:4.53605e-06"
     [1] "[45]: train's binary_logloss:3.47042e-06 test's binary_logloss:3.73234e-06"
     [1] "[46]: train's binary_logloss:2.78181e-06 test's binary_logloss:3.02556e-06"
     [1] "[47]: train's binary_logloss:2.19819e-06 test's binary_logloss:2.3666e-06"
     [1] "[48]: train's binary_logloss:1.80519e-06 test's binary_logloss:1.92932e-06"
     [1] "[49]: train's binary_logloss:1.50192e-06 test's binary_logloss:1.64658e-06"
     [1] "[50]: train's binary_logloss:1.20212e-06 test's binary_logloss:1.33316e-06"
     [LightGBM] [Info] Number of positive: 3140, number of negative: 3373
     [LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.002059 seconds.
     You can set `force_row_wise=true` to remove the overhead.
     And if memory is not enough, you can set `force_col_wise=true`.
     [LightGBM] [Info] Total Bins 214
     [LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
     [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.482113 -> initscore=-0.071580
     [LightGBM] [Info] Start training from score -0.071580
     [1] "[1]: train's binary_error:0.0222632"
     [1] "[2]: train's binary_error:0.0222632"
     [1] "[3]: train's binary_error:0.0222632"
     [1] "[4]: train's binary_error:0.0109013"
     [1] "[5]: train's binary_error:0.0141256"
     [1] "[6]: train's binary_error:0.0141256"
     [1] "[7]: train's binary_error:0.0141256"
     [1] "[8]: train's binary_error:0.0141256"
     [1] "[9]: train's binary_error:0.00598802"
     [1] "[10]: train's binary_error:0.00598802"
     [LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000406 seconds.
     You can set `force_col_wise=true` to remove the overhead.
     [LightGBM] [Info] Total Bins 98
     [LightGBM] [Info] Number of data points in the train set: 150, number of used features: 4
     [LightGBM] [Info] Start training from score -1.098612
     [LightGBM] [Info] Start training from score -1.098612
     [LightGBM] [Info] Start training from score -1.098612
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[1]: train's multi_error:0.0466667"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[2]: train's multi_error:0.0466667"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[3]: train's multi_error:0.0466667"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[4]: train's multi_error:0.0466667"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[5]: train's multi_error:0.0466667"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[6]: train's multi_error:0.0466667"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[7]: train's multi_error:0.0466667"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[8]: train's multi_error:0.0466667"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[9]: train's multi_error:0.0466667"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[10]: train's multi_error:0.0466667"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[11]: train's multi_error:0.0333333"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[12]: train's multi_error:0.0266667"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[13]: train's multi_error:0.0266667"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[14]: train's multi_error:0.0266667"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[15]: train's multi_error:0.0266667"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[16]: train's multi_error:0.0333333"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[17]: train's multi_error:0.0266667"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[18]: train's multi_error:0.0333333"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[19]: train's multi_error:0.0333333"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[20]: train's multi_error:0.0333333"
     [LightGBM] [Info] Number of positive: 3140, number of negative: 3373
     [LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.002054 seconds.
     You can set `force_row_wise=true` to remove the overhead.
     And if memory is not enough, you can set `force_col_wise=true`.
     [LightGBM] [Info] Total Bins 214
     [LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
     [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.482113 -> initscore=-0.071580
     [LightGBM] [Info] Start training from score -0.071580
     [1] "[1]: train's binary_error:0.0304007 train's auc:0.972508 train's binary_logloss:0.198597"
     [1] "[2]: train's binary_error:0.0222632 train's auc:0.995075 train's binary_logloss:0.111535"
     [1] "[3]: train's binary_error:0.00598802 train's auc:0.997845 train's binary_logloss:0.0480659"
     [1] "[4]: train's binary_error:0.00122831 train's auc:0.998433 train's binary_logloss:0.0279151"
     [1] "[5]: train's binary_error:0.00122831 train's auc:0.999354 train's binary_logloss:0.0190479"
     [1] "[6]: train's binary_error:0.00537387 train's auc:0.98965 train's binary_logloss:0.16706"
     [1] "[7]: train's binary_error:0 train's auc:1 train's binary_logloss:0.0128449"
     [1] "[8]: train's binary_error:0 train's auc:1 train's binary_logloss:0.00774702"
     [1] "[9]: train's binary_error:0 train's auc:1 train's binary_logloss:0.00472108"
     [1] "[10]: train's binary_error:0 train's auc:1 train's binary_logloss:0.00208929"
     [LightGBM] [Info] Number of positive: 3140, number of negative: 3373
     [LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.002087 seconds.
     You can set `force_row_wise=true` to remove the overhead.
     And if memory is not enough, you can set `force_col_wise=true`.
     [LightGBM] [Info] Total Bins 214
     [LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
     [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.482113 -> initscore=-0.071580
     [LightGBM] [Info] Start training from score -0.071580
     [1] "[1]: train's binary_error:0.0222632"
     [1] "[2]: train's binary_error:0.0222632"
     [1] "[3]: train's binary_error:0.0222632"
     [1] "[4]: train's binary_error:0.0109013"
     [1] "[5]: train's binary_error:0.0141256"
     [1] "[6]: train's binary_error:0.0141256"
     [1] "[7]: train's binary_error:0.0141256"
     [1] "[8]: train's binary_error:0.0141256"
     [1] "[9]: train's binary_error:0.00598802"
     [1] "[10]: train's binary_error:0.00598802"
     [LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.002162 seconds.
     You can set `force_row_wise=true` to remove the overhead.
     And if memory is not enough, you can set `force_col_wise=true`.
     [LightGBM] [Info] Total Bins 214
     [LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
     [LightGBM] [Info] Start training from score 0.482113
     [1] "[1]: train's l2:0.206337"
     [1] "[2]: train's l2:0.171229"
     [1] "[3]: train's l2:0.140871"
     [1] "[4]: train's l2:0.116282"
     [1] "[5]: train's l2:0.096364"
     [1] "[6]: train's l2:0.0802308"
     [1] "[7]: train's l2:0.0675595"
     [1] "[8]: train's l2:0.0567154"
     [1] "[9]: train's l2:0.0482086"
     [1] "[10]: train's l2:0.0402694"
     [LightGBM] [Info] Number of positive: 3140, number of negative: 3373
     [LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.002120 seconds.
     You can set `force_row_wise=true` to remove the overhead.
     And if memory is not enough, you can set `force_col_wise=true`.
     [LightGBM] [Info] Total Bins 214
     [LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
     [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.482113 -> initscore=-0.071580
     [LightGBM] [Info] Start training from score -0.071580
     [1] "[1]: train's binary_error:0.0222632 train's auc:0.981784 valid1's binary_error:0.0222632 valid1's auc:0.981784 valid2's binary_error:0.0222632 valid2's auc:0.981784"
     [1] "[2]: train's binary_error:0.0222632 train's auc:0.981784 valid1's binary_error:0.0222632 valid1's auc:0.981784 valid2's binary_error:0.0222632 valid2's auc:0.981784"
     [1] "[3]: train's binary_error:0.0222632 train's auc:0.992951 valid1's binary_error:0.0222632 valid1's auc:0.992951 valid2's binary_error:0.0222632 valid2's auc:0.992951"
     [1] "[4]: train's binary_error:0.0109013 train's auc:0.992951 valid1's binary_error:0.0109013 valid1's auc:0.992951 valid2's binary_error:0.0109013 valid2's auc:0.992951"
     [1] "[5]: train's binary_error:0.0141256 train's auc:0.994714 valid1's binary_error:0.0141256 valid1's auc:0.994714 valid2's binary_error:0.0141256 valid2's auc:0.994714"
     [1] "[6]: train's binary_error:0.0141256 train's auc:0.994714 valid1's binary_error:0.0141256 valid1's auc:0.994714 valid2's binary_error:0.0141256 valid2's auc:0.994714"
     [1] "[7]: train's binary_error:0.0141256 train's auc:0.994714 valid1's binary_error:0.0141256 valid1's auc:0.994714 valid2's binary_error:0.0141256 valid2's auc:0.994714"
     [1] "[8]: train's binary_error:0.0141256 train's auc:0.994714 valid1's binary_error:0.0141256 valid1's auc:0.994714 valid2's binary_error:0.0141256 valid2's auc:0.994714"
     [1] "[9]: train's binary_error:0.00598802 train's auc:0.993175 valid1's binary_error:0.00598802 valid1's auc:0.993175 valid2's binary_error:0.00598802 valid2's auc:0.993175"
     [1] "[10]: train's binary_error:0.00598802 train's auc:0.998242 valid1's binary_error:0.00598802 valid1's auc:0.998242 valid2's binary_error:0.00598802 valid2's auc:0.998242"
     [LightGBM] [Info] Number of positive: 3140, number of negative: 3373
     [LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.002121 seconds.
     You can set `force_row_wise=true` to remove the overhead.
     And if memory is not enough, you can set `force_col_wise=true`.
     [LightGBM] [Info] Total Bins 214
     [LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
     [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.482113 -> initscore=-0.071580
     [LightGBM] [Info] Start training from score -0.071580
     [1] "[1]: train's binary_logloss:0.179606"
     [1] "[2]: train's binary_logloss:0.0975448"
     [1] "[3]: train's binary_logloss:0.0384292"
     [1] "[4]: train's binary_logloss:0.0582241"
     [1] "[5]: train's binary_logloss:0.0595215"
     [1] "[6]: train's binary_logloss:0.0609174"
     [1] "[7]: train's binary_logloss:0.317567"
     [1] "[8]: train's binary_logloss:0.0104223"
     [1] "[9]: train's binary_logloss:0.00497498"
     [1] "[10]: train's binary_logloss:0.00283557"
     [LightGBM] [Info] Number of positive: 3140, number of negative: 3373
     [LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.002909 seconds.
     You can set `force_row_wise=true` to remove the overhead.
     And if memory is not enough, you can set `force_col_wise=true`.
     [LightGBM] [Info] Total Bins 214
     [LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
     [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.482113 -> initscore=-0.071580
     [LightGBM] [Info] Start training from score -0.071580
     [1] "[1]: train's binary_logloss:0.179606"
     [1] "[2]: train's binary_logloss:0.0975448"
     [1] "[3]: train's binary_logloss:0.0384292"
     [1] "[4]: train's binary_logloss:0.0582241"
     [1] "[5]: train's binary_logloss:0.0595215"
     [LightGBM] [Info] Number of positive: 3140, number of negative: 3373
     [LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.003121 seconds.
     You can set `force_row_wise=true` to remove the overhead.
     And if memory is not enough, you can set `force_col_wise=true`.
     [LightGBM] [Info] Total Bins 214
     [LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
     [1] "[6]: train's binary_logloss:0.0609174"
     [1] "[7]: train's binary_logloss:0.317567"
     [1] "[8]: train's binary_logloss:0.0104223"
     [1] "[9]: train's binary_logloss:0.00497498"
     [1] "[10]: train's binary_logloss:0.00283557"
     [LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.002786 seconds.
     You can set `force_row_wise=true` to remove the overhead.
     And if memory is not enough, you can set `force_col_wise=true`.
     [LightGBM] [Info] Total Bins 232
     [LightGBM] [Info] Number of data points in the train set: 5211, number of used features: 116
     [LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.002961 seconds.
     You can set `force_row_wise=true` to remove the overhead.
     And if memory is not enough, you can set `force_col_wise=true`.
     [LightGBM] [Info] Total Bins 232
     [LightGBM] [Info] Number of data points in the train set: 5211, number of used features: 116
     [LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.003209 seconds.
     You can set `force_row_wise=true` to remove the overhead.
     And if memory is not enough, you can set `force_col_wise=true`.
     [LightGBM] [Info] Total Bins 232
     [LightGBM] [Info] Number of data points in the train set: 5210, number of used features: 116
     [LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.003323 seconds.
     You can set `force_row_wise=true` to remove the overhead.
     And if memory is not enough, you can set `force_col_wise=true`.
     [LightGBM] [Info] Total Bins 232
     [LightGBM] [Info] Number of data points in the train set: 5210, number of used features: 116
     [LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.003418 seconds.
     You can set `force_row_wise=true` to remove the overhead.
     And if memory is not enough, you can set `force_col_wise=true`.
     [LightGBM] [Info] Total Bins 232
     [LightGBM] [Info] Number of data points in the train set: 5210, number of used features: 116
     [LightGBM] [Info] Start training from score 0.483976
     [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
     [LightGBM] [Info] Start training from score 0.480906
     [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
     [LightGBM] [Info] Start training from score 0.481574
     [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
     [LightGBM] [Info] Start training from score 0.482342
     [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
     [LightGBM] [Info] Start training from score 0.481766
     [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
     [1] "[1]: valid's l2:0.000306984+0.000613968 valid's l1:0.000306994+0.00061397"
     [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
     [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
     [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
     [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
     [1] "[2]: valid's l2:0.000306984+0.000613968 valid's l1:0.000306985+0.000613967"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[3]: valid's l2:0.000306984+0.000613968 valid's l1:0.000306985+0.000613967"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
     [1] "[4]: valid's l2:0.000306984+0.000613968 valid's l1:0.000306985+0.000613967"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
     [1] "[5]: valid's l2:0.000306984+0.000613968 valid's l1:0.000306985+0.000613967"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
     [1] "[6]: valid's l2:0.000306984+0.000613968 valid's l1:0.000306985+0.000613967"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
     [1] "[7]: valid's l2:0.000306984+0.000613968 valid's l1:0.000306985+0.000613967"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
     [1] "[8]: valid's l2:0.000306984+0.000613968 valid's l1:0.000306985+0.000613967"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
     [1] "[9]: valid's l2:0.000306984+0.000613968 valid's l1:0.000306985+0.000613967"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
     [1] "[10]: valid's l2:0.000306984+0.000613968 valid's l1:0.000306985+0.000613967"
     [LightGBM] [Info] Number of positive: 198, number of negative: 202
     [LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000436 seconds.
     You can set `force_col_wise=true` to remove the overhead.
     [LightGBM] [Info] Total Bins 167
     [LightGBM] [Info] Number of data points in the train set: 400, number of used features: 1
     [LightGBM] [Info] Number of positive: 196, number of negative: 204
     [LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000426 seconds.
     You can set `force_col_wise=true` to remove the overhead.
     [LightGBM] [Info] Total Bins 167
     [LightGBM] [Info] Number of data points in the train set: 400, number of used features: 1
     [LightGBM] [Info] Number of positive: 207, number of negative: 193
     [LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000450 seconds.
     You can set `force_col_wise=true` to remove the overhead.
     [LightGBM] [Info] Total Bins 167
     [LightGBM] [Info] Number of data points in the train set: 400, number of used features: 1
     [LightGBM] [Info] Number of positive: 207, number of negative: 193
     [LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000431 seconds.
     You can set `force_col_wise=true` to remove the overhead.
     [LightGBM] [Info] Total Bins 167
     [LightGBM] [Info] Number of data points in the train set: 400, number of used features: 1
     [LightGBM] [Info] Number of positive: 192, number of negative: 208
     [LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000439 seconds.
     You can set `force_col_wise=true` to remove the overhead.
     [LightGBM] [Info] Total Bins 167
     [LightGBM] [Info] Number of data points in the train set: 400, number of used features: 1
     [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.495000 -> initscore=-0.020001
     [LightGBM] [Info] Start training from score -0.020001
     [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.490000 -> initscore=-0.040005
     [LightGBM] [Info] Start training from score -0.040005
     [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.517500 -> initscore=0.070029
     [LightGBM] [Info] Start training from score 0.070029
     [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.517500 -> initscore=0.070029
     [LightGBM] [Info] Start training from score 0.070029
     [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.480000 -> initscore=-0.080043
     [LightGBM] [Info] Start training from score -0.080043
     [1] "[1]: valid's auc:0.476662+0.0622898 valid's binary_error:0.5+0.0593296"
     [1] "[2]: valid's auc:0.477476+0.0393392 valid's binary_error:0.554+0.0372022"
     [1] "[3]: valid's auc:0.456927+0.042898 valid's binary_error:0.526+0.0361109"
     [1] "[4]: valid's auc:0.419531+0.0344972 valid's binary_error:0.54+0.0289828"
     [1] "[5]: valid's auc:0.459109+0.0862237 valid's binary_error:0.52+0.0489898"
     [1] "[6]: valid's auc:0.460522+0.0911246 valid's binary_error:0.528+0.0231517"
     [1] "[7]: valid's auc:0.456328+0.0540445 valid's binary_error:0.532+0.0386782"
     [1] "[8]: valid's auc:0.463653+0.0660907 valid's binary_error:0.514+0.0488262"
     [1] "[9]: valid's auc:0.443017+0.0549965 valid's binary_error:0.55+0.0303315"
     [1] "[10]: valid's auc:0.477483+0.0763283 valid's binary_error:0.488+0.0549181"
     [LightGBM] [Info] Number of positive: 3140, number of negative: 3373
     [LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.002956 seconds.
     You can set `force_row_wise=true` to remove the overhead.
     And if memory is not enough, you can set `force_col_wise=true`.
     [LightGBM] [Info] Total Bins 214
     [LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
     [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.482113 -> initscore=-0.071580
     [LightGBM] [Info] Start training from score -0.071580
     [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
     [1] "[1]: train's binary_error:0.00307078 train's auc:0.99996 train's binary_logloss:0.132074"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[2]: train's binary_error:0.00153539 train's auc:1 train's binary_logloss:0.0444372"
     [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
     [1] "[3]: train's binary_error:0 train's auc:1 train's binary_logloss:0.0159408"
     [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
     [1] "[4]: train's binary_error:0 train's auc:1 train's binary_logloss:0.00590065"
     [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
     [1] "[5]: train's binary_error:0 train's auc:1 train's binary_logloss:0.00230167"
     [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
     [1] "[6]: train's binary_error:0 train's auc:1 train's binary_logloss:0.00084253"
     [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
     [1] "[7]: train's binary_error:0 train's auc:1 train's binary_logloss:0.000309409"
     [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
     [1] "[8]: train's binary_error:0 train's auc:1 train's binary_logloss:0.000113754"
     [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
     [1] "[9]: train's binary_error:0 train's auc:1 train's binary_logloss:4.1838e-05"
     [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
     [1] "[10]: train's binary_error:0 train's auc:1 train's binary_logloss:1.539e-05"
     [LightGBM] [Info] Number of positive: 3140, number of negative: 3373
     [LightGBM] [Info] Total Bins 214
     [LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
     [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.482113 -> initscore=-0.071580
     [LightGBM] [Info] Start training from score -0.071580
     [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
     [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
     [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
     [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
     [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
     [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
     [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
     [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
     [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
     [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
     [LightGBM] [Info] Number of positive: 3140, number of negative: 3373
     [LightGBM] [Info] Total Bins 214
     [LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
     [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.482113 -> initscore=-0.071580
     [LightGBM] [Info] Start training from score -0.071580
     [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
     [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
     [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
     [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
     [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
     [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
     [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
     [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
     [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
     [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
     [LightGBM] [Info] Number of positive: 35110, number of negative: 34890
     [LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.000655 seconds.
     You can set `force_row_wise=true` to remove the overhead.
     And if memory is not enough, you can set `force_col_wise=true`.
     [LightGBM] [Info] Total Bins 12
     [LightGBM] [Info] Number of data points in the train set: 70000, number of used features: 1
     [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.501571 -> initscore=0.006286
     [LightGBM] [Info] Start training from score 0.006286
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Info] Number of positive: 500, number of negative: 500
     [LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000673 seconds.
     You can set `force_col_wise=true` to remove the overhead.
     [LightGBM] [Info] Total Bins 3
     [LightGBM] [Info] Number of data points in the train set: 1000, number of used features: 1
     [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.500000 -> initscore=0.000000
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[1]: valid1's binary_error:0"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[2]: valid1's binary_error:0"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[3]: valid1's binary_error:0"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[4]: valid1's binary_error:0"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[5]: valid1's binary_error:0"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[6]: valid1's binary_error:0"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[7]: valid1's binary_error:0"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[8]: valid1's binary_error:0"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[9]: valid1's binary_error:0"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[10]: valid1's binary_error:0"
     [LightGBM] [Info] Number of positive: 500, number of negative: 500
     [LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000437 seconds.
     You can set `force_col_wise=true` to remove the overhead.
     [LightGBM] [Info] Total Bins 3
     [LightGBM] [Info] Number of data points in the train set: 1000, number of used features: 1
     [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.500000 -> initscore=0.000000
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[1]: valid1's binary_error:0"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[2]: valid1's binary_error:0"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[3]: valid1's binary_error:0"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[4]: valid1's binary_error:0"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[5]: valid1's binary_error:0"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[6]: valid1's binary_error:0"
     [LightGBM] [Info] Number of positive: 500, number of negative: 500
     [LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000435 seconds.
     You can set `force_col_wise=true` to remove the overhead.
     [LightGBM] [Info] Total Bins 3
     [LightGBM] [Info] Number of data points in the train set: 1000, number of used features: 1
     [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.500000 -> initscore=0.000000
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[1]: valid1's binary_error:0"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[2]: valid1's binary_error:0"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[3]: valid1's binary_error:0"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[4]: valid1's binary_error:0"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[5]: valid1's binary_error:0"
     [LightGBM] [Info] Number of positive: 500, number of negative: 500
     [LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000469 seconds.
     You can set `force_col_wise=true` to remove the overhead.
     [LightGBM] [Info] Total Bins 3
     [LightGBM] [Info] Number of data points in the train set: 1000, number of used features: 1
     [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.500000 -> initscore=0.000000
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[1]: valid1's binary_error:0"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[2]: valid1's binary_error:0"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[3]: valid1's binary_error:0"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[4]: valid1's binary_error:0"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[5]: valid1's binary_error:0"
     [LightGBM] [Info] Number of positive: 500, number of negative: 500
     [LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000433 seconds.
     You can set `force_col_wise=true` to remove the overhead.
     [LightGBM] [Info] Total Bins 3
     [LightGBM] [Info] Number of data points in the train set: 1000, number of used features: 1
     [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.500000 -> initscore=0.000000
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[1]: valid1's binary_error:0"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[2]: valid1's binary_error:0"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[3]: valid1's binary_error:0"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[4]: valid1's binary_error:0"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[5]: valid1's binary_error:0"
     [LightGBM] [Info] Number of positive: 500, number of negative: 500
     [LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000456 seconds.
     You can set `force_col_wise=true` to remove the overhead.
     [LightGBM] [Info] Total Bins 3
     [LightGBM] [Info] Number of data points in the train set: 1000, number of used features: 1
     [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.500000 -> initscore=0.000000
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[1]: valid1's binary_error:0"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[2]: valid1's binary_error:0"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[3]: valid1's binary_error:0"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[4]: valid1's binary_error:0"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[5]: valid1's binary_error:0"
     [LightGBM] [Info] Number of positive: 3140, number of negative: 3373
     [LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.002737 seconds.
     You can set `force_row_wise=true` to remove the overhead.
     And if memory is not enough, you can set `force_col_wise=true`.
     [LightGBM] [Info] Total Bins 214
     [LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
     [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.482113 -> initscore=-0.071580
     [LightGBM] [Info] Start training from score -0.071580
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[1]: valid1's auc:0.987036"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[2]: valid1's auc:0.987036"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[3]: valid1's auc:0.998699"
     [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
     [1] "[4]: valid1's auc:0.998699"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[5]: valid1's auc:0.998699"
     [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
     [1] "[6]: valid1's auc:0.999667"
     [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
     [1] "[7]: valid1's auc:0.999806"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[8]: valid1's auc:0.999978"
     [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
     [1] "[9]: valid1's auc:0.999997"
     [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
     [1] "[10]: valid1's auc:0.999997"
     [LightGBM] [Info] Number of positive: 3140, number of negative: 3373
     [LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.003264 seconds.
     You can set `force_row_wise=true` to remove the overhead.
     And if memory is not enough, you can set `force_col_wise=true`.
     [LightGBM] [Info] Total Bins 214
     [LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
     [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.482113 -> initscore=-0.071580
     [LightGBM] [Info] Start training from score -0.071580
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[1]: valid1's binary_error:0.016139"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[2]: valid1's binary_error:0.016139"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[3]: valid1's binary_error:0.016139"
     [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
     [1] "[4]: valid1's binary_error:0.016139"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[5]: valid1's binary_error:0.016139"
     [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
     [1] "[6]: valid1's binary_error:0.016139"
     [LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000455 seconds.
     You can set `force_col_wise=true` to remove the overhead.
     [LightGBM] [Info] Total Bins 3
     [LightGBM] [Info] Number of data points in the train set: 1000, number of used features: 1
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[1]: valid1's rmse:55"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[2]: valid1's rmse:59.5"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[3]: valid1's rmse:63.55"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[4]: valid1's rmse:67.195"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[5]: valid1's rmse:70.4755"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[6]: valid1's rmse:73.428"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[7]: valid1's rmse:76.0852"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[8]: valid1's rmse:78.4766"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[9]: valid1's rmse:80.629"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[10]: valid1's rmse:82.5661"
     [LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000452 seconds.
     You can set `force_col_wise=true` to remove the overhead.
     [LightGBM] [Info] Total Bins 3
     [LightGBM] [Info] Number of data points in the train set: 1000, number of used features: 1
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[1]: valid1's rmse:55"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[2]: valid1's rmse:59.5"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[3]: valid1's rmse:63.55"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[4]: valid1's rmse:67.195"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[5]: valid1's rmse:70.4755"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[6]: valid1's rmse:73.428"
     [LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000448 seconds.
     You can set `force_col_wise=true` to remove the overhead.
     [LightGBM] [Info] Total Bins 35
     [LightGBM] [Info] Number of data points in the train set: 100, number of used features: 1
     [LightGBM] [Info] Start training from score 0.045019
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[1]: valid1's constant_metric:0.2 valid1's increasing_metric:0.1"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[2]: valid1's constant_metric:0.2 valid1's increasing_metric:0.2"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[3]: valid1's constant_metric:0.2 valid1's increasing_metric:0.3"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[4]: valid1's constant_metric:0.2 valid1's increasing_metric:0.4"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[5]: valid1's constant_metric:0.2 valid1's increasing_metric:0.5"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[6]: valid1's constant_metric:0.2 valid1's increasing_metric:0.6"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[7]: valid1's constant_metric:0.2 valid1's increasing_metric:0.7"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[8]: valid1's constant_metric:0.2 valid1's increasing_metric:0.8"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[9]: valid1's constant_metric:0.2 valid1's increasing_metric:0.9"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[10]: valid1's constant_metric:0.2 valid1's increasing_metric:1"
     [LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000446 seconds.
     You can set `force_col_wise=true` to remove the overhead.
     [LightGBM] [Info] Total Bins 35
     [LightGBM] [Info] Number of data points in the train set: 100, number of used features: 1
     [LightGBM] [Info] Start training from score 0.045019
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[1]: valid1's increasing_metric:1.1 valid1's constant_metric:0.2"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[2]: valid1's increasing_metric:1.2 valid1's constant_metric:0.2"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[3]: valid1's increasing_metric:1.3 valid1's constant_metric:0.2"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[4]: valid1's increasing_metric:1.4 valid1's constant_metric:0.2"
     [LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000470 seconds.
     You can set `force_col_wise=true` to remove the overhead.
     [LightGBM] [Info] Total Bins 35
     [LightGBM] [Info] Number of data points in the train set: 100, number of used features: 1
     [LightGBM] [Info] Start training from score 0.045019
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[1]: valid1's increasing_metric:1.5 valid1's constant_metric:0.2"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[2]: valid1's increasing_metric:1.6 valid1's constant_metric:0.2"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[3]: valid1's increasing_metric:1.7 valid1's constant_metric:0.2"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[4]: valid1's increasing_metric:1.8 valid1's constant_metric:0.2"
     [LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000584 seconds.
     You can set `force_col_wise=true` to remove the overhead.
     [LightGBM] [Info] Total Bins 35
     [LightGBM] [Info] Number of data points in the train set: 100, number of used features: 1
     [LightGBM] [Info] Start training from score 0.045019
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[1]: valid1's increasing_metric:1.9 valid1's constant_metric:0.2"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[2]: valid1's increasing_metric:2 valid1's constant_metric:0.2"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[3]: valid1's increasing_metric:2.1 valid1's constant_metric:0.2"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[4]: valid1's increasing_metric:2.2 valid1's constant_metric:0.2"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[5]: valid1's increasing_metric:2.3 valid1's constant_metric:0.2"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[6]: valid1's increasing_metric:2.4 valid1's constant_metric:0.2"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[7]: valid1's increasing_metric:2.5 valid1's constant_metric:0.2"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[8]: valid1's increasing_metric:2.6 valid1's constant_metric:0.2"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[9]: valid1's increasing_metric:2.7 valid1's constant_metric:0.2"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[10]: valid1's increasing_metric:2.8 valid1's constant_metric:0.2"
     [LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000510 seconds.
     You can set `force_col_wise=true` to remove the overhead.
     [LightGBM] [Info] Total Bins 35
     [LightGBM] [Info] Number of data points in the train set: 100, number of used features: 1
     [LightGBM] [Info] Start training from score 0.045019
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[1]: valid1's rmse:1.10501 valid1's l2:1.22105 valid1's increasing_metric:2.9 valid1's rmse:1.10501 valid1's l2:1.22105 valid1's constant_metric:0.2"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[2]: valid1's rmse:1.10335 valid1's l2:1.21738 valid1's increasing_metric:3 valid1's rmse:1.10335 valid1's l2:1.21738 valid1's constant_metric:0.2"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[3]: valid1's rmse:1.10199 valid1's l2:1.21438 valid1's increasing_metric:3.1 valid1's rmse:1.10199 valid1's l2:1.21438 valid1's constant_metric:0.2"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[4]: valid1's rmse:1.10198 valid1's l2:1.21436 valid1's increasing_metric:3.2 valid1's rmse:1.10198 valid1's l2:1.21436 valid1's constant_metric:0.2"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[5]: valid1's rmse:1.10128 valid1's l2:1.21282 valid1's increasing_metric:3.3 valid1's rmse:1.10128 valid1's l2:1.21282 valid1's constant_metric:0.2"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[6]: valid1's rmse:1.10101 valid1's l2:1.21222 valid1's increasing_metric:3.4 valid1's rmse:1.10101 valid1's l2:1.21222 valid1's constant_metric:0.2"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[7]: valid1's rmse:1.10065 valid1's l2:1.21143 valid1's increasing_metric:3.5 valid1's rmse:1.10065 valid1's l2:1.21143 valid1's constant_metric:0.2"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[8]: valid1's rmse:1.10011 valid1's l2:1.21025 valid1's increasing_metric:3.6 valid1's rmse:1.10011 valid1's l2:1.21025 valid1's constant_metric:0.2"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[9]: valid1's rmse:1.09999 valid1's l2:1.20997 valid1's increasing_metric:3.7 valid1's rmse:1.09999 valid1's l2:1.20997 valid1's constant_metric:0.2"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[10]: valid1's rmse:1.09954 valid1's l2:1.20898 valid1's increasing_metric:3.8 valid1's rmse:1.09954 valid1's l2:1.20898 valid1's constant_metric:0.2"
     [LightGBM] [Info] Number of positive: 66, number of negative: 54
     [LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000506 seconds.
     You can set `force_col_wise=true` to remove the overhead.
     [LightGBM] [Info] Total Bins 42
     [LightGBM] [Info] Number of data points in the train set: 120, number of used features: 1
     [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.550000 -> initscore=0.200671
     [LightGBM] [Info] Start training from score 0.200671
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[1]: valid1's binary_error:0.486486 valid1's binary_logloss:0.693255"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[2]: valid1's binary_error:0.486486 valid1's binary_logloss:0.691495"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[3]: valid1's binary_error:0.486486 valid1's binary_logloss:0.69009"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[4]: valid1's binary_error:0.432432 valid1's binary_logloss:0.688968"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[5]: valid1's binary_error:0.432432 valid1's binary_logloss:0.688534"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[6]: valid1's binary_error:0.432432 valid1's binary_logloss:0.689883"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[7]: valid1's binary_error:0.432432 valid1's binary_logloss:0.689641"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[8]: valid1's binary_error:0.432432 valid1's binary_logloss:0.689532"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[9]: valid1's binary_error:0.432432 valid1's binary_logloss:0.691066"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[10]: valid1's binary_error:0.432432 valid1's binary_logloss:0.690653"
     [LightGBM] [Info] Number of positive: 66, number of negative: 54
     [LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000525 seconds.
     You can set `force_col_wise=true` to remove the overhead.
     [LightGBM] [Info] Total Bins 42
     [LightGBM] [Info] Number of data points in the train set: 120, number of used features: 1
     [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.550000 -> initscore=0.200671
     [LightGBM] [Info] Start training from score 0.200671
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[1]: valid1's binary_logloss:0.693255"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[2]: valid1's binary_logloss:0.691495"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[3]: valid1's binary_logloss:0.69009"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[4]: valid1's binary_logloss:0.688968"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[5]: valid1's binary_logloss:0.688534"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[6]: valid1's binary_logloss:0.689883"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[7]: valid1's binary_logloss:0.689641"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[8]: valid1's binary_logloss:0.689532"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[9]: valid1's binary_logloss:0.691066"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[10]: valid1's binary_logloss:0.690653"
     [LightGBM] [Info] Number of positive: 66, number of negative: 54
     [LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000523 seconds.
     You can set `force_col_wise=true` to remove the overhead.
     [LightGBM] [Info] Total Bins 42
     [LightGBM] [Info] Number of data points in the train set: 120, number of used features: 1
     [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.550000 -> initscore=0.200671
     [LightGBM] [Info] Start training from score 0.200671
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[1]: valid1's binary_error:0.486486 valid1's binary_logloss:0.693255"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[2]: valid1's binary_error:0.486486 valid1's binary_logloss:0.691495"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[3]: valid1's binary_error:0.486486 valid1's binary_logloss:0.69009"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[4]: valid1's binary_error:0.432432 valid1's binary_logloss:0.688968"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[5]: valid1's binary_error:0.432432 valid1's binary_logloss:0.688534"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[6]: valid1's binary_error:0.432432 valid1's binary_logloss:0.689883"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[7]: valid1's binary_error:0.432432 valid1's binary_logloss:0.689641"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[8]: valid1's binary_error:0.432432 valid1's binary_logloss:0.689532"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[9]: valid1's binary_error:0.432432 valid1's binary_logloss:0.691066"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[10]: valid1's binary_error:0.432432 valid1's binary_logloss:0.690653"
     [LightGBM] [Info] Number of positive: 66, number of negative: 54
     [LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000773 seconds.
     You can set `force_col_wise=true` to remove the overhead.
     [LightGBM] [Info] Total Bins 42
     [LightGBM] [Info] Number of data points in the train set: 120, number of used features: 1
     [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.550000 -> initscore=0.200671
     [LightGBM] [Info] Start training from score 0.200671
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[1]: valid1's binary_logloss:0.693255"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[2]: valid1's binary_logloss:0.691495"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[3]: valid1's binary_logloss:0.69009"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[4]: valid1's binary_logloss:0.688968"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[5]: valid1's binary_logloss:0.688534"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[6]: valid1's binary_logloss:0.689883"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[7]: valid1's binary_logloss:0.689641"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[8]: valid1's binary_logloss:0.689532"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[9]: valid1's binary_logloss:0.691066"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[10]: valid1's binary_logloss:0.690653"
     [LightGBM] [Info] Number of positive: 66, number of negative: 54
     [LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000446 seconds.
     You can set `force_col_wise=true` to remove the overhead.
     [LightGBM] [Info] Total Bins 42
     [LightGBM] [Info] Number of data points in the train set: 120, number of used features: 1
     [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.550000 -> initscore=0.200671
     [LightGBM] [Info] Start training from score 0.200671
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[1]: valid1's binary_error:0.486486 valid1's binary_logloss:0.693255"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[2]: valid1's binary_error:0.486486 valid1's binary_logloss:0.691495"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[3]: valid1's binary_error:0.486486 valid1's binary_logloss:0.69009"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[4]: valid1's binary_error:0.432432 valid1's binary_logloss:0.688968"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[5]: valid1's binary_error:0.432432 valid1's binary_logloss:0.688534"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[6]: valid1's binary_error:0.432432 valid1's binary_logloss:0.689883"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[7]: valid1's binary_error:0.432432 valid1's binary_logloss:0.689641"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[8]: valid1's binary_error:0.432432 valid1's binary_logloss:0.689532"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[9]: valid1's binary_error:0.432432 valid1's binary_logloss:0.691066"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[10]: valid1's binary_error:0.432432 valid1's binary_logloss:0.690653"
     [LightGBM] [Info] Number of positive: 66, number of negative: 54
     [LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000408 seconds.
     You can set `force_col_wise=true` to remove the overhead.
     [LightGBM] [Info] Total Bins 42
     [LightGBM] [Info] Number of data points in the train set: 120, number of used features: 1
     [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.550000 -> initscore=0.200671
     [LightGBM] [Info] Start training from score 0.200671
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[1]: valid1's constant_metric:0.2"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[2]: valid1's constant_metric:0.2"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[3]: valid1's constant_metric:0.2"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[4]: valid1's constant_metric:0.2"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[5]: valid1's constant_metric:0.2"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[6]: valid1's constant_metric:0.2"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[7]: valid1's constant_metric:0.2"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[8]: valid1's constant_metric:0.2"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[9]: valid1's constant_metric:0.2"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[10]: valid1's constant_metric:0.2"
     [LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000428 seconds.
     You can set `force_col_wise=true` to remove the overhead.
     [LightGBM] [Info] Total Bins 3
     [LightGBM] [Info] Number of data points in the train set: 1000, number of used features: 1
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[1]: valid1's mape:1.1 valid1's rmse:55 valid1's l1:55"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[2]: valid1's mape:1.19 valid1's rmse:59.5 valid1's l1:59.5"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[3]: valid1's mape:1.271 valid1's rmse:63.55 valid1's l1:63.55"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[4]: valid1's mape:1.3439 valid1's rmse:67.195 valid1's l1:67.195"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[5]: valid1's mape:1.40951 valid1's rmse:70.4755 valid1's l1:70.4755"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[6]: valid1's mape:1.46856 valid1's rmse:73.428 valid1's l1:73.428"
     -- Skip (test_basic.R:1171:3): lgb.train() supports non-ASCII feature names ----
     Reason: UTF-8 feature names are not fully supported in the R package
    
     [LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000413 seconds.
     You can set `force_col_wise=true` to remove the overhead.
     [LightGBM] [Info] Total Bins 3
     [LightGBM] [Info] Number of data points in the train set: 1000, number of used features: 1
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[1]: valid1's rmse:125 valid2's rmse:98.1071"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[2]: valid1's rmse:87.5 valid2's rmse:62.5"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[3]: valid1's rmse:106.25 valid2's rmse:80.0878"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[4]: valid1's rmse:96.875 valid2's rmse:71.2198"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[5]: valid1's rmse:101.562 valid2's rmse:75.6386"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[6]: valid1's rmse:99.2188 valid2's rmse:73.425"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[7]: valid1's rmse:100.391 valid2's rmse:74.5308"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[8]: valid1's rmse:99.8047 valid2's rmse:73.9777"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[9]: valid1's rmse:100.098 valid2's rmse:74.2542"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[10]: valid1's rmse:99.9512 valid2's rmse:74.1159"
     [LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000429 seconds.
     You can set `force_col_wise=true` to remove the overhead.
     [LightGBM] [Info] Total Bins 3
     [LightGBM] [Info] Number of data points in the train set: 1000, number of used features: 1
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[1]: train's rmse:25 valid1's rmse:125 valid2's rmse:98.1071"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[2]: train's rmse:12.5 valid1's rmse:87.5 valid2's rmse:62.5"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[3]: train's rmse:6.25 valid1's rmse:106.25 valid2's rmse:80.0878"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[4]: train's rmse:3.125 valid1's rmse:96.875 valid2's rmse:71.2198"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[5]: train's rmse:1.5625 valid1's rmse:101.562 valid2's rmse:75.6386"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[6]: train's rmse:0.78125 valid1's rmse:99.2188 valid2's rmse:73.425"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[7]: train's rmse:0.390625 valid1's rmse:100.391 valid2's rmse:74.5308"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[8]: train's rmse:0.195312 valid1's rmse:99.8047 valid2's rmse:73.9777"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[9]: train's rmse:0.0976562 valid1's rmse:100.098 valid2's rmse:74.2542"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[10]: train's rmse:0.0488281 valid1's rmse:99.9512 valid2's rmse:74.1159"
     [LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000455 seconds.
     You can set `force_col_wise=true` to remove the overhead.
     [LightGBM] [Info] Total Bins 3
     [LightGBM] [Info] Number of data points in the train set: 1000, number of used features: 1
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[1]: train's rmse:25 valid1's rmse:125 valid2's rmse:98.1071"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[2]: train's rmse:12.5 valid1's rmse:87.5 valid2's rmse:62.5"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[3]: train's rmse:6.25 valid1's rmse:106.25 valid2's rmse:80.0878"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[4]: train's rmse:3.125 valid1's rmse:96.875 valid2's rmse:71.2198"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[5]: train's rmse:1.5625 valid1's rmse:101.562 valid2's rmse:75.6386"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[6]: train's rmse:0.78125 valid1's rmse:99.2188 valid2's rmse:73.425"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[7]: train's rmse:0.390625 valid1's rmse:100.391 valid2's rmse:74.5308"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[8]: train's rmse:0.195312 valid1's rmse:99.8047 valid2's rmse:73.9777"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[9]: train's rmse:0.0976562 valid1's rmse:100.098 valid2's rmse:74.2542"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[10]: train's rmse:0.0488281 valid1's rmse:99.9512 valid2's rmse:74.1159"
     [LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000431 seconds.
     You can set `force_col_wise=true` to remove the overhead.
     [LightGBM] [Info] Total Bins 3
     [LightGBM] [Info] Number of data points in the train set: 1000, number of used features: 1
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[1]: train's rmse:25 valid1's rmse:125 valid2's rmse:98.1071"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[2]: train's rmse:12.5 valid1's rmse:87.5 valid2's rmse:62.5"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[3]: train's rmse:6.25 valid1's rmse:106.25 valid2's rmse:80.0878"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[4]: train's rmse:3.125 valid1's rmse:96.875 valid2's rmse:71.2198"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[5]: train's rmse:1.5625 valid1's rmse:101.562 valid2's rmse:75.6386"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[6]: train's rmse:0.78125 valid1's rmse:99.2188 valid2's rmse:73.425"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[7]: train's rmse:0.390625 valid1's rmse:100.391 valid2's rmse:74.5308"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[8]: train's rmse:0.195312 valid1's rmse:99.8047 valid2's rmse:73.9777"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[9]: train's rmse:0.0976562 valid1's rmse:100.098 valid2's rmse:74.2542"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[10]: train's rmse:0.0488281 valid1's rmse:99.9512 valid2's rmse:74.1159"
     [LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000437 seconds.
     You can set `force_col_wise=true` to remove the overhead.
     [LightGBM] [Info] Total Bins 3
     [LightGBM] [Info] Number of data points in the train set: 1000, number of used features: 1
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[1]: something-random-we-would-not-hardcode's rmse:25 valid1's rmse:125 valid2's rmse:98.1071"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[2]: something-random-we-would-not-hardcode's rmse:12.5 valid1's rmse:87.5 valid2's rmse:62.5"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[3]: something-random-we-would-not-hardcode's rmse:6.25 valid1's rmse:106.25 valid2's rmse:80.0878"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[4]: something-random-we-would-not-hardcode's rmse:3.125 valid1's rmse:96.875 valid2's rmse:71.2198"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[5]: something-random-we-would-not-hardcode's rmse:1.5625 valid1's rmse:101.562 valid2's rmse:75.6386"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[6]: something-random-we-would-not-hardcode's rmse:0.78125 valid1's rmse:99.2188 valid2's rmse:73.425"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[7]: something-random-we-would-not-hardcode's rmse:0.390625 valid1's rmse:100.391 valid2's rmse:74.5308"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[8]: something-random-we-would-not-hardcode's rmse:0.195312 valid1's rmse:99.8047 valid2's rmse:73.9777"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[9]: something-random-we-would-not-hardcode's rmse:0.0976562 valid1's rmse:100.098 valid2's rmse:74.2542"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[10]: something-random-we-would-not-hardcode's rmse:0.0488281 valid1's rmse:99.9512 valid2's rmse:74.1159"
     [LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000704 seconds.
     You can set `force_col_wise=true` to remove the overhead.
     [LightGBM] [Info] Total Bins 3
     [LightGBM] [Info] Number of data points in the train set: 1000, number of used features: 1
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[1]: train's rmse:25"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[2]: train's rmse:12.5"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[3]: train's rmse:6.25"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[4]: train's rmse:3.125"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[5]: train's rmse:1.5625"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[6]: train's rmse:0.78125"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[7]: train's rmse:0.390625"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[8]: train's rmse:0.195312"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[9]: train's rmse:0.0976562"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[10]: train's rmse:0.0488281"
     [LightGBM] [Info] Number of positive: 500, number of negative: 500
     [LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000452 seconds.
     You can set `force_col_wise=true` to remove the overhead.
     [LightGBM] [Info] Total Bins 255
     [LightGBM] [Info] Number of data points in the train set: 1000, number of used features: 1
     [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.500000 -> initscore=0.000000
     [1] "[1]: something-random-we-would-not-hardcode's auc:0.58136 valid1's auc:0.429487"
     [1] "[2]: something-random-we-would-not-hardcode's auc:0.599008 valid1's auc:0.266026"
     [1] "[3]: something-random-we-would-not-hardcode's auc:0.6328 valid1's auc:0.349359"
     [1] "[4]: something-random-we-would-not-hardcode's auc:0.655136 valid1's auc:0.394231"
     [1] "[5]: something-random-we-would-not-hardcode's auc:0.655408 valid1's auc:0.419872"
     [1] "[6]: something-random-we-would-not-hardcode's auc:0.678784 valid1's auc:0.336538"
     [1] "[7]: something-random-we-would-not-hardcode's auc:0.682176 valid1's auc:0.416667"
     [1] "[8]: something-random-we-would-not-hardcode's auc:0.698032 valid1's auc:0.394231"
     [1] "[9]: something-random-we-would-not-hardcode's auc:0.712672 valid1's auc:0.445513"
     [1] "[10]: something-random-we-would-not-hardcode's auc:0.723024 valid1's auc:0.471154"
     [LightGBM] [Info] Number of positive: 50, number of negative: 39
     [LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000387 seconds.
     You can set `force_col_wise=true` to remove the overhead.
     [LightGBM] [Info] Total Bins 42
     [LightGBM] [Info] Number of data points in the train set: 89, number of used features: 1
     [LightGBM] [Info] Number of positive: 49, number of negative: 41
     [LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000454 seconds.
     You can set `force_col_wise=true` to remove the overhead.
     [LightGBM] [Info] Total Bins 42
     [LightGBM] [Info] Number of data points in the train set: 90, number of used features: 1
     [LightGBM] [Info] Number of positive: 53, number of negative: 38
     [LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000725 seconds.
     You can set `force_col_wise=true` to remove the overhead.
     [LightGBM] [Info] Total Bins 42
     [LightGBM] [Info] Number of data points in the train set: 91, number of used features: 1
     [LightGBM] [Info] Number of positive: 46, number of negative: 44
     [LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000558 seconds.
     You can set `force_col_wise=true` to remove the overhead.
     [LightGBM] [Info] Total Bins 42
     [LightGBM] [Info] Number of data points in the train set: 90, number of used features: 1
     [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.561798 -> initscore=0.248461
     [LightGBM] [Info] Start training from score 0.248461
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.544444 -> initscore=0.178248
     [LightGBM] [Info] Start training from score 0.178248
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.582418 -> initscore=0.332706
     [LightGBM] [Info] Start training from score 0.332706
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.511111 -> initscore=0.044452
     [LightGBM] [Info] Start training from score 0.044452
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[1]: valid's binary_error:0.500565+0.0460701 valid's binary_logloss:0.701123+0.0155541"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[2]: valid's binary_error:0.500565+0.0460701 valid's binary_logloss:0.70447+0.0152787"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[3]: valid's binary_error:0.500565+0.0460701 valid's binary_logloss:0.706572+0.0162531"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[4]: valid's binary_error:0.500565+0.0460701 valid's binary_logloss:0.709214+0.0165672"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[5]: valid's binary_error:0.500565+0.0460701 valid's binary_logloss:0.710652+0.0172198"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[6]: valid's binary_error:0.500565+0.0460701 valid's binary_logloss:0.713091+0.0176604"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[7]: valid's binary_error:0.508899+0.0347887 valid's binary_logloss:0.714842+0.0184267"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[8]: valid's binary_error:0.508899+0.0347887 valid's binary_logloss:0.714719+0.0178927"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[9]: valid's binary_error:0.508899+0.0347887 valid's binary_logloss:0.717162+0.0181993"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[10]: valid's binary_error:0.508899+0.0347887 valid's binary_logloss:0.716577+0.0180201"
     [LightGBM] [Info] Number of positive: 45, number of negative: 35
     [LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000525 seconds.
     You can set `force_col_wise=true` to remove the overhead.
     [LightGBM] [Info] Total Bins 42
     [LightGBM] [Info] Number of data points in the train set: 80, number of used features: 1
     [LightGBM] [Info] Number of positive: 40, number of negative: 40
     [LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000543 seconds.
     You can set `force_col_wise=true` to remove the overhead.
     [LightGBM] [Info] Total Bins 42
     [LightGBM] [Info] Number of data points in the train set: 80, number of used features: 1
     [LightGBM] [Info] Number of positive: 47, number of negative: 33
     [LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000554 seconds.
     You can set `force_col_wise=true` to remove the overhead.
     [LightGBM] [Info] Total Bins 42
     [LightGBM] [Info] Number of data points in the train set: 80, number of used features: 1
     [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.562500 -> initscore=0.251314
     [LightGBM] [Info] Start training from score 0.251314
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.500000 -> initscore=0.000000
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.587500 -> initscore=0.353640
     [LightGBM] [Info] Start training from score 0.353640
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[1]: valid's constant_metric:0.2+0"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[2]: valid's constant_metric:0.2+0"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[3]: valid's constant_metric:0.2+0"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[4]: valid's constant_metric:0.2+0"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[5]: valid's constant_metric:0.2+0"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[6]: valid's constant_metric:0.2+0"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[7]: valid's constant_metric:0.2+0"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[8]: valid's constant_metric:0.2+0"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[9]: valid's constant_metric:0.2+0"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[10]: valid's constant_metric:0.2+0"
     [LightGBM] [Warning] Unknown parameter: 0x06ecff30>
     [LightGBM] [Warning] Unknown parameter: valids
     [LightGBM] [Warning] Unknown parameter: 0x06ecff30>
     [LightGBM] [Warning] Unknown parameter: valids
     [LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000470 seconds.
     You can set `force_col_wise=true` to remove the overhead.
     [LightGBM] [Info] Total Bins 35
     [LightGBM] [Info] Number of data points in the train set: 80, number of used features: 1
     [LightGBM] [Warning] Unknown parameter: 0x06ecff30>
     [LightGBM] [Warning] Unknown parameter: valids
     [LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000483 seconds.
     You can set `force_col_wise=true` to remove the overhead.
     [LightGBM] [Info] Total Bins 35
     [LightGBM] [Info] Number of data points in the train set: 80, number of used features: 1
     [LightGBM] [Warning] Unknown parameter: 0x06ecff30>
     [LightGBM] [Warning] Unknown parameter: valids
     [LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000420 seconds.
     You can set `force_col_wise=true` to remove the overhead.
     [LightGBM] [Info] Total Bins 35
     [LightGBM] [Info] Number of data points in the train set: 80, number of used features: 1
     [LightGBM] [Warning] Unknown parameter: 0x06ecff30>
     [LightGBM] [Warning] Unknown parameter: valids
     [LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000445 seconds.
     You can set `force_col_wise=true` to remove the overhead.
     [LightGBM] [Info] Total Bins 35
     [LightGBM] [Info] Number of data points in the train set: 80, number of used features: 1
     [LightGBM] [Warning] Unknown parameter: 0x06ecff30>
     [LightGBM] [Warning] Unknown parameter: valids
     [LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000403 seconds.
     You can set `force_col_wise=true` to remove the overhead.
     [LightGBM] [Info] Total Bins 35
     [LightGBM] [Info] Number of data points in the train set: 80, number of used features: 1
     [LightGBM] [Info] Start training from score 0.024388
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Info] Start training from score 0.005573
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Info] Start training from score 0.039723
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Info] Start training from score 0.029700
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Info] Start training from score 0.125712
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[1]: valid's increasing_metric:4.1+0.141421 valid's constant_metric:0.2+0"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[2]: valid's increasing_metric:4.6+0.141421 valid's constant_metric:0.2+0"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[3]: valid's increasing_metric:5.1+0.141421 valid's constant_metric:0.2+0"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[4]: valid's increasing_metric:5.6+0.141421 valid's constant_metric:0.2+0"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[5]: valid's increasing_metric:6.1+0.141421 valid's constant_metric:0.2+0"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[6]: valid's increasing_metric:6.6+0.141421 valid's constant_metric:0.2+0"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[7]: valid's increasing_metric:7.1+0.141421 valid's constant_metric:0.2+0"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[8]: valid's increasing_metric:7.6+0.141421 valid's constant_metric:0.2+0"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[9]: valid's increasing_metric:8.1+0.141421 valid's constant_metric:0.2+0"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[10]: valid's increasing_metric:8.6+0.141421 valid's constant_metric:0.2+0"
     [LightGBM] [Warning] Unknown parameter: 0x06ecff30>
     [LightGBM] [Warning] Unknown parameter: valids
     [LightGBM] [Warning] Unknown parameter: 0x06ecff30>
     [LightGBM] [Warning] Unknown parameter: valids
     [LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000392 seconds.
     You can set `force_col_wise=true` to remove the overhead.
     [LightGBM] [Info] Total Bins 35
     [LightGBM] [Info] Number of data points in the train set: 80, number of used features: 1
     [LightGBM] [Warning] Unknown parameter: 0x06ecff30>
     [LightGBM] [Warning] Unknown parameter: valids
     [LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000386 seconds.
     You can set `force_col_wise=true` to remove the overhead.
     [LightGBM] [Info] Total Bins 35
     [LightGBM] [Info] Number of data points in the train set: 80, number of used features: 1
     [LightGBM] [Warning] Unknown parameter: 0x06ecff30>
     [LightGBM] [Warning] Unknown parameter: valids
     [LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000388 seconds.
     You can set `force_col_wise=true` to remove the overhead.
     [LightGBM] [Info] Total Bins 35
     [LightGBM] [Info] Number of data points in the train set: 80, number of used features: 1
     [LightGBM] [Warning] Unknown parameter: 0x06ecff30>
     [LightGBM] [Warning] Unknown parameter: valids
     [LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000388 seconds.
     You can set `force_col_wise=true` to remove the overhead.
     [LightGBM] [Info] Total Bins 35
     [LightGBM] [Info] Number of data points in the train set: 80, number of used features: 1
     [LightGBM] [Warning] Unknown parameter: 0x06ecff30>
     [LightGBM] [Warning] Unknown parameter: valids
     [LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000432 seconds.
     You can set `force_col_wise=true` to remove the overhead.
     [LightGBM] [Info] Total Bins 35
     [LightGBM] [Info] Number of data points in the train set: 80, number of used features: 1
     [LightGBM] [Info] Start training from score 0.024388
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Info] Start training from score 0.005573
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Info] Start training from score 0.039723
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Info] Start training from score 0.029700
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Info] Start training from score 0.125712
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[1]: valid's constant_metric:0.2+0 valid's increasing_metric:9.1+0.141421"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[2]: valid's constant_metric:0.2+0 valid's increasing_metric:9.6+0.141421"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[3]: valid's constant_metric:0.2+0 valid's increasing_metric:10.1+0.141421"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[4]: valid's constant_metric:0.2+0 valid's increasing_metric:10.6+0.141421"
     [LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.002317 seconds.
     You can set `force_row_wise=true` to remove the overhead.
     And if memory is not enough, you can set `force_col_wise=true`.
     [LightGBM] [Info] Total Bins 214
     [LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
     [LightGBM] [Info] Start training from score 0.482113
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[1]: train's l2:0.24804"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[2]: train's l2:0.246711"
     [LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.002502 seconds.
     You can set `force_row_wise=true` to remove the overhead.
     And if memory is not enough, you can set `force_col_wise=true`.
     [LightGBM] [Info] Total Bins 214
     [LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
     [LightGBM] [Info] Start training from score 0.482113
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[1]: train's l2:0.24804"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[2]: train's l2:0.246711"
     [LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.002385 seconds.
     You can set `force_row_wise=true` to remove the overhead.
     And if memory is not enough, you can set `force_col_wise=true`.
     [LightGBM] [Info] Total Bins 214
     [LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
     [LightGBM] [Info] Start training from score 0.482113
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[1]: train's l2:0.24804"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[2]: train's l2:0.246711"
     [LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.002441 seconds.
     You can set `force_row_wise=true` to remove the overhead.
     And if memory is not enough, you can set `force_col_wise=true`.
     [LightGBM] [Info] Total Bins 214
     [LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
     [LightGBM] [Info] Start training from score 0.482113
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[1]: train's l2:0.24804"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[2]: train's l2:0.246711"
     [LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.002468 seconds.
     You can set `force_row_wise=true` to remove the overhead.
     And if memory is not enough, you can set `force_col_wise=true`.
     [LightGBM] [Info] Total Bins 214
     [LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
     [LightGBM] [Info] Start training from score 0.482113
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[1]: train's l2:0.24804"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [1] "[2]: train's l2:0.246711"
     [LightGBM] [Warning] Using self-defined objective function
     [LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.002636 seconds.
     You can set `force_row_wise=true` to remove the overhead.
     And if memory is not enough, you can set `force_col_wise=true`.
     [LightGBM] [Info] Total Bins 214
     [LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
     [LightGBM] [Warning] Using self-defined objective function
     [1] "[1]: train's auc:0.994987 train's error:0.00598802 eval's auc:0.995243 eval's error:0.00558659"
     [1] "[2]: train's auc:0.99512 train's error:0.00307078 eval's auc:0.995237 eval's error:0.00248293"
     [1] "[3]: train's auc:0.99009 train's error:0.00598802 eval's auc:0.98843 eval's error:0.00558659"
     [1] "[4]: train's auc:0.999889 train's error:0.00168893 eval's auc:1 eval's error:0.000620732"
     [1] "[5]: train's auc:1 train's error:0 eval's auc:1 eval's error:0"
     [1] "[6]: train's auc:1 train's error:0 eval's auc:1 eval's error:0"
     [1] "[7]: train's auc:1 train's error:0 eval's auc:1 eval's error:0"
     [1] "[8]: train's auc:1 train's error:0 eval's auc:1 eval's error:0"
     [1] "[9]: train's auc:1 train's error:0 eval's auc:1 eval's error:0"
     [1] "[10]: train's auc:1 train's error:0 eval's auc:1 eval's error:0"
     [LightGBM] [Warning] Using self-defined objective function
     [LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.002416 seconds.
     You can set `force_row_wise=true` to remove the overhead.
     And if memory is not enough, you can set `force_col_wise=true`.
     [LightGBM] [Info] Total Bins 214
     [LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
     [LightGBM] [Warning] Using self-defined objective function
     [1] "[1]: train's error:0.00598802 eval's error:0.00558659"
     [1] "[2]: train's error:0.00307078 eval's error:0.00248293"
     [1] "[3]: train's error:0.00598802 eval's error:0.00558659"
     [1] "[4]: train's error:0.00168893 eval's error:0.000620732"
     [LightGBM] [Info] Saving data to binary file D:\temp\RtmpO2WXb8\lgb.Dataset_53845ce92e81
     [LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000627 seconds.
     You can set `force_col_wise=true` to remove the overhead.
     [LightGBM] [Info] Total Bins 32
     [LightGBM] [Info] Number of data points in the train set: 6000, number of used features: 16
     -- FAILURE (test_learning_to_rank.R:52:9): learning-to-rank with lgb.train() wor
     abs(eval_results[[2L]][["value"]] - 0.745986) < TOLERANCE is not TRUE
    
     `actual`: FALSE
     `expected`: TRUE
    
     -- FAILURE (test_learning_to_rank.R:53:9): learning-to-rank with lgb.train() wor
     abs(eval_results[[3L]][["value"]] - 0.7351959) < TOLERANCE is not TRUE
    
     `actual`: FALSE
     `expected`: TRUE
    
     [LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000470 seconds.
     You can set `force_col_wise=true` to remove the overhead.
     [LightGBM] [Info] Total Bins 40
     [LightGBM] [Info] Number of data points in the train set: 4500, number of used features: 20
     [LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000503 seconds.
     You can set `force_col_wise=true` to remove the overhead.
     [LightGBM] [Info] Total Bins 40
     [LightGBM] [Info] Number of data points in the train set: 4500, number of used features: 20
     [LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000525 seconds.
     You can set `force_col_wise=true` to remove the overhead.
     [LightGBM] [Info] Total Bins 40
     [LightGBM] [Info] Number of data points in the train set: 4500, number of used features: 20
     [LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000489 seconds.
     You can set `force_col_wise=true` to remove the overhead.
     [LightGBM] [Info] Total Bins 40
     [LightGBM] [Info] Number of data points in the train set: 4500, number of used features: 20
     [1] "[1]: valid's ndcg@1:0.675+0.0829156 valid's ndcg@2:0.655657+0.0625302 valid's ndcg@3:0.648464+0.0613335"
     [1] "[2]: valid's ndcg@1:0.725+0.108972 valid's ndcg@2:0.666972+0.131409 valid's ndcg@3:0.657124+0.130448"
     [1] "[3]: valid's ndcg@1:0.65+0.111803 valid's ndcg@2:0.630657+0.125965 valid's ndcg@3:0.646928+0.15518"
     [1] "[4]: valid's ndcg@1:0.725+0.0829156 valid's ndcg@2:0.647629+0.120353 valid's ndcg@3:0.654052+0.129471"
     [1] "[5]: valid's ndcg@1:0.75+0.165831 valid's ndcg@2:0.662958+0.142544 valid's ndcg@3:0.648186+0.130213"
     [1] "[6]: valid's ndcg@1:0.725+0.129904 valid's ndcg@2:0.647629+0.108136 valid's ndcg@3:0.648186+0.106655"
     [1] "[7]: valid's ndcg@1:0.75+0.165831 valid's ndcg@2:0.653287+0.14255 valid's ndcg@3:0.64665+0.119557"
     [1] "[8]: valid's ndcg@1:0.725+0.129904 valid's ndcg@2:0.637958+0.123045 valid's ndcg@3:0.64665+0.119557"
     [1] "[9]: valid's ndcg@1:0.75+0.15 valid's ndcg@2:0.701643+0.116239 valid's ndcg@3:0.701258+0.102647"
     [1] "[10]: valid's ndcg@1:0.75+0.165831 valid's ndcg@2:0.682301+0.117876 valid's ndcg@3:0.66299+0.121243"
     -- FAILURE (test_learning_to_rank.R:130:5): learning-to-rank with lgb.cv() works
     all(...) is not TRUE
    
     `actual`: FALSE
     `expected`: TRUE
    
     -- FAILURE (test_learning_to_rank.R:136:5): learning-to-rank with lgb.cv() works
     all(...) is not TRUE
    
     `actual`: FALSE
     `expected`: TRUE
    
     [LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.003677 seconds.
     You can set `force_row_wise=true` to remove the overhead.
     And if memory is not enough, you can set `force_col_wise=true`.
     [LightGBM] [Info] Total Bins 232
     [LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 116
     [LightGBM] [Info] Start training from score 0.482113
     [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
     [1] "[1]: test's l2:6.44165e-17"
     [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
     [1] "[2]: test's l2:1.97215e-31"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
     [1] "[3]: test's l2:1.97215e-31"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
     [1] "[4]: test's l2:1.97215e-31"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
     [1] "[5]: test's l2:1.97215e-31"
     [LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.002177 seconds.
     You can set `force_row_wise=true` to remove the overhead.
     And if memory is not enough, you can set `force_col_wise=true`.
     [LightGBM] [Info] Total Bins 232
     [LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 116
     [LightGBM] [Info] Start training from score 0.482113
     [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
     [1] "[1]: test's l2:6.44165e-17"
     [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
     [1] "[2]: test's l2:1.97215e-31"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
     [1] "[3]: test's l2:1.97215e-31"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
     [1] "[4]: test's l2:1.97215e-31"
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] Stopped training because there are no more leaves that meet the split requirements
     [1] "[5]: test's l2:1.97215e-31"
     [LightGBM] [Info] Number of positive: 3140, number of negative: 3373
     [LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.002005 seconds.
     You can set `force_row_wise=true` to remove the overhead.
     And if memory is not enough, you can set `force_col_wise=true`.
     [LightGBM] [Info] Total Bins 214
     [LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
     [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.482113 -> initscore=-0.071580
     [LightGBM] [Info] Start training from score -0.071580
     [1] "[1]: train's binary_logloss:0.198597"
     [1] "[2]: train's binary_logloss:0.111535"
     [LightGBM] [Info] Number of positive: 3140, number of negative: 3373
     [LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.002230 seconds.
     You can set `force_row_wise=true` to remove the overhead.
     And if memory is not enough, you can set `force_col_wise=true`.
     [LightGBM] [Info] Total Bins 214
     [LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
     [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.482113 -> initscore=-0.071580
     [LightGBM] [Info] Start training from score -0.071580
     [1] "[1]: train's binary_logloss:0.198597"
     [1] "[2]: train's binary_logloss:0.111535"
     [LightGBM] [Info] Number of positive: 3140, number of negative: 3373
     [LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.002048 seconds.
     You can set `force_row_wise=true` to remove the overhead.
     And if memory is not enough, you can set `force_col_wise=true`.
     [LightGBM] [Info] Total Bins 214
     [LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
     [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.482113 -> initscore=-0.071580
     [LightGBM] [Info] Start training from score -0.071580
     [1] "[1]: train's binary_logloss:0.198597"
     [1] "[2]: train's binary_logloss:0.111535"
     [LightGBM] [Info] Number of positive: 3140, number of negative: 3373
     [LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.002057 seconds.
     You can set `force_row_wise=true` to remove the overhead.
     And if memory is not enough, you can set `force_col_wise=true`.
     [LightGBM] [Info] Total Bins 214
     [LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
     [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.482113 -> initscore=-0.071580
     [LightGBM] [Info] Start training from score -0.071580
     [1] "[1]: train's binary_logloss:0.198597"
     [1] "[2]: train's binary_logloss:0.111535"
     [LightGBM] [Info] Number of positive: 3140, number of negative: 3373
     [LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.002080 seconds.
     You can set `force_row_wise=true` to remove the overhead.
     And if memory is not enough, you can set `force_col_wise=true`.
     [LightGBM] [Info] Total Bins 214
     [LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
     [LightGBM] [Info] Number of positive: 3140, number of negative: 3373
     [LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.002079 seconds.
     You can set `force_row_wise=true` to remove the overhead.
     And if memory is not enough, you can set `force_col_wise=true`.
     [LightGBM] [Info] Total Bins 214
     [LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
     [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.482113 -> initscore=-0.071580
     [LightGBM] [Info] Start training from score -0.071580
     [1] "[1]: train's binary_logloss:0.198597"
     [1] "[2]: train's binary_logloss:0.111535"
     [LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001877 seconds.
     You can set `force_row_wise=true` to remove the overhead.
     And if memory is not enough, you can set `force_col_wise=true`.
     [LightGBM] [Info] Total Bins 182
     [LightGBM] [Info] Number of data points in the train set: 1611, number of used features: 91
     [LightGBM] [Info] Number of positive: 3140, number of negative: 3373
     [LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.001985 seconds.
     You can set `force_row_wise=true` to remove the overhead.
     And if memory is not enough, you can set `force_col_wise=true`.
     [LightGBM] [Info] Total Bins 214
     [LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
     [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.482113 -> initscore=-0.071580
     [LightGBM] [Info] Start training from score -0.071580
     [1] "[1]: train's binary_logloss:0.198597"
     [1] "[2]: train's binary_logloss:0.111535"
     [1] "[3]: train's binary_logloss:0.0480659"
     [1] "[4]: train's binary_logloss:0.0279151"
     [1] "[5]: train's binary_logloss:0.0190479"
     [LightGBM] [Info] Number of positive: 3140, number of negative: 3373
     [LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.002030 seconds.
     You can set `force_row_wise=true` to remove the overhead.
     And if memory is not enough, you can set `force_col_wise=true`.
     [LightGBM] [Info] Total Bins 214
     [LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
     [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.482113 -> initscore=-0.071580
     [LightGBM] [Info] Start training from score -0.071580
     [1] "[1]: train's binary_logloss:0.198597"
     [1] "[2]: train's binary_logloss:0.111535"
     [LightGBM] [Info] Number of positive: 3140, number of negative: 3373
     [LightGBM] [Info] Number of positive: 3140, number of negative: 3373
     [LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.002072 seconds.
     You can set `force_row_wise=true` to remove the overhead.
     And if memory is not enough, you can set `force_col_wise=true`.
     [LightGBM] [Info] Total Bins 214
     [LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
     [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.482113 -> initscore=-0.071580
     [LightGBM] [Info] Start training from score -0.071580
     [1] "[1]: train's binary_logloss:0.198597"
     [1] "[2]: train's binary_logloss:0.111535"
     [1] "[3]: train's binary_logloss:0.0480659"
     [LightGBM] [Info] Number of positive: 3140, number of negative: 3373
     [LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.002114 seconds.
     You can set `force_row_wise=true` to remove the overhead.
     And if memory is not enough, you can set `force_col_wise=true`.
     [LightGBM] [Info] Total Bins 214
     [LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
     [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.482113 -> initscore=-0.071580
     [LightGBM] [Info] Start training from score -0.071580
     [1] "[1]: train's binary_logloss:0.198597"
     [1] "[2]: train's binary_logloss:0.111535"
     [LightGBM] [Info] Number of positive: 3140, number of negative: 3373
     [LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.002276 seconds.
     You can set `force_row_wise=true` to remove the overhead.
     And if memory is not enough, you can set `force_col_wise=true`.
     [LightGBM] [Info] Total Bins 214
     [LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 107
     [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.482113 -> initscore=-0.071580
     [LightGBM] [Info] Start training from score -0.071580
     [1] "[1]: train's binary_logloss:0.198597"
     [1] "[2]: train's binary_logloss:0.111535"
     -- Skip (test_lgb.Booster.R:445:5): Saving a model with unknown importance type
     Reason: Skipping this test because it causes issues for valgrind
    
     [LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000373 seconds.
     You can set `force_col_wise=true` to remove the overhead.
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000442 seconds.
     You can set `force_col_wise=true` to remove the overhead.
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000494 seconds.
     You can set `force_col_wise=true` to remove the overhead.
     [LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000691 seconds.
     You can set `force_col_wise=true` to remove the overhead.
     [LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000723 seconds.
     You can set `force_col_wise=true` to remove the overhead.
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Info] Number of positive: 3140, number of negative: 3373
     [LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.002933 seconds.
     You can set `force_row_wise=true` to remove the overhead.
     And if memory is not enough, you can set `force_col_wise=true`.
     [LightGBM] [Info] Total Bins 232
     [LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 116
     [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
     [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
     [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
     [LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000489 seconds.
     You can set `force_col_wise=true` to remove the overhead.
     [LightGBM] [Info] Total Bins 77
     [LightGBM] [Info] Number of data points in the train set: 90, number of used features: 4
     [LightGBM] [Info] Start training from score -1.504077
     [LightGBM] [Info] Start training from score -1.098612
     [LightGBM] [Info] Start training from score -0.810930
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
     [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
     [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
     [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
     [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
     [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
     [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
     [LightGBM] [Info] Number of positive: 3140, number of negative: 3373
     [LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.002544 seconds.
     You can set `force_row_wise=true` to remove the overhead.
     And if memory is not enough, you can set `force_col_wise=true`.
     [LightGBM] [Info] Total Bins 232
     [LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 116
     [LightGBM] [Info] [binary:BoostFromScore]: pavg=0.482113 -> initscore=-0.071580
     [LightGBM] [Info] Start training from score -0.071580
     [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
     [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
     [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
     [LightGBM] [Info] Number of positive: 3140, number of negative: 3373
     [LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.002574 seconds.
     You can set `force_row_wise=true` to remove the overhead.
     And if memory is not enough, you can set `force_col_wise=true`.
     [LightGBM] [Info] Total Bins 232
     [LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 116
     [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
     [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
     [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
     [LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000495 seconds.
     You can set `force_col_wise=true` to remove the overhead.
     [LightGBM] [Info] Total Bins 77
     [LightGBM] [Info] Number of data points in the train set: 90, number of used features: 4
     [LightGBM] [Info] Start training from score -1.504077
     [LightGBM] [Info] Start training from score -1.098612
     [LightGBM] [Info] Start training from score -0.810930
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: -inf
     [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
     [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
     [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
     [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
     [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
     [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
     [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
     [LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.002884 seconds.
     You can set `force_row_wise=true` to remove the overhead.
     And if memory is not enough, you can set `force_col_wise=true`.
     [LightGBM] [Info] Total Bins 232
     [LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 116
     [LightGBM] [Info] Start training from score 0.482113
     [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
     [LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.002648 seconds.
     You can set `force_row_wise=true` to remove the overhead.
     And if memory is not enough, you can set `force_col_wise=true`.
     [LightGBM] [Info] Total Bins 232
     [LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 116
     [LightGBM] [Info] Start training from score 0.482113
     [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
     [LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.002538 seconds.
     You can set `force_row_wise=true` to remove the overhead.
     And if memory is not enough, you can set `force_col_wise=true`.
     [LightGBM] [Info] Total Bins 232
     [LightGBM] [Info] Number of data points in the train set: 6513, number of used features: 116
     [LightGBM] [Info] Start training from score 0.482113
     [LightGBM] [Warning] No further splits with positive gain, best gain: 0.000000
     -- Skip (test_utils.R:70:5): lgb.last_error() correctly returns errors from the
     Reason: Skipping this test because it causes valgrind to think there is a memory leak, and needs to be rethought
    
     -- Skipped tests --------------------------------------------------------------
     * Skipping this test because it causes issues for valgrind (1)
     * Skipping this test because it causes valgrind to think there is a memory leak, and needs to be rethought (1)
     * UTF-8 feature names are not fully supported in the R package (1)
    
     == testthat results ===========================================================
     FAILURE (test_learning_to_rank.R:52:9): learning-to-rank with lgb.train() works as expected
     FAILURE (test_learning_to_rank.R:53:9): learning-to-rank with lgb.train() works as expected
     FAILURE (test_learning_to_rank.R:130:5): learning-to-rank with lgb.cv() works as expected
     FAILURE (test_learning_to_rank.R:136:5): learning-to-rank with lgb.cv() works as expected
    
     [ FAIL 4 | WARN 0 | SKIP 3 | PASS 597 ]
     Error: Test failures
     Execution halted
Flavor: r-oldrel-windows-ix86+x86_64
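
The four FAILUREs on this flavor all come from test_learning_to_rank.R, where NDCG values recorded during training are compared against hard-coded expectations within a small TOLERANCE. The following is a minimal sketch, not the package's actual test fixture, of the kind of lambdarank training those tests exercise; the data, group sizes, and parameter values here are illustrative assumptions only.

    library(lightgbm)

    set.seed(708L)
    X <- matrix(runif(450L * 4L), ncol = 4L)          # 450 documents, 4 features (made-up data)
    y <- sample(0L:2L, size = 450L, replace = TRUE)   # graded relevance labels
    dtrain <- lgb.Dataset(X, label = y)
    setinfo(dtrain, "group", rep(30L, 15L))           # 15 queries of 30 documents each

    params <- list(
      objective = "lambdarank"
      , metric = "ndcg"
      , eval_at = c(1L, 2L, 3L)                       # report ndcg@1, ndcg@2, ndcg@3 as in the log above
      , learning_rate = 0.1
      # , force_col_wise = TRUE                       # per the log, silences the "Auto-choosing ... multi-threading" warning
    )

    bst <- lgb.train(
      params = params
      , data = dtrain
      , nrounds = 10L
      , valids = list(train = dtrain)
    )

    # The failing assertions compare recorded metric values against fixed
    # expectations within TOLERANCE; the recorded metrics live in bst$record_evals.
    str(bst$record_evals, max.level = 3L)

Small platform-dependent differences in these floating-point metric values are enough to push the comparison outside TOLERANCE, which is consistent with the failures appearing only on the i386 architecture of this flavor.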